Technology & Society

You should check out this informative Vox video on why electric cars are required to make artificial sound. It offers a fascinating look at the technology behind these sounds, known as “Acoustic Vehicle Alerting Systems,” and at the sound designers who craft distinctive alerts that have to meet specific criteria.
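
As an aside, one criterion these alert sounds commonly have to meet is that their pitch shifts with vehicle speed, so pedestrians can hear whether a car is approaching or accelerating. Here’s a toy Python sketch of that idea; the base frequency and the speed-to-pitch scaling below are made-up values for illustration, not anything specified in the video or in actual regulations:

```python
import math
import struct
import wave

SAMPLE_RATE = 44100
BASE_FREQ = 400.0  # Hz at standstill; a made-up value for this demo

def avas_tone(speed_kmh: float, seconds: float = 1.0) -> bytes:
    """Return 16-bit PCM for a sine tone whose pitch rises with speed."""
    freq = BASE_FREQ * (1.0 + speed_kmh / 30.0)  # arbitrary speed-to-pitch scaling
    n = int(SAMPLE_RATE * seconds)
    return b"".join(
        struct.pack("<h", int(32767 * 0.3 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)))
        for i in range(n)
    )

# Write one second each at 0, 10, and 20 km/h: the pitch steps upward.
with wave.open("avas_demo.wav", "wb") as f:
    f.setnchannels(1)      # mono
    f.setsampwidth(2)      # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    for speed in (0, 10, 20):
        f.writeframes(avas_tone(speed))
```

Real systems layer far richer textures than a bare sine wave, but the speed-to-pitch mapping is the basic idea.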

While reading The Korea Herald at my local library, I came across a column by Robert J. Fouser about the “soundscape of Korean cities”. He may well be right that cities like Seoul have become quieter as noise from cars, public transportation, and people has decreased. I can attest, however, that other noises have taken their place, such as the music coming from street shops and the sounds of people’s digital devices (notifications, audio from videos). Here’s an excerpt from the column:

The sounds of the digital revolution are everywhere, most noticeable in the beeps of notifications. Sometimes a beep nearby causes people to check their phones. And the ubiquitous KakaoTalk sounds are now embedded in the soundscape of Korean cities. By law, cameras on Korean mobile phones are required to produce a shutter click, which creates a burst of clicks when many people take pictures of the same thing.

It seems to me that what counts as acceptable noise varies greatly from place to place. My local library is one of those incredibly quiet, serene places. There’s even a sign reminding people to be mindful of the noise they make when using their keyboards and mice.

In a recent article, Sisi Wei, editor-in-chief at The Markup, raises questions about the dilemmas journalists face when covering AI-generated images. Should news articles include the generated images at all, and if so, how should they be labeled and what kinds of disclaimers should accompany them? As she notes, the issue is tricky because readers may not pay attention to captions. Here’s a quote from the article:

There’s no question to me that anyone who comes into contact with the internet these days will need to start questioning if the images they’re seeing are real. But what’s our job as journalists in this situation? When we republish viral or newsworthy images that have been altered or were generated by AI, what should we do to make sure we’re giving readers the information they need? Doing it in the caption or the headline isn’t good enough—we can’t assume that readers will read them.

Two interesting articles from MIT Technology Review:

  • This article examines what might happen, following the Taliban’s takeover of the country, to the biometric databases that were built up in Afghanistan.
  • Another article discusses “TikTok’s decision to use a woman’s voice without her permission” as a very literal example of how women’s voices are written out of the history of computing.

Facebook/Meta is shutting down its facial recognition system. The company explains the decision in this blog post:

But the many specific instances where facial recognition can be helpful need to be weighed against growing concerns about the use of this technology as a whole. There are many concerns about the place of facial recognition technology in society, and regulators are still in the process of providing a clear set of rules governing its use. Amid this ongoing uncertainty, we believe that limiting the use of facial recognition to a narrow set of use cases is appropriate.

The Markup reviewed several examples of controversial uses of machine learning algorithms in 2020:

Every year there are myriad new examples of algorithms that were either created for a cynical purpose, functioned to reinforce racism, or spectacularly failed to fix the problems they were built to solve. We know about most of them because whistleblowers, journalists, advocates, and academics took the time to dig into a black box of computational decision-making and found some dark materials.