Technology & Society

In a recent article, Sisi Wei, editor-in-chief at The Markup, sheds light on the dilemmas journalists face when covering AI-generated images. She asks whether news articles should include the generated images and, if so, how to label them or what kinds of disclaimers to add. As she notes, this is difficult because readers may not pay attention to captions. The following is a quote from the article.

There’s no question to me that anyone who comes into contact with the internet these days will need to start questioning if the images they’re seeing are real. But what’s our job as journalists in this situation? When we republish viral or newsworthy images that have been altered or were generated by AI, what should we do to make sure we’re giving readers the information they need? Doing it in the caption or the headline isn’t good enough—we can’t assume that readers will read them.

Two interesting articles from MIT Technology Review:

  • This article examines, following the Taliban’s takeover of the country, what might happen with the biometric databases that had been established in Afghanistan.
  • Another article discusses “TikTok’s decision to use a woman’s voice without her permission” as a very literal example of how women’s voices are written out of the history of computing.

Facebook/Meta is shutting down its facial recognition system. The company explains its decision in this blog post.

But the many specific instances where facial recognition can be helpful need to be weighed against growing concerns about the use of this technology as a whole. There are many concerns about the place of facial recognition technology in society, and regulators are still in the process of providing a clear set of rules governing its use. Amid this ongoing uncertainty, we believe that limiting the use of facial recognition to a narrow set of use cases is appropriate.

The Markup reviewed several examples of controversial uses of machine learning algorithms in 2020:

Every year there are myriad new examples of algorithms that were either created for a cynical purpose, functioned to reinforce racism, or spectacularly failed to fix the problems they were built to solve. We know about most of them because whistleblowers, journalists, advocates, and academics took the time to dig into a black box of computational decision-making and found some dark materials.