Machine learning

The Markup reviewed several examples of controversial uses of machine learning algorithms in 2020:

Every year there are myriad new examples of algorithms that were either created for a cynical purpose, functioned to reinforce racism, or spectacularly failed to fix the problems they were built to solve. We know about most of them because whistleblowers, journalists, advocates, and academics took the time to dig into a black box of computational decision-making and found some dark materials.

The Register reports on a controversy surrounding the automatic image-cropping functionality of Twitter:

When previewing pictures on the social media platform, Twitter automatically crops and resizes the image to match your screen size, be it a smartphone display, PC monitor, etc. Twitter uses computer-vision software to decide which part of the pic to focus on, and it tends to home in on women’s chests or those with lighter skin. There are times where it will pick someone with darker skin over a lighter-skinned person, though generally, it seems to prefer women’s chests and lighter skin.

It seems Twitter has not come up with a technical fix. Read the full article here for how the company is responding instead.
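
As a side note on how this kind of cropping works under the hood, here is a minimal sketch of saliency-based cropping, the general technique the quote describes. It is not Twitter's actual model: it uses OpenCV's spectral-residual saliency estimator (my own choice, requiring opencv-contrib-python) and hypothetical file names (example.jpg, preview.jpg) purely to illustrate how a saliency map can decide which part of an image survives the crop.

```python
# Sketch of saliency-based cropping: pick a crop window centred on the
# most salient point of the image. Requires opencv-contrib-python.
import cv2
import numpy as np


def saliency_crop(image: np.ndarray, crop_w: int, crop_h: int) -> np.ndarray:
    """Return a (crop_h x crop_w) window centred on the most salient region."""
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = saliency.computeSaliency(image)

    if not ok:
        # Fall back to a plain centre crop if saliency estimation fails.
        cy, cx = image.shape[0] // 2, image.shape[1] // 2
    else:
        # Use the location of the strongest saliency response as the crop centre.
        cy, cx = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)

    h, w = image.shape[:2]
    # Clamp the window so it stays inside the image bounds.
    x0 = int(np.clip(cx - crop_w // 2, 0, max(w - crop_w, 0)))
    y0 = int(np.clip(cy - crop_h // 2, 0, max(h - crop_h, 0)))
    return image[y0:y0 + crop_h, x0:x0 + crop_w]


if __name__ == "__main__":
    img = cv2.imread("example.jpg")          # any test image (hypothetical path)
    preview = saliency_crop(img, 600, 335)   # e.g. a wide preview aspect ratio
    cv2.imwrite("preview.jpg", preview)
```

The controversy lives entirely in that argmax: whatever the saliency model has learned to respond to is what decides who stays in the preview and who gets cropped out.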

Caught this post on Hacker News by chance: “With questionable copyright claim, Jay-Z orders deepfake audio parodies off YouTube”. The article discusses a copyright controversy over deepfake audio created by a YouTube channel. The videos themselves are absolutely fascinating: a technological showcase mixed with humour and creativity.

All videos can also be seen here.