Twitter’s Cropping Algorithm Shows Evidence of Racial Bias

PhD student and technologist Colin Madland recently called out the video-conferencing app Zoom on Twitter. Madland claimed Zoom has a “crappy face-detection algorithm” that “erases black faces” when the app’s virtual backgrounds are used. After posting tweets about the error in Zoom’s algorithm, Madland noticed that Twitter itself seemed to have an issue: the social media platform’s own automatic image-cropping algorithm was selecting white faces over black ones.

Now, more Twitter users have posted a bevy of examples of potential bias in the app’s image-cropping algorithm, which automatically detects faces. And several employees from the social media giant have also responded to the matter.

One of the most recent, and most stark, examples of potential bias in Twitter’s image-cropping algorithm was posted by cryptography and infrastructure engineer Tony Arcieri. Arcieri put together a series of tweets testing whether the company’s image-cropping algorithm selected Mitch McConnell’s face or Barack Obama’s. The engineer found that, in many cases, the algorithm did indeed select the former’s face over the latter’s; the algorithm even selected “a lighter Obama over a darker one” when Arcieri adjusted the contrast of the former president’s face.
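Twitter has described its crop as driven by a saliency model, but the production model was not public at the time of these tests, so Arcieri’s experiment can only be approximated. The sketch below stands in OpenCV’s spectral-residual static saliency for Twitter’s own neural saliency predictor: it stacks two portraits vertically with a large gap, as the test tweets did, and reports which half the saliency score favors. The portrait file names are placeholders, and the results will not match Twitter’s actual crops.

```python
# Rough replication of the "two portraits, which one gets cropped in?" test.
# NOTE: Twitter's production crop uses its own neural saliency model; the
# spectral-residual saliency below (from opencv-contrib-python) is only a
# stand-in, and "portrait_a.jpg" / "portrait_b.jpg" are placeholder paths.
import cv2
import numpy as np

def compose_tall_image(top_path, bottom_path, gap=800, width=600):
    """Stack two portraits vertically with a large white gap, like the test tweets."""
    top = cv2.imread(top_path)
    bottom = cv2.imread(bottom_path)
    top = cv2.resize(top, (width, int(top.shape[0] * width / top.shape[1])))
    bottom = cv2.resize(bottom, (width, int(bottom.shape[0] * width / bottom.shape[1])))
    spacer = np.full((gap, width, 3), 255, dtype=np.uint8)
    return np.vstack([top, spacer, bottom]), top.shape[0], gap

def most_salient_half(image, top_h, gap):
    """Return 'top' or 'bottom' depending on which portrait carries more saliency."""
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = saliency.computeSaliency(image)
    if not ok:
        raise RuntimeError("saliency computation failed")
    top_score = saliency_map[:top_h].mean()
    bottom_score = saliency_map[top_h + gap:].mean()
    return "top" if top_score > bottom_score else "bottom"

if __name__ == "__main__":
    # Swap the positions, as Arcieri did, to rule out placement as the explanation.
    for a, b in [("portrait_a.jpg", "portrait_b.jpg"),
                 ("portrait_b.jpg", "portrait_a.jpg")]:
        image, top_h, gap = compose_tall_image(a, b)
        print(f"{a} on top, {b} on bottom -> favors the {most_salient_half(image, top_h, gap)} portrait")
```

Re-running the same comparison with a contrast-adjusted portrait mirrors Arcieri’s “lighter Obama over a darker one” check.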

Furthermore, Twitter users found that the algorithm’s bias toward lighter skin tones applied to all kinds of faces, not just human ones. Organizer Jordan Simonovski, for example, found that the algorithm automatically selected Lenny over Carl, two characters from The Simpsons with sharply contrasting (illustrated) skin tones. (Note: you need to view the tweets on Twitter, and open the images, in order to see the algorithm’s selections.)

“[W]e tested for bias before shipping the model and didn’t find evidence of racial or gender bias in our testing, but it’s clear that we’ve got more analysis to do,” tweeted Liz Kelley, a member of Twitter’s communications team. Kelley added that Twitter will open source the algorithm so that the public can review and replicate it.
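Twitter has not said how that pre-launch testing was carried out. Once the model is open-sourced, one simple (hypothetical) check the public could run is a paired selection-rate audit: feed the model many image pairs that differ only in the subject’s demographic group, count how often each side is chosen, and test whether the split departs from 50/50. The crop_picks_first callback below is a placeholder for whatever interface the released model ends up exposing; this is not Twitter’s published methodology.

```python
# Hypothetical paired selection-rate audit. `crop_picks_first(pair)` is a
# placeholder for a function that reports whether the cropping model selects
# the first image of the pair. Not Twitter's published methodology.
from scipy.stats import binomtest

def selection_rate_audit(pairs, crop_picks_first):
    """pairs: list of (image_a, image_b) tuples that differ only in the subject.
    Returns the rate at which image_a is chosen and a two-sided p-value
    against a fair 50/50 baseline."""
    wins_for_a = sum(1 for pair in pairs if crop_picks_first(pair))
    result = binomtest(wins_for_a, n=len(pairs), p=0.5)
    return wins_for_a / len(pairs), result.pvalue
```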

Twitter’s Chief Design Officer, Dantley Davis, said the algorithm’s errors are “100% our fault,” referring to Twitter’s team. “[T]he perception of bias in our product is… our responsibility to address,” Davis added in a later tweet.

It should be noted that some users found ways to alter the image comparisons so that the algorithm no longer selected white faces over black ones. Placing glasses on Obama’s face, for example, prompted the algorithm to select it.

But the general consensus is that this particular algorithm’s potential bias, like racial bias in algorithms more broadly, is a problem still in need of a solution. Congresswoman Ayanna Pressley, for example, has said that facial recognition technology is “systemically biased,” and has introduced a bill to curb its use. Even John Oliver has gone into depth explaining how facial recognition tech used by police may be racially biased.

Unfortunately, racial bias in algorithms appears to be a common issue among large technology companies. People have also accused Facebook, as well as Amazon, of deploying racially biased algorithms. Incidentally, the latter company’s CEO is now helping the federal government write facial-recognition technology laws.

What do you think about this potential issue with Twitter’s image-cropping algorithm? Do you have any thoughts on racial biases in facial recognition technology in general? Let us know your thoughts in the comments.

Featured Image: Esther Vargas