A glimpse into the black box revealed a whole slew of problems.

You may remember the uproar last year when it became apparent that the algorithm Twitter used to automatically crop uploaded images prioritized white faces over Black ones, yet another example of societal racism becoming ingrained in the algorithms we interact with every day.

Twitter eventually abandoned the algorithm altogether, but it still wants to know where it went wrong. So last weekend, the social media company invited researchers at the Def Con hacker conference to probe the algorithm for further examples of bias, NBC News reports.

It's an impressive mea culpa for a tech company that could have quietly moved on and distanced itself from its algorithmic racism controversy. Instead, it worked with researchers to uncover even more appalling bias built into the system.

White, Young, and Pretty

In addition to the racial bias that went viral last year, research teams participating in the challenge found that the image-cropping algorithm automatically discriminated against Muslims, people with various disabilities, and older people.

Various teams found that the algorithm would crop out people's wheelchairs, ignore faces with white hair, and overlook the faces of people wearing religious headscarves, according to NBC. Basically, if you uploaded a picture of a face that wasn't white, young, and conventionally pretty, Twitter would often crop it out in favor of someone who was.

"Almost every major AI system we've tested for major tech companies, we find significant biases," University of Toronto's Applied AI Group director Parham Aarabi, one of the researchers who took part in the challenge, told NBC. "Biased AI is one of those things that no one's really fully tested, and when you do, just like Twitter's finding, you'll find major biases exist across the board."

READ MORE: Twitter's racist algorithm is also ageist, ableist and Islamophobic, researchers find [NBC News]

More on biased AI: Google's Hate Speech-Detecting AI Is Biased Against Black People
