"We need to be careful about how this technology is used."
Judge, Jury, Executioner
Machine learning researchers are teaching neural networks how to superficially judge humans — and the results are as brutal as they are familiar.
A study about the judgmental AI, published in the prestigious journal Proceedings of the National Academy of Sciences, describes how researchers trained the model to judge attributes in human faces the way we do upon first meeting someone, and to manipulate photos to evoke different judgments, such as appearing "trustworthy" or "dominant."
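To give a rough, hypothetical sense of what that kind of training involves (this is an illustrative sketch, not the authors' code or data): take a numeric feature vector for each face photo, take averaged human ratings of a trait like "trustworthy" for those same photos, and fit a model mapping one to the other. The feature sizes, ratings, and trait name below are all made-up stand-ins for the study's much larger dataset and models.

```python
# Illustrative sketch only -- not the study's actual pipeline. Assumes you
# already have (a) a fixed-length embedding for each face image and
# (b) averaged human ratings of a trait (e.g. "trustworthy") for those faces.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 1,000 faces, 512-dim embeddings, ratings in [0, 1].
face_embeddings = rng.normal(size=(1000, 512))
trustworthy_ratings = rng.uniform(0.0, 1.0, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    face_embeddings, trustworthy_ratings, test_size=0.2, random_state=0
)

# A simple linear model over the embeddings: it learns which directions in
# face-feature space raters associate with the trait. Learning such a
# direction is also what makes it possible to nudge a photo along it so the
# face reads as more or less "trustworthy."
model = Ridge(alpha=1.0).fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))
```

On the random placeholder data above the score is meaningless; the point is only the shape of the setup: human first-impression ratings become the training target, so whatever biases those ratings contain, the model learns.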
"Our dataset not only contains bias," Princeton computer science postdoctoral researcher Joshua Peterson wrote in a tweet thread about the research, "it deliberately reflects it."
https://twitter.com/joshuacpeterson/status/1517224879136796672
Human Error
The PNAS paper notes that the AI mirrored human judgment so closely that it tended to associate objective physical characteristics, such as someone's size or skin color, with attributes ranging from trustworthiness to privilege.
Indeed, in his thread Peterson explained that most of the 34 judgments the researchers trained the AI to assign carried political inferences. For instance, when using the study's interactive site, Futurism found that the algorithm marked white faces as more "conservative," while a search for "liberal" on the same site surfaced mostly faces of people of color.
In a press release, cognitive scientist and AI researcher Jordan W. Suchow of the Stevens Institute of Technology, who worked on the study, admitted that "we need to be careful about how this technology is used," since it could conceivably be put to nefarious ends, like artificially boosting or tarnishing a public figure's reputation.
Biased Much?
Though the research is fairly esoteric, Suchow noted in the press release that this kind of machine learning can "study people’s biased first impressions of one another."
"Given a photo of your face, we can use this algorithm to predict what people’s first impressions of you would be," he added, "and which stereotypes they would project onto you when they see your face."
With AI bias being an increasingly salient issue, this paradigm twist is as delightful as it is telling. You can check out the interactive research yourself at OneMillionImpressions.com.
READ MORE: Deep models of superficial face judgments [Proceedings of the National Academy of Sciences]
More on AI: Welcome to “Robot Hell”! Meet the Deranged Genius Who Created an AI Version of @Dril