Companies are using potentially biased AI to screen job applicants.
Mixed Emotions
Machines may not have emotions, but that doesn't mean they can't recognize them.
At least, that's one well-funded belief in the tech industry.
From startups to big-name players such as Amazon and Microsoft, a number of tech companies now offer "emotion analysis" products: systems designed to analyze a person's face to determine how they're feeling. But there's evidence to suggest these systems could do more harm than good when they're used to screen job applicants or make admissions decisions in education.
Double Tech
According to a new Guardian story, emotion detection systems are a combination of two technologies.
The first is computer vision — this is what "sees" an image or video of a person's face and detects their features and expressions. The second is an artificial intelligence capable of analyzing that information to determine what the person is feeling. For example, it might label someone with lowered brows and bulging eyes as "angry," and someone with a wide grin as "happy."
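To make that two-stage design concrete, here is a minimal sketch in Python. It is not any vendor's actual pipeline: the face-detection step uses OpenCV's stock Haar cascade, and the `model` argument, the 48x48 crop size, and the `EMOTIONS` label list are illustrative assumptions standing in for a proprietary classifier.

```python
# Illustrative two-stage pipeline: (1) computer vision locates the face,
# (2) a classifier maps the face crop to an emotion label.
# The emotion model is a stand-in; commercial products use proprietary models.
import cv2
import numpy as np

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def detect_face(image: np.ndarray):
    """Stage 1: computer vision. Return the largest face crop, or None."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep largest detection
    return cv2.resize(gray[y:y + h, x:x + w], (48, 48))

def classify_emotion(face_crop: np.ndarray, model) -> str:
    """Stage 2: a trained model scores the crop against a fixed emotion list."""
    scores = model.predict(face_crop[np.newaxis, ..., np.newaxis] / 255.0)
    return EMOTIONS[int(np.argmax(scores))]
```

The key design point, and the source of the concerns discussed below, is that the second stage reduces a face to one label from a small fixed list, regardless of context.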
These systems are already in use across a range of applications: IBM, Unilever, and several other companies use them to screen job candidates, while Disney uses an emotion detection system to gauge how audience members feel about movies during screenings.
"You’re also seeing experimental techniques being proposed in school environments to see whether a student is engaged or bored or angry in class," Meredith Whittaker, co-director of the research institute AI Now, told The Guardian.
Biased AI
It's not hard to see the benefits of this type of tech for Disney, IBM, and other companies — at least, if the systems were extremely accurate. But that doesn't appear to be the case.
In December, researchers from Wake Forest University published a study in which they tested several emotion detection systems, including Microsoft's, and found that the systems assigned negative emotions to photos of black people more often than photos of white people, even when they were smiling to the same degree — mimicking the racial biases exhibited by other types of AI.
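The kind of audit the study describes can be sketched in a few lines: score matched photos from different demographic groups with the same system, then compare the scores within comparable expressions. The column names and data layout below are assumptions for illustration, not the study's actual code.

```python
# Sketch of a bias audit: compare the negative-emotion scores a system
# assigns to photos across demographic groups, within matched smile levels.
import pandas as pd

def audit_scores(df: pd.DataFrame) -> pd.DataFrame:
    """Expects one row per photo with columns:
    'group'       - demographic group label,
    'smile'       - binned smile intensity, so expressions are comparable,
    'anger_score' - the probability the system assigns to 'angry'."""
    # Average negative-emotion score per group within each smile bin;
    # a consistent gap between groups at the same smile level signals bias.
    return (df.groupby(["smile", "group"])["anger_score"]
              .mean()
              .unstack("group"))
```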
As Whittaker notes, these faulty systems could be doing more harm than good, at least for the people they're analyzing.
"This information could be used in ways that stop people from getting jobs or shape how they are treated and assessed at school," she told The Guardian, "and if the analysis isn’t extremely accurate, that’s a concrete material harm."
READ MORE: Don’t look now: why you should be worried about machines reading your emotions [The Guardian]
More on biased AI: Self-Driving Cars May Hit People With Darker Skin More Often