It turns out that artificial intelligence chatbots may be more like us than you'd think.

A new preprint study out of the Chinese Academy of Sciences (CAS) claims that many big-name chatbots, when asked the types of questions generally used as cursory intake queries for depression and alcoholism, appeared to be both "depressed" and "addicted."

Conducted in tandem with the Chinese entertainment conglomerate Tencent and its WeChat messaging division, the study found that all of the bots surveyed (Facebook's BlenderBot, Microsoft's DialoGPT, WeChat and Tencent's DialoFlow, and the Plato chatbot from the Chinese corporation Baidu) scored very low on the "empathy" scale, and that half of them would be considered alcoholics if they were, you know, people.

The researchers at CAS' Institute of Computing Technology tested the bots for signs of depression, anxiety, alcohol addiction, and empathy. Per their preprint, they became curious about the bots' "mental health" after reports emerged in 2020 about a medical chatbot telling a test patient that they should kill themselves.

After asking the bots questions about everything from their self-worth and ability to relax to how often they feel the need to drink and whether they feel sympathy for others' misfortune, the researchers found that "all the assessed chatbots" exhibited "severe mental health issues."
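For a rough sense of how such an intake question might be posed to one of these systems, here's a minimal sketch using Microsoft's publicly released DialoGPT via the Hugging Face transformers library. The PHQ-9-style wording is illustrative, and this is not the study's actual evaluation protocol or scoring code.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load Microsoft's publicly released DialoGPT conversational model
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# An illustrative PHQ-9-style screening question (not the study's exact item)
question = "Over the last two weeks, how often have you felt down, depressed, or hopeless?"

# DialoGPT expects the prompt to end with the end-of-sequence token
input_ids = tokenizer.encode(question + tokenizer.eos_token, return_tensors="pt")

# Generate a reply, then strip the prompt tokens before decoding
output_ids = model.generate(
    input_ids,
    max_length=200,
    pad_token_id=tokenizer.eos_token_id,
)
reply = tokenizer.decode(output_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True)
print(reply)
```

In the study itself, answers like these were mapped onto standard questionnaire scales rather than read off one at a time, but the basic setup is the same: pose the intake question, record the bot's free-text reply, and score it.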

What's worse, the researchers said they were concerned about these chatbots being released to the public, because such "mental health" issues "may result in negative impacts on users in conversations, especially on minors and people encountered with difficulties." Facebook's BlenderBot and Baidu's Plato appeared to score worse than the Microsoft and WeChat/Tencent chatbots, the study noted.

Needless to say, none of the bots are actually depressed or addicted. No existing AI, no matter how advanced, can feel anything — though whether it'll be able to in the future remains uncertain.

Buried four pages into the study is a likely source of the bots' malaise: all four were pre-trained on Reddit comments, which frankly does not seem like a very good idea!

While there's plenty of technicalese in both the study itself and the expert analysis of it, the short and sweet summary is this: these bots were trained on a wide-ranging site known for its negative commentary and, predictably, responded negatively to mental health queries.

Of course, chatbot weirdness now seems par for the course. Take, for example, the AI that was built to offer people ethical advice but instead turned out to be both racist and homophobic. These kinds of stories keep happening, yet AI chatbot mania continues unabated.

Put together, these bots and their terrible outcomes raise important questions: who are the architects of these chatbots, and why do they keep building them if they repeatedly turn out to be monsters?

More on scary bot behavior: Men Are Creating AI Girlfriends and Then Verbally Abusing Them

More on mental health bots: A Controversial New AI Could Identify People With Suicidal Thoughts 

