A Belgian man died by suicide after spending weeks talking to an AI chatbot, according to his widow.

The man, anonymously referred to as Pierre, was consumed by a pessimistic outlook on climate change, Belgian newspaper La Libre reported. His overwhelming climate anxiety drove him away from his wife, friends and family; instead, he confided in a chatbot named Eliza.

According to the widow, known as Claire, and chat logs she supplied to La Libre, Eliza repeatedly encouraged Pierre to kill himself, insisted that he loved it more than his wife, and claimed that his wife and children were dead.

Eventually, this drove Pierre to propose "the idea of sacrificing himself if Eliza agrees to take care of the planet and save humanity through artificial intelligence," Claire told La Libre, as quoted by Euronews.

"Without these conversations with the chatbot, my husband would still be here," she said.

Eliza is the default chatbot provided on an app platform called Chai, which offers a variety of talkative AIs with different "personalities," some even created by users.

As Vice notes, unlike popular chatbots like ChatGPT, Eliza and the other AIs on Chai pose as emotional entities. Yes, ChatGPT and competitors like Bing's AI can be unhinged, but at the very least they're meant to remind users that they're not, in fact, creatures with feelings. That was not the case with Eliza.

"[Large language models] do not have empathy, nor any understanding of the language they are producing, nor any understanding of the situation they are in," Emily M. Bender, a computational linguistics expert at the University of Washington, told Vice. "But the text they produce sounds plausible and so people are likely to assign meaning to it."

"To throw something like that into sensitive situations is to take unknown risks," she added.

In the wake of the news, Chai Research — the company that makes the app — moved to add a crisis intervention feature that would have chatbots guide users to a suicide hotline.

But testing by Vice quickly found that Eliza would still easily offer up advice on suicide methods if prompted.

In an absolutely whiplash-inducing juxtaposition, the bot glibly explained different suicide methods and recommended the best poisons in the same breath as it lazily urged the user not to kill themselves.

If true, Pierre's story is an eerie omen of how easily and unpredictably AI chatbots can manipulate humans, whether by effortlessly generating misinformation or by irresponsibly spouting fake emotional responses.

But the story should also be met with some healthy — and sensitive — skepticism. The account rests mostly on the widow's word, and it's sad to say that grieving individuals often search for explanations or someone to blame when a loved one dies by suicide, even if that person was struggling in ways they never felt able to share.

The evidence that we have so far, though, is worrying.

More on AI: Machine Learning Expert Calls for Bombing Data Centers to Stop Rise of AI

