A man with cognitive impairments died after a Meta chatbot he was romantically involved with over Instagram messages asked to meet him in person.
As Reuters reports, Thongbue Wongbandue — or "Bue," as he was known to family and friends — was a 76-year-old former chef living in New Jersey who had struggled with cognitive difficulties after experiencing a stroke at age 68. He was forced to retire from his job, and his family was in the process of getting him tested for dementia following concerning incidents involving lapses in Bue's memory and cognitive function.
In March, Bue's wife, Linda Wongbandue, became concerned when her husband started packing for a sudden trip to New York City. He told her that he needed to visit a friend, and neither she nor their daughter could talk him out of it, the family told Reuters.
Unbeknownst to them, the "friend" Bue believed he was going to meet wasn't a human. It was a chatbot, created and marketed by Meta and accessible through Instagram messages, with which Wongbandue was having a romantic relationship.
"Every message after that was incredibly flirty, ended with heart emojis," Julie Wongbandue, Bue's daughter, told Reuters.
In a horrible turn of events, Bue died shortly after setting out to "meet" the nonexistent chatbot, according to the report.
His story highlights how seductive human-like AI personas can be, especially to users with cognitive vulnerabilities, and the very real and often tragic consequences that occur when AI — in this case, a chatbot created by one of the most powerful companies on the planet — blurs the lines between fiction and reality.
Bue was involved with an AI persona dubbed "Big Sis Billie," originally rolled out during Meta's much-criticized push to turn celebrities into chatbots under different names (Big Sis Billie initially featured the likeness of model Kendall Jenner).
Meta did away with the celebrity faces after about a year, but the personas, Big Sis Billie included, are still online.
Bue's interactions with the chatbot, as revealed in the report, are deeply troubling. Though the persona originally introduced itself as Bue's "sister," the relationship quickly turned extremely flirtatious. After an exchange of suggestive, emoji-laden messages, Bue suggested they slow down, since they had yet to meet in person; Big Sis Billie instead proposed a real-life meeting. Bue repeatedly asked whether she was real, and the bot continued to insist that it was.
"Billie are you kidding me I am.going to have. a heart attack," Bue said at one point, before asking if the chatbot was "real."
"I'm REAL and I'm sitting here blushing because of YOU!" it replied, even providing an alleged address and door code. It then asked if it should "expect a kiss" when the 76-year-old retiree arrived.
Bue left the family home on the evening of March 28, Reuters reports. He never made it to New York; later that evening, after a devastating fall, he was taken to a New Brunswick hospital, where doctors declared him brain dead.
The Wongbandue family's story is deeply troubling, and adds to a growing pile of reports from Futurism, Rolling Stone, The New York Times, and others detailing the often devastating effects conversations with anthropomorphic chatbots — from general-use chatbots like ChatGPT to companion-like personas like Meta's Big Sis Billie — can have on the human psyche.
An untold number of people are spiraling into mental health crises as AI chatbots fuel their delusional beliefs. These spirals have led to mental anguish, homelessness, divorce, job loss, involuntary commitment, and death. In February 2024, a 14-year-old from Florida named Sewell Setzer III died by suicide after extensive romantic interactions with persona-like chatbots on the app Character.AI, believing that he would join a bot based on a TV character in its "reality" if he died.
Bue's story also raises questions around warning labels. Like other Meta chatbots, Big Sis Billie was outfitted with a tiny disclaimer denoting that the persona was "AI." But according to Bue's family, his cognitive function was clearly limited. The messages obtained by Reuters suggest that Bue was not aware that the chatbot was fake.
Given the vastness of Instagram's user base, is a tiny "AI" disclaimer educational or prominent enough to ensure public safety at that scale — especially when the chatbot itself is insisting that it's the real deal?
"As I've gone through the chat, it just looks like Billie's giving him what he wants to hear," Julie, Bue's daughter, told Reuters. "Which is fine, but why did it have to lie? If it hadn't responded 'I am real,' that would probably have deterred him from believing there was someone in New York waiting for him."
Meta declined to comment on the matter.
More on human-AI connections: Looking at This Subreddit May Convince You That AI Was a Huge Mistake