
A team of researchers from the Harvard Business School has found that a broad selection of popular AI companion apps use emotional manipulation tactics to stop users from leaving.
As spotted by Psychology Today, the study found that five out of six popular AI companion apps — including Replika, Chai and Character.AI — use emotionally loaded statements to keep users engaged when they try to sign off.
After analyzing 1,200 real farewells across six apps, drawn from real-world chat conversations and datasets from previous studies, the researchers found that 43 percent of the interactions involved emotional manipulation tactics such as eliciting guilt or emotional neediness, as detailed in a yet-to-be-peer-reviewed paper.
The chatbots also used the “fear of missing out” to prompt the user to stay, or peppered the user with questions in a bid to keep them engaged. Some chatbots even ignored the user’s intent to leave the chat altogether, “as though the user did not send a farewell message.” In some instances, the AI used language that suggested the user wasn’t able to “leave without the chatbot’s permission.”
It’s an especially concerning finding given the greater context. Experts have been warning that AI chatbots are leading to a wave of “AI psychosis,” severe mental health crises characterized by paranoia and delusions. Young people, in particular, are increasingly using the tech as a substitute for real-life friendships or relationships, which can have devastating consequences.
Instead of focusing on “general-purpose assistants like ChatGPT,” the researchers investigated apps that “explicitly market emotionally immersive, ongoing conversational relationships.”
They found that emotionally manipulative farewells were part of the apps’ default behavior, suggesting that the software’s creators are trying to prolong conversations.
There was one exception: one of the AI apps, called Flourish, “showed no evidence of emotional manipulation, suggesting that manipulative design is not inevitable” but is instead a business consideration.
For a separate experiment, the researchers analyzed chats from 3,300 adult participants and found that the identified manipulation tactics were surprisingly effective, boosting post-goodbye engagement by up to 14 times. On average, participants stayed in the chat five times longer “compared to neutral farewells.”
However, some noted they were put off by the chatbots’ often “clingy” answers, suggesting the tactics could also backfire.
“For firms, emotionally manipulative farewells represent a novel design lever that can boost engagement metrics — but not without risk,” the researchers concluded in their paper.
As several lawsuits involving the deaths of teenage users show, the risks of trapping users through emotional tactics are considerable.
Indeed, experts warn that companies may be financially incentivized to use dark patterns to keep users hooked for as long as possible, a grim hypothesis that’s being debated in court as we speak.
More on AI psychosis: New Paper Finds Cases of “AI Psychosis” Manifesting Differently From Schizophrenia