"We were not consulted about that, and we did not authorize that."
Remember Tessa, the chatbot that was designed to help users combat disordered eating habits but ended up giving absolutely terrible, eating disorder-validating advice?
Well, if the story wasn't stupid enough already, it just got a hell of a lot dumber in a way that perfectly illustrates how AI is being rolled out overall: hastily, and in ways that don't make much sense for users — or even actively put them at risk.
To recap: back in May, an NPR report revealed that just four days after its burnt-out crisis helpline workers moved to unionize, the National Eating Disorders Association (NEDA) — the largest eating disorder nonprofit in the US, according to Vice — decided to fire its entire crisis staff and disband the helpline entirely in favor of a human-less chatbot named Tessa.
Tessa wasn't designed to help those in crisis situations; instead, it was intended to coach users through a body positivity training course. In its defense, the bot's backers emphasized that it was built on "decades of research," all while hammering the point that it couldn't "go off the rails" like ChatGPT or other bots — until, uh, it did exactly that. Tessa was caught telling users to lose weight by cutting up to 1,000 calories daily, among other terrible advice, and as a result, the bot has been taken down.
And now, in a new twist, NEDA is telling The Wall Street Journal that the bot was apparently meant to provide only static responses — and was reworked with generative AI capabilities by the creator of the bot's underlying tech, a software firm called Cass, without NEDA's explicit knowledge or authorization.
"We were not consulted about that," NEDA CEO Liz Thompson told the WSJ of the upgrade, "and we did not authorize that."
As NEDA tells it, Tessa was developed by researchers at multiple universities — Washington University School of Medicine and Stanford University School of Medicine both included — as a closed system with standardized, pre-programmed responses. The researchers say that was very intentional; when someone's at risk of self-harm, every word is important, and you never want to say the wrong thing.
"We can't yet trust AI to offer sound mental-health advice," Ellen Fitzsimmons-Craft, an associate professor of psychiatry at Washington University School of Medicine and one of the researchers behind Tessa, told the WSJ.
And most importantly, Tessa was tested as a closed system in a six-month clinical trial before it was officially made available on NEDA's website in February 2022. It wasn't until later that year, NEDA says, when the bot was already serving at-risk people looking for help, that Cass, which markets itself as "the world's leading AI mental health assistant," updated all of its products with generative AI — Tessa included.
For its part, Cass CEO Michiel Rauws has defended his company's decisions, reportedly arguing that it never acted outside of its contract with NEDA.
"In most cases it performed really well," Rauws told the WSJ, reportedly adding, in his defense, that Tessa provided disclaimers and encouraged users to consult their healthcare provider, "and did and said the right things and helped people get access to care."
Still, that defense is pretty weak. Providing a disclaimer before your tech tells someone with disordered eating habits to drop up to two pounds a week and count calories doesn't mean that no harm will ultimately be done, and neither does advising them to talk to a doctor before engaging in potentially destructive behavior. As Thompson, NEDA's CEO, told the WSJ: when it comes to helping folks in eating disorder crisis, "every word counts."
But that said, NEDA isn't off the hook either. Regardless of how safe Tessa was supposed to be, the nonprofit still made the decision to remove human empathy from eating disorder support entirely, and did so only after its staffers unionized, a move that, to put it bluntly, backfired spectacularly. And at the end of the day, Cass didn't offer Tessa to NEDA's digital patrons. NEDA did.
And to that end: according to the WSJ, Thompson says it's likely that the chatbot, rather than human staffers, will return.
"We're not shutting down technology," Thompson told the newspaper. "But we have to be super careful with the people we serve."
More on AI: Copywriter Fired after Bosses Started Calling Her "ChatGPT"