There are numerous reasons we shouldn't trust AI chatbots with health advice.

The chatty assistants have a strong tendency to lie with an astonishing degree of confidence, which can cause plenty of mayhem. Case in point: a recent study found that OpenAI's ChatGPT was terrible at giving correct diagnoses.

That also goes for the health of our furry companions. In an essay published this year in the literary magazine n+1, screenshots of which have been going viral on social media, writer Laura Preston recalled an incredibly odd scene from an AI conference she attended back in April.

During a talk at the event, Cal Lai, CEO of pet health startup AskVet, which launched a ChatGPT-based "answer engine for animal health" called VERA in February 2023, told the bizarre story of a woman whose elderly dog was having diarrhea.

The woman reportedly asked VERA for advice and got an unsettling answer.

"Your dog is at the end of his life," the chatbot responded, as quoted by Preston. "I recommend euthanasia."

According to Lai's story, the woman was clearly in distress and in denial about her dog's state. Per Preston's account of the event, VERA sent her a "list of nearby clinics that could get the job done" since it "knew the woman's location."

While the woman didn't respond to the chatbot at first, she eventually gave in and euthanized her dog.

"The CEO regarded us with satisfaction for his chatbot’s work: that, through a series of escalating tactics, it had convinced a woman to end her dog’s life, though she hadn’t wanted to at all," Preston wrote in her essay.

"The point of this story is that the woman forgot she was talking to a bot," Lai told the audience, as quoted by Preston. "The experience was so human."

In other words, the CEO celebrated his company's chatbot for convincing a woman to end her dog's life, a story that raises several burning ethical questions.

For one, did the dog really need to be euthanized, considering how unreliable these tools can be? And if euthanasia really was the best course of action, shouldn't that advice have come from a human veterinarian, who knows what they're doing?

Futurism has reached out to AskVet for comment.

Lai's story highlights a troubling trend: AI companies are racing to replace human workers, from programmers to customer service agents, with AI assistants.

And now that health startups are getting on board, experts worry that generative AI in the healthcare space could come with some substantial risks.

In one recent paper, researchers found that chatbots still have a "tendency to produce harmful or convincing but inaccurate content," which "calls for ethical guidance and human oversight."

"Additionally, critical inquiry is needed to evaluate the necessity and justification of LLMs’ current experimental use," they concluded.

Preston's essay is a particularly glaring example of those risks: as pet owners, we are in charge of the health of our beloved companions, which makes us ultimately responsible for another living being.

Meanwhile, screenshots of Preston's essay were met with sheer outrage on social media.

"I'm signing out again this is unironically one of the worst things I've ever read," one BlueSky user wrote. "No words for how much hate I feel right now."

More on AI chatbots in health: ChatGPT Is Absolutely Atrocious At Being a Doctor

