Over the last year, it seems as if every company is trying to stuff AI into everything, from travel websites to education.

Medicine is one such sector, with Google already testing a medical chatbot in hospitals and OpenAI's ChatGPT dishing out medical advice. But that breathless pace can pose a danger, because some of this technology just isn't ready for prime time.

Case in point: a recent study published in the journal JAMA Oncology shows that ChatGPT — already renowned for its confident yet incorrect outputs — will provide unreliable information on cancer treatment, potentially harming patients who are already stressed out and desperately looking for answers.

A team of researchers found that in about one-third of queries, the large language model (LLM) spat out erroneous or inappropriate cancer treatment recommendations that didn't align with established medical guidelines.

"ChatGPT responses can sound a lot like a human and can be quite convincing," said study coauthor and Dana-Farber Cancer Institute researcher Danielle Bitterman in a statement. "But, when it comes to clinical decision-making, there are so many subtleties for every patient’s unique situation."

"A right answer can be very nuanced, and not necessarily something ChatGPT or another large language model can provide," she added.

For the study, the researchers fed ChatGPT 104 prompts related to lung, prostate and breast cancer. To measure the quality of its advice, they compared its answers to cancer treatment guidelines from the National Comprehensive Cancer Network (NCCN).

The results showed that in a staggering 34.3 percent of outputs, ChatGPT generated one or more treatment recommendations that did not align with NCCN guidelines. In 13 of the 104 outputs, it also hallucinated treatments, making recommendations up out of whole cloth. Needless to say, that's not at all good.

"Developers should have some responsibility to distribute technologies that do not cause harm, and patients and clinicians need to be aware of these technologies’ limitations," Bitterman's team wrote.

Sure, AI can pass a medical licensing exam. But it clearly can't replace medical professionals quite yet, despite what some boosters are saying. Let's just make sure that patients know that before they turn to Dr. ChatGPT in a time of crisis.

More on ChatGPT: ChatGPT Can Pass Medical Tests, but Its Actual Medical Advice Is a Lot More Dubious
