In a healthcare test that went horribly wrong, GPT-3 told a mock patient to kill themselves.

Trash Talk

GPT-3, an advanced language-generating artificial intelligence developed by OpenAI, is really good at what it does: churning out humanlike text.

But Yann LeCun, the Chief AI Scientist at Facebook who's been called a "godfather of AI," trashed the algorithm in a Tuesday Facebook post, writing that "people have completely unrealistic expectations about what large-scale language models such as GPT-3 can do."

Glitching Again

LeCun cited a recent experiment by the medical AI firm NABLA, which found that GPT-3 is woefully inadequate for use in a healthcare setting because writing coherent sentences isn't the same as being able to reason about, or understand, what it's saying.

"It's entertaining, and perhaps mildly useful as a creative help," LeCun wrote. "But trying to build intelligent machines by scaling up language models is like [using] high-altitude airplanes to go to the Moon. You might beat altitude records, but going to the Moon will require a completely different approach."

Medical Malpractice

After testing it in a variety of medical scenarios, NABLA found that there's a huge difference between GPT-3 being able to form coherent sentences and actually being useful.

In one case, the algorithm was unable to add up the cost of items on a medical bill; in a vastly more dangerous exchange, it recommended that a mock patient kill themselves.

"As a question-answering system," LeCun wrote, "GPT-3 is not very good."

More on GPT-3: Major Newspaper Publishes Op-Ed Written by GPT-3

