An "AI medical advisor"? What could go wrong?

Altman's Thoughts

OpenAI's ludicrously popular AI chatbot ChatGPT is prone to spouting stupid and outright made-up stuff. And if recent remarks are any indication, so is the company's CEO Sam Altman.

From admitting he's a doomsday prepper worried about an AI or virus apocalypse, to speculating that AI could break capitalism, to calling ChatGPT a "horrible product" — which might be true, but is arguably a silly thing to say as the company's CEO — we're all just unwitting recipients of Altman's Thoughts.

Altman was apparently especially inspired over the weekend, tweeting out his latest giga-brained idea for how benevolent AIs will improve the world: AI giving medical advice to people too poor for actual healthcare.

"The adaptation to a world deeply integrated with AI tools is probably going to happen pretty quickly," he wrote. "The benefits (and fun!) have too much upside."

"These tools will help us be more productive (can't wait to spend less time doing email!), healthier (AI medical advisors for people who can't afford care), smarter (students using ChatGPT to learn), and more entertained (AI memes lolol)," he continued.

Bad Advice

As we've covered at Futurism, there are plenty of reasons to be doubtful about the prospect of getting serious advice from an AI, let alone having one be your "AI medical advisor."

We can't even trust AI to give solid health tips and medical information after it's supposedly been double-checked by an editor. Remember when a Men's Journal health article written by AI was found to contain an outrageous number of factual errors?

Some of those errors may come down to a few mixed-up details or small grammatical changes that alter the veracity of a "fact," but AIs are also well known to completely "hallucinate" convincing-sounding statements that have no basis in reality.

To his credit, though, Altman acknowledges that AI tools are still "somewhat broken," and that institutions will need "enough time" to "figure out what to do" with AI — though he does admit we're "not that far away from potentially scary ones."

Still, if he's so convinced that AI will ultimately be a force for good, what's he doing with a doomsday patch of land in Big Sur in the event of an "AI that attacks us"? Don't worry about it.

More on OpenAI: Elon Musk Horrified by What OpenAI Has Become
