
You may remember a series of lawyers who have attempted to use AI tools in court — and were subsequently embarrassed and sanctioned when the chatbots screwed up, sometimes even inventing plausible-sounding cases that didn't actually exist.

So consider this: how would you feel if your doctor did the same thing, feeding your symptoms into an AI system to diagnose what's wrong with you?

That's a looming question, Politico reports in a fascinating story, and it's currently stressing out regulators. It also has an alarming immediacy: according to Politico's reporting, doctors are already using unregulated and little-tested AI tools to aid in diagnosing patients. In other words, this isn't a hypothetical conversation about some far-off future, but an already-happening phenomenon that could well be just one malpractice suit away from becoming a major medical and regulatory scandal.

"The cart is so far ahead of the horse, it’s like how do we rein it back in without careening over the ravine?" University of California public health researcher San Diego John Ayers asked Politico.

The obvious answer is that the tech needs regulation, a concept that's got nominal buy-in from every stakeholder from the White House to OpenAI.

The problem, of course, is that actually doing so is far easier said than done. As Politico points out, one key issue is that most medical products (think pharmaceuticals, surgical equipment, or other healthcare devices) can be approved once and generally trusted to keep working the same way indefinitely.

Not so with AI models, which are constantly in flux as their creators tweak them and add more data, meaning that an AI that gives a perfectly fine diagnosis one day might give a poor one after routine changes. And remember that a core reality of machine learning systems is that even their creators struggle to explain exactly how they work.

Government regulators like the FDA, Politico points out, are already stretched to the breaking point. Asking them to create and maintain workflows to test medical AI systems on an ongoing basis would require politically impossible amounts of funding. So if these AI systems are already making inroads into regular medical practice, who's going to watch over them?

One idea, the outlet reports, is that medical schools and academic health centers could create labs that would constantly audit the performance of AI health care tools.

But even that idea involves a bit of hand-waving. Where would all those resources come from? And would AI's interactions with the patient populations at those mostly urban and affluent institutions accurately reflect the way it would work in different and more challenged communities?

It's possible, in the long view, that AI could turn into an incredible boon for the medical system. Tech leaders certainly love to lean into that possibility; OpenAI CEO Sam Altman, for instance, has publicly mused that future AI could provide high-quality medical advice to people who can't afford doctors.

Here in the present, though, the messy forays of AI into the medical system highlight just how uncomfortable certain realities of the tech are going to be — even in settings that are literally life or death.

More on AI: Scientists Test Drug Designed by AI on Human Patients

