If they can't do it, nobody can.

Open and Shut

That was fast.

Less than six months after its public release, it appears that OpenAI has shut down its "AI classifier," an AI-detection tool that the ChatGPT creator had previously billed as a "classifier to distinguish between text written by a human and text written by AIs from a variety of providers."

"While it is impossible to reliably detect all AI-written text, we believe good classifiers can inform mitigations for false claims that AI-generated text was written by a human," reads an OpenAI blog post introducing the tool, published January 31 of this year, "for example, running automated misinformation campaigns, using AI tools for academic dishonesty, and positioning an AI chatbot as a human."

Which are all valid concerns, of course, and a working AI detector would indeed help counter these and other cases of AI misuse. Fast forward to last week, though, and OpenAI had quietly updated that same blog post to note that its classification tool is no longer available, with its woeful inaccuracy to blame. (A link to the tool has also disappeared from OpenAI's website.)

"As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy," reads the note, posted on July 20. "We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated."

They tried, we guess? But it's depressing: if the experts who built ChatGPT can't build a tool that reliably detects its output, it's hard to imagine that anyone else can.

Unreliable Narrators

The failure of the tool feels significant. It could well be argued that the release of ChatGPT kickstarted the rapid public adoption of generative AI, and as these tools continue to pick up steam, synthetic content is only going to proliferate. At the same time, it may get increasingly difficult to tell whether a given piece of content was created by a human or not. A reliable AI detector would go a long way toward keeping reality from melting away completely.

That said, it isn't terribly surprising to see OpenAI shutter the project, considering that 1) it never promised the classifier was completely accurate, and 2) no one else has managed to produce a fully reliable AI detection tool either.

Even so, it feels like a grim sign for near-term AI detection that OpenAI didn't even make it to half a year before throwing in the towel on its classifier, especially given that, as it stands, reliable alternatives are pretty much nonexistent.

More on OpenAI: Stanford Scientists Find That Yes, ChatGPT Is Getting Stupider

