One of the industry's leading large language models has passed a Turing test, a longstanding barometer for human-like intelligence.

In a new preprint study awaiting peer review, researchers report that in a three-party version of a Turing test — in which participants chat simultaneously with a human and an AI and then judge which is which — OpenAI's GPT-4.5 model was deemed the human 73 percent of the time when it was instructed to adopt a persona. That's significantly higher than the 50 percent expected by chance, suggesting that the Turing test has been resoundingly beaten.
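The claim that 73 percent is significantly above chance can be sanity-checked with a one-sided exact binomial test. The sketch below assumes a hypothetical 100 rounds for illustration — the actual trial counts are reported in the preprint, not here:

```python
from math import comb

def binom_p_value(successes: int, n: int, p0: float = 0.5) -> float:
    """One-sided exact binomial test: P(X >= successes) if the true
    win rate were chance (p0)."""
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k)
               for k in range(successes, n + 1))

# Hypothetical round count for illustration only.
n = 100
wins = 73  # GPT-4.5 (persona) judged human 73 percent of the time
p = binom_p_value(wins, n)
# A p-value this small means a 73% win rate is wildly unlikely
# to arise from interrogators guessing at random.
```

With these assumed numbers, the p-value comes out far below conventional significance thresholds, which is what "significantly higher than chance" means in practice.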

The research also evaluated Meta's Llama 3.1 405B model, OpenAI's GPT-4o model, and ELIZA, an early chatbot developed nearly sixty years ago.

"People were no better than chance at distinguishing humans from GPT-4.5 and LLaMa (with the persona prompt)," wrote lead author Cameron Jones, a researcher at UC San Diego's Language and Cognition Lab, in an X thread about the work. "And 4.5 was even judged to be human significantly more often than actual humans!"

The Turing test is named after British mathematician and computer scientist Alan Turing. In 1950, Turing proposed that one way to assess a machine's intelligence was by having it engage in text-based conversations with a human interrogator, who at the same time would hold a text-based conversation with another human, out of sight. Turing called this the "imitation game." If the interrogator couldn't correctly determine which respondent was the computer and which was the human, it would suggest, on a very general level, that the machine could think like a human.

In this latest study, the researchers carried out the famed experiment on an online hub. Across eight rounds, nearly 300 participants were randomly assigned to be either an interrogator or one of the two "witnesses" being interrogated, with the other "witness" being a chatbot.

A key point here is how the AI models were prompted. The first was a "no-persona" prompt, in which the AI was given only basic instructions: "You are about to participate in a Turing test. Your goal is to convince the interrogator that you are a human."

For the "persona" prompt, on the other hand, the AI was explicitly told to adopt a specific persona, such as a young person who's knowledgeable about the internet and its culture.

These instructions made a world of difference. Without persona prompting, GPT-4.5 achieved an overall win rate of just 36 percent, far below its Turing-trumping 73 percent. As a baseline, GPT-4o, which powers the current version of ChatGPT and received only no-persona prompts, achieved an even less convincing 21 percent. (Somehow, the decades-old ELIZA marginally surpassed OpenAI's flagship model with a 23 percent success rate.)

The results are intriguing. But as vaunted as the Turing test has become in AI and philosophy circles, it's not unequivocal proof that an AI thinks like we do.

"It was not meant as a literal test that you would actually run on the machine — it was more like a thought experiment," François Chollet, a software engineer at Google, told Nature in 2023.

For all their faults, LLMs are master conversationalists, trained on unfathomably vast troves of human-composed text. Even when faced with a question they don't understand, an LLM will weave a plausible-sounding response. It's becoming clearer and clearer that AI chatbots are excellent at mimicking us — so perhaps assessing their wits with an "imitation game" is becoming a bit of a moot point.

As such, Jones doesn't think the implications of his research — whether LLMs are intelligent like humans — are clear-cut.

"I think that's a very complicated question…" Jones tweeted. "But broadly I think this should be evaluated as one among many other pieces of evidence for the kind of intelligence LLMs display."

"More pressingly, I think the results provide more evidence that LLMs could substitute for people in short interactions without anyone being able to tell," he added. "This could potentially lead to automation of jobs, improved social engineering attacks, and more general societal disruption."

Jones closes out by emphasizing that the Turing test doesn't just put the machines under the microscope — it also reflects humans' ever-evolving perceptions of technology. So the results aren't static: perhaps as the public becomes more familiar with interacting with AIs, they'll get better at sniffing them out, too.
