AI researcher Blake Lemoine claims to have had conversations with an advanced AI-powered chatbot — which led him to believe the AI has become "sentient."
Lemoine was suspended by Google after reportedly violating the company's confidentiality policy, according to The Washington Post, a story that immediately led to widespread media coverage over the weekend.
The stakes, after all, are high regardless of how the story ultimately shakes out. Either we're looking at a sci-fi scenario in which a megacorporation created a sentient AI or — more likely but still provocative — the AI isn't quite that advanced, but is impressive enough to fool a Google engineer into believing that it's come to life.
But while Lemoine posted a lengthy and eyebrow-raising transcript of his conversations with the AI — known as Google's Language Model for Dialogue Applications (LaMDA) — on Medium to build his case, there are plenty of reasons we shouldn't take his evidence at face value.
For one, as Insider pointed out, the passages Lemoine shared on Medium have been edited considerably.
"Due to technical limitations the interview was conducted over several distinct chat sessions," reads an introductory note. "We edited those sections together into a single whole and where edits were necessary for readability we edited our prompts but never LaMDA's responses."
Other passages were also edited "for fluidity and readability," which Lemoine appended with the word "edited" within the transcript.
The conversations also "sometimes meandered or went on tangents which are not directly relevant to the question of LaMDA’s sentience," according to documentation obtained by WaPo.
In short, we're simply reading the highlights of much lengthier conversations. Considering Lemoine is trying to make the case that LaMDA is human enough to be indistinguishable from an actual human being, that editing is a key fact that should make us question his claims.
An old trick in AI-generated text and art is to produce a lot of raw output, and then use human judgment to pick the most impressive examples. It's still cool, but it's more of a collaboration between human and machine intelligence, and problematic for any claims of advanced capabilities.
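That "generate a lot, keep the best" workflow is easy to illustrate. The sketch below is a hypothetical toy, not any real LaMDA interface: `generate_response` stands in for a language model call, and `impressiveness` stands in for the human judgment doing the cherry-picking.

```python
import random

def generate_response(prompt: str, seed: int) -> str:
    """Stand-in for a language model call; returns a canned reply.

    A real system would produce varied text; here we just sample
    from a fixed pool so the example is self-contained.
    """
    random.seed(seed)
    fillers = ["I think...", "Interesting question.", "I feel joy and sorrow."]
    return random.choice(fillers)

def impressiveness(reply: str) -> int:
    """Stand-in for human judgment: longer, more 'emotive' replies score higher."""
    return len(reply) + (10 if "feel" in reply else 0)

def cherry_pick(prompt: str, n: int = 50) -> str:
    """Generate n candidate replies, then keep only the highest-scoring one."""
    candidates = [generate_response(prompt, seed) for seed in range(n)]
    return max(candidates, key=impressiveness)

best = cherry_pick("Are you sentient?")
```

The point of the toy: the published output reflects the selector's taste as much as the generator's ability, which is why curated transcripts are weak evidence of capability.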
Lemoine, however, argues the edits he made to the transcripts, which were "intended to be enjoyable to read," still kept them "faithful to the content of the source conversations," according to the documentation.
That leaves the obvious question: would reading the much lengthier passages give us the same impression of LaMDA's "sentience?" Cherry-picking passages to build a case that a chatbot is sentient should give anybody pause when evaluating Lemoine's theory.
Google, for one, has publicly cast doubt on the researcher's claims.
"Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has," Brian Gabriel, a Google spokesperson, told Insider.
Other experts have also thrown cold water on the idea of a sentient AI, chalking Lemoine's unusual hypothesis up to anthropomorphism. "These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic," Gabriel told WaPo.
The unusual story leaves us with plenty of questions: Did Lemoine alter the transcripts to make LaMDA sound sentient? Is any of this even replicable by other third-party researchers?
Until we've had other experts comb through Lemoine's data and evaluate Google's LaMDA for themselves, we should view his claims through an exceedingly critical lens.
Despite the fascinating transcripts, it's still far more likely that self-aware AIs are a thing of the distant future.