Back in June, former Google engineer Blake Lemoine decided to go on record with The Washington Post to make an extraordinary claim: that an experimental Google chatbot called LaMDA had become, in his opinion, sentient. A mind-bending he-said, she-said saga followed — at one point, the AI had even reportedly hired a lawyer. In the end, the company fired Lemoine.
But now, in a new interview with The Guardian, Lemoine — who has maintained that his beliefs about LaMDA are based in religion, not science — is claiming that he never set out to convince the public of LaMDA's sentience.
Rather, he now says, his intent was just to raise awareness about advanced AI technologies, regardless of any perceived sentience — and how little say the public has had in their development.
"I raised this as a concern about the degree to which power is being centralized in the hands of a few, and powerful AI technology which will influence people's lives is being held behind closed doors," he told the paper. "There is this major technology that has the chance of influencing human history for the next century, and the public is being cut out of the conversation about how it should be developed."
The reversal does raise the question of whether Lemoine's conviction has been shaken by his many critics. Supporting the reading that he was quite serious about the sentience claims while still at Google is a new revelation from the Guardian interview: that for several weeks before he decided to go to the press, he repeatedly urged the tech giant to run some experimental sentience "tests" on the chatbot. Google — which has maintained throughout the saga that Lemoine's claims are unfounded — refused, according to the fired engineer.
Upon Google's refusal to develop those tests, Lemoine told the Guardian that he felt he had no choice but to go public. Not necessarily to spread the gospel of sentience, as he now tells it, but to blow the whistle on a powerful industry with the capacity to make big — and in some cases, maybe not so great — changes to our everyday existence.
Of course, part of the issue with asking Google to run sentience tests is that none yet exist. Sentience has a dictionary definition, sure, but it's still an ethereal concept — neither philosophy nor science has any firm grasp on how, exactly, to define it.
"It's a very vague concept in science generally," Michael Wooldridge, a professor of computer science at the University of Oxford Wooldridge, told the Guardian.
Though Wooldridge told the paper that he doesn't think LaMDA is sentient, he did agree that the AI industry has a wider problem with what he calls "moving goalposts" — in short, the reality that not even the people building these systems have a firm grasp on how a lot of AI algorithms actually function, nor is there a reliable way to measure their efficacy.
"I think that is a legitimate concern at the present time," he added, "how to quantify what we've got and know how advanced it is."
Google has repeatedly defended LaMDA, arguing that the tech is safe and will ultimately be applied in a number of useful and necessary ways.
But while these systems certainly have some practical uses, the potential for misuse seems undeniable. After all, they literally talk back to us. That makes them easy to anthropomorphize, and thus form influential connections with — connections that, some researchers believe, may be easily exploited.
"I worry that chatbots will prey on people," Margaret Mitchell, a former AI ethics researcher at Google, recently told The New York Times. "They have the power to persuade us what to believe and what to do."
In the end, no one really knows for sure either way — not the people building them, and definitely not the public. And sentience or no sentience, that seems to be Lemoine's point, at least these days. About LaMDA, of course, but about the greater landscape of AI development as well.
"What I'm trying to achieve is getting a more involved, more informed and more intentional public discourse about this topic," Lemoine continued to the Guardian, "so that the public can decide how AI should be meaningfully integrated into our lives."
"We have one possible world in which I'm correct about LaMDA being sentient, and one possible world where I'm incorrect about it," he added. "Does that change anything about the public safety concerns I'm raising?"
Sure. All fair. Still, while it's hard to disagree with a lot of these new statements, it also seems like Lemoine is attempting to rewrite pretty much everything he's previously said — which makes it equally hard to take any of it seriously. Plus, well, Lemoine's been known to cherry-pick.
READ MORE: 'I Am, in Fact, a Person': Can Artificial Intelligence Ever Be Sentient? [The Guardian]
More on LaMDA: Google Engineer Says Lawyer Hired by "Sentient" AI Has Been "Scared Off" the Case