BIG WAVES WERE MADE earlier this year when a former Google engineer, Blake Lemoine, told a reporter at the Washington Post that a Google language-modeling AI — a chatbot called LaMDA — was sentient. Google disputed the claims, ultimately firing Lemoine, but not before the engineer's testimony sent the question of AI sentience and the ethics of language modeling programs ricocheting through public discourse.

"I know a person when I talk to it," the engineer told the Post, who broke the story back in June. "It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person."

But here's the thing: Artificial intelligence isn't yet (and likely won't ever be) conscious. That's true despite the fact that a lot of people — especially those building the 'intelligent' machines — just want it to be. So why do so many want to believe?

As Cade Metz wrote for The New York Times, many in the AI industry hold beliefs similar to Lemoine's. One prominent inventor, Philip Bosua, told the Times he believes OpenAI's GPT-3 (another language modeling system, like Google's LaMDA) is also sentient. Yet another said that though he thinks GPT-3's intelligence is somewhat "alien," it "still counts." There's a clear, wide gap between those who think the machine is alive and the simple computer science backing those who say otherwise. The reasons for it might not be readily evident, but a bridge between the two, demonstrating just how a person crosses the threshold from non-believer to believer, has actually existed for decades.

Back in the 1960s, an MIT researcher named Joseph Weizenbaum developed an automated psychiatrist dubbed Eliza. Compared to the technologies of today, Eliza, an early chatbot, was extraordinarily simple — it mostly mirrored the words it was fed back at the user as questions, or asked "patients" to expand on their own thoughts.
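To get a sense of just how little machinery that takes, here is a minimal, hypothetical Python sketch of the pattern-match-and-reflect trick an Eliza-style bot relies on. It is not Weizenbaum's original code; the rules, wording, and function names are invented for the example.

```python
import re

# A tiny Eliza-style responder. Not Weizenbaum's original program, just an
# illustration of the pattern-match-and-reflect trick it relied on; the
# rules below are invented for this example.

# Swap first-person words for second-person ones so statements can be
# mirrored back at the "patient."
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

# (pattern, response template) pairs, tried in order; the last rule catches
# anything nothing else matched.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
    (re.compile(r"(.*)", re.I), "Please tell me more about that."),
]


def reflect(fragment: str) -> str:
    """Flip pronouns so 'my job' becomes 'your job', and so on."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())


def respond(statement: str) -> str:
    """Return the first matching canned response, reflecting any captured text."""
    for pattern, template in RULES:
        match = pattern.match(statement.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."


if __name__ == "__main__":
    print(respond("I feel anxious about my job"))
    # prints: Why do you feel anxious about your job?
```

Feed it "I feel anxious about my job" and it answers "Why do you feel anxious about your job?" There's no understanding in there, just string substitution — and yet, for Weizenbaum's users, that was enough.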

While the machine clearly wasn't conscious, Weizenbaum discovered that people who used it treated it as if it were. They were willing to share deep secrets with it. They took comfort in the presumed wisdom it offered. They treated the machine as if it really were human — despite the fact that all Eliza was really doing, through human-made code, was reflecting human thoughts back at the person on the other side of the screen.

"I knew from long experience that the strong emotional ties many programmers have to their computers are often formed after only short experiences with machines," Weizenbaum wrote of the experience. "What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."

To err, after all, is human — and in this instance, painfully, existentially so. Humans, in their endless search for meaning, anthropomorphize non-human things constantly. The way we name cars. The way we name WiFi networks. The way we tell stories filled with animals and objects that think and act like humans — from The Brave Little Toaster to Her and back. And, even more obviously: Siri. Alexa. And so on. When that all-too-human tendency to anthropomorphize is applied to technologies, you get the Eliza effect: the tendency to read way too much into artificial intelligence, and to interpret its output as miraculously human, when really, it's an equation — and nothing more.

As Metz points out, the aspirational language assigned to AI technologies doesn't exactly help counter the Eliza effect. For example, calling a machine 'intelligent' insists that it is so, if artificially. And though intelligence — the capacity to gain and apply knowledge — isn't a synonym for sentience, the two are often conflated.

Take animals: The ones we usually hold in higher regard are those we consider more 'intelligent' than others, because they display some kind of behavior humans can identify with. They use tools (apes), they mourn their dead (elephants), they live in complex social groups (dolphins). And because humanity tends to measure intelligence strictly on its own terms, these creatures suddenly seem a bit less animal and a bit more human. In the case of artificial intelligence, a similar bias is literally built into the packaging (an important difference being that AI is still, you know, inanimate, while animals actually do think and feel, which is what sentience actually is).

"We call it 'artificial intelligence,' but a better name might be 'extracting statistical patterns from large data sets,'" Alison Gopnik, a Berkeley professor of psychology and a researcher in the university's AI department, explained to Metz.

"The computational capacities of current AI like the large language models," she added, "don't make it any more likely that they are sentient than that rocks or other machines are."

Of course, our proclivity to give these machines distinctly human features — be that names like Eliza (or, again: Alexa or Siri), human-ish voices, or physical attributes — can reaffirm that effect as well. It might be further argued that a version of the Eliza effect extends to technologies beyond AI. Social media algorithms, for example — especially when it comes to advertisements — are often treated as mysterious or magical, even extolled as godly or all-knowing. We talk about "the algo" like it knows something we don't, or everything we do.

But again, these algorithms didn't just happen. They were made by people. And given human information — about people, by people. In social media's case, that information is usually a bought-and-sold digital footprint. And the people doing the building and buying and selling also have names, which sometimes rhyme with Shmark Shmuckerberg.

All in all, Metz makes a compelling argument against AI's capacity for sentience. And as anyone concerned for the future would do well to remember: Whenever we give significant meaning to anything, we offer it significant power — and to that end, ascribing any machine any degree of true self-agency takes responsibility off the shoulders of whoever created it.

