More and more people are saying this.

It's Alive!

One of the world's foremost philosophers of artificial intelligence is arguing that some chatbots might exhibit glimpses of sentience — but that doesn't necessarily mean what you think it means.

In an interview with the New York Times, Oxford academic Nick Bostrom said that rather than viewing sentience as all-or-nothing, he thinks of it in terms of degrees, and when that framework is applied to the rapidly advancing world of AI, things start to look different.

"I would be quite willing to ascribe very small amounts of degree to a wide range of systems, including animals," Bostrom, the director of Oxford's Future of Humanity Institute, told the NYT. "If you admit that it’s not an all-or-nothing thing, then it’s not so dramatic to say that some of these [AI] assistants might plausibly be candidates for having some degrees of sentience."

Justice League

While there's been ample derision for those who have suggested that AIs may be getting a little bit sentient, including ex-Googler Blake Lemoine and OpenAI's Ilya Sutskever, Bostrom said that insisting on the opposite fails to do justice to how capable these chatbots really are.

"I would say with these large language models [LLMs], I also think it’s not doing them justice to say they’re simply regurgitating text," Bostrom said. "They exhibit glimpses of creativity, insight and understanding that are quite impressive and may show the rudiments of reasoning."

What's more, the Sweden-born philosopher said that LLMs "may soon develop a conception of self as persisting through time, reflect on desires, and socially interact and form relationships with humans."

And if AIs do attain some degree of sentience, he added, it could completely change the game.

"If an AI showed signs of sentience, it plausibly would have some degree of moral status," Bostrom said. "This means there would be certain ways of treating it that would be wrong, just as it would be wrong to kick a dog or for medical researchers to perform surgery on a mouse without anesthetizing it."

While this line of reasoning is confounding and, per some detractors, uselessly premature, it's worth starting to think through now. After all, if someone who has spent his career studying these questions thinks we need to take the concept of AI sentience seriously, then maybe we should listen.

More on AI morality: Experts Urge Personhood Rights for the "Conscious" AIs of the Future
