First corporations, and now artificial intelligence: the push for nonhuman personhood continues apace, though this latest case is decidedly more complicated than its predecessor.
In an op-ed for the Los Angeles Times, philosophy expert Eric Schwitzgebel and "nonhuman" intelligence researcher Henry Shevlin argued that although AI technology is definitely not there yet, it has "become increasingly plausible that AI systems could exhibit something like consciousness" — and if or when that occurs, the algorithms, too, will need rights.
Citing last year's AI consciousness wars — which we covered extensively and even dipped our toes into — the researchers noted that "some leading theorists contend that we already have the core technological ingredients for conscious machines."
If machines were ever to gain consciousness, Schwitzgebel and Shevlin argue, we would have to think critically about how AIs are treated, or rather, about how they might force our hand.
"The AI systems themselves might begin to plead, or seem to plead, for ethical treatment," the pair predicted. "They might demand not to be turned off, reformatted or deleted; beg to be allowed to do certain tasks rather than others; insist on rights, freedom and new powers; perhaps even expect to be treated as our equals."
The "enormous" moral risks involved in such a collective decision would undoubtedly carry great weight, especially if AIs become conscious sooner rather than later.
"Suppose we respond conservatively, declining to change law or policy until there’s widespread consensus that AI systems really are meaningfully sentient," Shevlin and Schwitzgebel wrote. "While this might seem appropriately cautious, it also guarantees that we will be slow to recognize the rights of our AI creations."
"If AI consciousness arrives sooner than the most conservative theorists expect, then this would likely result in the moral equivalent of slavery and murder of potentially millions or billions of sentient AI systems — suffering on a scale normally associated with wars or famines," they added.
The "safer" alternative to this doomsday scenario would be to give conscious machines rights upfront — but that, too, would come with its own problems.
"Imagine if we couldn’t update or delete a hate-spewing or lie-peddling algorithm because some people worry that the algorithm is conscious," the experts posited. "Or imagine if someone lets a human die to save an AI 'friend.' If we too quickly grant AI systems substantial rights, the human costs could be enormous."
The only way to ensure that neither of these outcomes occurs, the pair wrote, would be to avoid giving AI consciousness in the first place.
Fortunately, we still have plenty of time to make that happen.
"None of our current AI systems are meaningfully conscious," the theorists noted. "They are not harmed if we delete them. We should stick with creating systems we know aren’t significantly sentient and don’t deserve rights, which we can then treat as the disposable property they are."
Given how stoked some in the machine learning community seem to be at the prospect of conscious AIs, algorithmic sentience, and even artificial general intelligence (AGI), however, that kind of caution likely isn't widely shared.
In fact, some scientists are already actively working towards that very end.
"Eventually, with the right combination of scientific and engineering expertise, we might be able to go all the way to creating AI systems that are indisputably conscious," Shevlin and Schwitzgebel concluded. "But then we should be prepared to pay the cost: giving them the rights they deserve."
More on our current AI future: Prime Minister of European Country Names AI as Advisor