Artificially intelligent (AI) robots and automated systems are already transforming society in a host of ways. Cars are creeping closer to Level 5 autonomy, factories are cutting costs by replacing human workers with robots, and AIs are even outperforming people in a number of traditionally white-collar professions.
As these systems advance, so does the potential for their involvement in criminal activity, and right now, no regulations specify how the law should treat super-intelligent synthetic entities. Who takes the blame if a robot causes an accident or is implicated in a crime? What happens if a robot is the victim of a crime? Do self-aware robots deserve rights comparable to those given to human beings?
Before we can begin discussing robot rights, we need to articulate exactly what (or who?) counts in this equation, MIT Media Lab researcher and robot ethics expert Kate Darling told Futurism in an email. In other words, clearly defined terminology is a prerequisite for any productive conversation about robot rights.
“If we want to use legislation to regulate robotic technology, we’re going to need to establish better definitions than what we’re operating with today,” she said. “Even the word ‘robot’ doesn’t have a good universal definition right now.”
Eyes on Today
Now is the time to put these definitions in place because artificially intelligent robots are already in our midst.
Autonomous delivery robots are a common sight in the Estonian capital of Tallinn. As such, the country's government is being proactive with regard to robot regulations and legal recourse for questions of ownership and culpability.
“It all started out from the self-driving car taskforce,” Marten Kaevats, the national digital advisor for the government office of Estonia, told Futurism. “We quite soon discovered that these liability, integrity, and accountability issues are not specific to self-driving cars; they’re general AI questions.”
Kaevats is aware that any discussion of robots and AI can quickly devolve into talk of the singularity and superintelligence, but that’s not the focus right now. “We are trying to work on things that are actually already here,” he explained.
Still, Estonia is looking to put legislation in place that has the flexibility to respond to advances in technology. Kaevats acknowledges that it’s not possible to create regulations that are completely future-proof, but he sees a pressing need for laws that offer certain rights alongside certain liabilities.
As Kaevats pointed out, self-aware artificial intelligences are so far off that there's no reason to rush to give robots rights similar to those of humans. In addition to considering the ethical ramifications of putting machines on par with humans, we need to examine how such laws might be open to abuse before regulations are established.
Production Line Patsy
Estonia isn’t the only place where conversations on robot rights are happening.
The journal Artificial Intelligence and Law recently published an article by University of Bath reader Joanna J. Bryson and academic lawyers Mihailis E. Diamantis and Thomas D. Grant. In the paper, the authors state that proposals for synthetic personhood are already being discussed by the European Union and that the legal framework to do so is already in place. The authors stress the importance of giving artificially intelligent beings obligations as well as protections, so as to remove their potential as a “liability shield.”
But granting them full rights?
When Bryson spoke to Futurism, she warned against the establishment of robot rights, relating the situation to the way the legal personhood of corporations has been abused in the past.
“Corporations are legal persons, but it’s a legal fiction. It would be a similar legal fiction to make AI a legal person,” said Bryson. “What we need to do is roll back, if anything, the overextension of legal personhood — not roll it forward into machines. It doesn’t generate any benefits; it only encourages people to obfuscate their AI.”
Bryson offered the example of a driverless taxi that could be made fully independent of its owner or manufacturer, serving as a legally recognized individual and fulfilling its own contracts. Whoever receives the profits could manipulate this arrangement to reduce the taxes paid on the vehicle's earnings.
Kaevats said that this won’t be a problem in Estonia — the country’s digital tax system is proactive enough to track any malicious activity. However, the potential for abuse certainly exists in regions with less technologically advanced tax codes.
Corporations already use the letter of the law to withhold as much wealth as possible. Using a synthetic person as a "fall guy" for illicit activity isn't outside the realm of possibility, and giving a robot rights could emancipate it from conventional ownership. At that point, the entity becomes the ultimate independent contractor, and companies could absolve themselves of wrongdoing even if they instructed the machine to act illegally.
Legislation could certainly be written to avoid these pitfalls, though; policymakers just need to ensure that any rights given to synthetic entities don't include loopholes that can be abused.
In the far more distant future, we’ll need to consider the issue of self-aware robots. How should we tackle synthetic personhood for those entities?
“If we discover that there are certain capacities that we want to create in artificial intelligence, and once you create those, you spontaneously get these cognitive features that warrant personhood, we’ll have to have this discussion about how similar they are to the human consciousness,” James Hughes, executive director of the Institute for Ethics and Emerging Technologies, told Futurism.
The creation of this level of technology won’t be happening anytime soon, if it happens at all, but its potential raises some thorny issues about our obligation to synthetic beings and the evolving nature of personhood.
“Traditionally, under the law, you’re either a person or you are property — and the problem with being property is that you have no rights,” bioethicist and attorney-at-law Linda McDonald-Glenn told Futurism. “In the past, we’ve made some pretty awful mistakes.”
According to Hughes, this situation calls for a test that determines whether or not a synthetic person is self-aware. In the meantime, Estonia has found a fairly simple way to determine the rights of its robots. Instead of using technology as the defining factor, the nation will grant rights based on registration under the mythologically inspired Kratt law.
Estonian folklore states that the Kratt is an inanimate object brought to life, just as artificial intelligence can give a machine the cognitive abilities it needs to complete a particular task. The Kratt law will determine what level of sophistication a robot needs to possess in order to be considered its own legal entity.
“This is what we want our governments to do,” said Bryson, praising European efforts to put well-thought-out legislation in place. “This is what governments are for and what law is for.”
In many ways, AI technology is still very young, but there's no better time than now to start thinking about the legal and ethical implications of its use.