Popular media frequently warn us about an impending robot revolution. A film or television show begins with a few guffaws about how "silly" artificially intelligent robots are, but ends on a more somber note. "Ex Machina," "The Terminator," and "Westworld" are all terrific examples of humans dismissing the possibility of machine sentience when they first encounter robots. But what if we dismiss it in real life? Should we acknowledge it right up front?

Some people believe robots will never truly achieve consciousness because humans don't even understand it ourselves. Our concept of "human rights" is a largely philosophical one, built on the capacity for pain and suffering. Robots do not normally need to be programmed to feel pain in order to carry out their functions, so, the argument goes, the point is moot.

The other side of the argument is that our species evolved to feel pain for our own benefit: if we know fire hurts when we touch it, we won't touch it. An advanced AI might likewise program pain into itself to achieve a higher level of self-awareness. At that point, denying robots rights becomes simply a matter of economics, the same as when factions of humanity have denied such rights to other humans and to animals throughout our history.

The question of machine rights is already surfacing in debates over privacy and over what threshold of consciousness should qualify, but the idea of human exceptionalism is worth considering. We don't want our species to go extinct, and the rights we decide to grant to other beings can have a direct effect on our own survival.