Today’s artificial intelligence algorithms are unfeeling tools that can automate various jobs. It’s possible, though, that someday we’ll figure out how to build AI with something resembling a conscious experience.
In preparation for that day, a pair of philosophy professors from Northeastern University and UC Riverside are arguing that we need to lay out ground rules now — suggesting in an article for Aeon that AI algorithms may someday deserve the same ethical treatment as animals.
As it stands, AI is nowhere near advanced enough to experience anything, including suffering. Kicking your Roomba won’t do anything other than maybe put you in the market for a new vacuum; insulting Alexa won’t make the smart assistant resent you.
But given humanity’s poor track record of protecting animal and human welfare in scientific research, it may make sense to prepare in case AI ever reaches that level. That’s why the two professors call for new oversight committees to evaluate the ethical risks of AI research as the field develops.
“In the case of research on animals and even on human subjects, appropriate protections were established only after serious ethical transgressions came to light (for example, in needless vivisections, the Nazi medical war crimes, and the Tuskegee syphilis study),” the professors write. “With AI, we have a chance to do better.”
READ MORE: AIs should have the same ethical protections as animals [Aeon]
More on consciousness: Artificial Consciousness: How To Give A Robot A Soul