Should We Make An “AI Kill Switch” or Give Robots Rights?

What should the criteria for personhood be when bots look and act just like us?

5.7.17 by Kelsey R. Marquart

CrashCourse’s Hank Green poses an excellent question at the start of his video: what if your closest friends were a set of extremely advanced robots? How would you know? Hank introduces the Turing Test, an assessment of an artificial intelligence (AI) program in which a machine attempts to pass as human in conversation. If you spoke to a robot and could not tell it apart from a person, its AI would be considered exceptional.

Hank also spotlights opposing viewpoints, which argue that while AI may be able to fool us into believing it is human, it may never truly encapsulate human thought — after all, humans themselves have yet to fully understand consciousness.

What do you think? If it talks like a human, walks like a human, behaves like a human, and feels like a human…is it human? Do differences in biology matter if the workings of the mind are the same? Many question whether AI will need rights, and what those rights would look like. These are questions we must turn our attention to.
