Should We Make An “AI Kill Switch” or Give Robots Rights?
What should the criteria for personhood be when bots look and act just like us?
CrashCourse’s Hank poses an excellent question at the start of his video: what if your closest friends were a set of extremely advanced robots? How would you know? Hank introduces the Turing Test, an assessment of whether an artificial intelligence (AI) program can exhibit behavior indistinguishable from a human’s. If you spoke to a robot and were unable to tell it apart from a human, the program would pass the test.
Hank also spotlights opposing viewpoints, which argue that while AI may be able to fool us into believing it is human, it may never truly encapsulate human thought, because humans themselves have yet to fully understand consciousness.
What do you think? If it talks like a human, walks like a human, behaves like a human, and feels like a human…is it human? Do differences in biology matter when the workings of the mind are the same? Many question whether AI will need rights, and what those rights would look like. It is a question we must turn our attention to.