"We have a new kind of agent in the world. It’s not an animal or a human; it’s a machine."
We’re entering an era in which the way software interacts with the world is so opaque and complex that researchers need to start studying algorithms the way we study animals.
That’s according to MIT technologist Iyad Rahwan in a compelling interview with New Scientist. Algorithms are sophisticated entities, Rahwan argues, and we’ll only be able to understand them if we watch how they behave in the context of their environment.
“We have a new kind of agent in the world. It’s not an animal or a human, it’s a machine,” Rahwan told New Scientist. “Today, only computer scientists explore the behaviours of algorithms. My team is studying machine behaviour as we would a human, animal, or corporation — in the wild.”
The problem, Rahwan told the magazine, is that you can see the stories that pop up in your Facebook News Feed, but you can’t access the algorithm that chose them. You can ride in an autonomous car, but you can’t review the code that told it how aggressively to shift lanes.
To date, Rahwan’s best-known work has been in creating AIs that probe ethical questions, like an algorithm that decides which pedestrian an autonomous car should kill and a “psychopath” AI that sees “violence and horror” in every image it evaluates.
But the New Scientist interview seems to be teasing something new: evaluating the behavior of existing AIs in the real world, like naturalists in an animal habitat. If Rahwan’s team can make inroads on that front, it could give us a new perspective on how algorithms affect people and society.
READ MORE: Why the quest for ethical AI is doomed to failure [New Scientist]
More on ethical AI: Crowdsourced Morality Could Determine the Ethics of Artificial Intelligence