Despite conflicting opinions, we are getting closer to fully autonomous vehicles. On one side of the debate, Elon Musk boldly announced that every Tesla would be fully autonomous by 2017. On the other, Gill Pratt, head of the Toyota Research Institute, asserts that "we're not even close to Level 5 [autonomy]."

We are inching ever closer to literally putting our lives in the hands of technology, yet we haven't resolved a vital moral dilemma: in the event of an emergency, whose lives should the car save? How will AI value one life above another? What ethical framework should be programmed into autonomous cars?

This new video by Veritasium explores these questions.

The video suggests that the real moral dilemma has to do with accidents happening right now.

More than 30,000 people die in traffic accidents each year in the US alone, and more than two million are injured. Despite warnings about the dangers of using mobile devices while driving, many still text and drive. In 2014, distracted driving resulted in 3,000 deaths and 430,000 injuries. Overall, ninety-four percent of collisions are caused by driver error. In short, the longer we wait to put self-driving vehicles on the road, the more people are going to die.

Autonomous driving technology is going to save lives. So maybe the real question should be: why aren't we doing more to get these vehicles on the road faster?

