As nice as it'd be to have the option of catching up on some reading — or sleep — while an autonomous vehicle drives you to work, the real draw of self-driving cars is the idea that they'll be safer drivers than whoever just cut you off in the exit lane with inches to spare. After all, if the vast majority of traffic accidents are caused by human error, taking humans out of the equation should save lives, right?

In theory, sure. But in practice? Only if we can build autonomous vehicles that are safer than, well, the average driver. And right now, the entire auto industry is chasing that same goal from countless directions, and no one even agrees on what the measure of success is, or should be.

To bring some order to this chaotic situation, a group of 11 companies, including Intel, Audi, and Volkswagen, teamed up to publish a white paper titled "Safety First for Automated Driving," an exhaustive guide to developing safe autonomous vehicles.

The 146-page document's centerpiece is a set of twelve guiding principles detailing the capabilities a self-driving car must have before it can be considered "safe." Here's a quick primer on each of them.

Safe Operation: An autonomous vehicle must be able to cope with the loss of any of its critical components.

Safety Layer: The self-driving car must know its own limits and understand when it's safe to return control to the human driver.

Operational Design Domain (ODD): The autonomous vehicle must be prepared to assess the risks of typical driving situations.

Behavior in Traffic: The car's behavior needs to be predictable to other drivers on the road, and it needs to act according to traffic rules.

User Responsibility: The vehicle needs to be able to recognize a driver's state of alertness and communicate to them any tasks for which they are responsible.

Vehicle-Initiated Handover: Autonomous vehicles must be able to let drivers know when they need to take over and make it easy for them to do so. If a takeover request is ignored, the vehicle also needs a way to cope with the situation while minimizing risk (see the sketch after this list).

Driver-Initiated Handover: The driver needs to have a way to explicitly ask to take over operation of the self-driving car.

Effects of Automation: An autonomous vehicle must account for how automation could affect the driver, even immediately after a period of automated driving ends.

Safety Assessment: There needs to be a consistent way to verify and validate the autonomous vehicle's ability to meet safety goals.

Data Recording: If the self-driving car recognizes an event or incident, it needs to be able to record relevant data in a way that doesn't violate applicable data privacy laws.

Security: Safe autonomous vehicles will need to have some protection against security threats.

Passive Safety: The self-driving car needs to be prepared for any crash scenarios that might be unique to vehicle automation.
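The handover principles, in particular, describe a small protocol: request control, give the driver time to respond, and fall back to a minimal-risk maneuver if nobody does. Purely as an illustration (none of this code comes from the white paper, and every name and value here, like the 10-second grace period, is hypothetical), here's a toy sketch of that logic:

```python
from enum import Enum, auto

class Mode(Enum):
    AUTOMATED = auto()
    HANDOVER_REQUESTED = auto()
    DRIVER_IN_CONTROL = auto()
    MINIMAL_RISK_MANEUVER = auto()  # e.g., slow down and pull over safely

TAKEOVER_TIMEOUT_S = 10.0  # hypothetical grace period, not taken from the paper

class HandoverController:
    def __init__(self) -> None:
        self.mode = Mode.AUTOMATED
        self.request_elapsed = 0.0

    def request_takeover(self) -> None:
        """Vehicle-initiated handover: alert the driver and start a timer."""
        self.mode = Mode.HANDOVER_REQUESTED
        self.request_elapsed = 0.0

    def tick(self, dt: float, driver_is_responsive: bool) -> None:
        """Called every control cycle while a handover request is pending."""
        if self.mode is not Mode.HANDOVER_REQUESTED:
            return
        if driver_is_responsive:
            # Driver responded in time: hand over control.
            self.mode = Mode.DRIVER_IN_CONTROL
        else:
            self.request_elapsed += dt
            if self.request_elapsed >= TAKEOVER_TIMEOUT_S:
                # Request ignored: the car must cope on its own
                # while minimizing risk, per the white paper.
                self.mode = Mode.MINIMAL_RISK_MANEUVER
```

The point of the sketch is that an ignored takeover request is a planned-for state, not an error case: the car always has somewhere safe to go next.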

This all sounds well and good. Accomplishing most of these goals, let alone all twelve, is going to be another matter.

Notably, a few major companies and tech players are missing from the group that assembled this list (Tesla and Waymo among them). It's hard not to wonder why: maybe the paper's authors, all of whom seem to be trailing in the race for self-driving vehicles, are looking to establish common ground that edges out their behemoth competition (or maybe the absentees simply have other ideas about safety).

Whatever the case may be, if these principles ever become law, the autonomous road race won't be won by anyone who ignores them. In other words, consider this white paper just one more shot fired in the long battle for pole position.

READ MORE: 11 companies propose guiding principles for self-driving vehicles [VentureBeat]

More on autonomous vehicles: This Guide Could Dictate How Cops Handle Autonomous Car Crashes

