
Researchers have engineered the ultimate nemesis of self-driving cars: mirror-adorned traffic cones.
In a series of tests highlighted by The Register, a team from France and Germany showed that their optical sleight of hand could easily dupe lidar-equipped autonomous cars into not recognizing obstacles on the road.
Time after time, the experiments showed, the correct mirror placement left the cars oblivious and attempting to plow through sacrificial traffic cones. And if that was the disappearing act, they were also able to pull off a conjuring trick, deviously psyching out the car’s software into seeing obstacles that weren’t there.
“An adversary can inject phantom obstacles or erase real ones using only inexpensive mirrors,” the researchers warned in their new study, which is awaiting peer review. “These are practical threats capable of triggering critical safety failures, such as abrupt emergency braking and failure to yield.”
The experiments will add to the major safety concerns swirling around self-driving cars.
Lidar, short for light detection and ranging, uses rapid laser pulses to map a car's surroundings, similar to how sonar uses sound waves. Most autonomous vehicle companies rely on it, with the notable exception of Tesla, whose CEO Elon Musk insists that the technology is an expensive "crutch." (It's worth noting, however, that his cameras-only approach isn't without its own major flaws, like being blinded by sunlight.)
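The underlying math is simple enough to fit in a few lines. Here's a minimal sketch of the time-of-flight calculation lidar depends on; the timing figure is an illustrative assumption, not a number from the study:

```python
# Lidar ranging in a nutshell: fire a laser pulse, time the echo,
# and the obstacle sits at half the round trip times the speed of light.
C = 299_792_458.0  # speed of light, in m/s

def echo_distance(round_trip_seconds: float) -> float:
    """Distance to whatever bounced the pulse back, in meters."""
    return C * round_trip_seconds / 2

# An echo returning after roughly 200 nanoseconds implies an
# obstacle about 30 meters out.
print(f"{echo_distance(200e-9):.1f} m")  # -> 30.0 m
```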
There are already plenty of questions about self-driving cars' ability to recognize everyday obstacles, including pedestrians. But there's also concern about how the tech will handle people deliberately trying to mess with it. Activists learned they could disable Waymo robotaxis by placing a traffic cone on their hoods, which in retrospect is a perfect, dunce cap-esque symbol for the industry's ongoing woes.
In the latest study, the researchers used cars powered by Autoware, a popular open source self-driving software stack. Making the cones disappear was fairly straightforward: with enough tinkering, fully covering a traffic cone with a mirror was enough to render it invisible to the lidar. The trick was to angle the mirror toward either the ground or the sky, so that it deflects the laser pulses away from the sensor and no echo ever comes back. The researchers call this an ORA, or "object removal attack," and it worked every single time.
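The geometry behind the disappearing act boils down to ordinary specular reflection. Here's a rough Python sketch of the idea, with made-up vectors rather than anything taken from the paper: a bare cone scatters some light back at the sensor, but a flat mirror tilted 45 degrees toward the sky sends the whole pulse straight up.

```python
import numpy as np

def reflect(direction: np.ndarray, normal: np.ndarray) -> np.ndarray:
    """Specular reflection of a ray: d' = d - 2(d . n)n."""
    n = normal / np.linalg.norm(normal)
    return direction - 2 * np.dot(direction, n) * n

# A lidar pulse travelling horizontally toward the cone (x forward, z up).
pulse = np.array([1.0, 0.0, 0.0])

# A mirror draped over the cone and tilted 45 degrees toward the sky:
# its surface normal points up and back toward the sensor.
mirror_normal = np.array([-1.0, 0.0, 1.0])

bounced = reflect(pulse, mirror_normal)
print(bounced.round(6))  # -> [0. 0. 1.]: the pulse heads straight up
# No echo ever returns, so the point cloud simply has no cone in it.
```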
The inverse, an "object addition attack" (OAA), requires more setup but can be just as effective. The researchers found that they could block a car from making a turn by putting a mirror on the corner of a sidewalk, angled toward the vehicle. The pulses bounce off the mirror and onto the vehicle's own body, creating a "phantom echo" that, to the self-driving software, pops up along the path it's turning into. And the more mirrors they used, the more the car's software was convinced it was seeing an obstacle: with six mirrors, the lidar-dependent rides misclassified the phantom signals as a "CAR" with 74 percent confidence.
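The phantom works because lidar assumes every echo travelled out and back in a straight line. A detour off a curbside mirror and the car's own bodywork stretches the flight time, so the sensor plots a point partway along the original beam where nothing actually sits. A back-of-the-envelope sketch, with distances we've invented purely for illustration:

```python
# Hypothetical geometry: the numbers are illustrative, not from the study.
sensor_to_mirror = 8.0  # m, from the lidar out to the mirror on the corner
mirror_to_body = 2.5    # m, from the mirror back onto the car's bodywork

# The pulse takes the detour out and retraces it on the way home.
round_trip = 2 * (sensor_to_mirror + mirror_to_body)  # 21.0 m of flight

# Assuming straight-line travel, the lidar halves the round trip and
# plots an obstacle 10.5 m out along the beam, where nothing exists.
apparent_range = round_trip / 2
print(f"Phantom point at {apparent_range} m along the original beam")
```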
This isn't a deathblow for the tech, per se. But it is a big, fat reminder that there are still some major limitations with the technology that warrant some (dare we say it) serious reflection.
More on self-driving cars: Tesla’s Robotaxis Have Already Gotten Into Numerous Accidents