There’s tons of enthusiasm over self-driving cars. But nearly twenty years and some $100 billion since the first demos, the technology just isn’t there yet. In fact, it may be further off from being fully — and safely — autonomous than we’re led to believe.
"You’d be hard-pressed to find another industry that’s invested so many dollars in R&D and that has delivered so little," said self-driving pioneer Anthony Levandowski, in a must-read new interview with Bloomberg.
As the cofounder of Google’s self-driving division, Levandowski is acknowledged by his peers as one of the key engineers who got the industry up and running. But in recent years, he's become persona non grata in the wake of a calamitous lawsuit that virtually ended his career (more on that later).
And he's not wrong. In spite of the eye-watering sums spent on development over the past decade — not to mention the almost comically enthusiastic support of Tesla CEO Elon Musk — the internet is still frequently horrified by footage of the tech going haywire, screwing up, or facing yet another investigation from the government.
The problem? The industry still amounts to little more than a bunch of glorified tech demos, according to Levandowski.
"It’s an illusion," he told Bloomberg.
In demos, you see what the creators want you to see, and they control for anything they'd rather you didn't. To make it all seem high-tech, monitors show what the car registers through its cameras, flashing with symbols and polygons to prove that yes, the vehicle has some awareness of its surroundings.
What the demos won't show you, as Bloomberg reports, is a hilariously banal and longstanding problem for the tech: the fearsome left turn, or the "unprotected left turn," as the industry insists on calling it. Basically, the elementary move of cutting left across traffic when there’s no light to make it easy has proved consistently difficult for AI drivers.
One notable incident Bloomberg cites is when Cruise LLC, a subsidiary of General Motors, recalled and updated the software of all of its self-driving cars in September after one of them couldn’t properly pull off a left turn and crashed, injuring two people.
With an AI — especially one that has to drive a two-ton vehicle — you can’t safely assume that because it drove fine one time, it will in the future. To humans who’ve spent years growing up in the physical world with all of its structure and chaos, slight changes in the environment are the norm. Most of the time, we barely register them consciously and instead instinctively know whether to acknowledge or ignore them.
To an AI, a slight change could be catastrophic. After all, how is it supposed to know the appropriate response to a slight or sudden change when it doesn’t understand everything it’s looking at? How will it cope when it's overcast, when there are creatures teetering at the edge of the road, or when harmless birds plop down on the asphalt ahead and there’s traffic behind?
"Why are we driving around, testing technology and creating additional risks, without actually delivering anything of value?" Levandowski asked in the interview.
Of course, if anyone is going to have a gripe against the self-driving-car-o-sphere, it’d be Levandowski.
He's credited by some for kick-starting the industry with his 2008 demonstration of a self-driving car delivering a pizza across the city of San Francisco, with a police escort in tow. That stunt demonstrated to the business world that the technology wasn’t just a pipe dream, eventually leading to Levandowski co-founding Google’s self-driving program in 2009, which today is known as Waymo.
But when Levandowski left Google and started working with Uber in 2016, things got dicey. By the next year, Levandowski and Uber were being sued by Google, which alleged that Levandowski had stolen trade secrets to use in Uber’s program. Then he was dumped by his new employers, forced into bankruptcy, and miraculously pardoned by then-president Donald Trump, sparing him federal prison.
Still, he’s not alone in his thinking.
"It’s a scam," George Hotz, founder of the open source assisted driving company Comma.ai, told Bloomberg. "These companies have squandered billions of dollars."
Pretty much the entire industry hinges on the premise that humans are bad drivers and that self-driving cars will make our roads safer. But as Hotz points out, compared to an AI's ability to intuit its surroundings, humans are actually really good drivers.
Anecdotally and statistically, there are countless accounts of self-driving cars getting into accidents — some fatal — or bumbling into traffic contretemps that get laughed at online. Above all, there's no clear indication that self-driving will deliver on its safety premise anytime soon.
Until we get there, or ditch the tech altogether, maybe we could at least require driving tests for self-driving cars.
More on self-driving: Drivers Sue Tesla Because Full Self-Driving Isn't Actually Full Self-Driving