Will hiring humans to verify Uber's drivers be enough to protect riders?
In December 2018, a man driving an authorized Uber vehicle picked up an intoxicated woman leaving a Christmas party, then brought her to his home and raped her. But the man, who The Age reports was sentenced to five and a half years in prison on Wednesday, was not an authorized Uber driver. Instead, he had easily fooled Uber's verification system by holding up a photo of a real driver.
In other words, the AI technology that Uber uses to verify that its drivers are who they claim to be (like Amazon delivery drivers, Uber contractors take a selfie when signing on) wasn't sophisticated enough to spot a printed headshot. It's a horrifying story that illustrates the perils of big tech offloading security to dodgy AI systems.
Uber told Business Insider that it deployed a fix in response to the December incident.
The company said it hired an undisclosed number of humans this year to review the driver-verifying selfies, and also rolled out improved AI.
Apparently, it works: The Age reports that drivers posting in online forums are talking less about operating scam rings and more about how difficult it is to get the system to approve a selfie.
But it all raises the question of why the improved security wasn't in place from the start; according to The Age, the unauthorized driver may have been fooling Uber's security protocols since 2016.
READ MORE: Uber introduces human reviewers to crack down on drivers evading security selfie system with printed photos after Australian rape case [Business Insider]
More on Uber drivers: Uber and Lyft Still Allow Racist Behavior, but Not as Much as Taxi Services