Yes, facial recognition is fun. We use it to effortlessly unlock our phones, find out which Renaissance painting our faces most resemble, and make animated poop emojis grimace.
But there's a darker side, which you've probably been hearing about a lot more lately. Facial recognition technology facilitates government surveillance, can be used without subjects' explicit consent, and can even perpetuate racial discrimination and wrongful prosecution.
So far, companies developing and using facial recognition technology have had few rules to follow. But as ethical concerns pile up, some governments have begun to pass laws to rein the technology in. The European Union has enacted far-reaching privacy regulations that require companies to get explicit consent from people whose faces they scan, with significant fines for those that don't comply. In the U.S., however, lawmakers have passed no equivalent regulation.
Microsoft, one of the biggest names in facial recognition technology, thinks that's a mistake.
Last week, in an unprecedented move, Microsoft president Bradford L. Smith called for facial recognition regulations in the U.S. "We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology," Smith wrote in a blog post.
Smith isn't just concerned about how the tech sector uses facial recognition; regulation would also keep the government's own use in check. “The only effective way to manage the use of technology by a government is for the government proactively to manage this use itself,” Smith wrote.
Smith is the most recent voice in a growing chorus against the problematic ways facial recognition is being used:
- At Google: Google employees signed a public letter demanding CEO Sundar Pichai shut down Project Maven, a Department of Defense contract to create a “customized AI surveillance engine” that would allow military drones to use facial recognition software to identify threats.
- At Microsoft: Microsoft employees delivered their own letter denouncing the company's contract with the controversial Immigration and Customs Enforcement (ICE). While CEO Satya Nadella downplayed the company's relationship with the agency largely responsible for separating migrant children from their families, employees were rightfully worried that Microsoft's facial recognition technology could one day end up in its hands.
- At Amazon: Amazon employees watched in horror as U.S. government authorities forcibly separated migrant children from their parents, and called for CEO Jeff Bezos to stop selling the company's Rekognition software to law enforcement agencies.
There are, as Slate notes, cynical ways to read Smith's post. "Microsoft’s largest tech rivals—Apple, Google, Amazon, and Facebook—all use face recognition in various forms and are leaders in developing the technology. Microsoft may reckon that taking a stand for regulation would serve its interests better than continuing to compete on an unregulated playing field," Slate's senior tech writer Will Oremus points out.
Even so, the more earnest reading, that lawmakers can no longer ignore these concerns, shouldn't be dismissed out of hand. We have already seen facial recognition technology incorrectly flag thousands of innocent people, and scan countless others at U.S. borders and airports without the option to opt out. Even Facebook was caught scanning users' faces without explicit consent just as the EU's General Data Protection Regulation (GDPR) was about to come into effect. All of these actions set a dangerous precedent, and they could pose a threat to personal freedom if left completely unregulated.
Without sufficient oversight, border control and law enforcement agencies could easily be misled by subpar facial recognition software. That's not a far-fetched scenario: when the South Wales police force used facial recognition to identify suspicious subjects at a soccer match, a whopping 87.5 percent of the matches were false positives. And without due process or regulated appeal procedures for the wrongly accused, innocent people could spend years behind bars as a result.
If the U.S. government doesn't intervene in a meaningful way, Silicon Valley will be forced to navigate a conundrum: throw its values out the window ("Don't Be Evil" be damned) or regulate itself. Neither option is particularly desirable, because private companies will always have their own agendas, whether those serve the greater good or the bottom line.
But at least Microsoft is setting the tone by exerting some pressure on regulators and taking a critical look at a technology that has "broad societal ramifications and potential for abuse," as Smith puts it. As the sentiment spreads throughout Silicon Valley, regulators may have no choice but to intervene, ensuring our personal freedoms and safety in the process.
Read more about facial recognition technology: New Facial Recognition Software Tracks and Protects Endangered Primates