The mystery of a John or Jane Doe is a classic staple of TV crime dramas and detective shows the world over. But if New York City's police department has its way, unidentified persons — living or dead — will be a thing of the past. Completely.

Right now, the NYPD's facial recognition systems can only identify someone who has already been arrested, because mugshots are the only photos in its database. But the Wall Street Journal reports that the NYPD's Real Time Crime Center and Facial Identification Section are seeking access to the state's database of driver's licenses, which would expand their facial recognition capabilities to cover millions of innocent drivers alongside the criminal mugshots.

But, get this: it's already happening in plenty of places. Georgetown University's Center on Privacy and Technology found in 2016 that at least half of American adults are already searchable by license photo, as 26 states allow police to search motor vehicle registries for matches. The center also found that 16 states let the FBI run the same searches.

That doesn't make it any less unsettling, of course. After all, other places where facial recognition's widely used (see: China) don't exactly have spotless reputations for protecting civil rights. But how bad would it actually be if cops could identify anyone they wanted, at any time, anywhere in the world?

The Good

Despite the Black Mirror-esque nature of the proposition here, there are potentially positive upshots to all this. Besides the obvious benefit of identifying suspects in active investigations, this could:

  • Assist lost dementia patients. Such a system could identify lost adults with cognitive disorders like Alzheimer's who might not know who they are or where they came from. The Wall Street Journal points out that this would have been useful in the case of a Staten Island woman with Alzheimer's whom police found in 2014, and who could only be identified thanks to the record of a minor traffic violation.
  • Identify the unidentified in hospitals. Facial recognition could help hospitals put names to unconscious patients who arrive without ID, as well as to John/Jane Doe cases. Reconnecting these individuals with their families could spare relatives the agony of wondering what happened to a missing loved one.
  • Spot criminals lying low. As The Verge notes, artificially intelligent systems in 43 states already help identify criminals living under assumed names by picking out duplicate faces across driver's licenses issued under different names in different states. If systems like these were tied into police and FBI databases, their powers would only expand; for example, they could spot criminals who were never licensed in their home state but live under a different name elsewhere. (A conceptual sketch of how this deduplication might work follows this list.)
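To make that idea concrete, here is a minimal, purely illustrative Python sketch of the deduplication logic such a system might use. Everything in it is an assumption: real systems derive face "embeddings" from trained neural networks, whereas the embed() placeholder below just normalizes raw pixels, and the 0.92 similarity threshold is arbitrary.

```python
# Illustrative sketch only: flag license records whose photos look alike
# but whose names differ (possible aliases). Not any vendor's actual system.
import numpy as np

def embed(photo: np.ndarray) -> np.ndarray:
    """Placeholder for a real face-embedding model (normally a trained
    neural network). Here we just flatten and unit-normalize the pixels."""
    v = photo.astype(float).ravel()
    return v / np.linalg.norm(v)

def find_duplicate_faces(records, threshold=0.92):
    """records: list of (name, photo) pairs. Returns pairs of records
    with different names whose photos exceed the similarity threshold."""
    embedded = [(name, embed(photo)) for name, photo in records]
    hits = []
    for i in range(len(embedded)):
        for j in range(i + 1, len(embedded)):
            name_a, vec_a = embedded[i]
            name_b, vec_b = embedded[j]
            similarity = float(vec_a @ vec_b)  # cosine similarity of unit vectors
            if name_a != name_b and similarity >= threshold:
                hits.append((name_a, name_b, similarity))
    return hits
```

The threshold is the whole ballgame: set it too low and unrelated strangers get flagged as aliases; set it too high and real duplicates slip through. That tradeoff is exactly where the trouble described in the next section begins.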

The Bad, Ugly, and Downright Dystopian

Of course, it wouldn't shock you to learn that the privacy implications of widespread facial recognition in policing are enormous.

  • Chilling free speech. In recent years, the FBI has copped to monitoring protests by groups like Black Lives Matter, along with parades, vigils, and various other (perfectly legal) public gatherings. And if you're even remotely familiar with the Bill of Rights, you know that the freedom to assemble peacefully is a fundamental, inalienable right. But what if law enforcement could identify any person at these events just by looking at them? Consider communities where the relationship with police is already strained: would people there still protest if they worried police would single them out for doing so?
  • Perpetuating racial bias. You would think a computer — which, by definition, is race-less — would have no chance of being racist towards any particular group! But you'd be wrong. Algorithms carry the biases of the people who build them and of the data used to train them, and facial recognition systems are no exception. A 2012 study found that six leading facial recognition algorithms were worse at correctly matching Black faces than white ones. Multiple other studies have replicated this, finding that algorithms work best on the kinds of faces they were trained on — which are most often white. And this bias has the potential to lead to...
  • Mistaken identity arrests. This one is the real nightmare: a false facial recognition match has the potential to send an innocent person to jail just because they look like someone else. Unsurprisingly, facial recognition companies claim this is unlikely; the Georgetown report notes that FaceFirst, which sells face recognition software to police, claims "an identification rate above 95%." Yet the report also found that this figure was a decade old and thus likely no longer valid. FaceFirst even puts language in some contracts protecting itself against potential mistakes, such as this line from a contract with the San Diego Association of Governments: "FaceFirst makes no representations or warranties as to the accuracy and reliability of the product in the performance of its facial recognition capabilities." How's that for reassuring? (The back-of-the-envelope arithmetic after this list shows why even a high accuracy claim doesn't rule out false matches.)
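The reason a headline accuracy number can mislead is base rates: a single probe photo searched against millions of license photos multiplies even a tiny per-comparison false-match rate into a crowd of innocent lookalikes. A quick sketch, using assumed numbers rather than any vendor's real figures:

```python
# Base-rate arithmetic with illustrative, assumed numbers.
database_size = 16_000_000   # assumption: roughly one state's license photos
false_match_rate = 0.0001    # assumption: 0.01% chance any single comparison misfires

expected_false_matches = database_size * false_match_rate
print(f"Expected innocent lookalikes per search: {expected_false_matches:,.0f}")
# prints: Expected innocent lookalikes per search: 1,600
```

Even under these generous assumptions, every search could surface over a thousand innocent candidates — and if the error rate is higher for some demographic groups, those groups bear more of that burden.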

As the Georgetown report points out, identifying the flaws of this technology isn't necessarily a play to stop it. Law enforcement agencies that want to use facial recognition ostensibly have good intentions, and lawmakers could step in to make sure the technology is used within clear legal limits; for example, to protect against potential false matches, the FBI uses facial recognition only for investigative leads, not as evidence in a prosecution.

Just as legislation had to be written to keep police from freely wiretapping once phones became a household technology, facial recognition desperately needs guidelines and precedent. Only with those in place can we keep it from fueling full-throttle dystopian law enforcement abuses — lest your face become a (literally) unwarranted legal liability.

