AI may make video surveillance even more frightening.

Panopticon

Gone are the days when a store's security cameras only mattered to shoplifters.

Now, with the rising prevalence of surveillance systems constantly monitored by artificial intelligence, ubiquitous cameras can watch, learn about, and discriminate against shoppers more than ever before.

That's the gist of a new ACLU report titled "The Dawn of Robot Surveillance," which describes how emerging AI technology enables security companies to constantly monitor and collect data about people — opening the door to abuses of power and the overpolicing of underserved communities.

Breakdown

The report, first covered by Motherboard, breaks down how AI-driven surveillance systems could soon begin to impact our lives.

Instead of just keeping track of who's in a store, surveillance systems could use facial recognition to determine people's identities and gather even more information about them. That data would then be out there, with no opportunity to opt out.

For people of color and other marginalized communities against whom AI algorithms are already biased, that could mean further stigmatization just for walking into a camera's view.

Preventative Measures

To prevent the worst consequences of this new smart surveillance tech, the ACLU report calls for strong legislation that would limit how the camera feeds can be used — especially to prevent mass data collection about people who are just going about their lives.

"Growth in the use and effectiveness of artificial intelligence techniques has been so rapid that people haven’t had time to assimilate a new understanding of what is being done, and what the consequences of data collection and privacy invasions can be," concludes the report.

READ MORE: AI Has Made Video Surveillance Automated and Terrifying [Motherboard]

More on surveillance: This Colorful Picture is Like an Invisibility Cloak for AI
