The US Constitution guarantees people the right to due process. However, there are concerns that computer algorithms are now undermining that right.
Government agencies responsible for everything from criminal justice to health care to benefits and other forms of financial support are increasingly using algorithms to improve efficiency. These agencies decide who is granted bail and who is prioritized for certain services – decisions that have a huge effect on a person’s life.
The algorithms and scoring systems used to make these decisions are often completely private. Given the impact they can have, there are calls for more transparency.
“We should have equivalent due-process protections for algorithmic decisions as for human decisions,” Microsoft researcher and AI Now founder Kate Crawford said in an interview with Wired.
The main argument against disclosing too much about these systems is that the algorithms are proprietary: their developers consider them intellectual property. However, Crawford maintains that it’s possible to explain how they operate without allowing other entities to copy their methods.
In theory, an algorithmic approach to something like a bail hearing should be a truly impartial way of making such decisions. In practice, these systems can often take on biases from their creators.
In 2015, a study carried out at Carnegie Mellon University found that men were significantly more likely than women to be served ads for high-paying jobs. Last year, it emerged that algorithms were reinforcing prejudiced policing by targeting specific racial groups. When such biased systems shape government decisions, they run afoul of the US Constitution.
Artificial intelligence and machine learning techniques are only going to become more prevalent and more sophisticated in the coming years. As the influence these systems have over our lives continues to grow, it’s crucial that they be rigorously inspected to ensure they’re working as intended.
Imagine a future where an algorithm decides what constitutes a crime, whether you’re guilty or not, and what the proper punishment should be. If the people at the mercy of this system aren’t privy to its methodology, this could be a true dystopia – and it’s not a far-fetched scenario.
AI Now’s report suggests that governments take stock of the way that they employ algorithms. It also urges the companies who develop these tools to consider whether their products are biased, and calls for better checks on whether automated systems discriminate against certain groups.
Algorithms are already making decisions that can change a person’s life, so it’s high time we took a closer look at how they reach their conclusions.