We trust artificial intelligence algorithms with a lot of really important tasks. But they betray us all the time. Algorithmic bias can lead to over-policing in predominantly black areas; the automated filters on social media flag activists while allowing hate groups to keep posting unchecked.

As the problems caused by algorithmic bias have bubbled to the surface, experts have proposed all sorts of ways to make artificial intelligence fairer and more transparent so that it works for everyone.

These range from subjecting AI developers to third-party audits, in which an expert would evaluate their code and source data to make sure the resulting system doesn't perpetuate society's biases and prejudices, to developing tests that check whether an AI algorithm treats people differently based on things like race, gender, or socioeconomic class.

Now scientists from IBM have a new safeguard that they say will make artificial intelligence safer, more transparent, fair, and effective. They propose that, right before developers start selling an algorithm, they publish a Supplier's Declaration of Conformity (SDoC). Like a report or user manual, the SDoC would show how well the algorithm performed on standardized tests covering performance, fairness, risk factors, and safety. And developers should make it available to anyone who's interested.
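
To make that a little more concrete, here's a purely hypothetical sketch of what a few machine-readable SDoC fields might look like. The field names and numbers below are invented for illustration; the paper describes the factsheet idea but doesn't prescribe a specific format.

```python
# Hypothetical sketch of a machine-readable SDoC entry. Field names and values
# are invented for illustration; IBM's paper does not prescribe this format.
import json

sdoc = {
    "service": "example-credit-scoring-model",       # hypothetical AI service
    "version": "1.0.0",
    "performance": {"accuracy": 0.91, "auc": 0.95},  # standardized test results
    "fairness": {
        "protected_attributes_checked": ["gender", "age"],
        "bias_mitigation_applied": True,
    },
    "safety": {"adversarial_robustness_tested": True},
}

# A published SDoC like this could ship alongside the model for anyone to inspect.
print(json.dumps(sdoc, indent=2))
```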

In a research paper published Wednesday, the IBM scientists argue that this kind of transparency could help build public trust and reassure prospective clients that a particular algorithm will do what it’s supposed to without screwing anyone over based on biased training data. If a particular algorithm does seem likely to screw someone over, the client (and even interested citizens) would ideally be able to tell from the test results and choose not to put it to use.

In their paper, the IBM scientists draw on the example of SDoCs in other industries, which are rarely required by law but are encouraged in order to keep potential customers from going to more transparent competitors. For instance, consumers can trust the brakes of a car, the autopilot of an airplane, and the resilience of a bridge because these things are exhaustively tested against standard, well-known metrics. And yet there's no equivalent test to make sure that artificial intelligence tools will perform as claimed.

The researchers propose that an AI SDoC would answer questions like: “Was the dataset and model checked for biases?” and “Was the service checked for robustness against adversarial attacks?” In general, the questions would evaluate an algorithm based on how it performs rather than inspecting its components or its code the way an auditor might. Here are a few more questions an AI SDoC might include, as the researchers write in the paper (a rough sketch of what answering the bias question could involve follows the list):

Does the dataset used to train the service have a datasheet or data statement?

Was the dataset and model checked for biases? If yes, describe bias policies that were checked, bias checking methods, and results.

Was any bias mitigation performed on the dataset? If yes, describe the mitigation method.

Are algorithm outputs explainable/interpretable? If yes, explain how the explainability is achieved (e.g. directly explainable model, local explainability, explanations via examples).

What kind of governance is employed to track the overall workflow of data to AI service?
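
To make the bias question concrete: below is a minimal, purely illustrative sketch of the kind of check a developer might run and summarize in an SDoC, comparing how often a classifier hands out a favorable outcome to two groups. The function, the synthetic data, and the "four-fifths" ratio heuristic are assumptions for illustration; the IBM paper doesn't prescribe any particular metric or code.

```python
# Illustrative only: a toy bias check of the kind an SDoC answer might summarize.
# Assumes a binary classifier and a single binary protected attribute; the metric
# and the synthetic data are invented for this sketch, not taken from IBM's paper.
import numpy as np

def demographic_parity_report(y_pred: np.ndarray, group: np.ndarray) -> dict:
    """Compare positive-outcome rates between group 0 and group 1."""
    rate_0 = y_pred[group == 0].mean()  # favorable-outcome rate for group 0
    rate_1 = y_pred[group == 1].mean()  # favorable-outcome rate for group 1
    return {
        "positive_rate_group_0": float(rate_0),
        "positive_rate_group_1": float(rate_1),
        "difference": float(rate_1 - rate_0),
        # Rough "four-fifths rule"-style screening ratio, a common heuristic
        "ratio": float(min(rate_0, rate_1) / max(rate_0, rate_1)),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=1000)        # hypothetical protected attribute
    y_pred = rng.binomial(1, 0.3 + 0.2 * group)  # deliberately skewed predictions
    print(demographic_parity_report(y_pred, group))
```

Numbers like these are what the "describe bias checking methods, and results" part of an SDoC answer would report, alongside whatever mitigation was applied.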

Asking developers to publish SDoCs won't solve all of the problems our growing reliance on AI presents. We know how brakes stop a speeding car, but some of the more complex algorithms out there (like those that employ deep learning techniques) can be inscrutable. Plus, if a transparency report based on standardized testing is going to have an impact, everyone will have to play along.

Sure, developers would be motivated to start releasing SDoCs if their competitors were doing it. But the system will only work if customers, governments, and companies that use AI show that they actually care what these reports say. Will a police department like the LAPD, which has used blatantly racist policing algorithms in the past, care enough about the details of an SDoC to find a better system? Truth is, we don't know yet.

These reports are unlikely to force anyone to employ more ethical algorithms, or even to develop them. But combined with other tools like third-party audits, they could help the public demand algorithms that treat everyone fairly.

More on how to make AI safe and fair: Microsoft Announces Tool To Catch Biased AI Because We Keep Making Biased AI

