The 60-minute module could help ensure that AI works for everyone.

Bad Bots

Bias in AI is a genuinely worrisome issue. We've seen algorithms that are racist, sexist, and every other negative -ist you can think of. Even more troubling: even if we scrubbed all the human bias out of our training data, an AI could still end up discriminating on its own, for example by latching onto variables that merely correlate with race or gender.

That's a concern researchers across the world are grappling with as we move toward a future in which AI is everywhere. One bright spot: Google, a leader in AI tech, just added an AI fairness module to its crash course on machine learning.

Dirty Data

Machine learning is a branch of AI in which we train algorithms on data sets. Since that data is often shaped by humans in some way (a data set on arrests, for example, might reflect racial bias in the arresting officers' decisions), machine learning is particularly susceptible to issues of unfairness.
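
To make that concrete, here's a minimal, hypothetical sketch (our illustration, not material from Google's course) of how a biased labeling process carries straight through into a model's predictions. The synthetic data, group sizes, and bias strength are all made up for the example:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, size=n)    # sensitive attribute: 0 or 1
behavior = rng.normal(size=n)         # the thing we actually want to predict

# Biased labels: at the same underlying behavior, group 1 gets labeled
# "positive" more often -- mimicking, say, officers who arrest one group
# more readily than another.
label = (behavior + 0.8 * group + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

# Train on features that include the sensitive attribute.
X = np.column_stack([behavior, group])
model = LogisticRegression().fit(X, label)

# The model faithfully reproduces the bias baked into its labels:
for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"group {g}: positive prediction rate = {rate:.2f}")
```

Run it and the model flags group 1 at a visibly higher rate than group 0, even though the two groups' underlying behavior is identical by construction. The bias came entirely from the labels.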

Several years ago, Google created its Machine Learning Crash Course (MLCC) as part of an internal two-day boot camp to introduce more of its engineers to machine learning. It released the MLCC online in February so that anyone could take advantage of the exercises, case studies, and lessons it contains.

And on Thursday, the company added a new training module to the course, this time focused on fairness when building AI.

Good Guy Google

According to a Google blog post, students who complete the 60-minute fairness module will know the types of human bias that can crop up in machine learning models, what to look for in a data set to determine whether it might contain human bias, and how to evaluate a machine learning model's predictions for bias.
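
For that last skill, one common first check is to compare simple statistics of a model's predictions across groups. Here's a minimal sketch of that idea (again our illustration, not the module's own code; the metric choices and toy data are assumptions):

```python
import numpy as np

def fairness_report(y_true, y_pred, group):
    """Print simple per-group metrics; large gaps between groups suggest bias."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    for g in np.unique(group):
        mask = group == g
        pos_rate = y_pred[mask].mean()              # demographic parity check
        tpr = y_pred[mask & (y_true == 1)].mean()   # equal opportunity check
        print(f"group {g}: positive rate {pos_rate:.2f}, true positive rate {tpr:.2f}")

# Toy usage with made-up labels and predictions:
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
fairness_report(y_true, y_pred, group)
```

If one group's positive rate or true positive rate is markedly lower than another's, that's a signal the model (or the data behind it) deserves a closer look.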

We must leave no stone unturned in the hunt for solutions to our AI bias problem, and by including a fairness module in its MLCC, Google is making a major contribution to the effort.

READ MORE: Google Machine Learning Crash Course Adds Lesson on Ensuring AI Fairness [9to5Google]

More on AI fairness: To Build Trust in Artificial Intelligence, IBM Wants Developers to Prove Their Algorithms Are Fair

