Removing Bias in AI Systems

We must ensure that our AI systems are not biased. Bias becomes a real risk when Deep Learning models are built from a biased training set.

Advances in AI technology, especially in the area of Deep Learning, have enabled a large number of successful applications. But the issue of learned bias has reared its ugly head and must be addressed. The good news is that the AI research community has been working on this problem, and effective solutions are being developed.

There are many different types of bias. Here are some examples:

  • Gender bias
  • Economic bias
  • Racial bias
  • Sexual orientation bias
  • Age bias

If we train our AI systems on biased data, these biases will be learned. For example, if we train a Deep Learning system on images of doctors and an overwhelming percentage of the images show male doctors, the system is likely to learn that doctors are men. In their 2016 paper, “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings,” Tolga Bolukbasi et al. showed a disturbing level of gender bias in word embeddings trained on Google News articles. But they also proposed an effective way of removing this bias from the learned models. The basic idea is to change the embeddings of gender-neutral words by removing their gender associations: identify a “gender direction” in the embedding space from definitional pairs such as he/she, then subtract each neutral word’s projection onto that direction (a sketch of this step appears below). The same approach can be taken for other forms of bias.
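Here is a minimal sketch of that neutralizing step, not the authors’ full method: the paper derives the gender subspace via PCA over several definitional pairs and adds an “equalize” step for pairs like grandmother/grandfather, whereas this sketch approximates the direction with a single he–she difference and uses tiny made-up vectors in place of real embeddings.

```python
import numpy as np

def gender_direction(embeddings):
    """Approximate the gender direction with the he-she difference vector.

    Bolukbasi et al. derive the subspace via PCA over several definitional
    pairs (he/she, man/woman, ...); a single pair keeps this sketch short.
    """
    d = embeddings["he"] - embeddings["she"]
    return d / np.linalg.norm(d)

def neutralize(word, g, embeddings):
    """Remove a gender-neutral word's component along the direction g."""
    v = embeddings[word]
    v_bias = np.dot(v, g) * g    # projection onto the gender direction
    v_debiased = v - v_bias      # orthogonal, gender-free component
    return v_debiased / np.linalg.norm(v_debiased)

# Illustrative 4-d vectors (real word embeddings are typically ~300-d).
rng = np.random.default_rng(0)
embeddings = {w: rng.standard_normal(4) for w in ["he", "she", "doctor"]}
embeddings = {w: v / np.linalg.norm(v) for w, v in embeddings.items()}

g = gender_direction(embeddings)
print("before:", np.dot(embeddings["doctor"], g))  # nonzero gender component
embeddings["doctor"] = neutralize("doctor", g, embeddings)
print("after: ", np.dot(embeddings["doctor"], g))  # ~0 after neutralizing
```

After neutralizing, “doctor” has zero dot product with the gender direction (up to floating-point error), which is exactly the property the paper’s neutralize step enforces: the word no longer leans toward “he” or “she” in the embedding space.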

The fact that we must deal with this problem in Machine Learning sheds light on the extensive amount of bias that exists in the human world. Unfortunately, it seems easier to fix bias in AI systems than in humans.

“Debiasing humans is harder than debiasing AI systems.” – Olga Russakovsky, Princeton

Author: Steve Kowalski

Chief Technology Officer (CTO) - SaaS, Cloud, Agile