Can AI Help Battle Coronavirus?

The AI community has been marshaling its resources in the fight against the coronavirus, with a focus on three areas: diagnosis, treatment, and prediction. The biggest challenge thus far has been the lack of data, partly caused by a dearth of diagnostic testing. In this post, I give some examples of how AI is being applied in each of these three areas. Unfortunately, I don’t think AI will have a huge impact on our response to the COVID-19 epidemic, but what we learn here will help us in the future.

Prediction

There has been much discussion about “flattening the curve” so we don’t overwhelm our healthcare resources. The graphs being shown are based on predictions of how the disease can spread under different scenarios. We would like to know how many COVID-19 cases to expect, when and where they are likely to occur, and their expected severity. We would also like early identification of novel outbreaks.
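To make the idea of scenario-based spread prediction concrete, here is a minimal sketch of a classic SIR (susceptible-infected-recovered) model in Python. The population size and the beta and gamma rates below are illustrative assumptions, not values fitted to COVID-19 data; lowering the contact rate beta is one way a simulation can show the curve flattening.

```python
def simulate_sir(population, beta, gamma, initial_infected=1, days=180):
    """Simulate a basic SIR model with daily time steps.

    beta  -- average number of transmitting contacts per person per day
    gamma -- recovery rate (1 / average infectious period in days)
    """
    s, i, r = population - initial_infected, initial_infected, 0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# Illustrative scenarios: the same population with and without distancing.
baseline = simulate_sir(population=1_000_000, beta=0.30, gamma=0.10)
distancing = simulate_sir(population=1_000_000, beta=0.15, gamma=0.10)

print("Peak infected, baseline:  ", int(max(i for _, i, _ in baseline)))
print("Peak infected, distancing:", int(max(i for _, i, _ in distancing)))
```

Epidemiologists use far richer models (and far better data), but even this toy version shows why the predicted peak depends so strongly on the assumed scenario.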

In 2008, Google launched Flu Trends, a project to predict and monitor flu activity. It was shut down after it overestimated the peak of the 2013 flu season by 140 percent. But other companies learned from this epic failure and have since developed better solutions. At the end of February 2020, Metabiota was able to predict the cumulative number of COVID-19 cases a week in advance to within 25 percent, and also to predict which countries would have the most cases.

Diagnosis

The most widely publicized AI success against the coronavirus has been the development of Deep Learning models that analyze CT scans of the lungs and distinguish COVID-19 pneumonia from other causes. Infervision and Alibaba have built models that demonstrate high accuracy. Here is a paper describing an approach by a Chinese team.
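Published systems differ in their architectures and training data, so as a rough illustration only, here is a hedged sketch of the general approach: fine-tuning a pretrained image classifier on labeled CT slices with PyTorch and torchvision. The ct_dataset directory layout and the two class labels are assumptions made for this example, not details of Infervision’s or Alibaba’s systems.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Assumed layout: ct_dataset/train/{covid_pneumonia, other}/*.png
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),  # CT slices are single-channel
    transforms.ToTensor(),
])
train_data = datasets.ImageFolder("ct_dataset/train", transform=transform)
loader = torch.utils.data.DataLoader(train_data, batch_size=16, shuffle=True)

# Start from an ImageNet-pretrained backbone and replace the final layer.
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)  # COVID-19 pneumonia vs. other

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```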

One issue is that we would like an earlier diagnosis rather than having to wait until pneumonia has developed. Also, with a large number of cases, the capacity to perform CT scans could be exceeded.

Treatment

Biotech companies are using AI to identify already-approved drugs that can be repurposed against the coronavirus, and also to identify other molecules that could form the basis of an effective treatment.
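The screening techniques vary by company and are mostly proprietary. As one hedged, simplified illustration, the sketch below ranks a few candidate compounds by chemical similarity to a reference molecule using RDKit fingerprints; the SMILES strings and the choice of reference are placeholders for illustration, not actual repurposing candidates.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Placeholder reference: a molecule standing in for a known protease inhibitor.
reference = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin, illustration only

# Placeholder "approved drug" candidates to rank by similarity to the reference.
candidates = {
    "caffeine": "CN1C=NC2=C1C(=O)N(C(=O)N2C)C",
    "ibuprofen": "CC(C)Cc1ccc(cc1)C(C)C(=O)O",
    "paracetamol": "CC(=O)Nc1ccc(O)cc1",
}

def fingerprint(mol):
    # Morgan (circular) fingerprint as a 2048-bit vector.
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

ref_fp = fingerprint(reference)
scores = {}
for name, smiles in candidates.items():
    mol = Chem.MolFromSmiles(smiles)
    scores[name] = DataStructs.TanimotoSimilarity(ref_fp, fingerprint(mol))

for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: Tanimoto similarity {score:.2f}")
```

In practice, similarity search like this is only one ingredient; repurposing pipelines also use models of target binding, toxicity, and clinical data.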

Insilico is going after an enzyme, called the 3C-like protease, that is critical for the coronavirus’s replication. They are using Generative Adversarial Networks (GANs) and other models in their drug discovery pipeline.
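Insilico has not released that pipeline, so purely as a hedged illustration of the GAN idea, here is a minimal PyTorch skeleton in which a generator proposes fingerprint-like bit vectors and a discriminator learns to tell them apart from “real” ones. Real generative chemistry models operate on much richer molecular representations (such as SMILES strings or molecular graphs), and the random “real” fingerprints below are stand-ins for an actual training set.

```python
import torch
import torch.nn as nn

FP_BITS, NOISE_DIM = 2048, 128

# Generator: maps random noise to a fingerprint-like vector of values in (0, 1).
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 512), nn.ReLU(),
    nn.Linear(512, FP_BITS), nn.Sigmoid(),
)

# Discriminator: scores how "real" a fingerprint vector looks.
discriminator = nn.Sequential(
    nn.Linear(FP_BITS, 512), nn.ReLU(),
    nn.Linear(512, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_batch = (torch.rand(64, FP_BITS) > 0.9).float()  # stand-in for real fingerprints

for step in range(1000):
    # Train the discriminator on real vs. generated fingerprints.
    fake_batch = generator(torch.randn(64, NOISE_DIM)).detach()
    d_loss = loss_fn(discriminator(real_batch), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake_batch), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator to fool the discriminator.
    g_loss = loss_fn(discriminator(generator(torch.randn(64, NOISE_DIM))),
                     torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```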

Conclusion

There have been great advances in AI technology over the past decade, especially in the area of Deep Learning, that can be applied to the prediction, diagnosis, and treatment of infectious diseases. Our experience developing solutions for the current epidemic will help prepare us for the next one.


Removing Bias in AI Systems

We must ensure that our AI Systems are not biased. This can be an issue when building Deep Learning models from a biased training set.

Advances in AI technology have enabled a large number of successful applications, especially in the area of Deep Learning. But the issue of learned bias has raised its ugly head and must be addressed. The good news is that the AI research community has been working on this problem and interesting and effective solutions are being developed.

There are many different types of biases. Here are some examples.

  • Gender bias
  • Economic bias
  • Racial bias
  • Sexual orientation bias
  • Age bias

If we train our AI systems on biased data, these biases will be learned. For example, if we train a Deep Learning system on images of doctors and an overwhelming percentage of the images are of male doctors, the system is likely to learn that doctors are men.

In their 2016 paper, “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings,” Tolga Bolukbasi et al. showed a disturbing level of gender bias in word embeddings trained on Google News articles. But they also proposed an effective way of removing this bias from the learned models. The basic idea is to change the embeddings of gender-neutral words by removing their gender associations. The same approach can be taken for other forms of bias.
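As a rough sketch of that neutralizing step (not the authors’ released code), the example below estimates a gender direction from a definitional word pair and removes each gender-neutral word’s projection onto it. The tiny hand-made vectors are placeholders; a real application would start from pretrained embeddings such as word2vec or GloVe.

```python
import numpy as np

# Placeholder embeddings; in practice these come from a pretrained model.
embeddings = {
    "he":       np.array([ 1.0, 0.2, 0.5]),
    "she":      np.array([-1.0, 0.2, 0.5]),
    "doctor":   np.array([ 0.4, 0.9, 0.1]),
    "engineer": np.array([ 0.3, 0.1, 0.8]),
}

# Estimate the gender direction from a definitional pair.
gender_direction = embeddings["he"] - embeddings["she"]
gender_direction /= np.linalg.norm(gender_direction)

def neutralize(vec, direction):
    """Remove a vector's component along the bias direction."""
    return vec - np.dot(vec, direction) * direction

for word in ("doctor", "engineer"):  # gender-neutral occupation words
    before = np.dot(embeddings[word], gender_direction)
    embeddings[word] = neutralize(embeddings[word], gender_direction)
    after = np.dot(embeddings[word], gender_direction)
    print(f"{word}: gender projection {before:.2f} -> {after:.2f}")
```

After neutralization, the occupation words have essentially zero component along the gender direction, which is the property the paper’s neutralize step enforces.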

The fact that we have to deal with this problem in Machine Learning sheds light on the extensive amount of bias that exists in the human world. Unfortunately, it seems easier to fix the bias issue in AI systems than in humans.

“Debiasing humans is harder than debiasing AI systems.” – Olga Russakovsky, Princeton