Can AI prevent future pandemics?
Jeremy Peckham - Research Lead at the AI, Faith and Civil Society Commission
Using AI as a Forecasting Tool?
Artificial intelligence (AI) has shown significant potential in predicting and mitigating the impact of pandemics. A notable example is BlueDot, a company whose AI-based system successfully identified the early signs of the COVID-19 outbreak in Wuhan, China, nine days before the World Health Organization (WHO) issued a public alert.
By analysing diverse data sources, such as foreign-language news reports, global ticketing data, and animal and plant disease networks, BlueDot was able to anticipate where and when infected residents were likely to travel next. The system correctly predicted that the virus would spread to Bangkok, Seoul, Taipei, and Tokyo shortly after the initial outbreak (Bogoch 2020).
Limitations of AI in Forecasting
While BlueDot's forecasting was successful, using AI to predict pandemics faces inherent challenges. Disease-spread prediction relies heavily on large, high-quality datasets, yet even the best models cannot guarantee complete accuracy, and there is no assurance that a new virus will behave like those that came before it.
Some attempts have been made to use social media data to model and predict the progress of COVID-19, but such data accumulates noise and can be misleading, making it difficult to rely on social media alone to predict the spread of a disease accurately. Google Flu Trends, which estimated flu activity from people’s search trends, overestimated the number of doctor visits in 2014 by more than double. The system suggested that far more people were seeking medical help than actually were, showing that relying purely on search data for health predictions can mislead. One conclusion drawn from this episode was that “big data” is no substitute for traditional data collection and analysis, a failure dubbed the “big data hubris”.
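To illustrate the point about noisy proxy signals, the following is a minimal, purely illustrative sketch in Python using invented numbers; it is not Google's actual model or data. A forecast fitted to search-query volume tracks real cases reasonably well until search behaviour drifts, at which point it begins to overestimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" weekly flu cases over two seasons (synthetic curve, arbitrary units).
weeks = np.arange(104)
true_cases = 500 + 400 * np.sin(2 * np.pi * weeks / 52) ** 2

# A proxy signal (search volume) that loosely tracks cases, plus noise.
search_volume = 1.8 * true_cases + rng.normal(0, 80, size=weeks.size)

# Fit a simple linear model on the first season only.
train = weeks < 52
slope, intercept = np.polyfit(search_volume[train], true_cases[train], 1)

# In the second season, media coverage drives extra searches unrelated to
# actual illness -- the kind of behavioural drift that undermined search-based
# flu forecasting.
media_spike = np.where(weeks >= 52, 600.0, 0.0)
predicted = slope * (search_volume + media_spike) + intercept

ratio = predicted[~train].mean() / true_cases[~train].mean()
print(f"Second-season predictions run about {ratio:.1f}x the true case counts")
```

The precise numbers are meaningless; the point is simply that when the relationship between the proxy signal and the thing being measured shifts, the model's output shifts with it, and no amount of extra data volume fixes that on its own.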
Human Values Risk Analysis
Truth and Reality – HIGH RISK
Ensuring the accuracy of AI predictions is critical to avoid misinformation and confusion.
Privacy and Freedom – HIGH RISK
AI-based forecasting must protect privacy, particularly when using sensitive data like travel and health information.
Authentic Relationships – MEDIUM RISK
The trust between AI developers and public health organisations must be maintained for successful collaboration.
Moral Autonomy – LOW RISK
No direct impact on moral autonomy.
Dignity of Work – LOW RISK
No direct impact on dignity of work.
Cognition and Creativity – LOW RISK
No direct impact on cognition and creativity.
Policy Recommendations
The potential of AI in public health forecasting is immense, but ensuring its effective and ethical use requires attention to several key issues:
1. Governments should work to create standardised frameworks for sharing and managing health data to improve AI predictions.
2. AI developers must maintain transparency about the limitations of their forecasting models to avoid overreliance on inaccurate predictions.