How AI Can Help Tackle the Next Pandemic and the Associated Risks
The term Disease X was coined by the World Health Organization to describe a hypothetical, as-yet-unknown pathogen capable of causing the next global pandemic, an event experts predict will inevitably occur. While it is impossible to pinpoint the exact nature of this future threat, some scientists estimate roughly a 25 percent chance of another pandemic on the scale of COVID-19 within the next decade. The causative agent could be a known virus such as influenza or a coronavirus, or it could be an entirely new pathogen. The uncertainty surrounding Disease X highlights the urgent need for advanced predictive and preventive measures. One promising approach lies in leveraging artificial intelligence (AI) to identify and mitigate emerging threats before they spiral out of control.
Researchers at the University of California, Irvine (UCI) and the University of California, Los Angeles (UCLA) are at the forefront of developing AI-based early warning systems designed to detect potential pandemics. This initiative is part of the National Science Foundation’s Predictive Intelligence for Pandemic Prevention grant program. By analyzing a massive database of 2.3 billion US Twitter posts, the researchers aim to monitor public health trends and detect anomalies that may indicate the emergence of a new infectious disease. Led by Professor Chen Li at UCI and Dr. Wei Wang at UCLA, this project exemplifies how AI can be harnessed to enhance our preparedness for future pandemics.
The AI tool being developed by UCI and UCLA works by identifying and categorizing meaningful events in social media data, with the goal of predicting future pandemics and evaluating public health policies. It can also assess how treatments affect the spread of a virus. One significant limitation, however, is its reliance on Twitter, which is not accessible in all countries. Despite this drawback, the tool represents a notable step forward in using AI for pandemic prediction and response. By continuously refining and expanding the data sources, researchers hope to create a more robust and inclusive system capable of providing early warnings for a wide range of infectious diseases.
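To make the idea concrete, here is a minimal sketch of how such an early warning signal might work. It is an illustration only, not the UCI/UCLA team's actual pipeline: it counts symptom-related posts per day and flags days that deviate sharply from a trailing baseline. The keyword list, window size, and z-score threshold are all assumptions chosen for the example.

```python
from statistics import mean, stdev

# Illustrative keyword list; a real system would use trained
# classifiers to identify and categorize health-related events.
SYMPTOM_KEYWORDS = {"fever", "cough", "shortness of breath", "loss of smell"}

def daily_symptom_counts(posts_by_day):
    """Count posts per day mentioning any symptom keyword.

    posts_by_day: dict mapping ISO date string (YYYY-MM-DD, so
    lexical sort is chronological) -> list of post texts.
    """
    return {
        day: sum(
            any(kw in post.lower() for kw in SYMPTOM_KEYWORDS)
            for post in posts
        )
        for day, posts in posts_by_day.items()
    }

def flag_anomalies(counts, window=14, threshold=3.0):
    """Flag days whose count exceeds the trailing mean of the
    previous `window` days by `threshold` standard deviations
    (a simple z-score rule)."""
    days = sorted(counts)
    alerts = []
    for i in range(window, len(days)):
        history = [counts[d] for d in days[i - window:i]]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (counts[days[i]] - mu) / sigma > threshold:
            alerts.append(days[i])
    return alerts
```

A production system would replace the keyword match with trained language models and correct for platform-wide volume shifts, but the core signal is the same: a sustained departure from the historical baseline.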
Another notable AI tool in the fight against pandemics is EVEscape, developed by researchers at Harvard Medical School and the University of Oxford. EVEscape has demonstrated remarkable accuracy in predicting which mutations are likely to arise in various viruses, including HIV and influenza. During the early stages of a pandemic, this tool can be invaluable in guiding vaccine manufacturers and identifying potential treatments. Pharmaceutical companies like AstraZeneca are already leveraging AI to accelerate the discovery of new antibodies and vaccine candidates. The Oslo-headquartered Coalition for Epidemic Preparedness Innovations (CEPI) has also funded EVEscape, recognizing AI's potential as a valuable tool for pandemic preparedness.
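At a high level, tools of this kind rank candidate mutations by how likely the virus is to remain viable and how well the mutation evades existing immunity. The sketch below illustrates that general idea only; it is not EVEscape's actual model, and every score, weight, and candidate value shown is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Mutation:
    name: str               # e.g. "E484K": position and amino-acid change
    fitness: float          # estimated probability the virus stays viable
    antibody_escape: float  # proxy for evading existing antibodies, 0..1

def escape_score(m: Mutation, w_fitness=0.5, w_escape=0.5):
    """Combine viability and immune-escape signals into one score.
    Weights are illustrative; a real model learns them jointly."""
    return w_fitness * m.fitness + w_escape * m.antibody_escape

# Hypothetical candidates, scored and ranked. In practice these
# inputs come from deep generative models trained on large viral
# sequence datasets, not hand-assigned numbers.
candidates = [
    Mutation("A222V", fitness=0.8, antibody_escape=0.2),
    Mutation("E484K", fitness=0.7, antibody_escape=0.9),
]
for m in sorted(candidates, key=escape_score, reverse=True):
    print(f"{m.name}: {escape_score(m):.2f}")
```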
Despite the promise of AI in combating pandemics, it is crucial to acknowledge that the technology is still in its developmental stages. Dr. Philip Abdelmalik at the World Health Organization (WHO) emphasizes the importance of human involvement in ensuring AI's effectiveness. While AI can identify potential threats and misinformation, its capabilities are limited by the quality and representativeness of the input data. Experts believe that although advancements in AI leave us better positioned for the next pandemic, significant work remains in fundamental biology and in building trust and relationships among researchers and information-sharing networks.
Trust and collaboration among researchers are essential components of preparing for and responding to future pandemics. The origins of the COVID-19 pandemic have been a contentious topic, with debates over whether the virus was naturally occurring or leaked from a lab conducting gain-of-function research. This controversy has strained international relations and diverted attention from other pressing threats, such as the potential for intentionally engineered pandemics. The dual-use nature of genetic engineering and AI presents both opportunities and risks, necessitating careful consideration and proactive measures to prevent misuse.
The potential for AI to be used in creating bioweapons is a growing concern. In 2001, Australian researchers studying the mousepox virus inadvertently demonstrated this danger: inserting a single immune-regulating gene produced a strain lethal even to vaccinated mice. Recent technological advances have made it far easier to engineer and manipulate viruses, raising the specter of individuals or groups using these tools to intentionally cause catastrophic pandemics. The integration of AI into genetic modification increases the risk of accidental or intentional outbreaks, underscoring the urgency of addressing this threat through robust regulatory frameworks and international cooperation.
The solutions required to mitigate the risks associated with AI and genetic engineering overlap significantly with those needed to improve outbreak response systems. Investing in public health and clinical infrastructure, enhancing pathogen detection and surveillance, and fostering international collaboration are critical steps in preventing future pandemics. The recent postponement of negotiations on the global pandemic agreement to 2025 highlights the need for more concerted and proactive efforts. As genomics and AI put unprecedented power to cause global pandemics within reach of individuals, society must grapple with the ethical and practical implications of these technologies to minimize harm and maximize benefits.
The question of whether chatbots and large language models (LLMs) can be used to build bioweapons has been raised in several recent articles. Experts warn that AI, particularly LLMs, could make bioterrorism more accessible to malicious actors. While bioweapons and bioterrorism have historically been rare, the potential risks posed by AI should not be ignored. Discussions about AI and bioweapons have often focused on the capabilities of LLMs like ChatGPT, which have undergone stress tests to probe potential biosecurity concerns. These tests suggest that the information such models generate would be most useful to non-state actors without formal scientific training.
A thought experiment conducted at MIT demonstrated how readily a group of undergraduate students could use an LLM to obtain information on creating and ordering a dangerous virus. The experiment raised concerns about how easily individuals with malicious intent could gain access to bioweapons knowledge through AI. High-profile leaders, including the British Prime Minister and the Biden administration in the US, have expressed concern about the intersection of AI and bioweapons. However, many uncertainties remain about how AI will affect the development and use of bioweapons, necessitating a balanced perspective and multilateral responses to these risks.
While AI can be used to design new harmful compounds or enhance existing pathogens, there is no guarantee that such designs could be easily synthesized or weaponized. Challenges related to data availability and quality in the biological sciences could limit the effectiveness of AI in creating bioweapons, and the unpredictability and complexity of living organisms present significant further hurdles. Although AI may make it easier for non-experts to access dual-use knowledge, much of this information is already available through other means. Maintaining a balanced perspective on the potential risks of AI to biosafety and biosecurity is crucial for developing effective responses.
The history of state biological weapons programs, from Japan's during World War II to the US and Soviet programs that continued into the Cold War, underscores the potential for harm posed by bioweapons. Despite international agreements like the Biological Weapons Convention, some nations, including Russia and North Korea, are believed to continue developing bioweapons. The advent of powerful new AI models and lab tools could make it easier for rogue actors or states to engineer highly contagious and lethal viruses. Governments, technology companies, and scientific researchers must work together to mitigate the risk of bioweapons and prevent a potential catastrophic event.
In conclusion, AI holds immense potential in helping tackle the next pandemic, from early detection and monitoring to accelerating vaccine development and treatment identification. However, the dual-use nature of AI and genetic engineering also presents significant risks, including the potential for creating bioweapons. To harness the benefits of AI while minimizing the risks, it is essential to invest in public health infrastructure, enhance pathogen surveillance, foster international collaboration, and establish robust regulatory frameworks. By taking proactive measures and engaging in thoughtful discussions about the ethical and practical implications of these technologies, society can better prepare for and respond to future pandemics.