The Double-Edged Sword of AI: Potential for Pandemics and Bioweapons
The convergence of artificial intelligence (AI) and genetic engineering has opened a Pandora’s box of possibilities, both beneficial and potentially catastrophic. As AI continues to evolve, it promises unprecedented advances in medicine, public health, and many other fields. Yet the same technology poses significant risks, particularly in the realm of bioweapons and engineered pandemics. OpenAI’s recent acknowledgment that its new models could be misused to help create bioweapons has pushed the issue to the forefront, underscoring the urgent need for robust safeguards and ethical guardrails.
The debate over the origin of Covid-19 has been contentious, with theories ranging from natural spillover to a possible lab leak involving gain-of-function research. The controversy has not only strained international relations but also overshadowed a more pressing concern: the potential for engineered pandemics. The fusion of genetic engineering and AI makes it increasingly feasible to manipulate viruses for both beneficial and destructive purposes, and this dual-use character demands a comprehensive approach that mitigates risks while maximizing benefits.
Historically, the use of bioweapons has been rare, but their development has long been a concern. During World War II, Japan conducted experiments with biological agents, and both the United States and the Soviet Union stockpiled toxins during the Cold War. Even after the 1972 Biological Weapons Convention banned such weapons, clandestine programs continued: the Soviet Union ran a bioweapons program for decades after signing the treaty. Today, countries such as Russia and North Korea are believed to still be developing bioweapons, underscoring the persistent threat.
Advances in AI have made the creation of deadly pathogens more accessible. High-throughput genomic technologies and generative AI make it easier to engineer viruses that can evade immune systems and render standard vaccines ineffective. This capability, while promising for medical breakthroughs, also raises the specter of an intentional or accidental pandemic. The use of typically harmless viruses as delivery vehicles for gene therapies exemplifies this double-edged sword: the line between therapeutic and harmful applications can blur all too easily.
Sustained investment in public health and clinical infrastructure is crucial to counter the threat of AI-engineered superbugs. That means resources for pathogen detection and surveillance, development of diagnostic and vaccine platforms, and equitable access to preventive measures and care. The recent infections of cattle and humans with avian influenza highlight the importance of surveillance and early identification of dangerous pathogens. Broader conversations about preemptive threat assessment and response are essential to stay ahead of nefarious actors.
The postponement of a global pandemic agreement until 2025 is concerning, given that the existing international framework has not been substantially updated in roughly two decades despite enormous technological change. Collaboration among nations is necessary to combat the growing threat of engineered pandemics. Several researchers have compared the potential for global devastation from an engineered pandemic to that of the atomic bomb in 1945, emphasizing the gravity of the situation. Technological advances mean that the power to cause the next pandemic, and millions of deaths, could rest in the hands of a single individual.
There is a moral responsibility to weigh both the benefits and harms of genetic engineering and AI in the context of pandemic preparedness. A joint effort by academia and government is crucial to tackling the threat of engineered pandemics. We must act now and have the difficult debates about how to govern this technology so as to minimize harm and maximize benefit. Preparedness measures should be in place to identify and contain potential outbreaks, and governments and other institutions need to take biosecurity seriously and fund research and prevention efforts.
OpenAI’s biosecurity stress test found that non-experts could access and use information generated by AI models like ChatGPT to help create biological weapons. The finding alarmed policymakers and prompted further discussion of the security implications of AI and bioweapons. In November 2023, the UK hosted a summit on AI safety at which AI’s potential to facilitate bioweapons development was a key topic, and the Biden administration in the United States issued an executive order on AI emphasizing the need to address the bioweapons-related risks posed by emerging AI systems.
Some experts warn that AI could make it easier to build biological weapons; others argue that this view rests on hype and may not be accurate. UK intelligence experts, in their own assessment, judged current AI to be the equivalent of an extremely junior analyst rather than a major threat. AI-generated material is not always reliable, and many uncertainties remain about how AI may affect bioweapons. It is important to consider the limitations of AI, as well as the complex social, political, and organizational factors that influence decisions about bioweapons development and use.
Despite these concerns, AI also holds promise for tackling future pandemics. Disease X, the placeholder name for an unknown pathogen that could cause the next global pandemic, is estimated to have a one-in-four chance of emerging in the next decade. Researchers in California are developing an AI-based early warning system to predict future pandemics, part of the US National Science Foundation’s grant program to prevent pandemics. By collecting billions of tweets from Twitter, the tool monitors public health trends, identifies significant events, and evaluates the effects of treatments on virus spread. Reliance on data from platforms like Twitter, which are inaccessible in some countries, remains a challenge, however.
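To make the idea concrete, here is a minimal sketch of how such a social-media early-warning signal might work, reduced to its simplest form: count daily mentions of watched symptoms and flag any day that sits far above the trailing baseline. The keyword list, thresholds, and synthetic counts below are illustrative assumptions, not the actual pipeline of the NSF-funded project.

```python
from collections import deque
import random
import statistics

# Hypothetical watchlist; a real system would use a far richer vocabulary.
SYMPTOM_KEYWORDS = {"fever", "cough", "loss of smell"}

def count_mentions(posts):
    """Count posts that mention at least one watched symptom."""
    return sum(any(k in p.lower() for k in SYMPTOM_KEYWORDS) for p in posts)

def is_anomalous(baseline, today, z_threshold=3.0):
    """Flag today's count if it sits far above the trailing baseline."""
    if len(baseline) < 7:  # need a minimal history before alerting
        return False
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0  # guard against zero variance
    return (today - mean) / stdev > z_threshold

# Turn a day's raw posts into a count.
print(count_mentions(["Woke up with a fever again", "great game last night"]))  # 1

# Simulate a quiet 30-day baseline, then a sudden spike in symptom chatter.
random.seed(0)
baseline = deque(maxlen=30)
for _ in range(30):
    baseline.append(random.randint(90, 110))
print(is_anomalous(baseline, today=400))  # True: the spike triggers an alert
```

A production system would of course need geolocation, deduplication, bot filtering, and far more robust statistics, but the core loop, reduce posts to counts and watch for deviations from baseline, stays the same.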
Another AI tool, EVEscape, can predict new variants of coronaviruses and of other viruses such as HIV and influenza; used early in a pandemic, it can aid vaccine development and help identify therapeutics. Pharmaceutical giant AstraZeneca likewise uses AI to speed up the discovery of new antibodies. While AI is seen as a valuable tool for preparing for and responding to epidemics and pandemics, the technology still needs to mature. Dr. Philip Abdelmalik of the WHO highlights the role of people in AI’s efficacy, raising concerns about the ethical use of AI and equitable representation.
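EVEscape’s published method is more sophisticated, but the general shape of escape-variant scoring can be sketched as follows: a mutation is dangerous only if the variant remains viable, sits where antibodies bind, and changes that site enough to disrupt binding. The toy values and the way the three components are combined here are assumptions for illustration, not EVEscape’s actual model.

```python
import math
from dataclasses import dataclass

@dataclass
class Mutation:
    site: int             # position in the viral protein
    fitness: float        # toy log-likelihood ratio vs. wild type, from a sequence model
    accessibility: float  # 0..1, how exposed the site is to antibodies (toy value)
    dissimilarity: float  # 0..1, how much the substitution changes the site (toy value)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def escape_score(m):
    """Higher score = more likely to escape existing immunity.

    All three conditions must hold at once (viable AND reachable AND
    disruptive), so the terms are multiplied rather than added.
    """
    return sigmoid(m.fitness) * m.accessibility * m.dissimilarity

candidates = [
    Mutation(site=484, fitness=0.8, accessibility=0.9, dissimilarity=0.7),
    Mutation(site=120, fitness=-2.0, accessibility=0.3, dissimilarity=0.9),
]
for m in sorted(candidates, key=escape_score, reverse=True):
    print(m.site, round(escape_score(m), 3))
```

Ranking candidate mutations this way, before they appear in circulation, is what lets such tools flag which variants vaccine developers should worry about first.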
Experts believe that progress in AI has improved our readiness for future pandemics, but there is still a long way to go on fundamental biology and modeling. Trust and relationships are seen as crucial to responding well to the next pandemic, as are continued progress in AI, other preparedness measures, and international cooperation in facing global health crises. As we navigate the double-edged sword of AI, it is imperative to balance innovation with caution, ensuring that the technology serves humanity’s best interests without opening the door to unprecedented risks.