Artificial Intelligence and Synthetic Biology Are Not Harbingers of …

Are AI and biological research harbingers of certain doom or awesome opportunities?

Contrary to the reigning assumption that artificial intelligence (AI) will super-empower those seeking to misuse biotechnology to create pathogens and commit bioterrorism, AI holds the promise of advancing biological research, and biotechnology can power the next wave of AI to greatly benefit humanity. Worries about the misuse of biotech are especially prevalent, and they recently prompted the Biden administration to publish guidelines for biotech research, in part to calm growing fears.

The doomsday assumption that AI will inevitably create new, malign pathogens and fuel bioterrorism misses three key points. First, the data must already exist for an AI to use it. AI systems are only as good as the data they are trained upon. For an AI to be trained on biological data, that data must first exist, which means it is available for humans to use with or without AI. Moreover, attempts at solutions that limit access to data overlook the fact that biological data can be discovered by researchers and shared in encrypted form, beyond the eyes or controls of any government. No solution to the misuse of biological research to develop harmful pathogens or bioweapons can rest on controlling access to data or to AI, because the data will be discovered and known by human experts regardless of whether any AI is trained on it.

Second, governments stop bad actors from using biotech for bad purposes by focusing on the actors' precursor behaviors in developing a bioweapon; fortunately, those same techniques work perfectly well here, too. To mitigate the risks that bad actors, be they humans or humans and machines combined, will misuse AI and biotech, indicators and warnings need to be developed. When advances in technology, specifically steam engines, concurrently resulted in a new type of crime, namely train robberies, the solution was not to forgo either steam engines or their use in conveying cash and precious cargo. Rather, the solution was to employ other improvements, later including safes that were harder to crack and, subsequently, dye packs to mark the hands and clothes of robbers. Similar innovations in early warning and detection are needed today in the realm of AI and biotech, including methods to flag suspicious acquisitions of reagents and suspicious activities, as well as creative means of warning when biological research is being conducted for negative ends.

This second point is particularly key given the recent Executive Order (EO) released on 30 October 2023 prompting U.S. agencies and departments that fund life-science projects to "establish strong, new standards for biological synthesis screening as a condition of federal funding . . . [to] manage risks potentially made worse by AI." Often, the safeguards that keep dual-use biological research from being misused involve monitoring the real world for indicators and early warnings of potential ill-intended uses, just as governments monitor to stop bad actors from misusing any dual-purpose scientific endeavor. Although the recent EO is not meant to constrain research, any attempted solution limiting access to data misses the fact that biological data can already be discovered and shared in encrypted forms beyond government control. The same techniques used today to detect malevolent intentions will work whether large language models (LLMs) and other forms of generative AI have been used or not.
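To make the screening idea concrete, here is a minimal sketch of how a synthesis-order screen might work, flagging orders whose sequences overlap heavily with a watchlist of sequences of concern. The watchlist entries, k-mer size, and threshold below are illustrative assumptions, not the actual standards contemplated by the EO.

```python
# Minimal sketch of nucleic-acid synthesis screening: flag orders whose
# sequences share many k-mers with a watchlist of sequences of concern.
# The watchlist, k-mer size, and threshold are illustrative assumptions,
# not any government-mandated screening standard.

def kmers(seq: str, k: int = 12) -> set[str]:
    """Return the set of all length-k substrings of a DNA sequence."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen_order(order_seq: str, watchlist: list[str],
                 k: int = 12, threshold: float = 0.2) -> bool:
    """Flag an order if the fraction of its k-mers appearing in any
    watchlist sequence exceeds the threshold."""
    order_kmers = kmers(order_seq, k)
    if not order_kmers:
        return False
    for concern_seq in watchlist:
        overlap = len(order_kmers & kmers(concern_seq, k))
        if overlap / len(order_kmers) > threshold:
            return True  # escalate the order for human review
    return False

# Hypothetical usage: both sequences here are made up for illustration.
watchlist = ["ATGGCGTACGTTAGCGGATCCGTTACGGATCGATCGTACG"]
order = "TTATGGCGTACGTTAGCGGATCCGTTACGGA"
print(screen_order(order, watchlist))  # True -> route to human review
```

A real screening regime would rest on curated databases and expert adjudication rather than a simple overlap score, but the sketch shows why such checks operate on observable orders in the real world rather than on access to AI models.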

Third, given how often LLMs and other generative AI systems are wrong, as well as the risk of AI hallucinations, any would-be AI intended to provide advice on biotech will have to be checked by a human expert. Just because an AI can generate possible suggestions and formulations (perhaps even novel formulations of new pathogens or biological materials), it does not mean that what the AI has suggested has any grounding in actual science or will do biochemically what the AI claims the designed material could do. Again, AI by itself does not replace the need for human knowledge to verify that whatever advice, guidance, or instructions are given regarding biological development is accurate.

Moreover, AI does not supplant the role of various real-world patterns and indicators in tipping off law enforcement to potential bad actors employing biological techniques for nefarious purposes. The need to monitor globally for signs of potential biothreats, whether human-produced or natural, existed even before recent advances in AI. Today, with AI, the need to do this in ways that protect societies while still preserving privacy is further underscored.

Knowledge of how to do something is not synonymous with expertise in and experience of doing that thing; the latter requires experimentation and additional review. AIs by themselves can convey information that might foster new knowledge, but they cannot convey expertise without months of a human actor performing in silico (computer-based) or in situ (on-site) experiments or simulations. Moreover, for governments wanting to stop a malicious AI that holds potential bioweapon-generating information, the solution can include introducing uncertainty into the reliability of an AI system's outputs: data poisoning of AIs, whether accidental or intentional, represents a real risk for any type of system. This is also where AI and biotech can reap the biggest benefit. Specifically, AI and biotech can identify indicators and warnings to detect risky pathogens, as well as spot vulnerabilities in global food production and climate-change-related disruptions, making global interconnected systems more resilient and sustainable. Such an approach would not require massive intergovernmental collaboration before researchers could get started; privacy-preserving approaches using economic data, aggregate (and anonymized) supply-chain data, and even general observations from space would be sufficient to begin today.
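Returning to the data-poisoning point above: as a toy illustration of how corrupted training data degrades a model, the sketch below flips a growing fraction of training labels and measures the resulting accuracy drop of a simple classifier. The synthetic dataset, model choice, and poisoning rates are all illustrative assumptions, not a study of any real system.

```python
# Toy illustration of label-flipping data poisoning: corrupt a fraction of
# training labels and observe the drop in test accuracy. Dataset, model,
# and poisoning rates are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for rate in (0.0, 0.2, 0.4):
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_tr), size=int(rate * len(y_tr)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip a fraction of the labels
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    print(f"poison rate {rate:.0%}: test accuracy {model.score(X_te, y_te):.3f}")
```

The same mechanism cuts both ways: it is a hazard for legitimate research systems, yet deliberately degraded reliability is one lever governments could pull against a model being misused.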

Setting aside potential concerns regarding AI being used for ill-intended purposes, the intersection of biology and data science is an underappreciated development of the last two decades. At least two COVID-19 vaccines were designed in a computer and then printed as nucleotides via an mRNA printer. Had this technology not existed, it might have taken an additional two or three years to develop the same vaccines. Even more remarkable, nucleotide printers presently cost only $500,000 and will presumably become less expensive and more capable in the years ahead.

AI can benefit biological research and biotechnology, provided that the right training data and methods are used for AI models. To avoid downside risks, it is imperative that new, collective approaches to data curation and training for AI models of biological systems be developed in the next few years.

As noted earlier, much attention has been paid to both AI and advancements in biological research; some of that attention is grounded in scientific rigor and evidence, while some is driven more by emotional excitement or fear. When setting a solid foundation for a future based on values and principles that support and safeguard all people and the planet, neither science nor emotion alone can be the guide. Instead, considering how projects involving biology and AI can build and maintain trust, despite the challenges of both intentional disinformation and accidental misinformation, can illuminate a positive path forward.

Specifically, in the last few years, attention has focused on the risk of an AI system teaching novice individuals how to create biological pathogens. Yet this attention misses the fact that such a system is only as good as the data sets used to train it; the risk already existed once such data was present on the internet or some other medium. Moreover, an individual cannot gain from an AI the experience and expertise necessary to act on the information provided; such experience comes only from repeated practice in a real-world setting. That repeated work would require access to chemical and biological reagents, which could alert law enforcement authorities, and it would also yield other signatures of preparatory activity in the real world.

Others have raised the risk of an AI system learning from biological data and helping to design more lethal pathogens or other threats to human life. The sheer complexity of the different layers of biological interaction, combined with the tendency of certain types of generative AI to produce hallucinated or inaccurate answers (as this article details in its concluding section), makes this a smaller risk than it might initially seem. Specifically, expert human actors working together across disciplines in a concerted fashion represent a much more significant risk than AI does, and human actors working together for ill-intended purposes (potentially with machines) will presumably present signatures of their attempted activities. Nevertheless, these concerns, and the mix of hype and fear surrounding them, underscore why communities should care about how AI can benefit biological research.

The merger of data and bioscience is one of the most dynamic and consequential elements of the current tech revolution. A human organization with the right goals and incentives can accomplish amazing outcomes ethically, as can an AI. Similarly, with either the wrong goals or the wrong incentives, an organization or an AI can behave unethically. To address the looming impacts of climate change and the challenges of food security, sustainability, and availability, both AI and biological research will need to be employed. For example, significant amounts of nitrogen have already been lost from the soil in several parts of the world, resulting in reduced agricultural yields. In parallel, methane is a pollutant between 22 and 40 times worse than carbon dioxide, depending on the time scale considered, in its contribution to the greenhouse effect. Computationally designed bacteria could harness natural processes to use methane as a source of energy, consuming it before it contributes to the greenhouse effect, while simultaneously returning nitrogen from the air to the soil, thereby making the soil more productive and improving agricultural yields.
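As a quick worked example of that comparison, the snippet below converts a hypothetical quantity of methane emissions into carbon dioxide equivalents at both ends of the cited 22-to-40-times range; the tonnage is an assumed figure for illustration, and the exact multiplier depends on the time horizon used.

```python
# Worked example: expressing methane emissions as CO2-equivalents using the
# 22x-40x range cited above. Global warming potential varies with the time
# horizon considered; the emissions figure below is an assumed illustration.
methane_tonnes = 1_000            # hypothetical annual emissions
for gwp in (22, 40):              # low and high ends of the cited range
    co2e = methane_tonnes * gwp
    print(f"GWP {gwp}x -> {co2e:,} tonnes CO2-equivalent")
# GWP 22x -> 22,000 tonnes CO2-equivalent
# GWP 40x -> 40,000 tonnes CO2-equivalent
```

Framed this way, removing even modest amounts of methane delivers an outsized climate benefit relative to removing the same mass of carbon dioxide.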

The concerns regarding the potential for AI and biology to be used for ill-intended purposes should not overshadow the present conversations about using these technologies to address important regional and global issues. To foster global activities that both encourage the productive use of these technologies for meaningful human efforts and, in parallel, ensure their ethical application, an existing group, the International Genetically Engineered Machine (iGEM) competition, should be expanded. iGEM is a global academic competition, started in 2004, aimed at improving understanding of synthetic biology while developing an open community and collaboration among groups. In recent years, over 6,000 students in 353 teams from 48 countries have participated. Expanding iGEM to include a track for categorizing and monitoring uses of synthetic biology for good, as well as working with national governments to ensure such technologies are not used for ill-intended purposes, would represent two great ways to move forward.

As for AI in general, when considering governance of AIs, especially for future biological research and biotechnology efforts, decision-makers would do well to consider existing and needed incentives and disincentives for human organizations in parallel. It might be that the original Turing Test, devised by computer science pioneer Alan Turing to test whether a computer system is behaving intelligently, is not the best test to consider when gauging local, community, and global trust. The original test involved Computer A and Person B, with B attempting to convince an interrogator, Person C, that B was human and A was not, while Computer A tried to convince Person C that it was human.

Consider the current state of some AI systems: the benevolence of the machine is indeterminate; competence is questionable, because some AI systems do not fact-check and can deliver misinformation with apparent confidence and eloquence; and integrity is absent, since some AI systems will change their stance if a user simply prompts them to do so.

However, these crucial questions regarding the antecedents of trust should not fall upon these digital innovations alone; these systems are designed and trained by humans. Moreover, AI models will improve in the future if developers focus on enhancing their ability to demonstrate benevolence, competence, and integrity to all. Most importantly, consider the other obscured boxes present in human societies, such as decision-making in organizations, community associations, governments, oversight boards, and professional settings. These human activities will also benefit from enhancing their ability to demonstrate benevolence, competence, and integrity to all, in ways akin to what we need to do for AI systems.

Ultimately, to advance biological research, biotechnology, and AI, private- and public-sector efforts need to take actions that strengthen perceptions of benevolence, competence, and integrity (i.e., trust) simultaneously.

David Bray is Co-Chair of the Loomis Innovation Council and a Distinguished Fellow at the Stimson Center.
