Once a recurring scourge that blinded, scarred and killed millions, smallpox was eradicated by a painstaking public health effort that saw the last natural infection occur in 1977. In what some consider an instructive moment in biosecurity (rather than a mere footnote), Janet Parker, a British medical photographer, died of smallpox the following year, after being exposed to Variola virus, the causative agent of smallpox, while working one floor above a laboratory at Birmingham University. The incident in which she lost her life was referred to as an unnatural infection: one occurring outside the usual context of infectious disease.
The orthopox genus, of which Variola virus is part, holds a central role in the history of infectious disease and biodefence and has had a lasting impact on human society. Mousepox, cowpox, the clumsily named monkeypox and other pox viruses all belong to the same genus.
There are only two known places in which Variola virus remains: a high-containment laboratory in Russian Siberia, and a secure Centers for Disease Control and Prevention (CDC) facility in Atlanta in the United States. Neither the Russian Federation nor the United States has yet destroyed its smallpox stockpile, for reasons that relate more to the strictures of geopolitics than the needs of ongoing research. At the end of the first Cold War, as a US-led Coalition was poised to launch Operation Desert Storm, fear of both biological and chemical warfare returned. Saddam Hussein had deployed mustard gas and other chemical agents against Kurdish civilians at Halabja, killing more than 5,000 people. In the years preceding that atrocity, scores of military personnel bore the brunt of blistering agents, nerve agents and other chemical weapons in Iraq's protracted war against Iran.
Biological weapons were the next presumed step on Saddam's ladder of escalation should he feel threatened by the US-led Coalition that gathered in the Saudi desert after his invasion of Kuwait. The weapons program Iraqi scientists had overseen since the 1980s had brought aflatoxins and botulinum toxin to the point of weaponisation, if not deployment. Bacillus anthracis, the bacterium that causes anthrax, was a proximate concern to Coalition troops as a potential battlefield weapon. But the biggest question was whether Saddam had access to Variola virus. Smallpox, a disease with pandemic potential, was a strategic weapon with international reach, one that might even be deployed behind Coalition lines by a small team.
Fear of such a scenario returned with the onset of the global War on Terror in the early 2000s, and so governments from Europe to Australia began stockpiling smallpox vaccines for use in the event of a future attack. After the Islamic State in Iraq and the Levant (ISIL) suddenly seized swathes of territory in Iraq and Syria in mid-2014, the group repeatedly deployed chemical weapons against civilians, and reportedly made attempts at acquiring biological weapons as well. In 2016, as ISIL's caliphate reached its brief zenith, a Canadian scientist on the other side of the world was working to create a safer vaccine against smallpox. The researcher was engaged by a US biotech company that wanted a smallpox shot that did not carry the risk of reversion, a situation in which inoculation can cause active infection or even death (something happily not possible with most vaccines).
As part of this effort, the researcher needed a related orthopox virus to use as a viral vector. To this end, their team embarked on the de novo synthesis of horsepox, a less pathogenic orthopox virus. This step, the reconstruction of a hitherto eradicated pox virus, came to be seen as a Rubicon in the field of biosecurity. For the first time, an orthopox virus was created from scratch using information and material derived from purely commercial sources, and it cost only around $100,000.
Horsepox was, of course, not the first virus to be rebuilt or enhanced in a laboratory setting. In 2005, a team reconstructed the H1N1 virus responsible for the Spanish influenza pandemic that killed between 20 and 50 million people in 1918-19, using reverse genetic techniques that were cutting-edge at the time. In 2002, another research group at the State University of New York created the first entirely artificial virus, a chemically synthesised strain of polio. A year before, in 2001, an Australian team investigating contraceptives for use on the rodent population accidentally modified a form of ectromelia, the virus that causes mousepox, to a point that made it resistant to available pox vaccines.
What made the horsepox development such a watershed moment was the ease with which the necessary materials and genetic information were acquired. The team bought access to DNA fragments based on a horsepox outbreak that occurred in Mongolia decades earlier, in 1976. A DNA synthesis company, GeneArt, was engaged to construct the fragments. Hence, a small team seeking to obtain and propagate a similar pox virus with pandemic potential (say, smallpox) need not physically get hold of it in full form. Nor would they need access to a government-run lab, or the certification of tightly restricted procurement channels, to do so. Instead, the virus could be recreated using means and material easily available to any private citizen, for minimal cost.
Such techniques, which are well established now, undeniably have many beneficial uses. At the onset of the Covid pandemic, when authorities in China were less than forthcoming with information, the genetic sequence of SARS-CoV-2 was published on the internet, but only after some skittish manoeuvring by Western researchers and their colleagues based in China, who were under government pressure not to share it. Belated though this development was, it allowed scientists across the world to begin designing medical countermeasures. Similar processes are used to keep track of viral evolution during other epidemics, to monitor the emergence of new variants of concern, or to detect changes in a pathogen that could cause more severe disease.
Much has transpired in the fields of chemistry and synthetic biology since 2017, and even more has happened in the field of artificial intelligence. When chemistry, biology and AI are combined, what was achieved with horsepox by a small team of highly trained specialists could soon be done by an individual with scientific training below the level of a doctorate. Instead of horsepox or even smallpox, any such person could soon synthesise something far deadlier, such as Nipah virus. It might equally be done with a strain of avian influenza, which public health officials have long worried may one day gain the ability to spread efficiently between humans. Instead of costing $100,000, such a feat will soon require little more than $20,000, a desktop whole genome synthesiser and access to a well-informed large language model (LLM), if some of the leading personalities in generative AI are to be believed.
Some alarming conversation has been had in recent months over the potential for new artificial intelligence platforms to present existential risks. Much of this anxiety has revolved around future iterations of AI that might lead to a takeoff in artificial superintelligence that could surpass, oppress or extinguish human prosperity. But a more proximate threat is contained within the current generation of AI platforms. Some of the key figures in AI design, including Mustafa Suleyman, co-founder of Google DeepMind, admit that the large language models accessible to the public since late 2022 have sufficient potential to aid in the construction of chemical or biological weapons.
Media descriptions of such risks have so far been mostly vague. But the manner in which LLMs could aid malicious actors in this domain is simple: by lowering the informational barriers to constructing pathogens. In much the same way AI platforms can act as a wingman for fighter pilots navigating the extremes of aerial manoeuvre in combat, an LLM with access to the right literature in synthetic biology could help an individual with minimal training overcome the difficulties of creating a viable pathogen with pandemic potential. While some may scoff at this idea, it is a scenario that AI designers have been actively testing with specialists in biodefence. Their conclusion was that little more than postgraduate training in biology would be enough.
This does not mean that (another) pandemic will result from the creation of a synthetic pathogen in the coming years. Avenues for managing such risks can be found in institutions that have already proven central to the control of biological and chemical weapons. One such forum, the Australia Group, could be the perfect place to kickstart a new era of counter-proliferation in the age of AI.
Founded in 1985 in the midst of the Iran-Iraq war, the Australia Group (AG) initially focused on controlling precursor chemicals used in the unconventional weapons that killed scores of people on the Iran-Iraq frontline. The AG has since evolved to harmonise regulation of many dual-use chemical and biological components via comprehensive common control lists. But the dawn of a new age in artificial intelligence, coming as it has after 20 years of frenetic progress in synthetic biology, presents new challenges. As an established forum, the Australia Group could provide an opportunity for the international community to get ahead of this new threat landscape before it is too late.
It has been nearly four years since SARS-CoV-2, the virus that causes Covid-19, went from causing a regional epidemic in the Chinese city of Wuhan to causing a worldwide pandemic. At the time of writing, the question of how the virus first entered the human population remains unresolved. Several ingredients make both a natural zoonotic event and an unnatural, research-related infection plausible scenarios. The first of these relate to the changing ecologies in which viruses circulate, the increasingly intense interface between humans and animals amid growing urbanisation, and the international wildlife trade. Regarding the latter possibility, that the virus may have emerged in the course of research gone awry, it is now a well-documented fact that closely related coronaviruses were being subjected to both in-field collection and laboratory-based experimentation in the years approaching the pandemic. (Whether or not a progenitor to SARS-CoV-2 was held in any nearby facility remains in dispute.)
Whatever the case, the next pandemic may not come as a result of a research-related accident, or an innocent interaction between human and animal; it may instead be a feature of future conflict. Many of the same ingredients that were present in 2017 remain in place across the world today, with the new accelerant of generative AI as an unwelcome addition. Added to this is a new era of great power competition, an ongoing terrorist threat, and the rise of new sources of political extremism. The Australia Group has the chance to act now, before we see the use of chemical or biological weapons at any of these inflection points, all of which are taking place amid a new age of artificial intelligence.
Excerpt from:
Managing risk: Pandemics and plagues in the age of AI - The Interpreter