Redlining the robots

Posted: November 21, 2021 at 9:58 pm

Around the same time that Isaac Asimov published his short story introducing the laws of robotics in 1942, the world's first nuclear reactor was being built under the viewing stands of a football field at the University of Chicago.

There had been some misgivings about initiating a chain reaction in the middle of a densely populated city, but Enrico Fermi, the Italian physicist leading the experiment, calculated that it was safe to do so. On its initial successful run, the Chicago Pile-1 reactor ran for four minutes, generating less than a watt of power, about enough to illuminate one small Christmas tree ornament. The reaction was a major step in the development of nuclear energy, but it was also one of the earliest technical achievements of the Manhattan Project, the US-led initiative during the Second World War culminating in the atomic bombs that incinerated Hiroshima and Nagasaki two and a half years later.

The scientists involved knew that their work had the potential for creation as well as destruction. The question was how to ensure that its beneficial use in power generation and medicine did not also lead to the proliferation of weapons that threatened the existence of humanity.

After the conclusion of the war, that question was the subject of the very first resolution passed by the General Assembly of the United Nations. It created a commission tasked with recommending how to eliminate such weapons, while enabling all nations to benefit from peaceful uses of nuclear energy.

When it comes to artificial intelligence, the history of efforts to safeguard nuclear power is relevant. First, like AI, this is an example of a technology with enormous potential for good and ill that has, for the most part, been used positively. Nuclear power, though currently out of favour, is one of the few realistic energy alternatives to hydrocarbons; its use in medicine and agriculture is more accepted and widespread. Observers during the dark days of the Cold War anticipated this, but they would have been surprised to learn that nuclear weapons would not be used in conflict after 1945, and that only a handful of states would possess them the better part of a century later.

Secondly, the international regime offers a possible model for the regulation of AI at the global level. The grand bargain at the heart of the International Atomic Energy Agency (IAEA) was that the beneficial uses of the technology could be shared, in tandem with a mechanism to ensure that those were the only purposes to which it was applied. That trade-off raised the level of trust between the then-superpowers, as well as between the nuclear haves and have-nots.

The equivalent weaponisation of AI, whether narrowly (through the development of autonomous weapons systems) or broadly (in the form of a general AI or superintelligence that might threaten humanity), is today beyond the capacity of most states. For weapons systems, at least, that technical gap will not last long. Much as the small number of nuclear-armed states is due both to the decision of many states not to develop such weapons and to a non-proliferation regime that verifies this, limits on the dangerous application of AI will need to rely on the choices of states as well as on enforcement.

Clearly, it will be necessary to establish red lines to prohibit certain activities. Weaponised or uncontainable AI are the most obvious candidates. Mere reliance on industry self-restraint will not preserve such prohibitions. Moreover, if those red lines are to be enforced consistently and effectively then some measure of global coordination and cooperation is required. Here the analogy with nuclear weapons is most pertinent.

The effective regulation of AI requires norms and institutions that operate at the global level. The creation of an organisation like the IAEA, what I call the IAIA (an International Artificial Intelligence Agency), could permit a deal in which countries with capacity, like the United States and China, agree to share some of that technology around the world, so that all countries get the benefits of AI optimisation. But those benefits would come in exchange for a promise not to weaponise AI in the form of lethal autonomous weapons, as well as a commitment to guard against the possibility of a rogue or malignant superintelligence that is uncontrollable or uncontainable.

This new grand bargain could offer benefits comparable to nuclear energy's contributions to power, medicine and agriculture.

Moreover, unless there is some level of global regulation, it would be too easy for AI technology to move around the world. In the absence of global agreement, it would be impossible to enforce any of the red lines that we need, most immediately to stop lethal autonomous weapons from being given the power to make life-and-death decisions on the battlefield.

All this might sound naïve, but in addition to nuclear weapons, we've seen other dangerous technologies outlawed: chemical and biological weapons, human cloning, and so on. The dangers posed by AI may seem further off in the future, but it is already clear that those dangers, and the means of addressing them, have moved beyond the realm of science fiction.

This is an edited excerpt from Simon Chesterman's new book, We, The Robots?, in which he makes the case for a new global agency to regulate the development of artificial intelligence. You can read more about the book here.

