Should Artificial Intelligence Be Regulated?

Posted: July 28, 2017 at 7:15 pm

By Anthony Aguirre, Ariel Conn and Max Tegmark

Should artificial intelligence be regulated? Can it be regulated? And if so, what should those regulations look like?

These are difficult questions to answer for any technology still in its development stages. Regulations, like those on the food, pharmaceutical, automobile and airline industries, are typically applied after something bad has happened, not in anticipation of a technology becoming dangerous. But AI has been evolving so quickly, and its potential impact is so great, that many prefer not to wait and learn from mistakes, but to plan ahead and regulate proactively.

In the near term, issues concerning job losses, autonomous vehicles, AI and algorithmic decision-making, and bots driving social media require attention from policymakers, just as many new technologies do. In the longer term, though, possible AI impacts span the full spectrum of benefits and risks to humanity, from the development of a more utopian society to the potential extinction of human civilization. As such, AI represents an especially challenging situation for would-be regulators.

Already, many in the AI field are working to ensure that AI is developed beneficially, without unnecessary constraints on AI researchers and developers. In January of this year, some of the top minds in AI met at a conference in Asilomar, Calif. A product of this meeting was the set of Asilomar AI Principles. These 23 principles represent, their drafters hope, a partial guide to help ensure that AI is developed beneficially for all. To date, over 1,200 AI researchers and over 2,300 others have signed on to these principles.

Yet aspirational principles alone are not enough if they are not put into practice, and a question remains: are government regulation and oversight necessary to guarantee that AI scientists and companies follow these principles and others like them?

Among the signatories of the Asilomar Principles is Elon Musk, who recently drew attention for his comments at a meeting of the National Governors Association, where he called for a regulatory body to oversee AI development. In response, news organizations focused on his concerns that AI represents an existential threat. And his suggestion raised concerns with some AI researchers who worry that regulations would, at best, be unhelpful and misguided, and at worst, stifle innovation and give an advantage to companies overseas.

But an important and overlooked comment by Musk related specifically to what this regulatory body should actually do. He said:

The right order of business would be to set up a regulatory agency. Initial goal: gain insight into the status of AI activity, make sure the situation is understood, and once it is, put regulations in place to ensure public safety. That's it. I'm talking about making sure there's awareness at the government level.

There is disagreement among AI researchers about what the risk of AI may be, when that risk could arise, and whether AI could pose an existential risk, but few researchers would suggest that AI poses no risk. Even today, we're seeing signs of narrow AI exacerbating problems of discrimination and job loss, and if we don't take proper precautions, we can expect problems to worsen, affecting more people as AI grows smarter and more complex.

The number of AI researchers who signed the Asilomar Principles, as well as the open letters regarding developing beneficial AI and opposing lethal autonomous weapons, shows that there is strong consensus among researchers that we need to do more to understand and address the known and potential risks of AI.

Some of the principles that AI researchers signed directly relate to Musk's statements, including:

3) Science Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4) Research Culture: A culture of cooperation, trust and transparency should be fostered among researchers and developers of AI.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

The right policy and governance solutions could help align AI development with these principles, as well as encourage interdisciplinary dialogue on how that may be achieved.

The recently founded Partnership on AI, which includes the leading AI industry players, similarly endorses the idea of principled AI development. Its founding document states that "where AI tools are used to supplement or replace human decision-making, we must be sure that they are safe, trustworthy and aligned with the ethics and preferences of people who are influenced by their actions."

And as Musk suggests, the very first step needs to be increasing awareness about AI's implications among government officials. Automated vehicles, for example, are expected to eliminate millions of jobs, which will affect nearly every governor who attended the talk (assuming they're still in office), yet the topic rarely comes up in political discussion.

AI researchers are excited, and rightly so, about the incredible potential of AI to improve our health and well-being; it's why most of them joined the field in the first place. But there are legitimate concerns about the possible misuse or poor design of AI, especially as we move toward more advanced and general AI.

Because these problems threaten society as a whole, they can't be left to a small group of researchers to address. At the very least, government officials need to learn about and understand how AI could impact their constituents, as well as how more AI safety research could help us solve these problems before they arise.

Instead of focusing on whether regulations would be good or bad, we should lay the foundations for constructive regulation in the future by helping our policy-makers understand the realities and implications of AI progress. Let's ask ourselves: how can we ensure that AI remains beneficial for all, and who needs to be involved in that effort?
