Artificial Intelligence Experts Respond to Elon Musk’s Dire Warning for US Governors – Discover Magazine (blog)

Posted: July 19, 2017 at 4:12 am

If you hadn't heard, Elon Musk is worried about the machines.

Though that may seem a quixotic stance for the head of multiple tech companies to take, it seems that his proximity to the bleeding edge of technological development has given him the heebie-jeebies when it comes to artificial intelligence. He's shared his fears of AI running amok before, likening it to "summoning the demon," and Musk doubled down on his stance at a meeting of the National Governors Association this weekend, telling state leaders that AI poses an existential threat to humanity.

Amid a discussion of driverless vehicles and space exploration, Musk called for greater government regulations surrounding artificial intelligence research and implementation, stating:

"Until people see robots going down the street killing people, they don't know how to react because it seems so ethereal. AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it's too late," Musk said, as reported by the MIT Tech Review.

It's far from delusional to voice such concerns, given that AI could one day reach the point where it becomes capable of improving upon itself, sparking a feedback loop of progress that takes it far beyond human capabilities. When we'll actually reach that point is anyone's guess, and we're not at all close at the moment, as today's footage of a security robot wandering blindly into a fountain makes clear.

While computers may be snapping up video game records and mastering poker, they cannot approximate anything like general intelligence: the broad reasoning skills that allow us to accomplish many variable tasks. This is why an AI that excels at a single task, like playing chess, fails miserably when asked to do something as simple as describing a chair.

To get some perspective on Musk's comments, Discover reached out to computer scientists and futurists working on the very kind of AI that the tech CEO warns about.

Elon Musk's obsession with AI as an existential threat for humanity is a distraction from the real concern about AI's impact on jobs and weapons systems. What the public needs is good information about the actual consequences of AI, both positive and negative. We have to distinguish between science and science fiction. In fictional accounts, AI is often cast as the bad guy, scheming to take over the world, but in reality AI is a tool, a technology, and one that has the potential to save many lives by improving transportation, medicine, and more. Instead of creating a new regulatory body, we need to better educate and inform people on what AI can and cannot do. We need research on how to build AI guardians: AI systems that monitor and analyze other AI systems to help ensure they obey our laws and values. The world needs AI for its benefits; AI needs regulation like the Pacific Ocean needs global warming.

Elon Musk's remarks are alarmist. I recently surveyed 300 leading AI researchers, and the majority of them think it will take at least 50 more years to get to machines as smart as humans. So this is not a problem that needs immediate attention.

And I'm not too worried about what happens when we get to super-intelligence, as there's a healthy research community working on ensuring that these machines won't pose an existential threat to humanity. I expect they'll have worked out precisely what safeguards are needed by then.

But Elon is right about one thing: We do need government to start regulating AI now. However, it is the stupid AI we have today that we need to start regulating. The biased algorithms. The arms race to develop killer robots, where stupid AI will be given the ability to make life-or-death decisions. The threat to our privacy as the tech companies get hold of all our personal and medical data. And the distortion of political debate that the internet is enabling.

The tech companies realize they have a problem, and they have made some efforts to avoid government regulation by beginning to self-regulate. But there are serious questions to be asked about whether they can be left to do this themselves. We are witnessing an AI race between the big tech giants, which are investing billions of dollars in this winner-takes-all contest. Many other industries have seen government step in to prevent monopolies from behaving poorly. I've said this in a talk recently, but I'll repeat it again: If some of the giants like Google and Facebook aren't broken up in twenty years' time, I'll be immensely worried for the future of our society.

There are no independent machine values; machine values are human values. If humanity is truly worried about the future impact of a technology, be it AI or energy or anything else, let's have voices from all walks of life represented in developing and applying this technology. Every technologist has a role in making benevolent technology to better our society, whether at Stanford, Google or Tesla. As an AI educator and technologist, my foremost hope is to see much more inclusion and diversity in both the development of AI and the dissemination of AI voices and opinions.

Artificial intelligence is already everywhere. Its ramifications rival those of the Internet, and in fact reinforce them. AI is being embedded in almost every algorithm and system we're building now and in the future. There is an essential opportunity to prioritize ethical and responsible design for AI today. However, the greater immediate risk for AI and society is the prioritization of exponential economic growth while environmental and societal issues are ignored.

In terms of whether Musk's warnings of existential threats regarding artificial super-intelligence merit immediate attention, we actually risk large-scale negative and unintended consequences because we're placing exponential growth and shareholder value above societal flourishing metrics as indicators of success for these amazing technologies.

To address these issues, every stakeholder creating AI must address issues of transparency, accountability and traceability in their work. They must ensure safe and trusted access to, and exchange of, user data, as encouraged by the GDPR (General Data Protection Regulation) in the EU. And they must prioritize human rights-centric well-being metrics, like the UN Sustainable Development Goals, as predetermined global metrics of success that can provably increase human prosperity.

The IEEE Global AI Ethics Initiative created Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems to pragmatically help any stakeholders creating these technologies proactively deal with the general types of ethical issues Musk's concerns bring up. The group of over 250 global AI and ethics experts was also the inspiration behind the series of IEEE P7000 Standards (Model Process for Addressing Ethical Concerns During System Design) currently in progress, designed to create solutions to these issues through a global consensus-building process.

My biggest concern about AI is designing and proliferating the technology without prioritizing ethical and responsible design, or rushing to increase economic growth at a time when we so desperately need to focus on environmental and societal sustainability to avoid the existential risks we've already created without the help of AI. Humanity doesn't need to fear AI, as long as we act now to prioritize its ethical and responsible design.

Elon Musk's concerns about AI that will pose an existential threat to humanity are legitimate and should not be dismissed, but they concern developments that almost certainly lie in the relatively far future, probably at least 30 to 50 years from now, and perhaps much more.

Calls to immediately regulate or restrict AI development are misplaced for a number of reasons, perhaps most importantly because the U.S. is currently engaged in active competition with other countries, especially China. We cannot afford to fall behind in this critical race.

Additionally, worries about truly advanced AI taking over distract us from the much more immediate issues associated with progress in specialized artificial intelligence. These include the possibility of massive economic and social disruption as millions of jobs are eliminated, potential threats to privacy, the deployment of artificial intelligence in cybercrime and cyberwarfare, and the advent of truly autonomous military and security robots. None of these near-term developments relies on the advanced super-intelligence that Musk worries about; they are a simple extrapolation of technology that already exists. Our immediate focus should be on addressing these far less speculative risks, which are highly likely to have a dramatic impact within the next two decades.
