Sorry, Elon Musk. AI is not a bigger threat than North Korea – VentureBeat

Posted: August 13, 2017 at 2:16 am

Regulations, sanctions, rules: they are not always pure evil, as some might suggest. The regulations that keep a commuter rail safe, or the sanctions the U.S. government uses to manage relations with foreign countries, are necessary, not evil.

Yet, when it comes to AI, do we really need to worry?

Elon Musk has gone on the offensive, attempting to convince us that AI needs to be more regulated because it could spin out of control. He tweeted that the dangers we face from AI and machine learning pose vastly more risk than North Korea. He followed up that tweet by saying that everything that's a danger to the public is regulated, including cars and planes.

The problem with this line of thinking, of course, is that an AI is a piece of software. A plane weighs over 350,000 pounds and can fall out of the sky. Where are we in the continuum of machines taking over? In an infant stage, not even crawling or walking yet. We might want to avoid hysterics.

Still, some of the reactions have been quite interesting.

One user said it was inappropriate to compare a nuclear threat to AI. One said the real danger is humans creating AI that doesn't work. Another pointed out the obvious: if there is a nuclear war, it might not matter if the machines take over. We'll all be dead.

The problem with the "end of the world thanks to AI" discussion is that we never get into specifics. It's a random tweet comparing machine intelligence to nuclear war. It's another random tweet talking about regulation. But what kind of AI should be regulated? By whom, and where? What are the actual dangers? The problem with fear-mongering about AI is that there are no obvious examples of a machine actually causing mass destruction, at least not yet. We hear about failed automations, of cars driving themselves off the road, of a chatbot app crashing.

Musk has noted before that we should regulate now, before it gets out of hand. Again, he hasn't explained what should be regulated. Microsoft Word? Chatbots? The subroutines in a home sensor that shut off your sprinkler system? Satellites? Autonomous trucks? Let's get the subject out in the open, get into the specifics of regulation, and see where that takes us, because my guess is that the companies making chatbots don't need to be regulated as much as they need to be told to make better and more useful bots with the funding they already have.

Or is this all about the laws of robotics? If that's the case, we run into a brand-new problem: what is a robot? I'm sure Isaac Asimov never predicted that there would be a catbot that tells us the weather forecast (if he did, I apologize to all science fiction fans everywhere). Let's regulate the catbots before they get out of hand, right? Next up: the dogbots.

The issue is pretty clear: When you start talking about specific regulations and dangers, they become a bit laughable. What are we really asking Congress to do, anyway? And when you start talking about machines taking over because they want to destroy humanity, well, it's too late. You're a piece of toast and the bots won. We need to get granular, not broad.

Do you agree? Disagree? If you have a reasonable argument to make about the dangers (or maybe the catbots), please send it to me. I promise to respond if you're interested in a civil discourse.

