‘Crisis of Control’: AI Risks Could Lead to Utopia or Destruction – Voice of America (blog)

Posted: August 4, 2017 at 1:35 pm

An illustration projected on a screen shows a robot hand and a human one moving towards each others during the AI for Good Global Summit at the International Telecommunication Union (ITU) in Geneva, Switzerland, June 7, 2017. (Reuters)

Hardly a day goes by without news of a breakthrough in machine intelligence or a debate about its pros and cons, most recently between Facebook's Mark Zuckerberg and Tesla's Elon Musk. Adding his voice to the mix, author and IT specialist Peter Scott warns that rapid AI growth comes with serious risks that, if successfully mitigated, could take humanity to a new level of consciousness.

"If we build ethical artificial intelligence and it becomes superintelligent, it could become our partner."

In "Crisis of Control: How Artificial SuperIntelligences May Destroy or Save the Human Race," Scott, a former contractor with NASA's Jet Propulsion Laboratory, argues that there are two risks associated with rapid AI development. If these dangers are successfully mitigated, "they will propel us into a new utopia," he said. Failing that, they could lead to the destruction of the human race.

FILE Product and graphic designer Ricky Ma, 42, poses with his life-size robot Mark 1, modeled after a Hollywood star, in Hong Kong, China, March 31, 2016. (Reuters)

The first risk is that AI could put biological weapons and weapons of mass destruction in the hands of average people, enabling someone in a garage to create a killer virus capable of wiping out millions.

The second is that as the technology becomes more prevalent, someone could accidentally or deliberately cause a disaster through the internet networks connecting global infrastructure. This "crisis of control," as he calls it, is the question of whether we can control what we create.

"Will we be able to control the results of this technology, the technology itself?" he asked. "There's always been a debate about technology going back to at least the atom bomb, if not the sword, but the further we get, the more volatility there is because of the large-scale potential effects of this technology."

There have been multiple revolutions throughout history that changed the way people lived and worked. But Scott said this time is different.

"Where do we go from there? What's left? There really isn't much room above that in what you would call a hierarchy."

FILE A woman inputs orders for a robot that works as a waitress in a restaurant in Xi'an, Shaanxi Province, China, April 20, 2016. (Reuters/China Stringer Network)

One could argue that humans will still be needed to program and maintain their intelligent machines. "But that is also a knowledge-transfer function," said Scott. "The point at which machines learn that job will transform the world in an instant because they will do it much, much faster. And the big question is when will that happen?"

That could be in 10 or 50 years. Whenever it happens, humans will need to come up with a new basis for employment that machines cannot fill, he said. "And it's very hard to see what that might be in an era where machines can think as well as a human being."

Alarm bells are already sounding about the risks of automation to human workers. Scott predicts AI will take over jobs traditionally seen as the pinnacle of a career, such as chief executive officer, chief technology officer and chief financial officer. It will take longer to automate jobs like those of therapists and psychologists, which require sensory skills and an acute understanding of the human psyche grounded in human experience.

But the process has already begun, with AI systems like IBM's Watson tackling complex medical problems. And the boundaries of what we call artificial intelligence keep getting moved, he said. AI, which was little more than parlor tricks back in the 1980s, now extends to chatbots, humanoids like China's Jiajia robot, and voice assistants holding conversations with humans, the stuff of science fiction.

FILE A man takes pictures with the humanoid robot Jiajia, produced by the University of Science and Technology of China, at Jiajia's launch event in Hefei, Anhui province, April 15, 2016. Jiajia can converse with humans and imitate facial expressions, among other features. (Reuters/China Stringer Network)

Science fiction writers have already tackled some of these dilemmas. In the 1940s, prominent science fiction writer and biochemist Isaac Asimov introduced the Three Laws of Robotics to govern the creation and ethics of intelligent machines.

There are similar efforts underway to create a set of AI ethics. In January, a group of AI experts came up with the Asilomar Principles, 23 statements they agreed upon for creating ethical artificial intelligence.

But it's not just about ethics. A "new renaissance of the study of the human heart" is needed, said Scott, to deal with the threats not only of machine intelligence but of people who could wreak havoc if they get their hands on this technology. Given enough attention and funding, he said, the next revolution will be in human consciousness.

His hope is that professions that repair wounds in the human heart will evolve in partnership with an ethical AI to develop medicines more quickly; cure cancer, disease and aging; and "perhaps have something to teach us in psychology, in philosophy, ethics as well."

"If we do that, then we will be able to coexist on a planet that has a new species of silicon beings that are many times more intelligent than us."
