Watch Google’s Igor Markov explain how to avoid the AI apocalypse – VentureBeat

Posted: June 6, 2017 at 6:15 am

An attack by artificial intelligence on humans, said Google software engineer and University of Michigan professor Igor Markov, would be sort of like when the Black Plague hit Europe in the 14th century, killing up to 50 percent of the population.

"Virus particles were very small and there were no microscopes or notion of infectious diseases, there was no explanation, so the disease spread for many years, killed a lot of people, and at the end no one understood what happened," he said. "This would be illustrative of what you might expect if a superintelligent AI would attack. You would not know precisely what's going on, there would be huge problems, and you would be almost helpless."

Rather than devising technological solutions, Markov looked to lessons from ancient history in a recent talk about how to keep superintelligent AI from harming humans.

Markov joined sci-fi author David Brin and other influential names in the artificial intelligence community Friday at The AI Conference in San Francisco.

One lesson from early humans that could help in the fight against AI: make friends. Domesticate AI the same way Homo sapiens turned wolves into their protectors and friends.

"If you are worried about potential threats, then try to use some of them for protection or try to adapt or domesticate those threats. So you might develop a friendly AI that would protect you from malicious AI or track unauthorized accesses," he said.

Markov began and ended his presentation by calling himself an amateur and saying he doesn't have all the answers, but he also said he has been thinking about ways to prevent an AI takeover for more than a year. He now believes the most important way for humans to prevent the rise of malicious AI is to put in place a series of physical-world restraints.

"The bottom line here is that intelligence, either hostile or friendly, would be limited by physical resources, and we need to think about physical resources if we want to limit such attacks," he said. "We absolutely need to control access to energy sources of all kinds, and we need to be very careful about physical and network security of critical infrastructure, because if that is not taken care of, then disasters can obviously happen."

Drawing on his background in hardware design, Markov suggested that powerful systems be kept separate from one another and that deficiencies be built in to act as a kill switch, because if superintelligent AI ever arises, it will likely be by accident.

He also urged that limits be placed on the self-repair, replication, or improvement of AI, and that specific scenarios be considered, such as a nuclear weapons attack or the use of biological weapons.

"Generally, each agent, each part of your AI ecosystem, needs to be designed with some weakness. You don't want agents to be able to take over everything, right? So you would control agents through these weaknesses and separation of powers," he said. "In the discipline of electronic hardware design, we use abstraction hierarchies. We go from transistors to CPUs to data centers, and each level typically has a well-defined function, so if you're looking at this from the perspective of security, if you are defending against something, you would want to limit or regulate every level, and you would want the same type of limitations for AI."
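
As a loose, purely illustrative sketch of the "built-in weakness" idea (not anything Markov presented), one might picture an agent wrapper whose action budget and kill switch are held by its operator rather than by the agent's own logic:

    # Illustrative only: an agent wrapper whose "weaknesses" -- a hard action
    # budget and an externally held kill switch -- are enforced outside the
    # agent itself, so the agent cannot remove them.
    import threading

    class BudgetedAgent:
        def __init__(self, step_fn, max_steps, kill_switch):
            self.step_fn = step_fn          # the agent's own decision logic
            self.max_steps = max_steps      # hard cap on actions (a built-in weakness)
            self.kill_switch = kill_switch  # threading.Event held by the operator
            self.steps_taken = 0

        def act(self, observation):
            if self.kill_switch.is_set():
                raise RuntimeError("Operator halted the agent")
            if self.steps_taken >= self.max_steps:
                raise RuntimeError("Action budget exhausted")
            self.steps_taken += 1
            return self.step_fn(observation)

    kill_switch = threading.Event()
    agent = BudgetedAgent(step_fn=lambda obs: f"acted on {obs}",
                          max_steps=3, kill_switch=kill_switch)
    print(agent.act("sensor reading"))   # allowed while budget remains
    kill_switch.set()                    # operator pulls the plug
    # any further agent.act(...) call now raises RuntimeError

The point of the sketch is only that the limits live outside the agent, mirroring the "separation of powers" Markov described.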

Markov's presentation relied on predictions made by Ray Kurzweil, who believes that in a decade, virtual reality will be indistinguishable from real life, after which computers will surpass humans. Then, through augmentation, humans will become more machine-like until we reach the Singularity.

Markov also pointed out that there is a range of opinions on malicious AI. Stephen Hawking believes AI will eventually supersede humankind, telling the BBC, "The development of full artificial intelligence could spell the end of the human race."

In contrast, former Baidu AI head Andrew Ng said last year that people should be as concerned about malicious AI as they are about overpopulation on Mars.
