Artificial Intelligence Poses 'Extinction Risk' To Humanity Says Oxford University's Stuart Armstrong

Artificial intelligence poses an "extinction risk" to human civilisation, an Oxford University professor has said.

Almost everything about the development of genuine AI is uncertain, Stuart Armstrong at the Future of Humanity Institute said in an interview with The Next Web.

That includes when we might develop it, how such a thing could come about and what it means for human society.

But without more research and careful study, it's possible that we could be opening a Pandora's box. That is exactly the sort of thing the Future of Humanity Institute, a multidisciplinary research hub tasked with asking the "big questions" about the future, is concerned with.

"One of the things that makes AI risk scary is that it's one of the few that is genuinely an extinction risk if it were to go bad. With a lot of other risks, it's actually surprisingly hard to get to an extinction risk," Armstrong told The Next Web.

Above: Student Alejandro Bordallo plays rock-paper-scissors with a robot programmed by scientists to use artificial intelligence to learn strategy as they play.

The thing for humanity to fear is not quite the robots of Terminator ("basically just armoured bears") but a more incorporeal intelligence capable of dominating humanity from within.

The threats posed by such a powerful computer brain would include near-term (and near-total) unemployment, as replacements for virtually all human workers are quickly developed and replicated, but they extend beyond that to genuine risks of widespread anti-human violence.

"Well, it will realise that, say, killing everybody is a solution to its problems, because if it kills everyone and shuts down every computer, no more emails will be sent and, as a side effect, no viruses will be sent."
