Hawking Concerned Advanced AI Could Spell The End Of Mankind

May 4, 2014

redOrbit Staff & Wire Reports Your Universe Online

One of the greatest thinkers in the world believes that artificial intelligence could be the worst thing to happen to humanity, and that the scenario depicted in the recently-released Johnny Depp film Transcendence should not be simply dismissed as a work of science fiction.

Writing in Thursday's edition of the British newspaper The Independent, internationally recognized theoretical physicist Stephen Hawking said that ignoring the deeper lessons of the movie, in which Depp's character has his consciousness uploaded into a quantum computer, only to grow more powerful and become virtually omniscient, would be "a mistake, and potentially our worst mistake in history."

Advancements in artificial intelligence, including driverless vehicles and digital assistants such as Siri and Cortana, are often viewed as ways to make life easier for mankind, explained Daily Mail reporter Ellie Zolfagharifard.

However, Hawking expresses concern that they could ultimately lead to our downfall unless we prepare for the potential risks, such as how to respond to technology that gains the ability to think independently and adapt to its environment.

"The potential benefits are huge; everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone's list," Hawking wrote. "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks."

One prime concern is the development of autonomous weapons systems capable of selecting and eliminating targets, weapons that the UN and Human Rights Watch have proposed banning via treaty. Such weaponized machines "could grow into something straight out of the Terminator movies: becoming self-aware, constantly improving their own design and essentially becoming unstoppable," noted SlashGear's Nate Swanner.

"One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand," said Hawking. "Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."

Hawking, research director of Cambridge University's Department of Applied Mathematics and Theoretical Physics, also seems dubious about those who claim to be experts in artificial intelligence, according to CNET writer Chris Matyszczyk.
