Artificial intelligence could encourage war, experts fear

Hollywood fantasy: the reality of AI development is a little more nuanced and a lot less advanced than movies might have us believe.

It's the theme of so many dystopian sci-fi books and movies: a super intelligent machine in charge of lethal military hardware becomes self-aware and decides to wreak havoc. But could it actually happen?

At the Association for the Advancement of Artificial Intelligence's annual conference in Texas last month, a workshop was held on the ethics of AI development and a panel discussed whether or not so-called 'lethal autonomous weapons' should be banned.

"There are many arguments, legal, ethical, political and technical for a ban," Toby Walsh, head of theOptimisation Research Groupat Australia's research body NICTA and chair of the proceedings, told Fairfax Media.

"One that particularly appeals to me is that [autonomous weapons] will lower the barrierto war. If one side can launch an attack, without fear of bodies cominghome, then it is much easier to slip into battle," Professor Walsh said.


While the advent of drones may already be lowering that bar, those in favour of a ban hope to stop a potential arms race in "killer robots" before it begins. On the other hand, there are plenty of voices against a ban.

"Machines are not inherently dangerous", said Francesca Rossi, president of the International Joint Conference on Artificial Intelligence and a participant in the AAAI panel, who points out the huge difference between super-intelligence and sentience.

"We should build [autonomous weapons]by specifying all the relevant context for the desired goal to be achieved by the machine. Otherwise, a goal could be reached by violating some basic assumptions on how we want a machine to behave. Since machines are not sentient, their behaviour depends on how a human built them,"Professor Rossi said.

The specification of "all" relevant context could prove a troublesome task, however, since as Professor Walsh points out, "many ethical principles that we holdas universal are not", and ethics and decision-making processes across different cultures vary.
