This Is What Happens When We Debate Ethics in Front of Superintelligent AI

Posted: March 19, 2017 at 4:28 pm

Is there a uniform set of moral laws, and if so, can we teach artificial intelligence those laws to keep it from harming us? This is the question explored in an original short film recently released by The Guardian.

In the film, the creators of an AI with general intelligence call in a moral philosopher to help them establish a set of moral guidelines for the AI to learn and follow, which proves to be no easy task.

Complex moral dilemmas often don't have a clear-cut answer, and humans haven't yet been able to translate ethics into a set of unambiguous rules. It's questionable whether such a set of rules can even exist, as ethical problems often involve weighing factors against one another and seeing the situation from different angles.

So how are we going to teach the rules of ethics to artificial intelligence, and by doing so, avoid having AI ultimately do us great harm or even destroy us? This may seem like a theme from science fiction, yet it's become a matter of mainstream debate in recent years.

OpenAI, for example, launched in late 2015 with a billion dollars in pledged funding to learn how to build safe and beneficial AI. And earlier this year, AI experts convened in Asilomar, California, to debate best practices for building beneficial AI.

Concerns have been voiced about AI being racist or sexist, reflecting human bias in ways we didn't intend, but an AI can only learn from the data available to it, and in many cases that data is very human.

As much as the engineers in the film insist ethics can be solved and there must be a definitive set of moral laws, the philosopher argues that such a set of laws is impossible, because ethics requires interpretation.

There's a sense of urgency to the conversation, and with good reason: all the while, the AI is listening and adjusting its algorithm. One of the most difficult to comprehend, yet most crucial, features of computing and AI is the speed at which it's improving, and the sense that progress will continue to accelerate. As one of the engineers in the film puts it, "The intelligence explosion will be faster than we can imagine."

Futurists like Ray Kurzweil predict this intelligence explosion will lead to the singularity: a moment when computers, advancing their own intelligence in an accelerating cycle of improvements, far surpass all human intelligence. The questions both in the film and among leading AI experts are what that moment will look like for humanity, and what we can do to ensure artificial superintelligence benefits rather than harms us.

The engineers and philosopher in the film are mortified when the AI offers to act just like humans have always acted. The AI's idea to instead learn only from history's religious leaders is met with even more anxiety. If artificial intelligence is going to become smarter than us, we also want it to be morally better than us. Or, as the philosopher in the film so concisely puts it: "We can't rely on humanity to provide a model for humanity. That goes without saying."

If we're unable to teach ethics to an AI, it will end up teaching itself, and what will happen then? It just may decide we humans can't handle the awesome power we've bestowed on it, and it will take off, or take over.

Image Credit: The Guardian/YouTube
