We Need to Update Our Rules for Robotics

Posted: February 28, 2017 at 8:09 pm

As robots become more widely integrated into society, we need to be sure they'll behave well among us. In 1942, science fiction writer Isaac Asimov attempted to lay out a philosophical and moral framework for ensuring robots serve humanity, and for guarding against their becoming destructive overlords. This effort resulted in what became known as Asimov's Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Today, more than 70 years after Asimov's first attempt, we have much more experience with robots, including having them drive us around, at least under good conditions. We are approaching the time when robots in our daily lives will be making decisions about how to act. Are Asimov's Three Laws good enough to guide robot behavior in our society, or should we find ways to improve on them?

Asimov's I, Robot stories explore a number of unintended consequences and downright failures of the Three Laws. In these early stories, the Three Laws are treated as forces with varying strengths, which can produce unintended equilibrium behaviors, as in the stories "Runaround" and "Catch That Rabbit", requiring human ingenuity to resolve. In the story "Liar!", a telepathic robot, motivated by the First Law, tells humans what they want to hear, failing to foresee the greater harm that will result when the truth comes out. The robopsychologist Susan Calvin forces it to confront this dilemma, destroying its positronic brain.

In "Escape!", Susan Calvin depresses the strength of the First Law enough to allow a super-intelligent robot to design a faster-than-light interstellar transportation method, even though it causes the deaths (but only temporarily!) of human pilots. In "The Evitable Conflict", the machines that control the world's economy interpret the First Law as protecting all of humanity, not just individual human beings. This foreshadows Asimov's later introduction of the Zeroth Law, which can supersede the original three, potentially allowing a robot to harm a human being for humanity's greater good:

A robot may not harm humanity or, through inaction, allow humanity to come to harm.

It is reasonable to fear that, without ethical constraints, robots (or other artificial intelligences) could do great harm, perhaps to the entire human race, even by simply following their human-given instructions.

The 1991 movie Terminator 2: Judgment Day begins with a well-known science fiction scenario: an AI system called Skynet starts a nuclear war and almost destroys the human race. Deploying Skynet was a rational decision (it had a perfect operational record). Skynet begins to learn at a geometric rate, scaring its creators, who try to shut it down. Skynet fights back (as a critical defense system, it was undoubtedly programmed to defend itself). Skynet finds an unexpected solution to its problem (through creative problem solving, unconstrained by common sense or morality).

Less apocalyptic real-world examples of out-of-control AI have actually taken place. High-speed automated trading systems have responded to unusual conditions in the stock market, creating a positive feedback cycle resulting in a flash crash. Fortunately, only billions of dollars were lost, rather than billions of lives, but the computer systems involved have little or no understanding of the difference.
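The feedback mechanism behind such a crash can be sketched in a few lines of code. The toy simulation below is my own illustration, not drawn from the article or from any real trading system; the function name and every parameter are invented. It only shows how a simple trend-following rule that sells into a falling market can amplify a small shock into a self-reinforcing collapse.

```python
# Toy sketch of a positive feedback loop among automated traders.
# Hypothetical example: all names and numbers are invented for illustration.

def simulate_flash_crash(start_price=100.0, shock=-2.0, steps=20,
                         momentum_sensitivity=1.2):
    """Each step, trend-following traders react to the last price move with
    orders in the same direction, which amplifies the original shock."""
    prices = [start_price, start_price + shock]  # an initial "unusual condition"
    for _ in range(steps):
        last_move = prices[-1] - prices[-2]
        # Trend-following rule: sell into a falling market, buy into a rising one.
        algorithmic_order_flow = momentum_sensitivity * last_move
        prices.append(max(0.0, prices[-1] + algorithmic_order_flow))
    return prices

if __name__ == "__main__":
    for step, price in enumerate(simulate_flash_crash()):
        print(f"t={step:2d}  price={price:7.2f}")
```

With a sensitivity above 1, each sell-off triggers a larger one and the price collapses within a handful of steps; nothing in the loop represents an understanding of why that outcome matters.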

While no simple fixed set of mechanical rules will ensure ethical behavior, we can make some observations about properties that a moral and ethical system should have in order to allow autonomous agents (people, robots or whatever) to live well together. Many of these elements are already expected of human beings.

These properties are inspired by a number of sources, including the Engineering and Physical Sciences Research Council (EPSRC) Principles of Robotics and recent work on the cognitive science of morality and ethics focused on neuroscience, social psychology, developmental psychology, and philosophy.

The EPSRC takes the position that robots are simply tools, for which humans must take responsibility. At the extreme other end of the spectrum is the concern that super-intelligent, super-powerful robots could suddenly emerge and control the destiny of the human race, for better or for worse. The following list defines a middle ground, describing how future intelligent robots should learn, like children do, how to behave according to the standards of our society.

Human morality and ethics are learned by children over years, but the nature of morality and ethics itself varies with the society and evolves over decades and centuries. No simple fixed set of moral rules, whether Asimov's Three Laws or the Ten Commandments, can be adequate guidance for humans or robots in our complex society and world. Through observations like the ones above, we are beginning to understand the complex feedback-driven learning process that leads to morality.

Benjamin Kuipers, Professor of Computer Science and Engineering, University of Michigan

This article was originally published on The Conversation. Read the original article.
