Should Self-Driving Cars Make Ethical Decisions Like We Do?

Posted: July 11, 2017 at 10:26 pm

An enduring problem with self-driving cars has been how to program them to make ethical decisions in unavoidable crashes. A new study has found it's actually surprisingly easy to model how humans make them, opening a potential avenue to solving the conundrum.

Ethicists have tussled with the so-called trolley problem for decades. If a runaway trolley, or tram, is about to hit a group of people, and by pulling a lever you can make it switch tracks so it hits only one person, should you pull the lever?

But for those designing self-driving cars the problem is more than just a thought experiment, as these vehicles will at times have to make similar decisions. If a pedestrian steps out into the road suddenly, the car may have to choose between swerving, which could injure its passengers, and knocking down the pedestrian.

Previous research had shown that the moral judgements at the heart of how humans deal with these kinds of situations are highly contextual, making them hard to model and therefore replicate in machines.

But when researchers from the University of Osnabrück in Germany used immersive virtual reality to expose volunteers to variations of the trolley problem and studied how they behaved, they were surprised at what they found.

"We found quite the opposite," Leon Sütfeld, first author of a paper on the research in the journal Frontiers in Behavioral Neuroscience, said in a press release. "Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object."

The implication, the researchers say, is that human-like decision making in these situations would not be that complicated to incorporate into driverless vehicles, and they suggest this could present a viable solution for programming ethics into self-driving cars.
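To give a rough sense of what a value-of-life-style decision rule might look like in code, here is a minimal Python sketch. The object classes, weights, and function names are hypothetical placeholders invented for illustration; they are not taken from the study's actual model.

```python
# Illustrative sketch only: a toy "value-of-life" chooser.
# The classes and weights below are made-up placeholders, not study values.
VALUE_OF_LIFE = {
    "adult": 1.0,
    "child": 1.2,
    "dog": 0.3,
    "trash_can": 0.01,
}

def expected_loss(obstacles):
    """Sum the value-of-life weights of everything a maneuver would hit."""
    return sum(VALUE_OF_LIFE.get(kind, 0.0) for kind in obstacles)

def choose_maneuver(options):
    """Pick the maneuver whose impacted objects carry the least total value.

    `options` maps a maneuver name to the list of object classes it would hit.
    """
    return min(options, key=lambda name: expected_loss(options[name]))

if __name__ == "__main__":
    dilemma = {
        "stay_in_lane": ["adult", "adult"],
        "swerve_left": ["dog"],
    }
    print(choose_maneuver(dilemma))  # -> "swerve_left"
```

The point of the sketch is only that a single weight per class, plus a comparison of summed weights, is enough to reproduce the kind of choices the study describes.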

"Now that we know how to implement human ethical decisions into machines we, as a society, are still left with a double dilemma," Peter König, a senior author of the paper, said in the press release. "Firstly, we have to decide whether moral values should be included in guidelines for machine behavior and secondly, if they are, should machines act just like humans."

There are clear pitfalls with both questions. Self-driving cars present an obvious case where a machine could have to make high-stakes ethical decisions that most people would agree are fairly black or white.

But once you start insisting on programming ethical decision-making into some autonomous systems, it could be hard to know where to draw the line.

Should a computer program designed to decide on loan applications also be made to mimic the moral judgements a human bank worker would most likely make face-to-face with a client? What about one meant to determine whether or not a criminal should be granted bail?

Both represent real examples of autonomous systems operating in contexts where a human would likely incorporate ethical judgements in their decision-making. But unlike the self-driving car example, a person's judgement in these situations is likely to be highly colored by their life experience and political views. Modeling these kinds of decisions may not be so easy.

Even if human behavior is consistent, that doesn't mean it's necessarily the best way of doing things, as König alludes to. Humans are not always very rational and can be afflicted by all kinds of biases that could feed into their decision-making.

The alternative, though, is to hand-code morality into these machines, an approach fraught with complications. For a start, the chances of reaching an unambiguous consensus on what particular ethical code machines should adhere to are slim.

Even if you can, though, a study in Science I covered last June suggests it wouldn't necessarily solve the problem. A survey of US residents found that most people thought self-driving cars should be governed by utilitarian ethics that seek to minimize the total number of deaths in a crash, even if it harms the passengers.

But it also found most respondents would not ride in these vehicles themselves or support regulations enforcing utilitarian algorithms on them.

In the face of such complexities, programming self-driving cars to mimic peoples instinctive decision-making could be an attractive alternative. For a start, building models of human behavior simply required the researchers to collect data and feed it into a machine learning system.
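As a rough illustration of that workflow, the sketch below fits a simple logistic regression to made-up choice data, so the learned coefficients end up playing the role of per-class values of life. The features, data, and scikit-learn setup are assumptions for illustration, not the researchers' actual pipeline.

```python
# Minimal sketch: fit a simple classifier to observed dilemma choices.
# The feature encoding and data are invented; this is not the study's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: counts of (adults, children, dogs) hit by option A minus option B.
# Label: 1 if the participant chose option A, 0 if they chose option B.
X = np.array([
    [ 1,  0, -1],   # A hits an adult, B hits a dog -> participant chose B
    [-1,  0,  1],
    [ 0,  1, -1],
    [ 2, -1,  0],
])
y = np.array([0, 1, 0, 0])

model = LogisticRegression().fit(X, y)

# The learned coefficients act like per-class weights: the larger the
# magnitude, the more hitting that class sways the predicted choice.
print(model.coef_)
```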

Another upside is that it would prevent a situation where programmers are forced to write algorithms that could potentially put people in harm's way. By basing the behavior of self-driving cars on a model of our collective decision-making, we would, in a way, share the responsibility for the decisions they make.

At the end of the day, humans are not perfect, but over the millennia we've developed some pretty good rules of thumb for life-and-death situations. Faced with the potential pitfalls of trying to engineer self-driving cars to be better than us, it might just be best to trust those instincts.

Stock Media provided by Iscatel / Pond5
