Study Finds That Human Ethics Could Be Easily Programmed Into Driverless Cars

Posted: July 8, 2017 at 3:40 am

In Brief
A study has found that it would be fairly simple to program autonomous vehicles to make moral decisions similar to those of human drivers. In light of this, the question becomes whether we want driverless cars to emulate us or to behave differently.

Programming Morality

A new study from the Institute of Cognitive Science at the University of Osnabrück has found that the moral decisions humans make while driving are not as complex or context-dependent as previously thought. Based on the research, which has been published in Frontiers in Behavioral Neuroscience, these decisions follow a fairly simple value-of-life-based model, which means programming autonomous vehicles to make ethical decisions should be relatively easy.

For the study, 105 participants were put in a virtual reality (VR) scenario during which they drove around suburbia on a foggy day. They then encountered unavoidable dilemmas that forced them to choose between hitting people, animals, and inanimate objects with their virtual car.

"The previous assumption was that these types of moral decisions were highly contextual and therefore beyond computational modeling. But we found quite the opposite," Leon Sütfeld, first author of the study, told Science Daily. "Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object."
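In other words, the model reduces each potential collision target to a single value-of-life score and selects the outcome that sacrifices the least total value. The Python sketch below is a minimal illustration of that idea under our own assumptions; the Obstacle class, the choose_path function, and all numeric values are hypothetical and are not taken from the paper.

```python
# A minimal sketch of a value-of-life-based decision model: each potential
# obstacle carries one scalar "value of life," and the vehicle steers toward
# the option whose obstacles sum to the smallest total value.
# All names and numbers here are illustrative assumptions, not study data.

from dataclasses import dataclass


@dataclass
class Obstacle:
    label: str
    life_value: float  # higher = more worth protecting


def choose_path(options: dict) -> str:
    """Return the lane whose obstacles sum to the smallest total life value."""
    return min(options, key=lambda lane: sum(o.life_value for o in options[lane]))


if __name__ == "__main__":
    # A hypothetical unavoidable dilemma with invented values.
    dilemma = {
        "left": [Obstacle("pedestrian", 100.0)],
        "right": [Obstacle("dog", 10.0), Obstacle("trash can", 0.1)],
    }
    print(choose_path(dilemma))  # -> "right"
```

The point of the sketch is how little machinery such a model needs: once every entity has a value, the "moral" choice collapses into a single comparison, which is why the researchers argue it is computationally tractable.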

A lot of virtual ink has been spilled online concerning the benefits of driverless cars. Elon Musk is in the vanguard, stating emphatically that those who do not support the technology are "killing people." His view is that the technology can be smarter, more impartial, and better at driving than humans, and thus able to save lives.

Currently, however, the cars are large pieces of hardware supported by rudimentary driverless technology. The question of how many lives they could save is contingent upon how we choose to program them, and that's where the results of this study come into play. If we expect driverless cars to be better than humans, why would we program them like human drivers?

As Professor Gordon Pipa, a senior author on the study, explained, "We need to ask whether autonomous systems should adopt moral judgements. If yes, should they imitate moral behavior by imitating human decisions? Should they behave along ethical theories, and if so, which ones? And critically, if things go wrong, who or what is at fault?"

The ethics of artificial intelligence (AI) remains swampy moral territory in general, and numerous guidelines and initiatives are being formed in an attempt to codify a set of responsible laws for AI. The Partnership on AI to Benefit People and Society is composed of tech giants, including Apple, Google, and Microsoft, while the German Federal Ministry of Transport and Digital Infrastructure has developed a set of 20 principles that AI-powered cars should follow.

Just how safe driverless vehicles will be in the future depends on how we choose to program them, and while that task won't be easy, knowing how we would react in various situations should help us along the way.

