How Do Isaac Asimov’s Laws Of Robotics Hold Up 75 Years Later? – Newsy

By Tyler Adkisson, July 13, 2017

Imagine sitting in a self-driving car that's about to crash into a crowd. The car has to choose between hitting everyone or running off the road, putting your life at risk. So how does it make that decision?

For simple bots, Isaac Asimov's "Three Laws of Robotics" might help. But, for more complex machines, researchers aren't so sure the 75-year-old set of rules will work.

According to Asimov's laws, a robot may not injure a human or allow one to come to harm; it must obey the orders humans give it; and it must protect its own existence. But there's a caveat: if the laws conflict, the earlier law takes precedence.
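That precedence rule can be sketched as a simple severity ordering. This is only an illustration, not anything from the article: the `Action` class and its flags are hypothetical stand-ins for whatever a real robot would have to evaluate.

```python
# A minimal sketch of Asimov's precedence rule: when laws conflict,
# the lower-numbered law wins. The Action class and its flags are
# hypothetical, used only to illustrate the ordering.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # would violate the First Law
    disobeys_order: bool   # would violate the Second Law
    endangers_self: bool   # would violate the Third Law

def severity(action):
    """Return the most important law the action violates (1 is worst),
    or 4 if it violates none of them."""
    violated = []
    if action.harms_human:
        violated.append(1)
    if action.disobeys_order:
        violated.append(2)
    if action.endangers_self:
        violated.append(3)
    return min(violated) if violated else 4

def choose(actions):
    """Pick the action whose worst violation is the least important law."""
    return max(actions, key=severity)

# The self-driving-car dilemma from above, encoded in these terms:
plow = Action("hit the crowd", harms_human=True,
              disobeys_order=False, endangers_self=False)
swerve = Action("swerve off the road", harms_human=False,
                disobeys_order=False, endangers_self=True)
```

Under this ordering, `choose([plow, swerve])` picks swerving: risking the robot's own existence (Third Law) is preferable to harming humans (First Law). The hard part, as the rest of the article suggests, is that real situations rarely decompose into such clean boolean flags.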

Single-function robots, machines with a straightforward job like a Roomba, could in theory follow those laws. But with some of the robots engineers are working on, like the U.S. military's robot army, it gets complicated.

Robots may not function properly even if they're built to follow the laws. In one experiment, for example, researchers programmed a robot to save another bot if it got too close to a "danger zone."


Saving one robot was easy, but when two were in danger, the rescue bot got confused. In about 40 percent of trials, it couldn't decide which to save and did nothing.

So while Asimov's laws might help maintain some order between humans and robots, it doesn't seem like our future will line up with his mostly subservient robots, at least for now.

