Robots Could Act as Ethical Mediators Between Patients and Caregivers

This robot can step in with ethical advice when a relationship gets complicated. Photo: Georgia Tech

Most of the discussion around robots and ethics lately has been about whether autonomous cars will decide to run over the nearest kitten or a slightly farther away basket full of puppies. Or something like that. Whether or not robots can make ethical decisions when presented with novel situations is something that lots and lots of people are still working on, but it's much easier for robots to be ethical in situations where the rules are a little bit clearer, and also when there is very little chance of running over cute animals.

At ICRA last month, researchers at Georgia Tech presented a paper on an intervening ethical governor for a robot mediator in patient-caregiver relationships. The idea is that robots will become part of our daily lives, and they're much, much better than humans at paying close and careful attention to things, without getting distracted or bored, forever. So robots with an understanding of ethical issues would be able to observe interactions between patients and caregivers, and intervene when they notice that something's not going the way it should. This is important, and we need it.

In the United States, there are about a million people living with Parkinson's disease. Robotic systems like exoskeletons and robot companions are starting to help people with physical rehabilitation and emotional support, but it's going to be a while before we have robots that are capable of giving patients with Parkinson's all the help that they need. In the meantime, patients rely heavily on human caregivers, which can be challenging for both parties at times. Parkinson's is particularly tricky for human-human interactions because declining muscle control means that patients frequently have trouble conveying emotion through facial expressions, and this can lead to misunderstandings, or worse.

To test if a robot mediator could help in such cases, the Georgia Tech researchers Jaeeun Shim, Ronald Arkin, and Michael Pettinato developed an intervening ethical governor (IEG). It is basically a set of algorithms that encodes specific ethical rules and determines what to do in different situations. In this case, the IEG uses indicators like voice volume and face tracking to evaluate whether a human's dignity becomes threatened due to others' inappropriate behavior in a patient-caregiver interaction. If that happens, the IEG specifies how and when the robot should intervene.
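The paper describes the IEG at the level of ethical rules and perceptual cues rather than code, but its basic structure can be pictured as a loop that maps sensed cues to an optional intervention. The Python below is only an illustrative sketch: the cue names, the threshold value, and the intervention phrases are assumptions made for this example, not the authors' actual implementation.

# Illustrative sketch of an "intervening ethical governor" (IEG) decision loop.
# This is NOT the Georgia Tech implementation: the cue names, the threshold,
# and the intervention phrases are hypothetical, chosen only to show the
# rule-based structure described above (voice volume and face tracking in,
# an optional spoken intervention out).

from dataclasses import dataclass
from typing import Optional


@dataclass
class PerceptualCues:
    voice_volume_db: float    # estimated loudness of the caregiver's speech
    patient_disengaged: bool  # face tracking suggests the patient turned or walked away


class InterveningEthicalGovernor:
    """Evaluates simple ethical rules over perceptual cues and decides
    whether, and with what utterance, the robot should intervene."""

    SHOUTING_THRESHOLD_DB = 75.0  # hypothetical threshold for a raised voice

    def evaluate(self, cues: PerceptualCues) -> Optional[str]:
        # Rule 1: a raised voice may threaten the patient's dignity.
        if cues.voice_volume_db > self.SHOUTING_THRESHOLD_DB:
            return "Let's calm down and take a short break."
        # Rule 2: the patient disengaging may mean the session has broken down.
        if cues.patient_disengaged:
            return "It looks like we should pause for a moment."
        # No rule fired: the robot stays quiet and keeps observing.
        return None


if __name__ == "__main__":
    governor = InterveningEthicalGovernor()
    cues = PerceptualCues(voice_volume_db=82.0, patient_disengaged=False)
    intervention = governor.evaluate(cues)
    if intervention is not None:
        print("Robot says:", intervention)  # the Nao would speak this line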

To embody their IEG, the researchers used a Nao humanoid, which has good sensing capabilities (a microphone array and camera) and can do speech synthesis (for the intervention bit). They then conducted simulated, scripted interactions between two grad students to see how the robot would react.

In the final part of the project, the researchers recruited a group of people (older adults who could potentially be using the system) to watch these interactions and describe their reactions to them. It was a small number of participants (nine, with an average age of 71), but at this stage the IEG is still a proof of concept, so the researchers were mostly interested in qualitative feedback. Based on the responses from the study participants, the researchers were able to highlight some important takeaways, like:

Safety is most important

"I think anything to protect the patient is a good thing."

"That's a high value. That's appropriate there, because it gives real information, not just commanding."

The robot should not command or judge

"I think that [commanding] puts the robot in the spot of being in a judgment. I think it should be more asking, such as 'how can I help you?' But the robot was judging the patient. I don't think that's why we would want the robot."

"He [the patient] should not be criticized for leaving or forgetting to do something by the robot. The caregiver should be more in that position."

"If the robot stood there and told me to please calm down, I'd smack him."

Ah yes, it wouldn't be a social robotics study if it didn't end with someone wanting to smack a robot. The researchers, to their credit, are taking this feedback to heart, and working with experts to tweak the language a bit, for example by changing "please calm down" to "let's calm down," which is a bit less accusatory. They're also planning on improving the system by incorporating physiological data to better detect patients' and caregivers' emotional states, which could improve the accuracy of the robot's interventions.

We should stress that there's no way a robot can replace empathetic interactions between two people, and that's not what this project is about. Robots, or AI systems in general, can potentially be effective mediators, making sure that caregivers and patients act ethically and respectfully toward each other, helping to improve relationships rather than replace them.
