Artificial intelligence experts unveil Baxter the robot – who you control with your MIND – Express.co.uk

Posted: March 6, 2017 at 3:14 pm

The incredible work undertaken by artificial intelligence experts has been backed by funding from Boeing and the US National Science Foundation.

A team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University has developed a system that allows people to correct robot mistakes instantly with nothing more than their brains.

Using data from an electroencephalography (EEG) monitor that records brain activity, the system can detect if a person notices an error as a robot performs an object-sorting task.

Jason Dorfman, MIT CSAIL

The team's novel machine-learning algorithms enable the system to classify brain waves in the space of 10 to 30 milliseconds.

While the system currently handles relatively simple binary-choice activities, the study's senior author says that the work suggests we could one day control robots in much more intuitive ways.

CSAIL director Daniela Rus told Express.co.uk: "Imagine being able to instantaneously tell a robot to do a certain action, without needing to type a command, push a button or even say a word."


"A streamlined approach like that would improve our abilities to supervise factory robots, driverless cars and other technologies we haven't even invented yet."

In the current study the team used a humanoid robot named Baxter from Rethink Robotics, the company led by former CSAIL director and iRobot co-founder Rodney Brooks.

The paper presenting the work was written by BU PhD candidate Andres F. Salazar-Gomez, CSAIL PhD candidate Joseph DelPreto, and CSAIL research scientist Stephanie Gil under the supervision of Rus and BU professor Frank H. Guenther.


The paper was recently accepted to the IEEE International Conference on Robotics and Automation (ICRA) taking place in Singapore this May.

Past work in EEG-controlled robotics has required training humans to think in a prescribed way that computers can recognise.

Rus's team wanted to make the experience more natural, and to do that they focused on brain signals called error-related potentials (ErrPs), which are generated whenever our brains notice a mistake.


As the robot indicates which choice it plans to make, the system uses ErrPs to determine if the human agrees with the decision.

Rus added: "As you watch the robot, all you have to do is mentally agree or disagree with what it is doing."

"You don't have to train yourself to think in a certain way - the machine adapts to you, and not the other way around."

One challenge identified in the work is that ErrP signals are extremely faint, which means the system has to be fine-tuned enough both to classify the signal and to incorporate it into the feedback loop for the human operator.
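That feedback loop can be sketched roughly as follows. This is a minimal, hypothetical illustration: the toy scoring function, the threshold and all names below are invented stand-ins, not the study's actual model.

```python
WINDOW_MS = 30          # classification has to fit a 10-30 ms budget
ERRP_THRESHOLD = 0.5    # assumed decision threshold on the classifier score

def errp_score(window):
    """Toy stand-in for a trained ErrP classifier; returns a score in [0, 1]."""
    # A real system would run a trained model on the EEG window; here the
    # mean absolute amplitude serves as a placeholder feature.
    return min(1.0, sum(abs(x) for x in window) / len(window))

def supervise(robot_choice, eeg_window):
    """Keep the robot's binary choice unless the operator's EEG signals an error."""
    if errp_score(eeg_window) > ERRP_THRESHOLD:
        return 1 - robot_choice   # binary task: flip to the other option
    return robot_choice

quiet = [0.1] * 32   # no notable deflection: the operator agrees
spike = [0.9] * 32   # strong deflection: read as disagreement
print(supervise(0, quiet))  # -> 0, the robot's choice stands
print(supervise(0, spike))  # -> 1, the choice is flipped
```

The faintness of real ErrPs is exactly why the placeholder scorer above would not survive contact with real EEG data: the actual system needs a carefully tuned classifier to separate signal from noise within the millisecond budget.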

In addition to monitoring the initial ErrPs, the team also sought to detect secondary errors that occur when the system doesn't notice the human's original correction.

Scientist Stephanie Gil said: "If the robot's not sure about its decision, it can trigger a human response to get a more accurate answer."

"These signals can dramatically improve accuracy, creating a continuous dialogue between human and robot in communicating their choices."

While the system cannot yet recognise secondary errors in real time, Gil expects the model to be able to improve to upwards of 90 per cent accuracy once it can.
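The "dialogue" Gil describes might look something like this. The confidence margin, the retry cap and the function names here are assumptions for illustration, not the researchers' design.

```python
def decide_with_retry(scores, threshold=0.5, margin=0.15, max_queries=3):
    """Return (error_detected, queries_used).

    Each entry in `scores` is one classifier reading from one query of the
    operator. A reading too close to the threshold is treated as
    low-confidence, so the system asks again instead of committing.
    """
    used = 0
    for score in scores[:max_queries]:
        used += 1
        if abs(score - threshold) >= margin:   # confident either way
            return score > threshold, used
    # Confidence never rose: fall back to the last reading.
    return scores[used - 1] > threshold, used

print(decide_with_retry([0.9]))        # -> (True, 1): confident first time
print(decide_with_retry([0.55, 0.9]))  # -> (True, 2): re-queried once
```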

In addition, since ErrP signals have been shown to be proportional to how egregious the robot's mistake is, the team believes that future systems could extend to more complex multiple-choice tasks.



Salazar-Gomez notes that the system could even be useful for people who can't communicate verbally: a task like spelling could be accomplished via a series of several discrete binary choices, which he likens to an advanced form of the blinking that allowed stroke victim Jean-Dominique Bauby to write his memoir The Diving Bell and the Butterfly.
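To make the spelling idea concrete: repeatedly halving the alphabet means one of 26 letters can be picked with at most five agree/disagree signals. The sketch below is an invented illustration of that arithmetic, not the researchers' interface.

```python
import string

def select_letter(answer_yes):
    """Pick a letter with binary questions.

    `answer_yes(half)` stands in for one agree/disagree brain signal and
    reports whether the intended letter lies in the offered half.
    """
    candidates = list(string.ascii_lowercase)
    questions = 0
    while len(candidates) > 1:
        half = candidates[:len(candidates) // 2]
        candidates = half if answer_yes(half) else candidates[len(candidates) // 2:]
        questions += 1
    return candidates[0], questions

# Simulate an operator spelling 'm': each question is answered truthfully.
letter, asked = select_letter(lambda half: 'm' in half)
print(letter, asked)  # -> m 5  (26 letters need at most 5 halvings)
```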

Wolfram Burgard, a professor of computer science at the University of Freiburg who was not involved in the research, added: "This work brings us closer to developing effective tools for brain-controlled robots and prostheses."

"Given how difficult it can be to translate human language into a meaningful signal for robots, work in this area could have a truly profound impact on the future of human-robot collaboration."
