Facebook is ditching plans to make an interface that reads the brain – MIT Technology Review

Posted: July 16, 2021 at 1:16 pm

The UCSF team made some surprising progress, and today it is reporting in the New England Journal of Medicine that it used those electrode pads to decode speech in real time. The subject was a 36-year-old man the researchers refer to as "Bravo-1," who, after a serious stroke, lost his ability to form intelligible words and can only grunt or moan. In their report, Chang's group says that with the electrodes on the surface of his brain, Bravo-1 has been able to form sentences on a computer at a rate of about 15 words per minute. The technology involves measuring neural signals in the part of the motor cortex associated with Bravo-1's efforts to move his tongue and vocal tract as he imagines speaking.

To reach that result, Chang's team asked Bravo-1 to imagine saying one of 50 common words nearly 10,000 times, feeding the patient's neural signals to a deep-learning model. After training the model to match words with neural signals, the team was able to correctly determine the word Bravo-1 was thinking of saying 40% of the time (chance results would have been about 2%). Even so, his sentences were full of errors. "Hello, how are you?" might come out "Hungry how am you."
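
For readers who want the mechanics made concrete, here is a minimal sketch of what such a 50-way word classifier could look like. Everything below, including the channel count, window length, network architecture, and training loop, is an assumption for illustration; the article does not describe the UCSF team's actual model.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions; the article does not specify any of these.
N_CHANNELS = 128    # assumed number of electrode channels
N_SAMPLES = 200     # assumed time samples per imagined-speech attempt
VOCAB_SIZE = 50     # the study's 50-word vocabulary

class WordDecoder(nn.Module):
    """Toy classifier mapping a window of neural signals to one of 50 words."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                               # (batch, channels * time)
            nn.Linear(N_CHANNELS * N_SAMPLES, 256),
            nn.ReLU(),
            nn.Linear(256, VOCAB_SIZE),                 # logits over the 50 words
        )

    def forward(self, x):
        return self.net(x)

# Synthetic stand-ins for recorded trials (the study collected ~10,000 attempts).
signals = torch.randn(1_000, N_CHANNELS, N_SAMPLES)
labels = torch.randint(0, VOCAB_SIZE, (1_000,))

model = WordDecoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):                     # a real system would train far longer
    batch = torch.randint(0, len(signals), (64,))
    logits = model(signals[batch])
    loss = loss_fn(logits, labels[batch])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At test time the predicted word is the argmax over the 50 logits; random
# guessing would be right 1/50 = 2% of the time, the chance baseline cited above.
pred = model(signals[:1]).argmax(dim=1)
```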

But the scientists improved the performance by adding a language model, a program that judges which word sequences are most likely in English. That increased the accuracy to 75%. With this cyborg approach, the system could predict that Bravo-1's sentence "I right my nurse" actually meant "I like my nurse."
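
The rescoring idea can be shown with a toy example. The per-word decoder probabilities and the bigram scores below are invented for illustration, and the actual language model is surely more sophisticated, but the principle is the same: combine the neural evidence for each word with a prior on which word sequences sound like English.

```python
import math
from itertools import product

# Word classifier output: for each sentence position, candidate words with
# their (hypothetical) probability given the neural signal.
decoder_probs = [
    {"I": 0.9},
    {"right": 0.5, "like": 0.4},   # the decoder slightly prefers "right"
    {"my": 0.9},
    {"nurse": 0.9},
]

# Hypothetical bigram log-probabilities: how plausible each word pair is in
# English. Pairs not listed get a heavy penalty.
bigram_logp = {
    ("I", "right"): math.log(0.001),
    ("I", "like"): math.log(0.05),
    ("right", "my"): math.log(0.001),
    ("like", "my"): math.log(0.02),
    ("my", "nurse"): math.log(0.01),
}
UNSEEN = math.log(1e-6)

def best_sentence(decoder_probs, bigram_logp):
    """Score every candidate sentence by neural evidence plus
    language-model likelihood and return the highest scorer."""
    best_words, best_score = None, float("-inf")
    for cand in product(*(d.items() for d in decoder_probs)):
        words = [w for w, _ in cand]
        score = sum(math.log(p) for _, p in cand)              # neural evidence
        score += sum(bigram_logp.get(pair, UNSEEN)
                     for pair in zip(words, words[1:]))        # language prior
        if score > best_score:
            best_words, best_score = words, score
    return " ".join(best_words)

print(best_sentence(decoder_probs, bigram_logp))  # -> I like my nurse
```

Here the decoder alone would output "right," but the language prior for "I like my nurse" outweighs that small difference, so the combined score recovers the sensible sentence.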

As remarkable as the result is, there are more than 170,000 words in English, so performance would plummet outside of Bravo-1's restricted vocabulary. That means the technique, while it might be useful as a medical aid, isn't close to what Facebook had in mind. "We see applications in the foreseeable future in clinical assistive technology, but that is not where our business is," says Chevillet. "We are focused on consumer applications, and there is a very long way to go for that."

Facebook's decision to drop out of brain reading is no shock to researchers who study these techniques. "I can't say I am surprised, because they had hinted they were looking at a short time frame and were going to reevaluate things," says Marc Slutzky, a professor at Northwestern whose former student Emily Mugler was a key hire Facebook made for its project. "Just speaking from experience, the goal of decoding speech is a large challenge. We're still a long way off from a practical, all-encompassing kind of solution."

Still, Slutzky says the UCSF project is an impressive next step that demonstrates both remarkable possibilities and some limits of brain-reading science. "It remains to be seen if you can decode free-form speaking," he says. "A patient who says 'I want a drink of water' versus 'I want my medicine', well, those are different." He says that if artificial-intelligence models could be trained for longer, and on more than just one person's brain, they could improve rapidly.

While the UCSF research was going on, Facebook was also paying other centers, like the Applied Physics Lab at Johns Hopkins, to figure out how to pump light through the skull to read neurons noninvasively. Much like MRI, those techniques rely on sensing reflected light to measure the amount of blood flow to brain regions.

It's these optical techniques that remain the bigger stumbling block. Even with recent improvements, including some by Facebook, they are not able to pick up neural signals with enough resolution. Another issue, says Chevillet, is that the blood flow these methods detect occurs five seconds after a group of neurons fire, making it too slow to control a computer.
