Thursday 30th of May 2024
Reading time: 3 min
Neural networks could enhance the design of hearing aids, cochlear implants and brain-machine interfaces. A study shows that neural network models used for machine learning have an internal organisation that matches activity in the human auditory cortex. Models trained to distinguish different sounds in the same environment are paving the way for a better representation of auditory activity in the human brain. They could also help people with disabilities who have difficulty expressing themselves.
Neural networks that imitate the structure and functioning of the human auditory system could be used to enhance the design of hearing aids, cochlear implants and brain-machine interfaces. There is a surprisingly simple reason for this: when these computer models are trained to perform listening tasks, their internal organisation corresponds strongly to the patterns of activity in the human auditory cortex. This finding was recently presented in a research article published in PLOS Biology, which studied the effectiveness of auditory analysis conducted by deep neural networks, whose potential has already been extensively explored in the field of computer vision. "We wanted to find out if artificial neural networks could behave in the same way as those in the human brain, and if they could effectively predict responses similar to those of the brain," explains Greta Tuckute, a final-year PhD candidate in the Department of Brain and Cognitive Sciences at MIT. The study evaluated the processing undertaken by the models and compared it with brain functioning, in a bid to obtain new insights into how future systems could be developed to better represent activity in the cerebral auditory space.
The aim for these models, which may be integrated in future neurotechnology devices, is to ensure that they are capable of transcribing what is happening in the soundscape.
"The goal is to build large-scale models that mediate human behaviour and brain responses, and that may in the future be integrated in neurotechnological devices to enable them to transcribe what is happening in the soundscape," explains Jenelle Feather, a researcher at the Flatiron Institute's Centre for Computational Neuroscience (New York). To this end, the researchers studied how neural networks classify and distinguish all the sounds that can be heard at any given time in an auditory environment, determining, for example, how trained computer models recognise and distinguish birdsong when someone is speaking nearby and there is also noise from a passing car. "One of the interesting things about our study is that we tested a wide range of models, and we observed that some of them were better than others at predicting cerebral responses."
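To make the task concrete, here is a minimal, hypothetical sketch in Python of the kind of sound-classification problem such models are trained on: a small convolutional network assigns a label ("birdsong", "speech", "car noise") to a short audio clip, even when background noise is mixed in. The architecture and class names are illustrative placeholders, not the models evaluated in the study.

```python
# Illustrative sketch only: a tiny sound classifier, not the paper's models.
import torch
import torch.nn as nn

CLASSES = ["birdsong", "speech", "car_noise"]  # hypothetical label set

class TinySoundNet(nn.Module):
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=32, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis to one vector
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        # wav: (batch, 1, samples) raw waveform
        return self.classifier(self.features(wav).squeeze(-1))

net = TinySoundNet(len(CLASSES))
clip = torch.randn(1, 1, 16000)                 # 1 s of synthetic audio at 16 kHz
noisy = clip + 0.3 * torch.randn_like(clip)     # add background noise, as in training
probs = net(noisy).softmax(dim=-1)
print(dict(zip(CLASSES, probs.squeeze().tolist())))
```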
To obtain these results, the research team used functional magnetic resonance imaging (fMRI) to record cerebral responses to different natural sounds, before analysing how artificial neural networks processed the same stimuli. Using several methods, the modelled representations of activity in each neural network were then compared with the activation patterns in the brain. In one of these methods, the authors measured the internal unit responses of the artificial neural networks while they listened to a subset of 165 everyday sounds, and then fitted a simple regression model to predict how the brain would respond to the same set of sounds. Each model's accuracy in predicting brain responses to a different set of sounds was then evaluated. The models that generated representations closest to those observed in the brain were those that had been trained to perform more than one task and to deal with auditory input that included background noise.
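A minimal sketch of this regression-based comparison, assuming random stand-in data: activations from one network layer are mapped to fMRI voxel responses with a ridge regression fitted on one subset of the 165 sounds, and prediction quality is scored on the held-out sounds. All shapes, variable names and data here are placeholders for illustration.

```python
# Hypothetical sketch of mapping network activations to fMRI responses.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_sounds, n_units, n_voxels = 165, 512, 300            # 165 everyday sounds
layer_acts = rng.normal(size=(n_sounds, n_units))      # stand-in for layer activations
voxel_resp = rng.normal(size=(n_sounds, n_voxels))     # stand-in for measured fMRI data

# Fit on a training subset of sounds, evaluate on held-out sounds.
X_tr, X_te, y_tr, y_te = train_test_split(
    layer_acts, voxel_resp, test_size=0.25, random_state=0
)
model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Score each voxel by the correlation between predicted and measured responses.
r = np.array([np.corrcoef(pred[:, v], y_te[:, v])[0, 1] for v in range(n_voxels)])
print(f"median voxelwise r = {np.median(r):.3f}")      # ~0 here, since data are random
```

A layer whose activations yield high held-out correlations would, in this framing, be said to represent sounds in a way that resembles the corresponding region of auditory cortex.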
The similarities between artificial and biological models highlight the potential usefulness of further comparisons between brain imaging and neural network computation. With an accurate model of brain responses, it may be possible to stimulate cerebral activity to reinforce encoding of the auditory environment, which could help people with disabilities assimilate more information from sound. Accurate models of brain function could also help patients with locked-in syndrome, who are unable to speak even though their brains remain very active: in the future, it may be possible to decode their brain responses to a conversation, and even their intended speech. In the promising field of neurotechnological medical devices, several other research teams are already working on complementary projects to better monitor and enhance the stimulation of other areas of the brain.
Read the original:
Neurotechnology: auditory neural networks mimic the human brain - Hello Future (Orange)