Ding Dong Merrily on AI: The British Neuroscience Association’s Christmas Symposium Explores the Future of Neuroscience and AI

A Christmas symposium from the British Neuroscience Association (BNA) has explored the growing relationship between neuroscience and artificial intelligence (AI) techniques. The online event featured talks from across the UK, which examined how AI has changed brain science and the many unrealized applications of what remains a nascent technology.

Opening the day with his talk, Shake your Foundations: the future of neuroscience in a world where AI is less rubbish, Prof. Christopher Summerfield, from the University of Oxford, looked at the idiotic, ludic and pragmatic stages of AI. We are moving from the idiotic phase, where virtual assistants are usually unreliable and AI-controlled cars crash into random objects they fail to notice, to the ludic phase, where some AI tools are actually quite handy. Summerfield highlighted a program called DALL-E, an AI that converts text prompts into images, and a language generator called Gopher that can answer complicated ethical questions with eerily natural responses.

What could these advances in AI mean for neuroscience? Summerfield suggested that they invite researchers to consider the limits of current neuroscience practice that could be enhanced by AI in the future.

Integration of neuroscience subfields could be enabled by AI, said Summerfield. Currently, he said, "People who study language don't care about vision. People who study vision don't care about memory." AI systems don't work properly if only one distinct subfield is considered, and Summerfield suggested that, as we learn more about how to create a more complete AI, similar advances will be seen in our study of the biological brain.

Another element of AI that could drag neuroscience into the future is the level of grounding required for it to succeed. Currently, AI models must be provided with extensive contextual training data before they can learn associations, whereas a human can act on an instruction immediately. What makes it possible for a volunteer in a psychologist's experiment to be told to do something, and then just do it? To create more natural AIs, this is a problem that neuroscience will have to solve in the biological brain first.

The University of Oxford's Prof. Mihaela van der Schaar looked at how we can use machine learning to empower human learning in her talk, Quantitative Epistemology: a new human-machine partnership. Van der Schaar's talk discussed practical applications of machine learning in healthcare by teaching clinicians through a process called meta-learning. This is where, said van der Schaar, "learners become aware of and increasingly in control of habits of perception, inquiry, learning and growth."

This approach provides a potential look at how AI might supplement the future of healthcare, by advising clinicians on how they make decisions and how to avoid potential error when undertaking certain practices. Van der Schaar gave an insight into how AI models can be set up to make these continuous improvements. In healthcare, which, at least in the UK, is slow to adopt new technology, van der Schaar's talk offered a tantalizing glimpse of what a truly digital approach to healthcare could achieve.

Dovetailing nicely from van der Schaar's talk was Imperial College London professor Aldo Faisal's presentation, entitled AI and Neuroscience: the Virtuous Cycle. Faisal looked at systems where humans and AI interact and how they can be classified. Whereas in van der Schaar's clinical decision support systems, humans remain responsible for the final decision and AIs merely advise, in an AI-augmented prosthetic, for example, the roles are reversed. A user can suggest a course of action, such as "pick up this glass", by sending nerve impulses, and the AI can then find a response that addresses this suggestion, by, for example, directing a prosthetic hand to move in a certain way. Faisal then went into detail on how these paradigms can inform real-world learning tasks, such as motion-tracked subjects learning to play pool.
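The distinction can be made concrete with a minimal sketch in Python (the class names, methods and scenarios below are hypothetical illustrations, not code from either talk):

    # Minimal sketch of the two interaction paradigms described above.
    # All names here are illustrative, not from the actual systems.

    class DecisionSupportAI:
        """Clinical decision support: the AI advises, the human decides."""
        def advise(self, patient_state: dict) -> str:
            # A real system would query a learned model; we return a placeholder.
            return "suggest: order blood test"

    class ProstheticAI:
        """AI-augmented prosthetic: the human supplies an intent, the AI acts."""
        def act(self, intent: str) -> list:
            # A real controller would translate intent into joint trajectories.
            if intent == "pick up this glass":
                return ["open hand", "reach", "close grip", "lift"]
            return ["hold position"]

    # Decision support: the clinician may accept or override the advice.
    advice = DecisionSupportAI().advise({"heart_rate": 92})

    # Prosthetic: the user only supplies the goal; the AI decides how to move.
    commands = ProstheticAI().act("pick up this glass")
    print(advice, commands)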

One fascinating study involved a balance board task, where a human subject could tilt the board in one axis, while an AI controlled another, meaning that the two had to collaborate to succeed. Over time, the strategies learned by the AI could be copied between certain subjects, suggesting the human learning component was similar. But for other subjects, this wasn't possible.

Faisal suggested this hinted at complexities in how different individuals learn that could inform behavioral neuroscience, AI systems and future devices, like neuroprostheses, where the two must play nicely together.
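A toy simulation gives a flavor of the setup (purely illustrative, not the study's actual code; the controller gains and physics here are invented): the human and the AI each tilt one axis, and the ball only stays near the centre if both do their job.

    import random

    # Toy version of the shared balance-board task: the human tilts one axis,
    # the AI tilts the other, and the ball must stay near the centre.

    def human_policy(x, vx):
        # A noisy proportional controller standing in for a human player.
        return -0.5 * x - 0.2 * vx + random.gauss(0, 0.05)

    def ai_policy(y, vy):
        # A cleaner controller standing in for the trained AI partner.
        return -0.6 * y - 0.3 * vy

    x, y, vx, vy = 1.0, -1.0, 0.0, 0.0  # ball starts off-centre
    for step in range(200):
        tilt_x = human_policy(x, vx)   # human's axis
        tilt_y = ai_policy(y, vy)      # AI's axis
        vx += 0.1 * tilt_x             # tilting accelerates the ball
        vy += 0.1 * tilt_y
        x += vx
        y += vy

    print(f"final distance from centre: {(x**2 + y**2) ** 0.5:.3f}")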

The afternoon's session featured presentations that touched on the complexities of the human and animal brain. The University of Sheffield's Professor Eleni Vasilaki explained how mushroom bodies, regions of the fly brain that play roles in learning and memory, can provide insight into sparse reservoir computing. Thomas Nowotny, professor of informatics at the University of Sussex, reviewed a process called asynchrony, where neurons activate at slightly different times in response to certain stimuli. Nowotny explained how this enables relatively simple systems like the bee brain to perform incredible feats of communication and navigation using only a few thousand neurons.
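Sparse reservoir computing itself can be sketched in a few lines (a generic echo state network for illustration, assuming NumPy; this is not Vasilaki's mushroom-body model): inputs are projected into a large, sparsely and randomly connected recurrent layer whose weights stay fixed, and only a simple linear readout is trained.

    import numpy as np

    # Generic sparse reservoir (echo state network) sketch; illustrative only.
    rng = np.random.default_rng(0)
    n_in, n_res = 1, 300

    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-1, 1, (n_res, n_res))
    W *= rng.random((n_res, n_res)) < 0.05        # keep ~5% of connections: sparsity
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 for stability

    def run_reservoir(inputs):
        """Drive the fixed random reservoir and collect its states."""
        x = np.zeros(n_res)
        states = []
        for u in inputs:
            x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
            states.append(x.copy())
        return np.array(states)

    # Toy task: predict the next value of a sine wave.
    t = np.linspace(0, 20 * np.pi, 2000)
    signal = np.sin(t)
    states = run_reservoir(signal[:-1])
    targets = signal[1:]

    # Only the linear readout is trained (ridge regression).
    ridge = 1e-6
    W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                            states.T @ targets)
    pred = states @ W_out
    print("train MSE:", np.mean((pred - targets) ** 2))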

Wrapping up the day's presentations was a lecture that showed an uncanny future for social AIs, delivered by Henry Shevlin, a senior researcher at the Leverhulme Centre for the Future of Intelligence (CFI) at the University of Cambridge.

Shevlin reviewed the theory of mind, which enables us to understand what other people might be thinking by, in effect, modeling their thoughts and emotions. Do AIs have minds in the same way that we do? Shevlin surveyed a series of AIs that have been out in the world, acting as humans, here in 2021.

One such AI, OpenAI's language model, GPT-3, spent a week posting on internet forum site Reddit, chatting with human Redditors and racking up hundreds of comments. Chatbots like Replika personalize themselves to individual users, creating pseudo-relationships that feel as real as human connections (at least to some users). But current systems, said Shevlin, are excellent at fooling humans, but have no mental depth and are, in effect, extremely proficient versions of the predictive text systems our phones use.
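That comparison can be made literal with a toy next-word model (a bare-bones, hypothetical sketch; GPT-3 uses a vastly larger neural network over far longer contexts, but the objective of predicting what comes next is the same in spirit):

    from collections import Counter, defaultdict

    # Toy next-word predictor in the spirit of phone predictive text.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count which word follows which in the training text.
    following = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        following[word][nxt] += 1

    def predict_next(word):
        """Return the most frequent continuation seen in training."""
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    print(predict_next("the"))   # -> 'cat'
    print(predict_next("cat"))   # -> 'sat' (ties broken by insertion order)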

While the rapid advance of some of these systems might feel dizzying or unsettling, AI and neuroscience are likely to be wedded together in future research. So much can be learned from pairing these fields, and true advances will be gained not from retreating from complex AI theories but by embracing them. At the end of his talk, Summerfield summed up the idea that AIs are black boxes that we don't fully understand as "lazy". If we treat deep networks and other AI systems as neurobiological theories instead, the next decade could see unprecedented advances for both neuroscience and AI.
