New ‘Liquid’ AI Learns Continuously From Its Experience of the World – Singularity Hub

Posted: February 6, 2021 at 8:45 am

For all its comparisons to the human brain, AI still isn't much like us. Maybe that's alright. In the animal kingdom, brains come in all shapes and sizes. So, in a new machine learning approach, engineers did away with the human brain and all its beautiful complexity, turning instead to the brain of a lowly worm for inspiration.

Turns out, simplicity has its benefits. The resulting neural network is efficient, transparent, and here's the kicker: it's a lifelong learner.

Whereas most machine learning algorithms can't hone their skills beyond an initial training period, the researchers say the new approach, called a liquid neural network, has a kind of built-in neuroplasticity. That is, as it goes about its work (say, in the future, driving a car or directing a robot) it can learn from experience and adjust its connections on the fly.

In a world that's noisy and chaotic, such adaptability is essential.

The algorithm's architecture was inspired by the mere 302 neurons making up the nervous system of C. elegans, a tiny nematode (or worm).

In work published last year, the group, which includes researchers from MIT and Austria's Institute of Science and Technology, said that despite its simplicity, C. elegans is capable of surprisingly interesting and varied behavior. So, they developed equations to mathematically model the worm's neurons and then built them into a neural network.
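To give a rough flavor of what "equations modeling a neuron" can look like, here is a toy, Euler-integrated sketch in the spirit of the liquid time-constant (LTC) neurons the group has described. The choice of nonlinearity, the constants, and the constant input are illustrative assumptions, not the researchers' actual model:

```python
import math

# Toy liquid time-constant (LTC) style neuron, integrated with Euler steps.
# The general form (dx/dt = -[1/tau + f] * x + f * A, with f a bounded
# nonlinearity of the input) follows published descriptions of LTC networks;
# everything concrete below is an illustrative assumption.

def f(inp, w=1.0, b=0.0):
    """Bounded synaptic nonlinearity: a sigmoid of a weighted input."""
    return 1.0 / (1.0 + math.exp(-(w * inp + b)))

def ltc_step(x, inp, tau=1.0, A=1.0, dt=0.01):
    """Advance the neuron's state x by one small time step dt."""
    gate = f(inp)
    dxdt = -(1.0 / tau + gate) * x + gate * A
    return x + dt * dxdt

x = 0.0
for _ in range(1000):  # drive the neuron with a steady input for 10 seconds
    x = ltc_step(x, inp=0.5)
print(round(x, 2))  # the state settles at an input-dependent equilibrium
```

Note the key property: the effective time constant of the state, 1 / (1/tau + gate), itself depends on the input, which is where the "liquid" behavior comes from.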

Their worm-brain algorithm was much simpler than other cutting-edge machine learning algorithms, and yet it was still able to accomplish similar tasks, like keeping a car in its lane.

"Today, deep learning models with many millions of parameters are often used for learning complex tasks such as autonomous driving," said Mathias Lechner, a PhD student at Austria's Institute of Science and Technology and study author. "However, our new approach enables us to reduce the size of the networks by two orders of magnitude. Our systems only use 75,000 trainable parameters."

Now, in a new paper, the group takes their worm-inspired system further by adding a wholly new capability.

The output of a neural network (turn the steering wheel to the right, for instance) depends on a set of weighted connections between the network's neurons.

In our brains, it's the same. Each brain cell is connected to many other cells. Whether or not a particular cell fires depends on the sum of the signals it's receiving. Beyond some threshold, or weight, the cell fires a signal to its own network of downstream connections.

In a neural network, these weights are called parameters. As the system feeds data through the network, its parameters converge on the configuration yielding the best results.
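The weighted-sum-and-threshold idea above can be sketched in a few lines of Python. This is a generic textbook neuron, not the paper's code, and the numbers are made up for illustration:

```python
# A minimal artificial neuron: whether it "fires" depends on the
# weighted sum of its inputs crossing a threshold. The weights are
# the trainable parameters the article describes.

def neuron(inputs, weights, bias, threshold=0.0):
    """Fire (return 1) if the weighted input sum exceeds the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > threshold else 0

# Weak signals through weak connections: 0.16 + 0.02 - 0.5 = -0.32, no fire.
print(neuron([0.2, 0.2], weights=[0.8, 0.1], bias=-0.5))  # -> 0
# Strong signals through strong connections: 1.44 - 0.5 = 0.94, fires.
print(neuron([0.9, 0.9], weights=[0.8, 0.8], bias=-0.5))  # -> 1
```

Training amounts to adjusting those weights and biases until the network's outputs match the desired results.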

Usually, a neural network's parameters are locked into place after training, and the algorithm is put to work. But in the real world, this can mean it's a bit brittle: show an algorithm something that deviates too much from its training, and it'll break. Not an ideal result.

In contrast, in a liquid neural network, the parameters are allowed to continue changing over time and with experience. The AI learns on the job.
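One way to picture the difference: a frozen model keeps its weights after training, while an adapting one nudges them with every new observation. The toy online update below is a generic stand-in for that idea; the actual liquid network adapts through differential equations rather than this simple rule:

```python
# Toy contrast between a frozen model and one that keeps learning.
# A simple online gradient step stands in for "adjusting connections
# on the fly"; all numbers here are illustrative.

def predict(w, x):
    return w * x

def online_step(w, x, target, lr=0.1):
    """Nudge the weight to reduce the error on the newest example."""
    error = predict(w, x) - target
    return w - lr * error * x

w = 1.0  # trained when the world followed target = 1.0 * x
# The world shifts: targets now follow 2.0 * x. A frozen model stays
# wrong forever; the adapting one closes the gap example by example.
for _ in range(50):
    w = online_step(w, x=1.0, target=2.0)
print(round(w, 3))  # w has drifted close to the new slope of 2.0
```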

This adaptability means the algorithm is less likely to break as the world throws new or noisy information its way, like when rain obscures an autonomous car's camera. Also, in contrast to bigger algorithms, whose inner workings are largely inscrutable, this algorithm's simple architecture allows researchers to peer inside and audit its decision-making.

Neither its new ability nor its still-diminutive stature seemed to hold the AI back. The algorithm performed as well as or better than other state-of-the-art time-sequence algorithms in predicting next steps in a series of events.

"Everyone talks about scaling up their network," said Ramin Hasani, the study's lead author. "We want to scale down, to have fewer but richer nodes."

An adaptable algorithm that consumes relatively little computing power would make an ideal robot brain. Hasani believes the approach may be useful in other applications that involve real-time analysis of new data, like video processing or financial analysis.

He plans to continue dialing in the approach to make it practical.

"We have a provably more expressive neural network that is inspired by nature. But this is just the beginning of the process," Hasani said. "The obvious question is how do you extend this? We think this kind of network could be a key element of future intelligence systems."

At a time when big players like OpenAI and Google are regularly making headlines with gargantuan machine learning algorithms, it's a fascinating example of an alternative approach headed in the opposite direction.

OpenAI's GPT-3 algorithm dropped jaws last year, both for its size (at the time, a record-setting 175 billion parameters) and its abilities. A recent Google algorithm topped the charts at over a trillion parameters.

Yet critics worry the drive toward ever-bigger AI is wasteful, expensive, and consolidates research in the hands of a few companies with cash to fund large-scale models. Further, these huge models are black boxes, their actions largely impenetrable. This can be especially problematic when unsupervised models are trained on the unfiltered internet. There's no telling (or perhaps, controlling) what bad habits they'll pick up.

Increasingly, academic researchers are aiming to address some of these issues. As companies like OpenAI, Google, and Microsoft push to prove the bigger-is-better hypothesis, it's possible serious AI innovations in efficiency will emerge elsewhere, not despite a lack of resources but because of it. As they say, necessity is the mother of invention.

Image Credit: benjamin henon / Unsplash
