UMass and Baylor College researchers say they know how to expand AI memory – MassLive.com

Posted: September 18, 2020 at 1:05 am

Artificial intelligence experts at the University of Massachusetts Amherst and the Baylor College of Medicine in Houston, Texas, report that they have successfully addressed what they call a major, long-standing obstacle to increasing AI capabilities by drawing inspiration from a human brain memory mechanism known as replay.

They say it will help AI programs retain information, rather than forgetting it when new information is stored. In other words, AI will be that much closer to showing skills present in the human brain.

Gido van de Ven and Andreas Tolias at Baylor, with Hava Siegelmann at UMass Amherst, wrote in Nature Communications that they have developed what they call a "surprisingly efficient" new method to protect deep neural networks from catastrophic forgetting, which occurs when, upon learning new lessons, the networks forget what they had already learned.

Deep neural networks are the main drivers behind recent AI advances, but progress has been held back by this forgetting.

One solution would be to store previously encountered examples and revisit them when learning something new. "Although such replay or rehearsal solves catastrophic forgetting, constantly retraining on all previously learned tasks is highly inefficient, and the amount of data that would have to be stored becomes unmanageable quickly," they wrote.
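The rehearsal approach the researchers describe can be sketched as a buffer that stores every raw example seen so far and mixes some of them into each new training batch. The class and method names below are illustrative, not from the paper; the point of the sketch is the unbounded storage cost the researchers identify.

```python
import random

class RehearsalBuffer:
    """Naive exact replay: keep every raw example ever seen."""

    def __init__(self):
        self.examples = []  # grows without bound -- the inefficiency noted above

    def add_task(self, task_examples):
        # Store the raw data from each finished task.
        self.examples.extend(task_examples)

    def training_batch(self, new_examples, k):
        # Mix the new task's data with k randomly replayed old examples.
        replayed = random.sample(self.examples, min(k, len(self.examples)))
        return list(new_examples) + replayed

buffer = RehearsalBuffer()
buffer.add_task([("cat", 0), ("dog", 1)] * 50)   # task 1: 100 examples stored
buffer.add_task([("bear", 2), ("fox", 3)] * 50)  # task 2: 200 examples stored
print(len(buffer.examples))  # storage keeps growing with every task
```

Because the buffer must retain raw data for all prior tasks, both memory and retraining time scale with everything learned so far, which is exactly the bottleneck the generative approach avoids.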

Unlike AI neural networks, humans are able to continuously accumulate information throughout their lives, building on earlier lessons. "An important mechanism in the brain believed to protect memories against forgetting is the replay of neuronal activity patterns representing those memories," the researchers wrote.

Siegelmann said the team's major insight was in recognizing that "replay in the brain does not store data. Rather, the brain generates representations of memories at a high, more abstract level with no need to generate detailed memories."

Inspired by this, she and colleagues created an artificial brain-like replay, in which no data is stored. Instead, like the brain, the network generates high-level representations of what it has seen before.

The abstract generative brain replay proved extremely efficient, and the team showed that replaying just a few generated representations is sufficient to remember older memories while learning new ones. Generative replay "not only prevents catastrophic forgetting and provides a new, more streamlined path for system learning, it allows the system to generalize learning from one situation to another," they state.
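A minimal sketch of the generative-replay idea, in a toy one-dimensional feature space: instead of storing raw inputs, the model keeps only a compact summary per class and regenerates representative pseudo-examples from it when learning something new. The Gaussian summary and all names here are illustrative simplifications, not the paper's actual generative network.

```python
import random

class GenerativeReplay:
    """Store abstract per-class statistics instead of raw data."""

    def __init__(self):
        self.stats = {}  # label -> (mean, std): fixed-size memory per class

    def learn_task(self, labeled_features):
        # Summarize each class by mean/std rather than keeping its examples.
        by_label = {}
        for x, y in labeled_features:
            by_label.setdefault(y, []).append(x)
        for y, xs in by_label.items():
            mean = sum(xs) / len(xs)
            var = sum((x - mean) ** 2 for x in xs) / len(xs)
            self.stats[y] = (mean, var ** 0.5)

    def replay(self, n_per_class):
        # Regenerate pseudo-examples from the stored representations.
        return [(random.gauss(m, s), y)
                for y, (m, s) in self.stats.items()
                for _ in range(n_per_class)]

model = GenerativeReplay()
model.learn_task([(random.gauss(0.0, 0.1), "cat") for _ in range(100)] +
                 [(random.gauss(1.0, 0.1), "dog") for _ in range(100)])
model.learn_task([(random.gauss(2.0, 0.1), "bear") for _ in range(100)])
# Memory holds 3 (mean, std) pairs no matter how many examples were seen.
print(len(model.stats))      # 3
print(len(model.replay(5)))  # 15 regenerated pseudo-examples
```

The memory footprint stays constant per class rather than growing with the number of examples, which is the efficiency gain the researchers attribute to replaying high-level representations instead of stored data.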

"For example, if our network with generative replay first learns to separate cats from dogs, and then to separate bears from foxes, it will also tell cats from foxes without specifically being trained to do so. And notably, the more the system learns, the better it becomes at learning new tasks," van de Ven wrote.
