How to Build a Mind? This Theory May Guide Us Toward an Answer – Singularity Hub

Posted: May 30, 2017 at 2:45 pm

From time to time, the Singularity Hub editorial team unearths a gem from the archives and wants to share it all over again. It's usually a piece that was popular back then and we think is still relevant now. This is one of those articles. It was originally published June 19, 2016. We hope you enjoy it!

How do intelligent minds learn?

Consider a toddler navigating her day, bombarded by a kaleidoscope of experiences. How does her mind discover what's normal happenstance and begin building a model of the world? How does she recognize unusual events and incorporate them into her worldview? How does she understand new concepts, often from just a single example?

These are the same questions machine learning scientists ask as they inch closer to AI that matches or even beats human performance. Many of AI's recent victories (IBM Watson against Ken Jennings, Google's AlphaGo versus Lee Sedol) are rooted in network architectures inspired by multi-layered processing in the human brain.

In a review paper, published in Trends in Cognitive Sciences, scientists from Google DeepMind and Stanford University penned a long-overdue update on a prominent theory of how humans and other intelligent animals learn.

In broad strokes, the Complementary Learning Systems (CLS) theory states that the brain relies on two systems that allow it to rapidly soak in new information while maintaining a structured model of the world that's resilient to noise.

"The core principles of CLS have broad relevance in understanding the organization of memory in biological systems," wrote the authors in the paper.

What's more, the authors wrote, the theory's core principles, already implemented in recent themes in machine learning, will no doubt guide us toward designing agents with artificial intelligence.

In 1995, a team of prominent psychologists sought to explain a memory phenomenon: patients with damage to their hippocampus could no longer form new memories but had full access to remote memories and concepts from their past.

Given the discrepancy, the team reasoned that new learning and old knowledge likely relied on two separate learning systems. Empirical evidence soon pointed to the hippocampus as the site of new learning, and the cortex, the outermost layer of the brain, as the seat of remote memories.

In a landmark paper, they formalized their ideas into the CLS theory.

According to CLS, the cortex is the memory warehouse of the brain. Rather than storing single experiences or fragmented knowledge, it serves as a well-organized scaffold that gradually accumulates general concepts about the world.

This idea, wrote the authors, was inspired by evidence from early AI research.

Experiments with multi-layer neural nets, the precursors to today's powerful deep neural networks, showed that, with training, the artificial learning systems gradually learned to extract structure from the training data by adjusting connection weights, the computer equivalent of neural connections in the brain.

Put simply, the layered structure of the networks allows them to gradually distill individual experiences (or examples) into high-level concepts.
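To make the weight-adjustment idea concrete, here is a minimal sketch in Python with NumPy (an illustrative toy, not a model from the paper): a tiny two-layer network nudges its connection weights a little on every pass through a handful of examples until it has distilled the underlying rule, here XOR. The network size, data, and learning rate are arbitrary choices for the demo.

```python
# A minimal sketch: a two-layer network slowly adjusts its connection weights
# to extract structure (the XOR rule) from a few training examples.
import numpy as np

rng = np.random.default_rng(0)

# Toy "experiences": XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Connection weights (the computer equivalent of synapses), initialized randomly.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(10_000):        # gradual learning: many small weight updates
    h = sigmoid(X @ W1)            # hidden layer activity
    out = sigmoid(h @ W2)          # the network's prediction
    err = out - y                  # prediction error

    # Backpropagate the error and nudge the weights a little each time.
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out
    W1 -= lr * X.T @ grad_h

print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))  # should approach [0, 1, 1, 0]
```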

Similar to deep neural nets, the cortex is made up of multiple layers of interconnected neurons, with several input and output layers. It readily receives data from other brain regions through its input layers and distills them into databases (prior knowledge) to draw upon when needed.

According to the theory, such networks "underlie acquired cognitive abilities of all types in domains as diverse as perception, language, semantic knowledge representation and skilled action," wrote the authors.

Perhaps unsurprisingly, the cortex is often touted as the basis of human intelligence.

Yet this system isn't without fault. For one, it's painfully slow. Since each experience counts as just a single statistical sample, the cortex needs to aggregate over years of experience in order to build an accurate model of the world.

Another issue arises after the network matures. Information stored in the cortex is relatively faithful and stable. It's a blessing and a curse. Consider when you need to dramatically change your perception of something after a single traumatic incident. It pays to be able to update your cortical database without having to live through multiple similar events.

But even the update process itself could radically disrupt the existing network. Jamming new knowledge into a multi-layer network, without regard for existing connections, results in intolerable changes to the network. The consequences are so dire that scientists call the phenomenon catastrophic interference.
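Here is a toy illustration of that failure mode (a generic demonstration in NumPy, not anything from the paper): a small network first memorizes one set of arbitrary associations, is then trained only on a second set, and its error on the first set climbs back up.

```python
# A toy illustration of catastrophic interference: new learning, forced in
# alone, disrupts knowledge that shares the same connection weights.
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def train(W1, W2, X, y, steps=5000, lr=2.0):
    """Plain gradient descent on mean squared error; weights are shared across tasks."""
    n = len(X)
    for _ in range(steps):
        h = sigmoid(X @ W1)
        out = sigmoid(h @ W2)
        grad_out = (out - y) * out * (1 - out)
        grad_h = (grad_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ grad_out / n
        W1 -= lr * X.T @ grad_h / n
    return W1, W2

def error(W1, W2, X, y):
    return float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2))

# Two sets of arbitrary input-target associations: "old" and "new" knowledge.
X_old, y_old = rng.random((20, 10)), rng.integers(0, 2, (20, 1)).astype(float)
X_new, y_new = rng.random((20, 10)), rng.integers(0, 2, (20, 1)).astype(float)

W1 = rng.normal(scale=0.5, size=(10, 32))
W2 = rng.normal(scale=0.5, size=(32, 1))

W1, W2 = train(W1, W2, X_old, y_old)
print("error on old items after learning them:   ", round(error(W1, W2, X_old, y_old), 3))

# Jam the new associations in, with no regard for the existing connections.
W1, W2 = train(W1, W2, X_new, y_new)
print("error on old items after learning new ones:", round(error(W1, W2, X_old, y_old), 3))
# The second number is typically much larger: the old knowledge has been disrupted.
```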

Thankfully, we have a second learning system that complements the cortex.

Unlike the slow-learning cortex, the hippocampus concerns itself with breaking news. Not only does it encode a specific event (for example, drinking your morning coffee), it also jots down the context in which the event occurred (you were in your bed checking email while drinking coffee). This lets you easily distinguish between similar events that happened at different times.

The hippocampus can encode and distinguish detailed memories, even when they're remarkably similar, because of its peculiar connection pattern. When information flows into the structure, each experience activates a different pattern of neural activity in the downstream pathway. Different network pattern, different memory.
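One way to picture this, as a highly simplified toy (the numbers and the winner-take-most rule are illustrative assumptions, not the paper's model): project each experience into a much larger, sparsely active population of units, and even two nearly identical inputs end up with visibly less overlapping activity patterns.

```python
# A simplified sketch of pattern separation: a random expansion into a large,
# sparsely active population pushes similar inputs onto more distinct codes.
import numpy as np

rng = np.random.default_rng(2)

def sparse_code(x, W, k=50):
    """Keep only the k most strongly driven units active."""
    drive = W @ x
    code = np.zeros_like(drive)
    code[np.argsort(drive)[-k:]] = 1.0
    return code

def overlap(a, b):
    """Cosine similarity: 1.0 means identical patterns, 0.0 means no overlap."""
    return float(a @ b) / max(float(np.linalg.norm(a) * np.linalg.norm(b)), 1e-9)

# Two similar "experiences": the same event in a slightly different context.
event_a = rng.random(200)
event_b = event_a + 0.2 * rng.standard_normal(200)

# Random expansion into a much larger population of units.
W = rng.standard_normal((5000, 200))

print("overlap of the raw inputs:  ", round(overlap(event_a, event_b), 2))  # close to 1
print("overlap of the sparse codes:", round(overlap(sparse_code(event_a, W),
                                                    sparse_code(event_b, W)), 2))  # noticeably lower
```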

In a way, the hippocampal learning system is the antithesis of its cortical counterpart: it's fast, very specific and tailored to each individual experience. Yet the two are inextricably linked: new experiences, temporarily stored in the hippocampus, are gradually integrated into the cortical knowledge scaffold so that new learning becomes part of the databank.

But how does learning jump from one neural network to the other?

The original CLS theory didn't yet have an answer. In the new paper, the authors synthesized findings from recent experiments and pointed out one way this transfer could work.

Scientists don't yet have all the answers, but the process seems to happen during rest, including sleep. By recording the brain activity of sleeping rats that had been trained on a task the day before, scientists repeatedly found that the animals' hippocampi produced a type of electrical activity called sharp-wave ripples (SWRs) that propagate to the cortex.

When examined closely, the ripples were actually replays of the same neural patterns the animal had generated during learning, but sped up by a factor of about 20. Picture fast-forwarding through a recording; that's essentially what the hippocampus does during downtime. The speed-up compresses peaks of neural activity into tighter time windows, which in turn boosts plasticity between the hippocampus and the cortex.

In this way, changes in the hippocampal network can correspondingly tweak neural connections in the cortex.

Unlike the abrupt overwriting that causes catastrophic interference, SWRs represent a much gentler way to integrate new information into the cortical database.

Replay also has some other perks. You may remember that the cortex requires a lot of training data to build its concepts. Since a single event is often replayed many times during a sleep episode, SWRs offer a deluge of training data to the cortex.

SWRs also offer a way for the brain to hack reality in a way that benefits the person. The hippocampus doesn't faithfully replay all recent activation patterns. Instead, it picks rewarding events and selectively replays them to the cortex.

This means that rare but meaningful events might be given privileged status, allowing them to preferentially reshape cortical learning.

These ideas "view memory systems as being optimized to the goals of an organism rather than simply mirroring the structure of the environment," explained the authors in the paper.

"This reweighting process is particularly important in enriching the memories of biological agents, something important to consider for artificial intelligence," they wrote.

The two-system set-up is nature's solution to efficient learning.

"By initially storing information about the new experience in the hippocampus, we make it available for immediate use and we also keep it around so that it can be replayed back to the cortex, interleaving it with ongoing experience and stored information from other relevant experiences," says Stanford psychologist and article author Dr. James McClelland in a press interview.

According to DeepMind neuroscientists Dharshan Kumaran and Demis Hassabis, both authors of the paper, CLS has been instrumental in recent breakthroughs in machine learning.

Convolutional neural networks (CNNs), for example, are a type of deep network modeled after the slow-learning neocortical system. Like their biological muse, CNNs gradually learn through repeated, interleaved exposure to large amounts of training data. The architecture has been particularly successful in achieving state-of-the-art performance on challenging object-recognition benchmarks, including ImageNet.

Other aspects of CLS theory, such as hippocampal replay, have also been successfully implemented in systems such as DeepMind's Deep Q-Network. Last year, the company reported that the system was capable of learning and playing dozens of Atari 2600 games at a level comparable to professional human gamers.

As in the theory, these neural networks exploit a memory buffer akin to the hippocampus that stores recent episodes of gameplay and replays them in interleaved fashion. "This greatly amplifies the use of actual gameplay experience and avoids the tendency for a particular local run of experience to dominate learning in the system," explains Kumaran.
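In code, the core of that idea fits in a few lines. The sketch below is a generic experience replay buffer in Python, not DeepMind's implementation; the class name, capacity, and dummy training loop are hypothetical choices for illustration.

```python
# A minimal sketch of an experience replay buffer: store recent episodes,
# then replay random, interleaved batches of them to the learner.
import random
from collections import deque

class ReplayBuffer:
    """Hippocampus-like store of recent (state, action, reward, next_state) transitions."""
    def __init__(self, capacity=10_000):
        self.memory = deque(maxlen=capacity)   # oldest experiences fall out gradually

    def add(self, state, action, reward, next_state):
        self.memory.append((state, action, reward, next_state))

    def sample(self, batch_size=32):
        # Interleaved replay: draw a random mix of transitions from many
        # different episodes, so no single run of experience dominates learning.
        return random.sample(list(self.memory), min(batch_size, len(self.memory)))

# Hypothetical usage inside a training loop, with dummy transitions:
buffer = ReplayBuffer()
for step in range(1000):
    buffer.add(state=step, action=step % 4, reward=1.0, next_state=step + 1)
    if len(buffer.memory) >= 32:
        batch = buffer.sample(32)
        # a learner (e.g., a Q-network) would be updated on this mixed batch
```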

Hassabis agrees.

"We believe that the updated CLS theory will likely continue to provide a framework for future research, for both neuroscience and the quest for artificial general intelligence," he says.

Image Credit: Shutterstock
