Neuromorphic Computing Drives The Landscape Of Emerging Memories For Artificial Intelligence SoCs – SemiEngineering

New techniques based on intensive computing and massive amounts of distributed memory.

The rapid pace of deep learning and artificial intelligence (AI) is changing the world of computing at every level: hardware architecture, software, chip manufacturing, and system packaging. Two major developments have opened the doors to implementing new machine learning techniques. First, vast amounts of data, i.e., Big Data, are available for systems to process. Second, advanced GPU architectures now support parallelized, distributed computing. With these two developments, designers can take advantage of new techniques that rely on intensive computing and massive amounts of distributed memory to deliver powerful new compute capabilities.

Neuromorphic machine learning draws on techniques such as Spiking Neural Networks (SNNs), Deep Neural Networks (DNNs), and Restricted Boltzmann Machines (RBMs). Combined with Big Data, Big Compute is adopting statistically based High-Dimensional Computing (HDC), which operates on patterns, supports reasoning built on associative memory, and enables continuous learning that mimics how human memory learns and retains information.
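To make the HDC idea above concrete, the sketch below shows the basic hypervector operations (binding, bundling, and similarity-based associative recall) in plain NumPy. The dimensionality, the bipolar encoding, and all variable names are assumptions chosen for illustration, not a reference implementation.

```python
# A minimal sketch of High-Dimensional Computing (HDC) primitives: random
# bipolar hypervectors, binding, bundling, and similarity-based associative
# recall. Dimensionality, encoding, and names are illustrative assumptions.
import numpy as np

D = 10_000                      # hypervector dimensionality
rng = np.random.default_rng(0)

def random_hv():
    """Random bipolar hypervector in {-1, +1}^D."""
    return rng.choice((-1, 1), size=D)

def bind(a, b):
    """Binding (element-wise multiply) associates two hypervectors."""
    return a * b

def bundle(*hvs):
    """Bundling (element-wise majority vote) superimposes hypervectors."""
    return np.sign(np.sum(hvs, axis=0))

def similarity(a, b):
    """Normalized dot product: ~0 for unrelated vectors, 1 for identical."""
    return float(a @ b) / D

# Encode two attribute/value pairs into a single memory hypervector.
color, shape = random_hv(), random_hv()
red, square = random_hv(), random_hv()
memory = bundle(bind(color, red), bind(shape, square))

# Associative recall: unbind with the attribute, then compare candidates.
recalled = bind(memory, color)
print(similarity(recalled, red))     # clearly above chance (~0.5 here)
print(similarity(recalled, square))  # near 0
```

The appeal for neuromorphic hardware is that these operations are simple, element-wise, and tolerant of noise, which maps naturally onto dense, slightly imperfect memory arrays.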

Emerging memories include compute-in-memory (CIM) SRAMs, STT-MRAMs, SOT-MRAMs, ReRAMs, CB-RAMs, and PCMs. Each type is being developed to enable a transformation in AI computation, and together they are advancing computational scale, energy efficiency, density, and cost.
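As a rough illustration of the compute-in-memory principle behind these devices, the toy model below performs an analog matrix-vector multiply on a resistive crossbar: weights are stored as conductances, inputs are applied as word-line voltages, and each bit-line current sums the products. The array size, value ranges, and noise level are illustrative assumptions, not a model of any specific memory technology.

```python
# Toy model of an analog in-memory matrix-vector multiply on a resistive
# crossbar: bit-line current j = sum_i V[i] * G[i, j] (Ohm's and Kirchhoff's
# laws). Sizes, value ranges, and the noise term are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

G = rng.uniform(0.0, 1.0, size=(4, 3))   # stored weights as conductances
V = rng.uniform(0.0, 1.0, size=4)        # inputs applied as word-line voltages

# Ideal analog multiply-accumulate performed "in place" by the array.
ideal_currents = V @ G

# Real devices drift and vary; model that as small multiplicative noise.
G_noisy = G * (1.0 + 0.05 * rng.standard_normal(G.shape))
measured_currents = V @ G_noisy

print("ideal bit-line currents:   ", np.round(ideal_currents, 3))
print("measured bit-line currents:", np.round(measured_currents, 3))
```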

