News & Events

Rain Demonstrates AI Training on Analog Chip

Rain Neuromorphics has trained a deep learning network on an analog chip—a crossbar array of memristors—using the company’s analog-friendly training algorithms.

The process required several orders of magnitude less energy than today's GPU systems. While Rain's initial work shows that AI can be trained efficiently on analog chips, commercial realizations of the technology may still be a few years away.

In a paper co-authored with memristor pioneer Stanley Williams, Rain describes training single- and two-layer neural networks to recognize words written in braille. The setup combines two 64 × 64 memristor crossbar arrays (in this case, not the 3D ReRAM-based chip the company previously showed) with training algorithms based on a technique called activity difference, which builds on Rain's earlier work on equilibrium propagation. Rain calls this hardware-algorithm combination memristor activity-difference energy minimization (MADEM).
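Rain has not published code for this work, but the basic operation a memristor crossbar performs is well established: each memristor stores a weight as a conductance, and applying input voltages to the rows yields output currents on the columns, computing a matrix-vector multiply in one analog step. The numpy sketch below illustrates that idea; the array size matches the 64 × 64 arrays mentioned above, but the conductance ranges and the 5% mismatch figure are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 64 x 64 crossbar stores one weight per memristor as a conductance.
# Driving the rows with input voltages produces column currents equal
# to a matrix-vector product, computed in a single analog step.
ideal_G = rng.uniform(0.0, 1.0, size=(64, 64))   # target conductances

# Illustrative assumption (not from the paper): device-to-device
# mismatch modeled as 5% multiplicative variation per memristor.
mismatch = 1.0 + 0.05 * rng.standard_normal(ideal_G.shape)
actual_G = ideal_G * mismatch

v_in = rng.uniform(0.0, 0.2, size=64)            # input voltages

i_ideal = ideal_G @ v_in    # what a perfect crossbar would return
i_real = actual_G @ v_in    # what the physical array returns

print("relative error from device mismatch:",
      np.linalg.norm(i_real - i_ideal) / np.linalg.norm(i_ideal))
```

It is exactly this kind of per-device mismatch that makes naive backpropagation struggle on analog hardware, as the next paragraph explains.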

Backpropagation, the training algorithm used almost exclusively in AI systems today, is incompatible with analog hardware because it is sensitive to the small variabilities and mismatches of on-chip analog devices. While compensation techniques have made analog inference chips workable, they have yet to prove successful for backpropagation-based training. Rain's activity-difference approach instead computes local gradients, avoiding backpropagation's repeated use of global gradients. The technique builds on previous work on equilibrium propagation training algorithms and is mathematically equivalent to backpropagation; in other words, it can be used to train mainstream deep learning networks.
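Equilibrium propagation, the family of algorithms Rain's approach builds on, replaces backpropagation's global error signal with two relaxation phases and a purely local contrastive weight update: the network first settles freely, then settles again while its output is weakly nudged toward the target, and each synapse is updated from the difference in the activity of its own two endpoints between the phases. The following is a simplified numpy sketch of that general idea; the network sizes, activation, step sizes, and nudging strength beta are illustrative assumptions, and the settling dynamics are simplified relative to the published equilibrium propagation formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
rho = lambda s: np.clip(s, 0.0, 1.0)   # hard-sigmoid activation

# Tiny illustrative network (hypothetical sizes, not from the paper):
# input x -> hidden h -> output o, with symmetric weights W1, W2.
n_in, n_h, n_out = 8, 16, 4
W1 = 0.1 * rng.standard_normal((n_in, n_h))
W2 = 0.1 * rng.standard_normal((n_h, n_out))

def relax(x, target=None, beta=0.0, steps=50, dt=0.1):
    """Let the network state settle toward an energy minimum.

    With beta == 0 this is the 'free' phase; with beta > 0 the
    output is weakly nudged toward the target ('clamped' phase).
    """
    h, o = np.zeros(n_h), np.zeros(n_out)
    for _ in range(steps):
        dh = -h + rho(x) @ W1 + rho(o) @ W2.T
        do = -o + rho(h) @ W2
        if target is not None:
            do += beta * (target - o)   # gentle nudge toward target
        h, o = h + dt * dh, o + dt * do
    return h, o

x = rng.uniform(size=n_in)
target = np.eye(n_out)[0]
beta, lr = 0.5, 0.05

h_free, o_free = relax(x)               # free phase
h_cl, o_cl = relax(x, target, beta)     # clamped phase

# Local update: contrast activity products between the two phases.
# Each synapse needs only the activity of its own two endpoints --
# no global gradient is ever propagated through the network.
W1 += (lr / beta) * (np.outer(rho(x), rho(h_cl))
                     - np.outer(rho(x), rho(h_free)))
W2 += (lr / beta) * (np.outer(rho(h_cl), rho(o_cl))
                     - np.outer(rho(h_free), rho(o_free)))
```

Because the update depends only on locally measurable activity, it can tolerate device variation that would corrupt a long chain of global gradient computations, which is the property Rain exploits on its crossbars.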

Compared with backpropagation-based training on a GPU, training time was reduced by two orders of magnitude (to tens of microseconds) and energy consumption by five orders of magnitude (to hundreds of nanojoules). Scaled-up versions of MADEM should still hold a four-order-of-magnitude energy advantage, per Rain's projections.
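Taken at face value, those ratios imply rough GPU baselines on the order of milliseconds and tens of millijoules per training run for this small task. The back-of-the-envelope check below infers those baselines from the stated ratios; the specific values 50 µs and 500 nJ are assumptions chosen to represent "tens of microseconds" and "hundreds of nanojoules", not figures from the paper.

```python
# Back-of-the-envelope check of the reported ratios (illustrative only):
madem_time_s = 50e-6        # "tens of microseconds" -> assume ~50 us
madem_energy_j = 500e-9     # "hundreds of nanojoules" -> assume ~500 nJ

gpu_time_s = madem_time_s * 1e2      # two orders of magnitude slower
gpu_energy_j = madem_energy_j * 1e5  # five orders of magnitude more energy

print(f"implied GPU time:   {gpu_time_s * 1e3:.1f} ms")    # ~5 ms
print(f"implied GPU energy: {gpu_energy_j * 1e3:.1f} mJ")  # ~50 mJ
```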

“Over the course of the next 10 years, we intend to close the gap between what’s done today and the 100,000x that we know is possible,” Rain Neuromorphics CEO Gordon Wilson told EE Times. “The caveat is that this is not a product today. But this is a rigorous experiment that has done hardware-based measurements working with the noise of the system… making the analog work with you as opposed to fighting against it.”

Static device nonidealities are accounted for in the training process, while dynamic nonidealities, such as temporal stochasticity, can actually improve performance: they act as a form of regularization, lowering the effective complexity of the neural network during training to avoid overfitting.
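The effect is analogous to deliberately injecting noise during digital training. The sketch below shows that general idea, not Rain's actual mechanism: each analog read is modeled as the programmed weight plus fresh Gaussian noise, with the noise level an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_forward(W, x, noise_std=0.01):
    # Illustrative only: model each analog read as the programmed
    # weight plus fresh Gaussian noise, akin to weight-noise
    # regularization in digital training (not Rain's mechanism).
    W_read = W + noise_std * rng.standard_normal(W.shape)
    return W_read @ x

W = rng.standard_normal((4, 8))
x = rng.uniform(size=8)

# Two reads of the same weights give slightly different outputs; a
# network trained through this noise cannot rely on precise weight
# values, which discourages overfitting much as dropout does.
print(noisy_forward(W, x) - noisy_forward(W, x))
```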

“Our goal is to have the same accuracy as backpropagation, to take all the wins that have been demonstrated in the digital world with deep learning, we want to be able to move all that to an ultra-efficient platform,” Wilson said. “To do that, you need an algorithm that’s as smart as backpropagation, and you need a hardware substrate that’s as scalable as the GPU, but with orders of magnitude less power consumption.”

Rain’s recent work has been enabled by hardware and algorithm co-design, a trend Wilson sees as critical to next-generation AI systems.

By EE Times

Link: https://www.eetimes.com/rain-demonstrates-ai-training-on-analog-chip/
