Accelerating Machine Learning with Non-Volatile Memory: Exploring device and circuit tradeoffs - Pritish Narayanan: 2016 International Conference on Rebooting Computing
Large arrays of the same non-volatile memory (NVM) devices being developed for Storage-Class Memory (SCM) -- such as Phase Change Memory (PCM) and Resistance RAM (ReRAM) -- can also be used in non-Von Neumann neuromorphic computational schemes, with device conductance serving as synaptic "weight." This allows the all-important multiply-accumulate operation within these algorithms to be performed efficiently at the location of the weight data. In contrast to other groups working on Spike-Timing Dependent Plasticity (STDP), we have been exploring the use of NVM and other inherently analog devices for Artificial Neural Networks (ANN) trained with the backpropagation algorithm. We recently showed a large-scale (165,000 two-PCM synapses) hardware-software demo (IEDM 2014) and analyzed the potential speed and power advantages over GPU-based training (IEDM 2015). In this paper, we extend this work in several useful directions. We assess the impact of undesired, time-varying conductance change, including drift in PCM and leakage of analog CMOS capacitors. We investigate the use of non-filamentary, bidirectional ReRAM devices based on PrCaMnO, with an eye to developing material variants that provide suitably linear conductance change. And finally, we explore tradeoffs in designing peripheral circuitry, balancing simplicity and area-efficiency against the impact on ANN performance.
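To make the "multiply-accumulate at the weight data" idea concrete, the following minimal sketch (not from the talk; all array sizes and parameter values, including the drift-exponent statistics, are illustrative assumptions) encodes each weight as a pair of conductances in the spirit of the two-PCM synapse mentioned above, computes the forward pass as column-wise current summation, and then applies a power-law conductance-drift model of the kind commonly used for PCM.

import numpy as np

rng = np.random.default_rng(0)

# Each weight W[i, j] is stored as the difference of two programmable PCM
# conductances, W = Gp - Gm (arbitrary units), following the two-PCM
# synapse scheme described in the abstract.
n_in, n_out = 8, 4
Gp = rng.uniform(0.0, 1.0, size=(n_in, n_out))
Gm = rng.uniform(0.0, 1.0, size=(n_in, n_out))
W = Gp - Gm

x = rng.uniform(0.0, 1.0, size=n_in)  # input activations, applied as read voltages

# Multiply-accumulate "at the data": each column's output current is the sum
# over rows of voltage * conductance (Ohm's law plus Kirchhoff's current law),
# so the vector-matrix product never leaves the array.
I_out = x @ Gp - x @ Gm
assert np.allclose(I_out, x @ W)  # matches the equivalent digital computation

# Undesired, time-varying conductance change: PCM drift is commonly modeled
# with a power law, G(t) = G(t0) * (t / t0)**(-nu). The exponent statistics
# below are assumptions for illustration, not measured values from the talk.
def drifted(G, nu, t, t0=1.0):
    return G * (t / t0) ** (-nu)

nu_p = rng.normal(0.05, 0.01, size=Gp.shape).clip(min=0.0)
nu_m = rng.normal(0.05, 0.01, size=Gm.shape).clip(min=0.0)

for t in (1.0, 10.0, 100.0):
    W_t = drifted(Gp, nu_p, t) - drifted(Gm, nu_m, t)
    print(f"t = {t:6.1f}: mean |weight error| = {np.abs(W_t - W).mean():.4f}")

Because the two conductances in each pair drift with slightly different exponents, the stored weights distort over time rather than merely rescaling, which is one way the "undesired, time-varying conductance change" studied in the paper can degrade ANN accuracy.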