Conversion of Artificial Recurrent Neural Networks to Spiking Neural Networks for Low-power Neuromorphic Hardware - Emre Neftci: 2016 International Conference on Rebooting Computing


In recent years the field of neuromorphic computing has gained significant momentum, enabling systems that consume orders of magnitude less power than traditional ones. However, their wider use is still hindered by the lack of algorithms that can harness their full potential. Recurrent neural networks (RNNs) are widely used in machine learning to solve a variety of sequence learning tasks. In this work we present a "train-and-constrain" methodology that enables the mapping of machine-learned RNNs onto spiking neurons: RNNs are first trained, their weights are then discretized, and the result is finally converted to a spiking RNN. We demonstrate our approach on a natural language processing task (question classification), mapping the entire recurrent layer of the network onto IBM's Neurosynaptic System TrueNorth, a spike-based digital neuromorphic hardware architecture, including adapting the network to the constraints associated with the system. Surprisingly, we find that short synaptic delays are sufficient to implement the dynamic (temporal) aspect of the RNN in the question classification task. The hardware-constrained model achieved 74% accuracy in question classification while using less than 0.025% of the cores on one TrueNorth chip, resulting in an estimated power consumption of ~17 µW.
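The discretization step of the "train-and-constrain" pipeline can be sketched as simple uniform weight quantization. This is only an illustrative sketch: the function name, the number of levels, and the rounding scheme are assumptions for the example, not the specific constraints TrueNorth imposes on synaptic weights.

```python
import numpy as np

def discretize_weights(w, n_levels=16):
    """Quantize trained floating-point weights to a small set of signed
    integer levels (illustrative stand-in for the discretization step;
    TrueNorth's actual weight constraints differ)."""
    w_max = np.abs(w).max()
    if w_max == 0:
        return np.zeros_like(w, dtype=int), 1.0
    scale = w_max / (n_levels // 2)  # step size between adjacent levels
    q = np.clip(np.round(w / scale), -(n_levels // 2), n_levels // 2 - 1)
    return q.astype(int), scale

# Example: quantize a small random recurrent weight matrix
rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=(4, 4))
q, scale = discretize_weights(w)
w_hat = q * scale  # dequantized approximation of the trained weights
```

After this step, the integer weights (and the scale, folded into neuron thresholds) are what the spiking conversion works with.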

Emre Neftci gives a talk on neuromorphic hardware, at ICRC 2016.
