How to further improve support for neuromorphic DNNs

Presenters:

INRC Members

Abstract:

INRC members share their latest results and analyses on what future neuromorphic systems should support to improve the performance of their deep learning systems.

Live speakers:

  • Low-Pass RNN using Sigma-Delta Neurons on Loihi and A Loihi Implementation of Backpropagation using Gated Synfire Chains [Alpha Renner]

    • Deep dive presentation

    • Slides

    • This talk covers two independent projects, both implemented on Loihi. The first part presents an efficient ANN-to-SNN mapping technique for recurrent neural networks. For this, we use Loihi’s multi-compartment neurons to obtain sigma-delta behavior that can be abstracted as a low-pass filter. This abstraction enables faster and better training of spiking networks with backpropagation, without simulating spikes, and dramatically improves the inference performance of simple RNNs. We benchmark the model on a spoken-command recognition task.

      The second part covers an on-chip implementation of backpropagation. So far, on-chip learning has either been avoided or restricted to the last layer using the delta rule. In this project, we implement backpropagation on Loihi for a proof-of-principle binary 3-layer network and demonstrate learning of XOR and MNIST digit classification. The implementation uses gated synfire chains, which enable routing of information through the network.
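The sigma-delta abstraction from the first project can be conveyed with a minimal sketch in plain Python (an illustrative model only, not the Loihi implementation; the `threshold` quantization step is an assumed parameter): the neuron emits a quantized message only when its activation has changed sufficiently since the last message, and the receiver recovers the signal by integration.

```python
import numpy as np

def sigma_delta_encode(x, threshold=0.1):
    """Emit a quantized 'spike' only when the input has changed by more
    than `threshold` since the last emitted value (illustrative sketch)."""
    spikes = np.zeros_like(x)
    last_sent = 0.0
    for t, val in enumerate(x):
        delta = val - last_sent
        if abs(delta) >= threshold:
            # send the change, quantized to multiples of the threshold
            spikes[t] = np.round(delta / threshold) * threshold
            last_sent += spikes[t]
    return spikes

def decode(spikes):
    # the receiving neuron reconstructs the signal by integrating the changes
    return np.cumsum(spikes)
```

Because the decoded signal tracks the input to within one quantization step while most timesteps carry no message, slowly varying activations are transmitted sparsely, which is what makes the low-pass abstraction useful for training without simulating spikes.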

  • Local temporal credit assignment with eligibility traces and unsupervised predictions [Guillaume Bellec]

    • The constant bombardment of sensory information is organized by the cortex into a hierarchy of useful features. This organization is thought to arise through unsupervised learning on the incoming information, but how it is learnt through synaptic plasticity remains unclear. We argue that this organization can emerge with learning rules relying solely on eligibility traces and predictive coding, even in the complete absence of feedback pathways. Indeed, we show that a contrastive, local and predictive plasticity (Clapp) rule is sufficient to build effective deep representations via predictive coding. Moreover, Clapp can be combined naturally with the eligibility propagation (e-prop) theory to train recurrent network layers that additionally perform non-trivial temporal computations. Taken together, e-prop and Clapp suggest that advanced sensory processing can emerge in complex network architectures from simple learning principles that are observed experimentally in the brain and compatible with neuromorphic computer architectures.
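The three-factor structure shared by e-prop-style rules can be illustrated with a simplified sketch (assumed shapes, decay constant and learning rate are illustrative, not the exact update from the talk): each synapse maintains a local eligibility trace of coincident pre-synaptic activity and post-synaptic sensitivity, and a separate learning signal later converts the stored traces into weight changes.

```python
import numpy as np

def eligibility_step(trace, pre_spikes, post_pseudo_deriv, decay=0.9):
    """Update a local eligibility trace: a decaying memory of recent
    pre-synaptic spikes gated by the post-synaptic pseudo-derivative."""
    return decay * trace + np.outer(post_pseudo_deriv, pre_spikes)

def three_factor_update(w, trace, learning_signal, lr=1e-2):
    """Three-factor rule: a (possibly delayed) per-neuron learning signal
    gates the locally stored traces into an actual weight change."""
    return w + lr * learning_signal[:, None] * trace
```

The key point the sketch makes is locality: the trace update uses only quantities available at the synapse, and the learning signal is the only non-local factor, which is what makes such rules attractive for neuromorphic hardware.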

Pre-recorded speakers:

  • Deep Reinforcement Learning with Spiking Neural Network for Robot Navigation and Continuous Control [Guangzhi Tang]

    • Presentation

    • Slides

    • Energy-efficient control is crucial for robots with limited on-board resources. The high energy consumption of deep reinforcement learning (DRL) with DNNs has limited their use in many robot applications. In this talk, we will present our work combining the energy efficiency of neuromorphic computing with the optimality of DRL. We will first present our hybrid DRL framework that trains an SNN to learn control policies for mapless navigation. Our trained SNN had a higher success rate than DDPG when validated in complex environments. We will then present our population-coded spiking actor network (PopSAN), which supports a wide spectrum of DRL algorithms. The trained SNN achieved state-of-the-art performance on OpenAI Gym continuous control benchmarks. These works reinforce our ongoing effort to design efficient algorithms for controlling autonomous robots with neuromorphic hardware.
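The population-coding idea behind PopSAN can be sketched in a few lines (illustrative only; `n_neurons`, `sigma` and the receptive-field layout are assumptions, not the published parameters): each continuous state dimension is represented by a group of neurons with overlapping Gaussian receptive fields, and the resulting activations can serve as per-timestep spiking probabilities for the actor network.

```python
import numpy as np

def population_encode(x, n_neurons=10, lo=-1.0, hi=1.0, sigma=0.15):
    """Encode each continuous state dimension with a population of neurons
    whose Gaussian receptive fields tile the interval [lo, hi]."""
    centers = np.linspace(lo, hi, n_neurons)
    # each neuron responds most strongly when the state is near its center
    return np.exp(-0.5 * ((np.asarray(x)[:, None] - centers) / sigma) ** 2)
```

Spreading each scalar over a population keeps the representation spike-friendly while preserving enough resolution for continuous control, which is the trade-off population coding is meant to resolve.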

  • Lava Implementation of Biologically Plausible Deep Learning with Structured Neurons [Laura Kriener]

    • Presentation

    • Slides

    • In a remarkable reversal of the brain-as-inspiration-for-AI paradigm, the human or even superhuman performance of deep networks on certain applications has motivated a renewed search for deep learning in the brain. However, several elements of classical error-backpropagation learning appear to be at odds with neurobiology. One solution, by Sacramento et al. (2018), introduced a cortical microcircuit architecture amenable to hierarchical implementation, using neurons with simplified dendritic compartments in which error-driven synaptic plasticity adapts the network towards a desired output. The model does not require separate phases, and synaptic learning operates continuously, driven by local dendritic prediction errors.
      In this talk we present the first results of our work towards implementing such dendritic microcircuits on the new generation of Loihi chips. To this end, we formulate the model within the Lava framework, which allows us to simulate and explore the increased flexibility in neuron and synaptic dynamics that the new chip generation will provide.
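The flavour of the local, error-driven plasticity in the dendritic-microcircuit model can be conveyed by a one-line sketch (a loose illustration with hypothetical variable names, not the Lava implementation): weights change in proportion to the mismatch between somatic activity and the dendritic prediction, so learning stops once the dendrite correctly predicts the soma.

```python
import numpy as np

def dendritic_plasticity_step(w, r_pre, u_soma, v_dendrite, lr=1e-3):
    """One step of an error-driven local rule in the spirit of the
    dendritic-microcircuit model: the weight change is proportional to
    the local dendritic prediction error, with no separate phases."""
    error = u_soma - v_dendrite     # local prediction error per neuron
    return w + lr * np.outer(error, r_pre)
```

Because the error is computed within each neuron from quantities it already represents, the rule runs continuously and needs no global backward pass, which is what makes it a candidate for on-chip learning.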

  • Relational reasoning on Loihi [Arjun Rao & Philipp Plank]

    • Presentation

    • In this talk we present a spiking neural network on Loihi performing a relational reasoning task in the form of question answering. Relational reasoning requires a neural network to retain some memory of the inputs over a longer time scale. We show how a combination of spike-frequency-adapting neurons and standard LIF neurons (LSNNs) can be implemented on Loihi to serve as a sufficient long short-term memory unit, how these units can be combined with deep feed-forward networks to solve the bAbI question-answering task, and how this network benchmarks quantitatively against a GPU. We will highlight some challenges we faced during offline training and when implementing this network on Loihi, and discuss what was important for increasing efficiency when running a large network on Loihi.
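A spike-frequency-adapting (ALIF) neuron of the kind used in LSNNs can be sketched as follows (a simplified discrete-time model with assumed constants, not the Loihi parameterization): every output spike raises the firing threshold, which then decays slowly, giving the unit a memory much longer than its membrane time constant.

```python
def alif_step(v, a, inp, tau_m=0.9, tau_a=0.995, b0=1.0, beta=1.7):
    """One timestep of an adaptive LIF neuron: each output spike raises
    the threshold via the slow adaptation variable `a`."""
    v = tau_m * v + inp                  # leaky membrane integration
    threshold = b0 + beta * a            # adaptive threshold
    spike = 1.0 if v > threshold else 0.0
    v -= spike * threshold               # reset by subtraction
    a = tau_a * a + spike                # slow-decaying adaptation
    return v, a, spike
```

The slowly decaying adaptation variable is what lets a network of such units retain task-relevant information across the long delays of the bAbI stories without any dedicated memory module.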

  • Inference and learning in a neuromorphic processor [Qinru Qiu, Haowen Fang, Zaidao Mei, Daniel Rider]:

    • Presentation

    • In this talk, we will share some of our recent results in spiking neural network (SNN) learning and inference and our experience implementing them on the Loihi processor. We will first present the Error-Modulated Spike-Timing-Dependent Plasticity (EMSTDP) algorithm, which performs supervised learning in a biologically plausible manner. Through a sequence of approximations, the model realizes backpropagation in the spike domain, such that a deep SNN can be trained using the traces, weight-update rules and neuron circuits supported by the Loihi processor. We will then present a spatiotemporal model that treats an SNN as a network of trainable IIR filters. By exploiting neuron and synapse temporal dynamics, the spatiotemporal SNN can classify or generate multi-dimensional time sequences. In addition to introducing the Loihi implementations of these learning and inference models, we will also discuss how hardware constraints can impact their performance.
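The "SNN as a network of trainable IIR filters" view can be illustrated with the sub-threshold dynamics of a current-based neuron (a minimal sketch with assumed decay constants, not the model from the talk): the synaptic current and the membrane potential are each first-order low-pass filters, so together the neuron acts as a cascaded second-order IIR filter whose coefficients could in principle be trained.

```python
import numpy as np

def subthreshold_response(spikes_in, alpha=0.9, beta=0.8):
    """Sub-threshold dynamics of a current-based spiking neuron viewed
    as a cascade of two first-order IIR low-pass filters."""
    i = v = 0.0
    v_trace = []
    for s in spikes_in:
        i = alpha * i + s    # synaptic (current) filter
        v = beta * v + i     # membrane filter
        v_trace.append(v)
    return np.array(v_trace)
```

Making `alpha` and `beta` trainable parameters is what turns the neuron into a learnable temporal filter, letting the network shape its own time constants to the sequence task.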

Online Content

https://intel-ncl.atlassian.net/wiki/spaces/INRC/pages/1080328193/Tutorials+and+Related+Presentations?atlOrigin=eyJpIjoiZjNkNTg4ODFiOGVkNDJiOTkzODkzNzg3ZjQ1ZDU5ODUiLCJwIjoiYyJ9

Recording of “Latest DNN Results on Loihi” session

Pre-requisites/co-requisites:

  • Alpha Renner: Recording
  • Guangzhi Tang: Recording
  • Laura Kriener: Link to Presentation
Please use the comment section on this page to ask questions or comment about this specific presentation.