Deep learning with Spiking Neural Networks

Sumit Bam Shrestha leads the deep SNN research in Intel’s Neuromorphic Computing Lab.

Artificial Neural Networks (ANNs) are used for a wide variety of tasks across artificial intelligence: image processing, natural language processing, video compression, autonomous driving, drug discovery, and many more. With the recent progress of dedicated accelerators for ANNs, combined with the explosion of data, the industry can now deploy incredibly powerful deep learning systems tailored to each application domain. However, these models consume a huge amount of power for both inference and training. In this blog, I will introduce Spiking Neural Networks (SNNs) - a radically more efficient evolution of ANNs - and explain how deep learning with SNNs works. This approach can enable many valuable applications for industrial systems while dramatically cutting energy usage.

Spiking Neural Networks (SNNs), sometimes dubbed the third generation of neural networks, replace the non-linear activation functions of ANNs with spiking neurons. Below is a visual illustration of the fundamental difference between the operation of an artificial neuron and that of a spiking neuron.

A ReLU artificial neuron. It processes its dense numeric inputs and responds non-linearly.
A spiking neuron receives sparse input spike events and responds with spikes that are governed by its internal dynamics. Note that the response of a spiking neuron is influenced by its past inputs. In contrast, an artificial neuron responds only to its current inputs.

Spiking neurons are mostly, though not exclusively, modeled after the behavior of biological neurons at various degrees of realism. They differ from the usual non-linear activation functions in two fundamental ways.

  1. The inputs and outputs are brief pulses of activity, called spikes. Spikes are temporally sparse events that naturally lead to the notion of sparse message passing and event-based synaptic accumulation.

  2. Spiking neurons have a dynamic state whose variables change over time, and sometimes the state dynamics also include a recurrence. Typically, the response of a spiking neuron depends on its inputs as well as its own outputs in recent history.

As an example, the typical internal dynamics of one particular spiking neuron model – the Adaptive Leaky Integrate and Fire (LIF) neuron – are illustrated below. Note the internal dynamics of the neuron and output spiking events that occur when the voltage crosses the green threshold region.

An example plot of the dynamics of an Adaptive LIF spiking neuron. The neuron’s current and voltage states both exhibit exponentially decaying dynamics. The current state increases when the neuron receives an input spike, while the voltage state follows the current state. When the voltage state exceeds the threshold, the neuron emits a spike along with a momentary increase in the threshold, indicated by the green sub-threshold region.
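To make these dynamics concrete, here is a minimal discrete-time sketch of an adaptive LIF neuron in Python. The decay constants, threshold, and adaptation step are illustrative values, not parameters from any particular neuron model implementation or hardware.

```python
def adaptive_lif(input_spikes, current_decay=0.5, voltage_decay=0.75,
                 base_threshold=1.0, threshold_step=0.5, threshold_decay=0.9):
    """Simulate an adaptive LIF neuron over a binary input spike train."""
    current, voltage, threshold = 0.0, 0.0, base_threshold
    output_spikes = []
    for spike_in in input_spikes:
        current = current_decay * current + spike_in   # current integrates input spikes and decays
        voltage = voltage_decay * voltage + current    # voltage follows the current and also decays
        # The adaptive threshold relaxes back toward its baseline every step.
        threshold = base_threshold + threshold_decay * (threshold - base_threshold)
        if voltage >= threshold:                       # spike when voltage crosses the threshold
            output_spikes.append(1)
            voltage = 0.0                              # reset the membrane voltage
            threshold += threshold_step                # momentarily raise the threshold
        else:
            output_spikes.append(0)
    return output_spikes

# The same instantaneous input can produce a different response depending on the
# neuron's recent history - unlike a stateless ReLU activation.
print(adaptive_lif([1, 0, 1, 0, 0, 1, 1, 0, 0, 0]))
```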

Why SNNs?

In addition to their biological relevance, the dynamics of spiking neurons offer a set of features not found in the non-linear activation functions of ANNs. There are theoretical foundations that back up the computational capacity of SNNs: in principle, a network of spiking neurons can simulate any feedforward neural network[1], which means that SNNs can solve complex mappings just as ANNs can. However, that does not mean you should try to approximate every ANN with an SNN. In practice, SNNs and ANNs thrive in different applications; some tasks are better suited to ANNs, and others to SNNs. For example, a single spiking neuron can be configured to perform tasks like coincidence detection and element distinctness, which require a network of sigmoidal neurons[2]. In recent work, it was shown that a network of resonate-and-fire spiking neurons can compute optical flow with 90x fewer operations, and with better accuracy, than a leading DNN solution[3].

So what are the kinds of tasks that SNNs are good at? While this topic is still under exploration, here are a few general properties we look for:

  • Spatiotemporal workloads: Problems that require processing distributed information that varies over time can leverage the stateful dynamics of spiking neurons. Some spatiotemporal workloads where SNNs have shown impressive performance are keyword spotting[3, 4], optical flow estimation[3], gesture recognition[5], tactile texture classification[6], and so on.

  • Processing sparse event-based data: SNNs are a natural fit for processing event-based data from neuromorphic sensors, e.g. dynamic vision sensors[7], artificial cochleas[8], and other bio-inspired sensors that speak the same language of spikes. More importantly, because their computation is event based, SNNs can process events from these sensors at a very fine temporal resolution and cater to applications demanding low latency (see the sketch after this list).

  • Power- and latency-constrained applications: SNNs are at the core of neuromorphic computing. The sparse message-passing paradigm makes neuromorphic chips like Loihi extremely efficient. This makes SNNs on neuromorphic chips an appealing solution for applications that are power and latency constrained. SNNs mapped to Loihi have shown combined power and latency efficiency gains (as measured by Power Delay Product) on the order of 1000x for visual-tactile sensing[6], sequential MNIST classification[5], LASSO regression[5], and constraint satisfaction problems[5], and on the order of 100x for robotic navigation[10], arm control[12], and adaptive control[5] compared to conventional solutions using CPUs/GPUs.
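To illustrate the event-based-data point above, here is a sketch of one common preprocessing step: binning a stream of DVS-style events (x, y, timestamp, polarity) into a spike tensor that a deep SNN can consume. The array shape, time-bin width, and example events are arbitrary choices for illustration, not the format of any particular sensor or library.

```python
import numpy as np

def events_to_spike_tensor(events, height, width, num_steps, dt_us=1000):
    """Bin (x, y, t_us, polarity) events into a dense spike tensor.

    events  : iterable of (x, y, t_us, polarity) tuples, polarity in {0, 1}
    returns : array of shape (2, height, width, num_steps) with 0/1 entries
    """
    spikes = np.zeros((2, height, width, num_steps), dtype=np.float32)
    for x, y, t_us, polarity in events:
        step = int(t_us // dt_us)          # which time bin the event falls into
        if 0 <= step < num_steps:
            spikes[polarity, y, x, step] = 1.0
    return spikes

# A handful of made-up events: (x, y, timestamp in microseconds, polarity)
events = [(3, 5, 200, 1), (3, 5, 1800, 0), (7, 2, 2600, 1)]
spike_tensor = events_to_spike_tensor(events, height=16, width=16, num_steps=4)
print(spike_tensor.shape, spike_tensor.sum())   # (2, 16, 16, 4) 3.0
```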

In this blog, I have briefly introduced SNNs and pointed to their unique offerings. For an in-depth introduction to SNNs, I recommend reading Exploring Neuromorphic Computing for AI: Why Spikes? (Part One, Part Two).

Why deep learning with SNNs?

For the right kind of applications, SNNs can be very useful. In some cases, it is possible to hand-design algorithmic solutions using SNNs, but this is not always feasible. Deep learning with backpropagation has been extremely successful at training ANNs on large amounts of data, so it is natural to leverage that immense tailwind of deep learning development and apply it to deep SNNs. It is a powerful combination.
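This post does not go into how backpropagation is adapted to spiking neurons (later posts in this series will), but one common approach in the deep-SNN literature is a surrogate gradient for the non-differentiable spike function. The sketch below is a generic PyTorch illustration of that idea; the fast-sigmoid surrogate and its scale are arbitrary choices, not the method used by any specific library.

```python
import torch

THRESHOLD = 1.0  # firing threshold (illustrative value)

class SpikeFunction(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate derivative in the backward pass."""

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential >= THRESHOLD).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        # Fast-sigmoid surrogate: largest near the threshold, small far away from it.
        surrogate = 1.0 / (1.0 + 10.0 * (membrane_potential - THRESHOLD).abs()) ** 2
        return grad_output * surrogate

spike = SpikeFunction.apply

# Gradients now flow through the spiking non-linearity, so standard optimizers
# can train the weights of a deep SNN end to end.
voltage = torch.randn(8, requires_grad=True)
spike(voltage).sum().backward()
print(voltage.grad)
```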

The field of deep SNNs is not as mature as conventional deep learning. Nevertheless, several applications have been demonstrated, including action recognition[5, 9], robotics control[10], relational learning[11], and tactile sensing[6], and they have shown energy gains when implemented on dedicated neuromorphic hardware that takes advantage of sparsity and stateful dynamics.

It is equally important to understand the applications where deep SNNs don’t demonstrate efficiency benefits: for example, rate conversion of ANNs, non-temporal problems like image classification, batch-efficient applications like recommender systems, and so on. When developing a deep SNN solution for an application, one must exploit the strengths of SNNs to reap their unique benefits.

Next in line

In this blog, I introduced the potential of deep learning with SNNs and explained why they are important. This is a highly active area of research, and my colleagues and I in Intel’s Neuromorphic Computing Lab will continue to share more details on different aspects of training deep SNNs, the challenges, best practices, and more. The subsequent blogs in this series will be populated below as they become available. In the meantime, feel free to tinker with lava-dl and its tutorials to train deep SNNs.
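If you want a quick taste of what that looks like, here is a rough sketch of a deep SNN definition using lava-dl’s SLAYER interface. It loosely follows the public lava-dl tutorials; the neuron parameters, layer sizes, and exact argument names are illustrative and may differ in the current release, so treat the tutorials themselves as the authoritative reference.

```python
import torch
import lava.lib.dl.slayer as slayer

class SpikingMLP(torch.nn.Module):
    """A small fully connected SNN of CUBA (current-based) LIF neurons."""

    def __init__(self):
        super().__init__()
        # Illustrative neuron parameters (thresholds and decays are not tuned values).
        neuron_params = {
            'threshold': 1.25,
            'current_decay': 0.25,
            'voltage_decay': 0.03,
            'requires_grad': True,
        }
        self.blocks = torch.nn.ModuleList([
            slayer.block.cuba.Dense(neuron_params, 34 * 34 * 2, 512, weight_norm=True),
            slayer.block.cuba.Dense(neuron_params, 512, 512, weight_norm=True),
            slayer.block.cuba.Dense(neuron_params, 512, 10, weight_norm=True),
        ])

    def forward(self, spike):
        # Input and output are spike tensors with an explicit time dimension.
        for block in self.blocks:
            spike = block(spike)
        return spike
```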


  1. Maass, W. Noisy spiking neurons with temporal coding have more computational power than sigmoidal neurons. Advances in Neural Information Processing Systems 9, 211–217, 1996.

  2. Maass, W. Networks of spiking neurons: The third generation of neural network models. Neural Networks, 1659–1671, 1997.

  3. Orchard, G. et al. Efficient neuromorphic signal processing with Loihi 2. 2021 IEEE Workshop on Signal Processing Systems (SiPS), 2021.

  4. Yin, B. et al. Accurate and efficient time-domain classification with adaptive spiking recurrent neural networks. Nature Machine Intelligence, 905–913, 2021.

  5. Davies, M. et al. Advancing Neuromorphic Computing With Loihi: A Survey of Results and Outlook. Proceedings of the IEEE, 2021.

  6. Taunyazov, T. et al. Event-driven visual-tactile sensing and learning for robots. Robotics: Science and Systems, 2020.

  7. https://www.techinsights.com/blog/image-sensor/dynamic-vision-sensors-brief-overview-image-sensor-techstream-blog

  8. https://inilabs.com/products/dynamic-audio-sensor/

  9. Amir, A. et al. A low power, fully event-based gesture recognition system. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.

  10. Tang, G. et al. Deep reinforcement learning with population-coded spiking neural network for continuous control. Conference on Robot Learning, PMLR, 2021.

  11. Rao, A. et al. A Long Short-Term Memory for AI Applications in Spike-based Neuromorphic Hardware. Nature Machine Intelligence 4.5 (2022): 467–479.

  12. DeWolf, T. et al. Neuromorphic control of a simulated 7-DOF arm using Loihi. Neuromorphic Computing and Engineering 3, 014007, 2023.