...
...
On Tuesday, December 5, 2023, 8:00-9:00am PT, you are invited to attend an INRC Forum talk by Michael Jurado of the Georgia Institute of Technology.
Title:
Enhancing Performance and Efficiency of SNNs: From Spike-Based Loss Improvements to Synaptic Sparsification Techniques
Abstract:
The introduction of offline training capabilities like Spike Layer Error Reassignment in Time (SLAYER) and advancements in the probabilistic interpretation of Spiking Neural Network (SNN) outputs reinforce SNNs as a viable alternative to Artificial Neural Networks (ANNs). However, special care must be taken during Surrogate Gradient (SG) training to achieve the desired performance and efficiency. This talk will cover our recent work on improving spike-based loss functions for SNNs as well as sparsifying SNNs for low-cost, high-performance neuromorphic computing.
Spikemax was previously introduced as a family of differentiable loss methods which use windowed spike counts to form classification probabilities. We modify the Spikemax loss method to use rates and a scaling parameter instead of counts, forming Scaled-Spikemax. Our mathematical analysis shows that an appropriate scaling term can yield less coarse probability outputs from the SNN and help smooth the gradient of the loss during training. Experimentally, we show that Scaled-Spikemax achieves faster training convergence than Spikemax and yields relative accuracy improvements of 4.2% and 9.9% on NMNIST and N-TIDIGITS18, respectively. We then extend Scaled-Spikemax to construct a spike-based loss function for multi-label classification called Spikemoid. The viability of Spikemoid is demonstrated via the first known multi-label classification results on N-TIDIGITS18 and on 2NMNIST, a novel variation of NMNIST that superimposes event-driven sensory data.
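The abstract's exact formulations are not given here, but the described idea (softmax over scaled spike rates for single-label classification, and a per-class sigmoid for multi-label classification) can be sketched roughly as follows. All function names, signatures, and hyperparameters below are illustrative assumptions, not the speaker's implementation:

```python
import numpy as np

def scaled_spikemax_loss(spike_counts, window, scale, target):
    # Sketch of a Scaled-Spikemax-style loss (assumed formulation):
    # softmax over scaled spike *rates* rather than raw counts.
    rates = spike_counts / window             # spikes per time step, per output neuron
    logits = scale * rates                    # scaling term controls softmax sharpness
    probs = np.exp(logits - logits.max())     # numerically stable softmax
    probs /= probs.sum()
    return -np.log(probs[target])             # cross-entropy on the true class

def spikemoid_loss(spike_counts, window, scale, targets):
    # Sketch of a Spikemoid-style multi-label loss (assumed formulation):
    # independent sigmoid probability per class, combined with binary cross-entropy.
    rates = spike_counts / window
    p = 1.0 / (1.0 + np.exp(-scale * rates))  # each class scored independently
    eps = 1e-12                               # guard against log(0)
    return -(targets * np.log(p + eps)
             + (1 - targets) * np.log(1 - p + eps)).sum()
```

Because the multi-label variant treats classes independently, it naturally handles inputs like 2NMNIST where two digits are present at once.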
However, SNNs trained through SG methods often use dense or convolutional connections which are not always suitable for Loihi 2. To minimize core usage and power consumption on chip, we employ synaptic pruning techniques as part of our SNN training pipelines. We demonstrate the effectiveness of synaptic pruning for ANN-to-SNN conversion of VGG16 on Loihi 1, as well as for a lava-dl-trained SNN for the Intel DNS Challenge. The latter approach applies Gradual Magnitude Pruning (GMP) during SLAYER training, reducing the memory footprint of the baseline SDNN by 50-75%. We highlight infrastructure changes to netX which enable conversion of lava-dl-trained SNNs into sparsity-aware Lava processes.
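For readers unfamiliar with GMP: it prunes the smallest-magnitude synapses while gradually ramping the sparsity target over the course of training, rather than pruning all at once. A minimal sketch, using a cubic ramp schedule as one common choice; the schedule shape, hyperparameters, and function names here are illustrative assumptions, not those used in the talk:

```python
import numpy as np

def gmp_sparsity(step, total_steps, final_sparsity, start_frac=0.1):
    # Gradual Magnitude Pruning schedule: 0 until a warm-up point,
    # then a cubic ramp up to the final target sparsity.
    start = int(start_frac * total_steps)
    if step < start:
        return 0.0
    t = min(1.0, (step - start) / (total_steps - start))
    return final_sparsity * (1.0 - (1.0 - t) ** 3)

def prune_by_magnitude(weights, sparsity):
    # Zero out the smallest-magnitude fraction of synapses.
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    thresh = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return weights * (np.abs(weights) > thresh)
```

In a real pipeline the pruning mask would be applied after each optimizer step (or every few steps) during SLAYER training, so surviving synapses can adapt as sparsity increases.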
The meeting link to join is available to INRC members and affiliates on the INRC wiki (/wiki/spaces/forum/pages/1983578113).
...