Interactive continual learning in neuromorphic hardware for assistive robots

...

Learning from examples is a key achievement of modern data-driven AI. Continual and lifelong learning from new examples, on the other hand, still eludes most deep learning-based AI systems today. The reason lies at the core of the deep learning training algorithm: error backpropagation.

Backpropagation involves gradient-based adjustment of the neural network's parameters (weights). It is a slow, incremental process that adjusts the millions or billions of parameters that contribute to the errors a network produces on a given batch of data samples.
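
For intuition, here is a minimal NumPy sketch of a gradient-descent update on a toy two-layer network (not the training setup used in this work): every weight is nudged by a small step against its error gradient, batch after batch, which is why adapting to genuinely new examples is slow.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(32, 10))      # one batch of 32 samples, 10 features
    y = rng.normal(size=(32, 1))       # regression targets
    W1 = rng.normal(size=(10, 16)) * 0.1
    W2 = rng.normal(size=(16, 1)) * 0.1
    lr = 1e-2                          # small learning rate -> slow, incremental updates

    for step in range(100):
        h = np.tanh(X @ W1)            # forward pass
        pred = h @ W2
        err = pred - y                 # batch error
        # Backward pass: gradient of the error w.r.t. every parameter
        dW2 = h.T @ err / len(X)
        dh = (err @ W2.T) * (1 - h**2)
        dW1 = X.T @ dh / len(X)
        # Gradient-based adjustment of all weights at once
        W2 -= lr * dW2
        W1 -= lr * dW1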

...

We ran the system on Loihi with NxSDK, Intel's previous generation of neuromorphic hardware and software. The classification neural network consumed 150x less energy than state-of-the-art continual learning architectures solving the same task on a conventional processor. This is one of the first examples of exploiting the vast algorithmic space of stateful neural networks with different topologies: a feedforward feature extractor, a layer of plastic weights, and a neural state machine.
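
To make the three-component structure concrete, below is a simplified, hypothetical Python sketch of the data flow. The function and class names are illustrative and this is not the NxSDK implementation: a frozen feature extractor produces a feature vector, a plastic layer stores and matches object prototypes with local one-shot updates instead of backpropagation, and a small state machine decides whether to predict or to ask for a label and learn.

    import numpy as np

    def extract_features(image, W_frozen):
        # Frozen feedforward feature extractor (stands in for a trained network).
        return np.maximum(0, W_frozen @ image.ravel())

    class PlasticLayer:
        # One layer of plastic weights: each row is a learned object prototype.
        def __init__(self, n_features):
            self.prototypes = np.empty((0, n_features))

        def classify(self, f):
            if len(self.prototypes) == 0:
                return None, 0.0
            sims = self.prototypes @ f / (np.linalg.norm(self.prototypes, axis=1)
                                          * np.linalg.norm(f) + 1e-9)
            return int(np.argmax(sims)), float(np.max(sims))

        def learn(self, f):
            # Local, one-shot update: store a new prototype on the spot.
            self.prototypes = np.vstack([self.prototypes, f])

    def neural_state_machine(similarity, threshold=0.8):
        # Tiny stand-in for the neural state machine that gates learning.
        return "predict" if similarity >= threshold else "ask_and_learn"

    # Usage: classify an incoming view; if it is unfamiliar, learn it immediately.
    rng = np.random.default_rng(1)
    W_frozen = rng.normal(size=(64, 28 * 28))
    layer = PlasticLayer(64)
    f = extract_features(rng.random((28, 28)), W_frozen)
    label, sim = layer.classify(f)
    if neural_state_machine(sim) == "ask_and_learn":
        layer.learn(f)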

...

If you're interested in learning more about this project or want to share feedback, let me know in the comments.

...

References

Elvin Hajizada, Patrick Berggold, Massimiliano Iacono, Arren Glover, and Yulia Sandamirskaya. 2022. Interactive continual learning for robots: a neuromorphic approach. In Proceedings of the International Conference on Neuromorphic Systems 2022 (ICONS '22). https://doi.org/10.1145/3546790.3546791

Dongchen Liang, Raphaela Kreiser, Carsten Nielsen, Ning Qiao, Yulia Sandamirskaya, and Giacomo Indiveri. 2019. Neural state machines for robust learning and control of neuromorphic agents. IEEE Journal on Emerging and Selected Topics in Circuits and Systems. https://doi.org/10.1109/JETCAS.2019.2951442