Blog from September, 2022

Join us Tuesday, October 18, 2022 @ 9:00-10:30am PDT / 18:00-19:30 CET for another great INRC Forum, this time featuring an update on robotics at Intel and fortiss!

Agenda

  • Yulia Sandamirskaya will share an update on the latest robotics R&D in the Intel Neuromorphic Computing Lab and an outlook on new capabilities in Lava.

  • Research Talk by INRC members Evan Eames and Camilo Amaya of fortiss.

  • Mathis Richter hosts Lava Open-Door / Community Meeting. Ask questions and get feedback from Lava developers.

Spiking Reinforcement Learning for Force Feedback Based Robotic Object Insertion on Loihi

Abstract: Robotic object insertion is a classic task in robotics and has been approached with a variety of learning algorithms and machine learning frameworks. Here we present the first successful implementation of the peg-in-hole task on a neuromorphic architecture without vision. The task was first trained within the Neurorobotics Platform using the spiking reinforcement learning technique proposed in Tang (2020). The trained policy was then moved onto neuromorphic hardware, specifically the Intel Loihi research chip. Finally, the network was connected to a real KUKA robotic arm with a force-torque sensor mounted on the end effector, allowing the neuromorphic hardware to control the movement of the arm, and guide it through the insertion, in real time. Domain randomization and system identification were used to close the Sim2Real gap.

Bio: Evan did his studies in astrophysics, specifically early-universe cosmology. After his PhD he switched into ML, working for three years as a data science consultant specializing in industrial ML solutions. Feeling that he had reached the “ceiling” of how far one can go with ML in industry, Evan switched back into research, this time taking up a postdoctoral position in neuromorphic computing at fortiss – the Bavarian State Research Institute. Thus far his work has focused mainly on creating real neurorobotic use cases and on incorporating elements of event-based vision. He also enjoys collecting random facts for pub trivia nights, and writing non-fiction in cafés.

Camilo started his career studying mechatronics with a focus on robotics and automation. After finishing his engineering studies and gathering some experience in industry, he decided to follow his passion by doing an MSc. in Robotics, Cognition, and Intelligence. During his master’s degree he became involved in bio-inspired technologies and brain-inspired AI, and accordingly, he wrote his master’s thesis alongside the Neuromorphic Computing competence field within fortiss – the Bavarian state research institute for software-intensive systems and AI. Now he is a full-time researcher at fortiss, and his fields of research include event-based vision, neuromorphic computing, reinforcement learning, and robotics.

How to join:

INRC Members can find the meeting link on INRC Fall Forum Schedule. If you are interested in becoming a member, join the INRC.

Interactive continual learning in neuromorphic hardware for assistive robots

This month, my team’s work on continual learning for robots was featured in a number of tech news articles (Is Intel Labs' brain-inspired AI approach the future of robot learning?) after our project was recognized with the Best Paper award at this year's International Conference for Neuromorphic Systems (Intel goes to ICONS 2022). In this short article, I’ll dive into the research behind the headline, to give you a better understanding of the neuromorphic technology we developed. You can also read the full paper here: ACM Digital Library or PDF

Learning from examples is a key achievement of modern data-driven AI. Continual and lifelong learning from new examples, on the other hand, still eludes most deep learning-based AI systems today. The reason for this lies in the core of the deep learning algorithm: error backpropagation.

Backpropagation involves gradient-based adjustment of the neural network's parameters (weights), and it is a slow, incremental process that changes millions or billions of parameters that contribute to the errors produced by a network for a given batch of data samples.

To enable the gradient-based learning algorithm to converge to a good solution, each example must only change the network by a tiny amount. Moreover, it is important that examples come in a balanced sequence: a network that has only seen "cats" and then starts seeing only "dogs" will have a difficult time learning the second, new concept without forgetting the first one. This catastrophic forgetting is a major problem for neural networks today.
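The effect is easy to reproduce in a toy setting. The sketch below (our own illustrative NumPy example, not from any cited paper) trains a single linear model by gradient descent on one task and then on a second; the error on the first task climbs right back up:

```python
import numpy as np

# Toy illustration of catastrophic forgetting: a linear model trained
# by gradient descent on task A, then on task B, loses task A.

def train(w, X, y, lr=0.1, steps=200):
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
X_a = rng.normal(size=(50, 3)); y_a = X_a @ np.array([1.0, -2.0, 0.5])  # task A
X_b = rng.normal(size=(50, 3)); y_b = X_b @ np.array([-1.0, 0.0, 2.0])  # task B

w = train(np.zeros(3), X_a, y_a)
loss_a_before = np.mean((X_a @ w - y_a) ** 2)  # near zero: task A learned
w = train(w, X_b, y_b)
loss_a_after = np.mean((X_a @ w - y_a) ** 2)   # large: task A forgotten
```

Sequential training overwrites the very weights that encoded the first task, which is exactly the failure mode continual learning methods try to avoid.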

However, when we think about future robotic applications – in homes, hospitals, or retail stores – we would benefit from more flexible learning models that are small enough to compute locally and can be trained on the job.

In such a setting, a user may show objects of interest to a robot, one by one, and might add a couple more later, instead of relying only on a pretrained and rigid network model. After all, this is how we expect a human apprentice to learn new tasks.

To obtain the best performance from AI models, we need a hybrid solution that combines slow gradient-based learning for feature extraction with a different type of learning – fast, one-shot learning from examples.

My research team at Intel's Neuromorphic Computing Lab develops neural algorithms that enable breakthroughs in human-centered robotics. Elvin Hajizada focuses on continual object learning in collaboration with Prof. Gordon Cheng from TU Munich and EDPR at IIT in Genoa. The paper we presented at ICONS was our first neuromorphic architecture tackling the challenge of continual learning for robots using the Intel Loihi neuromorphic research processor.

Our neural architecture combines a deep convolutional neural network for feature extraction with a continual learning layer of dynamic weights. These weights continually change according to a dynamical rule -- a three-factor learning rule.

The learning rule determines when each weight should increase or decrease, creating a "memory trace" of the visual appearance of an object and updating it each time recognition fails or an error occurs. Each significantly different view of an object is represented by its own set of weights and a separate group of output neurons, matching the complexity of the object representation to the complexity of the object's appearance under different viewing angles. Separate representations for each object alleviate the problem of catastrophic forgetting.
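As a rough sketch of the idea (our own illustration with hypothetical variable names, not the paper's exact rule), a three-factor update gates a Hebbian pre-times-post term with a third, error-driven modulation signal:

```python
import numpy as np

# Illustrative three-factor update: weights change only when the third
# factor (an error/novelty signal) is active, imprinting the currently
# active input pattern onto the winning output neuron's weights.

def three_factor_update(w, pre, post, modulator, lr=0.05):
    """w: (n_post, n_pre) weights; pre, post: activity vectors;
    modulator: scalar third factor that gates learning."""
    return w + lr * modulator * np.outer(post, pre)

w = np.zeros((2, 4))
pre = np.array([1.0, 0.0, 1.0, 0.0])   # active input features
post = np.array([0.0, 1.0])            # neuron 1 represents this view

w_unchanged = three_factor_update(w, pre, post, modulator=0.0)  # recognition OK
w_updated = three_factor_update(w, pre, post, modulator=1.0)    # error detected
```

With the modulator at zero nothing changes; when an error is signaled, a memory trace of the active pattern is written into the weights of the corresponding output neuron.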

The third factor in the learning rule controls the learning dynamics and makes the learning process autonomous. This is done by detecting the states of the network in which the weights should be updated. These states are detected by a group of neurons that we call a Neural State Machine (NSM). In 2019, my colleagues and I showed how NSMs can enable robust learning for intelligent agents (pdf) like robots.

For the current work, the NSM detects different states of the learning and recognition system, e.g. "an object is present and not recognized" or "this label has been seen before". Activated state-neurons trigger different actions accordingly, e.g. asking the user for a label, signaling recognition, updating synaptic weights that represent stored patterns, or recruiting neurons for new objects or object views.
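In ordinary Python, the detect-state-then-act logic might be sketched like this (state and action names are hypothetical; on Loihi the states are detected by groups of neurons rather than if-statements):

```python
# Minimal sketch of a Neural State Machine as state detection plus
# action dispatch (illustrative names, not the paper's notation).

def nsm_step(object_present, recognized, label_seen_before):
    """Map the current system state to the action it should trigger."""
    if not object_present:
        return "idle"
    if recognized:
        return "signal_recognition"
    if label_seen_before:
        return "update_weights"       # refine the stored pattern
    return "ask_user_for_label"       # recruit neurons for a new object
```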

The neural state machine and three-factor learning rule work together to allow a robot to memorize a handful of objects (8 in the paper) in multiple 3D views (8 per object) and recognize them with 96% accuracy in interactive learning-recognition sessions. At each presentation of an object, the robot attempts to recognize it; if recognition fails, it requests a label and updates the object representation. When errors are made, both the false-positive and the false-negative representations are updated. Learning can be triggered at any time, and fewer updates are needed as the robot learns a given set of objects.

We ran the system on Loihi, Intel's previous-generation neuromorphic chip, using the NxSDK software. The classification neural network consumed 150x less energy than state-of-the-art continual learning architectures solving the same task on a conventional processor. This is one of the first examples of exploiting the vast algorithmic space of stateful neural networks with different topologies: a feedforward feature extractor, a layer of plastic weights, and a neural state machine.

In the coming months, my team and I plan to transfer these models to Lava, the open-source neuromorphic framework that supports Loihi 2, which will provide even greater performance benefits and will make the algorithms available for the whole research community to build on.

Biological neural circuits can provide inspiration for such novel neural algorithms, while neuromorphic hardware provides the right computing substrate for their efficient implementation. We believe this approach to designing neural network models with on-chip learning will lead to breakthroughs in cognitive and interactive robotics.

If you're interested in learning more about this project or want to share feedback, let me know in the comments.


References

Elvin Hajizada, Patrick Berggold, Massimiliano Iacono, Arren Glover, and Yulia Sandamirskaya. 2022. Interactive continual learning for robots: a neuromorphic approach. In Proceedings of the International Conference on Neuromorphic Systems 2022 (ICONS '22). https://doi.org/10.1145/3546790.3546791 (pdf)

Dongchen Liang, Raphaela Kreiser, Carsten Nielsen, Ning Qiao, Yulia Sandamirskaya, and Giacomo Indiveri. 2019. Neural state machines for robust learning and control of neuromorphic agents. In IEEE Journal on Emerging and Selected Topics in Circuits and Systems. https://doi.org/10.1109/JETCAS.2019.2951442 (pdf)

Join us Tuesday, October 4, 2022 @ 9:00-10:00am PDT / 18:00-19:00 CET for an exciting INRC Forum!

Agenda

  • Timothy Shea provides an update on Kapoho Point for the INRC and the Lava v0.5 release.

  • Research Talk by INRC member Andrew Sornborger of Los Alamos National Labs.

  • Marcus Williams hosts the first Intel Labs open-door community Q&A. Ask questions and get feedback from Lava developers.

Neural and Circuit Mechanisms for Neuromorphic Algorithms

Abstract: Over the past few years, we have been working on implementing a number of algorithms on neuromorphic chips. In order to do this, we have developed a range of techniques based on a simple portfolio of fundamental mechanisms. The workhorse mechanism is the synfire-gated synfire chain, which we use to control the flow of information on chip. When used in concert with other neural mechanisms and circuit structures, such as spike-timing-dependent plasticity, synaptic connectivity, and encoding schemes, we have been able to implement algorithms to copy synaptic weights from one circuit to another, to learn statistical processes, and, most recently, to construct a fully on-chip, spiking neuromorphic backpropagation algorithm. In this talk, I will discuss our neuromorphic programming framework and show, through examples, how it may be used to build algorithms of interest both for machine learning, as well as more standard algorithms that might be useful as modules in larger neural circuits.

Bio: Andrew Sornborger is a staff scientist at Los Alamos National Laboratory in the Information Sciences division. He worked in computational neuroscience before switching interests to neural and neuromorphic computation, which he has studied for a little over a decade. With Louis Tao (Peking University), he developed the concept of synfire-gated synfire chains (SGSCs) as a control framework for information processing and learning in neural systems. Based on this framework, he and collaborators have developed neural circuits for signal analysis, statistical learning, synaptic copy, and machine learning. He has also studied the theoretical underpinnings of SGSCs in terms of their bifurcation structure and robustness.

How to join:

INRC Members can find the meeting link on the INRC Fall Forum Schedule. If you are interested in becoming a member, join the INRC.

Lava is getting a major update supporting several new Loihi hardware features and the first end-to-end application tutorials. Check out the release notes and clone the public repository to get started. Significant milestones include:

  • An initial version of the Lava Learning API for CPUs that allows users to evaluate learning rule implementations exactly as they run on Loihi 2, and a tutorial for 2-factor STDP.

  • An updated Lava Deep Learning (Lava-DL) application tutorial that makes full use of Loihi 2 features such as convolutional network compression and graded spikes.

  • The first version of a quadratic unconstrained binary optimization (QUBO) solver for optimization problems, and a tutorial on how to tune hyper-parameters to solve real-world problems.
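For readers new to QUBO: the problem is to find the binary vector x minimizing x^T Q x for a given matrix Q. The brute-force sketch below (our own illustration; it is not the Lava solver API, and it scales exponentially) makes the problem statement concrete:

```python
import numpy as np
from itertools import product

# Reference brute-force QUBO minimizer: enumerate all binary vectors x
# and keep the one with the lowest energy x^T Q x.

def qubo_bruteforce(Q):
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in product([0, 1], repeat=n):
        x = np.array(bits)
        e = float(x @ Q @ x)
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Small example: negative diagonal rewards selecting a variable,
# positive off-diagonal entries penalize selecting adjacent pairs.
Q = np.array([[-1.0, 2.0, 0.0],
              [2.0, -1.0, 2.0],
              [0.0, 2.0, -1.0]])
x, e = qubo_bruteforce(Q)   # best solution: x = [1, 0, 1], energy -2
```

A neuromorphic solver explores the same energy landscape with stochastic spiking dynamics instead of enumeration, which is what makes much larger instances tractable.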

This release period also included two new external Pull Requests from members of the community. Lava is available for free with permissive licensing – including commercial use – making it easy to contribute neuromorphic algorithms and applications that can be widely adopted by the community.

As with the previous release (see Lava v0.4.0 release with Lava extension for Loihi), members of the INRC can set up the Lava extension for Loihi by following these instructions. If you’re not a member, take a look at Join the INRC to get started.

Intel goes to ICONS 2022

Intel had a strong showing at ICONS 2022 in Knoxville, Tennessee. We presented three papers, won the Best Paper award, and featured prominently in many of the other presentations. We also participated in an NSF workshop panel discussion where Lava garnered a lot of interest and positive feedback from attendees.

About ICONS

The International Conference on Neuromorphic Systems (ICONS) brings together leading researchers in neuromorphic computing to share emerging research and build a collaborative ecosystem. The focus is on architectures, models, algorithms, and applications of neuromorphic systems.

Papers

The full list of conference papers is available in the ICONS Proceedings, and Intel’s papers are available at the links below. Look out for a separate blog post soon on Elvin’s award-winning paper!

NSF Workshop

The NSF International Workshop On Large Scale Neuromorphic Computing was held on the second day of ICONS, consisting of two panel discussions. I attended in person and served on the first panel “Opportunities & Challenges for Large-scale Neuromorphic Computing”.

The “large-scale” aspect of the panel discussion focused on the difference in scale between neuromorphic models and modern deep neural networks, which can have well over 1 trillion parameters. Although some neuromorphic processors offer features to accelerate conventional AI models (deep networks), there is more to computing than AI, and there is more to AI than backprop-trained deep networks. The neuromorphic computing field should focus on the novel compute and AI capabilities it can provide.

The discussion turned to how these capabilities can be identified and proven. There was a clear consensus among the panelists and audience on the need to be able to share and replicate algorithmic and benchmarking results across teams. When teams can replicate and scrutinize each other’s results, it brings additional credibility to the field and makes for fairer comparisons between approaches. It also allows teams to improve or extend each other’s work, or re-use each other’s work as modular components of a more complex large-scale application.

The need for Lava

At Intel we are already addressing these needs with the Lava software framework, which can support a variety of computing backends and provides a common framework within which algorithms and code can be shared and reused across teams. Lava is available free on github and offers permissive licensing. The idea of Lava was positively received by the audience and panelists, with some asking how they could get started.

The need for such a framework is so strongly recognized that the topic came up again almost immediately in the second panel discussion (some of the panelists had missed the first discussion), and I was called up from the audience to recap the earlier discussion points and the aims of Lava.

Summary

Overall, we had a strong showing at ICONS 2022, came away with a best paper award, and received positive feedback from the community on our plans for Lava.

Tuesday, September 20, 2022, 9:00-10:00am PDT / 18:00-19:00 CET

INRC Member Tech Talk – Qinru Qiu (Syracuse University)

Neuromorphic Computing for Energy Efficient Near Sensor Adaptive Machine Intelligence

Abstract: IoT and edge devices are on the frontier of interacting with the physical world for sensing, perception, and recognition. The limited battery capacity of these devices demands highly energy-efficient information representation, computing, and communication. The constantly changing environment and mission requirements call for the ability of online learning and adaptation. Inspired by the structure and behavior of biological neural systems, spiking neural network (SNN) models and neuromorphic computing hardware adopt many energy-efficient features of biological systems, and they have been proven effective for mobile and edge applications. In this talk I will introduce our work on applying SNNs and neuromorphic computing to processing multivariate time sequences such as sensor readings. Using neurons modeled as a network of infinite impulse response filters, our SNN can either work as a classifier to detect temporal patterns in the input sequences or as a generator to produce a desired temporal sequence. The ability to discern temporal patterns allows us to adopt a very sparse input representation, where information is encoded by the intervals between spike events. When coupled with event-driven computing and communication, such temporal coding provides significant energy savings. Online learning and domain adaptation of the model will also be discussed in this talk.
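To make the neuron-as-filter idea concrete (our own minimal sketch, not the speaker's model): a leaky integrate-and-fire neuron is a first-order IIR filter whose state decays geometrically and which emits a spike when the filtered input crosses a threshold.

```python
# Sketch of a spiking neuron as a first-order IIR filter: the membrane
# potential v is an exponentially decaying sum of past inputs, and a
# spike is emitted (and v reset) when v crosses the threshold.

def lif_filter(inputs, alpha=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for x in inputs:
        v = alpha * v + x          # first-order IIR update
        if v >= threshold:
            spikes.append(1)
            v = 0.0                # reset after the spike
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold input produces a regular inter-spike interval;
# information can then be carried by the timing between spikes.
spike_train = lif_filter([0.5] * 6)
```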

Bio: Dr. Qinru Qiu received her PhD in Electrical Engineering from the University of Southern California in 2001. She is currently a professor and the director of the graduate program in the Department of Electrical Engineering and Computer Science at Syracuse University. Dr. Qiu has more than 20 years of research experience in machine intelligence and more than 15 years of experience in neuromorphic computing. She is a recipient of the NSF CAREER award in 2009 and the IEEE Region 1 Technological Innovation award in 2020. She serves as an associate editor for IEEE Transactions on Neural Networks and Learning Systems (TNNLS), IEEE Circuits and Systems Magazine, IEEE Transactions on Cognitive and Developmental Systems, and Frontiers in Neuroscience (Neuromorphic Engineering section). She has also served as a technical program committee member of many conferences including DAC, ICCAD, ISLPED, and DATE. She is the director of the Syracuse site of the NSF I/UCRC (Industry-University Collaborative Research Center) ASIC (Alternative Sustainable and Intelligent Computing) Center.

How to join:

INRC Members can find the meeting link on the INRC Fall Forum Schedule. If you are interested in becoming a member, join the INRC.