Tuesday, June 27, 2023 @ 8:00-9:00am PT / 17:00-18:00 CEST, you are invited to attend an INRC Forum talk by Robert Legenstein of Technische Universität Graz.
Memory-enriched computation and learning through synaptic and non-synaptic plasticity
Abstract: Virtually any task faced by humans has a temporal component and therefore demands some form of memory. Consequently, a variety of memory systems and mechanisms have been shown to exist in the brains of humans and other animals. These memory systems operate on a multitude of time scales, from seconds to years. Yet, it is still not well understood how memory is implemented in the brain and how cortical neuronal networks utilize these systems for computation. In this talk, I will present some recent models that extend (spiking and non-spiking) neural network models with memory using Hebbian and non-Hebbian types of plasticity. I will discuss the similarities between these models and transformers, arguably the most powerful models for sequence processing in machine learning. I will show that Hebbian plasticity can significantly increase the computational and learning capabilities of spiking neural networks. Further, I will show how neurons with non-synaptic plasticity can be utilized for memory and how networks of such neurons can be trained without the need to backpropagate errors through time.
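To make the transformer connection above concrete, here is a minimal sketch of a Hebbian "fast weight" associative memory (an illustration under our own assumptions, not the speaker's exact model): key/value pairs are written into a weight matrix by local outer-product updates, and recall is a single matrix-vector product, essentially the computation performed by linearized transformer attention.

```python
import numpy as np

# Minimal sketch (our illustration, not the speaker's model): a Hebbian
# "fast weight" memory. Key/value pairs are stored by outer-product updates,
# and retrieval is one matrix-vector product -- the same computation that
# linearized transformer attention performs.

rng = np.random.default_rng(0)
d = 64                      # dimensionality of key/value vectors
W = np.zeros((d, d))        # fast Hebbian weight matrix (the memory)
lam, eta = 0.99, 1.0        # decay (forgetting) and learning rate

def write(W, key, value):
    """Hebbian update: strengthen weights where pre (key) and post (value) co-occur."""
    return lam * W + eta * np.outer(value, key)

def read(W, key):
    """Associative recall: project the query key through the stored associations."""
    return W @ key

# Store two random associations, then recall the first one.
k1, v1 = rng.standard_normal(d), rng.standard_normal(d)
k2, v2 = rng.standard_normal(d), rng.standard_normal(d)
k1 /= np.linalg.norm(k1); k2 /= np.linalg.norm(k2)

W = write(W, k1, v1)
W = write(W, k2, v2)

recalled = read(W, k1)
print("cosine(recalled, v1) =",
      recalled @ v1 / (np.linalg.norm(recalled) * np.linalg.norm(v1)))
```

In high dimensions, random keys are nearly orthogonal, so interference between stored pairs stays small; the decay factor implements gradual forgetting on a short time scale.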
Bio(s):
Dr. Robert Legenstein received his PhD in computer science from the Graz University of Technology, Graz, Austria, in 2002. He is a full professor in the Department of Computer Science at TU Graz, heads the Institute for Theoretical Computer Science, and leads the Graz Center for Machine Learning. Dr. Legenstein served as an associate editor of IEEE Transactions on Neural Networks and Learning Systems (2012-2016). He is an action editor for Transactions on Machine Learning Research and has served several times on the program committees of NeurIPS and ICLR. His primary research interests are learning in models of biological networks of neurons and in neuromorphic hardware, probabilistic neural computation, novel brain-inspired architectures for computation and learning, and memristor-based computing concepts.
Recording:
For the recording and slides, see the full INRC Forum Summer 2023 Schedule (accessible only to INRC Affiliates and Engaged Members).
If you are interested in becoming a member, here is the information about joining the INRC.
Tuesday, June 20, 2023 @ 8:00-9:00am PT / 17:00-18:00 CEST, you are invited to attend an INRC Forum talk by Wolfgang Maass, Christoph Stoeckl & Yukun Yang of Technische Universität Graz.
Local prediction-learning in high-dimensional spaces enables neural networks to plan
Abstract: Being able to plan a sequence of actions in order to reach a goal, or more generally to solve a problem, is a cornerstone of higher brain function. But compelling models that explain how the brain achieves this have been missing. We show that local synaptic plasticity enables a neural network to create high-dimensional representations of actions and sensory inputs so that they encode salient information about their relationship. In fact, it can create a cognitive map that reduces planning to a simple geometric problem in a high-dimensional space, one that can easily be solved by a neural network. This method also explains how self-supervised learning enables a neural network to control a complex muscle system so that it can handle locomotion challenges that never occurred during learning. The underlying learning strategy bears some similarity to self-attention networks (Transformers), but it does not require non-local learning rules or very large datasets. Hence it is suitable for implementation in highly energy-efficient neuromorphic hardware, in particular for on-chip learning on Loihi 2.
One goal of our presentation will be to initiate discussions about the relation of this learning-based use of large vectors to other VSA approaches, its relation to Transformers, and possible applications in robotics.
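As a rough illustration of the planning idea (our own toy construction, not the authors' model), the sketch below assigns random high-dimensional codes to the states of a small ring world, learns a linear next-state predictor per action with a purely local delta rule, and then plans by searching for the action sequence whose predicted final code lies closest, by inner product, to the goal code.

```python
import numpy as np
from itertools import product

# Hedged toy sketch: planning reduced to geometry in a high-dimensional space.
# All names and dynamics here are our assumptions, not the talk's actual model.

rng = np.random.default_rng(1)
n_states, d = 10, 256                       # a ring world with 10 states
codes = rng.standard_normal((n_states, d))  # random high-D state embeddings
codes /= np.linalg.norm(codes, axis=1, keepdims=True)

actions = (+1, -1)                          # step right / step left on the ring
M = {a: np.zeros((d, d)) for a in actions}  # one linear predictor per action

# Learn next-state prediction from random exploration (local, Hebbian-like update).
for _ in range(5000):
    s = int(rng.integers(n_states))
    a = actions[int(rng.integers(2))]
    err = codes[(s + a) % n_states] - M[a] @ codes[s]   # prediction error
    M[a] += 0.1 * np.outer(err, codes[s])               # local outer-product update

def rollout(code, seq):
    """Predict the state code reached after executing an action sequence."""
    for a in seq:
        code = M[a] @ code
        code /= np.linalg.norm(code) + 1e-12
    return code

# Plan from state 0 to state 7: pick the action sequence whose predicted final
# code has the largest inner product with the goal code.
start, goal = 0, 7
plan = max((seq for depth in range(1, 8)
            for seq in product(actions, repeat=depth)),
           key=lambda seq: rollout(codes[start], seq) @ codes[goal])
print("planned actions:", plan)   # e.g., (-1, -1, -1): three steps the short way round
```

Because the random codes are nearly orthogonal, the learned predictors map each state code cleanly onto its successor, and comparing predicted codes to the goal code is indeed a purely geometric operation.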
Bio(s):
Wolfgang Maass is a Professor of Computer Science at Technische Universität Graz. He received his PhD (1974) and Habilitation (1978) in Mathematics from Ludwig-Maximilians-Universität in Munich. As a Heisenberg Fellow of the Deutsche Forschungsgemeinschaft, he conducted research at MIT, the University of Chicago, and UC Berkeley. He has served as an editor of Machine Learning (1995-1997), Archive for Mathematical Logic (1987-2000), and Biological Cybernetics (2006-present). He was a Sloan Fellow at the Computational Neurobiology Lab of the Salk Institute in La Jolla, California, from 1997 to 1998. Since 2005, he has been an Adjunct Fellow of the Frankfurt Institute for Advanced Studies (FIAS).
Christoph Stoeckl is a postdoctoral researcher at Technische Universität Graz working at the intersection of computational neuroscience and AI. His research interests include neuromorphic hardware as well as exploring connections between Transformers and neural networks. Before joining the research lab of Prof. Maass, he obtained a Master's degree in Computer Science, also at TU Graz.
Yukun Yang is a first-year doctoral student at Technische Universität Graz, supervised by Prof. Wolfgang Maass. His primary research interest lies at the intersection of AI and neuroscience, with a focus on discovering the learning principles of the brain and their neuromorphic applications. Before joining TU Graz, he earned an M.S. in the ECE Department at Duke University in 2020. Earlier, he received a B.E. in Information Engineering from Xi'an Jiaotong University in 2018.
Recording:
For the recording and slides, see the full INRC Forum Summer 2023 Schedule (accessible only to INRC Affiliates and Engaged Members).
If you are interested in becoming a member, here is the information about joining the INRC.
Virtual Workshop
The latest Intel and INRC member research and announcements. Detailed schedule to follow.
Save-the-Dates:
July 20 - 21
6:00 AM - 12:00 PM PDT
In-Person Bootcamps
Training and hands-on collaboration for INRC projects and Intel Loihi 2 users.
Save-the-Dates:
July 25 - 26
9:00 AM - 5:00 PM CEST
Option 1: Intel Campus, Santa Clara, CA, USA
Option 2: Intel Campeon, Munich, Germany
Please do not book any travel until your bootcamp registration is confirmed by Intel!
Event Details
Workshop registration is open to anyone who is interested in neuromorphic computing.
Some sessions may be NDA-only to allow members to share pre-publication results.
You can apply to join the INRC to start using Intel Loihi 2.
Questions or feedback? Email inrc_interest@intel.com or leave a comment below.
Are your teammates or colleagues interested in neuromorphic computing? Share this page and invite them to attend.
Tuesday, June 6, 2023 @ 8:00-9:00am PT / 17:00-18:00 CEST, you are invited to attend an INRC Forum talk by Kenneth Stewart of the University of California, Irvine.
Emulating Brain-like Rapid Learning in Neuromorphic Edge Computing
Abstract: Achieving real-time, personalized intelligence at the edge with learning capabilities holds enormous promise to enhance our daily experiences and assist in decision-making, planning, and sensing. Yet, today's technology encounters difficulties with efficient and reliable learning at the edge, due to a lack of personalized data, insufficient hardware, and the inherent challenges posed by online learning. Over time and across multiple developmental phases, the brain has evolved to incorporate new knowledge by efficiently building on previous knowledge. We seek to emulate this remarkable process in digital neuromorphic technology through two interconnected stages of learning.
Initially, a meta-training phase fine-tunes the learning hardware's hyperparameters for few-shot learning by deploying a differentiable simulation of three-factor learning in a neuromorphic chip. This meta-training process refines the synaptic plasticity and related hyperparameters to align with the specific dynamics inherent in the hardware and the given task domain. During the subsequent deployment stage, these optimized hyperparameters enable accurate learning of new classes using the local three-factor synaptic plasticity updates.
We demonstrate our approach using event-driven vision sensor data on the Intel Loihi neuromorphic processor and its associated plasticity dynamics, achieving state-of-the-art accuracy in one-shot, real-time learning of new categories across three task domains. Our methodology is versatile and can be applied in situations demanding quick learning and adaptation at the edge, such as navigating unfamiliar environments or learning unexpected categories of data through user engagement.
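For readers unfamiliar with three-factor plasticity, here is a minimal sketch of such a rule (an illustration with assumed parameters and dynamics, not the talk's actual Loihi implementation): pre/post spike coincidences accumulate in a decaying eligibility trace, and a weight update is committed only when a third, modulatory factor such as an error or reward signal arrives.

```python
import numpy as np

# Minimal sketch of a three-factor plasticity rule (our illustration; parameter
# names and dynamics are assumptions, not the talk's Loihi implementation).
# Pre/post coincidences accumulate in an eligibility trace; the actual weight
# update happens only when a third, modulatory signal arrives.

rng = np.random.default_rng(2)
n_pre, n_post = 30, 5
w = 0.1 * rng.standard_normal((n_post, n_pre))  # synaptic weights
elig = np.zeros_like(w)                          # eligibility traces
tau_e, eta = 20.0, 0.05                          # trace time constant, learning rate

for t in range(200):
    pre = (rng.random(n_pre) < 0.1).astype(float)   # presynaptic spikes
    post = (w @ pre > 0.0).astype(float)            # crude threshold neurons
    # Factors 1 x 2: pre/post coincidence decays into the eligibility trace.
    elig += -elig / tau_e + np.outer(post, pre)
    # Factor 3: a sparse modulatory signal (here a random surrogate for the
    # error/reward delivered by the learning loop) gates the committed update.
    if t % 50 == 49:
        modulator = rng.standard_normal(n_post)     # per-neuron third factor
        w += eta * modulator[:, None] * elig
        elig[:] = 0.0

print("weight norm after learning:", np.linalg.norm(w))
```

In the meta-training stage described above, quantities like the trace time constant and learning rate would be the hyperparameters being tuned; at deployment, only the local updates themselves run on-chip.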
Bio(s):
Kenneth Stewart is a final-year Ph.D. candidate in Computer Science at the University of California, Irvine, advised by Professors Emre Neftci, Nikil Dutt, and Jeffrey Krichmar. Throughout his Ph.D., Kenneth has investigated adaptive learning algorithms for spiking neural networks that can be applied in neuromorphic hardware for online, on-chip learning. He has published several papers in the area and was a candidate for the IEEE AICAS 2020 Best Paper Award. He has also co-authored patents on adaptive edge learning for gesture- and speech-recognition applications with the Accenture Future Tech Lab. Kenneth is one of the leading members of NeuroBench's few-shot online learning initiative, which aims to motivate further research in the area. After earning his degree at the end of the summer, Kenneth hopes to scale up his research and apply it to real-world problems.
Recording:
For the recording, see the full INRC Forum Summer 2023 Schedule (accessible only to INRC Affiliates and Engaged Members).
If you are interested in becoming a member, here is the information about joining the INRC.