Self-calibration and learning on chip: towards neuromorphic robots

Raphaela Kreiser, Alpha Renner, Gabriel Waibel, and Yulia Sandamirskaya
Institute of Neuroinformatics, University of Zurich and ETH Zurich, 8057 Zurich, Switzerland
rakrei@ini.uzh.ch

Biological neural systems, even those of simple animals, solve many tasks relevant to robotics with unprecedented efficiency and flexibility. Drawing inspiration from these systems, neuromorphic engineers are developing a new generation of neurally inspired hardware that realises spiking neural networks running in real time on compact (a few mm²) computing devices that consume just a few mW of power. In our work, we develop neuronal computing architectures for neuromorphic hardware that solve different robotic tasks. Here we present one such architecture, which estimates the pose of an agent based on external cues, a learned map, and the integration of self-motion signals. We demonstrate, for the first time, online adaptation and error correction of the pose estimate realised fully in a spiking neural network, running on the neuromorphic research chip Loihi and interfaced to a robotic vehicle.

I. MOTIVATION AND RESULTS

Neuromorphic engineers originally followed a bottom-up approach, emulating the dynamics and structure of cortical neuronal networks in electronic hardware [1]. Today, both mixed-signal and digital neuromorphic devices can solve relevant computational tasks with efficient, massively parallel, event-driven neuronal networks on chip [2], [3], [4]. This hardware is potentially well suited for robotic applications due to its compact size, low power consumption, and massively parallel, event-based processing.

Here, we present a crucial component of SLAM: a state-estimation architecture with adaptation and online learning. We consider a 1D case of a small robotic vehicle rotating on the spot and observing a number of objects (for simplified perception, these are realised as blinking LEDs, sensed with an event-based vision sensor, the DVS [5]). In this simple setting, we address all sub-tasks of SLAM: the system estimates the orientation of the robot through path integration on chip; at the same time, the orientation can be inferred from the learned map, since visual cues leave memory traces in plastic synapses between the state estimation and the representation of the visual cues. When the two state estimates do not match, the error is detected and its magnitude is estimated, triggering an update of either the speed of neuronal path integration or the map. Each architectural component is implemented with spiking integrate-and-fire neuronal networks on the neuromorphic research chip Loihi. The network is tested both with recorded data and on the physical robot, showing online learning, forgetting, map update, and adaptation.

Fig. 1(A) shows the core component of the spiking neural network (SNN): the error-estimation circuit that computes the difference between the orientation estimate obtained by integrating velocity signals in a heading direction (HD) network and the orientation estimate based on the learned map of visual cues detected by the robot on previous turns. The activity plot in Fig. 1(B) shows spikes from the HD neurons for a trial in which the integration speed had to be reduced after a small positive error was detected: the estimate based on path integration was ahead of the visually induced estimate. Fig. 1(C), by contrast, shows how the neuronal map is updated when a large error is detected.
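The paper implements this loop as spiking networks on Loihi and does not list the underlying update equations; the following minimal, rate-based Python sketch only illustrates the idea of the error-detection and recalibration circuit. All names and constants (the ring size N, the gain adaptation rate, the error threshold, the stand-in visual estimate) are assumptions for the sketch, not values from the paper.

```python
# Illustrative sketch: path integration on an HD ring, error detection
# against a map-based (visual) estimate, and gain recalibration.
# Rate-based abstraction; the actual system is a spiking network on Loihi.

N = 100                       # HD ring: N neurons cover 360 degrees
gain = 1.0                    # path-integration gain, adapted on small errors
eta = 0.02                    # adaptation rate (assumed)
hd = 0.0                      # path-integrated heading (in neuron indices)
true_heading = 0.0            # ground truth, stands in for the visual estimate
omega_cmd = 0.5               # commanded angular velocity (indices per step)
omega_real = 0.45             # actual velocity (e.g. wheel slip) -> drift

for t in range(1, 501):
    hd = (hd + gain * omega_cmd) % N          # on-chip path integration
    true_heading = (true_heading + omega_real) % N
    if t % 50 == 0:                           # a learned LED comes into view
        visual = true_heading                 # map-based orientation estimate
        # signed ring distance, as computed by the error-estimation circuit
        err = (hd - visual + N / 2) % N - N / 2
        if abs(err) < 10:                     # small error: recalibrate the
            gain -= eta * err                 # integration speed (Fig. 1B);
            hd = visual                       # a large error would instead
        print(f"t={t:3d}  err={err:+6.2f}  gain={gain:.3f}")  # update the map
```

With the assumed wheel slip, the path-integrated estimate runs ahead of the visual one (a small positive error, as in the trial shown in Fig. 1B), and the gain converges toward the value that matches the robot's actual rotation speed.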
Fig. 1. (A) SNN architecture for path integration, error detection, recalibration, and map learning. (B) Neural activity in the HD network: when an error is detected, the path-integration speed is adjusted. (C) Synaptic weights increase when LEDs are detected at new positions and decrease when LEDs were learned but are no longer detected. (D) The neuromorphic processor Loihi. (E) The robotic vehicle.
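The map-learning behaviour described for Fig. 1(C) can likewise be sketched in software. The rule below is an assumed Hebbian-style abstraction of the on-chip plasticity: the array shapes, learning and forgetting rates, and the binary "detected" signal are all illustrative, not taken from the paper.

```python
# Illustrative sketch of the plasticity in Fig. 1(C): synapses between HD
# neurons and cue (LED) neurons strengthen when both are active, and decay
# when the HD neuron is active but the learned cue is not detected.

N_HD, N_CUES = 100, 4
w = [[0.0] * N_CUES for _ in range(N_HD)]   # plastic synapses HD -> cue cells
LEARN, FORGET = 0.2, 0.05                   # assumed rates

def update_map(hd_idx, detected):
    """Update the map at the currently active HD position.

    detected[c] is 1 if cue c is currently seen by the DVS, else 0.
    """
    for c in range(N_CUES):
        if detected[c]:
            w[hd_idx][c] += LEARN * (1.0 - w[hd_idx][c])   # learn association
        else:
            w[hd_idx][c] -= FORGET * w[hd_idx][c]          # forget stale one

# Example: on one turn, cue 0 is seen at heading index 30 ...
update_map(30, [1, 0, 0, 0])
# ... on a later turn it appears at index 40 and is no longer seen at 30:
# the old association decays while the new one is learned, i.e. the map
# update triggered by a large detected error.
update_map(40, [1, 0, 0, 0])
update_map(30, [0, 0, 0, 0])
print(w[30][0], w[40][0])
```

Because the forgetting term is proportional to the current weight, synapses that were never potentiated are unaffected, so only previously learned but now-absent cues are weakened, matching the behaviour shown in the caption of Fig. 1(C).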