Energy Efficient and Adaptive Analog IC Design for Delay-Based Reservoir Computing

In recent years, neuromorphic computing has achieved considerable success due to its ability to process data-intensive applications faster and with less power than traditional computer architectures. Recurrent neural networks, a class of artificial neural networks, are well suited to emulating biological neurons, but they are complex to train; reservoir computing simplifies this by training only the readout stage. This work describes an analog circuit implementation of a delay-based reservoir computing system. It comprises a Mackey-Glass nonlinearity function, a voltage-to-current converter, an analog-to-spike signal converter, a delay line, and a spike-to-analog signal converter for the feedback path. The system outputs are read from the delay line, where the readout weights can be calibrated for the target application. The design was implemented in TSMC 180 nm technology with a maximum supply voltage of 1.8 V for the neurons. The design is distinctive in that the delayed-feedback network provides persistent memory for data processing and operates on analog signals directly, without analog-to-digital converters, making it area- and power-efficient. The number of neurons in the delay line can be scaled up or down depending on the application, giving the user flexibility. The nonlinearity function requires no external supply, drawing power from the incoming analog signals; the total power consumed by the system is 4.6 mW.
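The architecture above (a single Mackey-Glass nonlinear node whose output is time-multiplexed across a delay line of virtual neurons, with only the readout weights trained) can be illustrated with a small behavioral sketch. This is a generic software model of delay-based reservoir computing, not the paper's circuit; the mask values, gain parameters, and the saturating `x/(1+|x|^p)` form of the nonlinearity are illustrative assumptions.

```python
import numpy as np

def mackey_glass(x, gamma=0.5, p=2.0):
    # Saturating Mackey-Glass-style nonlinearity standing in for the analog node
    # (illustrative form; the chip realizes this with a dedicated analog circuit)
    return gamma * x / (1.0 + np.abs(x) ** p)

def delay_reservoir(u, n_virtual=50, eta=0.5, feedback=0.8, seed=0):
    """Single nonlinear node with delayed feedback.

    Each input sample is masked and time-multiplexed across n_virtual
    'virtual neurons' (taps along the delay line); the tap values at each
    step form the reservoir state that the readout sees.
    """
    rng = np.random.default_rng(seed)
    mask = rng.choice([-0.5, 0.5], size=n_virtual)  # fixed random input mask
    states = np.zeros((len(u), n_virtual))
    delay_line = np.zeros(n_virtual)  # previous contents of the feedback loop
    for t, sample in enumerate(u):
        for i in range(n_virtual):
            drive = eta * mask[i] * sample + feedback * delay_line[i]
            delay_line[i] = mackey_glass(drive)
        states[t] = delay_line
    return states

def train_readout(states, targets, ridge=1e-6):
    # Only the readout is trained: a ridge-regression linear layer
    X = np.hstack([states, np.ones((len(states), 1))])  # append bias column
    w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ targets)
    return w

# Usage: train the readout on a simple one-step memory task
rng = np.random.default_rng(1)
u = rng.uniform(-1.0, 1.0, 500)
target = np.roll(u, 1)            # reproduce the previous input sample
S = delay_reservoir(u)
w = train_readout(S[10:], target[10:])
pred = np.hstack([S[10:], np.ones((490, 1))]) @ w
```

The delayed feedback term is what gives the reservoir its persistent memory: each virtual neuron's new state mixes the current masked input with its own value from one loop period earlier, mirroring the role of the spike-to-analog feedback path in the hardware.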