Adaptation, learning and storage in analog VLSI

Adaptation and learning are key elements in biological and artificial neural systems for computational tasks of perception, classification, association, and control. They also provide an effective means to compensate for imprecision in highly efficient analog VLSI implementations of parallel application-specific processors, which offer real-time operation and low power dissipation. The effectiveness of embedded learning and adaptive functions in analog VLSI relies on careful design of the implemented adaptive algorithms, and on adequate means for local, long-term analog storage of the adapted parameter coefficients. We address issues of technology, algorithms, and architecture in analog VLSI adaptation and learning, and illustrate these with examples of prototyped ASIC processors.
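The abstract does not spell out a particular adaptive algorithm, but a family well suited to analog VLSI is stochastic weight perturbation (model-free error descent): rather than computing gradients through the physical circuit, each parameter is perturbed by a small random amount and the measured change in error drives the update. A minimal sketch, using a hypothetical toy task (fitting a two-parameter linear model) purely for illustration:

```python
import random

def loss(w, data):
    """Mean squared error of a linear model y = w[0]*x + w[1] (toy task)."""
    return sum((w[0] * x + w[1] - y) ** 2 for x, y in data) / len(data)

def perturbative_step(w, data, beta=0.01, eta=0.1):
    """One stochastic weight-perturbation update.

    Every weight is perturbed in parallel by a random +/-beta; the
    observed error difference yields a finite-difference estimate of
    the directional derivative, which drives the update. No analytic
    gradient of the (possibly unknown) error function is needed.
    """
    pi = [random.choice((-beta, beta)) for _ in w]
    base = loss(w, data)
    pert = loss([wi + p for wi, p in zip(w, pi)], data)
    grad_est = (pert - base) / beta          # ~ gradient projected on pi
    return [wi - eta * grad_est * (p / beta) for wi, p in zip(w, pi)]

# Usage: learn y = 2x + 1 from four samples (illustrative only).
data = [(x, 2 * x + 1) for x in (-1.0, 0.0, 1.0, 2.0)]
random.seed(0)
w = [0.0, 0.0]
for _ in range(500):
    w = perturbative_step(w, data)
```

In hardware, the two `loss` evaluations correspond to measuring the network's output error with the perturbation applied and removed, which is why the scheme tolerates device mismatch: the update is driven by the actual circuit response, not a model of it.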
