On-chip learning in the analog domain with limited precision circuits
The precision constraints imposed by gradient descent learning in the analog domain are considered. Previous studies have investigated the precision necessary to perform weight update calculations with weights stored in digital registers, and have found that the learning calculations must be performed with at least 12 b of precision, while the feedforward precision can be as low as 6 b. In the present work, the effect of offsets when performing calculations in the analog domain is investigated. An alteration to the standard weight perturbation algorithm is proposed that allows learning with offsets as large as one part in 2^8 (8 b equivalent), thus enabling fast on-chip learning with weights stored in dense analog memory.
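For intuition, below is a minimal Python sketch of serial weight perturbation in which the analog measurement of the error difference is corrupted by a fixed additive offset. The toy task, the offset model, and the random-sign mitigation (randomizing the perturbation sign so the offset term in the gradient estimate is zero-mean across updates) are illustrative assumptions only; the paper's actual alteration to the algorithm is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: fit y = X @ w_true with a single linear unit,
# standing in for the analog network whose training error is measured.
X = rng.normal(size=(32, 4))
w_true = rng.normal(size=4)
y = X @ w_true

def error(w):
    return float(np.mean((X @ w - y) ** 2))

def measured_diff(w, i, delta, offset):
    """Measure E(w + delta*e_i) - E(w), corrupted by a fixed additive
    offset in the analog difference circuit (assumed offset model)."""
    wp = w.copy()
    wp[i] += delta
    return (error(wp) - error(w)) + offset

def wp_epoch(w, eta, delta, offset, randomize_sign):
    """One serial weight-perturbation pass over all weights."""
    for i in range(w.size):
        # With a fixed perturbation sign, the offset contributes a
        # systematic bias offset/delta to every gradient estimate.
        # With a random sign s, the bias term offset/(s*delta) flips
        # sign randomly and averages out over updates.
        s = rng.choice([-1.0, 1.0]) if randomize_sign else 1.0
        g = measured_diff(w, i, s * delta, offset) / (s * delta)
        w[i] -= eta * g
    return w

for label, randomize in (("fixed-sign", False), ("random-sign", True)):
    w = np.zeros(4)
    for _ in range(300):
        w = wp_epoch(w, eta=0.05, delta=0.05, offset=0.01,
                     randomize_sign=randomize)
    print(f"{label}: final error = {error(w):.4g}")
```

Running the sketch, the fixed-sign run stalls at an error floor set by the offset-induced bias, while the random-sign run converges much closer to zero, illustrating why an offset-tolerant variant of weight perturbation is needed for analog implementations.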