On-chip learning in the analog domain with limited precision circuits

The precision constraints imposed by gradient descent learning in the analog domain are considered. Previous studies have investigated the precision necessary to perform weight update calculations with weights stored in digital registers, and found that the learning calculations must be performed with at least 12 b of precision while the feedforward precision can be as low as 6 b. In the present work, the effect of offsets that arise when the calculations are performed in the analog domain is investigated, and an alteration to the standard weight perturbation algorithm is proposed. The modified algorithm tolerates offsets as large as 1 part in 8 b, thus enabling fast on-chip learning with weights stored in dense analog memory.
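
For context, the sketch below illustrates the standard (unmodified) weight perturbation algorithm that the abstract refers to, and how an additive offset in the analog error-difference measurement biases every weight update. The toy task, the learning rate eta, the perturbation size delta, and the offset value are assumptions chosen for illustration only; the abstract does not describe the proposed alteration, so it is not reproduced here.

```python
# A minimal sketch (not the paper's method): standard serial weight
# perturbation on a toy linear neuron, with an additive offset injected into
# the "analog" error-difference measurement. All constants and the toy task
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: recover the weights of a 4-input linear neuron.
w_true = np.array([0.5, -1.0, 0.25, 0.75])
X = rng.normal(size=(64, 4))
y = X @ w_true

def error(w):
    """Sum-squared output error, standing in for the measured network error."""
    return np.sum((X @ w - y) ** 2)

def weight_perturbation(w, eta=0.005, delta=0.01, offset=0.0, epochs=200):
    """Perturb each weight by `delta`, measure the change in error, and take a
    gradient step. `offset` models a fixed error in the analog subtraction
    E(w + delta) - E(w), which biases every gradient estimate."""
    w = w.copy()
    for _ in range(epochs):
        for i in range(len(w)):
            e0 = error(w)
            w[i] += delta
            e1 = error(w)
            w[i] -= delta                          # restore the weight
            grad_est = (e1 - e0 + offset) / delta  # offset corrupts the estimate
            w[i] -= eta * grad_est
    return w

w0 = rng.normal(scale=0.1, size=4)
print("true weights :", w_true)
print("no offset    :", np.round(weight_perturbation(w0, offset=0.0), 2))
print("with offset  :", np.round(weight_perturbation(w0, offset=0.5), 2))
```

Because the finite-difference estimate divides the measured error change by the small perturbation delta, a fixed offset adds a systematic bias of offset/delta to every estimated gradient, which is why tolerance to measurement offsets is the central requirement for learning with analog circuitry.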