Session 31 Overview: Computation in Memory for Machine Learning (Technology Directions and Memory Subcommittees)

Many state-of-the-art machine-learning systems are limited by memory, both in the energy they consume and in the performance they can achieve. This session explores how emerging architectures that perform computation inside the memory array can overcome this bottleneck. Doing so requires unconventional, typically mixed-signal, computation circuits, which exploit the statistical nature of machine-learning applications to achieve high algorithmic performance with substantial energy and throughput gains. Further, these architectures serve as a driver for emerging memory technologies, exploiting the high density and nonvolatility such technologies offer to increase the scale and efficiency of computation. The innovative papers in this session provide concrete demonstrations of this promise by going beyond conventional architectures.
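To make the core idea concrete, below is a minimal numerical sketch of the kind of operation these architectures target: a matrix-vector multiply performed as analog accumulation along memory bitlines, followed by coarse ADC quantization. All function names, parameters, and noise values here are illustrative assumptions for exposition, not taken from any paper in the session.

```python
import numpy as np

# Illustrative model of in-memory computing (all parameters are assumptions):
# weights stored as cell conductances, inputs applied on wordlines, and the
# multiply-accumulate realized as current summation on each bitline.

rng = np.random.default_rng(0)

def in_memory_matvec(weights, inputs, adc_bits=4, noise_std=0.02):
    """Model one analog matrix-vector multiply inside a memory array.

    weights : (rows, cols) array in [-1, 1], stored as cell conductances
    inputs  : (rows,) wordline activations (binary or analog)
    """
    # Kirchhoff's current law performs the accumulate "for free":
    # each column's bitline current sums the cell contributions.
    bitline = weights.T @ inputs

    # Analog non-idealities: model device and readout noise as additive
    # Gaussian, scaled to the signal range (an assumed noise model).
    bitline = bitline + rng.normal(
        0.0, noise_std * np.abs(bitline).max(), bitline.shape)

    # A low-resolution ADC digitizes each column. Machine-learning workloads
    # tolerate this quantization statistically, which is one source of the
    # energy and throughput gains the session describes.
    levels = 2 ** adc_bits
    lo, hi = bitline.min(), bitline.max()
    step = (hi - lo) / (levels - 1) or 1.0  # avoid divide-by-zero
    return np.round((bitline - lo) / step) * step + lo

# Example: a 64-input, 16-output layer computed "in memory".
W = rng.uniform(-1, 1, size=(64, 16))
x = rng.integers(0, 2, size=64).astype(float)  # binary activations
print(in_memory_matvec(W, x))
```

The sketch only mimics the signal flow; real designs differ in how weights are encoded, how negative values are handled, and how the ADC is shared across columns.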