A Compact and Fast Silicon Implementation for Layered Neural Nets

In this paper we present an architecture for the implementation of layered artificial neural networks. The building blocks of the architecture are simple processing elements interconnected with a set of shift registers that supply the processing elements with data and weights. The paper concentrates on optimizing the architecture with respect to three main parameters: silicon area, computational latency, and throughput.
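The dataflow the abstract describes, with shift registers delivering data and weights to simple multiply-accumulate processing elements, can be sketched in software. The function name, register layout, and per-cycle schedule below are illustrative assumptions for a minimal simulation, not the paper's actual hardware design:

```python
# Hypothetical sketch: each processing element (PE) computes one output
# neuron by multiply-accumulate, consuming one input per cycle alongside
# the weight at the matching position of its own shift register.

def layer_forward(inputs, weight_rows):
    """Simulate one layer of PEs fed by shift registers.

    inputs      : activations from the previous layer, shifted in one per cycle
    weight_rows : one weight shift register (list) per PE
    """
    accumulators = [0.0] * len(weight_rows)  # one accumulator per PE
    for cycle, x in enumerate(inputs):       # inputs shift in, one per cycle
        for pe, weights in enumerate(weight_rows):
            # each PE multiplies the broadcast input by the weight currently
            # at the head of its shift register and accumulates the product
            accumulators[pe] += x * weights[cycle]
    return accumulators

# Example: 3 inputs feeding 2 PEs (activation function omitted)
outs = layer_forward([1.0, 2.0, 3.0],
                     [[0.5, 0.5, 0.5],
                      [1.0, 0.0, -1.0]])
# outs == [3.0, -2.0]
```

In this scheme the latency of one layer is one cycle per input, and throughput grows with the number of PEs operating in parallel, which is the area/latency/throughput trade-off the paper sets out to optimize.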