Neural approximating architecture targeting multiple application domains

Approximate computing has emerged as a promising technique for achieving high energy efficiency. Multi-layer perceptron (MLP) models can approximate many modern applications with little quality loss. However, the variety of MLP topologies prevents a fixed hardware design from performing well in all cases. In this paper, we propose a scheduling framework that guides the mapping of MLPs onto limited hardware resources with high performance. We then design a reconfigurable neural architecture (RNA) to support the proposed scheduling framework. RNA can be reconfigured to accelerate different MLP topologies and achieves higher performance than other MLP accelerators.
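To make the idea concrete, the sketch below shows the kind of computation such an accelerator targets: a plain MLP forward pass whose per-layer structure (matrix-vector multiply plus activation) is what gets scheduled onto hardware resources. The 2-8-1 topology and random weights are purely illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    # Standard logistic activation used in many MLP-based approximators.
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(x, weights, biases):
    """Forward pass through an MLP: each layer computes sigmoid(W @ a + b).

    The per-layer matrix-vector products are the units of work a scheduler
    would map onto a fixed pool of processing elements.
    """
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a

# Hypothetical 2-8-1 topology: 2 inputs, one hidden layer of 8, 1 output.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 2)), rng.standard_normal((1, 8))]
biases = [rng.standard_normal(8), rng.standard_normal(1)]

y = mlp_forward(np.array([0.5, -0.2]), weights, biases)
```

Because every layer reduces to the same multiply-accumulate-plus-activation pattern, a reconfigurable datapath only needs to adapt the layer dimensions, which is the flexibility the RNA design provides.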
