Area compaction in silicon structures for neural net implementation

Abstract One of the problems designers must cope with in WSI implementations of neural nets is their extreme connectivity requirements. This paper presents an architecture capable of implementing feed-forward neural nets. The architecture is built from regular building blocks, allowing faster design and higher density. A second architecture, derived from the first and achieving better area exploitation, is then discussed.
