Neural networks play an important role in many artificial intelligence application domains. In most applications, neural networks are implemented in software. Although a software implementation provides flexibility, its computation speed is limited by the sequential machine architecture. In most applications of artificial neural networks, the learning procedure is carried out off-line, and a large number of mathematical operations are required to perform the learning task. Software implementations of neural network systems therefore work well only on high-performance computers; their learning performance is inadequate on embedded systems. With the development of modern semiconductor technologies, there have been attempts to realize neural networks in hardware to improve performance. Designs that rely on special architectures and parameters to achieve this performance have been proposed in the past. This paper proposes a high-efficiency, generic neural network hardware architecture. The architecture uses a toroidal series multiple data stream to process back-propagation neural network operations and provides full recall and learning capabilities. Users can adjust the number of processor elements (PEs) in the system to the requirements of an application by setting register values. Since the proposed system is implemented in hardware, it can easily be integrated into embedded systems. The experimental results show that the system reaches much higher performance using fewer logic elements while maintaining flexibility.
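The abstract refers to the recall (forward) and learning (back-propagation) operations that the hardware parallelizes across its PEs. As a minimal software sketch of those two operations, the following trains a 2-2-1 sigmoid network on XOR; the network size, learning rate, and epoch count are illustrative assumptions, not parameters taken from the paper.

```python
# Minimal sketch of the recall and back-propagation learning steps.
# All sizes and hyperparameters here are illustrative assumptions.
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# 2-2-1 network: each weight vector carries two input weights and a bias.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]

def recall(x):
    """Forward (recall) pass: hidden activations and output."""
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, y

def learn(x, t, eta=0.5):
    """One back-propagation update; returns the squared error."""
    h, y = recall(x)
    delta_o = (t - y) * y * (1 - y)              # output-layer delta
    for j in range(2):                           # hidden-layer updates
        delta_h = delta_o * w_o[j] * h[j] * (1 - h[j])
        w_h[j][0] += eta * delta_h * x[0]
        w_h[j][1] += eta * delta_h * x[1]
        w_h[j][2] += eta * delta_h               # bias term
    w_o[0] += eta * delta_o * h[0]
    w_o[1] += eta * delta_o * h[1]
    w_o[2] += eta * delta_o                      # output bias
    return (t - y) ** 2

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR
first = sum(learn(x, t) for x, t in data)
for _ in range(5000):
    last = sum(learn(x, t) for x, t in data)
```

In the hardware architecture described by the paper, the per-neuron multiply-accumulate and weight-update loops above are what the PEs execute in parallel rather than sequentially.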