A Hardware Architecture for Radial Basis Function Neural Network Classifier

In this paper we present the design and analysis of scalable hardware architectures for training the learning parameters of a radial basis function neural network (RBFNN) to classify large data sets. We design scalable hardware architectures for the K-means clustering algorithm, which trains the positions of the hidden nodes in the hidden layer of the RBFNN, and for the pseudoinverse algorithm, which adjusts the weights at the output layer. These scalable, parallel, pipelined architectures can handle data sets with no restriction on their dimensions. We also present a flexible and scalable hardware accelerator for classification using the RBFNN that places no limit on the dimension of the input data. We report FPGA synthesis results for our implementations and compare our hardware accelerator against CPU and GPU implementations of the same algorithms, as well as against other existing approaches. Analysis of these results shows that the scalability of our hardware architecture makes it a favorable solution for classifying very large data sets.
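The two training steps described above can be sketched in software as follows. This is a minimal reference sketch, not the paper's hardware design: it assumes Gaussian basis functions and a simple shared-width heuristic (`sigma`), both of which are illustrative choices rather than details taken from the paper.

```python
import numpy as np

def train_rbfnn(X, Y, n_hidden, n_iters=20, seed=0):
    """Sketch of RBFNN training: K-means positions the hidden-node
    centers; the pseudoinverse solves for the output-layer weights.
    Gaussian basis functions and the shared width are assumptions."""
    rng = np.random.default_rng(seed)
    # Initialize centers from random training samples.
    centers = X[rng.choice(len(X), n_hidden, replace=False)].copy()
    for _ in range(n_iters):  # Lloyd's K-means iterations
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(n_hidden):
            pts = X[labels == k]
            if len(pts):  # keep the old center if a cluster empties
                centers[k] = pts.mean(axis=0)
    # Hypothetical width heuristic: mean sample-to-center distance.
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    sigma = dists.mean() + 1e-9
    H = np.exp(-dists**2 / (2 * sigma**2))   # hidden-layer activations
    W = np.linalg.pinv(H) @ Y                # pseudoinverse weight fit
    return centers, sigma, W

def predict(X, centers, sigma, W):
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-dists**2 / (2 * sigma**2)) @ W
```

The pseudoinverse step solves the output layer's least-squares problem in one shot, which is what makes it attractive for a pipelined hardware realization compared to iterative weight updates.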
