Parametric Sensitivity and Scalability of k-Winners-Take-All Networks with a Single State Variable and Infinity-Gain Activation Functions

In recent years, several k-winners-take-all (kWTA) neural networks have been developed based on a quadratic programming formulation. In particular, a continuous-time kWTA network with a single state variable and its discrete-time counterpart were developed recently. These kWTA networks have proven global convergence properties and simple architectures. Starting from problem formulations, this paper reviews related existing kWTA networks and extends those with piecewise-linear activation functions to ones with high-gain activation functions. The paper then presents experimental results for the continuous-time and discrete-time kWTA networks with infinity-gain activation functions. The results show that the kWTA networks are parametrically robust and dimensionally scalable in terms of problem size and convergence rate.
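As a rough illustration of the single-state-variable idea (a sketch, not the paper's exact formulation), kWTA networks of this family are commonly written as ε ẋ = Σᵢ g(uᵢ − x) − k, where x is the single state variable, uᵢ are the inputs, and g is a high-gain activation; at equilibrium exactly k inputs exceed x, and the outputs g(uᵢ − x) mark the winners. The following Euler simulation uses a steep sigmoid to approximate an infinity-gain activation; the function name, gain β, and step sizes are illustrative choices, not values from the paper.

```python
import math

def kwta(u, k, beta=100.0, dt=1e-3, eps=1.0, steps=10000):
    """Euler simulation of a single-state-variable kWTA network.

    Assumed state equation: eps * dx/dt = sum_i g(u_i - x) - k,
    with g a steep sigmoid approximating an infinity-gain activation.
    The state x settles between the k-th and (k+1)-th largest inputs,
    so outputs g(u_i - x) are near 1 for the k winners and near 0 otherwise.
    """
    sigma = lambda s: 1.0 / (1.0 + math.exp(-beta * s))  # high-gain activation
    x = 0.0
    for _ in range(steps):
        # Forward-Euler step of the single-ODE dynamics
        x += (dt / eps) * (sum(sigma(ui - x) for ui in u) - k)
    return [sigma(ui - x) for ui in u]

outputs = kwta([0.3, 0.9, 0.1, 0.7, 0.5], k=2)
winners = [i for i, o in enumerate(outputs) if o > 0.5]
```

With these inputs and k = 2, the two largest inputs (indices 1 and 3) are selected as winners. The single state variable is what makes the architecture simple: the network size is O(n) in connections but only one ODE is integrated regardless of n.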
