On Wang $k$ WTA With Input Noise, Output Node Stochastic, and Recurrent State Noise

In this paper, the effect of input noise, output node stochasticity, and recurrent state noise on the Wang $k$WTA is analyzed. We assume that noise, either additive or multiplicative, exists at the recurrent state $y(t)$, and that its dynamical change (i.e., $dy/dt$) is corrupted by noise as well. In the sequel, we model the dynamics of $y(t)$ as a stochastic differential equation and show that the stochastic behavior of $y(t)$ is equivalent to an Ito diffusion. Its stationary distribution is a Gibbs distribution, whose modality depends on the noise conditions. With moderate input noise and very small recurrent state noise, the distribution is unimodal, and hence $y(\infty)$ with high probability lies between the input values of the $k$th and $(k+1)$th winners (i.e., correct output). With small input noise and large recurrent state noise, the distribution can be multimodal, and hence $y(\infty)$ may with nonnegligible probability lie outside the input values of the $k$th and $(k+1)$th winners (i.e., incorrect output). In this regard, we further derive conditions under which the $k$WTA gives the correct output with high probability. Our results reveal that recurrent state noise can have a severe effect on the Wang $k$WTA, whereas input noise and output node stochasticity can alleviate this effect.
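The link between the Ito diffusion and the Gibbs stationary distribution can be oriented by the standard one-dimensional result for gradient-drift diffusions (the paper derives the specific potential for the Wang $k$WTA; the generic form below is a well-known fact, stated here only for context):

$$dy(t) = -U'(y)\,dt + \sigma\,dB_t \quad\Longrightarrow\quad p_\infty(y) \;\propto\; \exp\!\left(-\frac{2\,U(y)}{\sigma^{2}}\right),$$

provided the right-hand side is integrable. The modality of $p_\infty$ follows the minima of the potential $U$: a single well yields a unimodal stationary density concentrated near the correct equilibrium, whereas multiple wells yield a multimodal density, which is the mechanism behind the incorrect-output regime described above.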

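As a concrete illustration (not taken from the paper), the following Python sketch integrates a noisy single-state $k$WTA with the Euler-Maruyama method. The drift form, the logistic gain standing in for output node stochasticity, and all parameter values are assumptions chosen for illustration only.

import numpy as np

rng = np.random.default_rng(0)

def simulate_wang_kwta(u, k, T=50.0, dt=1e-3,
                       sigma_in=0.05, sigma_state=0.02, alpha=50.0):
    # Euler-Maruyama integration of the assumed noisy dynamics:
    #   dy = ( sum_i sigmoid(alpha * (u_i + input_noise_i - y)) - k ) dt
    #        + sigma_state * dB_t
    # The logistic gain alpha is a stand-in for output node stochasticity;
    # sigma_in and sigma_state model additive input and recurrent state noise.
    n = len(u)
    y = float(np.mean(u))               # arbitrary initial recurrent state
    for _ in range(int(T / dt)):
        u_noisy = u + sigma_in * rng.standard_normal(n)   # additive input noise
        drift = np.sum(1.0 / (1.0 + np.exp(-alpha * (u_noisy - y)))) - k
        y += drift * dt + sigma_state * np.sqrt(dt) * rng.standard_normal()
    winners = np.flatnonzero(u > y)     # at high gain, activation > 1/2 iff u_i > y
    return y, winners

u = np.array([0.30, 0.72, 0.15, 0.55, 0.90])
y_inf, winners = simulate_wang_kwta(u, k=2)
print(f"y(inf) = {y_inf:.3f}, winners = {winners}")  # correct output: indices 1 and 4

With these illustrative settings, $y(\infty)$ settles between the second and third largest inputs (0.55 and 0.72), selecting the $k = 2$ winners. Increasing sigma_state while shrinking sigma_in should, on some runs, push $y(\infty)$ outside that interval, qualitatively reproducing the incorrect-output regime the abstract describes.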