Data-smoothing regularization, normalization regularization, and competition-penalty mechanism for statistical learning and multi-agents

This paper provides an overview of advances in two new learning regularization approaches, both developed over the past several years from studies of Bayesian Ying-Yang (BYY) learning. The first is data smoothing regularization, first proposed in (Xu, 1997a), which regularizes parameter learning in a way similar to Tikhonov regularization but offers an easy solution to the difficulty of determining an appropriate hyperparameter. The second is normalization regularization, first proposed in (Xu, 2001b), which regularizes parameter learning via de-learning of a conscience or penalizing type and is closely related to rival penalized competitive learning (RPCL) (Xu, Krzyzak, & Oja, 1993). The algorithms for these two types of regularized learning, together with the algorithms for maximum likelihood learning and RPCL learning, are presented in a unified learning procedure. Moreover, studies on the competition-penalty mechanism are further elaborated, and this mechanism, especially the RPCL mechanism, is suggested for monitoring the performance of multi-agent systems.
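The competition-penalty idea behind RPCL can be illustrated with a minimal sketch: for each sample, the winning (closest) center is pulled toward the sample, while the rival (second-closest) center is pushed away with a much smaller de-learning rate, so that redundant centers are driven away from the data. The learning rates, the initialization, and the toy data below are illustrative assumptions, not values from the cited papers.

```python
import numpy as np

def rpcl_step(x, centers, alpha_w=0.05, alpha_r=0.002):
    """One RPCL update on an array of cluster centers.

    The winner (closest center) learns toward sample x; the rival
    (second-closest center) is de-learned, i.e. pushed away from x.
    The rates alpha_w and alpha_r are illustrative choices; RPCL only
    requires the rival's rate to be much smaller than the winner's.
    """
    d = np.linalg.norm(centers - x, axis=1)
    order = np.argsort(d)
    winner, rival = order[0], order[1]
    centers[winner] += alpha_w * (x - centers[winner])  # learning
    centers[rival] -= alpha_r * (x - centers[rival])    # de-learning (penalty)
    return centers

# Toy usage: two well-separated Gaussian clusters but three initial
# centers; the surplus center tends to be driven away by the rival
# penalty, while the other two settle near the cluster means.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                  rng.normal(3.0, 0.1, (50, 2))])
centers = np.array([[1.0, 1.0], [2.0, 2.0], [1.5, 0.5]])
for _ in range(20):
    for x in rng.permutation(data):
        centers = rpcl_step(x, centers)
```

Unlike plain competitive learning, which leaves a surplus center stranded among the data, the de-learning term here acts as the conscience-type penalty discussed above, which is why RPCL is tied to automatic selection of the number of clusters.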

[1]  Lei Xu, et al. How many clusters? A Ying-Yang machine based theory for a classical open problem in pattern recognition, 1996, Proceedings of International Conference on Neural Networks (ICNN'96).

[2]  Lei Xu, et al. Temporal BYY learning for state space approach, hidden Markov model, and blind source separation, 2000, IEEE Trans. Signal Process.

[3]  Lei Xu, A. Krzyzak, Erkki Oja. Rival penalized competitive learning for clustering analysis, RBF net, and curve detection, 1993, IEEE Trans. Neural Networks.

[4]  Tomaso A. Poggio, et al. Regularization Theory and Neural Networks Architectures, 1995, Neural Computation.

[5]  Lei Xu, et al. BYY data smoothing based learning on a small size of samples, 1999, Proceedings of IJCNN'99, International Joint Conference on Neural Networks.

[6]  Lei Xu, et al. BYY learning, regularized implementation, and model selection on modular networks with one hidden layer of binary units, 2003, Neurocomputing.

[7]  Lei Xu, et al. BYY learning system and theory for parameter estimation, data smoothing based regularization and model selection, 2000, Neural Parallel Sci. Comput.

[8]  Lei Xu, et al. Bayesian Ying-Yang machine, clustering and number of clusters, 1997, Pattern Recognit. Lett.

[9]  Lei Xu. BKYY Three Layer Net Learning, EM-Like Algorithm, and Selection Criterion for Hidden Unit Number, 1998, ICONIP.

[10]  Lei Xu. Bayesian Ying-Yang System and Theory as a Unified Statistical Learning Approach: (V) Temporal Modeling for Temporal Perception and Control, 1998, ICONIP.

[11]  Lei Xu, et al. Data smoothing regularization, multi-sets-learning, and problem solving strategies, 2003, Neural Networks.

[12]  Lei Xu, et al. BYY harmony learning, structural RPCL, and topological self-organizing on mixture models, 2002, Neural Networks.

[13]  R. Redner, et al. Mixture densities, maximum likelihood, and the EM algorithm, 1984.

[14]  Lei Xu, et al. Bayesian Ying-Yang System and Theory as a Unified Statistical Learning Approach (VII): Data Smoothing, 1998, International Conference on Neural Information Processing.

[15]  Lei Xu, et al. Best Harmony, Unified RPCL and Automated Model Selection for Unsupervised and Supervised Learning on Gaussian Mixtures, Three-Layer Nets and ME-RBF-SVM Models, 2001, Int. J. Neural Syst.

[16]  Lei Xu, et al. BYY harmony learning, independent state space, and generalized APT financial analyses, 2001, IEEE Trans. Neural Networks.