Independent Residual Analysis for Temporally Correlated Signals

An improvement to the Probabilistic Neural Network (PNN) is presented that overcomes two weaknesses of the original model. Because each neuron in the new model uses its own Gaussian kernel function, better generalization is achieved by means of stretching and rotation, leading to the Rotated Kernel Probabilistic Neural Network (RKPNN). Furthermore, an algorithm is presented that automatically calculates the kernel parameters of each Gaussian function. The covariance matrices are subdivided into two other matrices, R and S, that are calculated separately. This training is slower than that of the original PNN but comparable in complexity with other classification methods. A real-world example finally proves that the proposed model generalizes well, with similar or even slightly better results than other approaches.
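The abstract does not spell out the training algorithm, but the core idea of a per-neuron rotated kernel can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes the covariance of each kernel factors as Σ = R·diag(s)·Rᵀ, where R is a rotation matrix and s holds the stretching factors, and the function names (`rotated_gaussian_kernel`, `rkpnn_classify`) are hypothetical.

```python
import numpy as np

def rotated_gaussian_kernel(x, center, R, s):
    """Gaussian kernel with covariance Sigma = R @ diag(s) @ R.T.

    Rotation R and stretching s are stored separately, so each neuron
    can have its own ellipsoidal receptive field (the RKPNN idea).
    """
    d = R.T @ (x - center)  # project into the kernel's own rotated axes
    norm = np.sqrt((2.0 * np.pi) ** len(s) * np.prod(s))
    return np.exp(-0.5 * np.sum(d ** 2 / s)) / norm

def rkpnn_classify(x, centers, Rs, ss, labels, n_classes):
    """Parzen-style classification: sum per-neuron kernel responses
    for each class and pick the class with the largest density."""
    scores = np.zeros(n_classes)
    for c, R, s, y in zip(centers, Rs, ss, labels):
        scores[y] += rotated_gaussian_kernel(x, c, R, s)
    return int(np.argmax(scores))
```

With axis-aligned unit kernels (R = I, s = 1) this reduces to the classical PNN; the generalization gain described in the abstract comes from fitting R and s individually per neuron.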
