Unsupervised BCM projection pursuit algorithms for classification of simulated radar presentations

Abstract A comparison of the unsupervised projection pursuit learning algorithm (BCM) with supervised backpropagation (BP) and a laterally inhibited version of BP (LIBP) was performed. Simulated inverse synthetic aperture radar (ISAR) presentations served as a testbed for evaluation. Symmetries of the artificial presentations make localized moments a convenient preprocessing tool for the inputs. Although all three algorithms achieve classification rates comparable to trained human observers on this simulated database, BCM obtains solutions that classify inputs corrupted by noise or registration errors more effectively; in noise-tolerance experiments, the best BCM solution represents a 10 dB improvement over the best BP solution. Recurrent and differential forms of BCM that could be applied to time-dependent classification problems are also developed.
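The BCM rule referenced in the abstract modifies a neuron's weights according to a nonlinear function of its activity relative to a sliding threshold. A minimal sketch of one BCM update is given below, assuming the standard quadratic modification function φ(c, θ) = c(c − θ) from Bienenstock, Cooper and Munro, with the threshold θ tracking a running average of c²; the learning rate and averaging time constant are illustrative choices, not values from the paper.

```python
import numpy as np

def bcm_update(w, x, theta, lr=0.01, tau=100.0):
    """One BCM step on weight vector w for input x.

    c     = w . x                    (neuron activity)
    phi   = c * (c - theta)          (BCM modification function)
    w    += lr * phi * x             (Hebbian-like weight change)
    theta tracks E[c^2] via a leaky running average with time constant tau.
    """
    c = float(w @ x)
    phi = c * (c - theta)
    w = w + lr * phi * x
    theta = theta + (c ** 2 - theta) / tau  # sliding modification threshold
    return w, theta
```

Because θ grows superlinearly with activity (it tracks c² rather than c), the rule is self-stabilizing: projections that drive the neuron strongly also raise the threshold, which is what allows BCM to seek interesting (multi-modal) projections of the input distribution rather than simply maximizing variance.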
