Extensions to projection pursuit learning networks with parametric smoothers

Neural networks that can grow their own structure during training have attracted considerable research interest. Projection pursuit learning networks (PPLNs) and cascade-correlation learning networks (CCLNs) are two such networks. Unlike a CCLN, where cascaded connections from the existing hidden units to each new candidate hidden unit are required to establish the high-order nonlinearity needed to approximate the residual error, a PPLN approximates high-order nonlinearity with trainable nonlinear unit activation functions (e.g., Hermite polynomials). To relax the need to predefine the smoothness of the nonlinearity in a PPLN, we propose in this paper a new learning network, called a pooling projection pursuit network (PPPN), which alleviates the critical requirement of adequate order pre-selection without suffering the regression performance degradation often encountered in a previously proposed cascaded projection pursuit network.
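To make the PPLN idea concrete, the following is a minimal sketch (not the authors' implementation) of a single projection pursuit unit for a one-dimensional regression target: the output is modeled as a trainable Hermite-polynomial expansion of a learned projection, y ≈ Σₖ cₖ Heₖ(w·x). The toy data, the target direction `w_true`, and the alternating fitting scheme (linear least squares for the coefficients, gradient steps for the projection) are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)

# Toy single-index data: y = g(w_true . x) + noise (illustrative assumption).
n, d = 400, 3
X = rng.normal(size=(n, d))
w_true = np.array([0.8, -0.6, 0.0])
y = np.tanh(2.0 * X @ w_true) + 0.05 * rng.normal(size=n)

def fit_ppln_unit(X, y, order=5, iters=200, lr=0.05):
    """One PPLN-style unit: y ~ sum_k c_k He_k(w . x).
    Alternates exact least squares for the Hermite coefficients c
    with gradient steps on the projection direction w (unit-norm)."""
    n, d = X.shape
    w = rng.normal(size=d)
    w /= np.linalg.norm(w)
    for _ in range(iters):
        z = X @ w
        H = hermevander(z, order)                    # (n, order+1) Hermite basis
        c, *_ = np.linalg.lstsq(H, y, rcond=None)    # fit smoother coefficients
        resid = H @ c - y
        # Probabilists' Hermite recurrence: d He_k / dz = k * He_{k-1}
        dH = np.zeros_like(H)
        dH[:, 1:] = H[:, :-1] * np.arange(1, order + 1)
        grad_w = X.T @ (resid * (dH @ c)) * (2.0 / n)
        w -= lr * grad_w
        w /= np.linalg.norm(w)                       # keep the direction normalized
    return w, c

w_hat, c_hat = fit_ppln_unit(X, y)
mse = np.mean((hermevander(X @ w_hat, 5) @ c_hat - y) ** 2)
print(mse)
```

The polynomial order (here 5) is exactly the kind of smoothness pre-selection the abstract identifies as critical: too low an order underfits the ridge function, while too high an order overfits the residual noise.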
