Configuring and optimizing the back-propagation network

This chapter focuses on ways of configuring and optimizing a back-propagation network. The performance of back-propagation, or of similar multilayer feedforward networks, can be improved chiefly by modifying the network's structure, its dynamics, or its training and learning rules.

The structure of a basic feedforward network can be modified in several ways. At the microstructural level, the transfer function can be replaced or new types of connection weights introduced. For example, the microstructure of a Perceptron-like network can be modified by using radial basis functions as the transfer functions in the hidden layer; radial basis functions are particularly useful for complex mapping tasks in which the mapping is continuous. High-order connection networks, sometimes called "sigma-pi" networks, achieve greater processing power through complex connections in which various combinations of inputs are multiplied together. The functional-link network achieves nonlinear responses with a single-layer net containing no hidden nodes; this is accomplished by applying nonlinear functions to some or all of the inputs before they are fed into the network.
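The three structural variations above can be illustrated with a minimal numerical sketch. This is not any particular published formulation; it assumes Gaussian radial basis functions, sigma-pi terms given as explicit index tuples, and a small hand-picked set of functional-link expansion functions, all chosen here only for illustration:

```python
import math

def rbf_hidden_layer(x, centers, width):
    """Gaussian radial basis activations for one input vector.

    Each hidden unit responds maximally when x lies at its center and
    falls off smoothly with Euclidean distance -- a localized, continuous
    transfer function, in contrast to the global sigmoid.
    """
    activations = []
    for c in centers:
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        activations.append(math.exp(-d2 / (2.0 * width ** 2)))
    return activations

def sigma_pi_unit(x, terms):
    """High-order ("sigma-pi") unit: a weighted sum over products of inputs.

    `terms` maps tuples of input indices to a weight; for example
    {(0, 1): 0.5} contributes 0.5 * x[0] * x[1] to the response.
    """
    total = 0.0
    for indices, w in terms.items():
        product = 1.0
        for i in indices:
            product *= x[i]
        total += w * product
    return total

def functional_link_expand(x):
    """Functional-link preprocessing: fixed nonlinear functions of the
    inputs (here sine and square terms, as an arbitrary example) let a
    single-layer net with no hidden nodes produce nonlinear responses.
    """
    return (list(x)
            + [math.sin(math.pi * xi) for xi in x]
            + [xi * xi for xi in x])
```

An input placed exactly at an RBF center yields the maximal activation of 1.0, a sigma-pi term over indices (0, 1) multiplies those two inputs together before weighting, and the functional-link expansion triples the input dimensionality so a subsequent single linear layer can separate patterns that are not linearly separable in the raw inputs.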
