Adiabatic layering: A new concept of hierarchical multi-scale optimization

Abstract: Recurrent neural networks (RNNs) with linearized dynamics have shown great promise in solving continuous-valued optimization problems subject to bound constraints. Building on this progress, a novel method of constrained hierarchical multi-scale optimization is developed that applies to a wide range of optimization problems and signal decomposition tasks. Central to the underlying concept is the definition of adiabatic layering. The analytic justification of this model can be regarded as a natural development of mean-field theory. What emerges is an alternative hierarchical optimization method that promises to improve upon existing hierarchical schemes by combining the accuracy of global optimization with the compact representation of hierarchical optimization. Whereas conventional hierarchical optimization techniques tend to average over fine-scale detail when applied to bound-constrained problems, such behaviour is avoided by the modified dynamics of the proposed method. Applied to the signal decomposition problem of RBF (radial basis function) approximation, the behaviour of the adiabatic layering model is shown to be in close correspondence with the theoretical expectations.
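To make the starting point of the abstract concrete, the following is a minimal sketch of bound-constrained quadratic minimization solved by simple iterative dynamics. It is a discrete-time projected-gradient analogue of the linearized RNN dynamics referred to above, not the paper's adiabatic layering method; the function name, step size, and example problem are illustrative assumptions.

```python
import numpy as np

def bound_constrained_qp(A, b, lo, hi, lr=0.01, steps=2000):
    """Minimize 0.5 x^T A x + b^T x subject to lo <= x <= hi.

    Illustrative projected gradient descent: each step moves along the
    negative gradient and clips the state back into the feasible box,
    mirroring the clipped linear dynamics of bound-constrained RNN
    optimizers. Assumes A is symmetric positive definite.
    """
    x = np.clip(np.zeros_like(b), lo, hi)   # start at a feasible point
    for _ in range(steps):
        grad = A @ x + b                    # gradient of the quadratic
        x = np.clip(x - lr * grad, lo, hi)  # project back into the box
    return x

# Example: the unconstrained minimum violates the upper bound on x[0],
# so the solution sits on that face of the box (KKT-consistent).
A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([-1.0, -1.0])
x = bound_constrained_qp(A, b,
                         lo=np.array([0.0, 0.0]),
                         hi=np.array([0.2, 2.0]))
```

With the upper bound active on the first coordinate, the iteration settles at x ≈ (0.2, 0.9): the first component pins to its bound while the second adjusts to minimize the remaining one-dimensional quadratic.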
