Optimization by ghost image processes in neural networks

Abstract. We identify processes for structuring neural networks by reference to two classes of interacting mappings, one generating provisional outcomes ("trial solutions") and the other generating idealized representations, which we call ghost images. These mappings create an evolution of both the provisional outcomes and the ghost images, which in turn influence a parallel evolution of the mappings themselves. The ghost image models may be conceived as a generalization of the self-organizing neural network models of Kohonen. Alternatively, they may be viewed as a generalization of certain relaxation/restriction procedures of mathematical optimization; hence they also indirectly generalize aspects of penalty-based neural models, such as those proposed by Hopfield and Tank. Both avenues of generalization are "context free", relying on no specialized theory such as models of perception or mathematical duality. From a neural network standpoint, the ghost image framework makes it possible to extend previous Kohonen-based optimization approaches to incorporate components beyond a visually oriented frame of reference. This added level of abstraction yields a basis for solving optimization problems expressed entirely in symbolic ("non-visual") mathematical formulations. At the same time it allows penalty function ideas in neural networks to be extended to encompass other concepts springing from a mathematical optimization perspective, including parametric deformations and surrogate constraints. This paper demonstrates the efficacy of ghost image processes as a foundation for creating new optimization approaches by providing specific examples of such methods for covering, packing, generalized covering, fixed charge and multidimensional knapsack problems. Preliminary computational results for multidimensional knapsack problems are also presented.
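To make the surrogate-constraint idea mentioned above concrete, the following is a minimal sketch of a classic surrogate-constraint greedy heuristic for the multidimensional knapsack problem (maximize c·x subject to Ax ≤ b, x binary), in the spirit of the surrogate heuristics of Glover and of Toyoda. It is an illustration only, not the paper's ghost image method; all function and variable names here are our own.

```python
def surrogate_greedy(c, A, b, u=None):
    """Greedy heuristic for the 0-1 multidimensional knapsack problem.

    Maximizes sum(c[j] * x[j]) subject to, for each row i,
    sum(A[i][j] * x[j]) <= b[i], with x[j] in {0, 1}.
    The m constraints are collapsed into a single surrogate constraint
    via nonnegative multipliers u, and items are ranked by their
    value-to-surrogate-weight ratio. Feasibility is always checked
    against the original constraints, not just the surrogate.
    """
    m, n = len(A), len(c)
    if u is None:
        u = [1.0] * m  # uniform surrogate multipliers as a default

    # Surrogate weight of each item: u-weighted sum of its constraint uses.
    w = [sum(u[i] * A[i][j] for i in range(m)) for j in range(n)]

    # Rank items by benefit per unit of surrogate weight (best first).
    order = sorted(
        range(n),
        key=lambda j: c[j] / w[j] if w[j] > 0 else float("inf"),
        reverse=True,
    )

    x = [0] * n
    slack = list(b)  # remaining capacity in each original constraint
    for j in order:
        if all(A[i][j] <= slack[i] for i in range(m)):
            x[j] = 1
            for i in range(m):
                slack[i] -= A[i][j]

    value = sum(c[j] * x[j] for j in range(n))
    return x, value
```

On the small instance c = [10, 7, 5], A = [[3, 2, 2], [2, 3, 1]], b = [4, 4], the heuristic picks only the first item (value 10), whereas the optimum takes items 2 and 3 (value 12), illustrating why such ratio-based constructions are typically embedded in a larger search rather than used alone.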
