Introducing MONEDA: scalable multiobjective optimization with a neural estimation of distribution algorithm

In this paper we explore the model-building issue in multiobjective optimization estimation of distribution algorithms. We argue that model-building has characteristics that set it apart from other machine-learning tasks. To meet those characteristics we propose a novel algorithm, the multiobjective neural estimation of distribution algorithm (MONEDA), which uses a custom version of the growing neural gas (GNG) network tailored specifically to the model-building task. As part of this work, MONEDA is assessed against classical and state-of-the-art evolutionary multiobjective optimizers on a set of community-accepted test problems.
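
To make the role of the GNG-based model builder concrete, the sketch below (in Python) fits a plain growing neural gas to a set of selected individuals and then resamples new candidate solutions from Gaussian kernels centred at the learned nodes. It is only an illustrative approximation under stated assumptions: the class name, parameter values and the kernel-based sampling rule are our own choices and do not reproduce MONEDA's customized model-building GNG.

import numpy as np

class GrowingNeuralGas:
    """Minimal GNG used as an EDA model builder (illustrative sketch, not MONEDA's MB-GNG)."""

    def __init__(self, eps_b=0.2, eps_n=0.006, age_max=50,
                 lam=100, alpha=0.5, d=0.995, max_nodes=30, seed=0):
        self.eps_b, self.eps_n = eps_b, eps_n      # winner / neighbour learning rates
        self.age_max, self.lam = age_max, lam      # edge-age limit, insertion period
        self.alpha, self.d = alpha, d              # error decay on insertion / per step
        self.max_nodes = max_nodes
        self.rng = np.random.default_rng(seed)

    def fit(self, X, epochs=5):
        # start with two nodes drawn from the data set of selected individuals
        idx = self.rng.choice(len(X), 2, replace=False)
        self.w = [X[idx[0]].astype(float).copy(), X[idx[1]].astype(float).copy()]
        self.error = [0.0, 0.0]
        self.edges = {}                             # (i, j) with i < j -> age
        step = 0
        for _ in range(epochs):
            for x in self.rng.permutation(X):
                step += 1
                dists = [float(np.sum((w - x) ** 2)) for w in self.w]
                s1, s2 = np.argsort(dists)[:2]      # nearest and second-nearest nodes
                self.error[s1] += dists[s1]
                # move the winner and its topological neighbours toward the input
                self.w[s1] += self.eps_b * (x - self.w[s1])
                for (i, j) in list(self.edges):
                    if s1 in (i, j):
                        n = j if i == s1 else i
                        self.w[n] += self.eps_n * (x - self.w[n])
                        self.edges[(i, j)] += 1     # age edges emanating from the winner
                self.edges[tuple(sorted((int(s1), int(s2))))] = 0   # refresh winner pair edge
                # drop edges older than age_max (node pruning is omitted in this sketch)
                self.edges = {e: a for e, a in self.edges.items() if a <= self.age_max}
                # periodically insert a node between the highest-error node and its worst neighbour
                if step % self.lam == 0 and len(self.w) < self.max_nodes:
                    q = int(np.argmax(self.error))
                    nbrs = [j if i == q else i for (i, j) in self.edges if q in (i, j)]
                    if nbrs:
                        f = int(max(nbrs, key=lambda n: self.error[n]))
                        self.w.append(0.5 * (self.w[q] + self.w[f]))
                        self.error[q] *= self.alpha
                        self.error[f] *= self.alpha
                        self.error.append(self.error[q])
                        r = len(self.w) - 1
                        self.edges.pop(tuple(sorted((q, f))), None)
                        self.edges[(q, r)] = 0
                        self.edges[(f, r)] = 0
                self.error = [e * self.d for e in self.error]
        return self

    def sample(self, n, sigma=0.05):
        # EDA-style resampling: Gaussian kernels centred at the learned nodes
        centres = np.array(self.w)
        picks = self.rng.integers(0, len(centres), n)
        return centres[picks] + self.rng.normal(0.0, sigma, (n, centres.shape[1]))

# usage: fit the model to the selected individuals, then resample offspring
elite = np.random.default_rng(1).random((200, 3))   # stand-in for the selected subpopulation
offspring = GrowingNeuralGas().fit(elite).sample(100)

In an EDA loop this replaces crossover and mutation: the model is rebuilt from the selected individuals at every generation and the sampled offspring are merged back into the population.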
