A Computationally Fast and Approximate Method for Karush-Kuhn-Tucker Proximity Measure

Theoretical and applied optimization researchers use Karush-Kuhn-Tucker (KKT) optimality conditions to check whether a solution obtained by an optimization algorithm is truly optimal. When a point is not the true optimum, however, a simple violation measure of the KKT conditions indicates nothing about its proximity to the optimal solution. Earlier studies by the first author and his collaborators proposed a KKT proximity measure (KKTPM) that quantifies the relative closeness of any point to the theoretical optimum without requiring knowledge of the optimum's exact location. In this paper, we suggest several computationally fast methods for computing an approximate KKTPM value, so that a convergence measure for the iteration-wise best solutions of an optimization algorithm can be quantified and used to terminate a run. The KKTPM value can also be used to isolate less-converged solutions in a population-based optimization algorithm and modify them specifically, yielding an overall faster optimization run. The approximate KKTPM values are compared against the original exact KKTPM value on standard single-objective, multi-objective and many-objective optimization problems. In all cases, the proposed 'estimated' approximate method achieves a strong correlation with the exact KKTPM values while requiring two to three orders of magnitude less computational time. These results motivate further studies on using the estimated KKTPM procedure to establish termination criteria and to develop other modified optimization procedures.
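To make the underlying idea concrete, the sketch below shows one simple way a KKT-based violation score could be computed for a candidate point of a single-objective problem with inequality constraints. It is only an illustration under stated assumptions, not the KKTPM defined in this paper: the exact KKTPM requires solving a dedicated optimization subproblem over the Lagrange multipliers, and the proposed 'estimated' method approximates that subproblem's solution. The function name kkt_violation, its arguments, and the projected least-squares multiplier estimate are hypothetical choices made purely for illustration.

```python
import numpy as np

def kkt_violation(grad_f, grad_gs, g_vals):
    """Rough KKT-violation score at a point x of
        minimize f(x)  subject to  g_j(x) <= 0,  j = 1..m.
    grad_f  : gradient of the objective at x, shape (n,)
    grad_gs : gradients of the constraints at x, shape (m, n)
    g_vals  : constraint values g_j(x), shape (m,)
    Returns a non-negative scalar that is (numerically) zero only when
    stationarity, feasibility and complementary slackness all hold at x.
    """
    if len(g_vals) == 0:
        # Unconstrained case: stationarity alone.
        return float(np.linalg.norm(grad_f))

    # Estimate non-negative multipliers u that best cancel the objective
    # gradient: min_u ||grad_f + sum_j u_j grad_g_j||, then project u >= 0.
    A = np.asarray(grad_gs).T                     # shape (n, m)
    u, *_ = np.linalg.lstsq(A, -np.asarray(grad_f), rcond=None)
    u = np.maximum(u, 0.0)                        # enforce u_j >= 0

    stationarity = np.linalg.norm(grad_f + A @ u)        # ||∇f + Σ u_j ∇g_j||
    feasibility = np.linalg.norm(np.maximum(g_vals, 0.0))  # constraint violation
    complementarity = abs(float(u @ g_vals))              # |Σ u_j g_j|
    return stationarity + feasibility + complementarity


# At the constrained minimum x* = (1, 0) of min x1^2 + x2^2 s.t. x1 >= 1,
# the score is (numerically) zero; away from x* it grows.
print(kkt_violation(np.array([2.0, 0.0]),       # grad f at x*
                    np.array([[-1.0, 0.0]]),    # grad of g(x) = 1 - x1
                    np.array([0.0])))           # g(x*) = 0
```

The projected least-squares multiplier estimate merely keeps the sketch self-contained; any non-negative least-squares or quadratic-programming solver could be substituted, and the paper's KKTPM formulations differ in how the multipliers and the resulting proximity value are obtained.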
