Multi-objective optimization problems (MOPs) are of significant interest to both the multi-criteria decision making (MCDM) and evolutionary multi-objective optimization (EMO) research communities. A core technique common to both is scalarization, which combines the multiple objectives into a single one in such a way that solving the scalarized problem yields a solution to the original MOP. In this paper, we look closely at two scalarization methods: the augmented achievement scalarizing function (AASF) and penalty boundary intersection (PBI). While the former has its roots in the MCDM literature, the latter was developed in the EMO field with a focus on decomposition-based algorithms. We review the conventional limits on the parameters involved in these methods and then demonstrate that, by relaxing those limits, one method can be made to behave like the other. The aim is to gain a deeper understanding of both measures and to expand their parametric range, providing more control over the search behavior of EMO algorithms. This work also lays the groundwork for complete analytical derivations of the equivalence conditions between the two metrics.
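For context, the two metrics can be stated in their conventional forms. The notation below is only a sketch following standard formulations (the achievement scalarizing function as presented in Miettinen's nonlinear multiobjective optimization text, and PBI as defined for MOEA/D by Zhang and Li); weight and normalization conventions vary across the literature and may differ from those used in this paper. For an objective vector $F(x) = (f_1(x), \dots, f_m(x))$, a reference point $\bar{z}$, and a weight vector $w$ with $w_i > 0$, the AASF with augmentation parameter $\rho > 0$ is

$$ s_{\text{AASF}}(x \mid w, \bar{z}) \;=\; \max_{i=1,\dots,m} w_i \big( f_i(x) - \bar{z}_i \big) \;+\; \rho \sum_{i=1}^{m} w_i \big( f_i(x) - \bar{z}_i \big), $$

while the PBI metric with ideal point $z^*$ and penalty parameter $\theta \ge 0$ is

$$ g^{\text{PBI}}(x \mid w, z^*) \;=\; d_1 + \theta\, d_2, \qquad d_1 = \frac{\big\lVert (F(x) - z^*)^{\top} w \big\rVert}{\lVert w \rVert}, \qquad d_2 = \Big\lVert F(x) - \big( z^* + d_1 \tfrac{w}{\lVert w \rVert} \big) \Big\rVert, $$

both to be minimized. The conventional parameter limits referred to above are, presumably, a small augmentation term for AASF (values on the order of $10^{-4}$ or smaller are typical) and a moderate fixed penalty for PBI (a value of $\theta = 5$ is commonly recommended).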