Evolutionary Optimization of Constrained Problems

Although evolutionary algorithms have proved useful in general function optimization, they appear particularly apt for nonlinearly constrained optimization problems. Such problems are difficult because the feasible region may be nonconvex or even disjoint, and classic linear and nonlinear programming methods are often unsuitable or impractical for them [76]. Unfortunately, many real-world problems pose exactly these difficulties. Evolutionary algorithms are global methods designed for complex objective functions (e.g., non-differentiable or discontinuous ones), and they can be constructed to cope effectively with such difficulties.

There are, however, no well-established guidelines on how to deal with infeasible solutions. Contemporary evolution strategies usually apply a “death penalty” heuristic to infeasible solutions. This heuristic simplifies the algorithm: infeasible solutions never need to be evaluated or compared with feasible ones. The method may work reasonably well when the feasible search space is convex and constitutes a reasonable part of the whole search space; otherwise it has serious limitations. For example, in many search problems the initial population consists of infeasible individuals only, and it is then essential to improve them rather than discard them [101]. Moreover, the system can often reach the optimum more easily if it is allowed to “cross” an infeasible region, especially when the feasible search space is non-convex.

This chapter presents a new approach that applies a log-dynamic penalty function method within the NES algorithm [61, 62] proposed and tested in the previous chapter.
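The contrast between rejecting infeasible solutions outright and penalizing them dynamically can be illustrated with a minimal sketch. The toy problem (minimizing the sphere function subject to the constraint sum(x) ≥ 1), the penalty constants `c` and `alpha`, and the (1+1)-ES loop below are all illustrative assumptions, not the method developed in this chapter; the generic dynamic penalty follows the common pattern of a weight that grows with the generation counter.

```python
import random

def objective(x):
    # Sphere function: minimize the sum of squares.
    return sum(xi * xi for xi in x)

def violation(x):
    # Amount by which the (assumed) constraint sum(x) >= 1 is violated;
    # zero for feasible points.
    return max(0.0, 1.0 - sum(x))

def death_penalty_offspring(parent, sigma, rng):
    # "Death penalty": keep mutating until a feasible offspring appears,
    # so infeasible candidates are never evaluated or compared.
    while True:
        child = [xi + rng.gauss(0.0, sigma) for xi in parent]
        if violation(child) == 0.0:
            return child

def dynamic_penalty(x, t, c=0.5, alpha=2.0):
    # A generic dynamic penalty (hypothetical constants c, alpha): the
    # weight (c*t)**alpha grows with the generation counter t, so
    # infeasible points are tolerated early and squeezed out later.
    return objective(x) + (c * t) ** alpha * violation(x) ** 2

def one_plus_one_es(x0, generations=2000, sigma=0.1, seed=1):
    # A (1+1)-ES using the dynamic penalty as its selection criterion.
    # Because infeasible points are merely penalized, the search may
    # "cross" the infeasible region on its way to an optimum that lies
    # on the constraint boundary.
    rng = random.Random(seed)
    x = list(x0)
    for t in range(1, generations + 1):
        child = [xi + rng.gauss(0.0, sigma) for xi in x]
        if dynamic_penalty(child, t) <= dynamic_penalty(x, t):
            x = child
    return x
```

For this toy problem the constrained optimum is x = (0.5, 0.5) with objective 0.5. Started from the infeasible point (0, 0), the dynamic-penalty ES can traverse the infeasible region toward it, whereas the death-penalty variant cannot even produce a comparable offspring without a feasible parent, which is exactly the limitation discussed above.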