Augmented Lagrangian methods for constrained optimization: the role of the penalty constant
In recent years there has been considerable research activity in the area of penalty function and augmented Lagrangian methods for constrained optimization. The role that the penalty constant plays with respect to local convergence and rate of convergence is reviewed here. As the emphasis has shifted from penalty function methods to multiplier methods, and more recently to quasi-Newton methods, there has been a corresponding decrease in the importance of the penalty constant. Specifically, in the penalty function method one obtains local convergence if and only if the penalty constant tends to infinity. In the multiplier method it is possible to obtain local convergence with a fixed penalty constant, provided that the constant is sufficiently large; however, one obtains superlinear convergence if and only if the penalty constant tends to infinity. Finally, the quasi-Newton methods are locally superlinearly convergent for fixed values of the penalty constant, and indeed the most natural formulation yields an algorithm that is independent of the penalty constant.
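The behavior described for the multiplier method can be illustrated on a toy equality-constrained problem (not taken from the paper; the problem, function names, and constants below are my own illustrative choices). The sketch below applies the first-order multiplier update λ ← λ + c·h(x) with a fixed penalty constant c; for this quadratic example the inner unconstrained minimization of the augmented Lagrangian has a closed form, and the multiplier error contracts linearly by the factor 1/(1+c) per iteration — convergent for any fixed c > 0, but superlinear only in the limit c → ∞, consistent with the review's claim.

```python
# Method-of-multipliers sketch on an assumed toy problem:
#   minimize f(x) = (x1 - 1)^2 + (x2 - 2)^2
#   subject to h(x) = x1 + x2 - 1 = 0
# Augmented Lagrangian: L_c(x, lam) = f(x) + lam*h(x) + (c/2)*h(x)^2.
# Exact solution: x* = (0, 1), lam* = 2.

def inner_min(lam, c):
    """Unconstrained minimizer of L_c(., lam); closed form since
    the toy problem is quadratic with a linear constraint."""
    x2 = (4.0 - lam + 2.0 * c) / (2.0 + 2.0 * c)
    x1 = x2 - 1.0
    return x1, x2

def multiplier_method(c, iters=50, lam=0.0):
    """Fixed penalty constant c; first-order update lam <- lam + c*h(x).
    The multiplier error shrinks by 1/(1+c) each iteration."""
    for _ in range(iters):
        x1, x2 = inner_min(lam, c)
        h = x1 + x2 - 1.0      # constraint violation at the inner minimizer
        lam += c * h           # multiplier (dual ascent) update
    return (x1, x2), lam

x_star, lam_star = multiplier_method(c=10.0)
print(x_star, lam_star)        # approaches x* = (0, 1), lam* = 2
```

Note that the iteration converges even though c stays fixed at 10, in contrast to the pure penalty method, where driving the constraint violation to zero requires c → ∞.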