Inverse Optimality in Robust Stabilization

The concept of a robust control Lyapunov function ({\bf rclf}) is introduced, and it is shown that the existence of an {\bf rclf} for a control-affine system is equivalent to robust stabilizability via continuous state feedback. This extends Artstein's theorem on nonlinear stabilizability to systems with disturbances. It is then shown that every {\bf rclf} satisfies the steady-state Hamilton--Jacobi--Isaacs (HJI) equation associated with a meaningful game, and that every member of a class of pointwise min-norm control laws is optimal for such a game. These control laws thus enjoy the desirable properties of optimality and can be computed directly from the {\bf rclf}, without solving the HJI equation for the upper value function.
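
As a minimal sketch of the objects involved (the notation here is assumed, not taken from the paper): consider a control-affine system with disturbance $w$,
\[
\dot x \;=\; f(x) + g_1(x)\,w + g_2(x)\,u ,
\]
where a smooth, positive definite, radially unbounded function $V$ is an {\bf rclf} if, for some continuous margin $\sigma(x) > 0$ and admissible disturbance set $\mathcal{W}$,
\[
\inf_{u}\;\sup_{w \in \mathcal{W}}\;
\nabla V(x)\bigl(f(x) + g_1(x)\,w + g_2(x)\,u\bigr) \;\le\; -\sigma(x)
\qquad \text{for all } x \neq 0 .
\]
A pointwise min-norm control law then selects, at each state $x$, the smallest control achieving this guaranteed decrease:
\[
u^*(x) \;=\; \arg\min\Bigl\{\, \|u\| \;:\;
\sup_{w \in \mathcal{W}} \nabla V(x)\bigl(f(x) + g_1(x)\,w + g_2(x)\,u\bigr) \le -\sigma(x) \,\Bigr\} .
\]
This pointwise minimization over $u$ is a static optimization at each $x$, which is why such laws can be evaluated directly from $V$ without solving the HJI partial differential equation.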