Relatively Optimal Control for Continuous-Time Systems

This paper presents a continuous-time solution to the problem of designing a relatively optimal control, that is, a dynamic control that is optimal with respect to a given initial condition and stabilizing for any other initial state. This technique drastically reduces the complexity of the controller and applies successfully to systems in which (constrained) optimality is required only for some "nominal operation". The technique is combined with a pole-assignment procedure. It is shown that once the closed-loop poles have been fixed and an optimal trajectory, originating from the nominal initial state and compatible with these poles, has been computed, a stabilizing compensator that drives the system along this trajectory can be derived in closed form. There is no restriction on the optimality criterion or the constraints. The optimization is carried out over a finite-dimensional parameterization of the trajectories. However, although the complexity of the compensator is not affected by the choice of cost and constraints (the compensator dimension is fixed as design data), optimization over the proposed space is not easy in general, since the constraints are infinite-dimensional and approximations become necessary. For the case of quadratic optimization with simultaneous pole assignment, an efficient solution based on convex quadratic programming is proposed.
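To illustrate the quadratic case, the following sketch parameterizes trajectories as combinations of the modes of a set of assigned closed-loop poles, imposes the nominal initial condition as linear constraints, and minimizes a quadratic cost over the remaining free parameter in closed form. The plant (a double integrator), the pole locations, the nominal initial state, and the elimination of parameters are all illustrative assumptions for this sketch, not data from the paper.

```python
# Illustrative sketch (not the paper's algorithm): quadratic trajectory
# optimization over assigned modes for a double integrator x'' = u.
# Assumed design data: three closed-loop poles, nominal state (x0, v0).
lams = [-1.0, -2.0, -3.0]   # assigned closed-loop poles (assumption)
x0, v0 = 1.0, 0.0           # nominal initial state (assumption)

# Trajectory x(t) = sum_i c_i exp(lam_i t), input u(t) = x''(t).
# Cost J = int_0^inf (x^2 + u^2) dt = c' Q c, computed analytically:
# Q[i][j] = (1 + lam_i^2 lam_j^2) * int_0^inf exp((lam_i+lam_j) t) dt
Q = [[(1.0 + (li * li) * (lj * lj)) / (-(li + lj)) for lj in lams]
     for li in lams]

# Initial-condition constraints x(0)=x0, x'(0)=v0 leave one free
# parameter c3; eliminating c1, c2 gives c = c0 + c3 * d.
c0 = [2 * x0 + v0, -(x0 + v0), 0.0]
d = [1.0, -2.0, 1.0]

def quad(a, b):
    """Bilinear form a' Q b."""
    return sum(a[i] * Q[i][j] * b[j] for i in range(3) for j in range(3))

def cost(cv):
    """Quadratic cost J(c) = c' Q c."""
    return quad(cv, cv)

# One-dimensional convex QP: minimize J(c0 + c3 d) over c3.
c3 = -quad(d, c0) / quad(d, d)
c = [c0[i] + c3 * d[i] for i in range(3)]
```

The optimized coefficients `c` satisfy the initial condition by construction and achieve a cost no larger than the trajectory obtained with the free parameter set to zero; in the paper's setting the surplus modes come from the compensator dimension, and the same structure yields a convex quadratic program in the general multivariable case.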