Optimal Timing of Control-Law Updates for Unstable Systems with Continuous Control

We study the optimal control of a linear system relative to an unstable periodic trajectory using continuous control. Gaussian state uncertainties induce a statistical cost of controlling the state over long periods of time. The length of time between control-law updates directly affects this cost, and in a hyperbolically unstable system the time between updates admits an optimal value. If the total amount of uncertainty is fixed, there is also an optimal distribution between position and velocity uncertainty. We apply these ideas to study the statistical cost of controlling a spacecraft in the vicinity of a relative equilibrium point and a Halo orbit in the Hill three-body problem.
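
For intuition only (a schematic sketch, not a result or model from the paper), the existence of an optimal update interval can be illustrated with a hypothetical scalar unstable system: between updates the expected deviation grows with the unstable exponent, while each update carries a fixed overhead, so the average cost rate has an interior minimum. The symbols $\lambda$, $\sigma^2$, and $c_u$ below are illustrative assumptions, not quantities defined in the abstract.

% Hypothetical scalar model: dx = \lambda x\,dt + \sigma\,dW with \lambda > 0,
% state reset to zero at every control-law update, updates every T time units
% at a fixed overhead cost c_u.
\[
  \mathbb{E}\!\left[x^2(t)\right] = \frac{\sigma^2}{2\lambda}\left(e^{2\lambda t}-1\right),
  \qquad 0 \le t \le T,
\]
\[
  J(T) = \frac{1}{T}\left[c_u + \int_0^T \mathbb{E}\!\left[x^2(t)\right]\mathrm{d}t\right]
       = \frac{c_u}{T} + \frac{\sigma^2}{2\lambda}\left(\frac{e^{2\lambda T}-1}{2\lambda T}-1\right).
\]

Here $J(T)\to\infty$ both as $T\to 0^+$ (update overhead dominates) and as $T\to\infty$ (exponential growth of the deviation dominates), so an interior optimum $T^\ast$ exists, consistent with the claim above.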