Adaptive control of linear systems

The principle of feedback control is to maintain consistent performance, despite uncertainties in the system or changes in the setpoints, through a feedback controller that uses measurements of the system performance, mainly the outputs. Many controllers have fixed parameters, such as those designed by standard state feedback and H∞ control methods. The basic aim of adaptive control is also to maintain consistent performance in the presence of uncertainty or unknown variation in plant parameters, but it does so by changing the controller parameters in response to changes in the performance of the control system. Hence, the controller settings adapt according to the performance of the closed-loop system. How the controller parameters change is determined by adaptive laws, which are often designed based on the stability analysis of the adaptive control system.

A number of design methods have been developed for adaptive control. Model Reference Adaptive Control (MRAC) uses a reference model that produces the desired output; the difference between the plant output and the reference output is then used to adjust the controller parameters, and hence the control input, directly. MRAC is often formulated in the continuous-time domain and for deterministic plants. Self-Tuning Control (STC) estimates the system parameters and then computes the control input from the estimated parameters. STC is often formulated in discrete time and for stochastic plants. Furthermore, STC often has a separate identification procedure for estimating the system parameters and is referred to as indirect adaptive control, whereas MRAC adapts the controller parameters directly and is referred to as direct adaptive control. In general, the stability analysis of direct adaptive control is less involved than that of indirect adaptive control and can often be carried out using Lyapunov functions. In this chapter, we focus on the basic design method of MRAC.
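
To make the structure of direct MRAC concrete, the following is a minimal simulation sketch for a first-order plant, not a design from this chapter. It assumes a plant ẏ = a·y + b·u with unknown a and b (only the sign of b known to the controller), a designer-chosen reference model ẏ_m = −a_m·y_m + b_m·r, a control law u = θ_r·r + θ_y·y, and Lyapunov-based adaptive laws driven by the tracking error e = y − y_m. All numerical values (a, b, a_m, b_m, the adaptation gain γ, and the square-wave reference) are illustrative assumptions.

```python
import numpy as np

# "Unknown" plant parameters, used here only to simulate the plant itself;
# the controller never reads a or b, only sign(b).
a, b = 1.0, 2.0

# Reference model y_m_dot = -a_m*y_m + b_m*r, chosen by the designer.
a_m, b_m = 3.0, 3.0

gamma = 1.0          # adaptation gain (assumed value)
dt, T = 1e-3, 20.0   # Euler step and simulation horizon

y, y_m = 0.0, 0.0            # plant and reference-model outputs
theta_r, theta_y = 0.0, 0.0  # adjustable controller parameters

for k in range(int(T / dt)):
    t = k * dt
    r = 1.0 if (t % 10.0) < 5.0 else -1.0   # square-wave reference input

    # Control law with adjustable parameters
    u = theta_r * r + theta_y * y

    # Tracking error between plant output and reference-model output
    e = y - y_m

    # Lyapunov-based adaptive laws: the controller parameters are adjusted
    # directly from the tracking error (direct adaptive control).
    theta_r += dt * (-gamma * e * r * np.sign(b))
    theta_y += dt * (-gamma * e * y * np.sign(b))

    # Euler integration of the plant and the reference model
    y   += dt * (a * y + b * u)
    y_m += dt * (-a_m * y_m + b_m * r)

print("adapted parameters:", theta_r, theta_y)
print("ideal parameters:  ", b_m / b, -(a + a_m) / b)
```

Under these assumptions the ideal parameters that make the closed loop match the reference model are θ_r* = b_m/b and θ_y* = −(a + a_m)/b; with a sufficiently exciting reference, the adapted parameters approach these values while the tracking error decays.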