Optimal Control Theory

The theory of optimal control is one of the major areas of application of mathematics today. From its early inception, driven by the demands of automatic control system design in engineering, it has grown steadily in scope and has now spread to many other, far-removed areas such as economics. Until recently the theory was limited to “lumped-parameter systems”: systems governed by ordinary differential equations. In fact, it is most developed for linear ordinary differential equations, particularly feedback control for a quadratic performance index, where the results are most complete and closest to use in practical design. The extension to partial differential equations (and delay differential equations) is currently an active area of research and holds much promise. It is natural that this extension deal with linear systems, not only for mathematical reasons but also for reasons of practicality. The theory of semigroups of linear operators developed in the last chapter provides a convenient setting for this purpose and offers many advantages. It affords a useful degree of generality and serves, for instance, to distinguish those aspects peculiar to the particular partial differential equation involved from those which are more general. Not the least of its advantages is the structural similarity to the familiar finite-dimensional model. Of course, semigroup theory per se applies only to time-invariant systems, but this is not a serious limitation.
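For orientation, it may help to recall the finite-dimensional model referred to above; the notation here is generic and not necessarily that adopted later in the chapter. In the standard linear-quadratic regulator problem the state $x(t) \in \mathbb{R}^n$ evolves according to a linear ordinary differential equation, and one minimizes a quadratic performance index:
\[
\dot{x}(t) = A\,x(t) + B\,u(t), \qquad
J(u) = \int_0^{\infty} \bigl( x(t)^{T} Q\, x(t) + u(t)^{T} R\, u(t) \bigr)\, dt,
\]
with $Q \ge 0$ and $R > 0$. Under the standard stabilizability and detectability hypotheses, the optimal control is the linear feedback
\[
u(t) = -R^{-1} B^{T} P\, x(t),
\]
where $P$ is the nonnegative solution of the algebraic Riccati equation
\[
A^{T} P + P A - P B R^{-1} B^{T} P + Q = 0.
\]
The structural similarity mentioned above lies in the fact that, in the semigroup setting, this pattern persists: $A$ becomes the generator of a semigroup on a Hilbert space, and the Riccati equation becomes an operator equation.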