Mathematical Systems Theory I

The state–space approach to control-systems theory seems to have originated in the 1950s with the study of necessary conditions for the existence of optimal controls. Viewed as a natural outgrowth of the calculus of variations—but with more problematic constraints—optimal control theory received widespread recognition with the publication of [6]. Contemporaneously with these initial investigations into optimal control theory, engineers were developing (sometimes ad hoc) techniques for the design of control systems based on the input–output (or “frequency domain”) approach to the modeling of physical systems. The first attempt to reconcile the state–space approach with the input–output approach is generally attributed to R. E. Kalman, as set forth in his seminal papers [2] and [3], which appeared in the early 1960s. Particularly significant was Kalman’s enunciation of the axiomatic definition of a (controlled) dynamical system in [3]. From these fruitful beginnings, research in controlled dynamical systems has experienced explosive growth in the intervening 40-plus years, resulting in a mature and well-developed intellectual discipline with myriad and wide-ranging applications. Accompanying the maturation of the discipline is the increasing availability of monographs, textbooks, and research journals that specialize in controlled dynamical systems. Books on the subject are now available for audiences with widely diverse backgrounds, interests, and levels of mathematical preparation. As a result, reviewers of books in the subject area are obliged to place them in the context of an ever-expanding sphere of literature. The ensuing discussion will therefore address not only the contents of the book under review, but also how it relates to a (small) sample of other existing books with similar objectives.