Optimal control - 1950 to 1985

Optimal control had its origins in the calculus of variations in the 17th century. The calculus of variations was developed further in the 18th century by Euler and Lagrange and in the 19th century by Legendre, Jacobi, Hamilton, and Weierstrass. In the early 20th century, Bolza and Bliss put the final touches of rigor on the subject. In 1957, Bellman gave a new view of Hamilton-Jacobi theory, which he called dynamic programming, essentially a nonlinear feedback control scheme. McShane (1939) and Pontryagin (1962) extended the calculus of variations to handle control-variable inequality constraints, the latter enunciating his elegant maximum principle. The truly enabling element for the use of optimal control theory was the digital computer, which became commercially available in the 1950s. In the 1980s, research began, and continues today, on making optimal feedback logic more robust to variations in the plant and disturbance models; one element of this research is worst-case, or H-infinity, control, which developed out of differential game theory.
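To make the feedback interpretation of dynamic programming concrete, the sketch below states the Hamilton-Jacobi-Bellman equation in generic notation (state x, control u, dynamics f, cost rate L, terminal cost phi, value function V); the notation is illustrative and is not drawn from any specific work cited below.

% Hamilton-Jacobi-Bellman equation: a minimal sketch in standard notation.
% System: \dot{x} = f(x,u,t); cost: J = \phi(x(t_f)) + \int_t^{t_f} L(x,u,\tau)\, d\tau.
\[
  -\frac{\partial V}{\partial t}(x,t)
  \;=\;
  \min_{u}\left[\, L(x,u,t) + \frac{\partial V}{\partial x}(x,t)\, f(x,u,t) \,\right],
  \qquad
  V(x,t_f) = \phi(x).
\]
% The minimizing control depends on the current state through V_x,
%   u^*(x,t) = \arg\min_u [ L(x,u,t) + V_x(x,t) f(x,u,t) ],
% so dynamic programming yields a (generally nonlinear) state-feedback law.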

[1] E. J. McShane, On Multipliers for Lagrange Problems, 1939.

[2] Edward S. Rutowski, Energy Approach to the General Aircraft Performance Problem, 1954.

[3] H. Simon, et al., Dynamic Programming Under Uncertainty with a Quadratic Criterion Function, 1956.

[4] A. E. Bryson, et al., Optimum Rocket Trajectories With Aerodynamic Drag, 1958.

[5] J. Breakwell, The Optimization of Trajectories, 1959.

[6] R. E. Kalman, et al., Contributions to the Theory of Optimal Control, 1960.

[7] Henry J. Kelley, et al., Gradient Theory of Optimal Flight Paths, 1960.

[8] R. E. Kalman, et al., New Results in Linear Filtering and Prediction Theory, 1961.

[9] D. Joseph, et al., On Linear Control Theory, Transactions of the American Institute of Electrical Engineers, Part II: Applications and Industry, 1961.

[10] J. G. F. Francis, et al., The QR Transformation: A Unitary Analogue to the LR Transformation, Part 1, Comput. J., 1961.

[11] A. E. Bryson, et al., A Steepest-Ascent Method for Solving Optimum Programming Problems, 1962.

[12] A. MacFarlane, An Eigenvector Solution of the Optimal Linear Regulator Problem, 1963.

[13] H. G. Moyer, et al., A Trajectory Optimization Technique Based upon the Theory of the Second Variation, 1963.

[14] A. Bryson, et al., Optimization and Control of Nonlinear Systems Using the Second Variation, 1963.

[15] Gene F. Franklin, et al., A General Solution for Linear, Sampled-Data Control, 1963.

[16] Arthur E. Bryson, et al., Energy-State Approximation in Performance Optimization of Supersonic Aircraft, 1969.

[17] J. H. Wilkinson, et al., The QR Algorithm for Real Hessenberg Matrices, 1970.

[18] R. Mehra, et al., A Generalized Gradient Method for Optimal Control Problems with Inequality Constraints and Singular Arcs, 1972.

[19] Arthur E. Bryson, et al., Wind Modeling and Lateral Control for Automatic Landing, 1977.

[20] A. Laub, A Schur Method for Solving Algebraic Riccati Equations, 1978 IEEE Conference on Decision and Control including the 17th Symposium on Adaptive Processes, 1978.

[21] G. Stein, et al., Multivariable Feedback Design: Concepts for a Classical/Modern Synthesis, 1981.

[22] Richard H. Battin, Space Guidance Evolution: A Personal Narrative, 1982.

[23] John C. Doyle, Analysis of Feedback Systems with Structured Uncertainty, 1982.

[24] G. Zames, et al., Feedback, Minimax Sensitivity, and Optimal Robustness, 1983.

[25] C. Hargraves, et al., Direct Trajectory Optimization Using Nonlinear Programming and Collocation, 1987.

[26] Hans Seywald, Trajectory Optimization Based on Differential Inclusion, 1994.

[27] Dimitri P. Bertsekas, Nonlinear Programming, 1997.