Abstract
This paper demonstrates that Pontryagin's Maximum Principle may be derived from the Principle of Optimality. It considers a control system described by ẋ = f(x, u, t), where the control vector u is restricted to a closed and bounded set U. The optimal control steers the system from an initial state x0 at time t0 to a moving target in such a way that the cost of control along the optimal trajectory, ∫_{t0}^{t1} f0[x0(τ), u0(τ)] dτ, is minimized. The optimal control u0 satisfies the maximum principle at each point of the trajectory: max over u of 〈g(x), f(x, u)〉 = 〈g(x), f(x, u0)〉, where the vector g(x) is related to ∇Γ, and Γ(x0, x) is the minimal cost of going from x0 to x.
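To make the connection between the two principles concrete, the following LaTeX fragment gives a heuristic sketch, not reproduced from the paper, of how the maximum condition can be read off from the Principle of Optimality. It assumes Γ(x0, ·) is differentiable along the optimal trajectory, and the augmented vectors f̄ = (f0, f) and g = (−1, ∇Γ) are notation introduced here purely for illustration.

\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Heuristic sketch, assuming \Gamma(x_0,\cdot) is differentiable; the augmented
% vectors \bar{f} = (f_0, f) and g = (-1, \nabla_x\Gamma) are notation
% introduced here, not taken from the paper.
\begin{align*}
% Principle of optimality over a short step \Delta t with an arbitrary
% admissible u: reaching x + f(x,u)\Delta t optimally costs no more than
% reaching x optimally and then applying u for \Delta t.
\Gamma\bigl(x_0,\, x + f(x,u)\,\Delta t\bigr)
  &\le \Gamma(x_0, x) + f_0(x,u)\,\Delta t
  &&\Longrightarrow\quad
  \langle \nabla_x \Gamma,\, f(x,u)\rangle \le f_0(x,u),
\\
% Along the optimal trajectory the inequality is an equality, because the
% cost actually accrued equals the increment of the minimal cost:
\Gamma\bigl(x_0,\, x + f(x,u^0)\,\Delta t\bigr)
  &= \Gamma(x_0, x) + f_0(x,u^0)\,\Delta t
  &&\Longrightarrow\quad
  \langle \nabla_x \Gamma,\, f(x,u^0)\rangle = f_0(x,u^0).
\\
% Combining the two lines, the maximum condition of the abstract appears
% in the augmented state space:
\max_{u \in U}\, \langle g(x),\, \bar{f}(x,u)\rangle
  &= \langle g(x),\, \bar{f}(x,u^0)\rangle = 0.
\end{align*}

\end{document}

The first line says that going optimally to a nearby point can never cost more than going optimally to x and then applying an arbitrary admissible u for a short time; equality along the optimal trajectory then forces u0 to maximize 〈g(x), f̄(x, u)〉, which is the stated maximum condition with g(x) built from ∇Γ.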