Stability Region Analysis Using Composite Lyapunov Functions and Sum of Squares Programming

We propose using (bilinear) sum-of-squares programming for obtaining inner bounds of regions-of-attraction and outer bounds of attractive invariant sets for dynamical systems with polynomial vector fields. We search for composite Lyapunov functions, comprised of pointwise maximums and pointwise minimums of polynomial functions. Excellent results for some examples are obtained using the proposed methods and the PENBMI solver.

Department of Mechanical Engineering, University of California, Berkeley. Email: {weehong, pack}@jagger.me.berkeley.edu

I. INTRODUCTION

Finding the stability region or region-of-attraction of a nonlinear system is a topic of significant importance and has been studied extensively, for example in [1], [2] and [3]. It also has practical applications, such as determining the operating envelope of aircraft and power systems.

In this paper, we present a method of using sum of squares (SOS) programming to search for polynomial Lyapunov functions that enlarge a provable region-of-attraction of nonlinear systems with polynomial vector fields. An impediment to using high degree Lyapunov functions is the extremely rapid increase in the number of optimization decision variables as the state dimension and the degree of the Lyapunov function (and the vector field) increase (see section VII-A). Here, we propose using pointwise maximums or minimums of polynomial functions to obtain rich functional forms while keeping the number of optimization decision variables relatively low. The region-of-attraction analysis is applicable to systems with locally asymptotically stable equilibrium points. For systems without such properties, we study the related problem of finding an outer bound for an attractive invariant set.

The notation is generally standard, with R_n denoting the set of polynomials with real coefficients in n variables and Σ_n ⊂ R_n denoting the subset of SOS polynomials.

II. ESTIMATING A REGION OF ATTRACTION

Consider a system of the form

  ẋ = f(x)   (1)

where x(t) ∈ R^n and f is an n-vector of elements of R_n with f(0) = 0. We want to find a region-of-attraction for this system, i.e. a region such that all trajectories starting in it are attracted to the fixed point at the origin. The following lemma on finding a region-of-attraction using a Lyapunov function is a modification of a lemma from [4, pg. 167] and [5, pg. 122]:

Lemma 1: If there exists a continuously differentiable function V : R^n → R such that

  V is positive definite,   (2)
  Ω := {x ∈ R^n | V(x) ≤ 1} is bounded, and   (3)
  {x ∈ R^n | V(x) ≤ 1} \ {0} ⊆ {x ∈ R^n | (∂V/∂x) f(x) < 0},   (4)

then for all x(0) ∈ Ω, the solution of (1) exists, satisfies x(t) ∈ Ω, and lim_{t→∞} x(t) = 0. As such, Ω is invariant, and a subset of the region-of-attraction for (1).

Proof: For r ≤ 1, let Ω_r := {x ∈ R^n | V(x) ≤ r}; then Ω_r ⊆ Ω and hence Ω_r is bounded. Because V̇ < 0 on Ω_r \ {0}, if x(0) ∈ Ω_r then V(x(t)) ≤ V(x(0)) ≤ r while the solution exists. This means that a solution starting inside Ω_r will remain in Ω_r while the solution exists. Since Ω_r is compact, the system (1) has a unique solution defined for all t ≥ 0 if x(0) ∈ Ω_r.

Take ε > 0. Define S_ε := {x ∈ R^n | ε/2 ≤ V(x) ≤ 1}. Note that S_ε ⊆ Ω \ {0} ⊆ {x ∈ R^n | (∂V/∂x) f(x) < 0}. Since S_ε is compact, there exists r_ε > 0 such that V̇ ≤ −r_ε < 0 on S_ε. This implies that there exists t̄ such that V(x(t)) < ε for all t > t̄, i.e. x(t) ∈ T_ε := {x ∈ R^n | V(x) < ε} for all t > t̄. Hence, if x(0) ∈ Ω, V(x(t)) → 0 as t → ∞.

Finally, let ε > 0. Define Ω_ε := {x ∈ R^n | ‖x‖ ≥ ε, V(x) ≤ 1}. Ω_ε is compact, with 0 ∉ Ω_ε. Since V is continuous and positive definite, there exists γ such that V(x) ≥ γ > 0 on Ω_ε. We have already established that V(x(t)) → 0 as t → ∞, so there exists t̂ such that for all t > t̂, V(x(t)) < γ and hence x(t) ∉ Ω_ε, which means ‖x(t)‖ < ε. So x(t) → 0 as t → ∞.
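To make Lemma 1 concrete, consider a small scalar illustration of ours (not an example from the paper): for ẋ = f(x) = x^3 − x, the true region-of-attraction of the origin is (−1, 1), and the candidate V(x) = x^2/0.81 satisfies (2)-(4) with Ω = [−0.9, 0.9]. The script below merely samples condition (4) as a numerical sanity check; it is not a certificate (producing certificates is what the SOS formulation developed in this section is for), and the function names f, V, Vdot are our own.

```python
import numpy as np

# Hypothetical 1-D illustration of Lemma 1 (not an example from the paper):
# xdot = f(x) = x^3 - x has an asymptotically stable equilibrium at 0 with true
# region-of-attraction (-1, 1).  With V(x) = x^2 / 0.81, the sublevel set
# Omega = {V <= 1} = [-0.9, 0.9] satisfies conditions (2)-(4), so Lemma 1
# certifies it as an invariant subset of the region-of-attraction.

f = lambda x: x**3 - x                       # polynomial vector field
V = lambda x: x**2 / 0.81                    # candidate Lyapunov function
Vdot = lambda x: (2.0 * x / 0.81) * f(x)     # (dV/dx) * f(x)

# Sample Omega \ {0} densely and check the sign condition (4) numerically.
xs = np.linspace(-0.9, 0.9, 2001)
xs = xs[np.abs(xs) > 1e-6]                   # exclude the origin
assert np.all(V(xs) <= 1.0 + 1e-12)          # samples lie in Omega
print("max of Vdot on sampled Omega \\ {0}:", Vdot(xs).max())  # strictly negative
```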
Remark: The constraints in equations (2)-(4) are not convex constraints on V, as illustrated by a 1-dimensional example, [6]. Take f(x) = −x, V1(x) = 16x^2 − 19.95x^3 + 6.4x^4 and V2(x) = 0.1x^2. Then V1 and V2 satisfy (2)-(4), but 0.58V1 + 0.42V2 does not (a numerical check of this example is sketched at the end of this section). This nonconvexity of certifying Lyapunov functions in such a local analysis may be a source of the nonconvexity in our approach (described later in this section), namely the bilinear nature of the formulation.

A simple extension allows V to be defined as the pointwise maximum of smooth functions.

Lemma 2: If there exist continuously differentiable functions {V_i}_{i=1}^q : R^n → R such that

  V(x) := max_{1≤i≤q} V_i(x) is positive definite,   (5)
  Ω := {x ∈ R^n | V(x) ≤ 1} is bounded,   (6)
  L_i := {x ∈ R^n | max_{1≤j≤q} V_j(x) ≤ V_i(x) ≤ 1},   (7)
  L_i \ {0} ⊆ {x ∈ R^n | (∂V_i/∂x) f(x) < 0} for i = 1, ..., q,   (8)

then for all x(0) ∈ Ω, the solution of (1) exists, satisfies x(t) ∈ Ω, and lim_{t→∞} x(t) = 0. As such, Ω is invariant, and a subset of the region-of-attraction for (1).

Proof: The proof is written for q = 2; the extension to larger (but finite) q is straightforward. Since L_1 ∪ L_2 = Ω, condition (8) ensures that if x(0) ∈ Ω, then V(x(t)) ≤ V(x(0)) ≤ 1 while the solution exists. A solution starting inside Ω will therefore remain in Ω while the solution exists. Since Ω is compact, the system (1) has a unique solution defined for all t ≥ 0 whenever x(0) ∈ Ω.

Take ε > 0. Define S_ε := {x ∈ R^n | ε/2 ≤ V(x) ≤ 1}, so S_ε ⊆ (L_1 ∪ L_2) \ {0}. Note that for each i, (S_ε ∩ L_i) ⊆ L_i \ {0} ⊆ {x ∈ R^n | (∂V_i/∂x) f(x) < 0}, so on the compact set S_ε ∩ L_i there exists r_{i,ε} such that (∂V_i/∂x) f(x) ≤ −r_{i,ε} < 0. Consequently, if x(t) ∈ S_ε ∩ L_1 on [t_A, t_B], then V(x(t_B)) ≤ −r_{1,ε}(t_B − t_A) + V(x(t_A)). Similarly, if x(t) ∈ S_ε ∩ L_2 on [t_A, t_B], then V(x(t_B)) ≤ −r_{2,ε}(t_B − t_A) + V(x(t_A)). Therefore, if x(t) ∈ S_ε ∩ (L_1 ∪ L_2) on [t_A, t_B], then V(x(t_B)) ≤ −r_ε(t_B − t_A) + V(x(t_A)), where r_ε = min(r_{1,ε}, r_{2,ε}). Since r_ε > 0, this implies that there exists t̄ such that V(x(t)) < ε for all t > t̄, i.e. x(t) ∈ T_ε := {x ∈ R^n | V(x) < ε} for all t > t̄. This shows that if x(0) ∈ Ω, then V(x(t)) → 0 as t → ∞. Since V is positive definite and continuous, and Ω is bounded, the argument in Lemma 1 gives x(t) → 0.

In order to enlarge Ω (by choice of V), we define a variable sized region P_β := {x ∈ R^n | p(x) ≤ β} and maximize β while imposing the constraint P_β ⊆ Ω. Here, p(x) is a fixed, positive definite polynomial, chosen to reflect the relative importance of the states. Applying Lemma 2, the problem is posed as an optimization:

  max_{V_i ∈ R_n} β   s.t.
  V_i(0) = 0 for i = 1, ..., q,
  V(x) := max_{1≤i≤q} V_i(x) is positive definite,   (9)
  Ω := {x ∈ R^n | V(x) ≤ 1} is bounded,   (10)
  {x ∈ R^n | p(x) ≤ β} ⊆ {x ∈ R^n | V(x) ≤ 1},   (11)
  {x ∈ R^n | max_{1≤j≤q} V_j(x) ≤ V_i(x) ≤ 1} \ {0} ⊆ {x ∈ R^n | (∂V_i/∂x) f(x) < 0} for i = 1, ..., q.   (12)

Let l_1(x) be a given positive definite polynomial. For each V_i, if we require V_i − l_1 ∈ Σ_n for i = 1, ..., q, then both (9) and (10) are satisfied. Clearly, (11) holds if and only if {x ∈ R^n | p(x) ≤ β} ⊆ ⋂_{i=1}^q {x ∈ R^n | V_i(x) ≤ 1}.
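To indicate how set containments such as (11) and (12) become SOS constraints, the following is a sketch of the generalized S-procedure relaxation that is standard in this line of work; the multipliers s_i, s_{i0}, s_{ij} ∈ Σ_n and the fixed positive definite polynomial l_2 are our illustrative choices, not necessarily the exact certificates used in the paper.

```latex
% Sufficient conditions (generalized S-procedure sketch).  Here
% s_i, s_{i0}, s_{ij} \in \Sigma_n are SOS multipliers and l_2 is a fixed
% positive definite polynomial (illustrative choices, for i = 1, ..., q).
\begin{align*}
  (1 - V_i) - s_i\,(\beta - p) &\in \Sigma_n,
    && \text{(sufficient for (11))},\\
  -\Bigl(\tfrac{\partial V_i}{\partial x} f + l_2\Bigr)
    + s_{i0}\,(V_i - 1) + \sum_{j \neq i} s_{ij}\,(V_j - V_i) &\in \Sigma_n,
    && \text{(sufficient for (12))}.
\end{align*}
```

Indeed, if p(x) ≤ β the first condition forces 1 − V_i ≥ s_i(β − p) ≥ 0, i.e. V_i ≤ 1; and on L_i the terms s_{i0}(V_i − 1) and s_{ij}(V_j − V_i) are nonpositive, so the second condition forces (∂V_i/∂x) f ≤ −l_2 < 0 away from the origin. Because the multipliers multiply the unknown β and the unknown V_i, such constraints are bilinear in the decision variables, which is the bilinear nature of the formulation referred to in the Remark and the reason a bilinear SOS/BMI solver such as PENBMI is employed.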
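Returning to the Remark's one-dimensional example, the short script below (our illustration, not part of the paper) samples each unit sublevel set and checks the sign of V̇ = (∂V/∂x) f, confirming numerically that V1 and V2 individually satisfy (4) while the convex combination 0.58V1 + 0.42V2 does not.

```python
import numpy as np

# Numerical sanity check of the Remark's 1-D example (our illustration; the
# paper's SOS machinery is what actually certifies such statements).
# f(x) = -x; V1 and V2 each satisfy condition (4) on their unit sublevel sets,
# but 0.58*V1 + 0.42*V2 has points with V <= 1 where Vdot = (dV/dx)*f(x) > 0.

f   = lambda x: -x
V1  = lambda x: 16*x**2 - 19.95*x**3 + 6.4*x**4
dV1 = lambda x: 32*x - 59.85*x**2 + 25.6*x**3
V2  = lambda x: 0.1*x**2
dV2 = lambda x: 0.2*x
V   = lambda x: 0.58*V1(x) + 0.42*V2(x)
dV  = lambda x: 0.58*dV1(x) + 0.42*dV2(x)

xs = np.linspace(-4.0, 4.0, 8001)
xs = xs[np.abs(xs) > 1e-6]                       # exclude the origin

for name, Vf, dVf in [("V1", V1, dV1), ("V2", V2, dV2),
                      ("0.58*V1 + 0.42*V2", V, dV)]:
    inside = xs[Vf(xs) <= 1.0]                   # sampled points of {V <= 1} \ {0}
    worst = np.max(dVf(inside) * f(inside))      # largest Vdot over those samples
    print(f"{name}: max Vdot on sampled sublevel set = {worst:.2e}")
# Expected: strictly negative for V1 and V2, but positive for the convex
# combination (e.g. near x = 1.45, where V <= 1 and Vdot > 0).
```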