Computing control Lyapunov functions via a Zubov type algorithm

We present a scheme for the determination of control Lyapunov functions which can be used as a basis for numerical computations. Under the assumption of local asymptotic null-controllability, we define the domain of asymptotic null-controllability. In this setting a control Lyapunov function is defined via an optimal control problem. It is then shown that this function can be characterized as the unique viscosity solution of a partial differential equation which can be interpreted as a generalization of Zubov's equation.


Introduction
Control Lyapunov functions have been shown to be an interesting tool in the analysis of nonlinear control systems. The existence of such a function can, according to its regularity, guarantee several interesting properties. The existence of a continuous control Lyapunov function is equivalent to asymptotic null-controllability [7, 10]. The existence of a continuously differentiable one is equivalent to the existence of (possibly discontinuous) controllers robust with respect to measurement noise [6]. Several design procedures are available that construct controllers given the knowledge of a control Lyapunov function. We refer to [9] for a good introduction to the area. The use of control Lyapunov functions as a design tool is by now discussed in many textbooks; references can be found in [9]. There, usually, design procedures are discussed that exploit a certain structure of the system in order to find control Lyapunov functions. A general procedure for their determination is not available. It is therefore of interest to turn to numerical methods for the approximation of control Lyapunov functions. In this paper we present an approach to this end that is based on a generalization of a result by Zubov. One of the celebrated results in the theory of ordinary differential equations is Zubov's method [12], which asserts that the domain of attraction of an asymptotically stable fixed point x* of ẋ = f(x), x ∈ R^n, may be characterized by solutions v of the partial differential equation

Dv(x) f(x) = -h(x)(1 - v(x)).    (1)

Namely, under suitable assumptions on h, the domain of attraction is the set v^{-1}([0, 1)). The function v constructed via this equation is automatically a smooth Lyapunov function. This result has recently been extended to perturbed systems in [2]. Numerical treatment of this equation is discussed in [3]. As we already know that it is unreasonable to expect continuously differentiable control Lyapunov functions, we will look for solutions of a generalization of (1) in the viscosity sense. We refer to [4] for an introduction to this theory and its close link to problems in optimal control. Given this connection it should come as no surprise that our approach is also heavily based on optimal control methods. In fact our procedure can be viewed as an extension of [7], where the equivalence of asymptotic null-controllability and the existence of a control Lyapunov function has been proved using basically the same idea. Here we take this approach a step further and obtain a characterization of control Lyapunov functions as unique viscosity solutions of a suitable PDE. In the following Section 2 we introduce the problem, define the domain of asymptotic null-controllability and show some first properties of it. The ensuing Section 3 is devoted to the proof that a certain class of control Lyapunov functions is characterized as viscosity solutions of a partial differential equation. We discuss some open problems in the concluding Section 4.
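As a quick illustration of the classical equation (the system and the choice of h here are ours, not from the text), take the scalar system ẋ = -x with h(x) = x². The ansatz v = 1 - e^{-W} turns Zubov's equation Dv(x)f(x) = -h(x)(1 - v(x)) into an equation for W that can be solved explicitly:

```latex
v(x) = 1 - e^{-W(x)}, \qquad v'(x)\,f(x) = -h(x)\,\bigl(1 - v(x)\bigr),
\qquad f(x) = -x,\; h(x) = x^2
\;\Longrightarrow\; -x\,W'(x)\,e^{-W(x)} = -x^2\,e^{-W(x)}
\;\Longrightarrow\; W'(x) = x,\qquad W(x) = \tfrac{x^2}{2}.
```

Hence v(x) = 1 - e^{-x²/2} solves Zubov's equation for this example, and v^{-1}([0, 1)) = R recovers the domain of attraction, which here is the whole space since the system is globally asymptotically stable.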
We consider a control system of the form

ẋ(t) = f(x(t), u(t)),    (2)

where u(·) ∈ U := L^∞([0, +∞), U) and U is a compact subset of R^m; f is continuous and bounded on R^n × U and Lipschitz in x with a Lipschitz constant independent of u ∈ U. Solutions of this system are denoted by x(t, x0, u). We will assume that 0 ∈ U and that x = 0 is a fixed point under the control u ≡ 0, that is, f(0, 0) = 0. We assume that the point 0 is uniformly locally asymptotically null-controllable (ULAM) for the system (2), i.e. there exist a constant r > 0 and a function β of class KL such that for any x0 ∈ B(0, r) there exists a u ∈ U such that ‖x(t, x0, u)‖ ≤ β(‖x0‖, t) for all t ≥ 0. It is known [8] that for any β ∈ KL there exist two functions α1, α2 ∈ K∞ such that β(r, t) ≤ α2(α1(r) e^{-t}). For ease of presentation we will work with these two functions from now on.
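To make the setting concrete, the following sketch simulates a simple instance of (2). All concrete choices are illustrative assumptions of ours, not from the text: the scalar system f(x, u) = x + u with U = [-1, 1] (note that this f is not bounded, so it only illustrates the local picture; the origin is asymptotically null-controllable exactly from the interval (-1, 1)), the feedback u = -sign(x), and a plain Euler discretization.

```python
def f(x, u):
    # Illustrative scalar dynamics dx/dt = x + u with control values u in [-1, 1];
    # the origin is asymptotically null-controllable exactly for |x0| < 1.
    return x + u

def simulate(x0, T=10.0, dt=1e-3):
    """Euler integration of dx/dt = f(x, u) under the feedback u = -sign(x)."""
    x = x0
    for _ in range(int(T / dt)):
        u = -1.0 if x > 0 else (1.0 if x < 0 else 0.0)
        x += dt * f(x, u)
    return x

print(simulate(0.5))   # chatters in a small neighborhood of the origin
print(simulate(1.5))   # no admissible control can stop the drift: x grows
```

For |x0| < 1 the control dominates the drift and the state reaches a neighborhood of the origin in finite time; for |x0| > 1 every admissible control is dominated by the drift, so such points lie outside the domain of asymptotic null-controllability.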
The following definition specifies the set of initial conditions that may be asymptotically steered to zero.
Definition 2.1 Assume that (2) is ULAM. The domain of asymptotic null-controllability is defined by

D := {x0 ∈ R^n : there exists u ∈ U such that x(t, x0, u) → 0 as t → ∞}.

Let D ⊂ R^n be the domain of asymptotic null-controllability of (2). A function V : R^n → R satisfying V(0) = 0 and V(x) > 0 for x ≠ 0 is called a control Lyapunov function on D if it is proper on D and there exists a positive definite function W : R^n → R≥0 with

max_{ξ ∈ ∂P V(x)} min_{u ∈ U} ⟨ξ, f(x, u)⟩ ≤ -W(x),

where ∂P V(x) denotes the proximal subgradient of V at x, that is, the set of vectors ξ ∈ R^n such that there exist ε, σ > 0 with

V(y) ≥ V(x) + ⟨ξ, y - x⟩ - σ‖y - x‖²  for all y ∈ B(x, ε).

In order to obtain a different characterization of D we introduce the following "first hitting time" defined by t(x, u) := inf{T > 0 : x(t, x, u) ∈ B(0, r) for all t ≥ T}, where we set inf ∅ = ∞.
Lemma 2.2 Assume that (2) is ULAM. Then

D = {x ∈ R^n : t(x, u) < ∞ for some u ∈ U}.

Proof: This is immediate from Definition 2.1.
In the following proposition we present some relevant properties of the set D. It will frequently be convenient to consider the reachable set at time T from an initial condition x0 ∈ R^n defined by

R(x0, T) := {x(T, x0, u) : u ∈ U}.

Recall that a set M is called weakly forward invariant for system (2) if for all x ∈ M there exists a u ∈ U such that x(t, x, u) ∈ M for all t ≥ 0.

Proposition 2.3 Assume that (2) is ULAM. Then:
(i) cl B(0, r) ⊂ D;
(ii) D is open, connected and weakly forward invariant;
(iii) if {x_n} ⊂ D is a sequence with x_n → x0 ∈ ∂D or with ‖x_n‖ → ∞, then inf_{u∈U} t(x_n, u) → ∞;
(iv) cl D is weakly forward invariant.

Proof: (i) Assume that for some x ∈ ∂B(0, r) we have x ∉ D. Let {x_n} ⊂ B(0, r) be a sequence with lim_{n→∞} x_n = x. By assumption, to each x_n there exists a control u_n ∈ U such that ‖x(t, x_n, u_n)‖ ≤ α2(α1(‖x_n‖) e^{-t}), and along a subsequence these trajectories converge to a limit function y. By construction ‖y(t)‖ ≤ α2(α1(r) e^{-t}), so that y(t) ∈ B(0, r/2) for some t large enough. Now by [1, Theorem 2.4.2] there is a sequence x(·, x, v_n) for some controls v_n ∈ U converging uniformly to y. It follows that x(t, x, v_n) ∈ B(0, r) for some n large enough, which shows that x is asymptotically null-controllable.
(ii) Let x0 ∈ D. Then there exist T ∈ R and u ∈ U such that x(T, x0, u) ∈ B(0, r). By continuous dependence on the initial value, a neighborhood of x0 is mapped into B(0, r) under the map y ↦ x(T, y, u), and it follows that D is open. Furthermore, by definition from each x ∈ D there exists a trajectory x(·, x, u) entering B(0, r); since every point on such a trajectory is itself asymptotically null-controllable, the trajectory remains in D, which shows connectedness. Weak invariance is obvious from the definition.
(iii) Let u_n ∈ U and T_n > 0 be such that x(T_n, x_n, u_n) ∈ B(0, r). If we assume that {T_n} is bounded, we can find T such that for any n there is a control u_n with x(T, x_n, u_n) ∈ B(0, r). Now we can argue as in (i) to construct a solution from x0 into B(0, r), which contradicts the assumption that x0 ∉ D. The assertion is clear for ‖x_n‖ → ∞, as our assumptions exclude solutions exploding in backward time.
(iv) It is sufficient to show that for every x ∈ cl D and all T > 0 there exists a u ∈ U such that x(T, x, u) ∈ cl D; the claim then follows by concatenation. Assume this is not the case for some x0 ∈ cl D and T > 0, so that R(x0, T) ∩ cl D = ∅. As R(x0, T) is compact by [1, Theorem 2.2.1], it follows that there is a neighborhood V of R(x0, T) with V ∩ cl D = ∅. Now by continuous dependence on initial conditions a neighborhood of x0 is mapped into V by the map y ↦ x(T, y, u) for any u ∈ U. This contradicts weak invariance of D.

Zubov's method for domains of asymptotic null controllability
It is our aim to show that some control Lyapunov functions may be characterized as viscosity solutions and that the set D may be characterized with the help of these functions. Before turning to this problem we introduce two optimal value functions and show certain properties of these functions. This will enable us to use standard techniques from the theory of Hamilton-Jacobi-Bellman equations in the proofs.
Consider the following nonnegative, extended-valued function V : R^n → R ∪ {+∞} given by the optimal control problem

V(x0) := inf_{u∈U} J(x0, u),    J(x0, u) := ∫_0^∞ g(x(t, x0, u), u(t)) dt.    (3)

We will also consider its transformation via the Kruzkov transform

v(x) := 1 - e^{-V(x)}.    (4)

The function g : R^n × U → R≥0 is assumed to be continuous and satisfies furthermore

(A iii) For every R > 0 there exists a constant L_R such that |g(x, u) - g(y, u)| ≤ L_R ‖x - y‖ for all ‖x‖, ‖y‖ ≤ R and all u ∈ U.
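A trajectory-wise approximation of (3) and (4) can be sketched as follows; the system f(x, u) = x + u, the cost g(x, u) = x², the feedback u = -sign(x), and all numerical parameters are illustrative assumptions of ours, not from the text. A null-controllable initial condition yields a finite cost and hence v < 1, while a diverging cost is mapped by the Kruzkov transform to v ≈ 1.

```python
import math

def cost_and_transform(x0, T=10.0, dt=1e-3):
    """Approximate J(x0, u) for the particular feedback u = -sign(x) (an upper
    bound for the infimum over U) by Euler discretization, then apply the
    Kruzkov transform v = 1 - exp(-V)."""
    x, J = x0, 0.0
    for _ in range(int(T / dt)):
        u = -1.0 if x > 0 else (1.0 if x < 0 else 0.0)
        J += dt * x * x          # running cost g(x, u) = x^2
        x += dt * (x + u)        # Euler step of dx/dt = f(x, u) = x + u
    return J, 1.0 - math.exp(-J)

V1, v1 = cost_and_transform(0.5)   # inside the controllability domain
V2, v2 = cost_and_transform(1.5)   # outside: the cost blows up, v2 -> 1
print(V1, v1)
print(V2, v2)
```

Note how the transform maps the unbounded range of V onto [0, 1], which is what makes the characterization of D via sublevel sets of v convenient.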
Since g is nonnegative, it is immediate that V(x) ≥ 0 and v(x) ∈ [0, 1] for all x ∈ R^n. Furthermore, standard techniques from optimal control (see e.g. [4, Chapter III]) imply that V and v satisfy the dynamic programming principle, i.e. for each t > 0 we have

V(x) = inf_{u∈U} { ∫_0^t g(x(s, x, u), u(s)) ds + V(x(t, x, u)) }    (5)

and

v(x) = inf_{u∈U} { 1 - G(x, t, u)(1 - v(x(t, x, u))) }    (6)

with

G(x, t, u) := exp( -∫_0^t g(x(s, x, u), u(s)) ds ).    (7)

In the next proposition we investigate the relation between D and V (and thus also v), and the continuity of V and v.
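The dynamic programming principle for v suggests a simple fixed-point scheme (in the spirit of a semi-Lagrangian discretization; every concrete choice below, f(x, u) = x + u, g(x, u) = x², the finite control set, the grid, and the clamping to v = 1 outside the computational window, is an illustrative assumption of ours): iterate v(x) ← min_u { 1 - e^{-h g(x,u)} (1 - v(x + h f(x, u))) }.

```python
import math

h = 0.05                                    # time step of the scheme
xs = [i * 0.02 - 2.0 for i in range(201)]   # spatial grid on [-2, 2]
v = [0.0 if abs(x) < 1e-9 else 1.0 for x in xs]   # v = 0 at the origin, 1 elsewhere

def interp(vals, x):
    """Piecewise-linear interpolation of grid values, clamped to 1 outside."""
    if x <= xs[0] or x >= xs[-1]:
        return 1.0
    i = min(int((x - xs[0]) / 0.02), 199)
    t = (x - xs[i]) / 0.02
    return (1.0 - t) * vals[i] + t * vals[i + 1]

for _ in range(500):   # value iteration toward the fixed point of the DPP
    v = [min(1.0 - math.exp(-h * x * x) * (1.0 - interp(v, x + h * (x + u)))
             for u in (-1.0, 0.0, 1.0))
         for x in xs]

# Inside the domain of null-controllability (|x| < 1) the iteration settles
# below 1; outside, it is driven to 1.
print(v[100], v[125], v[175])   # grid points x = 0, x = 0.5, x = 1.5
```

The Jacobi-style update (the whole grid is recomputed from the previous iterate) keeps the sketch short; in practice Gauss-Seidel sweeps or adaptive grids converge faster.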

Proposition 3.1 Assume that (2) is ULAM and that g satisfies (A i)-(A iii). Then:
(i) V(x) < +∞ if and only if x ∈ D; equivalently, v(x) < 1 if and only if x ∈ D.
(ii) V(0) = 0 and v(0) = 0.
(iii) V is continuous on D and v is continuous on R^n.
(iv) V(x) → +∞ and v(x) → 1 for x → x0 ∈ ∂D.

Proof:
(i) To show that V(x0) < +∞ for x0 ∈ D, observe that by Lemma 2.2 for each x0 ∈ D there exist T0 > 0 and u ∈ U such that x(t, x0, u) ∈ B(0, r) for all t ≥ T0. Thus J(x0, u) is bounded, and therefore V(x0) < +∞. Now let x0 ∉ D. Then for any u ∈ U we have g(x(t, x0, u), u(t)) ≥ g0, where g0 > 0 is defined as in (A ii). It follows that J(x0, u) = ∞ and hence V(x0) = +∞. Now the assertion for v is immediate.
(ii) This follows immediately from (3) and (A i).

(iii) Local asymptotic null-controllability and (A i) imply that for a suitable constant C > 0 and each x ∈ B(0, r) there is a u0 ∈ U with J(x, u0) ≤ Cα1(‖x‖), so that V(x) ≤ Cα1(‖x‖). Now fix ε > 0 and r* < α1^{-1}(α2^{-1}(r)) ≤ r such that Cα1(r*) < ε. Let x ∈ D be arbitrary. By assumption there are u ∈ U and T > 0 such that ‖x(T, x, u)‖ ≤ r*/2 and V(x) + ε > J(x, u). By continuous dependence on the initial value there is a neighborhood W ⊂ D of x such that ‖x(T, y, u)‖ ≤ r* for all y ∈ W. We may assume that cl W ⊂ D is compact, whence also R := cl ∪_{x∈W, t∈[0,T*]} R(x, t) is compact. This implies in particular that V(y) ≤ T max{g(x, u) : x ∈ R, u ∈ U} + ε for all y ∈ W. From the boundedness of V on W and the fact that g is bounded away from zero on R^n \ B(0, r*/2), it follows that there is a T* such that whenever y ∈ W and V(y) + ε > J(y, u) we have x(t, y, u) ∈ B(0, r*/2) for all t > T*. We may now choose Lipschitz constants L_f, L_g for f and g on R, respectively. Let y, z ∈ W and u be such that V(y) + ε > J(y, u). Then we obtain a corresponding estimate for V(z): if ‖z - y‖ is small enough then x(T*, z, u) ∈ B(0, r*), so that V(x(T*, z, u)) < Cα1(r*), and also the integral term can be bounded by ε, so that the whole expression is bounded by 4ε. As this condition is symmetric in y and z, this shows continuity of V. The function v is then continuous by definition.
(iv) The statement follows immediately from Proposition 2.3 (iii), since D is open and g(x, u) ≥ g0 > 0 for x outside of cl B(0, r) as assumed in (A ii).
We now turn to the formulation of suitable partial differential equations for which V and v form solutions. Since in general these functions will not be differentiable, we have to work with a more general solution concept, namely viscosity solutions.
Let us recall the definition of viscosity solutions (for more details about this theory we refer to [4]). Consider a partial differential equation of the form

H(x, u(x), Du(x)) = 0, x ∈ O,    (8)

for an open set O ⊂ R^n and a continuous function H. An upper semicontinuous function u : O → R is a viscosity subsolution of (8) if for every φ ∈ C^1(O) and every local maximum x0 ∈ O of u - φ we have H(x0, u(x0), Dφ(x0)) ≤ 0; a lower semicontinuous function u is a viscosity supersolution of (8) if at every local minimum x0 of u - φ we have H(x0, u(x0), Dφ(x0)) ≥ 0. A continuous function u : O → R is said to be a viscosity solution of (8) if u is both a viscosity supersolution and a viscosity subsolution of (8).
Recalling that V is locally bounded in D, and v is locally bounded on R^n, the following proposition follows from an easy application of the dynamic programming principles (5) and (6), cp. [4, Chapter III].
Proposition 3.3 The function V is a viscosity solution of

sup_{u∈U} { -DV(x) f(x, u) - g(x, u) } = 0    (9)

on D, and v is a viscosity solution of

sup_{u∈U} { -Dv(x) f(x, u) - (1 - v(x)) g(x, u) } = 0    (10)

on R^n. Observe that (10) is the straightforward generalization of the classical Zubov equation (1) introduced in [12]. Also our "auxiliary function" V can be characterized as the solution of a suitable PDE. It might be considered more natural to write inf_{u∈U} { Dv(x) f(x, u) + (1 - v(x)) g(x, u) } = 0; however, this would be in disaccord with the standard way of defining sub- and supersolutions. We note the following immediate consequence.
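For a pointwise sanity check of a Zubov-type equation one can evaluate the Hamiltonian at a case with a known closed-form solution. With the illustrative uncontrolled choice U = {0}, f(x) = -x and g(x) = x² (our choices, not from the text), the equation reduces to the classical one, which is solved exactly by v(x) = 1 - e^{-x²/2}, so the residual vanishes identically:

```python
import math

def residual(x):
    """Hamiltonian sup_u { -Dv(x) f(x,u) - (1 - v(x)) g(x,u) } evaluated at
    v(x) = 1 - exp(-x^2/2) for the illustrative uncontrolled case
    U = {0}, f(x) = -x, g(x) = x^2."""
    v = 1.0 - math.exp(-x * x / 2.0)
    dv = x * math.exp(-x * x / 2.0)   # v'(x)
    return -dv * (-x) - (1.0 - v) * x * x

for x in (-1.0, 0.3, 2.0):
    print(abs(residual(x)))   # vanishes up to rounding error
```

The same pointwise check is a cheap diagnostic for grid approximations of v away from points where the numerical solution is non-differentiable.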

Corollary 3.4
The functions V, v are control Lyapunov functions on D.

Proof:
The equations (9) and (10) already incorporate the definition of the function W that is required in the definition of control Lyapunov functions. Properness on D follows from Proposition 2.3 (iv). The condition on the proximal subgradients follows from the properties of supersolutions.
In order to prove uniqueness of the solutions we need the following optimality principles. The statement is an immediate consequence of [11, Theorem 3.2 (i) & (iii)].

Proposition 3.5 (i) Let w be a u.s.c. subsolution of (10) on R^n; then for any x ∈ R^n and any t > 0

w(x) ≤ inf_{u∈U} { 1 - G(x, t, u)(1 - w(x(t, x, u))) }.    (11)

(ii) Let w be a u.s.c. subsolution of (10) in D; then for any x ∈ D the estimate (11) holds for all t smaller than the exit time of the corresponding trajectories from D.

(iii) Let w be a l.s.c. supersolution of (10) on R^n; then for any x ∈ R^n

w(x) ≥ inf_{u∈U} sup_{t≥0} { 1 - G(x, t, u)(1 - w(x(t, x, u))) }.    (13)

We can now apply these principles to the generalized version of Zubov's equation (10).

Proposition 3.6 Let w be a bounded l.s.c. supersolution of (10) on R^n with w(0) ≥ 0. Then w ≥ v for v as defined in (4).

Proof:
Let M > 0 be a bound on |w|. If for some control u and some η > 0 it holds that ‖x(t, x0, u)‖ ≥ η for all t ≥ 0, then it follows from assumption (H1) that G(x0, t, u) ≤ exp(-t g_η) for a suitable constant g_η > 0. This implies

1 - G(x0, t, u)(1 - w(x(t, x0, u))) ≥ 1 - (1 + M) exp(-t g_η) → 1 for t → ∞.

Hence, if for some x0 ∈ R^n the infimum in (13) is approximated via trajectories that are bounded away from the origin, then w(x0) ≥ 1. In particular, this is the case if x0 ∉ D, by the forward invariance of D^c. Hence, using Proposition 3.1 (i), we have w(x0) ≥ 1 = v(x0) for x0 ∈ D^c. If x0 ∈ D and w(x0) ≥ 1, then again Proposition 3.1 (i) implies the assertion, so that it remains to show the claim under the conditions x0 ∈ D and w(x0) < 1. Fix (1 - w(x0))/2 > ε > 0 and choose u_ε realizing the infimum in (13) up to ε. This implies in particular that x(t, x0, u_ε) is not bounded away from the origin. Now observe that the lower semicontinuity of w and the assumption w(0) ≥ 0 imply that there exists a δ > 0 such that

w(x) ≥ -ε for all ‖x‖ ≤ δ.    (14)

Hence there exists a sequence t_n → ∞ such that for all n ∈ N we have ‖x(t_n, x0, u_ε)‖ ≤ δ and thus w(x(t_n, x0, u_ε)) ≥ -ε. Thus from (14) and (13), and using the definition of v, we can conclude that w(x0) ≥ v(x0) - Cε for a constant C > 0 independent of ε, which shows the claim, as ε > 0 is arbitrary.
Proposition 3.7 Let w be a bounded u.s.c. subsolution of (10) on R^n with w(0) ≤ 0. Then w ≤ v for v defined in (4).
(ii) x0 ∉ D: In this case, by Proposition 3.1 (i) it is sufficient to show that w(x0) ≤ 1. Let M be a bound on |w|. Since x(t, x0, u) ∉ D for all u ∈ U and all t ≥ 0, we have

w(x0) ≤ 1 - G(x0, t, u)(1 - w(x(t, x0, u))) ≤ 1 + (1 + M) exp(-g0 t),

and the result follows by (11) as the right-hand side tends to 1 for t → ∞.
Using these propositions we can now formulate an existence and uniqueness theorem for the generalized version of Zubov's equation (10).
Theorem 3.8 Consider the system (2) and a function g : R^n × U → R such that (H1) and (H2) are satisfied. Then (10) has a unique bounded and continuous viscosity solution v on R^n satisfying v(0) = 0.
This function coincides with v from (4). In particular, the characterization D = {x ∈ R^n : v(x) < 1} holds.

Proof: This is immediate from Propositions 3.6 and 3.7.
For the sake of completeness we state the analogous result for equation (9), which is proved with the same techniques, using the obvious modifications of (11) and (13). Observe that this result corresponds to the one in [5].
Theorem 3.9 Consider the system (2) and a function g : R^n × U → R. Assume (H1) and (H2). Let O ⊂ R^n be an open set containing the origin, and let P : O → R be a positive and continuous function which is a viscosity solution of (9) on O and satisfies P(0) = 0 and P(x) → ∞ for x → ∂O and for ‖x‖ → ∞.
Then P coincides with V from (3) and O = D. In particular, the function V from (3) is the unique positive continuous viscosity solution of equation (9) on D with V(0) = 0.
As in [2] it can be shown that we can restrict ourselves to a proper open subset O of the state space and still obtain our solution v, provided D ⊆ O. We omit the discussion of this aspect for reasons of space.
Finally, we return to the point that the function v is a control Lyapunov function for the system (2). In particular, each sublevel set of v is weakly forward invariant.
The following bound may be obtained for the amount of decrease that can be achieved in terms of v. This immediately yields the assertion.
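The decrease of v along suitably chosen trajectories can also be observed numerically. The sketch below uses the closed-form solution v(x) = 1 - e^{-x²/2} of the uncontrolled model problem ẋ = -x (an illustrative choice of ours, not from the text): v is non-increasing along the solution, so every sublevel set {v ≤ c} is forward invariant for this trajectory.

```python
import math

def v(x):
    # Zubov function of the illustrative system dx/dt = -x with g(x) = x^2
    return 1.0 - math.exp(-x * x / 2.0)

def v_along_trajectory(x0, T=5.0, dt=1e-3):
    """Sample v(x(t)) along the Euler-discretized solution of dx/dt = -x."""
    x, vals = x0, []
    for _ in range(int(T / dt)):
        vals.append(v(x))
        x += dt * (-x)
    return vals

vals = v_along_trajectory(1.5)
# v decreases monotonically, so the trajectory never leaves {v <= v(x0)}
print(vals[0] >= max(vals))   # → True
```

In the controlled setting the same check applies once a control realizing the decrease has been selected, e.g. from the minimization in the discretized dynamic programming principle.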

Conclusions
This paper contains some first results on the relation between control Lyapunov functions and viscosity solutions, but the story seems to be far from complete. The experienced reader will have noticed that we avoided any problems with the boundedness of control functions by assuming that U is compact. The standard definition allows for non-compact U and requires in addition that on each compact subset of the state space it is sufficient to choose control values from a compact set in order to drive the system to zero. In terms of the Zubov equation this means that large values of u should be penalized. The construction then becomes a little more involved. That we also required f to be bounded, on the other hand, is no big drawback, as one might always replace f by f/(1 + ‖f‖) without changing the system's trajectories, though of course other points of interest in design are lost. A further question is which subset of control Lyapunov functions may be realized as solutions of a Zubov equation by an appropriate choice of g. This is the subject of ongoing research.