CONTROL LYAPUNOV FUNCTIONS AND ZUBOV’S METHOD

For finite dimensional nonlinear control systems we study the relation between asymptotic null-controllability and control Lyapunov functions. It is shown that control Lyapunov functions may be constructed on the domain of asymptotic null-controllability as viscosity solutions of a first order PDE that generalizes Zubov’s equation. The solution is also given as the value function of an optimal control problem from which several regularity results may be obtained.

1. Introduction. We consider finite-dimensional control systems of the form ẋ(t) = f(x(t), u(t)), (1.1) where x ∈ R^n denotes the state, u ∈ R^m denotes the input, and f is sufficiently regular with f(0, 0) = 0. We call a point x_0 ∈ R^n asymptotically controllable to 0 if there exists a measurable, essentially bounded function u_0 : R_+ → R^m such that the corresponding solution ϕ(t, x_0, u_0) of (1.1) satisfies ϕ(t, x_0, u_0) → 0 as t → ∞. The domain of asymptotic null-controllability is the collection of all points that are asymptotically controllable to 0. The main results of this paper are twofold: on the one hand we provide a converse theorem for maximal control Lyapunov functions (CLFs) on the domain of asymptotic null-controllability; on the other hand we consider a generalized Zubov equation and prove that the maximal control Lyapunov function is its unique viscosity solution. Thus, beyond the proof of existence of CLFs, a way to their numerical generation is provided. The construction of CLFs in this paper relies on optimal control methods as they are frequently used in Lyapunov theory. One of the contributions of the paper is to present easily checkable conditions on the running cost that result in an appropriate CLF and give rise to a tractable Hamilton-Jacobi equation.
Converse theorems play a fundamental role in Lyapunov theory, as they state that certain stability properties imply the existence of a Lyapunov function. The direct implication, that the existence of a Lyapunov function implies a stability property, is usually much easier to prove. Early converse results were obtained by Persidskii (see the discussion in [20, Chapter VI]), Massera [26] and Kurcveȋl [21]. In recent times these results have been extended in several directions to cover perturbed systems and differential inclusions [23, 12, 37, 7].
While for linear systems a constructive procedure to find Lyapunov functions was already given by Lyapunov, the first general constructive procedure was obtained by Zubov [39]. Namely, a Lyapunov function on the domain of attraction of an asymptotically stable fixed point x^* ∈ R^n of the system ẋ(t) = f(x(t)), t ∈ R, x ∈ R^n, may be found by solving the first-order PDE

Dv(x) f(x) = −h(x)(1 − v(x)),

called Zubov's equation, under the condition that v(0) = 0. Here h is an auxiliary function, see [39, 20] for details. This method has recently been extended by the authors to the case of perturbed systems, see [9], where also a discussion of the impact of Zubov's result may be found. Further constructive approaches, valid for C^2 systems and based on approximations by radial basis functions, respectively on linear programming methods, have recently been described in [18, 19]. While for (perturbed) ordinary differential equations the property of interest is stability, for systems with control inputs a basic question concerns the existence of control functions steering the system to a desired target. In contrast to the case of asymptotically stable fixed points of ordinary differential equations, for which smooth Lyapunov functions always exist, it is not reasonable to require too much regularity of Lyapunov functions for controllability questions for systems of the form (1.1). For this reason it is now standard to formulate the concept of a control Lyapunov function in nondifferential terms. Recall that a function V : R^n → R is called positive definite if V(x) ≥ 0 for all x ∈ R^n and V(x) = 0 iff x = 0. The function V is proper if preimages of compact sets are compact. A positive definite, proper function V is called a control Lyapunov function (CLF) for (1.1) if there is a positive definite function W such that for every compact set X ⊂ R^n there is a compact set U_X of control values so that V is a continuous viscosity supersolution of

max_{u ∈ U_X} −DV(x) f(x, u) ≥ W(x), x ∈ X.

For the definition of viscosity solutions we refer to [3]. In many articles control Lyapunov functions are defined in terms of proximal subgradients of V, but the two notions are in fact equivalent, [10]. The theory of control Lyapunov functions has received widespread attention in recent years, in particular in connection with the design of stabilizing feedbacks. While design techniques using Lyapunov functions have been popular in applied control theory for a long time, the systematic study of converse theorems for control Lyapunov functions only started with Artstein [1], who proved for systems affine in the control u that the existence of a smooth CLF is equivalent to stabilizability by continuous state feedback. For general systems of the form (1.1) the existence of a global continuous CLF is equivalent to global asymptotic null-controllability [31]. Interestingly, the existence of a differentiable CLF is equivalent to the existence of (discontinuous) stabilizing feedbacks that are robust with respect to perturbations in the measurement of the state, [22]. In general, asymptotic null-controllability does not imply the existence of a continuous stabilizing feedback, as there may be topological obstructions to this which even carry over to the case of upper semicontinuous set-valued feedbacks, [8, 13, 29]. For this reason discontinuous feedbacks and associated solution concepts have been one of the focal points of the research on CLFs in recent times, starting with [11]. In this context it has been shown by Clarke et al. [10] and Rifford [27, 28], using tools from nonsmooth analysis, that semiconcavity of the CLF is an essential tool in order to establish the existence of feedbacks with nice properties.
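To fix ideas, the classical Zubov equation Dv(x) f(x) = −h(x)(1 − v(x)) with v(0) = 0 (in the unperturbed form used, e.g., in [9]) can be checked on a textbook one-dimensional example; the system, the auxiliary function h and the candidate solution below are illustrative choices of ours, not taken from this paper.

```python
import numpy as np

# Toy example (illustrative): xdot = f(x) = -x + x^3 has the domain of
# attraction D = (-1, 1).  With the auxiliary function h(x) = 2 x^2 the
# function v(x) = x^2 solves Zubov's equation
#     Dv(x) f(x) = -h(x) (1 - v(x)),   v(0) = 0,
# and v tends to 1 at the boundary of D.
f = lambda x: -x + x**3
h = lambda x: 2 * x**2
v = lambda x: x**2
dv = lambda x: 2 * x            # derivative of v

xs = np.linspace(-0.99, 0.99, 401)
residual = dv(xs) * f(xs) + h(xs) * (1 - v(xs))
assert np.max(np.abs(residual)) < 1e-12   # the PDE holds on D
```

Note that v is maximal in Zubov's sense: v(x) → 1 exactly as x approaches the boundary of the domain of attraction.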
Usually, knowledge of a CLF requires a certain structure of the control system, and a general procedure for its determination is not available. Constructive approaches have therefore received widespread attention in the literature, most notably the techniques known as backstepping and forwarding [17, 30], which, however, rely heavily on the differentiability of the CLF that is obtained. In this article we aim to derive a constructive approach by going back to the original ideas for the construction of control Lyapunov functions. Here "constructive" is to be understood in the sense that we determine a class of PDEs which have unique solutions in the viscosity sense that are maximal control Lyapunov functions on the domain of asymptotic null-controllability.
It is a classical approach to regard CLFs as solutions of steady-state Hamilton-Jacobi (HJ) equations. In the uncontrolled case this may be regarded as one of the central elements of the work of Zubov [20]. In [16] the connection between smooth CLFs and HJ equations has been studied in detail. In particular, it is shown in that paper that smooth CLFs may always be interpreted as value functions of an appropriate optimal control problem. This "inverse optimality" property can be exploited in several ways [17]. In a different approach, in [15] a CLF was obtained by truncating series expansions of analytic solutions of HJ equations, in an approach very similar to early studies around Zubov's equation.
In the present paper we use ideas from [9] where, for the case of a perturbed system, the classical Zubov method was reinterpreted using a suitable notion of weak solution. For controlled or perturbed systems Zubov's equation becomes a nonlinear first-order PDE of Hamilton-Jacobi type, and it is well known that this class of equations does not admit classical solutions in general. Therefore a suitable concept of weak solution has to be introduced, and that of viscosity solution seems to be appropriate, see [9, 25]. In the construction of the corresponding result for CLFs several additional technical obstacles have to be overcome, which stem from the possibility of solutions with finite escape time and from the unboundedness of the control set, both of which pose no problem in the perturbed case. To this end reparametrization techniques are used, an idea which was introduced in [5] and has since been applied by various authors.
A problem similar to the CLF construction in this paper has been studied in [24], in which optimal control and viscosity methods are used in relation to the problem of steering the state of a system to a prescribed target. This leads to an optimal control problem with positive but vanishing Lagrangian, see [25, 24] for further references. In [24] a "small-time controllability" property is used, which requires in particular that the target can be reached exactly in finite time starting from small neighborhoods of it. In the general context of control Lyapunov functions this reachability property is undesirable, so that we have to apply different arguments.
We use this generalization of Zubov's method to construct a CLF for a finite-dimensional nonlinear control system that is asymptotically null-controllable in a neighborhood of the origin. Our aim is to characterize a CLF (i) as the optimal value function of a suitable control problem and (ii) as the unique viscosity solution of a suitable HJ equation which generalizes Zubov's equation.
Concerning the first point, i.e. the connection between CLFs and optimal control problems, our procedure can be viewed as an extension of [31], where the equivalence between asymptotic null-controllability and the existence of a CLF was proved using an optimal control approach. The significant advantage of characterizing a CLF as the unique viscosity solution of the generalized Zubov equation is that this characterization can be used as the basis for its numerical approximation.
From the point of view of the PDE approach, the equation presents some difficulties when attacked with the standard theory of viscosity solutions because of the unbounded control set; see [4, 5, 14, 36, 35] for some related papers. In the proof of the necessary comparison result we use local asymptotic controllability to obtain a local comparison result in a neighborhood of the origin. We then extend the comparison result to all of R^n taking advantage, as in the classical Zubov method, of the freedom in the choice of the cost function of the associated control problem. For this reason we can make rather general assumptions on the dependence of the dynamics on the control variable, compensating for them by an appropriate choice of the cost. An example of an explicit construction of a cost function satisfying all requirements is provided.
The comparison result extends results in [36,35] at the price of studying a much more specific situation.The main difference is that in the setting studied in this paper a uniqueness result is obtained.
We proceed as follows: in the ensuing Section 2 the class of systems under consideration is defined and we prove some preliminary results. In Section 3 the optimal control problem that characterizes the domain of asymptotic null-controllability is introduced, and it is shown that under suitable conditions the corresponding value function is continuous, positive definite and proper on the domain of asymptotic null-controllability. In Section 4 we show that the value function of the optimal control problem is the unique viscosity solution of the generalized Zubov equation. In Section 5 we discuss an approximation of the problem with unbounded control set by a sequence of problems with bounded control sets. In the last section we discuss the necessity of our assumptions by means of a few examples. It is also shown that for the classical linear quadratic control problem the general equations of this paper reduce to the standard algebraic Riccati equation.
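As a preview of the linear-quadratic case mentioned above, the following sketch (with system matrices chosen by us for illustration, and an initial stabilizing feedback K assumed) computes the value function V(x) = xᵀPx for ẋ = Ax + Bu with running cost g(x, u) = xᵀQx + uᵀRu by solving the algebraic Riccati equation AᵀP + PA − PBR⁻¹BᵀP + Q = 0 with a Kleinman-Newton iteration.

```python
import numpy as np

# Linear-quadratic illustration (toy data, not from the paper): for
# xdot = Ax + Bu, g(x,u) = x'Qx + u'Ru, the value function is
# V(x) = x'Px with P solving the algebraic Riccati equation
#     A'P + PA - P B R^{-1} B' P + Q = 0,
# the form to which the generalized Zubov equation reduces.
A = np.array([[0., 1.], [0., 0.]])   # double integrator
B = np.array([[0.], [1.]])
Q = np.eye(2)
R = np.array([[1.]])

def lyap(Acl, M):
    # solve the Lyapunov equation Acl' P + P Acl = -M via vectorization
    n = Acl.shape[0]
    L = np.kron(np.eye(n), Acl.T) + np.kron(Acl.T, np.eye(n))
    return np.linalg.solve(L, -M.flatten(order="F")).reshape((n, n), order="F")

K = np.array([[1., 1.]])             # assumed initial stabilizing feedback
for _ in range(30):                  # Kleinman-Newton iteration
    Acl = A - B @ K
    P = lyap(Acl, Q + K.T @ R @ K)
    K = np.linalg.solve(R, B.T @ P)

residual = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T) @ P + Q
assert np.max(np.abs(residual)) < 1e-8
```

For this example the exact solution is P = [[√3, 1], [1, √3]], which the iteration reproduces to machine precision.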
2. The domain of null controllability. We consider nonlinear control systems of the type ẋ(t) = f(x(t), u(t)), where f : R^n × U → R^n is continuous, U ⊂ R^m is a closed set, and the space U of admissible control functions consists of the measurable, locally essentially bounded functions u : [0, ∞) → U. Solutions corresponding to an initial value x and a control u ∈ U at time t are denoted by ϕ(t, x, u); they are defined on a maximal positive interval of definition [0, T_max(x, u)), where we do not exclude the case T_max(x, u) < ∞, i.e. that solutions explode. In the following the open ball of radius r around a point z ∈ R^p is denoted by B(z, r). Uniqueness of solutions is a consequence of our further standard assumptions on f. These are formulated using comparison functions. (H0) There exists an open ball B(0, r), a constant ū > 0, and β ∈ KL such that for any x ∈ B(0, r) there exists ... Remark 2.1. The Lipschitz assumption (H0) is weaker than the following assumption: ... (2.2) is used in many papers on viscosity solutions, in particular in [35, 36], whose results we will use later. In order to be able to use these results under the weaker assumption (H0) we define the map R : R^m → R^m by R(u) = γ^{−1}(‖u‖) u/‖u‖ and consider the vector field from (2.2). Hence, by applying the results from [35, 36] to this vector field, they immediately carry over to f under the weaker assumption (H0).
Property (H2) is a local asymptotic controllability property, which ensures that at least from a neighborhood of 0 the system may be steered to 0.
For certain systems it makes sense to strengthen the local asymptotic controllability property (H2) by requiring that u_x is not only bounded but also converges to 0 as t → ∞. In this case we can strengthen (H2) to the so-called small control property (H2'): there exists an open ball B(0, r) and β ∈ KL such that for any x ∈ B(0, r) there exists ... Note that (H2') implies (H2) with ū = β(r, 0). It is known [32] that for any β ∈ KL there exist two functions α_1, α_2 ∈ K∞ with β(r, t) ≤ α_2(α_1(r) e^{−t}) for all r ≥ 0, t ≥ 0. These functions can be computed from β; e.g., in the case of exponential convergence, i.e., β(r, t) = c e^{−σt} r for c, σ > 0, one obtains α_1(r) = r^{1/σ} and α_2(r) = c r^σ. Note that (H2) or (H2') immediately imply β(r, 0) ≥ r and thus ... We note for later use that by applying ... For ease of presentation we will work with these two functions from now on. Furthermore, we will from now on tacitly assume that ... We define the domain of null controllability by D_0 := {x ∈ R^n | there exists u ∈ U with ϕ(t, x, u) → 0 as t → ∞} and the first hitting time with respect to B(0, r) by t(x, u) := inf{t ≥ 0 | ϕ(t, x, u) ∈ B(0, r)}, with the convention inf ∅ = ∞. The following lemma shows how D_0 and t(x, u) are related.
Lemma 2.2. The set D_0 is given by D_0 = {x ∈ R^n | there exists u ∈ U with t(x, u) < ∞}. Proof. If we find u ∈ U with t(x, u) < ∞ then for some t_1 > t(x, u) we have ϕ(t_1, x, u) ∈ B(0, r) and we can concatenate u|_{[0, t_1]} with the control u_{ϕ(t_1, x, u)} from (H2), which implies ϕ(t, x, u) → 0. Hence we obtain ... For the formulation of the next result recall that a set M is called viable (or controlled invariant or weakly invariant) if for every x ∈ M there is a u ∈ U such that ϕ(t, x, u) ∈ M for all t ≥ 0 (see [2]). In the following the convex hull of a set M is denoted by co M.
Proposition 2.3. (i) cl B(0, r) ⊂ D_0, (ii) the set D_0 is open, connected and viable. Proof. (i): It is clear that B(0, r) ⊂ D_0. In order to show cl B(0, r) ⊂ D_0 pick x ∈ ∂B(0, r) and a sequence {x_n} ⊂ B(0, r) with lim_{n→∞} x_n = x. By assumption, for each x_n there exists a control ... This shows that on each compact interval the solutions are bounded uniformly in n. Since, furthermore, the u_n are uniformly bounded, by continuity we get lim_{n→∞} ϕ(t, x_n, u_n) = ϕ(t, x, u) ... (ii): To see openness, let x_0 ∈ D_0 and pick u ∈ U with t(x_0, u) < ∞. Then there exists T > 0 such that ϕ(T, x_0, u) ∈ B(0, r). By continuous dependence on the initial value we obtain ϕ(T, x, u) ∈ B(0, r) for all x in a neighborhood of x_0. Thus t(·, u) is finite on that neighborhood, which shows that it is contained in D_0. As x_0 was arbitrary, this shows the assertion.
Since for any x ∈ D 0 there exists a trajectory from x to B(0, r) we obtain that D 0 is connected.
In order to see viability, consider a point x ∈ D 0 and the trajectory ϕ(t, x, u) → 0. Clearly, each point x(t) = ϕ(t, x, u), t ≥ 0 can be controlled to the origin by the control u(t + •), thus x(t) ∈ D 0 and hence ϕ(t, x, u) ∈ D 0 for all t ≥ 0, i.e., D 0 is viable.
Remark 2.4. Note that the domain of null-controllability D_0 is in general not diffeomorphic to R^n. This is in contrast to the theory of domains of attraction of (perturbed) ordinary differential equations, i.e. of the set {x_0 ∈ R^n : ϕ(t, x_0, u) → 0 as t → +∞ for all u ∈ U}. In the case of asymptotically stable fixed points the domain of attraction is diffeomorphic to R^n, even for perturbed systems, see e.g. [9, 38].
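The factorization of class-KL functions quoted in Section 2 can be checked numerically in the exponential case; the constants c and σ below are arbitrary illustrative values.

```python
import numpy as np

# Numerical check of the factorization beta(r, t) = alpha2(alpha1(r) e^{-t})
# for the exponential case beta(r, t) = c * exp(-sigma * t) * r with
# alpha1(r) = r^{1/sigma} and alpha2(r) = c * r^sigma (c, sigma arbitrary).
c, sigma = 3.0, 0.5
beta   = lambda r, t: c * np.exp(-sigma * t) * r
alpha1 = lambda r: r ** (1.0 / sigma)
alpha2 = lambda r: c * r ** sigma

rs = np.linspace(0.01, 10.0, 50)
ts = np.linspace(0.0, 20.0, 50)
Rg, Tg = np.meshgrid(rs, ts)
err = np.abs(beta(Rg, Tg) - alpha2(alpha1(Rg) * np.exp(-Tg)))
assert np.max(err) < 1e-10   # exact up to floating-point rounding
```

In this exponential case the bound from [32] in fact holds with equality, which is what the check exploits.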
3. Characterization of D_0 using Optimal Control. In this section we describe how to characterize the domain of asymptotic null-controllability via an optimal control problem and show continuity of the corresponding value function. In order to set up the problem we need a running cost g : R^n × U → R. The assumptions on g are as follows: (H3) The function g : R^n × U → R is continuous and satisfies (H0) with the same γ ∈ K∞ as f. Furthermore, for all c > 0 we have ... Note that the assumption "with the same γ ∈ K∞ as f" can always be met by enlarging the γ from (H0) for f, if necessary.
We now define the functional J in (3.1), the (extended real-valued) optimal value function V in (3.2), and the function v in (3.3). Note that both V and v satisfy appropriate dynamic programming principles (see for example [35, 36]), i.e., for each T > 0 we have ... and ..., where ... We now investigate the properties of V and v. For this purpose we need the following observation on the solutions of (2.1). Using the function γ from (H0) we define for u ∈ U ... Proof. From the definition of v it immediately follows that the claims for V and v are equivalent. We show the statements for V.
(i) Pick a point x ∈ D_0. Then there exists u ∈ U and

3).) By assumption (H1) we can assume (by changing u on
Hence using (H4) we can estimate ... If (H2') and (H4') hold, then the proof is completely analogous. Conversely, let x ∉ D_0. Then we obtain t(x, u) = ∞ for all u ∈ U, which implies ... for each u ∈ U and thus also ... (ii) It is clear that V(0) = 0, so let x ≠ 0. Assume to the contrary that there is a sequence {u_k} ⊂ U such that J(x, u_k) → 0. Let c := ‖x‖/2 and denote

By (H3) we have for all
which is well defined up to a set of measure zero. Then ... On the other hand we have for all k that ... Using (H5) this implies that ... Next we turn to the investigation of the regularity properties of the functions V and v. We start by proving continuity properties for the trajectories of (2.1).
Lemma 3.4. Assume (H0) and let T > 0 and R > 0 be arbitrary constants. Then for all x, y ∈ R^n and all u ∈ U satisfying ... we have ... Proof. The assumption (H0) yields for almost all t ∈ [0, T] ... Using (3.8) and Gronwall's Lemma we then obtain ... and the assertion follows.
Using this lemma we can prove the following continuity statement. Proposition 3.5. Assume (H0)-(H5) or their respective variants from Remark 3.1. Then V and v are continuous on D_0.
Proof. We show the continuity of V; the statement for v then follows immediately from its definition. The proof is performed in several steps. Throughout the proof the constants C_R, C, etc. are those defined in (H0) and (H4), resp. (H4').
(i) (Local boundedness of V) First note that from (3.6) we have ... Pick an arbitrary x_0 ∈ D_0 and fix ε > 0. Then there exists u_0 ∈ U such that J(x_0, u_0) ≤ V(x_0) + ε. Since J(x_0, u_0) is finite, it follows from (H3) that there exists a time ... By continuity of ϕ in x we can pick a ball B(x_0, δ) such that ... We define the set ..., which is compact since ϕ is continuous in t and x (recall that u_0 is essentially bounded). Using (3.10) we obtain from Bellman's optimality principle for all x ∈ B(x_0, δ) the inequality ..., where we have used (3.9). This shows that sup_{x ∈ B(x_0, δ)} V(x) =: B_V is finite.
(ii) (Bounds on ε-optimal controls and trajectories) For any x ∈ B(x_0, δ) and any ε ∈ (0, 1] we pick an ε-optimal control function u_{x,ε} ∈ U, i.e., ... We claim that for any ε, T > 0 the sets ... are bounded. If the first set were unbounded, then there would be an x ∈ B(x_0, δ) and t_1 > 0 such that ‖ϕ(t_1, x, u_{x,ε})‖ ≥ V(x) + 2ε + 2r. If t_2 > t_1 is the first time at which ‖ϕ(t_2, x, u_{x,ε})‖ = 2r again, then we obtain using Lemma 3.2 that ... On the other hand, if {‖u_{x,ε}‖_{γ,T} | x ∈ B(x_0, δ)} is unbounded for a given T > 0, then there have to be x, u_{x,ε} such that ‖u_{x,ε}‖_{γ,T} ≥ V(x) + 2ε + T γ(2ū). This implies that if we integrate over the (measurable) set ..., as the contribution of the integral over [0, T] \ E to ‖u_{x,ε}‖_{γ,T} can be at most T γ(2ū). Using an estimate over the set E and again Lemma 3.2 we again obtain a contradiction to J(x, u_{x,ε}) ≤ V(x) + ε.
(iii) (Continuity of trajectories) We denote by R_ε an upper bound on the set K_ε. By Lemma 3.4 we can conclude that for x, y ∈ B(x_0, δ) and all t ≥ 0 such that ... we have ... (iv) (Continuity of V) We show the continuity of V on B(x_0, δ). Since x_0 ∈ D_0 was arbitrary, this proves the proposition. So pick ε > 0 and assume without loss of generality that ... < α_2^{−1}(r) C.
From the lower bound g_c on g in (H3) and the boundedness of J(x, u_{x,ε}) on B(x_0, δ) it follows that for any ρ > 0 there is a time T_ρ such that for x ∈ B(x_0, δ) we have ϕ(t, x, u_{x,ε}) ∈ B(0, ρ) for some t ≤ T_ρ. Using (3.9) we may thus assume that the controls u_{x,ε} are chosen in such a way that there exists T > 0 (depending on B_V) such that for all t ≥ T, x ∈ B(x_0, δ) we have ..., and note that the right hand side is finite by (ii). Choose two points x, y ∈ B(x_0, δ) such that ... Without loss of generality assume V(y) ≥ V(x). Abbreviating u := u_{x,ε}, T := T_ε, we obtain using the Lipschitz condition in (H3) and (3.11) ... We continue ... and we obtain ..., because in this case we obtain from (3.11) that ϕ(T, y, u) ∈ B(0, α_1^{−1}(η ε^{1/η}/C)) and thus from (3.9) ... Thus for any ε ∈ (0, 1] and any x ∈ B(x_0, δ) we can find δ_ε > 0 such that |V(y) − V(x)| ≤ 3ε for all x, y ∈ B(x_0, δ) with ‖x − y‖ ≤ δ_ε. This implies continuity of V in B(x_0, δ) and, since x_0 ∈ D_0 was arbitrary, continuity on the whole set D_0. The next proposition describes the behavior of V(x) near the boundary of D_0 and at ∞. Proposition 3.6. Assume (H0)-(H5) or their respective variants from Remark 3.1. Then for any sequence x_k which satisfies dist(x_k, ∂D_0) → 0 or ‖x_k‖ → ∞ we have V(x_k) → ∞ and v(x_k) → 1. In particular, v is continuous on R^n.
Proof. If ‖x_k‖ → ∞, then for every k we have either x_k ∉ D_0, in which case V(x_k) = ∞, or x_k ∈ D_0. In the latter case we have by Lemma 3.2 that V(x_k) ≥ ‖x_k‖ − 2r for all k large enough. This shows the assertion for V, and the conclusion for v is immediate from the definition.
To prove the assertion for dist(x_k, ∂D_0) → 0, suppose to the contrary that there exists a sequence x_k → x_0 ∈ ∂D_0 and some C > 0 such that V(x_k) ≤ C holds for all k ∈ N. Pick ε > 0 and for each k choose a control function u_k ∈ U such that ... Following Step (ii) of the proof of Proposition 3.5 we obtain that {ϕ(t, x_k, u_k) | t ≥ 0, k ∈ N} is bounded and that ‖u_k‖_{γ,t} is uniformly bounded in k for all t ≥ 0. Then we may apply (3.11) as in Step (iv) of the proof of Proposition 3.5 to conclude that for every t ≥ 0 and every δ > 0 there is a ... Because of the lower bound on g in (H3) we may assume that there exists T > 0 (independent of k) such that ... This implies ϕ(T, x_0, u_k) ∈ B(0, r/2) for all sufficiently large k ∈ N, which in turn implies x_0 ∈ D_0. This contradicts x_0 ∈ ∂D_0 because D_0 is open.

4. The generalized Zubov equation. On D_0 and on R^n we consider the generalized Zubov equations (4.1) and (4.2), respectively (for the definition of viscosity solutions we refer to [6, 3]).
Recalling that V is locally bounded in D 0 and v is bounded in R n , our first result follows from a standard application of the dynamic programming principles (3.4) and (3.5), see [3].
Proposition 4.1. Assume (H0)-(H5) or their respective variants from Remark 3.1. Then the functions V and v defined in (3.2) and (3.3) are viscosity solutions of (4.1) in D_0 and of (4.2) in R^n, respectively. Remark 4.2. Note that it follows from these characterizations that v is a control Lyapunov function on D_0 in the usual sense, [34]. In fact, a small calculation shows that v is a viscosity supersolution on D_0 of ... The main result of this section is a uniqueness statement for the equations (4.1) and (4.2), showing that the above functions are the unique viscosity solutions of these equations.
In order to obtain such a result we make use of the so-called optimality principles developed by Soravia [35, 36]. For the application of the results from these references we need that our system is defined by a bounded vector field f. To this end we introduce a standard tool for unbounded control systems, which consists in rescaling the coefficients of the equations (see [5], [35]). The following proposition summarizes the main properties of the rescaled functions.
Proposition 4.3. Assume (H0)-(H3) and (H5) or their respective variants from Remark 3.1. Then the rescaled functions f̃ and g̃ satisfy (H0)-(H3) for suitably adjusted K∞ and KL functions, and the optimal value functions V and v of the original and the rescaled problems coincide.
Proof. First note that (H0), (H1), and the first part of (H3) follow by straightforward computations. In order to prove the second part of (H3) we fix an arbitrary c > 0 and show that g̃_c := inf{g̃(x, u) | ‖x‖ ≥ c, u ∈ U} is positive. To this end we pick arbitrary x ∈ R^n, u ∈ U with ‖x‖ ≥ c and distinguish three cases. Case 1: ‖f(x, u)‖ ≤ 1. In this case from (H3) we get g̃(x, u) ≥ g_c/2. Case 2: ‖f(x, u)‖ > 1 and (x, u) ∈ B(0, 2r) × B(0, 2ū). In this case from (H3) we get g̃(x, u) ... Combining the three cases we obtain g̃_c ≥ min{g_c/2, g_c/(1 + ‖f‖), 1/2} > 0, which shows the second part of (H3).
In order to show that the optimal value functions coincide, observe that the introduction of the vector field f̃ and the running cost g̃ amounts to nothing more than a rescaling of time, which changes neither the trajectories nor the values associated with a particular control. To see this, let x ∈ R^n, u ∈ U be given. Now introduce a new time variable τ through the differential equation ..., a.e., and a control ũ(τ) := u(t(τ)), a.e. Then the function ψ(τ) := ϕ(t(τ), x, u) satisfies the differential equation ... So if we consider the system ..., then using standard transformation-of-integrals formulas it is easy to see that if T_max(x, u) = ∞ then J̃(x, ũ) = J(x, u), where J̃ defines the value along a rescaled trajectory using the running cost g̃ in place of g in (3.1). If the solution explodes, i.e. T_max(x, u) < ∞, then we have so far simply defined the value to be infinity. However, since (H3) holds for g̃, the associated integral of the transformed system also diverges because the trajectory will never enter B(0, r). Thus the optimal value functions coincide. Note that we do not need (H4) and (H5) for the rescaled problem in order to establish the previous result: (H4) is needed in order to ensure finiteness of V on D_0, while (H5) is needed in order to establish the continuity of V. Both properties readily carry over to the rescaled problem via the integral transformations.
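The effect of this time rescaling can be illustrated on a scalar toy problem (our choice, assuming the standard normalization f̃ = f/(1 + ‖f‖), g̃ = g/(1 + ‖f‖) from the cited literature): both parametrizations accumulate the same total cost.

```python
# Toy check (illustrative): for xdot = f(x) = -x with g(x) = 2 x^2 and
# x0 = 2 the cost is J = int_0^infty 2 x(t)^2 dt = x0^2 = 4.  The
# rescaled dynamics f(x)/(1+|f(x)|) with cost g(x)/(1+|f(x)|) yields
# the same value, since only the time parametrization changes.
def cost(rhs, run, x0, h, T):
    # explicit Euler integration of xdot = rhs(x), accumulating int run(x) dt
    x, J = x0, 0.0
    for _ in range(int(T / h)):
        J += run(x) * h
        x += rhs(x) * h
    return J

f = lambda x: -x
g = lambda x: 2 * x * x
scale = lambda x: 1.0 + abs(f(x))

J_orig = cost(f, g, 2.0, 1e-3, 20.0)
J_resc = cost(lambda x: f(x) / scale(x), lambda x: g(x) / scale(x), 2.0, 1e-3, 40.0)
assert abs(J_orig - 4.0) < 0.05 and abs(J_resc - 4.0) < 0.05
```

Note that the rescaled trajectory needs a longer time horizon to decay, which is exactly the reparametrization effect; the accumulated cost is unchanged.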
In order to prove our uniqueness statement we need one final assumption.

(H6)
The rescaled function g̃ satisfies g̃(x, u) → ∞ as ‖u‖ → ∞ for each x ∈ R^n.
To Zubov's equations (4.1) and (4.2) we associate the Hamiltonians ... and ... From (H5) we obtain that the supremum in these Hamiltonians is attained in a compact subset of U, for r < 1 in the case of H_v. This implies that the Hamiltonians H_V and H_v are locally Lipschitz continuous with respect to their arguments, again for r < 1 in the case of H_v.
The following Theorem 4.4 and its Corollary 4.5 are the main results of this paper.

2). (iii) The functions v and V characterize the domain of asymptotic controllability via
(iv) The functions v and V satisfy v(x_k) → 1 and V(x_k) → ∞ for all sequences with x_k → ∂D_0 or ‖x_k‖ → ∞. Before turning to the proof we state the following corollary, whose proof in particular shows how a cost function g meeting the assumptions of Theorem 4.4 can be constructed.
Corollary 4.5. Assume that f satisfies the conditions (H0)-(H2). Then there exists a continuous function v : R^n → [0, 1] which is a control Lyapunov function (in the usual sense, cf. Remark 4.2) on the domain of asymptotic controllability D_0 and constant equal to 1 on R^n \ D_0. Furthermore, this v is the unique bounded viscosity solution of (4.2) with v(0) = 0 for some suitable g : R^n × U → R.
Proof. In order to prove the corollary it is sufficient to construct a function g satisfying (H3)-(H6). The Lyapunov function property then follows immediately from Theorem 4.4 and Remark 4.2.
To this end, consider the Lipschitz function ρ : R_+ → R_+ given by ... For this function, (H3), (H4) and (H6) follow immediately from the construction. We obtain the desired g by modifying this preliminary cost as follows: let γ ∈ K∞ be such that (H0) and (H3) are satisfied and set ... Then (H5) is satisfied, while straightforward computations show that (H3), (H4) and (H6) carry over to the modified g.
In the proof of Theorem 4.4 we encounter two difficulties: the unbounded dependence of the functions on the control variable and the vanishing of the cost g at the origin.
To solve the first problem we use the rescaled functions from above. Associated to these functions we introduce two rescaled equations which share with (4.1) and (4.2) the same sets of sub- and supersolutions.
The proof of Lemma 4.6 is postponed to Appendix 6. The following corollary is a simple consequence of the previous lemma.
Even if the coefficients of the rescaled equations have a better dependence on the variable u, there is still the problem of the vanishing of g̃ at the origin. In order to prove a uniqueness result for (4.5) and (4.6), we use a control-theoretic argument and some optimality principles introduced in [35, 36], as stated in the following lemma.
(ii) Consider a continuous viscosity supersolution w^+ of (4.6) and let Ω ⊂ R^n be an open and bounded set with sup_{x∈Ω} w^+(x) < 1. Consider the first exit time from Ω given by ... Proof. Let Ω ⊂ R^n be an open and bounded set and let Ũ be a compact subset of U, with the corresponding space of measurable control functions denoted by Ũ. If w^− is an upper semicontinuous viscosity subsolution of (4.6) in R^n, then the restriction of w^− to Ω is also a subsolution of (4.6) on Ω with Ũ instead of U. For the restricted control value set Ũ, equation (4.6) is continuous; furthermore, f̃ and g̃ are uniformly Lipschitz on Ω. Hence we can apply [36, Theorem 3.2(i)], which for each u ∈ Ũ yields ..., where T_ex(x, u, Ω) is the first exit time of ϕ(t, x, u) from the set Ω defined in (ii). Since f̃ is globally bounded, for any x ∈ R^n and any T > 0 we may find an open and bounded set Ω_{x,T} ⊂ R^n such that T_ex(x, u, Ω_{x,T}) ≥ T for each u ∈ U. Since each u ∈ U is essentially locally bounded, it lies in Ũ for an appropriate choice of Ũ, which shows (i).
The proof of (ii) follows from [36, Theorem 3.2(ii)], observing that equation (4.6) is continuous on Ω since w^+(x) < 1; hence here we do not need to restrict the control value set U.
Remark 4.9. Note that the asymmetry of the statements (i) and (ii) is due to the fact that we imposed different conditions in order to obtain continuity of (4.6), which is needed for the application of [36, Theorem 3.2]. In (i) we restrict the set of control values, obtaining a result for arbitrary Ω (thus for arbitrary T) and for upper semicontinuous functions. In (ii) this restriction is not possible because the supersolution property will not persist when passing from U to a compact subset. Thus here we ensure continuity of (4.6) by considering suitable subsets Ω of the state space.
Using these inequalities we can now prove the following uniqueness results. Lemma 4.10. Assume (H0)-(H6) and consider the functions V and v defined by (3.2) and (3.3). Then (i) v is the unique bounded continuous viscosity solution of (4.6) with v(0) = 0, (ii) (D_0, V) is the unique couple (Õ, V) of an open set Õ containing the origin and a locally bounded, nonnegative continuous viscosity solution of (4.5) on Õ such that V(0) = 0 and V(x) → +∞ for x → ∂Õ. Proof. We prove only (i), since the proof of assertion (ii) is similar. Note that by Proposition 4.3 the functions v and V can be taken to be defined through (4.4) and the running cost g̃. In the following we work with this representation. Again, by ϕ(t, x, u) we denote the solutions of (4.4). Claim 1: If w^− is a bounded continuous subsolution of (4.6) on R^n with w^−(0) ≤ 0, then w^− ≤ v. By the upper semicontinuity of w^− and w^−(0) ≤ 0 we obtain that for every ε > 0 there exists a δ > 0 with w^−(x) ≤ ε for all x ∈ R^n with ‖x‖ ≤ δ. Now we distinguish two cases. (i) x_0 ∈ D_0: Then (H3) (cf. Proposition 4.3) implies that there exists a sequence t_k → ∞ such that ϕ(t_k, x_0, u^*) → 0 as k → ∞. Thus it follows from the lower optimality principle (4.7) and the definition of v that ..., which shows the claim, as ε > 0 was arbitrary. (ii) x_0 ∉ D_0: In this case by Proposition 3.3 it is sufficient to show that w^−(x_0) ≤ 1.
Thus using (4.10), (3.5) and the inequality G(x_0, t_n, u_n) ≤ 1 for sufficiently large t > 0 we can conclude the desired inequality, which shows Claim 2, as ε > 0 is arbitrary. Finally, since every viscosity solution w is both a sub- and a supersolution, the combination of Claims 1 and 2 proves the lemma.

Proof of Theorem 4.4. All properties follow from the fact that by Lemma 4.10 the functions v and V defined by (3.3) and (3.2) are the unique continuous viscosity solutions of (4.6) and (4.5), respectively.
(i) and (ii): By Corollary 4.7 all viscosity solutions of (4.6) and (4.5) are also viscosity solutions of (4.2) and (4.1), respectively, and vice versa. Hence, v and V are also the unique viscosity solutions of (4.2) and (4.1), respectively.

5. Approximation with bounded control values.
In this section we consider the bounded approximations U_k = U ∩ cl B(0, k) of the (possibly) unbounded set U of control values and the corresponding set U_k := L^∞([0, ∞), U_k) of control functions. Throughout this section we assume that (H0)–(H2) hold, which implies that we can find g meeting (H3)–(H6).
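Since the control ranges U_k are nested, the associated value functions are automatically monotone in k; this elementary observation (not spelled out in the text, but used implicitly below, e.g. in Remark 5.2) follows because V_k and v_k are defined as infima over U_k, and an infimum over a larger set of admissible controls can only be smaller:

```latex
% Nested control ranges give monotone value functions:
\begin{align*}
  U_1 \subseteq U_2 \subseteq \cdots \subseteq U
  &\;\Longrightarrow\;
  \mathcal{U}_1 \subseteq \mathcal{U}_2 \subseteq \cdots \subseteq \mathcal{U} \\
  &\;\Longrightarrow\;
  V_1(x) \ge V_2(x) \ge \cdots \ge V(x), \quad
  v_1(x) \ge v_2(x) \ge \cdots \ge v(x)
  \quad \text{for all } x \in \mathbb{R}^n .
\end{align*}
```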
Proposition 5.1. Consider the functions V_k and v_k defined as V and v in (3.2) and (3.3), but with the set of control functions U replaced by U_k. Then the relations V_k(x) ↘ V(x) and v_k(x) ↘ v(x) hold for all x ∈ R^n as k → ∞. Since ε was arbitrary this shows the claim on D_0, both for V and v. For x ∉ D_0 we have V_k(x) = V(x) = +∞ and v_k(x) = v(x) = 1, which shows the claim also in this case.
Remark 5.2. If the assumptions of Proposition 3.6 hold, then, since v_k is decreasing in k, Dini's theorem yields that v_k converges to v locally uniformly on R^n.
For the following proposition recall the definition of set limits: for a sequence of sets X_k these are given by lim sup_{k→∞} X_k and lim inf_{k→∞} X_k, and the set limit lim_{k→∞} X_k exists whenever these two sets coincide. The proposition states that the set limit lim_{k→∞} D_k exists and satisfies lim_{k→∞} D_k = D_0.

On the other hand, if x ∈ D_0 then for any ε > 0 there exists k_0 ∈ N with V_k(x) ≤ V(x) + ε for all k ≥ k_0. This implies that x ∈ D_k for all k ≥ k_0 and consequently x ∈ ⋂_{m≥k_0} D_m. This implies D_0 ⊆ lim inf_{k→∞} D_k.

In particular, this implies that for any compact set K ⊂ D_0 we obtain K ⊂ D_k for all sufficiently large k. Thus, in order to steer the system to 0 from a compact subset K ⊂ D_0 it is sufficient to consider bounded control functions.

6. Examples.
In this section we discuss the necessity of some of our assumptions and explain how the classical case of linear quadratic control fits within the present framework.
Example 6.1. Consider the one dimensional dynamics

ẋ(t) = (x(t) − 1) u(t) , (6.1)

where U = R. The origin is an equilibrium point, so that (H1) is satisfied, while x = 1 is repulsive in the sense that any trajectory starting from x_0 ≥ 1 cannot reach the origin. With this it is easy to see that D_0 = (−∞, 1). Furthermore, (H0) is satisfied with γ(u) = |u|. Now consider the cost function g_1(x, u) = |x|, which satisfies (H3) and (H4) but neither (H5) nor (H6). For x_0 ∈ (0, 1) and an arbitrary constant α > 0 choose the control

u_α(t) = (α / (1 − x_0 + αt)) χ_{[0, x_0/α]}(t) ,

where χ_{[0,x_0/α]} denotes the indicator function of the interval [0, x_0/α]. The corresponding solution of (6.1) is given by

φ(t) = (x_0 − αt) χ_{[0, x_0/α]}(t) .

Observe that for x_0 close to 1 we need a very large control to start to move towards the origin; this is because the control u is multiplied by x − 1.
Calculating the corresponding cost we obtain

∫_0^∞ g_1(φ(t), u_α(t)) dt = ∫_0^{x_0/α} (x_0 − αt) dt = x_0^2 / (2α) ,

and therefore, sending α → +∞, it follows that V_1(x_0) = 0 for any x_0 ∈ (0, 1). Of course, V_1(x) = ∞ for x ≥ 1. Summarizing, this shows that v_1 is discontinuous on R and not a control Lyapunov function on D_0.

On the other hand, setting g_2(x, u) = max{|x| + |u|, (|x| + |u|)^2} a cost function satisfying (H6) is obtained. To analyze the associated value functions fix x_0 ∈ (0, 1) and choose a control u such that φ(t) := φ(t, x_0, u) → 0. We may assume that φ is strictly decreasing, as otherwise u is clearly not optimal. Now let T > 0 be a time such that φ(T) > 0. Then, since g_2(x, u) ≥ |u| and |φ̇(t)| = (1 − φ(t))|u(t)|, we have

∫_0^T g_2(φ(t), u(t)) dt ≥ ∫_0^T (−φ̇(t)) / (1 − φ(t)) dt = log(1 − φ(T)) − log(1 − x_0) .

As φ(T) approaches 0 (in finite or infinite time) this calculation shows that V_2(x_0) ≥ −log(1 − x_0) for x_0 ∈ (0, 1), so that in particular v_2 is continuous on R and a control Lyapunov function on D_0 (where we leave the assertion for (−∞, 0) to the reader).

Finally, note that a combination of the previous examples leads to an intermediate situation. To this end let h : R → [0, 1] be a continuous function such that h(x) = 1 if x ∈ (−∞, 1/2], h(x) = 0 for x ∈ [3/4, ∞), and let g_3(x, u) = |x| + h(x)|u|. Then it follows for x ∈ [0, 1/2] that V_3(x) = V_2(x) ≥ −log(1 − x) by the considerations on g_2, whereas for x ∈ (3/4, 1) we have V_3(x) = V_3(3/4), using that V_1 is constant on that interval. In this example (H5) and (H6) are not satisfied, v_3 is not continuous and V_3 is a control Lyapunov function only on a subset of D_0.
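The computation for g_1 can be cross-checked numerically. The following sketch assumes the dynamics (6.1) read ẋ = (x − 1)u, consistent with the remark above that the control u is multiplied by x − 1; the function name and the Euler discretization are ours, chosen only for illustration.

```python
# Numerical sketch of Example 6.1 under the assumption x' = (x - 1) u.
# The feedback u(t) = alpha / (1 - x(t)) yields x' = -alpha, so the state
# moves linearly from x0 to 0 in time x0 / alpha.  With running cost
# g1(x, u) = |x| the accumulated cost is x0^2 / (2 alpha), which vanishes
# as alpha grows: cheap trajectories exist, illustrating V1 = 0 on (0, 1).

def cost_g1(x0, alpha, steps=100000):
    """Euler simulation of the trajectory and the cost integral of |x|."""
    T = x0 / alpha              # time at which the origin is reached
    dt = T / steps
    x, cost = x0, 0.0
    for _ in range(steps):
        u = alpha / (1.0 - x)   # chosen control, large near x = 1
        cost += abs(x) * dt     # integrate g1 along the trajectory
        x += (x - 1.0) * u * dt # Euler step of x' = (x - 1) u
    return x, cost

for alpha in (1.0, 10.0, 100.0):
    x_end, c = cost_g1(0.9, alpha)
    print(alpha, x_end, c)      # cost shrinks roughly like 1 / alpha
```

The printed costs decrease toward 0 as α grows, matching the closed-form value x_0^2/(2α).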
Example 6.2. Finally we show that the classical linear quadratic control problem fits into our setup. This problem is obtained if we set f(x, u) = Ax + Bu and g(x, u) = x^T Qx + u^T Ru, where A, B, Q, R are matrices of appropriate dimensions with Q and R symmetric and positive definite.
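For this linear quadratic problem the optimal value function is classical; as a cross-check (standard LQR theory, not a computation from the present paper, and assuming (A, B) is stabilizable), it is quadratic and determined by the algebraic Riccati equation:

```latex
% Classical LQR facts, stated as a cross-check: the value function of
%   minimize  \int_0^\infty x^T Q x + u^T R u \, dt   s.t.  \dot x = Ax + Bu
% is quadratic,
V(x) = x^T P x,
\qquad\text{where } P = P^T > 0 \text{ solves }
A^T P + P A - P B R^{-1} B^T P + Q = 0,
\qquad\text{with optimal feedback } u^*(x) = -R^{-1} B^T P x .
```

In particular V is smooth here, so the viscosity framework of the present paper reduces to the classical verification argument.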
By direct computations one sees that these functions satisfy (H0) for any γ ∈ K_∞, as well as (H1), (H3) and (H5). The linear system also satisfies (H2'), because it is known that local asymptotic controllability implies the existence of a feedback matrix F such that A + BF is exponentially stable, i.e., this matrix has all its eigenvalues in the open left half plane; this yields (H2') with β(r, t) = K e^{−λt} r for suitable constants K, λ > 0. Hence we obtain β(r, t) = α_2(α_1(r) e^{−t}) with α_1(r) = (Kr)^{1/λ} and α_2(r) = r^λ, which implies (H4') for our g with δ = 2/λ and C = ‖Q‖ + ‖R‖. Finally, (H6) is satisfied because g grows quadratically in u while f grows only linearly in u. Thus, the classical linear quadratic problem is a special case of our setup.

Thus also in this case V^+ is a viscosity supersolution of (4.5). Conversely, let V^+ be a viscosity supersolution of (4.5). Then for any subgradient p of V^+ at x we have sup_{u∈U} {−f(x, u)·p − g(x, u)} ≥ 0.
Since f grows at most linearly while g grows superlinearly in u due to (H6), the supremum over u is attained within a compact subset of U. Hence by continuity we can find a control value u* ∈ U for which the maximum is attained.

Theorem 4.4.
Assume that f and g satisfy the assumptions (H0)–(H6) (or their respective variants from Remark 3.1). Then
(i) the function v from (3.3) is the unique bounded viscosity solution of (4.2) with v(0) = 0;
(ii) there exists a unique couple (O, V) such that O is an open set containing the origin and V is a locally bounded, nonnegative continuous viscosity solution of (4.1) in O with V(0) = 0 and V(x) → +∞ for x → ∂O. Here V is the function from (3.2).
If these two sets coincide, the set limit exists and is defined as lim_{k→∞} X_k := lim sup_{k→∞} X_k = lim inf_{k→∞} X_k.

Proposition 5.3. Consider the sets D_k, i.e., the domains of asymptotic null-controllability corresponding to the control function sets U_k. Since x ∈ D_0 was arbitrary in the preceding argument, we obtain D_0 ⊆ lim inf_{k→∞} D_k, which shows the claim.

Remark 5.4. This proposition implies that for any compact set K ⊂ R^n the convergence d_H(K ∩ D_k, K ∩ D_0) → 0 in the Hausdorff metric holds (see e.g. [2, Proposition 1.1.5]). In particular, if D_0 is bounded then we obtain uniform convergence of D_k to D_0 in the Hausdorff metric.
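The Hausdorff metric of Remark 5.4 can be made concrete for finite point sets. The following sketch implements d_H(A, B) = max(sup_{a∈A} dist(a, B), sup_{b∈B} dist(b, A)) for finite sets of reals and evaluates it on nested grids shrinking toward a limit set; the function name and the particular sets are ours, chosen only for illustration.

```python
# Hausdorff distance between finite nonempty sets of reals, illustrating
# the metric d_H used in Remark 5.4.

def hausdorff(A, B):
    """d_H(A, B) = max of the two one-sided sup-inf distances."""
    d_ab = max(min(abs(a - b) for b in B) for a in A)  # sup_a dist(a, B)
    d_ba = max(min(abs(a - b) for a in A) for b in B)  # sup_b dist(b, A)
    return max(d_ab, d_ba)

# Hypothetical nested approximations: grids for sets (-inf, 1 - 1/k) and
# (-inf, 1), both intersected with the compact set K = [0, 1].
K_D0 = [i / 1000 for i in range(1000)]             # [0, 1) sampled
for k in (2, 10, 100):
    K_Dk = [x for x in K_D0 if x < 1 - 1 / k]      # truncated grid
    print(k, hausdorff(K_Dk, K_D0))                # shrinks as k grows
```

The printed distances decrease toward 0 as k grows, mirroring the convergence d_H(K ∩ D_k, K ∩ D_0) → 0 of Remark 5.4.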