Quantitative aspects of the input-to-state stability property

In this paper we consider quantitative aspects of the input-to-state stability (ISS) property. Our considerations lead to a new variant of ISS, called input-to-state dynamical stability (ISDS), which is based on using a one-dimensional dynamical system for building the class KL function for the decay estimate and for describing the influence of the perturbation. The main feature of ISDS is that it admits a quantitative or gain preserving Lyapunov function characterization. We also show the relation to the original ISS formulation and present several applications.


Introduction
The input-to-state stability (ISS) property introduced by Sontag [16] has by now become one of the central properties in the study of stability of perturbed nonlinear systems. It assumes that each trajectory ϕ of a perturbed system satisfies the inequality ‖ϕ(t, x, u)‖ ≤ max{β(‖x‖, t), ρ(‖u‖∞)} for suitable functions β of class KL and ρ of class K∞.
While ISS has turned out to be a very useful qualitative property with many applications (see, e.g., [1,4,8,9,11,13,15,21]) and lots of interesting features (see, e.g., [7,17,20] and in particular the recent survey [19]), there are some drawbacks of this property when quantitative statements are of interest. The main problem with ISS in this context is that it does not yield explicit information about what happens for vanishing perturbations, i.e., for perturbations u with u(t) → 0 as t → ∞. Implicitly, ISS ensures that if u(t) tends to 0 as t tends to infinity then also ϕ(t, x, u) converges to 0 for t tending to infinity, but no explicit rate of convergence can be deduced. The main idea in order to overcome this difficulty is by introducing a certain "memory fading" effect into the u-term of the ISS formulation, an idea which was used before by Praly and Wang [13] in their notion of exp-ISS. There the perturbation is first fed into a one-dimensional control system whose output then enters the right-hand side of the ISS estimate. Here, instead, we use the value of the perturbation at each time instant as an initial value of a one-dimensional dynamical system, which leads to the concept of input-to-state dynamical stability (ISDS). Proceeding this way, we are in particular able to "synchronize" the effects of past disturbances and large initial values by using the same dynamical system for both terms. It turns out that ISDS is qualitatively equivalent to ISS and, in addition, that we can pass from ISS to ISDS with only slightly larger robustness gains.
One of the most important features of the ISS property is that it can be characterized by a dissipation inequality using a so-called ISS Lyapunov function, see [20]. One of the central properties of the ISDS estimate is that it admits an ISDS Lyapunov function, which not only characterizes ISDS as a qualitative property (the qualitative equivalence ISS ⇔ ISDS immediately implies that the well known ISS Lyapunov function would be sufficient for this) but also represents the respective decay rate, the overshoot gain and the robustness gain. The respective results are given in Section 4.
We believe that there are many applications where quantitative robust stability properties are of interest. A particular area of application is numerical analysis, where one interprets a numerical approximation as a perturbation of the original system and vice versa. One example is given in Section 5; for a comprehensive treatment of this subject we refer to the monograph [5]. In Section 6 we indicate two control-theoretic applications of the ISDS property, which also illustrate the difference from the ISS property.

Motivation
In this section we are going to motivate our approach by considering systems without input, i.e., nonlinear autonomous differential equations of the type

ẋ(t) = f(x(t))   (2.1)

with x ∈ Rⁿ and f : Rⁿ → Rⁿ locally Lipschitz. The solutions of (2.1) for initial value x ∈ Rⁿ at initial time t = 0 will be denoted by ϕ(t, x). If we assume that the origin is globally asymptotically stable for (2.1), then it is well known that there exists a Lyapunov function V : Rⁿ → R, i.e., a positive definite and proper function which is strictly decreasing along the solutions of (2.1). By suitable rescaling of V we may assume that there exists a class K∞ function σ (see Section 3 for a definition) such that the inequalities

‖x‖ ≤ V(x) ≤ σ(‖x‖)   (2.2)

hold. Furthermore, it is easily seen that there exists a continuous function g : R⁺₀ → R⁺₀ with g(r) > 0 for r > 0 such that the inequality

DV(x) f(x) ≤ −g(V(x))   (2.3)

holds. Integrating inequality (2.3) and using (2.2) then yields the estimate

‖ϕ(t, x)‖ ≤ µ(σ(‖x‖), t)   (2.4)

where µ is the solution of the 1d differential equation

(d/dt) µ(r, t) = −g(µ(r, t)),  µ(r, 0) = r.

This means that we get a special type of KL-estimate for the norm of the solution trajectories ϕ(t, x), which in turn implies global asymptotic stability. Now the nice property of an inequality of type (2.4) is that it admits a converse Lyapunov theorem using a construction of Yoshizawa [22]. If we assume (2.4) and set

V(x) := sup_{t≥0} µ(‖ϕ(t, x)‖, −t),

then this function satisfies (2.2) and V(ϕ(t, x)) ≤ µ(V(x), t) for all x ∈ Rⁿ, t ≥ 0, from which we can in turn conclude (2.4). This function V, however, may be discontinuous, thus we cannot conclude (2.3). In order to obtain a smooth function we can fix an arbitrary ε > 0 and set

V_ε(x) := sup_{t≥0} µ(‖ϕ(t, x)‖, −(1 − ε)t).

This function is Lipschitz continuous and satisfies (2.2) and

V_ε(ϕ(t, x)) ≤ µ(V_ε(x), (1 − ε)t).

By an appropriate smoothing technique we can obtain a smooth function (at least away from the origin) satisfying

DV(x) f(x) ≤ −(1 − ε) g(V(x)).

Thus, the particular form of the decay estimate (2.4) allows a converse Lyapunov theorem which preserves the decay rate µ(σ(r), t) up to an arbitrarily small ε > 0.
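For a concrete instance of an estimate of type (2.4), consider ẋ = −x³, where one may take V(x) = |x|, g(r) = r³ and σ = id (these choices are ours, for illustration); the corresponding µ can then be computed in closed form. A quick numerical check:

```python
# Numerical check of a decay estimate of type (2.4) for xdot = -x^3,
# with V(x) = |x|, g(r) = r^3 and sigma = id (illustrative choices):
# mu solves the 1d ODE (d/dt) mu(r, t) = -g(mu(r, t)), mu(r, 0) = r,
# which here gives mu(r, t) = r / sqrt(1 + 2 r^2 t) in closed form.
import math

def mu(r, t):
    return r / math.sqrt(1.0 + 2.0 * r * r * t)

def phi(t, x, dt=1e-4):
    # Explicit Euler integration of xdot = -x^3 starting from x at time 0.
    for _ in range(int(t / dt)):
        x -= dt * x ** 3
    return x

# KL-type estimate (2.4): |phi(t, x)| <= mu(sigma(|x|), t) with sigma = id.
x0 = 2.0
for t in (0.5, 1.0, 5.0):
    assert abs(phi(t, x0)) <= mu(abs(x0), t) + 1e-6
print(phi(1.0, x0), mu(abs(x0), 1.0))
```

For this particular system the estimate is in fact tight, since ‖ϕ(t, x)‖ solves the comparison ODE exactly.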
The scope of this paper is to carry out the same procedure for systems with input and the ISS property, i.e.,
• formulate a suitable variant of ISS similar to (2.4), which leads to the ISDS property,
• find a Lyapunov function which implies ISDS,
• prove a converse Lyapunov theorem which preserves the rate and gains at least up to some arbitrarily small parameter ε > 0.
Input-to-state dynamical stability

We consider nonlinear systems of the form

ẋ(t) = f(x(t), u(t))   (3.1)

where we assume that f : Rⁿ × Rᵐ → Rⁿ is continuous and that for each two compact subsets K ⊂ Rⁿ and W ⊂ Rᵐ there exists a constant L = L(K, W) such that ‖f(x, u) − f(y, u)‖ ≤ L‖x − y‖ for all x, y ∈ K and all u ∈ W. The perturbation functions u are supposed to lie in the space U of measurable and locally essentially bounded functions with values in U, where U is an arbitrary subset of Rᵐ. The trajectories of (3.1) with initial value x at time t = 0 are denoted by ϕ(t, x, u).
We recall that a continuous function α : R⁺₀ → R⁺₀ is said to be of class K∞ if it satisfies α(0) = 0 and is strictly increasing and unbounded, and that a continuous function β : R⁺₀ × R⁺₀ → R⁺₀ is said to be of class KL if it is strictly increasing in the first and strictly decreasing to 0 in the second argument. We define a continuous function µ : R⁺₀ × R → R⁺₀ to be of class KLD if its restriction to R⁺₀ × R⁺₀ is of class KL and, in addition, it is a one-dimensional dynamical system, i.e., it satisfies µ(r, t + s) = µ(µ(r, t), s) for all t, s ∈ R.
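For instance, the prototypical choice µ(r, t) = r e^(−t) (an illustrative example, not one mandated by the text) is of class KLD; a quick numerical check of the group property, including negative times:

```python
# Group property of a class KLD function: mu(r, t + s) = mu(mu(r, t), s),
# also for negative t, s.  mu(r, t) = r * exp(-t) is an illustrative choice.
import math

mu = lambda r, t: r * math.exp(-t)

cases = [(1.5, 0.3, 2.0), (2.0, -1.0, 0.5), (0.7, 4.0, -2.5)]
ok = all(abs(mu(r, t + s) - mu(mu(r, t), s)) < 1e-12 for r, t, s in cases)
print(ok)
```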
The expression ‖·‖ denotes the usual Euclidean norm, ‖u‖∞ is the L∞ norm of u ∈ U, and for t > 0 and any measurable function g : R → R⁺₀ the expression ess sup_{τ∈[0,t]} g(τ) denotes the essential supremum of g on [0, t].
Using these notations we can now formulate the concept of input-to-state dynamical stability.
Definition 3.1. A system (3.1) is called input-to-state dynamically stable (ISDS) if there exist a function µ of class KLD and functions σ and γ of class K∞ such that the inequality

‖ϕ(t, x, u)‖ ≤ max{µ(σ(‖x‖), t), ν(u, t)}

holds for all t ≥ 0, x ∈ Rⁿ and all u ∈ U, where ν is defined by

ν(u, t) := ess sup_{τ∈[0,t]} µ(γ(‖u(τ)‖), t − τ).   (3.2)

Here we call the function µ the decay rate, the function σ the overshoot gain and the function γ the robustness gain.
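To make the definition concrete, the following sketch checks the ISDS inequality along a simulated trajectory of the scalar system ẋ = −x + u. The system, the rate µ(r, t) = r e^(−t/4), σ = id and γ(r) = 4r/3 are our own illustrative choices (they can be justified via V(x) = |x|), not data from the paper:

```python
# Checking the ISDS inequality of Definition 3.1 along a simulated trajectory
# of xdot = -x + u.  Rate mu(r, t) = r * exp(-t/4), overshoot gain sigma = id
# and robustness gain gamma(r) = 4r/3 are illustrative choices.
import math

mu = lambda r, t: r * math.exp(-t / 4.0)
gamma = lambda r: 4.0 * r / 3.0
u = lambda s: math.sin(5.0 * s) * math.exp(-s)   # a vanishing disturbance

dt, t_end, x0 = 1e-3, 5.0, 1.0
xs = [x0]
for k in range(int(t_end / dt)):
    xs.append(xs[-1] + dt * (-xs[-1] + u(k * dt)))  # explicit Euler step

def nu(t):
    # nu(u, t) = ess sup over tau in [0, t] of mu(gamma(|u(tau)|), t - tau),
    # approximated on the time grid.
    k = int(t / dt)
    return max(mu(gamma(abs(u(j * dt))), t - j * dt) for j in range(k + 1))

ok = all(abs(xs[int(t / dt)]) <= max(mu(abs(x0), t), nu(t)) + 1e-3
         for t in (1.0, 2.0, 3.0, 4.0, 5.0))
print(ok)
```

Note that ν(u, t) itself decays here because u vanishes, which is exactly the "memory fading" effect distinguishing ISDS from the static ISS bound ρ(‖u‖∞).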
Conversely, a straightforward application of [18, Proposition 7] shows that any class KL function can be bounded from above by the composition of a class KLD and a class K∞ function, see [5, Lemma B.1.4]. Hence the only real difference between ISS and ISDS is the decay property of the ν(u, t)-term. The following theorem shows how one can pass from the ISS to the ISDS formulation; for the proof see [5, Proposition 3.4.4].

Theorem 3.1. Assume that the system (3.1) is ISS for some β of class KL and ρ of class K∞. Then for any class K∞ function γ with γ(r) > ρ(r) for all r > 0 there exists a class KLD function µ such that the system is ISDS with attraction rate µ, overshoot gain σ(r) = β(r, 0) and robustness gain γ.
For some results in this paper we will need the following assumption.

Assumption 3.1. The functions µ, σ and γ in Definition 3.1 are C∞ on R⁺ × R or R⁺, respectively, and the function µ solves the ordinary differential equation

(d/dt) µ(r, t) = −g(µ(r, t))

for some locally Lipschitz continuous function g : R⁺ → R⁺, all r > 0 and all t ∈ R.
It was shown in [5, Appendix A] that for given nonsmooth rates and gains from Definition 3.1 one can find rates and gains arbitrarily close to the original ones such that Assumption 3.1 holds and Definition 3.1 remains valid. Hence Assumption 3.1 is only a mild regularity condition.

Lyapunov function characterization
One of the main tools for working with ISS systems is the ISS Lyapunov function, whose existence is a necessary and sufficient condition for the ISS property, see [20]. In this section we provide two theorems on a Lyapunov function characterization of the ISDS property. We start with a version for discontinuous Lyapunov functions, which can exactly represent the rate and gains in the ISDS formulation. The proof of the following theorem is given in Section 7.
Theorem 4.1. A system (3.1) is ISDS with rate µ of class KLD and gains σ and γ of class K∞ if and only if there exists a (possibly discontinuous) ISDS Lyapunov function V : Rⁿ → R⁺₀ satisfying

‖x‖ ≤ V(x) ≤ σ(‖x‖)

and

V(ϕ(t, x, u)) ≤ max{µ(V(x), t), ν(u, t)}

for all x ∈ Rⁿ, t ≥ 0 and all u ∈ U, where ν is given by (3.2).
For many applications it might be desirable to have ISDS Lyapunov functions with more regularity. The next theorem, which is also proved in Section 7, shows that if we slightly relax the sharp representation of the gains, then we can always find smooth (i.e., C∞) Lyapunov functions, at least away from the origin.

Theorem 4.2. A system (3.1) is ISDS with rate µ of class KLD and gains σ and γ of class K∞ satisfying Assumption 3.1 if and only if for each ε > 0 there exists a continuous function V : Rⁿ → R⁺₀ which is C∞ on Rⁿ \ {0}, satisfies bounds of type (2.2), and satisfies

γ(‖u‖) ≤ V(x)  ⇒  DV(x) f(x, u) ≤ −(1 − ε) g(V(x))   (4.4)

for all x ∈ Rⁿ \ {0} and all u ∈ U.
It should be noted that there exists an intermediate object between the discontinuous and the smooth ISDS Lyapunov function, namely a Lipschitz Lyapunov function which satisfies (4.4) in a suitable generalized sense using the theory of viscosity solutions, see [5] for details. While both smooth and Lipschitz Lyapunov functions characterize the optimal gains "in the limit", we conjecture that there are examples in which gains can be exactly characterized by Lipschitz but not by smooth ISDS Lyapunov functions, similar to what was shown recently for H∞ Lyapunov functions in [14].
Theorem 4.2 gives rise to a constructive procedure for computing ISDS robustness gains from Lyapunov functions for the unperturbed system f(x, 0). We illustrate this procedure by three examples.
This example nicely illustrates the (typical) tradeoff between the attraction rate µ and the robustness gain γ, which is represented here by the choice of λ: the smaller γ becomes, the slower the guaranteed convergence. In the next two examples, showing ISDS estimates for two simple nonlinear systems, we set λ = 3/4.

Example 4.2.
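The tradeoff just described can be sketched numerically. The scalar system ẋ = −x + u and the gain family γ(r) = r/λ below are our own illustrative choices, not the paper's Example 4.1:

```python
# Rate/gain tradeoff sketched for xdot = -x + u (our own scalar example):
# with V(x) = |x| and gamma(r) = r / lam, 0 < lam < 1, the condition
# gamma(|u|) <= |x| gives DV(x) f(x, u) <= -|x| + lam*|x| = -(1 - lam) V(x),
# so the guaranteed decay is mu(r, t) = r * exp(-(1 - lam) * t):
# a smaller gain gamma (larger lam) means slower guaranteed convergence.
lams = (0.25, 0.5, 0.75)
gains = [1.0 / lam for lam in lams]    # slope of gamma(r) = r / lam
rates = [1.0 - lam for lam in lams]    # decay exponent in exp(-(1 - lam) t)
for lam, gain, rate in zip(lams, gains, rates):
    print(f"lam={lam}: gamma(r) = {gain:.2f} r, decay exp(-{rate:.2f} t)")
```

As λ increases, both the gain slope and the guaranteed decay exponent shrink, which is precisely the tradeoff stated above.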

Applications in Numerical Analysis
In order to illustrate the way in which ISDS-like properties can be used in numerical analysis, we consider a problem from numerical dynamics: an algorithm for the computation of attractors developed by Dellnitz and Hohmann [2], in the version due to Junge [10].
Consider the differential equation (2.1) and its time-1 map Φ(x) := ϕ(1, x). Consider a rectangular domain Ω ⊂ Rⁿ and a partition C_0 of Ω into N_0 rectangular cells. In the selection step (5.1), one keeps exactly those cells C of the current collection C_k for which Φ(C′) ∩ C ≠ ∅ holds for some cell C′ ∈ C_k. For simplicity we assume here that Φ(C) can be computed, which will not be the case in general, cf. Remark 5.1 (ii) below. In the next step each cell contained in C_k is refined (e.g., by subdividing it into a number of finer rectangles) and the resulting collection of cells is denoted by C_{k+1}. Now we set k = k + 1 and restart this procedure by going to step (5.1). This generates a sequence of collections C_k, k = 0, 1, …, satisfying ∪_i C_i^{k+1} ⊆ ∪_i C_i^k. Now let A ⊂ Ω be an attractor, i.e., a minimal asymptotically stable set which attracts Ω \ A. Then it is known that the convergence d_H(C_k, A) → 0 holds in the Hausdorff metric d_H for compact sets; however, estimates for the corresponding rate of convergence are difficult to obtain.
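The selection/refinement loop described above can be sketched in a few lines. The concrete map Φ(x) = e^(−1) x on Ω = [−1, 1] (the time-1 map of ẋ = −x, with attractor A = {0}) is our own toy instance; since Φ is monotone, Φ(C) is computed exactly:

```python
# Minimal sketch of the subdivision/selection algorithm for a 1d toy case:
# Phi(x) = exp(-1) * x on Omega = [-1, 1], attractor A = {0}.
import math

def phi_image(cell):
    # Exact image of an interval cell under the monotone map Phi.
    a, b = cell
    c = math.exp(-1.0)
    return (c * a, c * b)

def step(cells):
    # Selection step: keep cells hit by the image of some cell in the
    # collection, then refine every kept cell by bisection.
    images = [phi_image(c) for c in cells]
    kept = [c for c in cells
            if any(im[0] <= c[1] and c[0] <= im[1] for im in images)]
    return [half for (a, b) in kept
            for half in ((a, (a + b) / 2.0), ((a + b) / 2.0, b))]

cells = [(-1.0, 0.0), (0.0, 1.0)]
for _ in range(8):
    cells = step(cells)

diam = max(b - a for a, b in cells)
covered = (min(a for a, _ in cells), max(b for _, b in cells))
print(len(cells), diam, covered)
```

Eight iterations shrink the collection to a small dyadic neighborhood of the attractor {0}, with the covered interval contracting geometrically, in line with the convergence d_H(C_k, A) → 0.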
Such estimates can be derived from the ISDS property. Consider the perturbed system

ẋ(t) = f(x(t)) + u(t).   (5.2)

Applications in Control Theory
As a first application, we derive an estimate on a nonlinear stability margin. In [20] it was shown that ISS implies the existence of a stability margin for a perturbed system; however, for ISS it is difficult to derive an estimate for this margin. In contrast, the ISDS property easily yields an estimate based on the ISDS robustness gain.
The proof can be found in [6].

As a second application we consider the stability of coupled systems. The following theorem is a version of the generalized small gain theorem [9, Theorem 2.1] (in a simplified setting). As for Theorem 6.1, the qualitative result (i.e., asymptotic stability of the coupled system) can be proved using the original ISS property. The advantage of ISDS lies in the estimates for the overshoot and the decay rates of the coupled system.

Theorem 6.2. Consider two systems ẋᵢ = fᵢ(xᵢ, uᵢ), i = 1, 2, of type (3.1), where the fᵢ are Lipschitz in both xᵢ and uᵢ, and let (6.1) denote the coupled system obtained by setting u₁ = x₂ and u₂ = x₁. Assume that the systems are ISDS with rates µᵢ and gains σᵢ and γᵢ, and assume that the inequalities γ₁(γ₂(r)) ≤ r and γ₂(γ₁(r)) ≤ r hold for all r > 0. Then the coupled system is globally asymptotically stable and the trajectories (x₁(t), x₂(t)) of (6.1) satisfy, for i = 1, 2, j = 3 − i, an estimate (6.2) in terms of functions δᵢ built from the rates µᵢ and the gains σᵢ, γᵢ. In particular, for all t ≥ 0, from (6.2) we obtain corresponding overshoot estimates.

Again, the proof can be found in [6].

Remark 6.1. A different characterization of the decay rates δᵢ in Theorem 6.2 can be obtained if we assume that the gains γᵢ and the class KLD functions µᵢ satisfy Assumption 3.1 for functions gᵢ. In this case, differentiating the expressions in the definition of δᵢ(r, t), i = 1, 2, with respect to t, one sees that the δᵢ are bounded from above by the solutions of one-dimensional differential equations in terms of the gᵢ and the derivatives γ′ᵢ of the γᵢ.
In the following example we illustrate the quantitative information one can obtain from Theorem 6.2 and Remark 6.1.
Example 6.1. Consider the two systems from Examples 4.2 and 4.3 with robustness gains γ₁(r) = 2r³/3 and γ₂(r) = (4r/3)^{1/3}. Then the coupled system reads

ẋ₁(t) = −x₁(t) + x₂(t)³/2,  ẋ₂(t) = −x₂(t)³ + x₁(t).

One verifies that the gain condition of Theorem 6.2 is satisfied, hence we can conclude asymptotic stability with the corresponding overshoot estimates. Using the formula from Remark 6.1 we obtain bounds involving suitable constants c₁, …, c₄ > 0. This shows that far away from the equilibrium exponential convergence can be expected, while in a neighborhood of 0 the rates of convergence in both components will slow down.
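A numerical sanity check for this example (the Euler step size and integration horizon below are our own choices):

```python
# Sanity check for Example 6.1: the small-gain condition and convergence of
# the coupled system under Euler integration.
g1 = lambda r: 2.0 * r ** 3 / 3.0              # gamma_1(r) = 2 r^3 / 3
g2 = lambda r: (4.0 * r / 3.0) ** (1.0 / 3.0)  # gamma_2(r) = (4 r / 3)^(1/3)

# gamma_1(gamma_2(r)) = (2/3)(4r/3) = 8r/9 <= r and
# gamma_2(gamma_1(r)) = (8/9)^(1/3) r <= r for all r >= 0.
assert all(g1(g2(r)) <= r for r in (0.01, 0.1, 1.0, 10.0, 100.0))
assert all(g2(g1(r)) <= r for r in (0.01, 0.1, 1.0, 10.0, 100.0))

# Simulate x1' = -x1 + x2^3/2, x2' = -x2^3 + x1 from (1, -1) up to t = 20.
x1, x2, dt = 1.0, -1.0, 1e-3
for _ in range(20000):
    x1, x2 = (x1 + dt * (-x1 + x2 ** 3 / 2.0),
              x2 + dt * (-x2 ** 3 + x1))
print(abs(x1), abs(x2))
```

Consistent with the discussion above, x₂ converges only polynomially near 0 (the cubic term dominates there), while x₁ tracks x₂³/2 and is already much smaller at the final time.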
Proof: Fix u₀ ∈ U with γ(‖u₀‖) < V(x) and consider the constant function u(t) ≡ u₀. By continuity, for all τ > 0 small enough we obtain V(ϕ(τ, x, u)) ≤ µ(V(x), τ), which implies the claim.

We cannot in general conclude the result for γ(‖u‖) = V(x) using continuity in u, because U is an arbitrary set which might in particular be discrete. The following lemma shows that we can nevertheless obtain (7.1) for γ(‖u‖) = V(x) if V is continuously differentiable. Furthermore, if V is smooth, then also the converse implication holds.

Lemma 7.3. Let µ be a class KLD function satisfying Assumption 3.1 and let γ be a class K∞ function. Then a continuous function V : Rⁿ → R⁺₀ which is smooth on Rⁿ \ {0} satisfies the inequality

V(ϕ(t, x, u)) ≤ max{µ(V(x), t), ν(u, t)}   (7.2)

for all x ∈ Rⁿ, t ≥ 0 and all u ∈ U, where ν is given by (3.2), if and only if it satisfies

γ(‖u‖) ≤ V(x)  ⇒  DV(x) f(x, u) ≤ −g(V(x))   (7.3)

for all x ∈ Rⁿ \ {0} and all u ∈ U.
The next lemma shows the existence of a Lipschitz ISDS Lyapunov function.
Lemma 7.4. If a system (3.1) is ISDS with rate µ of class KLD satisfying Assumption 3.1 and gains σ and γ of class K∞, then for each ε > 0 there exists a continuous function V : Rⁿ → R⁺₀ which is Lipschitz on Rⁿ \ {0} and satisfies

γ(‖u‖) ≤ V(x)  ⇒  DV(x) f(x, u) ≤ −(1 − ε) g(V(x))

for almost all x ∈ Rⁿ and all u ∈ U.
We now show the Lipschitz property of V. In order to do this, pick a compact set N ⊂ Rⁿ not containing the origin. From the bounds on V we can conclude that there exists a compact interval I = [c₁, c₂] ⊂ R⁺ such that for x ∈ N the infimum over b ≥ 0 in (7.7) can be replaced by the infimum over b ∈ I. Now the ISDS property implies the existence of a constant R > 0 such that ‖ϕ(t, x, u)‖ ≤ max{µ(R, t), ν(u, t)} holds for all x ∈ N, all u ∈ U and all t ≥ 0, which implies that we can restrict ourselves to those u ∈ U with ‖u‖∞ ≤ R. Furthermore, there exists T > 0 such that µ(R, t) < µ(c₁, (1 − ε)t) holds for all t ≥ T, which implies that we only have to check the inequality for ϕ(t, x, u) in (7.7) for t ∈ [0, T]. Thus the definition of V eventually reduces to a restricted infimum. Now we find constants L₁ > 0 and C₁ > 0 such that the corresponding inequalities for ϕ hold. Since δ > 0 was arbitrary and this estimate is symmetric in x₁ and x₂, we obtain the desired Lipschitz estimate with constant L_N.
Proof: This follows from Theorem B.1 in [12], observing that the proof in [12] (which requires compact U) remains valid if for any compact subset K ⊂ Rⁿ we can restrict ourselves to a compact subset of U, which is the case here since we only need to consider ‖u‖ ≤ γ⁻¹(max_{x∈K} V(x)).
Since all these expressions are continuous in ε we obtain the desired inequality.



Example 4.3.

Consider the system ẋ = f(x, u) = −x³ + u with x ∈ R, u ∈ R. Again using the Lyapunov function V(x) = |x| one obtains DV(x) f(x, 0) = −|x|³ = −V(x)³. Here we choose γ such that γ(|u|) ≤ V(x) = |x| implies |u| ≤ 3|x|³/4, i.e., γ(r) = (4r/3)^{1/3}. Then we obtain the corresponding ISDS estimate, see [5].

Remark 5.1. (i) In fact, for this estimate to hold we only need that the ISDS estimate is valid for x ∈ Ω. It can be shown that any asymptotically stable set for the unperturbed system (2.1) for which Ω lies in its domain of attraction has this "local" ISDS property for the perturbed system (5.2) for suitable µ, σ and γ and suitable perturbation range U, see [5, Theorem 3.4.6]. Hence estimate (5.3) holds for all attractors without any additional assumptions for suitable functions µ, σ and γ.

(ii) It is possible to incorporate numerical errors in the computation of the image Φ(C) in (5.1) in the analysis of the algorithm. We refer to [5, Section 6.3] for details.
Here d(x, A) := inf_{y∈A} ‖x − y‖ denotes the distance to A. Let diam(C_k) := max_{i=1,…,N_k} max_{x,y∈C_i^k} ‖x − y‖ be the maximal diameter of the cells in C_k. Then we obtain the estimate

d_H(C_k, A) ≤ max{µ(σ(d_H(Ω, A)), k), …