Continuous–time controller redesign for digital implementation: a trajectory based approach

Abstract: Given a continuous-time nonlinear closed-loop system, we investigate sampled-data feedback laws for which the trajectories of the sampled-data closed-loop system converge to the continuous-time trajectories with a prescribed rate of convergence as the length of the sampling interval tends to zero. We derive necessary and sufficient conditions for the existence of such sampled-data feedback laws and, in case of existence, provide explicit redesign formulas and algorithms for these controllers.


Introduction
One of the most popular methods for sampled-data controller design is the design of a controller based on the continuous-time plant model, followed by a discretization of the controller [3,4,11]. This method, often referred to as emulation, is attractive since the controller design is carried out in two relatively simple steps. The first (design) step is done in continuous time, completely ignoring sampling, which is easier than a design that takes sampling into account. The second step involves the discretization of the controller, and there are many methods that can be used for this purpose. Simple methods, however, may not perform well in practice, since the required sampling rate may exceed the hardware limitations even for linear systems [9,1]. This has led to a range of more advanced controller discretization techniques for linear systems, see, e.g., [1,3].
In the nonlinear case, the survey paper [12] gives an overview of a number of methods which show that, under suitable control theoretic assumptions (involving, e.g., the relative degree of the system), an exact sampled-data reproduction of the continuous-time input-output behavior is possible. An important special case is the analysis of the possibility of feedback linearization with sampled feedback control, which was studied during the 1980s (see, e.g., [2] and the references therein). Our approach in this paper is on the one hand less demanding, because we only aim at an approximate reproduction of the continuous-time response; on the other hand it is more demanding than the input-output behavior analysis, because we want to approximately reproduce the full state trajectory.
The present paper builds on results from [15], where a redesign method based on control Lyapunov functions and Fliess expansions has been developed; here, however, we avoid the use of control Lyapunov functions. The purpose of our sampled-data feedback construction lies in minimizing the difference between the continuous-time system and the sampled-data system after one sampling interval, either with respect to some auxiliary output function or with respect to the whole state. In the latter case, whose analysis is the main topic of this paper, a straightforward induction allows us to conclude closeness of trajectories at sampling instances on each compact time interval. In this approach, minimization is not meant in the sense of optimal control (for an optimal control approach to this problem we refer to the model predictive technique presented in [16,7]). Instead, minimization is to be understood asymptotically in the sense that for sampling interval length T > 0 the difference between the continuous and the sampled response at time T should be of order O(T^k) for some k > 0. The larger k is, the faster the sampled-data trajectory converges to the continuous-time one, and thus we are interested in choosing the order of convergence k as large as possible.
The contribution of this paper is twofold: on the one hand, we derive necessary and sufficient conditions, expressed in terms of Lie brackets and derivatives of the vector fields, which allow us to decide whether a certain order of convergence k of the sampled-data trajectories to the continuous-time trajectories is realizable or not. In particular, we show that rather restrictive geometric properties are needed in order to obtain an order of convergence k ≥ 4. On the other hand, if the conditions are satisfied, we present analytic formulas for the sampled-data controllers which realize this convergence rate.
The paper is organized as follows. In Section 2 we present the setting and the preliminary results from [15]. In Section 3 we consider a sampled-data feedback first proposed in [15] and present a sufficient structural condition on the system under which this feedback provides a sampled-data trajectory arbitrarily close to the continuous-time one. In Section 4 we consider a very large class of admissible sampled-data feedback laws, derive a necessary and sufficient structural condition for the difference O(T^k) for k = 4, and give a formula for the sampled-data feedback law realizing this performance. Since the conditions for larger k become very complicated, we only comment on the condition for k = 5 and instead present, in the appendix, a Maple program for checking these conditions and computing the corresponding sampled-data controllers. Finally, in Section 5 we illustrate our results by two examples. Two appendices contain a technical result and the mentioned Maple code.

Setup
We consider nonlinear control affine systems of the form

ẋ(t) = g0(x(t)) + g1(x(t)) u(t)    (2.1)

with vector fields g0, g1 : R^n → R^n and control functions u : R → R. For simplicity of exposition we consider single input systems (i.e., u(t) ∈ R), because for the multi input case the computations and expressions become much more involved.
We assume that a static state feedback u0 : R^n → R has been designed which solves some control task for the continuous-time closed loop system

ẋ(t) = g0(x(t)) + g1(x(t)) u0(x(t)).    (2.2)

The solutions of (2.2) with initial value x0 at initial time t0 = 0 will be denoted by φ(t, x0). We assume that all functions involved are smooth with sufficiently high degree of smoothness such that the derivatives taken in what follows are well defined and continuous.
Our goal is to find a feedback uT(x) such that the solution trajectories of the sampled-data closed loop system

ẋ(t) = g0(x(t)) + g1(x(t)) uT(x(t_k)),  t ∈ [t_k, t_{k+1}),    (2.3)

for the sampling sequence t_k = kT and sampling period T > 0 are close to those of the continuous-time closed loop system (2.2). More precisely, denoting the solutions of (2.3) by φT(t, x0, uT), we want to find uT such that the difference

∆φ(T, x0, uT) := ‖φ(T, x0) − φT(T, x0, uT)‖∞    (2.4)

after one sampling time step becomes small, with ‖x‖∞ = max_{i=1,...,n} |x_i| denoting the maximum norm on R^n.
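To make the one-step difference concrete, the following sketch measures it for a hypothetical scalar plant ẋ = x + u with continuous-time feedback u0(x) = −2x; this toy system is an assumption for illustration and is not taken from the paper. Both the continuous closed loop and the sampled-data loop (with u held constant over the interval) are solvable in closed form, so the difference is exact, and the estimated orders reproduce the O(T^2) rate of plain emulation and the O(T^3) rate of a first order redesign.

```python
import math

# Toy scalar plant dx/dt = x + u, i.e., g0(x) = x, g1(x) = 1,
# with continuous-time feedback u0(x) = -2x (hypothetical example).

def phi_cont(T, x0):
    # continuous closed loop dx/dt = x - 2x = -x, solved exactly
    return x0 * math.exp(-T)

def phi_sampled(T, x0, u):
    # dx/dt = x + u with constant u over [0, T]: x(T) = (x0 + u) e^T - u
    return (x0 + u) * math.exp(T) - u

def delta(T, x0, feedback):
    # one-step difference between continuous and sampled-data solution
    return abs(phi_cont(T, x0) - phi_sampled(T, x0, feedback(T, x0)))

emulation = lambda T, x: -2.0 * x            # uT = u0: expect order T^2
redesigned = lambda T, x: -2.0 * x + T * x   # uT = u0 + T*u1: expect order T^3

x0, T = 1.0, 1e-2
for name, fb in [("emulation", emulation), ("redesigned", redesigned)]:
    order = math.log2(delta(T, x0, fb) / delta(T / 2, x0, fb))
    print(name, round(order, 2))
```

The first order correction u1(x) = x used above equals (1/2) L_{g0+g1u0} u0(x) for this particular plant, matching the lowest order redesign term discussed later in the paper.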
For the feedback uT we consider the following general class of functions.
Definition 2.1 An admissible sampled-data feedback law uT is a family of maps uT : R^n → R, parameterized by the sampling period T ∈ (0, T*] for some maximal sampling period T*, such that for each compact set K ⊂ R^n the values uT(x), x ∈ K, T ∈ (0, T*], are bounded.

Note that for existence and uniqueness of the solutions to (2.3) we do not need any continuity assumptions on uT. Boundedness is, however, imposed because, from a practical point of view, unbounded feedback laws are physically impossible to implement and, from a theoretical point of view, they often lead to closed loop systems which are very sensitive to modeling or approximation errors, cf., e.g., the examples in [6,13]. A special class of these admissible feedback laws, which was proposed in [15], is given by

uT(x) = u0(x) + T u1(x) + T^2 u2(x) + ... + T^M uM(x)    (2.5)

with u0 from (2.2) and u1, ..., uM : R^n → R being locally bounded functions.
In the present paper we are in particular interested in asymptotic estimates, i.e., in the behavior of the difference (2.4) for T → 0. For this purpose we use the following definition.

Definition 2.2 (i) For some compact set K ⊂ R^n we write ∆φ(T, x0, uT) = O(T^k) on K if there exists C > 0 such that the inequality ∆φ(T, x0, uT) ≤ C T^k holds for all x0 ∈ K.
(ii) We write ∆φ(T, x0, uT) = O(T^k) if (i) holds for each compact set K ⊂ R^n, where the constant C in (i) may depend on the choice of K.
If we are able to establish ∆φ(T, x0, uT) = O(T^k), then it follows by a standard induction argument that on each interval [0, t*] we obtain

‖φ(t, x0) − φT(t, x0, uT)‖∞ = O(T^{k−1})    (2.6)

for all times t = iT, i ∈ N, with t ∈ [0, t*]. In particular, this "closeness of trajectories" allows us to prove that several stability concepts carry over from φ to φT in a semiglobal practical sense, see [14].
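The loss of one power of T when passing from one sampling step to a whole interval [0, t*] can be observed numerically. The sketch below iterates the exact one-step maps of a hypothetical scalar loop ẋ = x + u with u0(x) = −2x (an illustrative assumption, not an example from the paper): a one-step error of order T^k accumulates over the roughly t*/T steps to a trajectory error of order T^{k−1}.

```python
import math

def max_error_on_interval(T, x0, feedback, t_star=1.0):
    # iterate the exact one-step map of dx/dt = x + u (u frozen per step)
    # and record the worst deviation from the continuous solution x0*e^{-t}
    steps = round(t_star / T)
    x, err = x0, 0.0
    for j in range(steps):
        u = feedback(T, x)
        x = (x + u) * math.exp(T) - u
        err = max(err, abs(x - x0 * math.exp(-(j + 1) * T)))
    return err

emulation = lambda T, x: -2.0 * x            # one-step O(T^2) -> trajectory O(T)
redesigned = lambda T, x: -2.0 * x + T * x   # one-step O(T^3) -> trajectory O(T^2)

for name, fb in [("emulation", emulation), ("redesigned", redesigned)]:
    r = max_error_on_interval(0.01, 1.0, fb) / max_error_on_interval(0.005, 1.0, fb)
    print(name, round(math.log2(r), 2))
```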
In order to establish estimates for (2.4) we consider a smooth real valued function h : R^n → R and derive estimates for the differences

∆h(T, x0, uT) := |h(φ(T, x0)) − h(φT(T, x0, uT))|.    (2.7)

The function h plays the role of an auxiliary output function, but it does not need to have any physical meaning as an output of the system. Applying the respective results to the specific functions h_j(x) := x_j, j = 1, ..., n, and the respective differences ∆h_j, we are able to conclude the desired estimate for ∆φ, because if ∆h_j(T, x0, uT) ≤ C holds for some constant C > 0 and all j = 1, ..., n, then ∆φ(T, x0, uT) ≤ C follows.
For a problem similar to the one posed in this paper, in [15] the feedback law

u^M_T(x) = Σ_{i=0}^{M} T^i u_i(x),    (2.9)

i.e., (2.5) with

u_i(x) = (1/(i+1)!) L^i_{g0+g1u0} u0(x),    (2.10)

was discussed. Note that the results in [15] were formulated for Lyapunov functions V instead of general real valued functions h; however, the usual Lyapunov function properties were only needed for the interpretation of the results and not for the proofs. Hence, we can in particular apply Theorem 4.11 of this reference to our setting, which shows that for M = 0 (note that u^0_T = u0) the estimate ∆h(T, x0, u^0_T) = O(T^2) holds, while for M = 1 the estimate ∆h(T, x0, u^1_T) = O(T^3) holds.
In Remark 4.13 of [15] it was observed that the above estimates for ∆h using (2.9) do not hold in general for M ≥ 2. It is the purpose of the present paper to find necessary and sufficient conditions under which it is possible to generalize these results to larger M. Furthermore, we will discuss whether other choices for the feedback uT different from (2.9) can provide better asymptotic estimates.
Our analysis is based on Theorem 3.1 from [15]. In order to state this theorem we need to introduce some notation: for a vector field g : R^n → R^n and a scalar function h : R^n → R we denote the directional derivative of h in the direction of g by

L_g h(x) := Dh(x) g(x),

cf. Isidori [8]. Furthermore, for a multi-index (n_0, ..., n_M) with n_0 + ... + n_M = s we use the multinomial coefficients s!/(n_0! · ... · n_M!).

Theorem 2.3 Consider the system (2.1), a smooth function h : R^n → R, the continuous closed loop system (2.2) and the sampled-data closed loop system (2.3) with controller uT given by (2.5). Then, for sufficiently small T, we can write

(h(φT(T, x, uT)) − h(x))/T = Σ_{i=0}^{M} T^i ( p_i(x) + L_{g1} h(x) u_i(x) ) + O(T^{M+1}),    (2.11)

where p_0(x) = L_{g0} h(x) and the terms p_s(x), s ≥ 1, are defined in (2.12).
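The Lie series machinery behind Theorem 2.3 can be checked symbolically. The following sketch uses a hypothetical scalar closed loop ẋ = −x with output h(x) = x^2 (an assumption for illustration, not from the paper) and verifies that the iterated directional derivatives L^i_f h reproduce the Taylor coefficients of h(φ(t, x)) in t.

```python
import sympy as sp

x, T = sp.symbols('x T')

def lie(f, expr, var):
    # directional derivative L_f expr = f * d(expr)/dx (scalar case)
    return f * sp.diff(expr, var)

# hypothetical closed loop vector field dx/dt = -x and output h(x) = x^2
f = -x
h = x**2

# Lie series: h(phi(T, x)) = sum_i T^i/i! * L_f^i h(x)
series, cur = 0, h
for i in range(5):
    series += T**i / sp.factorial(i) * cur
    cur = lie(f, cur, x)

# exact flow phi(T, x) = x*exp(-T), so h(phi(T, x)) = x^2 * exp(-2T)
exact = sp.series(x**2 * sp.exp(-2*T), T, 0, 5).removeO()
print(sp.simplify(sp.expand(series - exact)))  # 0
```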

A sufficient condition
Our first main result is a condition under which the sampled-data feedback law u^M_T from (2.9) yields trajectories which are arbitrarily close to the continuous-time ones. In this theorem we use the Lie bracket of two vector fields f, g : R^n → R^n, defined by

[f, g](x) := Dg(x) f(x) − Df(x) g(x),

cf. Isidori [8].
Theorem 3.1 Consider the system (2.1), the continuous closed loop system (2.2) and the sampled-data closed loop system (2.3) with controller u^M_T given by (2.9) for some M ∈ N. Assume that the condition

[g0, g1] = 0    (3.1)

holds, i.e., that the vector fields g0 and g1 commute. Then

∆h(T, x0, u^M_T) = O(T^{M+2})    (3.2)

holds for every smooth function h : R^n → R, and consequently also

∆φ(T, x0, u^M_T) = O(T^{M+2}).    (3.3)

Proof: The proof of this theorem relies on the (technical) Proposition 6.1, which can be found in the appendix. Under condition (3.1), Proposition 6.1 states that

p_i(x) + L_{g1} h(x) u_i(x) = (1/(i+1)!) L^{i+1}_{g0+g1u0} h(x)    (3.4)

with p_i from Theorem 2.3 and u_i from (2.10). Inserting these terms into the Taylor expansion of h(φ(t, x)) at t = 0 and evaluating the expansion at t = T yields

h(φ(T, x)) = h(x) + Σ_{i=0}^{M} (T^{i+1}/(i+1)!) L^{i+1}_{g0+g1u0} h(x) + O(T^{M+2}).    (3.5)

On the other hand, multiplying (2.11) by T and adding h(x) we obtain

h(φT(T, x, u^M_T)) = h(x) + Σ_{i=0}^{M} T^{i+1} ( p_i(x) + L_{g1} h(x) u_i(x) ) + O(T^{M+2}).    (3.6)

Now, comparing (3.5) and (3.6) we obtain the assertion.

Remark 3.2
The above proof also shows that, unless g1(x) = 0 and up to terms of order O(T^{M+1}), under condition (3.1) the feedback law u^M_T(x) from (2.9) is the only feedback law for which (3.3) holds: in order to see this, consider an arbitrary admissible feedback law ũT. Comparing (3.5) and (3.6) for M = 0 yields that if g1(x) ≠ 0 then ũT(x) must be of the form ũT(x) = u0(x) + T ũ1(x) in order to satisfy (3.3) with M = 0. Repeating this argument inductively, one sees that for each M ∈ N the feedback law ũT must be of the form ũT(x) = u^M_T(x) + O(T^{M+1}) in order to satisfy (3.3).
Remark 3.3 It should be noted that condition (3.1) is well known in the numerical approximation theory of control systems, cf., e.g., [5,17]. We will show in Corollary 4.12, below, that it is also necessary for (3.3) in those points x in which the derivative of u0 along the solutions φ does not vanish.
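Condition (3.1) is straightforward to check symbolically. The sketch below computes the Lie bracket for two hypothetical pairs of planar vector fields (illustrative assumptions, not the paper's examples), one commuting and one not.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
xs = sp.Matrix([x1, x2])

def bracket(f, g, xs):
    # Lie bracket [f, g](x) = Dg(x) f(x) - Df(x) g(x)
    return g.jacobian(xs) * f - f.jacobian(xs) * g

# hypothetical vector field pairs, not taken from the paper
g0_a, g1_a = sp.Matrix([x2, 1]), sp.Matrix([1, 0])   # commuting pair
g0_b, g1_b = sp.Matrix([x2, 0]), sp.Matrix([0, 1])   # non-commuting pair

print(bracket(g0_a, g1_a, xs).T)   # condition (3.1) holds for this pair
print(bracket(g0_b, g1_b, xs).T)   # condition (3.1) fails for this pair
```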

A necessary and sufficient condition
In this section we investigate a necessary and sufficient condition for the existence of an admissible feedback law uT which achieves

∆h(T, x0, uT) = O(T^{M+2}) on K    (4.1)

or, respectively,

∆φ(T, x0, uT) = O(T^{M+2}) on K    (4.2)

and provide a formula for this feedback law. Since the necessary and sufficient condition turns out to be much more involved than the sufficient condition (3.1), we restrict ourselves to the case M = 2. This is the first nontrivial case, given that (4.1) and thus (4.2) for M ≤ 1 are always achievable by (2.9) without any further conditions, cf. [15, Theorem 4.11].
For the necessary and sufficient condition it turns out that the cases (4.1) and (4.2) require different conditions, which is why we state them in two separate theorems. We start with (4.1).
Theorem 4.1 Consider the system (2.1), the continuous closed loop system (2.2), a smooth function h : R^n → R and a compact set K ⊂ R^n.

If the condition

|L_{[g0,g1]} h(x) · L_{g0+g1u0} u0(x)| ≤ c |L_{g1} h(x)|    (4.3)

holds for some constant c ≥ 0 and all x ∈ K, then there exists an admissible feedback law uT : R^n → R satisfying (4.1) on K with M = 2, given by (4.4). In this case, any feedback uT : R^n → R of the form (4.4), with u^1_T and u^2_T from (2.9) and the correction term satisfying (4.6)-(4.7), satisfies (4.1) on K with M = 2. Conversely, if there exists an admissible feedback law uT : R^n → R satisfying (4.1) on K with M = 2, then (4.3) holds for all x ∈ cl K. In this case, this feedback uT must be of the form (4.4) for all x ∈ K.
Proof: From the Taylor expansion of h(φ(t, x)) at t = 0 we obtain the identity (4.5) with u_i from (2.10) and p_i from (2.12). We use the identity L_{g0} L_{g1} h − L_{g1} L_{g0} h = L_{[g0,g1]} h and compare the coefficients of (4.5) with (2.11) inductively for i = 0, 1, 2. For x ∉ cl K this yields that the proposed feedback realizes (4.1) with M = 2 provided (4.3) holds.
For x ∈ cl K this coefficient analysis yields that any feedback ũT of the form (4.6), with u0 and u1 from (2.10) and ũ2(x) satisfying (4.7) for u1 from (2.10), satisfies (4.1) with M = 2 on O. The advantage of specifying β = 1 in (4.4) lies in the fact that this choice will also work on ∂K (i.e., in particular on ∂O). In contrast to this, the choice β = 2/3, which is the only correct choice on O if (4.3) is not satisfied, will not in general work on ∂K.
Since in what follows we do not need necessary conditions outside K, we will not elaborate on this topic in further detail.
Remark 4.3 On K, the necessary and sufficient condition (4.3) can be interpreted as follows: for x ∈ K the control ũ2 can always be used in order to induce any third order correction. However, if L_{g1} h(x_n) → 0 for some sequence x_n ∈ K, then the control effort needed for this purpose may become unbounded, which may make the resulting feedback inadmissible in the sense of Definition 2.1. Condition (4.3) guards against this situation.
Remark 4.5 A geometric explanation of the difference between the sufficient condition (3.1) and the necessary and sufficient condition (4.3) on K can be given by looking at the coefficient of T^3 in the Taylor expansion (4.5). On the one hand, this coefficient contains terms which can always be compensated for by a suitable choice of uT; these are L_{g1} h(x) u2(x) and the terms contained in p_2. On the other hand, it contains (up to a positive constant factor) the expression

−L_{[g0,g1]} h(x) · L_{g0+g1u0} u0(x),

which reflects the change of h in the direction −[g0, g1] with speed L_{g0+g1u0} u0(x) and which forms a part of the motion of the trajectory of (2.2). This direction can in general not be generated using a constant linear combination of g0 and g1, which is why one could call this expression the indirect motion.
The difference between the conditions is now the following: (3.1) rules out this indirect motion, while condition (4.3) ensures that its effect on h can be compensated for by an admissible sampled-data feedback.
Remark 4.6 From the continuity of the expressions in (4.3) it is easily seen that condition (4.3) is always satisfied if L_{g1} h does not vanish on K. In particular, in many practical examples it might be possible to choose a reasonable set K on which this holds. Then, our proposed feedback (4.4) will yield ∆h = O(T^3) on K and ∆h = O(T^2) outside K, i.e., we can improve the sampled-data performance with respect to h at least in parts of the state space. It should, however, be mentioned that for arbitrary real valued functions h this is of limited use, because in general it will not be possible to inductively conclude an estimate analogous to (2.6) for the difference |h(φ) − h(φT)|. An exception is the case where h = V is a Lyapunov function for (2.2), because in this case the proposed control law renders the Lyapunov difference along the sampled-data trajectories close to that along the continuous-time ones. For a detailed discussion of this topic we refer to [15].
The reason that the estimate for ∆h is rather easy to obtain is that the values h(φ) and h(φT) to be matched are one-dimensional. The necessary and sufficient condition becomes much more restrictive if we consider ∆φ, as the following theorem shows.
Theorem 4.7 Consider the system (2.1), the continuous closed loop system (2.2) and a compact set K ⊂ R^n satisfying K = cl int K. Then there exists an admissible feedback law uT : R^n → R satisfying (4.2) on K with M = 2 if and only if there exists a bounded function α : K → R satisfying

[g0, g1](x) L_{g0+g1u0} u0(x) = α(x) g1(x).    (4.9)

In this case, any feedback uT : R^n → R of the form (4.10), with u^2_T from (2.9), satisfies (4.2) with M = 2. Furthermore, each feedback satisfying (4.2) with M = 2 is of the form (4.10) for x ∈ K̃ (with K̃ := {x ∈ K : g1(x) ≠ 0}), and the function α in (4.9) can be chosen as α(x) = 0 for x ∉ K̃.
Proof: We first show that under condition (4.9) any feedback of the form (4.10) satisfies the assertion. Throughout the proof, K̃ := {x ∈ K : g1(x) ≠ 0}.

First note that for x ∉ cl K̃ the feedback value uT(x) is indeed arbitrary. This follows since on K \ cl K̃ the control system is given by ẋ = g0(x). Thus, on the open set int (K \ cl K̃) the Taylor expansions of φ(t, x) and φT(t, x, uT) coincide to any order, regardless of the values of u0 and uT, i.e., we obtain (4.2) for any M > 0 for arbitrary uT. By continuity of the expressions in the Taylor expansion this property carries over to cl int (K \ cl K̃), which contains K \ cl K̃ because we have assumed K = cl int K.

It is hence sufficient to show that uT satisfies the assertion for x ∈ cl K̃. Assume that the function α exists and is bounded. Fix i ∈ {1, ..., n} and consider the function h_i(x) = x_i.

Writing v_i for the i-th component of a vector v ∈ R^n, a simple computation shows that whenever g1(x)_i ≠ 0, the function α from (4.9) satisfies the corresponding equation for h = h_i. If g1(x)_i = 0 then the feedback is of the form u^1_T + O(T^2) for u^1_T from (2.9). Thus, the feedback is of the form (4.4) for h = h_i and we can use Theorem 4.1 to conclude ∆h_i(T, x, uT) = O(T^4) for all x ∈ cl K̃. Since i ∈ {1, ..., n} was arbitrary, this implies ∆φ(T, x, uT) = O(T^4). Furthermore, again by Theorem 4.1, any feedback yielding ∆φ(T, x, uT) = O(T^4) must be of the form (4.10) if g1(x)_i ≠ 0, and since for each x ∈ K̃ we have g1(x)_i ≠ 0 for some i ∈ {1, ..., n}, it must be of the form (4.10) for all x ∈ K̃.

Conversely, assume that an admissible feedback law uT satisfying (4.2) on K with M = 2 exists. Then for each x ∈ K̃ we have g1(x)_i ≠ 0 for some suitable i ∈ {1, ..., n}. Thus, applying Theorem 4.1 for h = h_i we obtain that uT must be of the form (4.4) for h = h_i and some i = 1, ..., n, i.e., of the form (4.10). In particular, a function α(x) meeting (4.9) exists on K̃, and since uT is admissible this function α must be bounded on K̃. On the open set int (K \ cl K̃) we have g1 ≡ 0, thus also [g0, g1] ≡ 0, which by continuity also holds on cl int (K \ cl K̃) ⊇ K \ cl K̃. Hence we can choose α(x) = 0 for x ∈ K \ cl K̃. This defines a bounded function α for x ∈ K̃ ∪ (K \ cl K̃) = K \ (cl K̃ \ K̃). It remains to define α on cl K̃ \ K̃. Since cl int K = K and K̃ is open relative to K, we obtain cl K̃ = cl int K̃. Thus for any x ∈ cl K̃ we find a sequence x_n → x with x_n ∈ K̃, i.e., x_n ∉ cl K̃ \ K̃. Since α is already defined on this set, satisfies (4.9) and is bounded, and since g1(x) = 0 for x ∈ cl K̃ \ K̃, passing to the limit in (4.9) yields [g0, g1](x) L_{g0+g1u0} u0(x) = 0. Thus we can set α(x) = 0 on cl K̃ \ K̃ in order to satisfy (4.9). This finishes the proof.

Remark 4.8 While the condition ensuring ∆h = O(T^4) is still relatively easy to satisfy, at least in parts of the state space, cf. Remark 4.6, the condition on the existence of α : K → R with (4.9) is rather strong. If L_{g0+g1u0} u0(x) ≠ 0 (i.e., if the continuous-time feedback is not constant up to second order terms along the solution), it says that the indirect motion generated by the Lie bracket [g0, g1] (cf. Remark 4.5) must be contained in the span of g1.

Remark 4.9 Conditions for M ≥ 3 can be obtained in a similar way, but they become more and more involved because the number of higher order Lie brackets to be considered grows exponentially. For instance, for M = 3 the condition analogous to (4.9) is the existence of a bounded function β : K → R satisfying a corresponding equation for each i = 1, ..., n, with h_i(x) = x_i and α from (4.9). This is in contrast to the sufficient condition (3.1), which implies that all higher order Lie brackets appearing in the formulas vanish and which therefore works for all M ≥ 2.
Remark 4.10 Despite the fact that the conditions for higher order sampled-data feedback control become rather complicated, for a given continuous-time closed loop system it is possible to give a rather simple recursive Maple procedure which checks the conditions for arbitrary order and calculates the corresponding sampled-data feedback, if possible. The Maple code for this purpose is given in the appendix.

Remark 4.11
The conditions for sampled feedback linearizability derived in [2] bear some similarities with the conditions we derived here. In particular, necessary conditions for sampled feedback linearizability derived in [2] (under varying assumptions) include conditions like [g1, [g0, g1]] = 0 and [g1, [g0, g1]] = α g1 for an analytic function α : R^n → R. However, apart from the obvious similarity of these conditions to our conditions (3.1) and (4.9), and from the fact that geometric conditions on the vector fields appear naturally in both problems, there does not seem to be a deeper connection. In fact, in our opinion such a connection cannot be expected because the problems differ in two important points: on the one hand, our results give asymptotic estimates, while sampled feedback linearizability is an exact property and thus more difficult to establish. On the other hand, feedback linearization allows for additional coordinate changes, which add more flexibility to the problem and thus simplify it. Thus, neither problem follows from the other, and hence one cannot expect that the needed conditions imply each other in one way or the other.
Using the results in this section, we now return to the feedback u^M_T from (2.9) and show that condition (3.1) is also necessary for (3.3), at least for a suitable set of states x.

Corollary 4.12 Consider the system (2.1), the continuous closed loop system (2.2) and the sampled-data closed loop system (2.3) with controller u^M_T given by (2.9) for some M ≥ 2. Assume that (3.3) holds. Then condition (3.1) holds in each x ∈ R^n for which L_{g0+g1u0} u0(x) ≠ 0.
Proof: If (3.3) holds for some M ≥ 2, then in particular it holds for M = 2 on any compact ball K = cl B_r(0). Thus, from Theorem 4.7 we obtain the existence of a function α satisfying (4.9). Furthermore, we obtain that u^M_T = uT + O(T^3) for uT from (4.10), which is only possible if α(x) = 0 wherever g1(x) ≠ 0. Since α(x) can be chosen as 0 elsewhere by Theorem 4.7, we obtain α ≡ 0 on K. This implies that the right hand side of (4.9) equals 0 for all x ∈ K, and since K is an arbitrary compact ball we obtain [g0, g1](x) L_{g0+g1u0} u0(x) = 0 for each x ∈ R^n. This implies the assertion.

Examples
We illustrate our results by two examples. The first example is a simple artificial system for which (3.1) does not hold but (4.9) holds. Here one computes the relevant Lie brackets and derivatives, which immediately implies that (4.9) holds on every compact set K. Figure 5.1 shows the x1-component of the trajectories resulting from the sampled-data feedback laws for M = 0, 1, 2 for x0 = (−1, 1)^T and sampling interval T = 0.2; the line without symbols is the continuous-time trajectory.
Note that at time t = 1, i.e., after 1/T sampling intervals, we expect the difference between the continuous-time solution and the sampled-data solution to be of order T^{M+1}. Figure 5.2 shows a log-log plot of these differences, which confirms that the respective controllers yield this accuracy.

Our second example is a second order version of the Moore-Greitzer jet engine model. Based on the continuous-time stabilizing backstepping feedback law u0(x) = −7x1 + 5x2 derived in [10, Section 2.4.3], several sampled-data controllers were derived in [15]. Despite the fact that these controllers show good performance, we can now prove that no sampled-data feedback uT can achieve (4.2) with M = 2. This follows because for this system [g0, g1](x) L_{g0+g1u0} u0(x) = α(x) g1(x) cannot hold for any scalar function α : R^2 → R. Thus condition (4.9) is violated and consequently such a controller uT cannot exist.
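The violation of (4.9) for the jet engine example can be checked symbolically. The right-hand side used below is the standard textbook form of the second order Moore-Greitzer model and is an assumption here, since the paper does not restate it explicitly. With g1 = (0, 1)^T, condition (4.9) requires the first component of [g0, g1] L_{g0+g1u0} u0 to vanish identically, which it does not.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
xs = sp.Matrix([x1, x2])

# Assumed model form (Krstic et al.), not restated in the paper:
#   dx1/dt = -x2 - (3/2) x1^2 - (1/2) x1^3,   dx2/dt = u
g0 = sp.Matrix([-x2 - sp.Rational(3, 2)*x1**2 - sp.Rational(1, 2)*x1**3, 0])
g1 = sp.Matrix([0, 1])
u0 = -7*x1 + 5*x2                     # backstepping feedback from [10]

def bracket(f, g, xs):
    # Lie bracket [f, g](x) = Dg(x) f(x) - Df(x) g(x)
    return g.jacobian(xs) * f - f.jacobian(xs) * g

fc = g0 + g1 * u0                                # continuous closed loop field
L_u0 = (sp.Matrix([u0]).jacobian(xs) * fc)[0]    # L_{g0+g1*u0} u0

# left-hand side of (4.9): [g0, g1] * L_{g0+g1*u0} u0
lhs = sp.simplify(bracket(g0, g1, xs) * L_u0)

# (4.9) asks for lhs = alpha(x) * g1 with g1 = (0, 1)^T, i.e. lhs[0] = 0;
# printing shows a nonzero polynomial, so no such alpha exists
print(sp.simplify(lhs[0]))
```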
Appendix A: A technical result

Proposition 6.1 Consider the continuous closed loop system (2.2) with solutions φ(t, x0), and assume that condition (3.1), i.e., [g0, g1] = 0, holds. Then the equation

p_s(x) + L_{g1} h(x) u_s(x) = (1/(s+1)!) L^{s+1}_{g0+g1u0} h(x)    (6.1)

holds for all s ∈ N with p_s from Theorem 2.3 and u_s from (2.10).
Proof: We prove the assertion by induction over s ∈ N. For s = 1 both sides can be computed directly and coincide, which shows the claim for s = 1. Now we perform the induction step s − 1 → s. For the left hand side of the asserted equality we obtain an expression containing the term L_{g0+g1u0} p_{s−1}(x, u0, ..., u_{s−2}). Omitting the arguments for brevity, we thus have to show the identity (6.2) for p_s from (2.12).
In order to prove (6.2) we proceed in the following way: we consider the summands of the outer sum in the definition of p_s, given by (6.3) for k = 1, ..., s, and show that (6.3) consists of exactly those terms from the left hand side of (6.2) which contain exactly k + 1 L_{g_i} operators applied to h. Since each term on the left hand side of (6.2) contains at least 2 and at most s + 1 L_{g_i} operators, this proves (6.2).
We start with k = 1. In this case, if s − 1 is even, then (6.3) becomes (6.4); if s − 1 is odd, it becomes (6.5). All the terms in these expressions contain exactly two L_{g_i} operators. Collecting the terms with exactly two L_{g_i} operators on the left hand side of (6.2), using (6.4) and (6.5) for p_{s−1} and the identity (6.6) implied by (3.1), one obtains that the terms with exactly two L_{g_i} operators on the left hand side of (6.2) equal (6.3) for k = 1.

Now we prove the same property for 2 ≤ k ≤ s. Using (6.6) we can rewrite the summands in (6.3) as (6.7), with i = |I_k| ranging from 0 to k + 1. The expression (6.7) contains exactly i L_{g0} operators and k + 1 − i L_{g1} operators. On the left hand side of (6.2), using again (6.6), the terms containing exactly this number of operators can be written as (6.8). Thus, we have to show that (6.7) and (6.8) coincide for i = 0, ..., k + 1. To this end we fix one summand in (6.7), i.e., one multi-index (n_0, ..., n_M), and collect all summands in (6.8) containing the control product Π_{j=0}^{s−1} u_j^{n_j}. Once we have shown that these summands coincide, equality of (6.7) and (6.8) follows, because one easily checks that (6.8) does not contain control products which do not appear in (6.7).
In order to collect the appropriate summands in (6.8) we have to identify the indices ν for which the control products in the three terms in (6.8) equal Π_{j=0}^{s−1} u_j^{n_j}. For the first term in (6.8) this simply amounts to setting ν = (n_0, ..., n_M), and in the second term in (6.8) one obtains the right product by setting ν = (n_0 − 1, n_1, ..., n_M), provided n_0 ≥ 1; otherwise this term does not contain this product. The last term in (6.8) is the most complicated to treat: here, by the definition of the u_j in (2.9), the derivative of u_j^{n_j} appearing in the last expression evaluates to (6.9), where it is sufficient to take the sum over l up to s − 2, because |ν| = s − (k + 1) and k ≥ 2 imply n_{s−1} = 0. Thus, in order to obtain Π_{j=0}^{s−1} u_j^{n_j} in the third term, we need to take the multi-indices ν = (n_0, ..., n_{l−1}, n_l + 1, n_{l+1} − 1, n_{l+2}, ..., n_M) for all l = 0, ..., s − 2 with n_{l+1} ≥ 1 (if n_{l+1} = 0 then the third term does not contain this product).
Observe that (6.9) and (6.10) are equivalent also in the case that n_0 = 0 or n_{l+1} = 0 for some l = 0, ..., s − 2, because in this case the corresponding summand in (6.9) vanishes by our convention and the summand in (6.10) vanishes, too. This shows that (6.7) and (6.8) coincide also for k = 2, ..., s, which implies (6.2) and thus finishes the proof.

Appendix B: Maple code
In this appendix we provide a Maple code which checks the conditions for the existence of a sampled-data controller satisfying (4.2) and computes the controller if this condition is satisfied.
The algorithm has the following structure:

1 set uT = u^1_T from (2.9)
2 for p from 2 to M do
3   for k from 1 to n do
4     compute the Taylor approximations Tc ≈ φ(T, x) and Td(u) ≈ φT(T, x, u) up to order T^{p+2}
5     compute the difference ∆(u_solve) = Tc − Td(uT + T^p u_solve) and truncate all terms of order O(T^{p+3}) and higher
6     solve ∆(u_solve) = 0 and set u_test_k = u_solve
7     if k ≥ 2 and u_test_k ≠ u_test_{k−1} stop
8   end of k-loop
9   set uT := uT + T^p u_test_n
10 end of p-loop

Since the equation to be solved in Step 6 is linear in u_solve, Maple will return a solution provided u_solve appears in this equation. During this procedure the algorithm does not check the boundedness of u_solve; hence the boundedness of the resulting feedback uT has to be checked by the user.

Starting from uT defined in Step 1, iteratively for p = 2, ..., M the procedure computes feedback terms u_test_k such that (4.1) holds for M = p, the feedback uT + T^p u_test_k and the functions h_k(x) = x_k, k = 1, ..., n. If all u_test_k, k = 1, ..., n, can be computed and coincide, then the new feedback uT + T^p u_test_n satisfies (4.2) for M = p.
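The recursion in Steps 1-10 can also be sketched in Python/sympy instead of Maple. The sketch below implements only the lowest redesign order (matching the trajectories componentwise up to and including the T^2 coefficient) and runs on a hypothetical test system, so it illustrates the mechanism of Steps 4-7 rather than reimplementing the full procedure.

```python
import sympy as sp

def lie(f, expr, xs):
    # directional derivative L_f expr = sum_j f_j * d(expr)/dx_j
    return sum(fj * sp.diff(expr, xj) for fj, xj in zip(f, xs))

def lie_pow(f, expr, xs, i):
    # iterated directional derivative L_f^i expr
    for _ in range(i):
        expr = lie(f, expr, xs)
    return expr

def redesign_first_order(g0, g1, u0, xs):
    """Find u_solve such that uT = u0 + T*u_solve matches the continuous
    trajectory componentwise up to the T^2 coefficient (Steps 4-7 for p = 1)."""
    T, u, us = sp.symbols('T u u_solve')
    fc = [a + b * u0 for a, b in zip(g0, g1)]   # continuous closed loop field
    fd = [a + b * u for a, b in zip(g0, g1)]    # u is held constant over the interval
    sols = []
    for xk in xs:                                # k-loop over the functions h_k = x_k
        cont = sum(T**i / sp.factorial(i) * lie_pow(fc, xk, xs, i) for i in range(3))
        samp = sum(T**i / sp.factorial(i) * lie_pow(fd, xk, xs, i) for i in range(3))
        samp = samp.subs(u, u0 + T * us)         # plug in uT = u0 + T*u_solve
        eq = sp.expand(cont - samp).coeff(T, 2)  # T^0 and T^1 cancel by construction
        if us in eq.free_symbols:                # only components that constrain u_solve
            sols.append(sp.solve(sp.Eq(eq, 0), us)[0])
    if not sols:
        raise ValueError("u_solve does not appear in any component equation")
    # all constraining components must agree, otherwise no matching exists (Step 7)
    assert all(sp.simplify(s - sols[0]) == 0 for s in sols)
    return sp.simplify(sols[0])

# hypothetical scalar test case dx/dt = x + u with u0(x) = -2x (not from the paper)
x = sp.symbols('x')
print(redesign_first_order([x], [sp.Integer(1)], -2*x, [x]))  # x
```

For this test case the computed correction equals (1/2) L_{g0+g1u0} u0(x), in agreement with the lowest order term of the redesigned feedback.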
uT := proc(g0::Vector, g1::Vector, u0::algebraic, dim::algebraic, M::algebraic)
local uT, uTc, u, fc, fd, Lc, Tc, Ld, Td, xv, p, k, i, hd, hdiff, hdiffu, hdiffus, utest, failure, ord;
# define the continuous and sampled-data vector field for one sampling period T
fc := Vector(dim); fd := Vector(dim);
for k from 1 to dim do
  fc[k] := g0[k] + g1[k]*u0:
  fd[k] := g0[k] + g1[k]*uTc:
od;
# define an auxiliary vector for computing derivatives
xv := Vector(dim, symbol=x);
# define the zeroth and first order term of the sampled-data controller uT
uT := simplify(u0 + T*evalm(jacobian(Vector([u0]),xv)&*fc)[1]/2):
ord := M;
for p from 2 to M do
  for k from 1 to dim do
    Lc[0] := [x[k]]: Tc[0] := Lc[0]:
    Ld[0] := [x[k]]:

Remark 4.2 Note that condition (4.3) is necessary and sufficient on cl K but only sufficient on K \ cl K. This can be verified using the approach in [12, Section 3.1 and the references therein] based on the relative degree: assume, for instance, the existence of an open subset O ⊂ K \ cl K on which (2.1) has relative degree r = 2, i.e., L_{g1} h(x) = 0 and L_{g1} L_{g0} h(x) ≠ 0 for all x ∈ O. Then, by straightforward computations one sees that on O the feedback from (2.10) satisfies (4.1) with M = 2 for each x ∈ O, regardless of whether (4.3) holds, which shows that this condition is in general not necessary outside cl K. At first glance, (4.8) seems to contradict (4.4), because the two feedback laws are different for x ∈ O. However, a closer examination reveals that under condition (4.3) in fact for any β ∈ R the feedback (4.8) is an admissible feedback satisfying (4.1) on K with M = 2.

Conversely, assume that uT is an admissible feedback satisfying (4.1) on K with M = 2. Then, this feedback must satisfy the conditions (4.6)-(4.7). Since uT is admissible, it is in particular bounded, and thus (4.7) implies (4.3) for x ∈ K. Since all expressions in (4.3) are continuous in x, we also obtain (4.3) for x ∈ cl K. In addition, the inductive comparison of (4.5) with (2.11) shows that any feedback ũT realizing (4.1) with M = 2 must satisfy (4.6)-(4.7) for x ∈ K, which shows that uT must be of the asserted form.