Discrete Feedback Stabilization Of Nonlinear Control Systems At A Singular Point

For continuous-time nonlinear control systems with constrained control values, stabilizing discrete feedback controls are discussed. It is shown that under an accessibility condition exponential discrete feedback stabilizability is equivalent to open-loop uniform exponential asymptotic controllability. A numerical algorithm for the computation of discrete feedback controls is presented and a numerical example is discussed.


Introduction
In this paper we consider nonlinear control systems of the form

$\dot y(t) = f(y(t), u(t)), \quad y(0) = y_0 \in \mathbb{R}^d \setminus \{0\}, \quad u(\cdot) \in \mathcal{U} := \{u : \mathbb{R} \to U \text{ measurable}\}, \quad U \subset \mathbb{R}^m \text{ compact}$   (1)

where $f : \mathbb{R}^d \times \mathbb{R}^m \to \mathbb{R}^d$ is $C^2$ in $y$ and Lipschitz in $u$.
We assume that $x^*$ is a singular point of $f$, i.e. that $f(x^*, u) = 0$ for all $u \in U$. Our goal is to obtain a feedback control strategy such that $x^*$ becomes an asymptotically stable equilibrium point of the closed loop system.
The problem of stabilization of nonlinear control systems has been considered for a long time by various authors (see e.g. Bacciotti [1] for an overview).
In this paper we restrict ourselves to the case where the system is exponentially asymptotically controllable to $x^*$ by an open loop control for each initial value $x_0 \in \mathbb{R}^d$.
The question that arises then is whether under this condition there exists a feedback control such that the corresponding closed loop system is exponentially stable. In general this is not possible using a continuous feedback law, cf. [2]. Hence we will use a more general feedback concept, which we call discrete feedback controls.
In mathematical control theory discrete feedback controls have been investigated by various authors. In one of the most recent works on this subject, Clarke, Ledyaev, Sontag and Subbotin [3] show by a Lyapunov function approach that asymptotic controllability implies stabilizability of nonlinear control systems by sampled feedbacks when the discretization step (or sampling rate) tends to 0.
The construction made in this paper is based on another concept introduced by Lyapunov, namely the Lyapunov exponents. It has its origin in the numerical considerations discussed in [5]. As in many numerical algorithms, a discretization of (1) is needed in order to apply the algorithm from [5]. This leads to the discrete time system obtained from (1) by discretization in time. The discrete feedback discussed here can in this context be interpreted as a feedback for this discrete time system applied to the continuous time system. In contrast to the result by Clarke et al., here we obtain stabilizability using discrete feedback controls with a fixed discretization step size. Moreover, we will indicate how one can obtain a numerical algorithm to calculate the stabilizing discrete feedback control.
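As a minimal illustration of this sampling idea (not the paper's algorithm; the scalar system, the feedback map and all numerical values below are made up), the following Python sketch applies a feedback that is re-evaluated only every $h$ time units to a continuous-time system:

```python
import numpy as np

def sampled_feedback_trajectory(f, F, y0, h, T, substeps=20):
    """Integrate dy/dt = f(y, u), holding the control constant on each
    sampling interval [i*h, (i+1)*h) at the value u_i = F(y(i*h)).
    This is the 'discrete feedback' idea: a feedback law for the time-h
    discretization, applied to the continuous-time system."""
    y = np.array(y0, dtype=float)
    traj = [y.copy()]
    dt = h / substeps
    for _ in range(round(T / h)):
        u = F(y)                   # control is chosen only at sampling times
        for _ in range(substeps):  # crude explicit Euler inside the interval
            y = y + dt * f(y, u)
        traj.append(y.copy())
    return np.array(traj)

# toy example: scalar system dy/dt = u*y with U = {-1, +1};
# the constant feedback u = -1 makes the origin exponentially stable
f = lambda y, u: u * y
F = lambda y: -1.0
traj = sampled_feedback_trajectory(f, F, [1.0], h=0.1, T=5.0)
```

Between sampling times the control value is frozen, which is exactly what makes the resulting closed loop system amenable to a discontinuous feedback map.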

Preliminary Results
The linearization of $f$ with respect to $x$ at the singular point $x^*$ (we may assume $x^* = 0$) gives us a semilinear control system of the form

$\dot x(t) = A(u(t))x(t), \quad x(0) = x_0 \in \mathbb{R}^d \setminus \{0\}$   (2)

In order to characterize the (open-loop) exponential behaviour of (2) we define the Lyapunov exponent

$\lambda(x_0, u(\cdot)) := \limsup_{t \to \infty} \frac{1}{t} \ln \|x(t; x_0, u(\cdot))\|$   (3)

and the infimal Lyapunov exponent with respect to the control by

$\lambda^*(x_0) := \inf_{u(\cdot) \in \mathcal{U}} \lambda(x_0, u(\cdot)).$

The following assertion is an easy consequence of the definition of the Lyapunov exponent: for all $x_0 \in \mathbb{R}^d$, $x_0 \neq 0$, there exists a control function $u_{x_0}(\cdot) \in \mathcal{U}$ such that $x(t; x_0, u_{x_0}(\cdot))$ converges to the origin exponentially fast if and only if $\lambda^*(x_0) < 0$ for all $x_0 \in \mathbb{R}^d \setminus \{0\}$. In this section we briefly recall the stabilization results for (2) from [4]. We first give the definition of the discrete feedback control.
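For a fixed constant control value, the Lyapunov exponent (3) can be approximated numerically by integrating the linear system over a long time and averaging the logarithmic norm growth. The following Python sketch illustrates this for a constant matrix $A$ (the matrix and all numerical values are illustrative, not from the paper):

```python
import numpy as np

def lyapunov_exponent_estimate(A, x0, t_final, dt=1e-3):
    """Finite-time approximation of (1/t) ln ||x(t; x0)|| for dx/dt = A x,
    i.e. the Lyapunov exponent for a *constant* control. For a constant
    matrix this converges to the real part of an eigenvalue of A."""
    x = np.array(x0, dtype=float)
    log_norm = 0.0
    for _ in range(round(t_final / dt)):
        x = x + dt * (A @ x)          # explicit Euler step
        nx = np.linalg.norm(x)
        log_norm += np.log(nx)        # accumulate log of the growth factor
        x /= nx                       # renormalize to avoid under/overflow
    return log_norm / t_final

# example: eigenvalues -1 and -2; a generic initial vector aligns with the
# eigenvector of the dominant eigenvalue, so the estimate approaches -1
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
lam = lyapunov_exponent_estimate(A, [1.0, 1.0], t_final=50.0)
```

The renormalization at each step is the standard trick for computing Lyapunov exponents without the trajectory norm leaving the representable floating-point range.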
Definition 2.1 (Discrete feedback control) A discrete feedback control for the system (2) is a function $F : \mathbb{R}^d \to U$, applied to (2) via

$u(t) := F(x([t/h]\,h))$   (4)

for a given time step $h > 0$, where $[r]$ denotes the largest integer less than or equal to $r \in \mathbb{R}$.

Denote by $G(x_0, u) := x(h; x_0, u)$ the solution of (2) at time $h$ for the constant control value $u \in U$. This defines a discrete time control system via

$x_{i+1} = G(x_i, u_i), \quad i = 0, 1, 2, \ldots$   (5)

The discrete feedback as defined in Definition 2.1 can now be interpreted as a feedback for the discrete time system (5). By defining $s := x/\|x\|$ we obtain the projection of (2) onto the unit sphere $S^{d-1}$,

$\dot s(t) = h(s(t), u(t))$   (6)

where $h(s, u) = [A(u) - s^T A(u) s\, \mathrm{Id}]\, s$.
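The projected vector field $h(s, u)$ is tangent to the sphere, since $s^T h(s, u) = s^T A(u) s - (s^T A(u) s)\|s\|^2 = 0$ for $\|s\| = 1$. A small Python sketch (with an illustrative matrix, not one from the paper) computes it and makes this tangency visible:

```python
import numpy as np

def sphere_projection_field(A, s):
    """h(s, u) = [A(u) - (s^T A(u) s) Id] s for a fixed control value,
    so A = A(u) is just a constant matrix here."""
    q = s @ A @ s          # radial growth rate s^T A s
    return A @ s - q * s   # tangential component: the projected dynamics

A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])   # illustrative matrix, not from the paper
s = np.array([1.0, 0.0])
v = sphere_projection_field(A, s)
```

The scalar $q = s^T A s$ removed here is exactly the instantaneous radial growth rate; the Lyapunov exponent (3) is the time average of $q$ along the projected trajectory, which is what the optimal control problem on $S^{d-1}$ below optimizes.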
The following assumption ensures local accessibility of (6), i.e. that the reachable set from any point up to any time $t > 0$ has nonvoid interior (cf. [9]).
Let $\mathcal{L}$ denote the Lie algebra generated by the vector fields $h(\cdot, u)$, $u \in U$, and let $\Delta_{\mathcal{L}}$ denote the distribution generated by $\mathcal{L}$ in $TS^{d-1}$, the tangent bundle of $S^{d-1}$. Assume that

$\dim \Delta_{\mathcal{L}}(s) = \dim S^{d-1} = d - 1$ for all $s \in S^{d-1}$.   (H)

Under this condition we can formulate the main theorem from [4].

Theorem 2.3 Consider a semilinear control system (2) with the projection (6) satisfying (H). Then there exists an $h > 0$ and a discrete feedback that steers (2) to the origin exponentially fast for all initial values $x_0 \in \mathbb{R}^d \setminus \{0\}$ if and only if $\lambda^*(x_0) < 0$ for all $x_0 \in \mathbb{R}^d \setminus \{0\}$.
Numerically this stabilizing discrete feedback can be calculated in the following way: using the projected system (6), the Lyapunov exponent can be expressed as the optimal value function of an average time optimal control problem on the sphere $S^{d-1}$.
This value function can be approximated by the value function of a discounted optimal control problem, cf. [5], which can be solved numerically as described in [5] and [7]. Using the numerical approximation of the optimal value function we can then derive approximately optimal discrete feedback controls $F$ satisfying

$\frac{1}{T} \ln \|x(T; x_0, F(x([t/h]\,h)))\| < -\sigma$   (7)

for all $x_0 \in \mathbb{R}^d$, some bounded time $T > 0$ and a constant $\sigma > 0$.
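To give an impression of the discounted approximation, the following Python sketch runs a plain value iteration for a discounted problem on an angle grid for $d = 2$, where $S^1$ is parametrized by $\theta$ and the running cost is $q(s, u) = s^T A(u) s$. This is only a schematic illustration, not the algorithm of [5] or [7]: the grid, the nearest-neighbour interpolation and all parameter values are ad hoc, and the example uses a singleton control set, in which case $\delta \cdot V$ simply approximates the Lyapunov exponent:

```python
import numpy as np

def discounted_value_iteration(A_of_u, controls, n_grid=200, dt=0.05,
                               delta=0.1, iters=2000):
    """Value iteration for V(th) = min_u [ dt*q + exp(-delta*dt)*V(next) ]
    on an angle grid, with nearest-neighbour lookup for the successor state.
    For small delta and a fine grid, delta*V approximates the minimal
    average of q along projected trajectories."""
    thetas = np.linspace(0.0, 2*np.pi, n_grid, endpoint=False)
    s1, s2 = np.cos(thetas), np.sin(thetas)
    V = np.zeros(n_grid)
    beta = np.exp(-delta * dt)         # discount factor per time step
    for _ in range(iters):
        V_new = np.full(n_grid, np.inf)
        for u in controls:
            A = A_of_u(u)
            As1 = A[0, 0]*s1 + A[0, 1]*s2
            As2 = A[1, 0]*s1 + A[1, 1]*s2
            q = s1*As1 + s2*As2        # running cost s^T A s
            dth = s1*As2 - s2*As1      # angular speed of the projected flow
            nxt = (thetas + dt*dth) % (2*np.pi)
            idx = np.round(nxt / (2*np.pi/n_grid)).astype(int) % n_grid
            V_new = np.minimum(V_new, dt*q + beta*V[idx])
        V = V_new
    return thetas, V

# illustrative uncontrolled example: A has eigenvalues -1 and -2; the
# generic projected trajectory settles at the eigendirection of -1,
# so delta*V should be close to -1 at theta = 0
thetas, V = discounted_value_iteration(
    lambda u: np.array([[-1.0, 0.0], [0.0, -2.0]]), controls=[0.0])
```

An approximately optimal feedback is then obtained by recording, at each grid point, the control value achieving the minimum in the Bellman operator.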
Then by induction it is easily seen that

$\limsup_{t \to \infty} \frac{1}{t} \ln \|x(t; x_0, F)\| \le -\sigma$

and hence the trajectory is exponentially stable. (For the details of the algorithm see [4].)

Stabilization of Nonlinear Systems

We will now extend this result to nonlinear systems and again assume $x^* = 0$.
To apply the cited result to (1) we consider the linearization $A(u) := D_y f(0, u)$ and the corresponding control system of the form (2). System (1) can now be written as

$\dot x(t) = A(u(t))x(t) + g(x(t), u(t))$   (8)

where $\|g(x, u)\| \le C_g \|x\|^2$ for all $x \in B_g(0)$ and $u \in U$, for some ball $B_g(0)$ around the origin.
The following lemma shows some properties of the trajectories of these systems for initial values close to $x^* = 0$.

Lemma 3.1 Let $y(t)$, $x(t)$ be the solutions of the systems (1) and (2) for a fixed control function $u(\cdot) \in \mathcal{U}$ and some initial value $x_0$. Let a time $T > 0$ be fixed. Then there is a constant $\alpha > 0$ and a $\delta(T) > 0$ such that for all $t \in [0, T]$ the following inequalities hold:

1. $\|y(t)\| \in [e^{-\alpha t} \|x_0\|, e^{\alpha t} \|x_0\|]$ for all $x_0 \in B_{\delta(T)}(0)$
2. $\|x(t)\| \in [e^{-\alpha t} \|x_0\|, e^{\alpha t} \|x_0\|]$ for all $x_0 \in \mathbb{R}^d$
3. $\|y(t) - x(t)\| \le t C e^{\alpha t} \|x_0\|^2$ for all $x_0 \in B_{\delta(T)}(0)$

Proof: By the Gronwall Lemma applied to (8); a detailed proof is given in [6].
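Property 3 says that the gap between the nonlinear and the linearized trajectory is quadratic in $\|x_0\|$: halving the initial value should therefore shrink the gap by roughly a factor of four over a fixed time interval. A quick numerical check on a made-up planar system with a quadratic nonlinearity (not a system from the paper):

```python
import numpy as np

def flow(rhs, x0, t, dt=1e-3):
    """Crude explicit Euler integration of dx/dt = rhs(x)."""
    x = np.array(x0, dtype=float)
    for _ in range(round(t / dt)):
        x = x + dt * rhs(x)
    return x

# made-up planar system with quadratic nonlinearity g(x) = (0, x1^2)
nonlin = lambda y: np.array([y[1], -y[0] + y[0]**2])
lin    = lambda x: np.array([x[1], -x[0]])   # its linearization at 0

def gap(r, t=1.0):
    """Distance between nonlinear and linearized solutions at time t."""
    x0 = np.array([r, 0.0])
    return np.linalg.norm(flow(nonlin, x0, t) - flow(lin, x0, t))

e1, e2 = gap(0.1), gap(0.05)   # halving ||x0|| should shrink the gap ~4x
```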
The following robustness result for the discrete feedback is crucial for the application to nonlinear systems. Note that the discrete feedback will in general be discontinuous; hence this result can only be obtained by a thorough analysis of the corresponding discounted optimal control problems. A proof of this lemma can be found in [6].

Lemma 3.2 Let $F$ be the stabilizing discrete feedback with time step $h > 0$ for the system (2) satisfying (7) for some time $T > 0$. Then for any $\varepsilon > 0$ there exists a $\delta > 0$ such that for all initial values $y_0 \in B_\delta(0)$ the inequality

$\frac{1}{T} \ln \|y(T; y_0, F(y([t/h]\,h)))\| < -\sigma + \varepsilon$

is satisfied.
Again by induction we obtain the following stabilization result.
Proposition 3.3 Consider the system (1). Assume that the projection (6) of the linearized system (2) satisfies (H), and that (8) is exponentially asymptotically controllable to 0 for all initial values $x_0 \in \mathbb{R}^d$. Then there exists an $h > 0$ such that the discrete feedback $F$ stabilizing (2) also stabilizes the nonlinear system (1) in a neighbourhood of 0.
To obtain an equivalence result similar to Theorem 2.3 we have to introduce the following definition of stability.
Definition 3.4 The system (1) is called uniformly exponentially controllable to 0 if there exist a neighbourhood $B(0)$ and constants $C, \sigma > 0$ such that for any initial value $y_0 \in B(0)$ there exists a control function $u_{y_0}(\cdot) \in \mathcal{U}$ with $\|y(t; y_0, u_{y_0}(\cdot))\| \le C e^{-\sigma t} \|y_0\|$ for all $t > 0$.

Theorem 3.5 Consider the system (1). Assume that the projection (6) of the linearization (2) satisfies (H). Then the following properties are equivalent:

(i) System (1) is uniformly exponentially controllable to 0.
(ii) There is an $h > 0$ and a discrete feedback that uniformly exponentially stabilizes (1) in a neighbourhood of the origin.
(iii) All infimal Lyapunov exponents of the linearized system satisfy $\lambda^*(x_0) < 0$.

Proof: "(iii) $\Rightarrow$ (ii)" follows from Proposition 3.3, and "(ii) $\Rightarrow$ (i)" is immediately clear. It remains to show "(i) $\Rightarrow$ (iii)". Using a converse version of Lemma 3.2 we obtain local exponential controllability to 0 for (2). By linearity this immediately implies the global assertion.
The detailed proof can be found in [6].

Remark 3.6 The results can be extended to systems of the form

$\dot y(t) = f(y(t), z(t), u(t))$
$\dot z(t) = Z(z(t), u(t))$

if we assume that $f$ and $Z$ are affine linear in $u$ and the subsystem in $z$ is completely controllable on a compact state space. Linearization with respect to $y$ then leads to a system of the form

$\dot x(t) = A(z(t), u(t))x(t)$
$\dot z(t) = Z(z(t), u(t))$

For these systems the stabilization theory is developed in [6].

A numerical example
The following system from [10, Section 1.2] models a pendulum forced by a periodic up-and-down movement of the suspension. We assume that the period of this motion can be controlled, which can be modelled by the equation $\dot z = \omega + u$. With $y_1$ denoting the angle and $y_2$ the angular speed of the pendulum we obtain the nonlinear control system

$\dot y_1 = y_2$
$\dot y_2 = -B y_2 - (1 + A \cos z) \sin y_1$
$\dot z = \omega + u$

Here $B$ describes the damping of the pendulum, $A$ the amplitude and $\omega$ the frequency of the up-and-down motion.
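A direct simulation of this model under a sampled control is easy to set up. The following Python sketch uses placeholder parameter values for $B$, $A$ and $\omega$ (the paper's actual values are not reproduced in this excerpt) and the trivial feedback $u \equiv 0$, since the stabilizing feedback is computed numerically in the paper and is not available here:

```python
import numpy as np

# placeholder parameters (NOT the paper's actual values)
B_DAMP, A_AMP, OMEGA = 0.2, 0.3, 2.0

def pendulum_rhs(y, u):
    """Right-hand side of the forced pendulum model above."""
    y1, y2, z = y
    return np.array([y2,
                     -B_DAMP*y2 - (1.0 + A_AMP*np.cos(z))*np.sin(y1),
                     OMEGA + u])

def simulate(F, y0, h=0.1, T=20.0, substeps=50):
    """Sampled ('discrete') feedback: u is re-evaluated only every h units."""
    y = np.array(y0, dtype=float)
    dt = h / substeps
    for _ in range(round(T / h)):
        u = F(y)
        for _ in range(substeps):   # crude explicit Euler inside the interval
            y = y + dt * pendulum_rhs(y, u)
    return y

# trivial feedback u = 0: the suspension oscillates at the constant
# frequency OMEGA, so z grows linearly with z(T) = OMEGA * T
y_final = simulate(lambda y: 0.0, [0.1, 0.0, 0.0])
```

Replacing the trivial feedback by a map obtained from the discounted optimal control approximation would give the closed loop trajectories of the kind shown in Figures 1 and 2.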
Taking into account the periodicity in $z$ we can consider $\mathbb{R}/2\pi\mathbb{Z}$ as the compact state space of the $z$ component, and this subsystem is completely controllable.
Using the techniques from [5] and [7] we calculated the stabilizing discrete feedback for this system.
Applying this feedback to the nonlinear system we obtain the trajectories shown in Figure 1, where $z(0) = 0$. As one would expect from the theory, the trajectories are locally stable.
For the given set of parameters the system has an additional feature: for $u \equiv 0$ the pendulum is rotating for all initial values $y_0 \neq (0, 0)$; since the angular speed is bounded, this implies that after some time $t$ any trajectory will enter a neighbourhood of $(\pi, 0)$. Identifying the points $((2n + 1)\pi, 0)$ for all $n \in \mathbb{Z}$, the discrete feedback now turns out to stabilize the system for all initial values in this neighbourhood. Figure 2 shows a trajectory where the pendulum rotates once (from $y_1 = \pi$ to $y_1 = 3\pi$) and is then stabilized at the point $(3\pi, 0)$.
In this case the stabilization of the nonlinear system in the numerical experiment turns out to be global, except for the equilibrium $(0, 0)$.

Remark 2.2 The following interpretation motivates the name "discrete feedback": for a given time step $h > 0$ and constant control values $u \in U$ denote by $G : \mathbb{R}^d \times U \to \mathbb{R}^d$ the solution of (2) at time $h$, i.e. $G(x_0, u) := x(h; x_0, u)$. The discrete feedback from Definition 2.1 can then be interpreted as an ordinary (static) feedback for the discrete time system $x_{i+1} = G(x_i, u_i)$, applied to the continuous time system (2).