Analysis and design of unconstrained nonlinear MPC schemes for finite and infinite dimensional systems

Abstract: We present a technique for computing stability and performance bounds for unconstrained nonlinear model predictive control (MPC) schemes. The technique relies on controllability properties of the system under consideration, and the computation can be formulated as an optimization problem whose complexity is independent of the state space dimension. Based on the insight obtained from the numerical solution of this problem, we derive design guidelines for nonlinear MPC schemes which guarantee stability of the closed loop for small optimization horizons. These guidelines are illustrated by a finite and an infinite dimensional example.


Introduction
Model predictive control (MPC, often also termed receding horizon control) is a well-established method for the optimal control of linear and nonlinear systems [1,2,15]. The stability and suboptimality analysis of MPC schemes has been a topic of active research during the last decades. While in the MPC literature stability and suboptimality of the resulting closed loop are typically proved by means of stabilizing terminal constraints or terminal costs (see, e.g., [12], [3], [9] or the survey paper [15]), here we consider the simplest class of MPC schemes for nonlinear systems, namely those without terminal constraints and cost. These schemes are attractive for their numerical simplicity, do not require the introduction of stabilizing state space constraints (which are particularly inconvenient when treating infinite dimensional systems) and are easily generalized to time varying tracking type problems and to the case where more complicated sets than equilibria are to be stabilized. Essentially, these unconstrained MPC schemes can be interpreted as a simple truncation of the infinite optimization horizon to a finite horizon N.

LARS GRÜNE
For such unconstrained schemes without terminal cost, Jadbabaie and Hauser [11] and Grimm et al. [4] show under different types of controllability and detectability conditions for nonlinear systems that stability of the closed loop can be expected if the optimization horizon N is sufficiently large; however, no explicit bounds for N are given. The paper [6] (see also [5]) uses controllability conditions and techniques from relaxed dynamic programming [13,18] in order to compute explicit estimates for the degree of suboptimality, which in particular lead to bounds on the stabilizing optimization horizon N which are, however, in general not optimal. Such optimal estimates for the stabilizing horizon N have been obtained in [19,17] using the explicit knowledge of the finite horizon optimal value functions, which could be computed numerically in the (linear) examples considered in these papers.
Unfortunately, for large scale or infinite dimensional systems, and also for moderately sized nonlinear systems, in general neither an analytical expression nor a sufficiently accurate numerical approximation of the optimal value functions is available. Furthermore, an analysis based on such numerical approximations typically does not provide analytic insight into the dependence between the stability properties and the system structure. For these reasons, in this paper we base our analysis on (open loop) controllability properties, which can often be estimated or characterized in sufficient detail by analyzing the system structure. More precisely, for our analysis we use KL bounds on the chosen running cost along (not necessarily optimal) trajectories. Such bounds induce upper bounds on the optimal value functions, and the main feature we exploit is the fact that the controllability properties impose bounds on the optimal value function not only at the initial value but (via Bellman's optimality principle) also along "tails" of optimal trajectories. The resulting stability and suboptimality condition can be expressed as an optimization problem whose complexity is independent of the dimension of the state space of the system and which is actually an easily solvable linear program if the KL function involved in the controllability assumption is linear in its first argument. As in [6], this procedure gives a bound on the degree of suboptimality of the MPC feedback which in particular allows us to determine a bound on the minimal stabilizing horizon N; in contrast to [6], however, the bound derived here turns out to be optimal with respect to the class of systems satisfying the assumed controllability property.
Since the resulting optimization problem is small and thus easy to solve, we can perform a comprehensive numerical analysis of many different controllability situations, which we use in order to derive design guidelines for the formulation of stable MPC schemes with small optimization horizon N. A distinctive feature of our approach is that our analysis applies to finite and infinite dimensional systems alike, and we demonstrate the effectiveness of our approach in an infinite dimensional setting by an example of a sampled data system governed by a parabolic PDE.
The paper is organized as follows: in Section 2 we describe the setup and the relaxed dynamic programming inequality our approach is based upon. In Section 3 we describe the controllability condition we are going to use and its consequences for the optimal value functions and trajectories. In Section 4 we use these results in order to obtain a condition for suboptimality and show how this condition can be formulated as an optimization problem. Section 5 shows how our condition can be used for the closed loop stability analysis. In Section 6 we perform a case study in which we analyze the impact of different controllability bounds and MPC parameters on the minimal stabilizing horizon N. Based on the numerical findings from this analysis, in Section 7 we formulate our design guidelines for MPC schemes and illustrate them by two examples. We finish the paper by giving conclusions and outlook in Section 8 and the formulation and proof of a technical lemma in the Appendix.

Setup and preliminary results
We consider a nonlinear discrete time system given by

x(n + 1) = f(x(n), u(n)), x(0) = x_0, (2.1)

with x(n) ∈ X and u(n) ∈ U. Here we denote the space of control sequences u : N_0 → U by U and the solution trajectory for some u ∈ U by x_u(n). The state space X is an arbitrary metric space, i.e., it can range from a finite set to an infinite dimensional space.
A typical class of systems we consider are sampled-data systems governed by a controlled (finite or infinite dimensional) differential equation ẋ(t) = g(x(t), ũ(t)) with solution ϕ(t, x_0, ũ) for initial value x_0. These are obtained by fixing a sampling period T > 0 and setting

f(x, u) := ϕ(T, x, ũ) with ũ(t) ≡ u. (2.2)

Then, for any discrete time control function u ∈ U the solutions x_u of (2.1), (2.2) satisfy x_u(n) = ϕ(nT, x_0, ũ) for the piecewise constant continuous time control function ũ : R → U with ũ|[nT,(n+1)T) ≡ u(n). Note that with this construction the discrete time n corresponds to the continuous time t = nT.
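As a concrete illustration (not part of the original development), the sampling construction (2.2) can be sketched in code. The scalar system g, the sampling period and the Runge-Kutta integrator below are all illustrative choices:

```python
import math

def rk4_step(g, x, u, dt):
    """One classical Runge-Kutta step for x' = g(x, u) with u held constant."""
    k1 = g(x, u)
    k2 = g(x + 0.5 * dt * k1, u)
    k3 = g(x + 0.5 * dt * k2, u)
    k4 = g(x + dt * k3, u)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def sampled_data_map(g, T, substeps=20):
    """Return the discrete time map f(x, u) = phi(T, x, u~) obtained by
    integrating x' = g(x, u) over one sampling period with the control held
    constant (zero-order hold)."""
    def f(x, u):
        dt = T / substeps
        for _ in range(substeps):
            x = rk4_step(g, x, u, dt)
        return x
    return f

# Hypothetical scalar example x' = -x + u, where phi is known exactly:
# x(T) = u + (x0 - u) * exp(-T).
g = lambda x, u: -x + u
f = sampled_data_map(g, T=0.1)
x0, u = 1.0, 0.5
exact = u + (x0 - u) * math.exp(-0.1)
assert abs(f(x0, u) - exact) < 1e-8
```

With this map f, iterating f reproduces the continuous time solution at the sampling times t = nT, as stated above.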
Our goal is to find a feedback control law minimizing the infinite horizon cost

J_∞(x_0, u) = Σ_{n=0}^{∞} l(x_u(n), u(n)) (2.3)

with running cost l : X × U → R^+_0. We denote the optimal value function for this problem by

V_∞(x_0) = inf_{u∈U} J_∞(x_0, u).

Here we use the term feedback control in the following general sense.
Definition 2.1 For m ≥ 1, an m-step feedback law is a map µ : X × {0, . . ., m − 1} → U which is applied according to the rule

x_µ(0) = x_0, x_µ(n + 1) = f(x_µ(n), µ(x_µ([n]_m), n − [n]_m)), (2.4)

where [n]_m denotes the largest product km, k ∈ Z, with km ≤ n.
In other words, the feedback is evaluated at the times 0, m, 2m, . . . and generates a sequence of m control values which is applied in the m steps until the next evaluation. Note that for m = 1 we obtain the usual static state feedback concept in discrete time.
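The time indexing [n]_m and the m-step application rule can be sketched as follows; the dynamics f and the feedback µ in the toy example are hypothetical:

```python
def floor_multiple(n, m):
    """[n]_m: the largest product k*m, k integer, with k*m <= n."""
    return (n // m) * m

def closed_loop(f, mu, x0, m, steps):
    """Simulate x(n+1) = f(x(n), mu(x([n]_m), n - [n]_m)): the feedback mu is
    evaluated on a new measurement only at times 0, m, 2m, ...; mu(x, j)
    returns the j-th control value computed from measurement x."""
    xs = [x0]
    x_at_update = x0
    for n in range(steps):
        if n % m == 0:                     # n == [n]_m: new measurement
            x_at_update = xs[-1]
        u = mu(x_at_update, n - floor_multiple(n, m))
        xs.append(f(xs[-1], u))
    return xs

# Hypothetical toy example: scalar x(n+1) = x(n) + u(n) with a 2-step
# feedback that steers the last measured state to 0 over the next two steps.
mu = lambda x, j: -x / 2.0
traj = closed_loop(lambda x, u: x + u, mu, x0=8.0, m=2, steps=4)
# traj is [8.0, 4.0, 0.0, 0.0, 0.0]: the measurement taken at time 0 is used
# for both controls applied at times 0 and 1.
```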

If the optimal value function V_∞ is known, it is easy to prove using Bellman's optimality principle that the optimal feedback law µ is given by

µ(x_0, n) := u*(n), n = 0, . . ., m − 1, where u* ∈ U^m minimizes Σ_{n=0}^{m−1} l(x_u(n), u(n)) + V_∞(x_u(m)). (2.5)

Remark 2.2 We assume throughout this paper that in all relevant expressions the minimum with respect to u ∈ U^m is attained. Although it is possible to give modified statements using approximate minimizers, we decided to make this assumption in order to simplify and streamline the presentation.
Since infinite horizon optimal control problems are in general computationally infeasible, we use a receding horizon approach in order to compute an approximately optimal controller. To this end we consider the finite horizon functional

J_N(x_0, u) = Σ_{n=0}^{N−1} l(x_u(n), u(n)) (2.6)

for N ∈ N_0 (using the convention Σ_{n=0}^{−1} = 0) and the optimal value function

V_N(x_0) = inf_{u∈U} J_N(x_0, u). (2.7)

Note that this is the conceptually simplest receding horizon approach in which neither terminal costs nor terminal constraints are imposed.
Based on this finite horizon optimal value function, for m ≤ N we define an m-step feedback law µ_{N,m} by picking the first m elements of the optimal control sequence for this problem, according to the following definition.
Definition 2.3 Let u* be a minimizing control for (2.6) and initial value x_0. Then we define the m-step MPC feedback law by

µ_{N,m}(x_0, n) := u*(n), n = 0, . . ., m − 1.

Here the value N is called the optimization horizon while we refer to m as the control horizon.
Note that we do not need uniqueness of u* for this definition; however, for µ_{N,m}(x_0, ·) to be well defined we suppose that for each x_0 we select one specific u* from the set of optimal controls.
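As a minimal sketch of Definition 2.3 and the receding horizon loop, the following code computes µ_{N,m} by exhaustive search over a finite control set; restricting the controls to a small grid is an assumption made purely for illustration (the paper minimizes over general control sequences):

```python
from itertools import product

def J_N(f, l, x0, u_seq):
    """Finite horizon cost J_N(x0, u) = sum_{n=0}^{N-1} l(x_u(n), u(n))."""
    cost, x = 0.0, x0
    for u in u_seq:
        cost += l(x, u)
        x = f(x, u)
    return cost

def mpc_feedback(f, l, U, N, m):
    """mu_{N,m}(x0, .): the first m elements of a minimizing control sequence
    for the horizon-N problem, found by exhaustive search over U^N."""
    def mu(x0):
        u_star = min(product(U, repeat=N), key=lambda u: J_N(f, l, x0, u))
        return u_star[:m]
    return mu

def run_mpc(f, l, U, N, m, x0, loops):
    """Receding horizon loop: re-optimize every m steps, apply m controls."""
    xs = [x0]
    mu = mpc_feedback(f, l, U, N, m)
    for _ in range(loops):
        for u in mu(xs[-1]):
            xs.append(f(xs[-1], u))
    return xs

# Hypothetical toy problem: unstable x(n+1) = 2 x(n) + u(n), l = x^2 + u^2,
# controls restricted to a small grid.
f = lambda x, u: 2.0 * x + u
l = lambda x, u: x * x + u * u
U = [-2.0, -1.0, 0.0, 1.0, 2.0]
traj = run_mpc(f, l, U, N=4, m=1, x0=1.0, loops=8)
```

For this toy problem the closed loop steers the state to 0 in one step (u = −2) and keeps it there, so traj is [1.0, 0.0, 0.0, ...].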
The first goal of the present paper is to give estimates on the suboptimality of the feedback µ_{N,m} for the infinite horizon problem. More precisely, for an m-step feedback law µ with corresponding solution trajectory x_µ(n) from (2.4) we define

V^µ_∞(x_0) := Σ_{n=0}^{∞} l(x_µ(n), µ(x_µ([n]_m), n − [n]_m))

and are interested in upper bounds for the infinite horizon value V^{µ_{N,m}}_∞, i.e., in an estimate of the "degree of suboptimality" of the controller µ_{N,m}. Based on this estimate, the second purpose of this paper is to derive results on the asymptotic stability of the resulting closed loop system using V_N as a Lyapunov function.
The approach we take in this paper relies on results on relaxed dynamic programming [13,18] which were already used in an MPC context in [5,6]. Next we state the basic relaxed dynamic programming inequality adapted to our setting.

Proposition 2.4 Consider an m-step feedback law µ̃ : X × {0, . . ., m − 1} → U, the corresponding solution x_µ̃(k) with x_µ̃(0) = x_0, and a function V : X → R^+_0 satisfying the inequality

V(x_0) ≥ V(x_µ̃(m)) + α Σ_{k=0}^{m−1} l(x_µ̃(k), µ̃(x_0, k)) (2.8)

for some α ∈ (0, 1] and all x_0 ∈ X. Then for all x ∈ X the estimate

α V_∞(x) ≤ α V^µ̃_∞(x) ≤ V(x)

holds.

Proof: The proof is similar to that of [18, Proposition 3] and [6, Proposition 2.2]: Consider x_0 ∈ X and the trajectory x_µ̃(n) generated by the closed loop system using µ̃. Then from (2.8) for all n ∈ N_0 we obtain

α Σ_{k=0}^{m−1} l(x_µ̃(nm + k), µ̃(x_µ̃(nm), k)) ≤ V(x_µ̃(nm)) − V(x_µ̃((n + 1)m)).

Summing over n = 0, . . ., K − 1 yields

α Σ_{n=0}^{Km−1} l(x_µ̃(n), µ̃(x_µ̃([n]_m), n − [n]_m)) ≤ V(x_0) − V(x_µ̃(Km)) ≤ V(x_0).

For K → ∞ this yields that V is an upper bound for α V^µ̃_∞ and hence α V_∞(x) ≤ α V^µ̃_∞(x) ≤ V(x).

Remark 2.5 The term "unconstrained" only refers to constraints which are introduced in order to ensure stability of the closed loop. Other constraints can easily be included in our setup; e.g., the set U of admissible control values could be subject to (possibly state dependent) constraints, or X could be the feasible set of a state constrained problem on a larger state space.
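The telescoping argument behind Proposition 2.4 can be checked numerically on a hand-picked example; the system, running cost and function V below are hypothetical choices satisfying (2.8) with α = 1 and m = 1:

```python
# Hypothetical data: x+ = 0.5 x + u, feedback mu ≡ 0, l(x, u) = x^2, V(x) = 2 x^2.
# Then V(x) - V(f(x, 0)) = 2 x^2 - 0.5 x^2 = 1.5 x^2 >= alpha * l(x, 0) holds with
# alpha = 1, and the conclusion alpha * V^mu_inf(x0) <= V(x0) can be verified.
f = lambda x, u: 0.5 * x + u
l = lambda x, u: x * x
V = lambda x: 2.0 * x * x
alpha = 1.0

x0 = 3.0
x, total = x0, 0.0
for n in range(200):  # truncation of the infinite sum defining V^mu_inf
    # inequality (2.8) at the current state
    assert V(x) - V(f(x, 0.0)) >= alpha * l(x, 0.0) - 1e-12
    total += l(x, 0.0)
    x = f(x, 0.0)

# closed loop x(n) = 0.5^n x0, hence V^mu_inf(x0) = x0^2 / (1 - 0.25) = 12 here
assert abs(total - 12.0) < 1e-9
assert alpha * total <= V(x0)  # the estimate of Proposition 2.4
```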

Asymptotic controllability and optimal values
In this section we introduce an asymptotic controllability assumption and deduce several consequences for our optimal control problem. In order to relate this assumption directly to the optimal value functions, we will formulate it, below, not in terms of the trajectory itself but in terms of the running cost l along a trajectory.
To this end we say that a continuous function ρ : R_{≥0} → R_{≥0} is of class K_∞ if it satisfies ρ(0) = 0 and is strictly increasing and unbounded. We say that a continuous function β : R_{≥0} × R_{≥0} → R_{≥0} is of class KL_0 if for each r > 0 we have lim_{t→∞} β(r, t) = 0 and for each t ≥ 0 we either have β(·, t) ∈ K_∞ or β(·, t) ≡ 0. Note that in order to allow for tighter bounds on the actual controllability behavior of the system we use a larger class than the usual class KL. It is, however, easy to see that each β ∈ KL_0 can be overbounded by a β̃ ∈ KL, e.g., by setting β̃(r, t) = max_{τ≥t} β(r, τ) + e^{−t} r. Furthermore, we define l*(x) := min_{u∈U} l(x, u).
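The overbounding construction β̃(r, t) = max_{τ≥t} β(r, τ) + e^{−t} r can be illustrated numerically; the KL_0 function below, constant on an initial interval and evaluated at integer times, is a hypothetical example:

```python
import math

def beta_finite(r, t):
    """A hypothetical KL_0 function: overshoot 3r on [0, 2), zero afterwards
    (in particular, not strictly decreasing in t)."""
    return 3.0 * r if t < 2 else 0.0

def kl_overbound(beta, r, t, horizon=1000):
    """beta~(r, t) = max_{tau >= t} beta(r, tau) + exp(-t) * r: overbounds
    beta and is strictly decreasing to 0 in t, i.e., of class KL.
    (The max is taken over integer times up to a finite horizon.)"""
    tail_max = max(beta(r, tau) for tau in range(int(t), horizon))
    return tail_max + math.exp(-t) * r

r = 1.0
vals = [kl_overbound(beta_finite, r, t) for t in range(6)]
ok_upper = all(kl_overbound(beta_finite, r, t) >= beta_finite(r, t) for t in range(6))
ok_decr = all(a > b for a, b in zip(vals, vals[1:]))
```

Here vals starts at 3 + e^0 = 4 and decreases strictly, while beta_finite itself is constant on [0, 2).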
Assumption 3.1 Given a function β ∈ KL_0, for each x_0 ∈ X there exists a control function u_{x_0} ∈ U satisfying

l(x_{u_{x_0}}(n), u_{x_0}(n)) ≤ β(l*(x_0), n) for all n ∈ N_0.

Two special cases will be of particular interest: β(r, n) = Cσ^n r (3.1) for real constants C ≥ 1 and σ ∈ (0, 1), i.e., exponential controllability, and β(r, n) = c_n r (3.2) for some real sequence (c_n)_{n∈N_0} with c_n ≥ 0 and c_n = 0 for all n ≥ n_0, i.e., finite time controllability (with linear overshoot).
For certain results it will be useful that β has the additional property (3.3), which in turn is a necessary condition for Assumption 3.1 to hold for n = 0 and β(r, t) = α_1(α_2(r)e^{−t}).
Under Assumption 3.1, for any r ≥ 0 and any N ≥ 1 we define the value

B_N(r) := Σ_{n=0}^{N−1} β(r, n). (3.4)

An immediate consequence of Assumption 3.1 is the following lemma.
Lemma 3.2 For each N ≥ 1 the inequality

V_N(x_0) ≤ B_N(l*(x_0)) (3.5)

holds for all x_0 ∈ X.
Proof: Using u_{x_0} from Assumption 3.1, the inequality follows immediately from

V_N(x_0) ≤ J_N(x_0, u_{x_0}) = Σ_{n=0}^{N−1} l(x_{u_{x_0}}(n), u_{x_0}(n)) ≤ Σ_{n=0}^{N−1} β(l*(x_0), n) = B_N(l*(x_0)).

In the special case (3.1), B_N, N ≥ 1, evaluates to B_N(r) = C (1 − σ^N)/(1 − σ) r. The following lemma gives bounds on the finite horizon functional along optimal trajectories.
Lemma 3.3 Assume Assumption 3.1 holds and consider x_0 ∈ X and an optimal control u* for the finite horizon optimal control problem (2.7) with optimization horizon N ≥ 1. Then for each k = 0, . . ., N − 1 the inequality

Σ_{n=k}^{N−1} l(x_{u*}(n), u*(n)) ≤ B_{N−k}(l*(x_{u*}(k))) (3.6)

holds for B_N from (3.4).

Proof: By Bellman's optimality principle, the tail u*(k), . . ., u*(N − 1) of the optimal control sequence is optimal for the problem with horizon N − k and initial value x_{u*}(k), i.e., Σ_{n=k}^{N−1} l(x_{u*}(n), u*(n)) = V_{N−k}(x_{u*}(k)). Applying Lemma 3.2 with horizon N − k yields V_{N−k}(x_{u*}(k)) ≤ B_{N−k}(l*(x_{u*}(k))), i.e., the assertion.
A similar inequality can be obtained for V_N.
Lemma 3.4 Assume Assumption 3.1 holds and consider x_0 ∈ X and an optimal control u* for the finite horizon optimal control problem (2.7) with optimization horizon N. Then for each m = 1, . . ., N − 1 and each j = 0, . . ., N − m − 1 the inequality

V_N(x_{u*}(m)) ≤ Σ_{n=0}^{j−1} l(x_{u*}(m + n), u*(m + n)) + B_{N−j}(l*(x_{u*}(m + j)))

holds for B_N from (3.4).
Proof: We define the control function

ũ(n) := u*(m + n) for n = 0, . . ., j − 1, ũ(n) := u_{x_0}(n − j) for n ≥ j,

for u_{x_0} from Assumption 3.1 with x_0 = x_{u*}(m + j). Then we obtain

V_N(x_{u*}(m)) ≤ J_N(x_{u*}(m), ũ) ≤ Σ_{n=0}^{j−1} l(x_{u*}(m + n), u*(m + n)) + B_{N−j}(l*(x_{u*}(m + j))),

where we used (3.5) in the last step. This is the desired inequality.
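Numerically, the bounds B_N from (3.4) are cheap to evaluate. As a quick check, the following sketch (with arbitrary constants) verifies the geometric-sum expression for the exponential case (3.1):

```python
def B_N(beta, r, N):
    """B_N(r) = sum_{n=0}^{N-1} beta(r, n), cf. (3.4)."""
    return sum(beta(r, n) for n in range(N))

def beta_exp(C, sigma):
    """Exponential controllability bound beta(r, n) = C sigma^n r, cf. (3.1)."""
    return lambda r, n: C * sigma ** n * r

# Check the closed form B_N(r) = C (1 - sigma^N) / (1 - sigma) * r.
C, sigma, r = 2.0, 0.5, 3.0
for N in range(1, 10):
    closed_form = C * (1.0 - sigma ** N) / (1.0 - sigma) * r
    assert abs(B_N(beta_exp(C, sigma), r, N) - closed_form) < 1e-12
```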

Computation of performance bounds
In this section we provide a constructive approach for computing α in (2.8) for systems satisfying Assumption 3.1. For this purpose we consider arbitrary values λ_0, . . ., λ_{N−1} > 0 and ν > 0 and start by deriving necessary conditions under which these values coincide with an optimal sequence l(x_{u*}(n), u*(n)) and an optimal value V_N(x_{u*}(m)), respectively.

Proposition 4.1 Assume Assumption 3.1 holds, let N ≥ 1 and m ∈ {1, . . ., N − 1}, and consider x_0 ∈ X and an optimal control u* for (2.7) such that λ_n = l(x_{u*}(n), u*(n)), n = 0, . . ., N − 1, and ν = V_N(x_{u*}(m)). Then the inequalities

Σ_{n=k}^{N−1} λ_n ≤ B_{N−k}(λ_k), k = 0, . . ., N − 2, (4.1)

and

ν ≤ Σ_{n=0}^{j−1} λ_{n+m} + B_{N−j}(λ_{m+j}), j = 0, . . ., N − m − 1, (4.2)

hold.
Proof: If the stated conditions hold, then λ_n and ν must satisfy the inequalities given in Lemmas 3.3 and 3.4 (using l*(x_{u*}(k)) ≤ λ_k and the monotonicity of B_N), which is exactly (4.1) and (4.2).
Using this proposition we can give a sufficient condition for suboptimality of the MPC feedback law µ_{N,m}.

Theorem 4.2 Let β ∈ KL_0, N ≥ 1 and m ∈ {1, . . ., N − 1} be given, and assume that all values λ_0, . . ., λ_{N−1} > 0 and ν > 0 satisfying (4.1), (4.2) also satisfy

Σ_{n=0}^{N−1} λ_n − ν ≥ α Σ_{n=0}^{m−1} λ_n (4.3)

for some α ∈ (0, 1]. Then for each optimal control problem (2.1), (2.7) satisfying Assumption 3.1 with this β, inequality (2.8) holds with V = V_N and µ̃ = µ_{N,m}, and in particular

α V_∞(x) ≤ α V^{µ_{N,m}}_∞(x) ≤ V_N(x) ≤ V_∞(x) (4.4)

holds for all x ∈ X.
Proof: Consider an initial value x_0 ∈ X and the m-step MPC feedback law µ_{N,m}. Then there exists an optimal control u* for x_0 such that µ_{N,m}(x_0, n) = u*(n) for n = 0, . . ., m − 1, and consequently x_{µ_{N,m}}(n) = x_{u*}(n) for n = 0, . . ., m also holds. Setting λ_n = l(x_{u*}(n), u*(n)) and ν = V_N(x_{u*}(m)), Proposition 4.1 yields (4.1) and (4.2), and hence (4.3) implies

V_N(x_0) − V_N(x_{µ_{N,m}}(m)) = Σ_{n=0}^{N−1} λ_n − ν ≥ α Σ_{n=0}^{m−1} λ_n = α Σ_{n=0}^{m−1} l(x_{µ_{N,m}}(n), µ_{N,m}(x_0, n)),

i.e., (2.8) with V = V_N. Now Proposition 2.4 yields (4.4).
Remark 4.3 Our analysis is easily extended to more general settings. As an example we show how an additional weight on the final term in the finite horizon optimal control problem can be included. In this case, the functional J_N is generalized to

J^ω_N(x_0, u) = Σ_{n=0}^{N−2} l(x_u(n), u(n)) + ω l(x_u(N − 1), u(N − 1))

for some ω ≥ 1. Note that the original form of the functional J_N from (2.6) is obtained by setting ω = 1, i.e., J_N = J^1_N. A straightforward extension of the proofs in the previous section reveals that the inequalities in Lemma 3.3 and Lemma 3.4 remain valid with B_N replaced by a weighted counterpart B^ω_N in which the last summand β(r, N − 1) carries the factor ω. Consequently, the inequalities (4.1), (4.2) and (4.3) change accordingly.
In view of Theorem 4.2, the value α can be interpreted as a performance bound which indicates how well the receding horizon MPC strategy approximates the infinite horizon problem. In the remainder of this section we present an optimization approach for computing α. To this end consider the following optimization problem.

Problem 4.4 Given β ∈ KL_0, N ≥ 1 and m ∈ {1, . . ., N − 1}, compute

α := inf_{λ_0,...,λ_{N−1},ν} (Σ_{n=0}^{N−1} λ_n − ν) / (Σ_{n=0}^{m−1} λ_n)

subject to the constraints (4.1), (4.2) and λ_0, . . ., λ_{N−1}, ν > 0.

The following is a straightforward corollary of Theorem 4.2.

Corollary 4.5 Let β ∈ KL_0, N ≥ 1 and m ∈ {1, . . ., N − 1} be given and assume that Problem 4.4 has an optimal value α ∈ (0, 1]. Then for each optimal control problem (2.1), (2.7) satisfying Assumption 3.1, inequality (2.8) holds with V = V_N and µ̃ = µ_{N,m}, and the estimate

α V_∞(x) ≤ α V^{µ_{N,m}}_∞(x) ≤ V_N(x) ≤ V_∞(x)

holds for all x ∈ X.
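If β is linear in its first argument, β(r, n) = c_n r, then B_k(r) = r (c_0 + · · · + c_{k−1}), and after normalizing Σ_{n&lt;m} λ_n = 1 (possible because objective and constraints are positively homogeneous) Problem 4.4 becomes a linear program. The following sketch assumes exactly the constraint structure (4.1)-(4.2) stated above and uses scipy.optimize.linprog (availability assumed):

```python
import numpy as np
from scipy.optimize import linprog  # availability assumed

def alpha_lp(c, N, m=1):
    """Optimal value of Problem 4.4 for beta(r, n) = c_n r.
    Variables (lambda_0, ..., lambda_{N-1}, nu); we normalize
    sum_{n<m} lambda_n = 1 and minimize sum_n lambda_n - nu subject to
    (4.1), (4.2). Strict positivity lambda_n, nu > 0 is relaxed to >= 0,
    which leaves the infimum unchanged."""
    c = list(c) + [0.0] * max(0, N - len(c))
    gamma = np.cumsum(c)                        # gamma[k-1] = B_k(1)
    obj = np.concatenate([np.ones(N), [-1.0]])  # sum_n lambda_n - nu
    A_ub, b_ub = [], []
    # (4.1): sum_{n=k}^{N-1} lambda_n <= B_{N-k}(lambda_k), k = 0, ..., N-2
    for k in range(N - 1):
        row = np.zeros(N + 1)
        row[k:N] = 1.0
        row[k] -= gamma[N - k - 1]
        A_ub.append(row); b_ub.append(0.0)
    # (4.2): nu <= sum_{n=0}^{j-1} lambda_{n+m} + B_{N-j}(lambda_{m+j})
    for j in range(N - m):
        row = np.zeros(N + 1)
        row[-1] = 1.0
        row[m:m + j] -= 1.0
        row[m + j] -= gamma[N - j - 1]
        A_ub.append(row); b_ub.append(0.0)
    A_eq = [np.concatenate([np.ones(m), np.zeros(N - m + 1)])]
    res = linprog(obj, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=[1.0], bounds=(0, None),
                  method="highs")
    return res.fun

def minimal_horizon(c, m=1, N_max=50):
    """Smallest N for which alpha > 0, i.e., provable asymptotic stability."""
    for N in range(m + 1, N_max + 1):
        if alpha_lp(c, N, m) > 0.0:
            return N
    return None
```

As a consistency check against Section 6: for c_0 = γ = 6 and c_n = 0, n ≥ 1 (the worst case (6.3)), minimal_horizon([6.0]) returns N = 11, matching the value reported for case (6.5)(a); for the exponential data c_n = 2 · 0.5^n with N = 2, m = 1, the LP value α = −3 can also be verified by hand.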

Asymptotic stability
In this section we show how the performance bound α can be used in order to conclude asymptotic stability of the MPC closed loop. More precisely, we investigate the asymptotic stability of the zero set of l*. To this end we make the following assumption.
Assumption 5.1 There exists a closed set A ⊂ X satisfying: (i) For each x ∈ A there exists u ∈ U with f (x, u) ∈ A and l(x, u) = 0, i.e., we can stay inside A forever at zero cost.
(ii) There exist K_∞-functions α_1, α_2 such that the inequality

α_1(‖x‖_A) ≤ l*(x) ≤ α_2(‖x‖_A) (5.1)

holds for each x ∈ X, where ‖x‖_A := min_{y∈A} ‖x − y‖.
This assumption assures global asymptotic stability of A under the optimal feedback (2.5) for the infinite horizon problem, provided β(r, n) is summable. We remark that condition (ii) can be relaxed in various ways; e.g., it could be replaced by a detectability condition similar to the one used in [4]. However, in order to keep the presentation in this paper technically simple, we work with Assumption 5.1(ii) here. Our main stability result is formulated in the following theorem. As usual, we say that a feedback law µ asymptotically stabilizes a set A if there exists β ∈ KL_0 such that the closed loop system satisfies ‖x_µ(n)‖_A ≤ β(‖x_0‖_A, n) for all n ∈ N_0.

Theorem 5.2 Consider β ∈ KL_0, N ≥ 1 and m ∈ {1, . . ., N − 1} and assume that Problem 4.4 has an optimal value α ∈ (0, 1]. Then for each optimal control problem (2.1), (2.7) satisfying the Assumptions 3.1 and 5.1, the m-step MPC feedback law µ_{N,m} asymptotically stabilizes the set A. Furthermore, V_N is a corresponding m-step Lyapunov function in the sense that

V_N(x_{µ_{N,m}}(m)) ≤ V_N(x_0) − α Σ_{k=0}^{m−1} l(x_{µ_{N,m}}(k), µ_{N,m}(x_0, k)). (5.2)

Proof: From (5.1) and Lemma 3.2 we immediately obtain the inequality

α_1(‖x‖_A) ≤ V_N(x) ≤ B_N(α_2(‖x‖_A)).

The stated Lyapunov inequality (5.2) follows immediately from (2.8), which holds according to Corollary 4.5. Again using (5.1) we obtain V_m(x) ≥ α_1(‖x‖_A) and thus a standard construction (see, e.g., [16]) yields a KL-function ρ for which the inequality ‖x_{µ_{N,m}}(nm)‖_A ≤ ρ(‖x_0‖_A, n) holds. In addition, using the definition of µ_{N,m}, for n = 1, . . ., m − 1 we obtain corresponding bounds for the intermediate states, where we used (5.2) in the last inequality. Thus, for all n ∈ N_0 we obtain an estimate which eventually implies the desired asymptotic stability with a KL-function constructed from ρ.

Of course, Theorem 5.2 gives a conservative criterion in the sense that for a given system satisfying the Assumptions 3.1 and 5.1, asymptotic stability of the closed loop may well hold for smaller optimization horizons N. A trivial example for this is an asymptotically stable system (2.1) which does not depend on u at all and which will of course be "stabilized" regardless of N.
Hence, the best we can expect is that our condition is tight with respect to the information we use, i.e., that given β, N, m such that the assumption of Theorem 5.2 is violated, we can always find a system satisfying Assumptions 3.1 and 5.1 which is not stabilized by the MPC feedback law. The following Theorem 5.3 shows that this is indeed the case if β satisfies (3.3). Its proof relies on the explicit construction of an optimal control problem which is not stabilized. Although this is in principle possible for all m ∈ {1, . . ., N − 1}, we restrict ourselves to the classical feedback case, i.e., m = 1, in order to keep the construction technically simple.
Theorem 5.3 Consider β ∈ KL_0 satisfying (3.3) and N ≥ 1, m = 1 such that the assumption of Theorem 5.2 is violated. Then there exists an optimal control problem (2.1), (2.7) satisfying the Assumptions 3.1 and 5.1 which is not asymptotically stabilized by the MPC feedback law µ_{N,1}.
Proof: The state space, dynamics and running cost are constructed explicitly from values λ_0, . . ., λ_{N−1} and ν which are feasible for Problem 4.4 but violate the suboptimality condition. We intend to show that the set A = {x ∈ X | l*(x) = 0} is not asymptotically stabilized. This set A satisfies Assumption 5.1(i) for u = 0 and (ii) for α̃_1(r) = inf_{x∈X, ‖x‖_A ≥ r} l*(x) and α̃_2(r) = sup_{x∈X, ‖x‖_A ≤ r} l*(x). Due to the discrete nature of the state space, α̃_1 and α̃_2 are discontinuous, but they are easily under- and overbounded by continuous K_∞ functions α_1 and α_2, respectively. Furthermore, by virtue of (3.3) the optimal control problem satisfies Assumption 3.1 for u_x ≡ 0. Now we prove the existence of a trajectory which does not converge to A, which shows that asymptotic stability does not hold. To this end we abbreviate Λ = Σ_{n=0}^{N−1} λ_n (note that (9.1) implies ν > Λ) and investigate the values J_N((1, 0), u) for different choices of u. Case 1: u(0) = 0. In this case, regardless of the values u(n), n ≥ 1, we obtain x_u(n) = (2^{−n}, 0) and a corresponding lower bound for J_N((1, 0), u). In case the minimum in this bound is attained in λ_0, the (strict) inequality (4.1) for k = 0 yields J_N((1, 0), u) > Λ. If the minimum is attained in λ_1, then by (4.2) for j = 0 and (9.1) we obtain J_N((1, 0), u) ≥ ν > Λ. Thus, in both cases the inequality J_N((1, 0), u) > Λ holds.

Analysis of MPC schemes
Using the optimization Problem 4.4 we are now able to analyze the optimization horizon N needed in order to ensure stability and desired performance of the MPC closed loop. More precisely, given β from Assumption 3.1 and a desired α_0 ≥ 0, by solving Problem 4.4 we can compute the minimal horizon

N = min{N ∈ N | the optimal value α of Problem 4.4 satisfies α ≥ α_0}, (6.1)

which yields asymptotic stability and, in case α_0 > 0, ensures the performance estimate V^{µ_{N,m}}_∞(x) ≤ V_∞(x)/α_0. Note that even without sophisticated algorithms for finding the minimum in (6.1), the determination of N needs at most a couple of seconds using our MATLAB code.
We first observe that α from Problem 4.4 is monotone decreasing in β, i.e., if β_1(r, n) ≤ β_2(r, n) holds for all r ≥ 0 and all n ∈ N_0, then α_1 ≥ α_2 holds for the corresponding optimal values of Problem 4.4. This property immediately follows from the fact that a smaller β induces stronger constraints in the optimization problem. Consequently, the horizon N in (6.1) is monotone increasing in β. We emphasize that this is an important feature because in practice it will rarely be possible to compute a tight bound β in Assumption 3.1; typically only a more or less conservative upper bound will be available. The monotonicity property then ensures that any N computed using such an upper bound β will also be an upper bound on the true minimal horizon N for the system.
In the sequel, we will on the one hand investigate how different choices of the control horizon m and the terminal weight ω (cf. Remark 4.3) affect the horizon N. On the other hand, we will highlight how different characteristic features of β in Assumption 3.1, like, e.g., overshoot and decay rate, influence the horizon N. Since the controllability Assumption 3.1 involves the running cost l, the results of this latter analysis will in particular yield guidelines for the choice of l which allow the design of stable MPC schemes with small optimization horizons; we formulate and illustrate these guidelines in the ensuing Section 7 for finite and infinite dimensional examples. In our analysis we concentrate on mere asymptotic stability, i.e., we consider α_0 = 0; however, all computations yield qualitatively similar results for α_0 > 0. In what follows, for the sake of brevity, we concentrate on a couple of particularly illuminating controllability functions β, noting that many more details could be investigated if desired.
We start by investigating how our estimated minimal stabilizing horizon N depends on the accumulated overshoot represented by β, i.e., on the value γ > 0 satisfying

Σ_{n=0}^{∞} β(r, n) ≤ γ r. (6.2)

To this end, we use the observation that if N is large enough in order to stabilize each system satisfying Assumption 3.1 with

β(r, 0) = γ r and β(r, n) = 0 for n ≥ 1, (6.3)

then N is also large enough to stabilize each system satisfying Assumption 3.1 with β from (6.2). In particular, this applies to β(r, n) = Cσ^n r with C/(1 − σ) ≤ γ. The reason for this is that the inequalities (4.1), (4.2) for (6.3) form weaker constraints than the respective inequalities for (6.2); hence the minimal value α for (6.3) must be less than or equal to α for (6.2).
Thus, we investigate the "worst case" (6.3) numerically and compute how the minimal stabilizing N depends on γ. To this end we computed N from (6.1) for β from (6.3) for a range of values of γ; the results are shown in Figure 6.1. It is interesting to observe that the resulting values almost exactly satisfy N ≈ γ log γ, which leads to the conjecture that this expression describes the analytical "stability margin".
In order to see the influence of the control horizon m we have repeated this computation for m = [N/2] + 1, which numerically appears to be the optimal choice of m.The results are shown in Figure 6.2.
Here, one numerically observes N ≈ 1.4γ, i.e., we obtain a linear dependence between γ and N, and in particular we obtain stability for much smaller N than in the case m = 1. However, when using such control horizons m > 1, one should keep in mind that the control loop is closed only every m steps, i.e., the re-computation of the control value based on the current measurement is performed at the times 0, m, 2m, . . .. This implies that the larger m is chosen, the more limited the ability of the feedback controller to react to perturbations (caused, e.g., by external disturbances or modelling errors) becomes. On the other hand, if a large overshoot γ cannot be avoided and hardware constraints restrict the computational resources, then moderately increasing m may provide a good compromise in order to reduce N and thus the complexity of the optimization problem to be solved online.

Figures 6.1 and 6.2 show how fast the minimal stabilizing optimization horizon N grows depending on γ; obviously, the smaller γ is, the smaller N becomes. When dealing with a specific system, there are several ways to reduce γ. For instance, for an exponentially decaying running cost bound β(r, n) = Cσ^n r, it is interesting to know whether a small overshoot C or a fast decay rate σ is more important. To this end we consider four functions of this form which have in common that γ = C/(1 − σ) = 6 but which, as illustrated in Figure 6.3 for r = 1, differ in both the size of the overshoot C, decreasing from (a) to (d), and the speed of decay σ, which becomes slower from (a) to (d). The horizons N computed for these four cases show that, in order to ensure stability with small optimization horizon N for exponentially decaying β in Assumption 3.1, small overshoot is considerably more important than fast decay.

A similar analysis can be carried out for different types of finite time controllability. Here we can investigate the case of non-strict decay, a feature which is not present for exponentially decaying functions β. To this end, consider functions of the form β(r, n) = c_n r with, e.g., (a) c_0 = 6 and c_n = 0 for n ≥ 1, together with further sequences which again satisfy Σ_{n=0}^{∞} c_n = 6 and which are depicted in Figure 6.4 for r = 1. The resulting horizons confirm the conclusion drawn for the exponentially decaying functions (6.4)(a)-(d), i.e., that fast controllability with large overshoot requires a longer optimization horizon N than slower controllability with smaller overshoot. However, here the differences are less pronounced than in the exponentially decaying case. In fact, the results show that, besides the overshoot, a decisive feature determining the length of the stabilizing horizon N is the minimal time n_c for which β(r, n_c) < r, i.e., for which contraction can be observed. The longer horizon observed in (6.5)(c) compared to (6.4)(d) is mainly due to the fact that in the former we have n_c = 1 while in the latter we have n_c = 6. Finally, we investigate the effect of the weight ω introduced in Remark 4.3. To this end, for all the functions from (6.4) and (6.5) we have determined a weight ω such that the corresponding stabilizing optimization horizon N becomes as small as possible. The following table summarizes our numerical findings.

These results show that suitable tuning of ω reduces the optimization horizon in all cases except for (6.5)(d); there, a further reduction to N < 7 is not possible because N = 7 is the smallest horizon for which controllability to 0 is "visible" in the finite horizon functional J_N. It should, however, be noted that terminal weights ω > 1 have to be used with care, since a wrong choice of ω may also have a destabilizing effect: for instance, using ω = 25 in case (6.4)(c) leads to N = 9 instead of N = 7 for ω = 1.
The results also show that (6.3) is no longer the worst case for ω > 1.On the contrary, in the case (6.5)(a) (which is exactly (6.3) for γ = 6) we obtain the largest reduction of N from 11 to 2.
A reduction to N = 2, i.e., to the shortest possible horizon given that N = 1 results in a trivial optimal control problem, is possible in cases (6.4)(d) and (6.5)(a). The reason for this is that these two cases exhibit β(r, 1) < r, i.e., we observe contraction already after one time step. Numerical evidence indicates that stabilization with N = 2 and m = 1 is always possible in this case. This result actually carries over to the general case β(r, n) < r for all n ≥ n_c and some n_c ≥ 1, but only if we increase the control horizon m appropriately: our numerical investigations suggest that in this case we always obtain a stabilizing MPC controller when choosing N = n_c + 1, m = n_c and ω sufficiently large; e.g., in example (6.4)(b), where we have n_c = 2, we obtain N = 3 for m = 2 and ω = 15.
In the case just discussed we have N = m + 1, i.e., summation up to N − 1 = m in J_N from (2.6), and thus the effective optimization horizon coincides with the control horizon. In the PDE optimal control literature, this particular choice of N and m in an MPC scheme is often termed "instantaneous control" (cf., e.g., [7,8,10,14] and the references therein); an interesting spin-off from our analysis is thus an additional systems theoretic insight into why and when instantaneous control renders the closed loop system stable.

Design of MPC schemes
Our numerical findings from the previous section immediately lead to design guidelines for the choice of l, ω and m for obtaining stable MPC schemes with small optimization horizons N. These can be summarized as follows:
• design l in such a way that the overshoot γ = Σ_{n=0}^{∞} β(r, n)/r becomes as small as possible
• in case of exponential controllability β(r, n) = Cσ^n r, reducing the overshoot by reducing C is more efficient than by reducing σ
• in case of finite time controllability β(r, n) = c_n r, reducing the overshoot by reducing the c_n is more efficient than by reducing the time to reach l*(x) = 0
• terminal weights ω > 1 often lead to smaller N, but too large ω may have the opposite effect, so ω should be tuned with care
• enlarging m always leads to smaller N but may decrease the robustness of the closed loop since the feedback is evaluated less frequently
• systems which are contracting after some time n_c, i.e., β(r, n) < r for all n ≥ n_c, are always stabilized by choosing the "instantaneous control" parameters N = n_c + 1, m = n_c and ω sufficiently large
We illustrate the effectiveness of these guidelines by two examples. We start with a two dimensional linear example from [19]. Since this example is low dimensional and linear, V_N can be computed numerically. This fact was used in [19] in order to compute the minimal optimization horizon for a stabilizing MPC feedback law with m = 1, which turns out to be N = 5 (note that the numbering in [19] differs from ours).
In order to apply our approach we construct β and u_x meeting Assumption 3.1. Because the system is finite time controllable to 0, this is quite easy to accomplish: using a control which steers the system to 0 in two steps, one obtains a trajectory whose running cost vanishes for n ≥ 2 and thus Assumption 3.1 with a finite time controllability function β of type (3.2), cf. (7.1). Solving Problem 4.4 for this β we obtain a minimal stabilizing horizon N = 12, which is clearly conservative compared to the value N = 5 computed in [19]. Note, however, that instead of using the full information about the functions V_N, which are in general difficult to compute, we only use controllability information about the system. Now we demonstrate that despite this conservatism our design guidelines can be used to derive a modified design of the MPC scheme which yields stability for horizons N < 5. Recall that the smaller the overshoot γ is, the better the estimate for N becomes. A look at (7.1) reveals that in this example a reduction of the overshoot can be achieved by reducing the weight of u in l. For instance, if we modify l to l(x, u) = max{‖x‖_∞, |u|/2}, then (7.1) leads to a β with smaller overshoot. Solving Problem 4.4 for this β leads to a minimal stabilizing horizon N = 5. Using the terminal weight ω = 4 yields a further reduction to N = 4 and if, in addition, we are willing to implement a two step feedback, i.e., use m = 2, then we can reduce the stabilizing optimization horizon even further to N = 3. This illustrates how, just by using the controllability information about the system, our analysis can be used to design an MPC scheme reducing the optimization horizon N by 40%.
Our second example demonstrates that our design guidelines are also applicable to infinite dimensional systems. Even though in this case an explicit construction of the controllability function β and the control u_x in Assumption 3.1 is in general rather difficult, we can still apply our results by using the structure of the system equation in order to extract the necessary information about β. To this end, consider the infinite dimensional control system governed by the parabolic reaction-advection-diffusion PDE (7.2) with distributed control, with solutions y = y(t, x) for x ∈ Ω = (0, 1), boundary conditions y(t, 0) = y(t, 1) = 0, initial condition y(0, x) = y_0(x) and distributed control u(t, ·) ∈ L²(Ω). The corresponding discrete time system (2.1), whose solutions and control functions we denote by y(n, x) and u(n, x), respectively, is the sampled-data system obtained according to (2.2) with sampling period T = 0.025.
For the subsequent numerical computations we discretized the equation in space by finite differences on a grid with nodes x_i = i/M, i = 0, …, M, using backward (i.e., upwind) differences for the advection part y_x. Figure 7.1 shows the equilibria of the discretized system for u ≡ 0, ν = 0.1, µ = 10 and M = 25.
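A minimal sketch of this semidiscretization, assuming homogeneous Dirichlet boundary values as above; the reaction nonlinearity of (7.2) is not reproduced in this excerpt and is therefore left as a user-supplied function, and the advection coefficient is an assumed parameter:

```python
import numpy as np

def semidiscretize(nu, M, reaction, advection=1.0):
    """Finite-difference semidiscretization on the grid x_i = i/M,
    i = 0, ..., M, with boundary values y_0 = y_M = 0: central
    differences for the diffusion nu*y_xx and backward (upwind)
    differences for the advection part y_x, as described in the text.
    `reaction` stands in for the (not reproduced) nonlinearity."""
    h = 1.0 / M

    def rhs(y, u):
        # y and u carry the interior node values at x_1, ..., x_{M-1}
        yp = np.concatenate(([0.0], y, [0.0]))          # pad boundaries
        diffusion = nu * (yp[2:] - 2.0 * yp[1:-1] + yp[:-2]) / h**2
        y_x = (yp[1:-1] - yp[:-2]) / h                  # backward difference
        return diffusion + advection * y_x + reaction(y) + u

    return rhs
```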
Our goal is to stabilize the unstable equilibrium y* ≡ 0, which is possible because with the additive distributed control we can compensate the whole dynamics of the system. In order to achieve this task, a natural choice for the running cost l is a tracking type functional, which we implemented with λ = 10⁻³ for the discretized model in MATLAB using the lsqnonlin solver for the resulting optimization problem.
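The same receding-horizon optimization can be sketched in Python with scipy.optimize.least_squares, the analogue of lsqnonlin, by rewriting the finite horizon cost as a nonlinear least-squares problem. The cost structure ‖y‖² + λ‖u‖² and the abstract system map `step` are assumptions inferred from the description above, not the paper's exact implementation:

```python
import numpy as np
from scipy.optimize import least_squares

def mpc_step(step, y0, N, lam=1e-3):
    """One receding-horizon step: minimize the finite horizon cost
    J_N = sum_{n=0}^{N-1} ||y(n)||^2 + lam*||u(n)||^2 over the control
    sequence by stacking y(n) and sqrt(lam)*u(n) as residuals, in the
    spirit of the lsqnonlin approach; `step(y, u)` is the discrete-time
    system map and the control dimension equals the state dimension."""
    y0 = np.asarray(y0, float)
    m = len(y0)

    def residuals(uflat):
        us = uflat.reshape(N, m)
        y, res = y0, []
        for n in range(N):
            res.append(y)                      # contributes ||y(n)||^2
            res.append(np.sqrt(lam) * us[n])   # contributes lam*||u(n)||^2
            y = step(y, us[n])
        return np.concatenate(res)

    sol = least_squares(residuals, np.zeros(N * m))
    return sol.x[:m]   # apply only the first control value
```

For the scalar unstable test system y⁺ = 2y + u with N = 2, the optimum is u(0) = −2y/(1 + λ), which the solver recovers.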
The simulations shown in Figure 7.2 reveal that the performance of this controller is not completely satisfactory: for N = 11 the solution remains close to y* = 0 but does not converge, while for N = 3 the solution even grows. The reason for this behavior lies in the fact that in order to control the system to y* = 0, in (7.2) the control needs to compensate for y_x, i.e., any stabilizing control must satisfy $\|u(n,\cdot)\|^2_{L^2(\Omega)} \approx \|y_x(n,\cdot)\|^2_{L^2(\Omega)}$. Thus, for any stabilizing control sequence u we obtain $J_\infty(y_0,u) \gtrsim \lambda \sum_{n=0}^{\infty} \|y_x(n,\cdot)\|^2_{L^2(\Omega)}$, which, even for small values of λ, may be considerably larger than $l^*(y) = \|y\|^2_{L^2(\Omega)}$, resulting in a large β and thus the need for a large optimization horizon N in order to achieve stability. This effect can be avoided by changing l in such a way that $l^*(y)$ includes the advection term $\|y_x\|^2_{L^2(\Omega)}$. For this l the control effort needed in order to control (7.2) to y* = 0 is proportional to l*(y). Thus, the overshoot reflected in the controllability function β is now essentially proportional to 1 + λ and thus, in particular, small for our choice of λ = 10⁻³, which implies stability even for small optimization horizons N. The simulations using the corresponding discretized running cost illustrated in Figure 7.3 show that this is indeed the case: we obtain asymptotic stability even for the very small optimization horizons N = 2 (i.e., instantaneous control) and N = 3, with slightly better performance in the latter case.
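A concrete realization of a running cost with this structure can be sketched as follows; the exact l of the paper is not reproduced in this excerpt, so the discrete norms, the backward difference for y_x, and the additive combination are assumptions matching the description above:

```python
import numpy as np

def l2_norm_sq(v, h):
    # squared discrete L^2(0,1) norm on a grid with spacing h
    return h * float(np.dot(v, v))

def l_modified(y, u, h, lam=1e-3):
    """Running cost of the structure described in the text: l*(y)
    additionally penalizes the advection term y_x (here approximated by
    backward differences with boundary value y(0) = 0), so that the
    unavoidable control effort ~ ||y_x||^2 is proportional to l*(y)
    and the overshoot in beta scales like 1 + lam rather than 1/lam."""
    y_x = np.diff(np.concatenate(([0.0], y))) / h
    return l2_norm_sq(y, h) + l2_norm_sq(y_x, h) + lam * l2_norm_sq(u, h)
```

For y(x) = sin(πx) and u = 0 this evaluates to approximately ½ + π²/2, matching the continuous norms ‖y‖² = 1/2 and ‖y_x‖² = π²/2.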

Conclusions and outlook
We have presented a stability and performance analysis technique for unconstrained nonlinear MPC schemes which relies on a suitable controllability condition for the running cost. The proposed technique leads to a stability condition which can be formulated as a small optimization problem and which is tight with respect to the class of systems satisfying the assumed controllability condition. The numerical analysis based on this optimization problem was used to derive guidelines for the design of MPC schemes guaranteeing stability for small optimization horizons N. The effectiveness of these guidelines has been illustrated by a finite and an infinite dimensional example. Future research will include the generalization of the approach to situations where V_N cannot be expected to be a Lyapunov function, the inclusion of deterministic and stochastic uncertainties in the analysis, and the relaxation of Assumptions 3.1 and 5.1(ii) to more general controllability and detectability assumptions.

In case that (4.2) for j = N − m − 1 is an equality, we choose ν̃ (depending on ε) such that equality in (4.2) for j = N − m − 1 holds as well. This implies ν̃ ≤ ν and thus all other inequalities in (4.2) remain valid for all ε ∈ (0, λ_{N−1}). Now, by continuity of B_k, the value ν̃ depends continuously on ε, hence for ε > 0 sufficiently small we obtain (9.1) for ᾱ = α/2 > 0.