Analysis of unconstrained nonlinear MPC schemes with time varying control horizon

For discrete time nonlinear systems satisfying an exponential or finite time controllability assumption, we present an analytical formula for a suboptimality estimate for model predictive control schemes without stabilizing terminal constraints. Based on our formula, we perform a detailed analysis of the impact of the optimization horizon and the possibly time varying control horizon on stability and performance of the closed loop.


Introduction
The stability and performance analysis of model predictive control (MPC) schemes has attracted considerable attention during the last years. MPC relies on an iterative online solution of finite horizon optimal control problems in order to deal with an optimal control problem on an infinite horizon. To this end, a performance criterion - often the distance to some desired reference - is optimized over the predicted trajectories of the system. This method is particularly attractive due to its ability to explicitly incorporate constraints in the controller design. Due to the rapid development of efficient optimization algorithms, MPC becomes increasingly applicable also to nonlinear and large scale systems.
Two central questions in the analysis of MPC schemes are asymptotic stability, i.e., whether the closed loop system trajectories converge to the reference and stay close to it, and closed loop performance of the MPC controlled system. In particular - since desired performance specifications (like, e.g., minimizing energy or maximizing the output in a chemical process) can be explicitly included in the optimization objective - the latter provides information on how well this objective is eventually satisfied by the resulting closed loop system. For MPC schemes with stabilizing terminal constraints the available analysis methods have reached a certain degree of maturity, see, e.g., the survey [15] and the references therein. Despite their widespread use in applications, cf. [17], for schemes without stabilizing terminal constraints - considered in this paper - corresponding results are more recent and less elaborated. Concerning stability, the papers [1,5,11] show (under different types of controllability or detectability conditions) that stability can be expected if the optimization horizon is chosen sufficiently large, without, however, aiming at giving precise estimates for these horizons.
Closed loop performance of MPC controlled systems is measured by evaluating an infinite horizon functional along the closed loop trajectory. Suboptimality estimates, which typically allow one to conclude stability of the closed loop, are then obtained by comparing this value with the optimal value of the infinite horizon problem. In [19] an estimation method of this type for discrete time linear systems is presented which relies on a numerical approximation of the finite time optimal value function. Since for nonlinear or large scale systems this function is usually not computable, in [10] a method for finite or infinite dimensional discrete time nonlinear systems using ideas from relaxed dynamic programming has been presented. This approach allows for performance estimates based on controllability properties. Motivated by these results, in [6] a linear program has been developed whose solution precisely estimates the degree of suboptimality from exponential or finite time controllability.
The present paper builds upon [6], extending the analysis from this reference to MPC schemes with time varying control horizon, i.e., the interval between two consecutive optimizations or, equivalently, the interval on which each resulting open loop optimal control is applied. This setting is motivated by networked control systems in which the network performance determines the control horizon, see [8,9] and the discussion after Remark 2.4 below. In particular, we thoroughly investigate the impact of different - possibly time varying - control horizons on the closed loop behavior.
Moreover, we give an analytic solution to the linear program from [6] and - as a consequence - an explicit formula for the suboptimality estimate based on the KL_0-function characterizing our controllability assumption. This allows for a much more detailed analysis, which is the main contribution of this paper. We investigate - among others - the impact of the optimization horizon, i.e., the interval on which the predicted trajectory is optimized (and which we choose identical to the prediction horizon), on the suboptimality and stability of the MPC closed loop. In particular, we prove conjectures from [6] with respect to minimal stabilizing horizons which were based on numerical observations. Furthermore, we analyze the influence of adding a final weight in the finite horizon cost functional.
The paper is organized as follows. In Section 2 we describe the setup and problem formulation. In Section 3 we introduce our controllability assumption and briefly summarize the needed results from [6]. In Section 4 we show that our suboptimality result can be used to conclude stability, extending [6, Section 5] to time varying control horizons. In Section 5 we present the explicit formula for our suboptimality index α in Theorem 5.4. In the ensuing sections we examine effects of different parameters on α. In particular, in Section 6 we investigate the impact of the optimization horizon, and in Sections 7 and 8 we scrutinize qualitative and quantitative effects, respectively, of different control horizons. Finally, in Section 9 we illustrate our results with numerical examples. A number of technical lemmata and their proofs can be found in the appendix in Section 10.

Setup and Preliminaries
We consider a nonlinear discrete time control system given by

x(n + 1) = f(x(n), u(n)),  x(0) = x_0,   (1)

with x(n) ∈ X and u(n) ∈ U for n ∈ N_0. Here the state space X and the control value space U are arbitrary metric spaces. We denote the space of control sequences u : N_0 → U by U and the solution trajectory for given u ∈ U by x_u(·). Note that constraints can be incorporated by replacing X and U by appropriate subsets of the respective spaces. For simplicity of exposition, however, we will not address feasibility issues in this paper.
A typical class of such discrete time systems is given by sampled-data systems induced by a controlled - finite or infinite dimensional - differential equation with sampling period T > 0. In this situation, the discrete time n corresponds to the continuous time t = nT.
Our goal is to minimize the infinite horizon cost J_∞(x_0, u) = Σ_{n=0}^∞ l(x_u(n), u(n)) with running cost l : X × U → R_0^+ by a multistep state feedback control (rigorously defined below in Definition 2.2).
We denote the optimal value function for this problem by V_∞(x_0) := inf_{u∈U} J_∞(x_0, u). Since infinite horizon optimal control problems are in general computationally infeasible, we use a receding horizon approach in order to compute an approximately optimal controller. To this end, we consider the finite horizon functional (2) with optimization horizon N ∈ N, inducing the optimal value function (3). By solving this finite horizon optimal control problem we obtain N control values µ(x_0, 0), µ(x_0, 1), ..., µ(x_0, N − 1) depending on the state x_0. Implementing the first m_0 ∈ {1, ..., N − 1} elements of this sequence yields a new state x(m_0). Iterative application of this construction then provides a control sequence on the infinite time interval, whose properties we intend to investigate in this paper. To this end, we introduce a more formal description of this construction.
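The receding horizon construction just described can be sketched in a few lines of Python. The snippet below is purely illustrative: the brute-force optimizer over a finite control set, the unstable scalar dynamics f and the quadratic running cost ell are our own choices, not taken from the paper.

```python
import itertools

def mpc_step(f, ell, x0, N, U):
    # solve the finite horizon problem (3) by brute force over
    # all control sequences of length N with values in the finite set U
    best_cost, best_seq = float("inf"), None
    for seq in itertools.product(U, repeat=N):
        x, cost = x0, 0.0
        for u in seq:
            cost += ell(x, u)
            x = f(x, u)
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq

def multistep_mpc(f, ell, x0, N, U, horizons):
    # receding horizon iteration: at each transmission time re-optimize
    # and implement the first m_k elements of the optimal sequence
    x, traj = x0, [x0]
    for m_k in horizons:  # admissible control horizon sequence (m_i)
        u_star = mpc_step(f, ell, x, N, U)
        for u in u_star[:m_k]:
            x = f(x, u)
            traj.append(x)
    return traj

# toy example (illustrative choices): unstable dynamics, quadratic cost
f = lambda x, u: 2.0 * x + u
ell = lambda x, u: x * x + 0.1 * u * u
U = [-2.0, -1.0, 0.0, 1.0, 2.0]
traj = multistep_mpc(f, ell, 1.0, 4, U, [1, 2, 1, 3, 2])
```

Here the time varying control horizons m_k = 1, 2, 1, 3, 2 mimic the networked setting discussed after Remark 2.4.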
Definition 2.1. Given a set M ⊆ {1, ..., m̄}, m̄ ∈ N, we call a control horizon sequence (m_i)_{i∈N_0} admissible if m_i ∈ M holds for all i ∈ N_0. Furthermore, for k ∈ N_0 we define the transmission times σ(k) := Σ_{i=0}^{k−1} m_i (with σ(0) := 0). Using this notation, the applied control sequence can be expressed via the optimal controls computed at the times σ(k). A closed loop interpretation of this construction can be obtained via multistep feedback laws.
Definition 2.2. For m̄ ≥ 1 and M ⊆ {1, ..., m̄} a multistep feedback law is a map µ : X × {0, ..., m̄ − 1} → U which, for an admissible control horizon sequence (m_i)_{i∈N_0}, is applied according to the rule x_µ(0) = x_0, x_µ(n + 1) = f(x_µ(n), µ(x_µ(σ(k)), n − σ(k))) for σ(k) ≤ n < σ(k + 1). Using this definition, the above construction is equivalent to the following definition.

Definition 2.3. For m̄ ≥ 1 and N ≥ m̄ + 1 we define the multistep MPC feedback law µ_{N,m̄}(x_0, n) := u*(n), where u* is a minimizing control for (3) with initial value x_0.
Remark 2.4. For simplicity of exposition here we assume that the infimum in (3) is a minimum, i.e., that a minimizing control sequence u* exists.
Note that in "classical" MPC only the first element of the obtained finite horizon optimal sequence of control values is used. Our main motivation for considering this generalized feedback concept with varying control horizons m_i are networked control systems (NCS) in which the transmission channel from the controller to the plant is subject to packet dropouts. In order to compensate for these dropouts, at each successful transmission time σ(k) a whole sequence of control values is transmitted to the plant. This sequence is then used until the next successful transmission at time σ(k + 1) = σ(k) + m_k; for details see [8]. Note that in this application the control horizon m_k is not known at time σ(k).
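This buffering mechanism can be sketched as follows; the independent-dropout model, the deadbeat toy controller and all names are our own illustrative assumptions, not part of the paper or of [8].

```python
import random

def simulate_ncs(f, controller, x0, steps, drop_prob, N, seed=0):
    # plant-side buffering: the controller transmits a whole sequence of
    # N control values; on a dropout the plant keeps consuming the last
    # successfully received sequence, so the realized control horizon m_k
    # is the (a priori unknown) time between successful transmissions
    rng = random.Random(seed)
    x, buf, idx = x0, None, 0
    horizons, since_last = [], 0
    for n in range(steps):
        seq = controller(x, N)  # computed every step ...
        if buf is None or rng.random() >= drop_prob:  # ... received only sometimes
            if buf is not None:
                horizons.append(since_last)  # realized control horizon m_k
            buf, idx, since_last = seq, 0, 0
        x = f(x, buf[min(idx, N - 1)])  # hold the last value if the buffer runs out
        idx += 1
        since_last += 1
    return x, horizons

# illustrative toy plant and deadbeat controller for x(n+1) = 2x(n) + u(n)
f = lambda x, u: 2.0 * x + u
controller = lambda x, N: [-2.0 * x] + [0.0] * (N - 1)
```

With drop_prob = 0 every control horizon equals 1 and the scheme reduces to "classical" MPC; with dropouts the realized horizons m_k vary with the network, exactly the situation analyzed in this paper.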
In this paper we consider the conceptually simplest MPC approach imposing neither terminal costs nor terminal constraints. In order to measure the suboptimality degree of the multistep feedback for the infinite horizon problem we define V_∞^µ(x_0) as the infinite horizon cost corresponding to the multistep MPC closed loop. Our approach relies on the following result from relaxed dynamic programming [13,18], which is a straightforward generalization of [6, Proposition 2.4]; cf. [8] for a proof.

Proposition 2.5. Consider a multistep feedback law µ : X × {0, ..., m̄ − 1} → U, a set M ⊆ {1, ..., m̄} and a function V : X → R_0^+ and assume that for each admissible control horizon sequence (m_i)_{i∈N_0} and each x_0 ∈ X the corresponding solution x_µ(n) with x_µ(0) = x_0 satisfies

V(x_0) ≥ V(x_µ(m_0)) + α Σ_{n=0}^{m_0−1} l(x_µ(n), µ(x_0, n))   (5)

for some α ∈ (0, 1]. Then for all x_0 ∈ X and all admissible control horizon sequences the estimate αV_∞(x_0) ≤ αV_∞^µ(x_0) ≤ V(x_0) holds.

Controllability and performance bounds
In this section we introduce an asymptotic controllability assumption and deduce several consequences for our optimal control problem. In order to facilitate this relation we will formulate our basic controllability assumption below not in terms of the trajectory but in terms of the running cost l along a trajectory.
To this end, we say that a continuous function ρ : R_{≥0} → R_{≥0} is of class K_∞ if it satisfies ρ(0) = 0, is strictly increasing and is unbounded. Furthermore, we say that a continuous function β : R_{≥0} × R_{≥0} → R_{≥0} is of class KL_0 if for each r > 0 we have lim_{t→∞} β(r, t) = 0 and for each t ≥ 0 we either have β(·, t) ∈ K_∞ or β(·, t) ≡ 0. Note that in order to allow for tighter bounds on the actual controllability behavior of the system we use a larger class than the usual class KL. It is, however, easy to see that each β ∈ KL_0 can be overbounded by a β̃ ∈ KL, e.g., by setting β̃(r, t) = sup_{τ≥t} β(r, τ) + e^{−t} r. Moreover, we define l*(x) := min_{u∈U} l(x, u).

Assumption 3.1. Given a function β ∈ KL_0, for each x_0 ∈ X there exists a control function u_{x_0} ∈ U satisfying l(x_{u_{x_0}}(n), u_{x_0}(n)) ≤ β(l*(x_0), n) for all n ∈ N_0.

Special cases for β are

β(r, n) = Cσ^n r   (6)

for real constants C ≥ 1 and σ ∈ (0, 1), i.e., exponential controllability, and

β(r, n) = c_n r   (7)

for some real sequence (c_n)_{n∈N_0} with c_n ≥ 0 and c_n = 0 for all n ≥ n_0, i.e., finite time controllability (with linear overshoot).
For certain results it will be useful to have the property

β(r, n + m) ≤ β(β(r, n), m) for all r ≥ 0 and all n, m ∈ N_0.   (8)

Property (8) ensures that any sequence of the form λ_n = β(r, n), r > 0, also fulfills λ_{n+m} ≤ β(λ_n, m). It is, for instance, always satisfied in case (6) and satisfied in case (7) if and only if c_{n+m} ≤ c_n c_m holds. If needed, this property can be assumed without loss of generality, cf. [6, Section 3].
In order to ease notation, we define the value

B_N(r) := Σ_{n=0}^{N−1} β(r, n)   (9)

for any r ≥ 0 and any N ∈ N_{≥1}. An immediate consequence of Assumption 3.1 and Bellman's optimality principle V_N(x) = min_{u∈U} {l(x, u) + V_{N−1}(f(x, u))} are the following lemmata from [6].

Lemma 3.2. Suppose Assumption 3.1 holds. Then V_N(x) ≤ B_N(l*(x)) holds for all x ∈ X and all N ≥ 1.
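For β linear in its first argument, B_N(r) and the induced values γ_k = B_k(r)/r (which play the central role from Section 5 on) are straightforward to tabulate. The following sketch, with illustrative parameter values of our own choosing, covers both special cases (6) and (7).

```python
def beta_exp(C, sigma):
    # exponential controllability, type (6): beta(r, n) = C * sigma**n * r
    return lambda r, n: C * sigma**n * r

def beta_finite(c):
    # finite time controllability, type (7): beta(r, n) = c_n * r
    return lambda r, n: (c[n] if n < len(c) else 0.0) * r

def B(beta, N, r=1.0):
    # B_N(r) = sum over n = 0, ..., N-1 of beta(r, n), cf. (9)
    return sum(beta(r, n) for n in range(N))

# gamma_k = B_k(r)/r; for beta linear in r this is simply B_k(1)
beta = beta_exp(2.0, 0.5)
gammas = [B(beta, k) for k in range(1, 6)]
```

In the exponential case one has γ_k = C(1 − σ^k)/(1 − σ), so the γ_k are increasing and bounded by C/(1 − σ).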
Lemma 3.3. Suppose Assumption 3.1 holds and consider x_0 ∈ X and an optimal control u* for the finite horizon optimal control problem (3) with optimization horizon N ≥ 1. Then for each j = 0, ..., N − 2 the inequality

Σ_{n=j}^{N−1} l(x_{u*}(n), u*(n)) ≤ B_{N−j}(l*(x_{u*}(j)))

and for each m = 1, ..., N − 1 and each j = 0, ..., N − m − 1 the inequality

V_N(x_{u*}(m)) ≤ Σ_{n=0}^{j−1} l(x_{u*}(n + m), u*(n + m)) + B_{N−j}(l*(x_{u*}(m + j)))

holds with B_{N−j} from (9).

Now we provide a constructive approach in order to compute α in (5) for systems satisfying Assumption 3.1. Note that (5) only depends on m_0 and not on the remainder of the control horizon sequence. Hence, we can perform the computation separately for each control horizon m and obtain the desired α for variable m by minimizing over the α-values for all admissible m.
Proposition 3.4. Suppose Assumption 3.1 holds and let x_0 ∈ X and an optimal control u* for (3) be given. Then the sequence λ_n = l(x_{u*}(n), u*(n)), n = 0, ..., N − 1, and the value ν = V_N(x_{u*}(m)) satisfy the inequalities (13) and (14).

Proof. If the stated conditions hold, then λ_n and ν meet the inequalities given in Lemma 3.3, which is exactly (13) and (14).

Using this proposition, a sufficient condition for suboptimality of the MPC feedback law µ_{N,m} is given in Theorem 3.5, which is proved in [6].

Theorem 3.5. Consider β ∈ KL_0, N ≥ 1, m ∈ {1, ..., N − 1}, and assume that all sequences λ_n > 0, n = 0, ..., N − 1, and values ν > 0 fulfilling (13), (14) satisfy Σ_{n=0}^{N−1} λ_n − ν ≥ α Σ_{n=0}^{m−1} λ_n for some α ∈ (0, 1]. Then for each optimal control problem (1), (3) satisfying Assumption 3.1 the assumptions of Proposition 2.5 are satisfied for the multistep MPC feedback law µ_{N,m} and, in particular, the inequality αV_∞(x) ≤ αV_∞^{µ_{N,m}}(x) ≤ V_N(x) ≤ V_∞(x) holds.

In view of Theorem 3.5, the value α can be interpreted as a performance bound which indicates how well the receding horizon MPC strategy approximates the infinite horizon problem. In the remainder of this section we present an optimization based approach for computing α. To this end, consider the following optimization problem.

Problem 3.6. Given β ∈ KL_0, N ≥ 1 and m ∈ {1, ..., N − 1}, compute

α := inf (Σ_{n=0}^{N−1} λ_n − ν) / (Σ_{n=0}^{m−1} λ_n)

subject to the constraints (13), (14), and λ_0, ..., λ_{N−1}, ν > 0.
The following is a straightforward corollary of Theorem 3.5.

Corollary 3.7. Let β ∈ KL_0, N ≥ 1 and m ∈ {1, ..., N − 1} be given and denote the optimal value of Problem 3.6 by α_{N,m}. If α_{N,m} ∈ (0, 1], then the assertion of Theorem 3.5 holds with α = α_{N,m}.
As already mentioned in [6, Remark 4.3], our setting can easily be extended to include an additional weight ω ≥ 1 on the final term, i.e., to altering our finite time cost functional by adding (ω − 1)l(x_u(N − 1), u(N − 1)). Note that the original form of the functional J_N is obtained by setting ω = 1. All results in this section remain valid if the statements are suitably adapted. In particular, (2) and (9) become their correspondingly weighted counterparts, the latter being B_N(r) := Σ_{n=0}^{N−2} β(r, n) + ω β(r, N − 1) (16), and the formula in Problem 3.6 alters to the extended version (17).

Asymptotic stability

In this section, which extends [6, Section 5] to varying control horizons, we show how the performance bound α = α^ω_{N,m} can be used in order to conclude asymptotic stability of the MPC closed loop. More precisely, we investigate the asymptotic stability of the zero set A of l*. To this end, we make the following assumption.

Assumption 4.1. (i) For each x ∈ A there exists u ∈ U with f(x, u) ∈ A and l(x, u) = 0, i.e., we can stay inside A forever at zero cost.
(ii) There exist K_∞-functions α_1, α_2 such that the inequality

α_1(‖x‖_A) ≤ l*(x) ≤ α_2(‖x‖_A)   (18)

holds for each x ∈ X, where ‖x‖_A := min_{y∈A} ‖x − y‖.
This assumption ensures global asymptotic stability of A under the optimal feedback for the infinite horizon problem, provided β(r, n) is summable. We remark that condition (ii) can be relaxed in various ways, e.g., it could be replaced by a detectability condition similar to the one used in [5]. However, in order to keep the presentation in this paper technically simple we will work with Assumption 4.1(ii) here. Our first stability result is formulated in the following theorem. Here we say that a multistep feedback law µ asymptotically stabilizes a set A if there exists β ∈ KL_0 such that for all admissible control horizon sequences the closed loop system satisfies ‖x_µ(n)‖_A ≤ β(‖x_0‖_A, n).

Theorem 4.2. Consider β ∈ KL_0, m̄ ≥ 1 and N ≥ m̄ + 1 and a set M ⊆ {1, ..., m̄}. Assume that α := min_{m∈M} {α^ω_{N,m}} > 0, where α^ω_{N,m} denotes the optimal value of optimization Problem 3.6. Then for each optimal control problem (1), (3) satisfying Assumptions 3.1 and 4.1 the multistep MPC feedback law µ_{N,m} asymptotically stabilizes the set A for all admissible control horizon sequences (m_i)_{i∈N_0}. Furthermore, the function V_N is a Lyapunov function at the transmission times σ(k) in the sense that the inequality (19) holds for all k ∈ N_0 and x_0 ∈ X.
Proof. From (18) and Lemma 3.2 we immediately obtain the inequality V_N(x) ≤ B_N(α_2(‖x‖_A)). Note that B_N ∘ α_2 is again a K_∞-function. The stated Lyapunov inequality (19) follows immediately from the definition of α and (5), which holds according to Corollary 3.7 for all m ∈ M. Again using (18) we obtain V_m(x) ≥ α_1(‖x‖_A), and thus a standard construction (see, e.g., [16]) yields a KL-function ρ for which the inequality V_N(x_{µ_{N,m}}(σ(k))) ≤ ρ(V_N(x), k) ≤ ρ(V_N(x), σ(k)/m̄) holds. In addition, using the definition of µ_{N,m}, for p = 1, ..., m_k − 1, k ∈ N_0, and abbreviating x(n) = x_{µ_{N,m}}(n), we obtain a corresponding bound along the intermediate steps, where we have used (19) in the last inequality. Hence, we obtain the desired estimate and thus asymptotic stability with a KL-function constructed from ρ, α_1 and B_N ∘ α_2.

Remark 4.3. (i) For the "classical" MPC case m = 1 and β satisfying (8) it is shown in [6, Theorem 5.3] that the criterion from Theorem 4.2 is tight in the sense that if α < 0 holds then there exists a control system which satisfies Assumption 3.1 but which is not stabilized by the MPC scheme. We conjecture that the same is true for the general case m ≥ 2.
(ii) Note that in Theorem 4.2 we use a criterion for arbitrary but fixed m ∈ M in order to conclude asymptotic stability for time varying m_i ∈ M. This is possible since our proof yields V_N as a common Lyapunov function for all m ∈ M, cf. also [12, Section 2.1.2].

Explicit formula for α^ω_{N,m}
In this section we continue the analysis of Problem 3.6 in the extended version (17), i.e., including an additional terminal weight. Although this is an optimization problem of much lower complexity than the original MPC optimization problem, it is still in general nonlinear. However, it becomes a linear program if β(r, n) (and thus B_k(r) from (9)) is linear in r.
Lemma 5.1. Let β(r, t) be linear in its first argument. Then Problem 3.6 yields the same optimal value α^ω_{N,m} as the corresponding problem subject to the (now linear) constraints (13), (14) with B_N from (16). For a proof we refer to [6, Remark 4.3 and Lemma 4.6], observing that this proof is easily extended to ω ≥ 1.
Proposition 5.2. Let β(·, ·) be linear in its first argument and define γ_k := B_k(r)/r. Then the optimal value of Problem 3.6 equals the optimal value of the reduced optimization problem with constraints (23)-(25).

Proof. We proceed from the linear optimization problem stated in Lemma 5.1 and show that Inequality (14), j = N − m − 1, is active in the optimum. To this end, we assume the opposite and deduce a contradiction. λ_{N−1} > 0 allows - due to the continuity of B_{m+1}(λ_{N−1}) with respect to λ_{N−1} - for reducing this variable without violating Inequality (14), j = N − m − 1. As a consequence the objective function decreases strictly whereas all other constraints remain valid. Hence, λ_{N−1} = 0 has to hold in the optimum. But then the right hand side of (14), j = N − m − 1, is equal to zero, which - in combination with ν ≥ 0 - leads to the claimed contradiction. This enables us to treat Inequality (14), j = N − m − 1, as an equality constraint. In conjunction with the non-negativity conditions imposed on λ_m, ..., λ_{N−1} this ensures ν ≥ 0. Moreover, λ_0 ≥ 0 is satisfied for all feasible points due to Inequality (13), k = 0, and the linearity of B_N. Next, we utilize Equalities (22) and (14), j = N − m − 1, in order to eliminate ν and λ_0 from the considered optimization problem. Using these equalities and the definition of γ_{m+1} converts the objective function from Lemma 5.1 into the desired form. Furthermore, Equality (22) provides the equivalence of Inequalities (13), k = 0, and (23). Taking Equality (14) into account and shifting the index j shows the equivalence to (25), j = m, ..., N − 2. Paraphrasing (13) provides (24) for k = 1, ..., N − 2.
Before we proceed, we formulate Problem 5.3 by dropping Inequalities (24), j = m, ..., N − 2. The solution of this relaxed optimization problem paves the way for dealing with Problem 3.6.
Theorem 5.4. Let β(·, ·) be linear in its first argument and satisfy (8). Then the optimal value α = α^ω_{N,m} of Problem 3.6 for given optimization horizon N, control horizon m, and weight ω on the final term satisfies α^ω_{N,m} = 1 if and only if ω ≥ γ_{m+1}. Otherwise, we obtain the explicit formula (26).

Proof. We have shown that the linear optimization problem stated in Proposition 5.2 yields the same optimal value as Problem 3.6 for KL_0-functions which are linear in their first argument. Technically, this is posed as a minimization problem. Taking the restriction λ_{N−1} ≥ 0 into account leads to the question whether the coefficient of λ_{N−1} in the objective is positive or not. As a consequence, the aim is either minimizing or maximizing λ_{N−1}. In the first case, i.e., γ_{m+1} − ω ≤ 0, choosing λ_1 = ... = λ_{N−1} = 0 solves the considered task and provides the optimal value α^ω_{N,m} = 1. In order to prove the assertion we solve the relaxed Problem 5.3 and show that its optimum is also feasible for Problem 3.6. Suppose that γ_{m+1} − ω > 0 holds; then Lemma 10.4 shows that the optimum is characterized by the linear system of equations Aλ = b with A and b from Problem 5.3. We proceed by deriving formulae for λ_{N−2}, ..., λ_1 depending (only) on λ_{N−1}. These allow for an explicit calculation of λ_{N−1} from A_1 λ = b_1. To this end, define δ_i := −d_i > 0 and begin by showing an auxiliary equality for i = 1, ..., N − 1 − m by induction, which is obvious for i = 1. Thus, we continue with the induction step using Lemma 10.2. Similarly, in consideration of (33) applied with N − 1 = m one obtains the corresponding representation of λ_m. We then consider the left hand side of A_1 λ = b_1 and compute the common denominator of this expression.
Thus, the numerator equals λ_{N−1} multiplied by a coefficient which we compute using (33) from Lemma 10.2 with δ_{N−1−j} = γ_{N−m+j} − 1. Hence, taking the coefficient (γ_{m+1} − ω) of λ_{N−1} in the objective function and b_1 = γ_N − 1 into account, we obtain formula (26) as the optimal value of Problem 5.3. However, the assertion claims this to be the optimal value for Problem 3.6 as well. In order to prove this it suffices to show that the optimum of Problem 5.3 satisfies the Inequalities (24), j = m, ..., N − 2. As a consequence, it solves the optimization problem stated in Proposition 5.2, which is equivalent to Problem 3.6. As a byproduct, this covers the necessity of the previously considered condition γ_{m+1} − ω ≤ 0 in order to obtain α^ω_{N,m} = 1. We perform a pairwise comparison of Inequalities (25) and (24) for j ∈ {m, ..., N − 2} in order to show that the Inequalities (24), j = m, ..., N − 2, are dispensable. To this end, it suffices to show Inequality (28). Equation (27) characterizes the components λ_j, j = m, ..., N − 2, in the optimum of Problem 5.3. Using this representation of λ_j, which (only) depends on λ_{N−1}, Inequality (28) is equivalent to an inequality whose left hand side can be computed explicitly, which completes the proof.

Remark 5.5. If condition (8) is not satisfied, the α^ω_{N,m}-value deduced in Theorem 5.4 may still be used as a lower bound for the optimal value of Problem 3.6 for KL_0-functions which are linear in their first argument, cf. Corollary 6.1.
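Theorem 5.4 makes α^ω_{N,m} explicitly computable. As a sanity check we evaluate the special case m = 1, ω = 1 under exponential controllability (6), for which the formula known from [6] reads α_N = 1 − (γ_N − 1) ∏_{i=2}^{N} (γ_i − 1) / (∏_{i=2}^{N} γ_i − ∏_{i=2}^{N} (γ_i − 1)); the parameter values C = 2, σ = 0.5 below are our own illustration.

```python
from math import prod  # Python 3.8+

def gamma(C, sigma, k):
    # gamma_k = B_k(1) for beta(r, n) = C * sigma**n * r, type (6)
    return C * (1.0 - sigma**k) / (1.0 - sigma)

def alpha_classical(C, sigma, N):
    # suboptimality bound for m = 1, omega = 1 (formula from [6]);
    # positivity of alpha_N guarantees asymptotic stability, cf. Theorem 4.2
    g = [gamma(C, sigma, i) for i in range(2, N + 1)]
    p, q = prod(g), prod(gi - 1.0 for gi in g)
    return 1.0 - (g[-1] - 1.0) * q / (p - q)

vals = [alpha_classical(2.0, 0.5, N) for N in (5, 10, 20)]
```

The computed values illustrate Corollary 6.1 below (α → 1 as N → ∞) and Proposition 6.2: for C = 1 the bound is positive for every horizon, whereas for C = 2, σ = 0.5 the horizon N = 5 is still too short.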
At first glance, exponential controllability with respect to the stage costs may seem restrictive. However, since the stage cost can be used as a design parameter, cf. [6, Section 7], this includes even systems which are only asymptotically controllable. In order to illustrate this assertion we consider the control system defined by x(n + 1) = x(n) + u(n)x(n)^3. This system is asymptotically stabilizable with the control function u(·) ≡ −1, i.e., x(n + 1) = x(n) − x(n)^3. However, it is not exponentially stabilizable. Defining l(x(n), u(n)) appropriately for 0 < x(n) < 1 and l(x(n), u(n)) := x(n) otherwise allows for choosing β(r, t) = re^{−t/e}, i.e., a KL-function of type (6). We have to establish an inequality which implies Assumption 3.1 inductively and which is equivalent to a condition that holds since x(n) ≤ 1. Thus, we have obtained exponential controllability with respect to suitably chosen stage costs.
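The non-exponential decay in this example is easy to verify numerically: the following sketch (the initial value 0.5 is an arbitrary choice of ours) simulates the closed loop under u ≡ −1 and exhibits the algebraic decay x(n) ≈ (2n)^{−1/2}, which rules out any bound of the form Cσ^n x(0).

```python
x = 0.5
xs = [x]
for n in range(10000):
    x = x - x**3  # closed loop x(n+1) = x(n) - x(n)^3 under u = -1
    xs.append(x)
# the one-step contraction factor x(n+1)/x(n) = 1 - x(n)**2 tends to 1,
# so the decay is slower than any fixed exponential rate sigma < 1
```

This is exactly why the stage cost has to be redesigned before Assumption 3.1 can hold with a type (6) function β.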
Remark 5.6.Note that Assumption 3.1 is not merely an abstract condition.Rather, in connection with Formula (26) it can be used for analyzing differences in the MPC closed loop performance for different stage costs l(•, •) and thus for developing design guidelines for selecting good cost functions l(•, •).This has been carried out, for instance, for the linear wave equation with boundary control in [2], for a semilinear parabolic PDE with distributed and boundary control in [3] (see also [6] for a preliminary study), and for a discrete time 2d test example in [6].
Characteristics of α^ω_{N,m} depending on the optimization horizon N

Theorem 5.4 enables us to easily compute the performance bounds α^ω_{N,m} which are needed in Theorem 4.2 in order to prove stability, provided β is known. However, even if β is not known exactly, we can deduce valuable information. The following corollary is obtained by a careful analysis of the fraction in (26).

Corollary 6.1. For each fixed m, β of type (6) or (7) and ω ≥ 1 we have lim_{N→∞} α^ω_{N,m} = 1. In particular, for sufficiently large N the assumptions of Theorem 4.2 hold and hence the closed loop system is asymptotically stable.
Proof. It suffices to investigate the case γ_{m+1} − ω > 0 because otherwise the assertion holds trivially. We have to show that the subtrahend of the difference in formula (26) converges to zero as the optimization horizon N tends to infinity. To this end, we divide the term under consideration into two factors, one of which is bounded for sufficiently large N. Hence, we focus on the other factor; showing the convergence of this factor to zero for N tending to infinity completes the proof. Thus, it suffices to prove ∏_{i=m+2}^{N} (γ_i − 1)/γ_i → 0 for N tending to infinity. Taking into account γ_m − (ω − 1)c_m ≤ γ_i for all i ≥ m, we derive the desired convergence.

Corollary 6.1 ensures stability for sufficiently large optimization horizons N, which has already been shown in [5] under similar conditions (see also [11] for an analogous result in continuous time). Our result generalizes this assertion to arbitrary, but fixed control horizons m. Furthermore, similar to [10] for ω = 1, it also implies that for N → ∞ the infinite horizon cost V_∞^{µ_{N,m}} will converge to the optimal value V_∞ (using the inequality αV_∞^{µ_{N,m}} ≤ V_N from Theorem 3.5 and the obvious inequality V_∞ ≤ V_∞^{µ_{N,m}}). However, compared to these references, our approach has the significant advantage that we can also investigate the influence of different quantitative characteristics of β, e.g., the overshoot C and decay rate σ in the exponentially controllable case (6). For instance, the task of calculating all parameter combinations (C, σ) implying a nonnegative α^ω_{N,m} and thus stability for a given optimization horizon N can be easily performed, cf. Figure 1.
As expected, the stability region grows with increasing optimization horizon N. Moreover, Theorem 5.4 enables us to quantify the observed enlargement; e.g., doubling the optimization horizon N = 2 increases the considered area by 129.4 percent. Furthermore, we observe that for a given decay rate σ there always exists an overshoot bound C such that stability is guaranteed. Indeed, Theorem 5.4 enables us to prove this. To this end, we deal with the special case C = 1, which exhibits a significantly simpler expression for α^ω_{N,m}.
Proposition 6.2. Let β be of type (6) with C = 1. Then the optimal value α^ω_{N,m} from Theorem 5.4 is strictly positive for all admissible N, m and ω ≥ 1.

Proof. We define the auxiliary quantity η := 1 + σω − ω. Then we obtain the corresponding expressions for the γ_i. Thus, the necessary and sufficient condition γ_{m+1} − ω ≤ 0 from Theorem 5.4 holds if and only if η ≤ 0. Hence, we restrict ourselves to η > 0, in which case the right hand side of formula (26) can be evaluated explicitly and is strictly positive, where we have omitted the control index.

Remark 6.3. Note that the optimal value α^ω_{N,m}, i.e., the solution of Problem 3.6, does not depend on the control horizon m for C = 1. Consequently, the control horizon m does not play a role in this special case. Proposition 6.2 states that we always obtain a strictly positive value α^ω_{N,m} for C = 1. Due to continuity of the involved expressions this remains true for C = 1 + ε for sufficiently small ε > 0. Thus, for any decay rate σ ∈ (0, 1) and sufficiently small C > 1 (depending on N, m and ω) we obtain α^ω_{N,m} > 0 and thus asymptotic stability. However, this property does not hold if we exchange the roles of σ and C, i.e., for a given overshoot C > 1 stability cannot in general be concluded for a sufficiently small decay rate σ > 0.
Next, we investigate the relation between γ := Σ_{n=0}^∞ c_n and the optimization horizon N for finite time controllability in one step, i.e., for a KL_0-function of type (7) satisfying (8) defined by c_0 = γ and c_n = 0 for all n ∈ N_{≥1}. For this purpose, let γ be strictly greater than ω ≥ 1; otherwise Theorem 5.4 provides α^ω_{N,m} = 1 regardless of the optimization horizon N. In this case, formula (26) yields a closed-form expression for α^ω_{N,m}.
We aim at determining the minimal optimization horizon N guaranteeing stability for a given parameter γ. In order to ensure stability, we have to show α^ω_{N,m} ≥ 0. We begin our examination with the smallest possible control horizon m = 1. This leads to an inequality which, since the logarithm is monotonically increasing, is equivalent to an explicit lower bound N ≥ f(γ). We show that f(γ) tends to γ ln γ asymptotically; to this end, we consider the limit of the quotient f(γ)/(γ ln γ), where we have used L'Hospital's rule. Clearly, rounding the derived expression for the optimization horizon N up to the next integer does not change the obtained result.
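For ω = 1 and m = 1 this computation can be made completely explicit (our own derivation following the text, not a formula quoted from the paper): inserting γ_i ≡ γ into the m = 1 formula gives α_N = 1 − (γ − 1)^N / (γ^{N−1} − (γ − 1)^{N−1}), and α_N ≥ 0 is equivalent to γ^{N−2} ≥ (γ − 1)^{N−1}, which yields the closed-form minimal horizon below together with its γ ln γ asymptote.

```python
from math import log, ceil

def alpha_one_step(gamma, N):
    # alpha for finite time controllability in one step (c_0 = gamma),
    # control horizon m = 1 and omega = 1 (assumed derivation, see text)
    return 1.0 - (gamma - 1.0)**N / (gamma**(N - 1) - (gamma - 1.0)**(N - 1))

def min_stabilizing_N(gamma):
    # smallest N with alpha_N >= 0, obtained from the equivalent condition
    # (N - 2) * log(gamma) >= (N - 1) * log(gamma - 1)
    f = (2.0 * log(gamma) - log(gamma - 1.0)) / (log(gamma) - log(gamma - 1.0))
    return ceil(f)
```

For large γ the denominator behaves like 1/γ, so the minimal horizon grows like γ ln γ, matching the asymptotics derived above.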
We continue with analyzing the relation between γ and N for control horizons m which provide the largest optimal value, i.e., m = ⌊N/2⌋, cf. Section 7 below. Analogously, α^ω_{N,⌊N/2⌋} ≥ 0 induces lower bounds on the optimization horizon N.

Qualitative characteristics of α^ω_{N,m} depending on varying control horizon m

In the previous section we have investigated the influence of the optimization horizon N on the optimal value α^ω_{N,m} of Problem 3.6 in the extended version. E.g., we have proven that Theorem 5.4 ensures stability for sufficiently large optimization horizons N. Thus, choosing N appropriately remains crucial in order to obtain suitable α^ω_{N,m}-values. However, Theorem 4.2 assumes the positivity of several α^ω_{N,m}-values with different control horizons m. Section 6 already indicated that, e.g., the minimal stabilizing horizon depends sensitively on the parameter m. Thus, the question arises whether changing the control horizon persistently causes additional difficulties in guaranteeing stability.
Before proceeding, we state results concerning symmetry and monotonicity properties of the optimal value α^ω_{N,m} with respect to the control horizon m. These results - which are proven in Subsections 7.1 and 7.2 - do not only pave the way to answering the question posed above but are also interesting in their own right. These symmetry and monotonicity properties have the following remarkable consequence for our stabilization problem.

Theorem 7.3. Let β be of type (6) and ω ∈ {1} ∪ [1/(1 − σ), ∞), or of type (7) with c_n = 0 for n ≥ 2. Then for each N ≥ 2 the stability criterion from Theorem 4.2 is satisfied for m = N − 1 if and only if it is satisfied for m = 1.
Proof. Propositions 7.1 and 7.2 imply α^ω_{N,m} ≥ α^ω_{N,1} for all m ∈ M, which yields the assertion.

In other words, for exponentially controllable systems without or with sufficiently large final weight and for systems which are finite time controllable in at most two steps, we obtain stability for our proposed networked MPC scheme under exactly the same conditions as for "classical" MPC, i.e., m = 1. In this context we recall once again that for m = 1 the stability condition of Theorem 4.2 is tight, cf. Remark 4.3.

Symmetry Analysis
In this subsection we carry out a complete symmetry analysis of the optimal value α^ω_{N,m} given in Theorem 5.4 with respect to the control horizon m. To this end, we distinguish the special case ω = 1 from ω > 1, i.e., the scenario including an additional weight on the final term. The following symmetry property for ω = 1 follows immediately from formula (26).
Corollary 7.4. For m = 1, ..., ⌊N/2⌋ the values from Theorem 5.4 satisfy α^1_{N,m} = α^1_{N,N−m}.

Depicting the corresponding α^ω_{N,m}-values for some KL_0-functions, Fig. 3 illustrates the assertion of Corollary 7.4. In addition, we observe that increasing the control horizon improves the optimal value α^ω_{N,m} quantitatively. Hereby, Figure 3 shows the interaction between symmetry and monotonicity properties which is essential for the proof of Theorem 7.3. Moreover, adding an additional weight on the final term may lead to a further improvement of the guaranteed stability behavior. But instead of the equality α^1_{N,m} = α^1_{N,N−m} we observe the inequality α^ω_{N,m} ≤ α^ω_{N,N−m} for the setting including an additional weight on the final term, for KL_0-functions which are linear in their first argument and satisfy (8).

In the remainder of this subsection we prove Proposition 7.1 and demonstrate that a generalization including KL_0-functions of type (7) not satisfying the given assumptions is not possible. Consequently, we aim at establishing α^ω_{N,m} ≤ α^ω_{N,N−m} for ω > 1. Note that, if the necessary and sufficient condition γ_{m+1} − ω ≤ 0 holds, then formula (26) solely provides values α^ω_{N,m} greater than or equal to one. Thus, showing the desired inequality using the expressions provided by formula (26) covers the assertion in the absence of this condition. Expanding these terms and reducing by (ω − 1) > 0 leads to equivalent inequalities which we use below. These preliminary considerations enable us to derive results referring to the above mentioned symmetry property of α^ω_{N,m} with respect to the control horizon m. First, we assume finite time controllability. This case is completely characterized by Lemma 7.5 and Remark 7.6.

Lemma 7.5. Assume that the KL_0-function is of type (7) satisfying (8). In addition, let c_0, c_1, c_2 ≥ 0 and c_n = 0 for all n ∈ N_{≥3}. Then α^ω_{N,N−m} − α^ω_{N,m} ≥ 0 holds for m < N − m, m, N ∈ N.
In the sense of Remark 7.6 the assertion of Lemma 7.5 is sharp. Hence, the deduced results only hold for a subset of the class of finite time controllable systems which satisfy (8). In contrast to that, we are able to derive much more general results for the exponentially controllable case. To this end, assume that the KL_0-function is of type (6). We begin our analysis with the case γ_{m+1} − ω ≤ 0, i.e., α^ω_{N,m} = 1. This condition guarantees the preservation of the symmetry property stated in Corollary 7.4 and implies α^ω_{N,m̄} = 1 for all m̄ ∈ {m + 1, . . ., N − 1}.
Proof. It suffices to establish the stated inequality based on the obtained estimate for ω. The second inequality is equivalent to (C − 1)σ^{m̄} < (C − 1)σ^m, showing the assertion for m̄ > m.

Proposition 6.2 and Lemma 7.7 enable us to carry out a complete symmetry analysis of the optimal values obtained in Theorem 5.4 (for a proof of the following lemma we refer to Subsection 10.2). Note that we neither distinguish whether an additional weight on the final term is included nor impose any restrictions with respect to the considered KL-functions.

Monotonicity Properties
Apart from the symmetry proven in Corollary 7.4 and its generalized counterparts for the scenario including an additional weight on the final term, cf. Lemmata 7.5 and 7.8, one also observes a certain monotonicity property: we have α^ω_{N,m+1} ≥ α^ω_{N,m} for m = 1, . . ., ⌊N/2⌋ − 1. This is a very desirable feature because, in combination with the derived symmetry results, it implies that if the stability condition in Theorem 4.2 holds for m = 1, then it also holds for all m ≤ N − 1, cf. Theorem 7.3. However, the next example shows that this monotonicity property does not always hold.
Example 7.9. We consider the KL_0-functions β_1 and β_2 of type (7) defined by c_0 = 1.24, c_1 = 1.14, c_2 = 1.04 and c_i = 0 for all i ≥ 3 for β_1, and by a second set of coefficients for β_2.

Example 7.9 shows that the desired monotonicity property does not hold for arbitrary KL_0-functions β. However, the following lemmata show that monotonicity holds for β of type (6) and at least for a subset of β of type (7). First, we address monotonicity properties with respect to the control horizon m for KL_0-functions of type (7). More precisely, we aim at establishing α^ω_{N,m+1} ≥ α^ω_{N,m}. As seen in Example 7.9, even in the setting without an additional weight on the final term it is not possible to prove this inequality for arbitrary KL_0-functions of type (7). Thus, the following lemma gives a complete analysis.
Corollary 7.11. Let the KL_0-function be of type (6). Then, as a consequence, the term −a in (31) is positive. Since the remaining terms in (31) are also positive, the assertion follows.
The following lemma deals with the special case N = 2m + 2 and is proven in Subsection 10.3.
Lemma 7.12. Let the KL_0-function be of type (6) and assume that (31) holds for N. We aim at establishing the considered inequality for N + 1, using the assertion for N as an induction assumption. Moreover, a is defined as above and ã stands for the same expression with N + 1 instead of N. Then, it suffices to show the corresponding inequality, where we have omitted the control variable. Taking the stated equations into account, the inequality in consideration is equivalent to (32).

Lemma 7.13. Let the KL_0-function β be of type (6).

Proof. Due to Lemma 7.12, it suffices to show inequality (32) in order to prove α^ω_{N,m+1} ≥ α^ω_{N,m}. ω = 1 implies (ωγ_{m+2} − ωγ_{m+1} + γ_{m+1} − ω) = (γ_{m+2} − 1) and η = σ. Thus, the right hand side of this inequality can be simplified significantly. Using the definition of γ_i, we obtain an equivalent inequality. Proposition 6.2 provides p(1) = q(1). Furthermore, note that p has exactly one negative root and m + 1 strictly positive roots which are located in the open interval (0, 1). Additionally, q can be represented as C^{m+1}(C + c) with c > 0. Thus, the only condition to be verified for the application of Lemma 10.5 is the one with respect to the (m + 1)-st derivative. Establishing the corresponding inequality hence implies the assertion.
Quantitative characteristics of α^ω_{N,m} depending on a varying control horizon m

In the previous section we derived qualitative characteristics of the obtained stability bounds α^ω_{N,m} with respect to the control horizon m, e.g., symmetry and monotonicity properties, which can be exploited according to Theorem 7.3. Nevertheless, the decisive question remains whether we are able to guarantee stability or not, i.e., whether the α^ω_{N,m}-values corresponding to a given KL_0-function β(r, t) are positive or not. Clearly, choosing the optimization horizon N sufficiently large ensures stability, cf. Corollary 6.1. On the other hand, enlarging the optimization horizon N implies higher computational costs in order to find an optimal control with respect to (2). In contrast to that, changing the control horizon m or adding an additional weight on the final term does not significantly change the effort needed to solve the finite horizon optimization problem induced by (2). Hence, we aim at analyzing the influence of these design parameters more closely.

Control horizon
Assuming exponential controllability, i.e., a KL_0-function of type (6), Theorem 5.4 enables us to determine the maximal feasible overshoot C for a given decay rate σ ensuring stability, i.e., a positive α^ω_{N,m}-value. For simplicity of exposition, we restrict ourselves to the case without an additional weight on the final term, i.e., ω = 1. In order to determine the impact of the control horizon m for a given optimization horizon N, we carefully examine the stability region, i.e., the set of all parameter combinations C ≥ 1, σ ∈ (0, 1) guaranteeing stability of the underlying discrete time system. Due to the assertion of Corollary 7.4 it suffices to deal with m ∈ {1, . . ., ⌊N/2⌋}. As an example, Figure 5 shows the stability regions for N = 7 and N = 11, respectively. Apparently, increasing the control horizon m enlarges the stability region, i.e., allows for larger overshoots C for given decay rates σ. More precisely, incrementing the control horizon augments the area of the stability region, which reflects the monotonicity property of Lemma 7.13. Moreover, we are able to quantify this aspect: for N = 7, the area containing feasible (C, σ) pairs is scaled up by 21 and 30 percent, respectively. For longer optimization horizons, increasing the control horizon enhances the attainable gain even further; e.g., for N = 11, m = 2 and m = 5 enlarge the stability region by 23 and 48 percent, respectively.
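As a minimal illustration of such stability-region computations, the following Python sketch restricts itself to the classical case m = 1, ω = 1, for which the closed-form bound α_N = 1 − (γ_N − 1)∏_{i=2}^N (γ_i − 1) / (∏_{i=2}^N γ_i − ∏_{i=2}^N (γ_i − 1)) is known from the literature on unconstrained MPC; we quote it here as an assumption rather than re-deriving it from Formula (26). For exponential controllability of type (6), γ_i = C(1 − σ^i)/(1 − σ).

```python
# Sketch: stability test for "classical" MPC (m = 1, omega = 1) under
# exponential controllability beta(r, n) = C * sigma**n * r, cf. type (6).
# The closed-form bound alpha_N used below is quoted from the related
# literature on unconstrained MPC; Formula (26) for general m is not
# reproduced here.

from math import prod

def gamma(i: int, C: float, sigma: float) -> float:
    """gamma_i = sum_{n=0}^{i-1} C * sigma**n = C * (1 - sigma**i) / (1 - sigma)."""
    return C * (1.0 - sigma ** i) / (1.0 - sigma)

def alpha_classical(N: int, C: float, sigma: float) -> float:
    """Suboptimality bound alpha_N for m = 1, omega = 1."""
    g = [gamma(i, C, sigma) for i in range(2, N + 1)]
    num = (g[-1] - 1.0) * prod(gi - 1.0 for gi in g)
    den = prod(g) - prod(gi - 1.0 for gi in g)
    return 1.0 - num / den

def stability_area(N: int, grid: int = 100) -> float:
    """Fraction of the sampled (C, sigma) rectangle [1, 5] x (0, 1) with alpha_N > 0."""
    hits = 0
    for a in range(grid):
        for b in range(grid):
            C = 1.0 + 4.0 * a / (grid - 1)
            sigma = (b + 0.5) / grid
            if alpha_classical(N, C, sigma) > 0.0:
                hits += 1
    return hits / grid ** 2

# Enlarging the optimization horizon N enlarges the stability region.
print(stability_area(7), stability_area(11))
```

For N = 2 the formula collapses to α_2 = 1 − (γ_2 − 1)², so stability for N = 2 is guaranteed exactly when γ_2 < 2, which serves as a quick consistency check of the implementation.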
As we have seen, the exponentially controllable case provides various desirable properties, cf. Theorem 7.3, which do not hold for finite time controllability, i.e., for KL_0-functions of type (7) satisfying (8), in general. Still, we expended the effort to give a complete characterization of this setting, cf. Lemmata 7.5 and 7.10. The reason for putting so much emphasis on this is given in the following example.
Thus, it is in general favorable to work with a KL_0-function ensuring finite time controllability instead of using an upper bound provided by an estimated KL-function of type (6). In particular, this is the case since positivity of the needed α^ω_{N,•}-values can easily be checked by means of Theorem 5.4. Moreover, note that even for KL_0-functions which do not satisfy the assumptions of Theorem 7.3, the assertions with respect to symmetry and monotonicity often hold, cf. Figure 6, which depicts α^ω_{16,•} for KL_0-functions of type (6) and (7), respectively; on the right, the corresponding KL_0-functions themselves are shown.

Influence of an additional weight on the final term
In order to evaluate the benefit provided by a final weight, we distinguish whether the coefficient c_m is strictly smaller than one or not. Since c_m < 1 allows for guaranteeing the necessary and sufficient condition γ_{m+1} − ω ≤ 0, cf. Theorem 5.4, for sufficiently large ω, adding an appropriate additional weight on the final term enables us to ensure stability. Moreover, note that the chance of being able to fulfill this condition in general increases with longer control horizons, cf. KL_0-functions of type (6).
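This check can be sketched in a few lines, under the assumption (consistent with the discussion above, but not restated explicitly here) that the final summand of γ_{m+1} carries the weight ω, i.e., γ_{m+1} = ∑_{n=0}^{m−1} c_n + ω c_m for a KL_0-function β(r, n) = c_n r. Under this reading, γ_{m+1} − ω ≤ 0 is enforceable by some ω exactly when c_m < 1, with minimal weight max{1, (∑_{n=0}^{m−1} c_n)/(1 − c_m)}.

```python
# Sketch: minimal final weight omega ensuring gamma_{m+1} - omega <= 0
# from Theorem 5.4. Assumption: the last summand of gamma_{m+1} carries
# the final weight, i.e. gamma_{m+1} = sum_{n=0}^{m-1} c_n + omega * c_m
# for a KL_0-function beta(r, n) = c_n * r of type (7).

def minimal_final_weight(c, m):
    """Smallest omega >= 1 with gamma_{m+1} - omega <= 0, or None if c[m] >= 1."""
    if c[m] >= 1.0:
        return None  # no final weight can enforce the condition
    return max(1.0, sum(c[:m]) / (1.0 - c[m]))

def minimal_final_weight_exp(C, sigma, m, n_max=50):
    """Exponential controllability beta(r, n) = C * sigma**n * r as special case."""
    return minimal_final_weight([C * sigma ** n for n in range(n_max + 1)], m)

print(minimal_final_weight_exp(2.0, 0.6, 1))  # c_1 = 1.2 >= 1: not enforceable
print(minimal_final_weight_exp(2.0, 0.6, 2))  # c_2 = 0.72 < 1: finite omega
```

The exponential special case also mirrors the remark above: since c_m = Cσ^m decays in m, the condition becomes enforceable once the control horizon is long enough.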
However, without this condition being satisfied, analyzing the effects of including a final weight is much more subtle. Thus, we start our investigation with the following example, which demonstrates positive effects of adding a final weight in the considered setting.
At first, note that the generalized symmetry as well as the monotonicity property hold in Example 8.2 although the assumptions of Lemmata 7.5 and 7.10 are not satisfied (c_3, c_4 > 0). Furthermore, we observe the interplay of adding a final weight and the two mentioned properties. Together, these design parameters imply our stability condition for m = 4, which is not fulfilled for ω = 1.
The next example points out a possible pitfall as well as an adequate approach to overcome it.
Example 8.3. Again, we assume finite time controllability, i.e., a KL_0-function of type (7) given by c_0 = 1, c_1 = 3/2, c_2 = 2/3, c_3 = 1, and c_i = 0 for all i ∈ ℕ_{≥4}. Note that these coefficients satisfy condition (8). In Fig. 7 we have depicted α^ω_{5,m}, m = 1, 2, 3, 4, for several different additional weights on the final term. Although increasing ω seems to significantly improve the guaranteed stability behaviour in general, an additional weight on the final term which is chosen too large may even invalidate our stability condition, cf. Figure 7. However, in the scenario of Example 8.3 we are able to compensate for this drawback by shifting to a larger control horizon. Once more, we stress the fact that Theorem 5.4 allows for easily calculating the α^ω_{N,m}-values, which makes such an analysis computationally cheap.
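For the coefficients of Example 8.3 the condition from Theorem 5.4 can be tabulated directly. As in the sketch above, we use the hypothetical reading γ_{m+1} = ∑_{n=0}^{m−1} c_n + ω c_m; under this assumption the output illustrates why shifting to a larger control horizon can compensate: m = 1 and m = 3 are ruled out by c_m ≥ 1, whereas m = 2 and m = 4 admit suitable weights.

```python
# Sketch for Example 8.3: for which control horizons m can the condition
# gamma_{m+1} - omega <= 0 be enforced by a final weight? We assume
# gamma_{m+1} = sum_{n=0}^{m-1} c_n + omega * c_m (hypothetical reading
# of the gamma_i including the final weight).

c = [1.0, 1.5, 2.0 / 3.0, 1.0, 0.0]   # coefficients of Example 8.3

def enforceable(m):
    """True iff some omega >= 1 yields gamma_{m+1} <= omega, i.e. c[m] < 1."""
    return c[m] < 1.0

for m in range(1, 5):
    if enforceable(m):
        omega_min = max(1.0, sum(c[:m]) / (1.0 - c[m]))
        print(f"m = {m}: condition enforceable for omega >= {omega_min:.3f}")
    else:
        print(f"m = {m}: c_{m} = {c[m]:.3f} >= 1, condition never holds")
```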

Example
In this section we compare our analytical results with numerical MPC simulations. The first example is a linear inverted pendulum which is solved for a grid of initial values. The second example is a nonlinear version of the inverted pendulum. We use it to show that enlarging the control horizon m exhibits a stabilizing effect even for long optimization horizons N. Since numerical optimization for large control horizons appears to be difficult, we add a third example, a nonlinear arm-rotor-platform model, which allows us to show numerical results for m = 1, . . ., N − 1.

Linear Inverted Pendulum
Our first example is a linear inverted pendulum on a cart, for which we want to stabilize the upright position x* = (0, 0, 0, 0) using linear MPC. Here, we used the optimization horizon N = 10, the sampling interval T = 0.7 and a quadratic cost functional. Along each trajectory we then compute α^ω_{N,m} as the minimum of the suboptimality degrees from Formula (5) applied with x_0 = x_{μ_{N,m}}(n), n = 0, m, 2m, . . ., 19. A selection of these values is plotted in Fig. 8, in which each dashed line represents the values α^ω_{N,1}, . . ., α^ω_{N,N−1} for a corresponding initial value. In addition, the minima over all trajectories are plotted as a solid line. The results indicate that the closed loop is asymptotically stable for each m_i and confirm that choosing control horizons m_i > 1 may indeed improve the suboptimality bound. Moreover, it is interesting to compare Fig. 8 with Fig. 3. While Fig. 3 shows the minimal α-values for a set of exponentially controllable systems, the curves in Fig. 8 represent the numerically computed α-values for one particular system and a grid of initial values. Yet, the curves in Fig. 8 at least approximately resemble the shape of the curves in Fig. 3. The second part of Fig. 3 treats enlarged weights on the final term. Comparing the presented analytical results to our simulations depicted in the right part of Fig. 8, we obtain the same tendency: if the weight on the final term is increased, then the degree of suboptimality α grows faster for small control horizons m.
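To make the closed-loop mechanics of a control horizon m concrete, the following self-contained sketch runs MPC on a scalar unstable linear system; all numerical values are illustrative and not taken from the pendulum example. The finite horizon problems are solved exactly by a Riccati recursion (no terminal cost, matching the unconstrained setting of this paper), and a suboptimality degree is estimated a posteriori in the spirit of Formula (5) from the decrease of the optimal value function over each block of m applied steps.

```python
# Sketch: receding-horizon MPC with control horizon m on the scalar system
# x+ = a*x + u with stage cost l(x, u) = x**2 + R*u**2. The open-loop
# problems are solved exactly via a backward Riccati recursion; alpha is
# estimated a posteriori as the minimum over the closed loop of
# (V_N(x(n)) - V_N(x(n+m))) / (cost of the m applied steps).

def riccati(N, a, R):
    """P_k for the cost sum_{i=0}^{N-1} x_i**2 + R*u_i**2 with P_N = 0."""
    P = [0.0] * (N + 1)
    for k in range(N - 1, -1, -1):
        P[k] = 1.0 + a * a * R * P[k + 1] / (R + P[k + 1])
    return P

def mpc(x0, a=2.0, R=1.0, N=10, m=3, steps=30):
    """Run closed-loop MPC; return trajectory and a-posteriori alpha estimate."""
    P = riccati(N, a, R)
    V = lambda x: P[0] * x * x            # optimal value function V_N
    traj, x, alpha = [x0], x0, float("inf")
    for _ in range(steps):
        V_before, cost = V(x), 0.0
        for k in range(m):                # apply the first m open-loop moves
            u = -a * P[k + 1] / (R + P[k + 1]) * x
            cost += x * x + R * u * u
            x = a * x + u
            traj.append(x)
        alpha = min(alpha, (V_before - V(x)) / cost)
    return traj, alpha

traj, alpha = mpc(5.0)
print(traj[-1], alpha)
```

Since the system is scalar and linear, the estimate is independent of the initial value; for the multi-dimensional pendulum this is exactly why a grid of initial values is sampled above.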

Nonlinear Inverted Pendulum
In order to show that similar effects occur for nonlinear control problems, we consider the inverted pendulum on a cart in its nonlinear form with gravitational constant g = 9.81, length l = 10 and friction terms k_R = k_L = 0.01. Again, we aim at stabilizing the upright position x* = (0, 0, 0, 0) and consider a corresponding stage cost l(x(i), u(i)). For this simulation, we were able to increase the range of m for which acceptable α-values can be computed from 3 to 20 by internally repeating the optimization at every sampling point. This allows us to keep track of a suitable initial guess of the control while the system is running in open loop. For m ≥ 21 we experience numerical problems during the optimization, and although most of the resulting trajectories converge to the target point, we cannot conclude this from our estimates. Note that for ω = 1 the α-values are negative for control horizons m = 1, . . ., 4. Still, larger control horizons exhibit a positive α-value such that stability is guaranteed. Additionally, an increase in α can be observed for all control horizons m considered in this example if ω is increased. This corresponds to the known stabilizing effect of terminal costs. Since computing these estimates requires very small additional effort compared to the MPC procedure itself, such an analysis should be carried out before enlarging the optimization horizon.

Nonlinear Arm-Rotor-Platform Model
Finally, we consider an arm/rotor/platform (ARP) model; for details on the specification of the model parameters we refer to [4, Chapter 7.3.2].
For this example, we fix the initial value to x(t_0) = (0, 0, 0, 0, 10, 0, 0, 0), set the absolute and relative tolerances for the solver of the differential equation to 10^{−10}, choose the length of the open-loop horizon within the MPC algorithm as H = N · T with N = 13 and sampling period T = 0.05, and set the optimality tolerance of the SQP solver to 10^{−8}. Moreover, for the cost functional specified for this example, the practical region around the equilibrium is estimated using the constant ε = 5 · 10^{−7}. Again, we used a startup sequence of 5 NMPC iterations with m = 1 to improve the initial guess of the control. Similar to Fig. 9, Fig. 10 shows the minimal α-value along a closed-loop trajectory for a variety of final weights ω and all possible control horizons m = 1, . . ., N. Again, we obtain an improvement of our suboptimality estimate if the final weight ω is increased. Considering the control horizon m, however, we hardly experience any improvement for small m and even a decrease of α for large m. Note that suboptimality estimates which are computed at every sampling instant inherit a different weighting of these instants if m is changed. Yet, a fair comparison can be obtained if only those control horizon lengths are taken into account which are an integer factor of the largest one, using estimate (5). In the left part of Fig. 11 we display such a comparison for m ∈ {1, 2, 3, 4, 6, 12}. The corresponding closed-loop costs V_{μ_{N,m}} are shown on the right of Fig. 11. They exhibit the expected decrease in ω, but also a rise in m. Moreover, increasing the control horizon m implies that the system remains in open loop for a longer period of time, which may be harmful even in terms of stability if modelling errors or external perturbations occur, cf. [14]. A detailed quantitative analysis of these effects in our setting is currently under investigation.
Proof. We carry out an induction with respect to j. The induction start, j = N − 2, follows for arbitrary k ∈ ℕ since the corresponding expression is nonnegative.

Proof of Lemma 7.8
The purpose of this subsection is to prove the symmetry properties stated in Lemma 7.8 for ω > 1 (the case ω = 1 is covered by Corollary 7.4). To this end, we require the following technical lemma, which is also an essential tool in proving monotonicity properties.

Lemma 10.5. Let p : ℝ → ℝ be a polynomial of degree k > 1 such that all k roots z_1, . . ., z_k are real, exactly one of them is negative, and at most one is equal to zero. In addition, let the root of p^{(k−1)}(z) be strictly smaller than −c/k with c ≥ 0, and let p(z̄) = z̄^{k−1}(z̄ + c) for some z̄ > max{z_1, . . ., z_k}. Then it follows that p(z) > z^{k−1}(z + c) for all z > z̄.
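Lemma 10.5 can be sanity-checked numerically on a concrete polynomial; the choice below is purely illustrative and does not appear in the proof. We take p(z) = (z + 2)(z − 0.5)(z − 0.3), which has k = 3 real roots with exactly one negative, and c = 0.6, so that the root of p″ lies at −0.4 < −c/k = −0.2 as required.

```python
# Numerical sanity check of Lemma 10.5 (illustrative values):
# p(z) = (z + 2)(z - 0.5)(z - 0.3), c = 0.6. The difference
# d(z) = p(z) - z**2 * (z + c) = 0.6*z**2 - 1.45*z + 0.3 has degree 2;
# zbar is its larger root, so p(zbar) = zbar**2 * (zbar + c) and the
# lemma predicts p(z) > z**2 * (z + c) for all z > zbar.

c = 0.6

def p(z):
    return (z + 2.0) * (z - 0.5) * (z - 0.3)

def q(z):
    return z * z * (z + c)

disc = 1.45 ** 2 - 4.0 * 0.6 * 0.3
zbar = (1.45 + disc ** 0.5) / (2.0 * 0.6)

assert zbar > 0.5                          # zbar exceeds all roots of p
assert abs(p(zbar) - q(zbar)) < 1e-9       # touching point of Lemma 10.5
for z in [zbar + 0.01 * j for j in range(1, 500)]:
    assert p(z) > q(z)                     # conclusion of the lemma
print("zbar =", zbar)
```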
Proof. Lemma 7.7 covers the case γ_{m+1} − ω ≤ 0. Hence, we assume γ_{m+1} − ω > 0. Moreover, we suppose γ_{N−m+1} − ω > 0, because otherwise α^ω_{N,N−m} = 1 holds and the assertion follows. Thus, we only have to deal with (26) in order to establish the desired inequality. Choosing N as small as possible for given m ∈ ℕ_{≥1}, i.e., N = 2m + 1, implies N − m = m + 1; moreover, we take the corresponding equality into account. If γ_{m+1} − γ_{m+2} = −σ^m η ≥ 0, i.e., η ≤ 0, the inequality holds because ∏_{i=m+2}^{N} (γ_i − 1)/γ_i ∈ (0, 1). Hence, we only have to deal with the case η > 0. We aim at using Lemma 10.5 to establish the inequality which has to be proven; for this purpose, we require several auxiliary equalities. Overall, we obtain inequality (37), where we have suppressed the argument of the roots. Since η ∈ (0, 1), the polynomial p(C) clearly has m + 1 strictly positive roots and exactly one negative root. We show that the positive root −1/(2σ^m η) + . . . is located in the interval (0, 1) by examining the term (γ_{i+1} − 1)/(γ_i /C) more closely: this term has its root at (1 − σ)/(1 − σ^i η), i.e., in the interval (0, 1). Next, we determine the roots of the remaining factor by solving the corresponding quadratic equation. Thus, there is one negative and one positive root, and p(C) may be represented by p(C) = ∏_{i=1}^{m+2} (C − z_i), where the z_i denote the determined roots; the obtained positive root is also located in (0, 1). We then calculate the (m + 1)-st derivatives of p(C) and q(C) and show that the root of p^{(m+1)} is strictly smaller than its counterpart of q^{(m+1)}. To this end, observe that the first summand is positive, η < 1, ∑_{i=0}^{m−1} σ^i > mσ^m, and ∑_{i=m+1}^{N−1} σ^i ≥ (N − m − 1)σ^{N−1} ≥ mσ^{N−1}. Hence, a direct calculation establishes the desired inequality. In consideration of Proposition 6.2, all assumptions of Lemma 10.5 are satisfied. This shows the assertion due to the definition of p and q.

Assumption 4.1. There exists a closed set A ⊂ X satisfying:

Figure 1: Illustration of the stability region guaranteed by Theorem 5.4 for various optimization horizons N, given a KL-function of type (6), for "classical" MPC, i.e., m = 1.

Figure 2: Minimal stabilizing optimization horizons for one step finite time controllability for m = 1 and m = N/2 in comparison with their asymptotic approximations.

Figure 5: Illustration of parameter combinations (C, σ) ensuring stability in dependence on the control horizon m for optimization horizons N = 7 and N = 11, respectively.
The weight matrices in the cost functional are chosen as multiples of the identity, in particular R = 4 Id. Moreover, we use the constants g = 9.81 and k = 0.1 for gravitation and friction, respectively. Since Assumption 3.1 exhibits a set-valued nature, we consider a uniform grid G of initial values from the set [−0.05, 0.05]² × [−1, 1] × [−0.05, 0.05] which contains the origin x*. For each m = 1, . . ., 9 we simulate the MPC closed-loop trajectories x_{μ_{N,m}} with control horizon m_i ≡ m and initial value x ∈ G.
The cost functional is given by ∑_{i=0}^{N−2} l(x(i), u(i)) + ω l(x(N − 1), u(N − 1)) over an optimization horizon N = 70 and sampling length T = 0.05. Moreover, the tolerances of the optimization routine and the differential equation solver are set to 10^{−6} and 10^{−7}, respectively. Since this cost functional is 2π-periodic, we add box constraints limiting X to the interval [−2π + 0.01, 2π − 0.01]. Using the initial value x_0 = (π + 1.5, 0, 0, 0), we simulated the MPC closed-loop trajectories x_{μ_{N,m}} for m = 1, . . ., 10 and final weights ω = 1, . . ., 10 with control horizon m_i ≡ m. Since the numerical optimization is not reliable at the start of the NMPC algorithm, a startup sequence of 20 NMPC iterations using m = 1 is employed to obtain an initial guess of the control which can be assumed to be close to the global optimum. Moreover, the problem appears to be only practically stabilizable. To compensate for this issue, we used the estimation constant ε = 10^{−4}, cf. [7, Theorem 21] for details. In Fig. 9, the minimal α-value along a closed-loop trajectory is shown for a variety of final weights ω and control horizons m.