Numerical approximation of the maximal solutions for a class of degenerate Hamilton-Jacobi equations

In this paper we study an approximation scheme for a class of Hamilton-Jacobi problems for which uniqueness of the viscosity solution does not hold. This class includes the eikonal equation arising in the shape-from-shading problem. We show that, if an appropriate stability condition is satisfied, the scheme converges to the maximal viscosity solution of the problem. Furthermore, we give an estimate for the discretization error.


1 Introduction
Given a Hamilton-Jacobi equation, a general result due to Barles and Souganidis [3] says that any "reasonable" approximation scheme (based, for example, on finite differences, finite elements, finite volumes, discretization of characteristics, etc.) converges to the viscosity solution of the equation. Besides some simple properties that the approximation scheme has to satisfy, it is only required that the equation satisfies a comparison theorem for discontinuous solutions, which in particular implies uniqueness of the viscosity solution. This result covers a wide class of first and second order Hamilton-Jacobi equations, yet there are interesting examples of equations coming from the applications for which uniqueness of the viscosity solution does not hold. A significant example is given by the eikonal equation

|Du| = f(x)   (1.1)

on some open and bounded domain Ω ⊂ R^N, coupled for example with a Dirichlet boundary condition on ∂Ω. This equation arises in the shape-from-shading problem in image analysis and a large literature has been devoted to its study (see [4] for a description of the problem and [16] for a viscosity solution approach). It is well known that if f vanishes at some points, there are infinitely many viscosity solutions to (1.1) (see [15]). Nevertheless, among these solutions, in general only one is the relevant solution (for example, from the physical point of view, from the control theoretic one, etc.). In [6] (see also [14]), requiring a stronger condition for supersolutions than that for standard viscosity solutions, a Comparison Principle which characterizes the maximal viscosity solution of the problem has been obtained for the following class of Hamilton-Jacobi problems

H(x, Du) = f(x),   x ∈ Ω,   (1.2)
u(x) = g(x),   x ∈ ∂Ω.   (1.3)

(This paper was written while the second author was visiting the Dipartimento di Matematica, Università di Roma "La Sapienza", supported by DFG Grant GR1569/2-1. The research was partially supported by the TMR Network "Viscosity solutions and their applications".)

Here Ω is a bounded
domain of R^N, H and f are nonnegative continuous functions and f can have a very general zero set (the eikonal equation (1.1) fits into this class of equations). It is worth noting that this maximal solution is the value function of a control problem associated in a suitable way to (1.2)-(1.3). There are, in general, two approaches to the discretization of problem (1.2)-(1.3). A first possibility is to discretize problem (1.2)-(1.3) directly, but imposing some additional condition which, among the infinitely many solutions, singles out the one we want to approximate: for example, in [17], it is assumed that the solution is known on the zero set of f, which then becomes part of the boundary of the domain where the problem is discretized. A second possible approach (see [4], [5] and references therein) is to discretize a regularized version of problem (1.2)-(1.3), obtained by cutting f from below at some positive level ε > 0 (note that for f > 0 problem (1.2)-(1.3) has a unique viscosity solution). To prove the convergence of the scheme, both ε and the discretization step h have to be sent to 0.
Since the limit problem does not have a unique viscosity solution, it is not possible to apply the Barles-Souganidis theorem and, to our knowledge, there is no convergence theorem for this class of schemes, at least for a general zero set of f. Furthermore, if ε and h are not related by some condition, the approximation scheme shows numerical instability and it is not really known which solution is approximated (see [12] for some numerical tests in this sense). The aim of this paper is to describe an approximation scheme for which it is possible to prove the convergence to the maximal solution of problem (1.2)-(1.3) without requiring any additional assumptions. The scheme is based on a two-step discretization of the control problem associated to the regularized problem: first in the time variable, with discretization step h, and then in the space variable, with discretization step k (see [2], [13] for related ideas). In the first part (Sections 3, 4), we study the approximation scheme obtained by discretization in time. We show that, if ε and h are related in an appropriate way, the scheme converges to the maximal solution of (1.2)-(1.3) as ε and h go to zero. This result is in the spirit of [3], in the sense that it is based on stability properties of the maximal viscosity solution and on its characterization given by the comparison theorem in [6]. Therefore, the proof of the convergence theorem can easily be modified to handle boundary conditions other than (1.3) or, also, different approximation schemes not necessarily based on the control theoretic interpretation of the problem.
In the second part (Section 5) we study the discretization error for the fully discrete scheme. We show that, if the zero set of f is not too "wild", it is possible to estimate, in terms of ε and of the discretization steps, the L^∞-distance between the approximate solution and the maximal solution of the continuous problem. This part deeply employs the control theoretic interpretation both of the discrete problem and of the continuous one.

2 Continuous problem: assumptions and results
In this section we briefly recall the characterization of the maximal solution of problem (1.2)-(1.3) obtained in [6]. Here and in the remainder of the paper, by (sub-, super-)solutions we mean Crandall-Lions viscosity (sub-, super-)solutions (see [1] for a general treatment).
We first set the assumptions on the data of the problem. The Hamiltonian H : Ω̄ × R^N → R is assumed to be continuous in both variables and to verify

H(x, 0) = 0 and the sublevel set Z(x) := {p ∈ R^N : H(x, p) ≤ f(x)} is convex for any x ∈ Ω̄.   (2.2)

Note that the hypothesis (2.2) replaces the stronger one of convexity of H in p.
The function f : Ω̄ → R is nonnegative and continuous in Ω̄. Moreover, having defined K := {x ∈ Ω̄ : f(x) = 0}, it is assumed that K ∩ ∂Ω = ∅. Finally, we assume g : R^N → R to be a continuous and bounded function.
We introduce the gauge function and the support function of the convex set Z(x), namely

ρ(x, p) = inf{λ > 0 : p ∈ λZ(x)},   (2.4)
σ(x, p) = sup{p·q : q ∈ Z(x)},   (2.5)

for any (x, p) ∈ Ω̄ × R^N. Both these functions are convex and positively homogeneous in the variable p, and they are respectively l.s.c. and continuous in Ω̄ (note that, if x ∈ K, σ(x, 0) = 0 and ρ(x, p) = +∞ for |p| ≠ 0). Moreover they are related by the following equality

ρ(x, p) = sup_{σ(x,q) ≤ 1} {p·q},   x ∈ Ω̄, p ∈ R^N.   (2.6)

Example 2.1 Let φ : R⁺ → R⁺ be a continuous and strictly increasing function such that φ(0) = 0. Consider the equation

φ(|Du(x)|) = f(x),   x ∈ Ω.   (2.7)

In this case we have

Z(x) = B(0, φ⁻¹(f(x))),   ρ(x, p) = |p| / φ⁻¹(f(x)),   σ(x, p) = φ⁻¹(f(x)) |p|.

We now define a nonsymmetric semidistance on Ω̄ by

L(x, y) = inf{ ∫₀¹ σ(ξ(t), ξ̇(t)) dt : ξ Lipschitz, ξ(t) ∈ Ω̄, ξ(0) = x, ξ(1) = y }.

It can be shown that the family of balls B_L(x, r) induces a topology τ_L on Ω̄. If K consists of isolated points, this topology is equivalent to the Euclidean topology and the problem can be studied in the framework of standard viscosity solution theory (see [14]). In general, τ_L is weaker than the Euclidean topology and, for x ∈ K, the set of points having zero L-distance from x is a subset of K.
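To make Example 2.1 concrete, the following is a small numerical sketch of the gauge and support functions in the classical eikonal case φ(r) = r, so that Z(x) = B(0, f(x)); the sample right-hand side f below is an illustrative assumption, chosen only to exhibit a nonempty zero set K.

```python
import numpy as np

def f(x):
    # Illustrative nonnegative right-hand side vanishing at the origin,
    # so that K = {0} (assumption for this sketch, not from the text).
    return np.abs(x[0]) + np.abs(x[1])

def gauge(x, p):
    # rho(x, p) = inf{ lam > 0 : p in lam * Z(x) } with Z(x) = B(0, f(x)),
    # i.e. Example 2.1 with phi(r) = r: rho(x, p) = |p| / f(x).
    # On the zero set K it degenerates: rho = +inf for p != 0.
    fx = f(x)
    norm_p = float(np.linalg.norm(p))
    if fx == 0.0:
        return 0.0 if norm_p == 0.0 else np.inf
    return norm_p / fx

def support(x, p):
    # sigma(x, p) = sup{ p.q : q in Z(x) } = f(x) * |p|: continuous,
    # and identically 0 on K.
    return f(x) * float(np.linalg.norm(p))
```

Note how the two functions behave on K: the support function vanishes there (it is continuous), while the gauge jumps to +∞, which is exactly the lower semicontinuous degeneration described after (2.5).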
To obtain the characterization of the maximal solution, the definition of viscosity solution will be adapted to the topology τ_L.
Definition 2.2 Given a l.s.c. function v : Ω̄ → R, a Lipschitz continuous function φ is called L-subtangent to v at x₀ ∈ Ω if, for some δ > 0,

v(x) − φ(x) ≥ v(x₀) − φ(x₀)   for x ∈ B_L(x₀, δ).

The L-subtangent is called strict if the inequality above is strict outside B_L(x₀) = {x ∈ Ω̄ : L(x₀, x) = 0}.
We remark that the convexity assumption (2.2) allows us to use Lipschitz continuous test functions instead of the C¹ test functions of the standard definition of viscosity solution. For a Lipschitz continuous function φ, we denote by ∂φ(x) the generalized gradient of φ at x, i.e.

∂φ(x) = co{ p ∈ R^N : p = lim_n Dφ(x_n) for a sequence x_n → x such that φ
is differentiable at x_n }.

Definition 2.3 A l.s.c. function v : Ω̄ → R is said to be a singular supersolution of (1.2) if for any x₀ ∈ Ω and for any φ, L-subtangent to v at x₀ and such that ∂φ(x) = {0} for x ∈ B_L(x₀, δ) ∩ K, there exist a sequence x_n ∈ Ω \ K with L(x₀, x_n) → 0 and a sequence p_n ∈ ∂φ(x_n) for which the supersolution inequality H(x_n, p_n) ≥ f(x_n) holds in the limit.

It is worth noting that the definition of singular supersolution reduces to the standard definition of viscosity supersolution if x₀ ∈ Ω \ K. In fact, in this case, since the topology τ_L and the Euclidean topology are equivalent in a neighborhood of x₀, L-subtangents at x₀ coincide with standard subtangents. Moreover, in (Ω \ K) × R^N, ρ(x, p) ≤ 1 (resp. ρ(x, p) ≥ 1) if and only if H(x, p) ≤ f(x) (resp. H(x, p) ≥ f(x)). In the following theorem we compare viscosity subsolutions and singular supersolutions of equation (1.2).

Theorem 2.4 Let u ∈ USC(Ω̄) and v ∈ LSC(Ω̄) be a viscosity subsolution and a singular supersolution of equation (1.2), respectively, such that u ≤ v on ∂Ω. Then u ≤ v in Ω̄.

Hypothesis (2.2) allows us to give a control theoretic interpretation of problem (1.2)-(1.3). Let U be the value function of the control problem with dynamics

ξ̇(t) = q(t),   t ∈ [0, +∞),   ξ(0) = x,   (2.10)

where x ∈ Ω̄ and q is any bounded measurable function from [0, +∞) to R^N such that T := inf{t > 0 : ξ(t) ∉ Ω} < +∞, and with cost functional

J(x, q) = ∫₀^T σ(ξ(t), q(t)) dt + g(ξ(T)).   (2.11)

The dynamic programming equation associated to the control problem (2.10)-(2.11) is

sup_{|q| ≤ 1} { q·Du(x) − σ(x, q) } = 0,   x ∈ Ω.   (2.12)

This equation turns out to be equivalent to equation (1.2), in the sense that any viscosity sub- or supersolution of equation (2.12) is also a viscosity sub- or supersolution of equation (1.2), and vice versa.
In the following we will assume that the boundary datum g verifies the compatibility condition

g(x) − g(y) ≤ L(x, y)   for any x, y ∈ ∂Ω.   (2.13)

It is standard to show that, under hypothesis (2.13), U is a viscosity solution of (1.2) and satisfies the boundary condition (1.3). Furthermore we have

Proposition 2.5 The value function U is a singular supersolution of equation (1.2) in Ω.

Theorem 2.4 and Proposition 2.5 now allow us to characterize the maximal solution of (1.2)-(1.3): Let S denote the set of functions v ∈ USC(Ω̄) which are viscosity subsolutions of (1.2) and which satisfy v ≤ g on ∂Ω. From Theorem 2.4 and Proposition 2.5 it follows that the value function U of the control problem (2.10)-(2.11) is the maximal element of S, i.e. the maximal subsolution of problem (1.2)-(1.3). Moreover, U is a singular supersolution of (1.2) satisfying U = g on ∂Ω, hence it is the maximal solution.
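The control theoretic characterization above can be checked by hand in one space dimension. The sketch below, with simplifying assumptions not taken from the text (Ω = (0, 1), the eikonal case σ(x, q) = f(x)|q| with f(t) = |t − 1/2|, so K = {1/2}, and g ≡ 0), computes the value function U by evaluating the weighted lengths L(x, 0) and L(x, 1) numerically; for this f the maximal solution equals x(1 − x)/2 in closed form, which the code reproduces.

```python
import numpy as np

def f(t):
    # Degenerate eikonal right-hand side on (0, 1): vanishes at t = 1/2
    # (illustrative choice).
    return np.abs(t - 0.5)

def weighted_length(a, b, n=20000):
    # Midpoint-rule approximation of the "optical length" int_a^b f(t) dt,
    # which is the 1D semidistance L(a, b) for sigma(x, q) = f(x)|q|.
    t = np.linspace(a, b, n + 1)
    mid = 0.5 * (t[:-1] + t[1:])
    return float(np.sum(f(mid)) * (b - a) / n)

def maximal_solution(x, g0=0.0, g1=0.0):
    # Value function U(x): cheapest way of reaching the boundary {0, 1},
    # U(x) = min( L(x, 0) + g(0), L(x, 1) + g(1) ).
    return min(weighted_length(0.0, x) + g0, weighted_length(x, 1.0) + g1)
```

Any function that dips below this U on K and solves the equation a.e. is another viscosity solution; the value function singles out the maximal one.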
Remark 2.6 If H is convex in p, then U coincides with the value function of the control problem with dynamics (2.10) and cost functional

J(x, q) = ∫₀^T [ f(ξ(t)) + H*(ξ(t), q(t)) ] dt + g(ξ(T)),

where H*(x, ·) denotes the Legendre transform of H(x, ·), cf. [15]. Note, however, that σ(x, q) and f(x) + H*(x, q) in general do not coincide pointwise.
We conclude this section by stating a particular case of a general stability theorem proved in [6], which is needed for the construction of the approximation scheme.
Proposition 2.7 Set f_ε(x) = max{f(x), ε} and let u_ε be the sequence of viscosity solutions of

H(x, Du) = f_ε(x),   x ∈ Ω,
u(x) = g(x),   x ∈ ∂Ω.   (2.14)

Then u_ε converges, as ε → 0⁺, uniformly in Ω̄ to U, where U is the maximal solution of (1.2)-(1.3).
Note that for any fixed ε > 0, since f_ε > 0 in Ω̄, problem (2.14) admits a unique viscosity solution u_ε. Moreover, this solution is given by the value function of the control problem with dynamics (2.10) and cost functional

J_ε(x, q) = ∫₀^T σ_ε(ξ(t), q(t)) dt + g(ξ(T)),

where ξ(T) ∈ ∂Ω and σ_ε(x, q) is defined as σ(x, q) with f_ε instead of f.
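In code, the regularization is a one-line cut-off. The sketch below (eikonal case of Example 2.1 with φ the identity, an illustrative assumption) also builds the regularized support function σ_ε used in the cost.

```python
def regularize(f, eps):
    # f_eps(x) = max(f(x), eps): cut f off from below at level eps > 0, so the
    # regularized problem (2.14) has a strictly positive right-hand side and
    # hence a unique viscosity solution.
    return lambda x: max(f(x), eps)

def support_eps(f, eps):
    # sigma_eps(x, q) = f_eps(x) * |q|: the support function sigma with f_eps
    # in place of f (eikonal case of Example 2.1 with phi(r) = r).
    f_eps = regularize(f, eps)
    return lambda x, q: f_eps(x) * abs(q)
```

The positivity of σ_ε is what makes long trajectories expensive, which is used repeatedly in the error analysis of Section 5.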
3 The semidiscrete scheme

We now discretize the control problem associated with the regularized problem in the time variable, with step h > 0. The discrete dynamics are

x_{n+1} = x_n + h q_n,   x₀ = x,

for x ∈ Ω̄ and controls {q_n} ⊂ R^N such that |q_n| = 1.
The cost is given by

J_{ε,h}(x, {q_n}) = Σ_{n=0}^{N−1} h σ_ε(x_n, q_n) + g(x_N),

where N = inf{n ∈ N : x_n ∉ Ω} (we adopt the convention that Σ_{n=0}^{−1} = 0). The value function for this control problem is

u_{ε,h}(x) = inf{ J_{ε,h}(x, {q_n}) : {q_n} such that N < +∞ }.

By a standard application of the discrete dynamic programming principle, the function u_{ε,h} is a solution of the problem

u(x) = inf_{|q|=1} { h σ_ε(x, q) + u(x + hq) },   x ∈ Ω,
u(x) = g(x),   x ∈ R^N \ Ω.   (3.1)

The following result holds true.

Proposition 3.1 There is a constant C (independent of h and ε) such that |u_{ε,h}(x)| ≤ C for any x ∈ Ω̄.
Proof: We first observe that it is always possible to assume, by adding a constant, that g ≥ 0. It follows that u_{ε,h} ≥ 0. Moreover, u_{ε,h} admits an upper bound

u_{ε,h}(x) ≤ C,   x ∈ Ω̄,   (3.2)

where the constant C depends only on g and on M as in (2.16). Let v₁, v₂ be two bounded solutions of (3.1) and set w_i(x) = 1 − e^{−v_i(x)}, for i = 1, 2. Then w_i satisfies

w_i(x) = S[w_i](x),   x ∈ Ω,
w_i(x) = 1 − e^{−g(x)},   x ∈ R^N \ Ω,   (3.3)

where

S[ψ](x) = inf_{|q|=1} { 1 − e^{−h σ_ε(x,q)} + e^{−h σ_ε(x,q)} ψ(x + hq) }.

It follows that

sup |S[w₁](x) − S[w₂](x)| ≤ β sup |w₁(x) − w₂(x)|

with β = e^{−h γ(ε)} < 1, where γ(ε) = inf_{x ∈ Ω̄, |q|=1} σ_ε(x, q) > 0, and w₁ = w₂ = 1 − e^{−g} in R^N \ Ω.
We conclude that for any ε > 0 and h > 0 there exists at most one bounded solution of (3.3), and therefore of problem (3.1). This solution is given by u_{ε,h}.

Remark 3.2 If we discretized the control problem (2.10)-(2.11) directly (which corresponds to setting ε = 0 in the previous approximation scheme), the resulting approximating equation would not have a unique bounded solution, similarly to what happens for problem (1.2)-(1.3). This causes the drawback that an algorithm designed to solve that approximating equation may fail to converge to the maximal viscosity solution and, in any case, displays high numerical instability (see [12]).

4 Convergence of the semidiscrete scheme

In this section we prove the convergence of the approximation scheme introduced in the previous section to the maximal solution of (1.2)-(1.3).
Given a locally uniformly bounded family of functions v_ε : Ω̄ → R, ε > 0, we consider, for x ∈ Ω̄, the relaxed semilimits lim inf_{ε→0⁺, y→x} v_ε(y) and lim sup_{ε→0⁺, y→x} v_ε(y).

Proposition 4.1 Assume that ε and h = h(ε) are related by condition (4.1), i.e.

ω_ε(h)/γ(ε) → 0 as ε → 0⁺.   (4.1)

Then u(x) := lim inf_{ε→0⁺, y→x} u_{ε,h}(y) is a singular supersolution of (1.2).
Proof: Because of (3.2), the function u is well defined in Ω̄. Let φ : Ω̄ → R be L-subtangent to u at x₀ ∈ Ω. It is possible to assume without loss of generality (see [6], Proposition 5.1) that φ is a strict L-subtangent to u at x₀. Employing a standard argument in viscosity solution theory, we find a sequence x_ε of minimum points for u_{ε,h} − φ such that L(x₀, x_ε) → 0 as ε tends to 0⁺. Then

φ(x_ε) − φ(x_ε + h q_ε) ≥ h σ_ε(x_ε, q_ε)   (4.4)

for some q_ε with |q_ε| = 1.
Let q̃_ε = q_ε / σ_ε(x_ε, q_ε). By the homogeneity of σ_ε(x, q) with respect to q, we have q̃_ε ∈ {q ∈ R^N : σ_ε(x_ε, q) ≤ 1}. Dividing (4.4) by σ_ε(x_ε, q_ε) and recalling (2.6), we obtain the inequality required in the definition of singular supersolution along a suitable sequence. Since the sequence x_ε belongs to Ω \ K and L(x₀, x_ε) → 0 as ε → 0⁺, we conclude, thanks to hypothesis (4.1), that u is a singular supersolution of (1.2).

Theorem 4.2 Assume that (4.1) holds. Then u_{ε,h} converges uniformly on Ω̄ to the maximal solution U of (1.2)-(1.3) as ε → 0⁺; this is statement (4.7).

Proof: We set

ū(x) = lim sup_{ε→0⁺, y→x} u_{ε,h}(y),   u(x) = lim inf_{ε→0⁺, y→x} u_{ε,h}(y),

for x ∈ Ω̄. These functions are well defined because of (3.2).
From Proposition 4.1, it follows that u is a singular supersolution of equation (1.2). Moreover, it is standard to show that ū is a subsolution of (2.12) and therefore of (1.2) in Ω (see, e.g., [1] or [2]). If we show that ū ≤ u on ∂Ω, then Theorem 2.4 and Proposition 2.5 imply that ū = u = U in Ω̄, and therefore (4.7). We will show that

ū(x) ≤ g(x) ≤ u(x)   for any x ∈ ∂Ω.   (4.8)

To show that ū(x) ≤ g(x) on ∂Ω, we need an estimate on the behavior of u_{ε,h} in a neighborhood of ∂Ω. Let δ > 0 be sufficiently small and set Ω_δ = {x ∈ Ω : d(x, ∂Ω) < δ}. For x ∈ Ω_δ, let y ∈ ∂Ω be such that d(x, ∂Ω) = |y − x|. Define a control law {q_n} for the discrete control problem by

q_n = (y − x)/|y − x|,   n ∈ N,

and, denoting by x_n the corresponding discrete trajectory, let N = inf{n > 0 : x_n ∉ Ω}. Observing that Nh ≤ |y − x| + h, we get

u_{ε,h}(x) ≤ M|y − x| + g(y) + ω_g(h),

where M is as in (2.16) and ω_g is a modulus of continuity of g. If x₀ ∈ ∂Ω and x_ε ∈ Ω̄ is a sequence converging to x₀, we have either u_{ε,h}(x_ε) = g(x_ε), if x_ε ∈ ∂Ω, or u_{ε,h}(x_ε) ≤ M|y_ε − x_ε| + g(y_ε) + ω_g(h) if x_ε ∈ Ω, where y_ε ∈ ∂Ω is such that d(x_ε, ∂Ω) = |x_ε − y_ε|. Since y_ε also converges to x₀, we get ū(x₀) ≤ g(x₀) on ∂Ω.

To get the other inequality in (4.8), note that if g ≥ 0, then u_{ε,h}(x) ≥ 0 in Ω̄ and therefore u ≥ 0 on ∂Ω. If (4.6) holds, by adding a constant we can always assume that g ≥ 0. For x ∈ Ω, let {q_n} be an η-optimal control for u_{ε,h}(x), x_n the corresponding discrete trajectory and N the exit time from Ω. Since

γ(ε) Nh ≤ Σ_{n=0}^{N−1} h σ_ε(x_n, q_n) + g(x_N) ≤ u_{ε,h}(x) + η,

we have Nh ≤ (C + 1)/γ(ε), with C as in (3.2). Let q(t) be a control law for the continuous problem obtained by setting q(t) = q_i for t ∈ [ih, (i+1)h), i = 0, 1, …, N − 1. If ξ(t) and T are respectively the trajectory and the exit time corresponding to q(t), we have

u_ε(x) ≤ ∫₀^{Nh} σ_ε(ξ(t), q(t)) dt + g(ξ(T))
     ≤ Σ_{n=0}^{N−1} h ( σ_ε(x_n, q_n) + ω_ε(h) ) + g(x_N) + ω_g(|ξ(T) − x_N|)
     ≤ u_{ε,h}(x) + η + C ω_ε(h)/γ(ε) + ω_g(h),   (4.9)

where the estimate |ξ(T) − x_N| ≤ h holds because of the convexity of Ω. Since u_ε(x) = g(x) for any x ∈ ∂Ω and assumption (4.1) is satisfied, from (4.9) we easily get the other inequality in (4.8).
Remark 4.3 For the eikonal equation (1.1) we have σ_ε(x, q) = f_ε(x)|q| and therefore condition (4.1) reduces to ω_f(h)/ε → 0 as ε → 0⁺, where ω_f is the modulus of continuity of the function f on Ω̄.
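As an illustration of the semidiscrete scheme whose convergence was just proved, here is a minimal 1D sketch of (3.1) for the regularized eikonal equation, under assumptions not taken from the text: Ω = (0, 1), g ≡ 0, σ_ε(x, q) = f_ε(x)|q| with q = ±1, and a spatial grid aligned with the time step h, so that x + hq is again a grid point and no interpolation is needed. The fixed point is computed by monotone value iteration starting from 0, which converges to the unique bounded solution singled out in Proposition 3.1.

```python
import numpy as np

def solve_semidiscrete(f_eps, h, tol=1e-12, max_sweeps=10_000):
    # Value iteration for u(x) = min_{q=+-1} { h*sigma_eps(x,q) + u(x+hq) }
    # on (0, 1), with boundary datum g = 0; grid spacing equals the time step h.
    n = round(1.0 / h)
    xs = np.linspace(0.0, 1.0, n + 1)
    u = np.zeros(n + 1)                     # boundary nodes stay at g = 0
    for _ in range(max_sweeps):
        new = u.copy()
        for i in range(1, n):
            new[i] = h * f_eps(xs[i]) + min(u[i - 1], u[i + 1])
        if np.max(np.abs(new - u)) < tol:   # exact fixed point reached
            break
        u = new
    return xs, u

# f(x) = |x - 1/2| cut off at eps = 0.05 (illustrative choices).
xs, u = solve_semidiscrete(lambda x: max(abs(x - 0.5), 0.05), h=0.01)
```

With these data the computed value at x = 0.6 is the discrete weighted distance to the boundary (about 0.118, close to the continuous value 0.12); in particular it stays well away from the infinitely many lower solutions that the unregularized problem admits.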
5 Discretization error for the fully discrete scheme

In this section we discuss a fully discrete scheme derived from the semidiscrete one developed in the previous sections. In order to simplify the calculations, we assume that the function g defining the boundary condition is uniformly Lipschitz with constant L_g, and that the domain Ω is convex. We introduce a space discretization which transforms (3.1) into a finite dimensional problem. For this purpose we choose a grid covering Ω consisting of simplices S_j with nodes x_i, and look for the solution of (3.1) in the space W := {w ∈ C(Ω̄) | ∇w ≡ const on each S_j} of piecewise linear functions on Ω. By the parameter k we denote the maximal diameter of the simplices S_j. For simplicity we assume that the boundary of the gridded domain coincides with the boundary of Ω. (In the general case we can always achieve an error scaling linearly with the distance between these two boundaries, due to the fact that g is Lipschitz.) Thus we end up with the fully discrete scheme

u^k_{ε,h}(x_i) = inf_{|q|=1} { h σ_ε(x_i, q) + u^k_{ε,h}(x_i + hq) }   (5.1)

for all nodes x_i ∈ Ω, with the boundary condition u^k_{ε,h}(x_i) = g(x_i) for the nodes x_i ∉ Ω and linear interpolation between the nodes. Note that there exists a unique bounded solution of (5.1). The boundedness of any solution of (5.1) follows from the fact that u^k_{ε,h}(x_i) ≤ h σ_ε(x_i, q) + u^k_{ε,h}(x_i + hq) holds for any q ∈ R^N with |q| = 1. Thus we can always choose q such that u^k_{ε,h}(x_i + hq) depends on nodes which are closer to the boundary ∂Ω than x_i and (if h < k) on x_i itself, but with a weight strictly less than one. Since the value in the boundary nodes is bounded, we obtain boundedness for each node by induction. Due to the boundedness, the existence of a unique solution u^k_{ε,h} is now easily proved by applying the Kruzkov transform as in the proof of Proposition 3.1. Note that the function σ_ε appearing in the scheme is defined implicitly via H and f_ε. In order to solve the scheme we assume that we can compute this function analytically, as e.g.
in Example 2.1. (In the case of a convex Hamiltonian, one may alternatively use a numerical approximation of the integrand from Remark 2.6 via the Legendre transform, as given e.g. in [10]. Note, however, that this procedure yields a different cost function than in the following analysis.) We now start by estimating the discretization error |u_ε(x) − u^k_{ε,h}(x)|, x ∈ Ω̄. Since we allow nonconstant boundary conditions, we introduce the following auxiliary functions, which will be useful for the estimation of the error.
Definition 5.1 For each point x ∈ Ω̄ we define

w₁(x) = ∫₀^T σ_ε(ξ(t), q(t)) dt,

where ξ(·) is an optimal path for the initial value x and ξ(T) ∈ ∂Ω. For each node x_i of the grid, pick a control q_i minimizing (5.1) and let w₂ ∈ W be the unique solution of

w₂(x_i) = h σ_ε(x_i, q_i) + w₂(x_i + h q_i)   (5.2)

with the boundary condition w₂(x) = 0 outside Ω and interpolation between the nodes. Finally we define w(x) = max{w₁(x), w₂(x)}.

Remark 5.2 The existence of optimal paths follows from the continuous dependence of the functional J(x, q) on the control function q with respect to the weak* metric (as defined for control functions e.g. in [9]), using the Gronwall Lemma as in [8, Proof of Lemma 3.4(ii)] and the structure of σ_ε. Note that the a priori boundedness of the length of approximately optimal trajectories (which follows from the positivity of σ_ε) is crucial for this continuous dependence. Thus, in general, the existence of optimal trajectories does not hold for the non-regularized problem, since there, for any sequence of approximately optimal trajectories, the lengths of these trajectories may grow unboundedly under the restriction |q(t)| = 1 for all t ≥ 0. Note that we do not require uniqueness of the optimal paths in Definition 5.1. In case the optimal path is not unique, we may use one that minimizes w₁.
Definition 5.1 defines functions which are 0 at ∂Ω and which, away from ∂Ω, essentially grow like u_ε and u^k_{ε,h}, respectively. More precisely, we have that w₁(ξ(t)) − w₁(x) = u_ε(ξ(t)) − u_ε(x) and w₂(x_i + hq_i) − w₂(x_i) = u^k_{ε,h}(x_i + hq_i) − u^k_{ε,h}(x_i), for ξ(·) and q_i as used in the definition. Note that, in particular, if g(x) ≡ c is constant we obtain w(x) = max{u_ε(x), u^k_{ε,h}(x)} − c. Using this w we can give the following estimate for the discretization error.

Proposition 5.3 Let u^k_{ε,h} ∈ W be the unique solution of (5.1). Then estimate (5.3) on |u_ε(x) − u^k_{ε,h}(x)| holds for each x ∈ Ω̄ and for all sufficiently small k > 0 and h > 0, with M, ω_ε and γ as defined in (2.16)-(2.18),

α(ε) = inf_{x ∈ Ω̄, |q| = |p| = 1} σ_ε(x, p) / σ_ε(x, q),

and some constant C independent of ε, h and k.
The proof can be found in the appendix.

Remark 5.4 (i) Note that estimate (5.3) is stronger than the usual L^∞ estimate, since essentially the error scales with the function w(x), which is 0 at ∂Ω. This behaviour originates in the fact that the error is estimated along the optimal trajectories, whose length depends on the optimal value.
(ii) The constant γ(ε) essentially depends on the growth of H in |p|; e.g., in Example 2.1 we have γ(ε) = φ⁻¹(ε). The constant α(ε) is determined by the difference between H(x, p) and H(x, q) for |p| = |q|. In particular, if C₁|p|^θ ≤ H(x, p) ≤ C₂|p|^θ, then α(ε) is bounded from below in terms of C₁/C₂, independently of ε. Finally, ω̄ (which gives a bound for ω_ε for all ε > 0) combines the continuity properties of H and f; in Example 2.1 we can take ω̄ to be the modulus of continuity of φ⁻¹ ∘ f.

(iii) Note that the requirement on h ensuring the convergence of the fully discrete scheme is that ω̄(h)/γ(ε) → 0 as ε → 0; thus it is consistent with condition (4.1) for the convergence of the semidiscrete scheme.

(iv) The appearance of the value γ(ε) in the denominator in (5.3) is due to the fact that here we implicitly included the worst case, i.e. that the length of the optimal trajectories may grow like 1/γ(ε) as γ(ε) → 0. Since this is not necessarily the case in many practical examples, one can expect better convergence behaviour as γ(ε) → 0.
(v) A particularly nice formulation of estimate (5.3) is obtained for the eikonal equation (1.1) (implying α(ε) = 1 and γ(ε) = ε), if f is uniformly Lipschitz (implying ω̄(h) ≤ L_f h) and the boundary condition is homogeneous, i.e. g ≡ c (implying L_g = 0). In this case the right-hand side of the estimate is controlled by the quantities h/ε and k/h, for some constant C > 0 independent of ε, h and k. In particular, this implies convergence of the scheme if ε → 0, h/ε → 0 and k/h → 0.
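To illustrate the eikonal case of (v), the following 1D sketch implements the fully discrete scheme (5.1) with assumptions not taken from the text: Ω = (0, 1), g ≡ 0, f(x) = |x − 1/2| cut off at ε, nodes of spacing k = 1/n, a time step h decoupled from k, and piecewise linear interpolation at the off-grid points x_i ± h.

```python
import numpy as np

def solve_fully_discrete(f_eps, h, n, sweeps=3000, tol=1e-10):
    # Fully discrete scheme (5.1) on (0, 1) with controls q = +-1:
    # u(x_i) = min_q { h*f_eps(x_i) + u(x_i + h*q) }, where u is evaluated by
    # linear interpolation between the n+1 nodes and u = g = 0 outside (0, 1).
    xs = np.linspace(0.0, 1.0, n + 1)
    u = np.zeros(n + 1)

    def interp(v, x):
        if x <= 0.0 or x >= 1.0:
            return 0.0                      # boundary datum g = 0
        return float(np.interp(x, xs, v))

    for _ in range(sweeps):                 # monotone value iteration from 0
        new = u.copy()
        for i in range(1, n):
            cost = h * f_eps(xs[i])
            new[i] = cost + min(interp(u, xs[i] - h), interp(u, xs[i] + h))
        if np.max(np.abs(new - u)) < tol:
            break
        u = new
    return xs, u

# Time step h = 0.02 is deliberately not a multiple of the mesh size k = 1/80,
# so the interpolation step of the fully discrete scheme is actually exercised.
xs, u = solve_fully_discrete(lambda x: max(abs(x - 0.5), 0.05), h=0.02, n=80)
```

Shrinking ε, h and k together, with h/ε and k/h small as required in (v), drives the computed solution toward the maximal solution rather than toward one of the spurious lower solutions.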
We now turn to the discussion of the error obtained when equation (1.2) is replaced by equation (2.14), i.e. the error introduced by the regularization of the problem. Proposition 2.7 already implies that u_ε converges to U, where U is the maximal subsolution of (1.2). Unfortunately, in general this convergence can be arbitrarily slow. In the optimal control interpretation, this is due to the fact that the length of approximately optimal trajectories may grow unboundedly as the approximation gets better and better. Since these long pieces of the trajectories can only appear in regions where f is sufficiently small (otherwise the cost would be large, contradicting the approximate optimality), we can derive an estimate for the regularization error by defining a criterion for the sets where f is small, which in turn gives a bound on the length of approximately optimal trajectories. The following definition is our main tool for this purpose.
Proof: This follows immediately from Propositions 5.3 and 5.6.
Remark 5.8 (i) A possible modification of the scheme is obtained if we allow smaller time steps at the boundary ∂Ω, i.e. for x_i ∈ Ω and x_i + hq ∉ Ω we use the restricted time step

h̄ = sup{ h' ∈ [0, h] | x_i + h'q ∈ Ω̄ }.

Although slightly more difficult to implement, this modification usually gives better numerical results. The proof of Proposition 5.3 also applies to this modified scheme.
(ii) Due to the structural similarity between the scheme described in this section and the scheme considered in [13], the adaptive grid scheme developed there can also be applied here. Convergence results similar to those in [13] can be obtained for our scheme using the technique from the proof of Proposition 5.3.
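In one space dimension the restricted time step of Remark 5.8(i) has a closed form; the helper below is a hypothetical sketch for a domain Ω = (a, b), not code from the text.

```python
def restricted_step(x, q, h, a=0.0, b=1.0):
    # h_bar = sup{ h' in [0, h] : x + h'*q in [a, b] }: the boundary-adapted
    # time step of Remark 5.8(i), written out for the 1D domain (a, b).
    if q > 0.0:
        return min(h, (b - x) / q)
    if q < 0.0:
        return min(h, (a - x) / q)
    return h
```

Used in the last step before exiting, this places the discrete trajectory exactly on ∂Ω instead of overshooting by up to h, which is where the improved numerical results come from.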
6 Appendix: Proofs of Propositions 5.3 and 5.6

In order to prove Proposition 5.3, we first state a useful estimate for the local error along the functional.

Lemma 6.1 For each measurable q(·) with |q(t)| = 1 for almost all t ∈ [0, h], and for the path ξ(·) with ξ̇(t) = q(t) and ξ(t) ∈ Ω̄ for all t ∈ [0, h], there exists p ∈ R^N with |p| ≤ 1 such that

h σ_ε(ξ(0), p) ≤ ∫₀^h σ_ε(ξ(t), q(t)) dt + h ω_ε(h)   and   ξ(0) + hp = ξ(h).

Conversely, for each p ∈ R^N with |p| = 1 and each x ∈ Ω̄ with x + hp ∈ Ω̄, there exists a measurable function q(·) with |q(t)| = 1 for all t ≥ 0 such that

∫₀^h σ_ε(ξ(t), q(t)) dt ≤ h σ_ε(x, p) + h ω_ε(h)   and   ξ(0) = x, ξ(h) = x + hp,

where ξ̇(t) = q(t) and ξ(t) ∈ Ω̄ for all t ∈ [0, h].
Proof: The convexity and positive homogeneity of σ_ε in the second argument imply

σ_ε(x, ∫₀^h q(t) dt) ≤ ∫₀^h σ_ε(x, q(t)) dt.

Hence, defining p = (1/h) ∫₀^h q(t) dt, the first assertion immediately follows from the continuity of σ_ε, which is measured by ω_ε.
where ξ_i(τ_i) ∈ ∂Ω and ξ̇_i(t) = q_i(t). In this case, by the convexity of Ω, we can conclude that there exists p ∈ R^N with |p| = 1 and x_i + hp ∉ Ω such that |x_i + hp − ξ_i(τ_i)| ≤ h. Thus we obtain

u^k_{ε,h}(x_i) − u_ε(x_i) ≤ (M + L_g) h + L_g k.   (6.7)

(iii) x_i ∈ Ω and for the optimal path ξ_i(·) with ξ_i(0) = x_i from Definition 5.1 the equality

u_ε(x_i) = ∫₀^h σ_ε(ξ_i(t), q_i(t)) dt + u_ε(ξ_i(h))   (6.8)

holds, where ξ_i(0) = x_i, ξ_i(h) ∉ ∂Ω and ξ̇_i(t) = q_i(t).
In this case, Lemma 6.1 and the definition of u^k_{ε,h} yield an estimate in which, by (6.8), the integral term can be bounded, and by Definition 5.1 thus also

w₁(ξ_i(h)) − w₁(x_i) ≤ −h inf_{|q|=1} σ_ε(x_i, q) + h ω_ε(h).   (6.10)

Taking into account that the coefficients in (6.5) sum up to 1, and combining (6.3), (6.4), (6.5), (6.10) and (6.11), we obtain an inequality for w₁(x) + η (here η > 0 denotes the optimality slack of the chosen discrete control), from which we conclude that

h ω_ε(h) + λ_j k ≥ h inf_{x ∈ S_i, |q|=1} σ_ε(x, q) − h ω_ε(h) − λ_j k.   (6.12)

Estimating λ_j ≤ α(ε)⁻¹ inf_{x ∈ S_j, |q|=1} σ_ε(x, q) + ω̄(k) and writing γ_j = inf_{x ∈ S_j, |q|=1} σ_ε(x, q) (note that γ_j ≥ γ(ε)), (6.12) becomes an inequality in h, k and ε alone. Now we make the assumption "h, k > 0 sufficiently small" precise by choosing them such that ω̄(h) ≪ γ(ε), k ≪ α(ε)h and ω̄(k)k ≪ γ(ε)h, which yields (recall γ(ε) ≤ γ_j) a bound with some constant C > 0, and thus the desired estimate for w₁(x) + η, since γ_j ≥ γ(ε). Since all values in the resulting inequality are independent of η > 0, this also implies the estimate for η = 0. The inequality for u_ε(x) − u^k_{ε,h}(x) follows with the same technique and the obvious modifications using w₂; note that here the convexity of σ_ε is also needed in Lemma 6.1, used in case (iii). Proceeding in this way we end up with the estimate analogous to (6.12),

h ω_ε(h) + λ_j k ≥ h inf_{x ∈ S_i, |q|=1} σ_ε(x, q),

which leads to the desired result, here without using the assumptions on k and h.
Proof of Proposition 5.6: For any measurable and bounded q and any x ∈ Ω̄, denote the solution of (2.10) by ξ(t; x, q(·)). Fix δ > 0, η₁ > 0 and δ₂ ∈ (0, δ) arbitrary, and pick some x ∈ Ω. Then, by the optimal control representation (2.10)-(2.11) of U, there exist a solution ξ₁(t) = ξ(t; x, q₁(·)) with |q₁(t)| = 1 and a time T₁ > 0 such that ξ₁(T₁) ∈ ∂Ω and

∫₀^{T₁} σ(ξ₁(t), q₁(t)) dt + g(ξ₁(T₁)) < U(x) + η₁.

We now divide the connected components K_i, i ∈ I, of K into two classes by defining

I₁ := { i ∈ I | f(x) ≤ δ₂ for all x in the δ-neighborhood K_i^δ of K_i }

and I₂ := I \ I₁. Then, by the continuity of H, there exists a constant c(δ₂), with c(δ₂) → 0 as δ₂ → 0, such that

σ(x, p) < c(δ₂)   for all x ∈ K_i^δ, i ∈ I₁, |p| = 1.   (6.13)

Furthermore, by the uniform continuity of f, every set K_i, i ∈ I₂, has a volume bounded from below by some uniform constant depending on δ₂, and hence there are only finitely many of these sets; we may number them by i = 1, …, N. Now we define, for each of the K_i^δ, i = 1, …, N, which is hit by the trajectory ξ₁, the times t_i⁻ and t_i⁺ by

t_i⁻ = inf{ t ∈ [0, T₁] | ξ₁(t) ∈ K_i^δ }   and   t_i⁺ = sup{ t ∈ [0, T₁] | ξ₁(t) ∈ K_i^δ },

where we omit those sets K_i^δ for which [t_i⁻, t_i⁺] ⊆ [t_j⁻, t_j⁺] for some j ≠ i. This gives us a finite number r of pairwise disjoint intervals [t_i⁻, t_i⁺], which we assume to be numbered according to their order, i.e. t_i⁺ < t_{i+1}⁻ for i = 1, …, r − 1.