The Directed and Rubinov Subdifferentials of Quasidifferentiable Functions, Part II: Calculus

We continue the study of the directed subdifferential for quasidifferentiable functions started in [R. Baier, E. Farkhi, V. Roshchina: The directed and Rubinov subdifferentials of quasidifferentiable functions, Part I: Definitions and examples, Nonlinear Anal., same volume]. Calculus rules for the directed subdifferentials of the sum, product, quotient, maximum and minimum of quasidifferentiable functions are derived. The relation between the Rubinov subdifferential and the subdifferentials of Clarke, Dini, Michel-Penot, and Mordukhovich is discussed. Important properties of the directed subdifferential, among them the ones required by Ioffe's axioms, as well as necessary and sufficient optimality conditions, are obtained.


Introduction
The directed subdifferential was first introduced in [5] for a difference of convex (DC) functions f = g − h with g, h convex, as the difference of the convex subdifferentials of g and h embedded in the Banach space of directed sets [3,4]. In [6] this definition was extended to the class of quasidifferentiable functions. It is computable and can be visualized. The Rubinov subdifferential, defined as the visualization of the directed subdifferential, is a compact, generally non-convex set in R^n.
In the first part of this work [6], the definition and basic properties of the directed subdifferential for quasidifferentiable (QD) functions were discussed. Furthermore, the subclasses of amenable and lower-C^k functions were studied and their directed subdifferentials calculated. In this paper, we show that the desired properties (axioms) of subdifferentials [25] also hold in this class. Further, we obtain necessary and sufficient optimality conditions and calculus rules in terms of the directed subdifferential, some of which were proved in [5] for the case of DC functions. We also compare the Rubinov subdifferential with the subdifferentials of Clarke, Dini, Michel-Penot, and Mordukhovich. Its intermediate position, as a superset of the Dini subdifferential and a subset of the Michel-Penot and Clarke subdifferentials, enables us to obtain a strong necessary condition for a minimum in terms of its positive part (which is identical to the Dini subdifferential), as well as a strong sufficient condition in terms of the interior of the same part. In a symmetric way, the corresponding optimality conditions for a maximum are available via the negative part (which equals the Dini superdifferential).
The paper is organized as follows. In Section 2 we briefly recall the definitions of directed sets, their visualization and the related definitions of the directed and Rubinov subdifferentials. In more detail, the supremum and infimum of directed sets are studied. In Section 3 we discuss calculus rules for the directed subdifferential of a QD function, and in Section 4 we provide optimality conditions in terms of the directed and Rubinov subdifferentials. In Section 5 we discuss Ioffe's axioms and the relations between the directed subdifferential and other well-known subdifferentials.
We refer the reader to the first part of the paper [6] for more details on the definition and basic properties of the directed subdifferential and for examples of QD functions. As this article is designed as a second part of [6], only the essentials of [6] are repeated here to save space; nevertheless, this article should be understandable for any reader with basic knowledge of convex analysis and nonsmooth optimization.

Directed subdifferential of quasidifferentiable functions
The directed subdifferential was first introduced in [5] for DC functions and then extended to quasidifferentiable functions in [6]. The main idea is based on the concept of directed sets introduced and studied in [3,4]. In this section we briefly recall some definitions related to the Banach space of directed sets, introduce and prove several important properties, and define the directed and Rubinov subdifferentials of a QD function.
Throughout the paper we apply the notation introduced in [6, Sec. 2-4].

Directed sets
We again denote by C(R^n) the cone of convex, compact, nonempty subsets of R^n and by D(R^n) the Banach space of n-dimensional directed sets. For n ≥ 2, a directed set →A consists of two components (→A_{n-1}(l), a_n(l)) parametrized by unit vectors l ∈ S^{n-1}. For directed intervals, i.e. the case n = 1, only the second component a_1(l) appears. The arithmetic operations are defined componentwise.
In C(R^n), we denote by + and · the usual Minkowski sum resp. the scalar multiplication of sets by a constant factor. The pointwise negation of A is denoted by ⊖A = {−a | a ∈ A}, and A ⊖ B = A + (⊖B) is the algebraic difference for A, B ∈ C(R^n). The embedding J_n maps C(R^n) into the vector space D(R^n) and is defined, as usual for directed sets, recursively by setting a_n(l) = δ*(l, A) and →A_{n-1}(l) = J_{n-1}(P_{n-1,l}(Y(l, A))), where δ*(l, A) is the support function, Y(l, A) the supporting face of A in direction l and P_{n-1,l} the projection onto the hyperplane orthogonal to l. Hence, not only the support function, but also the supporting face projected to R^{n-1} and embedded in D(R^{n-1}) are stored in the two components of a directed set. The embedding is positively linear.
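In formulas, the recursion just described may be sketched as follows (our transcription of the definition recalled above, with δ*(l, A) the support function, Y(l, A) the supporting face and P_{n-1,l} the projection onto the hyperplane orthogonal to l):

```latex
J_n(A)(l) = \Big( J_{n-1}\big( P_{n-1,l}( Y(l, A) ) \big),\; \delta^*(l, A) \Big)
  \quad (l \in S^{n-1},\ n \ge 2), \qquad
J_1([a, b])(l) = \delta^*(l, [a, b]) \quad (l \in \{-1, 1\}),
```

so that positive linearity means J_n(λA + μB) = λ J_n(A) + μ J_n(B) for λ, μ ≥ 0 and A, B ∈ C(R^n).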
We refer the reader to [3,4,5,6] for detailed discussions and illustrative examples.
It is convenient to define the operations of supremum and infimum on directed sets. We will use them later to express the directed subdifferential of max-type and min-type functions. Such definitions have already been given in [3] for two directed sets; here we extend the definition to any finite number of directed sets.
(ii) for a finite number of directed sets, where we use the two sets of active indices I_1(l) = {i ∈ I | a^i_n(l) = max_{j∈I} a^j_n(l)} and I_2(l) = {i ∈ I | a^i_n(l) = min_{j∈I} a^j_n(l)}.
We will use the notation sup_{i∈I} {C_i} = co ∪_{i∈I} C_i for convex sets C_i ∈ C(R^n), i ∈ I.
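Written out, the componentwise definition of the supremum and infimum of finitely many directed sets (our reconstruction, following the two-set case of [3]) reads, for n ≥ 2:

```latex
\sup_{i \in I} \vec{A}_i = \Big( \sup_{i \in I_1(l)} \vec{A}^{\,i}_{n-1}(l),\; \max_{i \in I} a^i_n(l) \Big)_{l \in S^{n-1}}, \qquad
\inf_{i \in I} \vec{A}_i = \Big( \inf_{i \in I_2(l)} \vec{A}^{\,i}_{n-1}(l),\; \min_{i \in I} a^i_n(l) \Big)_{l \in S^{n-1}},
```

with the active index sets I_1(l), I_2(l) as defined above; for n = 1 only the scalar components max resp. min of a^i_1(l) appear.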
It is not difficult to observe that for any finite collection of directed sets {→A_i}_{i∈I} and any two subsets I_1 and I_2 of I such that I_1 ∪ I_2 = I, the supremum over I coincides with the supremum of the two partial suprema over I_1 and I_2 (and analogously for the infimum). We next show that the supremum and infimum operators are symmetric to each other, which was already stated, but not proved, in [3, Remark 4.7].
Proof. For directed intervals, the claim follows directly from the identity max_{i∈I} a^i_1(l) = −min_{i∈I}(−a^i_1(l)). Assume (3) holds for n ≤ k for some k ≥ 1; we prove it for n = k + 1. The claim follows by the definition of the supremum and our inductive assumption, since I_1(l) = I_2(l) = {i ∈ I | −a^i_n(l) = min_{j∈I}(−a^j_n(l))} in this case.
Note that for two convex sets A, B ∈ C(R^n) we have sup{J_n(A), J_n(B)} = J_n(co{A ∪ B}). This relation, proven in [3, Proposition 4.20], can be generalized to any finite number of sets.
Proof. The first relation follows from [3, Proposition 4.20] (where it is proved for two sets). We get what we need by applying the result to two of the sets, then to their convex hull and the third set, and so on, until all sets have been considered. The second relation follows from Proposition 2.2.
We will also need the following technical statement.
Proof. Assume the claim holds for n = k ≥ 1. We prove by induction that it also holds for n = k + 1. The claim follows from the inductive assumption and the definition of the supremum (observe that the index set I_1(l) remains the same). The relevant relation for the infimum is obtained analogously.
One can define an order relation in the space D(R^n): →A ≤ →B if and only if (i) a_n(l) ≤ b_n(l) for all l ∈ S^{n-1}, and (ii) →A_{n-1}(l) ≤ →B_{n-1}(l) if n ≥ 2 and l ∈ S^{n-1} with a_n(l) = b_n(l) (4). It is not difficult to observe (see [3, Theorem 4.17]) that for A, B ∈ C(R^n), J_n(A) ≤ J_n(B) holds if and only if A ⊆ B. It is handy to define the directed zero in D(R^n): we let →0_n = J_n({0_n}). The following properties can be obtained directly from the definitions of the convex embedding and the directed zero, hence the proof is omitted.
Remark 2.5. It is obvious that for the zero element →0_n ∈ D(R^n), the directed zero defined above, and →A ∈ D(R^n), the equality →A + →0_n = →A holds by [3, Proposition 4.14]. We remind the reader of the definition of the linear image of a directed set (see [2]). Let M ∈ R^{n×m}; the linear image M · →A is defined in [2] so that M · J_m(A) = J_n(M · A) holds for A ∈ C(R^m), and especially M · (J_m(A) − J_m(B)) = J_n(M · A) − J_n(M · B) for a difference of embedded convex sets A, B ∈ C(R^m). The well-definedness is shown in [2, Sec. 3].
To shorten the later formulation of the directed subdifferential of a separable function, we introduce the direct product of directed sets, which is based on the linear image of a directed set.
Here, Π_i is the transpose of the corresponding projection matrix.

Visualization of directed sets
The visualization V_n(→A) of a directed set →A is a compact, generally non-convex set in R^n which consists of three parts: the convex, the concave and the mixed-type parts, i.e. V_n(→A) = P_n(→A) ∪ N_n(→A) ∪ M_n(→A).
For convex sets C, D ∈ C(R n ), we continue to use the notation C − * D and C − · D for the geometric resp. Demyanov difference. The essential connection between these two differences and the visualization of directed sets is given in the following equations (see [4] and [6,Sec. 4]).
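As a concrete illustration of the two differences, here is a minimal numerical sketch for n = 1, where the convex compact sets are closed intervals; the function names are ours and are not part of the paper's notation:

```python
def geometric_diff(A, B):
    """Geometric (star) difference A -* B = {x : x + B subseteq A} for
    closed intervals A, B given as (lower, upper) pairs.
    Returns None if the difference is empty (B 'wider' than A)."""
    lo, hi = A[0] - B[0], A[1] - B[1]
    return (lo, hi) if lo <= hi else None

def demyanov_diff(A, B):
    """Demyanov difference for intervals: the convex hull of the differences
    of the support points in the directions l = +1 and l = -1."""
    d_left, d_right = A[0] - B[0], A[1] - B[1]
    return (min(d_left, d_right), max(d_left, d_right))

# The geometric difference is contained in the Demyanov difference and may be empty:
print(geometric_diff((0, 3), (0, 1)))   # (0, 2)
print(demyanov_diff((0, 3), (0, 1)))    # (0, 2)
print(geometric_diff((0, 1), (0, 3)))   # None: empty geometric difference
print(demyanov_diff((0, 1), (0, 3)))    # (-2, 0)
```

The second pair of calls shows why the geometric difference alone is too weak a tool: it can vanish entirely, while the Demyanov difference always produces a nonempty convex set.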

Quasidifferentiable functions and the directed subdifferential
For a function f : R^n → R we denote by f'(x; l) the directional derivative at x in direction l ∈ S^{n-1}. Quasidifferentiable (QD) functions are those for which the directional derivative has a special DC representation with the quasisubdifferential ∂f(x) and the quasisuperdifferential ∂̄f(x). Both sets are in C(R^n) and they form one representation of the quasidifferential Df(x) = [∂f(x), ∂̄f(x)], which is an equivalence class of pairs of sets. Therefore, we follow the setting in [19, Sec. III.1, 3.], namely we use the corresponding equivalence relation (17); the linear operations on pairs of sets are defined by (18)-(19). The directed subdifferential of a QD function is defined in (20) as the difference of the embedded quasisubdifferential and the embedded negated quasisuperdifferential. It was shown in [6] that the directed and Rubinov subdifferentials are well-defined, i.e. do not depend on the chosen pair representing the equivalence class. Moreover, this definition coincides with the earlier definition of the directed subdifferential of DC functions originally given in [5].
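In formulas, the DC representation of the directional derivative and the definition (20) of the directed subdifferential recalled above may be written as (our transcription of the definitions from [6]):

```latex
f'(x; l) = \max_{v \in \underline{\partial} f(x)} \langle v, l \rangle
         + \min_{w \in \overline{\partial} f(x)} \langle w, l \rangle , \qquad
\vec{\partial} f(x) = J_n\big( \underline{\partial} f(x) \big) - J_n\big( {-\overline{\partial} f(x)} \big).
```

The difference on the right-hand side is taken in the vector space D(R^n), which is what makes the construction independent of the representing pair.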

Calculus rules for the directed subdifferential for quasidifferentiable functions
Apart from the sum rule, which is the cornerstone of the subdifferential calculus, we prove other important rules, such as multiplication, division and a simple form of superposition. Observe that all these rules hold in the form of equalities. The proofs rely on corresponding results for the quasidifferential (see [19]), which we repeat here for the reader's convenience.
The following result is due to [19, Chapter III, Theorem 2.1].
Theorem 3.1 (Sum Rule and More Properties).
(i) Let the functions f, f_1, f_2 : R^n → R be quasidifferentiable at x ∈ R^n and λ ∈ R. Then the sum and the scalar multiple of these functions are quasidifferentiable at x, and the corresponding sum and scalar multiplication rules hold, where the operations on the pairs of sets are defined by (18)-(19).
(ii) For functions f_1, f_2 quasidifferentiable at a point x, the product of these functions is also quasidifferentiable at x and the product rule holds.
(iii) Let the function f be quasidifferentiable at a point x with f(x) ≠ 0. Then the function f_1(x) = 1/f(x) is quasidifferentiable and the quotient rule holds.
Now it is not difficult to prove the corresponding calculus results for the directed subdifferential, based on this theorem and the results from Section 2.
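For the reader's convenience, the rules of Theorem 3.1 may be written out as follows (our transcription of the standard quasidifferential calculus of [19]; the operations on pairs are those of (18)-(19)):

```latex
\begin{aligned}
D(f_1 + f_2)(x) &= Df_1(x) + Df_2(x), & D(\lambda f)(x) &= \lambda\, Df(x),\\
D(f_1 f_2)(x) &= f_1(x)\, Df_2(x) + f_2(x)\, Df_1(x), &
D\big( 1/f \big)(x) &= -\tfrac{1}{f(x)^2}\, Df(x) \quad (f(x) \neq 0).
\end{aligned}
```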

Theorem 3.2 (Sum Rule and More Properties).
Let the functions f_1, f_2 : R^n → R be quasidifferentiable at x ∈ R^n. Then the sum of these functions is quasidifferentiable at x and its directed subdifferential is the sum of the directed subdifferentials; an analogous rule holds for scalar multiples.
Proof. The sum rule follows from Theorem 3.1(i) and the property (1) of the convex embedding; the scalar multiplication rule follows in the same way from Theorem 3.1 and (1).
The next example illustrates the "inflation of size" in the representation of the quasidifferential when calculus rules are applied.
Obviously f = 0, and the quasidifferential Df(0) can be represented by the trivial pair [{0_n}, {0_n}]. After applying calculus rules to a DC representation of f, such as the sum rule for quasidifferentials in Theorem 3.1(i), another representation follows. Due to the equivalence relation (17), both representations result in the same quasidifferential (seen as an equivalence class of pairs of sets), but if r is big, the two representing sets (the quasisubdifferential and the quasisuperdifferential) are large sets, resulting in the so-called "inflation in size". For the existence and construction of minimal pairs yielding a better representation, see [31], [19, Sec. III.8].
For the directed subdifferential, we work in the vector space of directed sets, where the sum rule in Theorem 3.2 applies and the corresponding calculation yields the directed zero. Since the directed subdifferential just calculates a difference (which is zero here), the two problems mentioned above do not appear.
For other subdifferentials, we usually have to use the Minkowski sum of sets in a calculation as in (23). Thus, we would normally obtain a bigger set and we would lose the equality.
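The effect can be reproduced with a toy computation for n = 1, where the quasisubdifferential and quasisuperdifferential are intervals and the second component of the directed subdifferential is a_1(l) = δ*(l, ∂f) − δ*(l, −∂̄f). This is a sketch under our own naming, for the zero function represented as r|x| − r|x|:

```python
def mink_add(A, B):
    """Minkowski sum of closed intervals given as (lower, upper) pairs."""
    return (A[0] + B[0], A[1] + B[1])

def qd_add(D1, D2):
    """Sum rule on pairs [quasisub, quasisuper]: componentwise Minkowski sum."""
    return (mink_add(D1[0], D2[0]), mink_add(D1[1], D2[1]))

def support(I, l):
    """Support function of an interval for l in S^0 = {-1, +1}."""
    return I[1] if l > 0 else -I[0]

def directed_component(D, l):
    """Second component a_1(l) of the directed subdifferential:
    delta*(l, quasisub) - delta*(l, -quasisuper)."""
    sub, sup = D
    neg_sup = (-sup[1], -sup[0])  # pointwise negation of the quasisuperdifferential
    return support(sub, l) - support(neg_sup, l)

r = 5.0
D = ((-r, r), (-r, r))      # one representation of Df(0) via f = r|x| - r|x| = 0
D_sum = qd_add(D, D)        # applying the sum rule inflates the representing sets
print(D_sum[0])             # (-10.0, 10.0): inflated quasisubdifferential
print(directed_component(D_sum, 1.0), directed_component(D_sum, -1.0))  # 0.0 0.0
```

The representing intervals grow with every application of the sum rule, while the directed subdifferential stays the directed zero, exactly as in the example.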
A simple example shows that the sum rule does not hold for the Rubinov subdifferential (i.e. the visualization of the directed subdifferential).
Observe that for l ∈ S^0 = {−1, 1} the components of the corresponding directed subdifferentials can be written down explicitly, and by Remark 2.5 the sum of the visualizations strictly contains the visualization of the sum. This example demonstrates that we can only hope for a sum rule inclusion for the Rubinov subdifferential (and not an equality as for the directed subdifferential).

Proposition 3.5.
Let the functions f_1, f_2 : R^n → R be quasidifferentiable at x ∈ R^n. Then the product of these functions is quasidifferentiable at x and the product rule (26) holds for the directed subdifferential.
Proof. We need to consider the different signs of f_1(x) and f_2(x). First assume that f_1(x), f_2(x) ≥ 0. Then the claim follows from Theorem 3.1(ii) and (1) (x is omitted here for the sake of brevity). Analogously, for f_1(x) ≥ 0 and f_2(x) < 0 we set g = −f and g_1 = f_1, g_2 = −f_2; Eq. (22) in Theorem 3.2 then shows the claim. We obtain (26) for the other cases f_1, f_2 < 0 and f_1 < 0, f_2 ≥ 0 by the same arguments.
Proposition 3.6. Let f : R^n → R be quasidifferentiable at x ∈ R^n, and f(x) ≠ 0. Then the function f_1 = 1/f is quasidifferentiable at x and the quotient rule holds for the directed subdifferential.
Proof. The claim follows from Theorem 3.1(iii) and (1) (with omitted argument x).
The next corollary corresponds to the calculus rule for the quasidifferential in [19, Sec. III.2, Corollary 2.1].
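In formulas, the product and quotient rules of Propositions 3.5 and 3.6 may be sketched as follows (our transcription):

```latex
\vec{\partial}(f_1 f_2)(x) = f_1(x)\, \vec{\partial} f_2(x) + f_2(x)\, \vec{\partial} f_1(x), \qquad
\vec{\partial}\big( 1/f \big)(x) = -\frac{1}{f(x)^2}\, \vec{\partial} f(x) \quad (f(x) \neq 0).
```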
Proof. The claim follows by combining Propositions 3.5 and 3.6.
In nonsmooth optimization it is important to be able to deal with min-type and max-type functions. To prove the corresponding calculus rules, we need the following result from [19, Sec. III.2, Theorem 2.2].
Theorem 3.8. Let the functions f_i : R^n → R, i ∈ I = {1, . . . , m}, be quasidifferentiable at x ∈ R^n, and let ϕ_1 = max_{i∈I} f_i, ϕ_2 = min_{i∈I} f_i. Then the functions ϕ_1 and ϕ_2 are quasidifferentiable at x; the quasisubdifferential ∂ϕ_1(x) is a convex hull taken over the active indices, where [∂f_k(x), ∂̄f_k(x)] is a quasidifferential of the function f_k at the point x (see [19] for the explicit formulas). We are now ready to state the relevant result for the directed subdifferential.
Proposition 3.9. Let the functions f_i : R^n → R, i ∈ I = {1, . . . , m}, be quasidifferentiable at a point x ∈ R^n, and let ϕ_1, ϕ_2 be as in Theorem 3.8. Then the functions ϕ_1 and ϕ_2 are quasidifferentiable at x and →∂ϕ_1(x) = sup_{i∈I_1} {→∂f_i(x)}, →∂ϕ_2(x) = inf_{i∈I_2} {→∂f_i(x)}, where I_1 = {i ∈ I | f_i(x) = ϕ_1(x)} and I_2 = {i ∈ I | f_i(x) = ϕ_2(x)} are the sets of active indices.

Proof. From Theorem 3.8 and Lemma 2.3 we obtain the claimed representation (again, the variable is omitted for brevity), where we have used property (1), Lemma 2.4 and Remark 2.5. The relation for the pointwise min-function is obtained analogously.
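The second component of the sup-formula can be checked numerically: at a point where several functions are active, the directional derivative of the max is the maximum of the individual directional derivatives. Below is a one-dimensional sanity check with one-sided difference quotients; the helper names are ours:

```python
def ddir(f, x, l, t=1e-6):
    """One-sided difference quotient approximating f'(x; l)."""
    return (f(x + t * l) - f(x)) / t

f1 = lambda x: x
f2 = lambda x: -x
phi = lambda x: max(f1(x), f2(x))   # phi(x) = |x|; both f1, f2 are active at x = 0

for l in (1.0, -1.0):
    lhs = ddir(phi, 0.0, l)                          # phi'(0; l)
    rhs = max(ddir(f1, 0.0, l), ddir(f2, 0.0, l))    # max of the active derivatives
    assert abs(lhs - rhs) < 1e-9
print("max rule verified on S^0")
```

The difference quotients are exact here because all functions involved are piecewise linear near 0.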
We next prove a simple composition result.
Lemma 3.10. Let A ∈ R^{m×n}, b ∈ R^m, x ∈ R^n, and let g : R^m → R be quasidifferentiable at y = Ax + b. Then the superposition h(x) = g(Ax + b) is quasidifferentiable and the chain rule (28) holds, i.e. Dh(x) = [A^T ∂g(y), A^T ∂̄g(y)] and →∂h(x) = A^T · →∂g(y).
Proof. Observe that since g is directionally differentiable at y = Ax + b, we have h'(x; l) = g'(Ax + b; Al). From the definition of the quasidifferential, the first equation in (28) follows immediately. For the directed subdifferential we apply the definition of the linear image in (7) and obtain the second equation.
Next we study the directed subdifferential of a separable function.
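The chain rule for the directional derivative, h'(x; l) = g'(Ax + b; Al), which drives the proof above, can be verified numerically on a nonsmooth example. This is a sketch; the concrete g, A and the helper ddir are our own choices:

```python
def ddir(f, x, l, t=1e-6):
    """One-sided difference quotient approximating f'(x; l) for tuples x, l."""
    xt = tuple(xi + t * li for xi, li in zip(x, l))
    return (f(xt) - f(x)) / t

A = (2.0, -1.0)                              # a 1x2 matrix
g = abs                                      # g : R -> R is QD, with a kink at 0
h = lambda x: g(A[0] * x[0] + A[1] * x[1])   # h(x) = g(Ax), b = 0

x0 = (1.0, 2.0)                              # A x0 = 0, so h has a kink through x0
for l in ((1.0, 0.0), (0.0, 1.0), (1.0, 1.0)):
    Al = A[0] * l[0] + A[1] * l[1]
    lhs = ddir(h, x0, l)
    rhs = abs(Al)                            # g'(0; Al) = |Al| for g = |.|
    assert abs(lhs - rhs) < 1e-8
print("h'(x; l) = g'(Ax; Al) verified")
```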

Optimality conditions, descent and ascent directions
For directionally differentiable functions, ascent and descent directions are characterized via positive or negative directional derivatives, see e.g. [15] and [19, Sec. V.1, 2.].
Definition 4.1. Let f : R^n → R be directionally differentiable, x ∈ R^n and l ∈ S^{n-1}. Then
• l is a direction of descent of f at x, if there exists t_0 > 0 such that f(x + tl) < f(x) for all t ∈ (0, t_0);
• l is a direction of ascent of f at x, if there exists t_0 > 0 such that f(x + tl) > f(x) for all t ∈ (0, t_0);
• the point x is a strict saddle-point of f, if there exists a direction of ascent and a direction of descent from x.
The directions of ascent/descent are easily recognizable from the directed subdifferential. If the second component of the directed subdifferential is positive/negative for some direction l, it is a direction of ascent resp. descent. This fact is already known for DC functions (see [5], where ascent and descent directions can be discovered visually for several functions). The next proposition generalizes [5, Proposition 5.3] to QD functions.
Proposition 4.2. Let f : R^n → R be quasidifferentiable at x, and denote by d_n(l) the second component of →∂f(x). Then d_n(l) = f'(x; l) and the following holds:
• If d_n(l) < 0, then l is a direction of descent of f at x.
• If d_n(l) > 0, then l is a direction of ascent of f at x.
• If both signs occur for different directions, then x is a strict saddle-point of f.
Proof. From the definition (20) of the directed subdifferential and of a QD function, it follows that d_n(l) = δ*(l, ∂f(x)) − δ*(l, −∂̄f(x)) = f'(x; l). The three assertions follow immediately from this identity by Definition 4.1.
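A quick numerical illustration with our own toy functions: for f(x) = |x| the second component d_1(l) = f'(0; l) is positive in both directions of S^0, so every direction is one of ascent and 0 is a local minimizer; for f(x) = x^2 − |x| it is negative in both directions, so every direction is one of descent and 0 is a local maximizer:

```python
def ddir(f, x, l, t=1e-6):
    """One-sided difference quotient approximating the directional derivative f'(x; l)."""
    return (f(x + t * l) - f(x)) / t

f_min = lambda x: abs(x)            # f'(0; l) = 1 > 0 for l = +/-1: only ascent directions
f_max = lambda x: x * x - abs(x)    # f'(0; l) = -1 < 0 for l = +/-1: only descent directions

for l in (1.0, -1.0):
    assert ddir(f_min, 0.0, l) > 0  # ascent in every direction -> local minimum at 0
    assert ddir(f_max, 0.0, l) < 0  # descent in every direction -> local maximum at 0
print("sign of d_1(l) classifies ascent/descent directions")
```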
The known results for necessary and sufficient optimality conditions for QD functions by [16], [19, Chap. V, Theorem 3.1] carry over to the directed subdifferential. In [20] these conditions are formulated in the DC case with the help of geometric differences of Moreau-Rockafellar subdifferentials.
The next two propositions generalize [5, Proposition 5.1 and 5.4] to QD functions.
Proposition 4.3. Let f : R^n → R be quasidifferentiable at x ∈ R^n. If f has a local minimum in x, then 0_n lies in the positive part of →∂f(x); if f has a local maximum in x, then 0_n lies in the negative part. If 0_n lies in neither of the two parts, then f has a strict saddle-point in x.
Proof. In the above mentioned citations, the necessary condition for a local minimum of QD functions is stated as −∂̄f(x) ⊆ ∂f(x). By the definition of the geometric difference this is equivalent to 0_n ∈ ∂f(x) −* (−∂̄f(x)), which coincides with the positive part of →∂f(x), see (10). Observe that (29) also yields 0_n ∈ V_n(→∂f(x)). The necessary condition for the local maximum follows immediately by applying the necessary optimality condition to −f; the main reasons are →∂(−f)(x) = −→∂f(x) and (15). If 0_n lies in neither part, then, taking into account Proposition 4.2 and the definition of the positive/negative visualization part, we can find two different directions l, η ∈ S^{n-1} such that f'(x; l) > 0 and f'(x; η) < 0. Hence, x is a strict saddle-point by Proposition 4.2.
Proposition 4.4. If 0_n is an interior point of the positive part P_n(→∂f(x)), then f has a local minimum in x.
Proof. The proof starts from the well-known sufficient optimality condition −∂̄f(x) ⊆ int ∂f(x). Having the equality int(A −* B) = (int A) −* B in mind for sets A, B ∈ C(R^n), we can argue as in the proof of Proposition 4.3.
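In formulas, the optimality conditions discussed above may be summarized as follows (our transcription; −* denotes the geometric difference):

```latex
x \text{ local minimizer} \;\Longrightarrow\;
-\overline{\partial} f(x) \subseteq \underline{\partial} f(x)
\;\Longleftrightarrow\;
0_n \in \underline{\partial} f(x) \overset{*}{-} \big({-\overline{\partial} f(x)}\big)
  = P_n\big( \vec{\partial} f(x) \big),
\qquad
0_n \in \operatorname{int}\Big( \underline{\partial} f(x) \overset{*}{-} \big({-\overline{\partial} f(x)}\big) \Big)
\;\Longrightarrow\; x \text{ local minimizer}.
```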

Connections to other subdifferentials
To compare the directed subdifferential with some well-known subdifferentials, we first recall a few definitions. We cannot mention everything, but try to be as concise as possible, and discuss the most relevant constructions.
The classical (Moreau-Rockafellar) subdifferential of a convex function f : R^n → R at x ∈ R^n is ∂f(x) = {s ∈ R^n | ∀y ∈ R^n : f(y) − f(x) ≥ ⟨s, y − x⟩}. The vector s ∈ ∂f(x) is called a (convex) subgradient of f at x. This subdifferential is a convex, compact and nonempty set for convex f : R^n → R (see e.g. [35]), and its support function is the directional derivative, where the directional derivative of f at x in direction l is defined as f'(x; l) = lim_{t↓0} (f(x + tl) − f(x))/t. The class of 'simple' subdifferentials (according to the classification of [27]) that generalize this notion to non-convex functions are the ones that correspond to the Moreau-Rockafellar subdifferential of the convexification of a certain first-order approximation to the graph of the function. There is some confusion in the naming of 'simple' subdifferentials; different names are used for the same construction.
One of the most popular of such 'simple' subdifferentials is the Fréchet subdifferential ∂_F f(x) = {s ∈ R^n | liminf_{y→x} (f(y) − f(x) − ⟨s, y − x⟩)/‖y − x‖ ≥ 0}. One can think of the Fréchet subdifferential (see [33,25,27]) as a support set to the subderivative, or lower Hadamard directional derivative, f↓_H(x; l) = liminf_{t↓0, η→l} (f(x + tη) − f(x))/t, i.e. ∂_F f(x) = {s ∈ R^n | ∀l ∈ R^n : ⟨s, l⟩ ≤ f↓_H(x; l)}.
The Fréchet subdifferential is always defined for f : R^n → R, but is often empty. In some studies the function is assumed to be Hadamard directionally differentiable, i.e. the limit f'_H(x; l) = lim_{t↓0, η→l} (f(x + tη) − f(x))/t exists for every l.
One can define a weaker notion of subdifferential via the weaker Dini (lower) directional derivative (where η ≡ l in the corresponding limits (33) and (35)). Such a subdifferential is called the Gâteaux or radial subdifferential, and for a Dini directionally differentiable function f : R^n → R it can be expressed as ∂_D f(x) = {s ∈ R^n | ∀l ∈ R^n : ⟨s, l⟩ ≤ f'(x; l)}. In this paper we always call this subdifferential the Dini subdifferential, since for a locally Lipschitz, directionally differentiable function the Dini directional differentiability implies the (generally stronger) Hadamard directional differentiability, and in this case all mentioned 'simple' subdifferentials coincide. Observe that one can also define a symmetric "upper" construction: for f : R^n → R directionally differentiable at x ∈ R^n we let ∂̄_D f(x) = {s ∈ R^n | ∀l ∈ R^n : ⟨s, l⟩ ≥ f'(x; l)}. These subdifferentials are identical to the convex one for convex functions, but for a non-convex function f the directional derivative is not necessarily sublinear. However, the support function of ∂_D f(x) is the convexification, i.e. the bi-conjugate, of the directional derivative in l, see [26, Chap. III, §3.5, Proposition 2].
If f : R^n → R is a DC function with f = g − h, it is observed in [21], [19, Chap. III, Proposition 4.1], that the Dini subdifferential equals the geometric difference of the two Moreau-Rockafellar subdifferentials, i.e. ∂_D f(x) = ∂g(x) −* ∂h(x).
The Dini subdifferential may be empty, since the geometric difference may be empty (cf. [21, Section 2.1]); otherwise it is compact and convex. The Michel-Penot subdifferential [28,15] is determined by the so-called Michel-Penot directional derivative of a function f : R^n → R in a direction l ∈ R^n at x, f^⋄(x; l) = sup_{η∈R^n} limsup_{t↓0} (f(x + t(l + η)) − f(x + tη))/t. The Michel-Penot subdifferential of f at x is ∂_MP f(x) = {s ∈ R^n | ∀l ∈ R^n : ⟨s, l⟩ ≤ f^⋄(x; l)}. The Michel-Penot subdifferential is always nonempty, convex and compact.
The following connection between the Michel-Penot subdifferential and the Demyanov difference follows from [15, Theorem 6.1] for any DC function f = g − h: ∂_MP f(x) = ∂g(x) −· ∂h(x). Clarke's subdifferential (cf. [9,10,11,12]) of a locally Lipschitz (generally non-convex) function is also a convex set. For f : R^n → R and l, x ∈ R^n, the Clarke directional derivative of f at x in direction l is the limit f°(x; l) = limsup_{y→x, t↓0} (f(y + tl) − f(y))/t. The Clarke subdifferential is defined as ∂_Cl f(x) = {s ∈ R^n | ∀l ∈ R^n : ⟨s, l⟩ ≤ f°(x; l)}. It is well-known, cf. e.g. [13,15], that ∂_D f(x) ⊆ ∂_MP f(x) ⊆ ∂_Cl f(x), and that they are equal in the case of a convex function f. These inclusions may be strict, as shown e.g. in the examples in [5]. Another indication for this inclusion is that the geometric difference is a subset of the Demyanov difference, together with the representations (37) and (39). One of the most famous non-convex subdifferentials is the basic (lower) subdifferential of Mordukhovich, [29], [30, Definition 1.77], ∂_M f(x), which is equivalent to the approximate subdifferential of Ioffe in finite dimensions [22,23], [30, Theorem 3.59], and may be defined as ∂_M f(x) = Limsup_{y→x} ∂_F f(y), i.e. it consists of all possible limits of Fréchet subgradients at points around x. As with the Dini subdifferential, it is possible to define the related upper construction ∂̄_M f(x) = −∂_M(−f)(x), and the symmetric Mordukhovich subdifferential ∂⁰_M f(x) = ∂_M f(x) ∪ ∂̄_M f(x). It is well-known that for a locally Lipschitz function the Mordukhovich subdifferential is compact in R^n and the Clarke subdifferential is its closed convex hull (see e.g. [23], [30, Theorem 3.57]).
Finally, the quasidifferential of Demyanov-Rubinov [18], [19, Chapter III, Section 2] of QD functions discussed in the previous sections is defined as an element of a linear normed space of equivalence classes generated by pairs of convex sets, following the approach of Rådström [34]. Quasidifferentials were later generalized to exhausters by Demyanov (see, e.g. [14], [17]). These tools are different from other subdifferentials, as they are represented by several sets, not one.

Relations between the directed subdifferential and other subdifferentials
There are a few useful relations (some in the form of equalities) between the directed subdifferential and other non-convex subdifferentials. The main reason to express other generalized constructions via the directed subdifferential is that the directed subdifferential enjoys exact calculus rules, whereas most other subdifferentials satisfy calculus rules only in the form of inclusions.
The positive and negative parts as well as the convexified visualization of the directed subdifferential form the Dini subdifferential/superdifferential resp. the Michel-Penot subdifferential (cf. [19]).
We now extend the class in [5,Theorem 4.3] from DC to QD functions.
Proposition 5.1. Let f : R^n → R be a QD function and x ∈ R^n. Then the positive part of →∂f(x) equals the Dini subdifferential and the negative part equals the Dini superdifferential. If f is additionally locally Lipschitz, then the convexified Rubinov subdifferential equals the Michel-Penot subdifferential, and for locally Lipschitz QD functions we obtain the corresponding chain of inclusions between the Dini, Rubinov, Michel-Penot and Clarke subdifferentials.
Proof. The proof follows directly from (42)-(43) and (12)-(14).
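Collecting the relations just stated in formulas (our transcription; ∂_R f(x) = V_n(→∂f(x)) denotes the Rubinov subdifferential):

```latex
P_n\big( \vec{\partial} f(x) \big) = \partial_D f(x), \qquad
N_n\big( \vec{\partial} f(x) \big) = \overline{\partial}_D f(x), \qquad
\operatorname{co} V_n\big( \vec{\partial} f(x) \big) = \partial_{MP} f(x)
\quad (f \text{ locally Lipschitz}),
```

which yields the chain ∂_D f(x) ⊆ ∂_R f(x) ⊆ ∂_MP f(x) ⊆ ∂_Cl f(x) for locally Lipschitz QD functions.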
For QD functions, the directed subdifferential of the directional derivative equals the one of the function (see [7, Proposition 3.15]). There is also a close connection between the Mordukhovich subdifferential and the directed subdifferential for DC functions of two arguments (see [7, Theorem 3.17]). In the next two propositions both results are extended to QD functions.
Proposition 5.2. Let f : R^n → R be a QD function and x ∈ R^n. Then the directed subdifferential of the directional derivative h(·) = f'(x; ·) at 0_n coincides with →∂f(x).
Proof. It is not difficult to observe that if f is directionally differentiable, then for the positively homogeneous function h(·) = f'(x; ·) we have h'(0; l) = h(l) = f'(x; l). Furthermore, h is QD by [6, Lemma 3.4] and, by the definition of a QD function, Dh(0) = Df(x). The assertion follows by the definition (20).
The next result is so far only valid in R^2, but it shows that the Rubinov subdifferential (the visualization of the directed one) coincides with the basic symmetric subdifferential of Mordukhovich. In addition, the basic subdifferential and the superdifferential can be recovered via parts of the visualization.
(SD 3 ) If f is convex, then ∂f (x) coincides with the classical convex subdifferential.
(SD 4 ) If f satisfies the Lipschitz condition with constant L in a neighborhood of x, then ‖s‖ ≤ L for all s ∈ ∂f(x).
(SD 5 ) If x is a local minimizer of f , then 0 ∈ ∂f (x).
(SD 6 ) If n = n_1 + n_2 and f is separable, i.e. f(x) = f_1(x_1) + f_2(x_2) for x = (x_1, x_2) ∈ R^{n_1} × R^{n_2}, then ∂f(x) is contained in ∂f_1(x_1) × ∂f_2(x_2).
Ioffe argues that these properties are often used for further proofs and constructions, not the definition of a subdifferential itself. Moreover, he suggests defining a subdifferential via these axioms, i.e. any mapping f(x) → ∂f(x) satisfying the properties (SD 1 )-(SD 8 ) is called a subdifferential. No matter whether one agrees with such a definition or not, the properties suggested by Ioffe are undoubtedly of fundamental importance in nonsmooth optimization.
All the subdifferentials mentioned in the previous section (except for the multi-set quasidifferentials and exhausters) satisfy all the above axioms. In addition, the Moreau-Rockafellar subdifferential fulfills the following stronger form of (SD 6 ) for convex functions g, h : R^n → R and x ∈ R^n, sometimes called the Moreau-Rockafellar theorem or the Sum Rule (cf. [35, Theorem 23.8]): (SR) ∂(g + h)(x) = ∂g(x) + ∂h(x). This strong equality is not fulfilled for the other known subdifferentials of non-convex functions introduced above without additional regularity assumptions. It is curious, though, that for some subdifferentials an inclusion holds instead of the sum rule; for example, the Fréchet subdifferential satisfies ∂_F f_1(x) + ∂_F f_2(x) ⊆ ∂_F(f_1 + f_2)(x). We show that the directed subdifferential satisfies axioms (SD 2 )-(SD 7 ), and even stronger versions of some of them. In our setting of QD functions defined with finite values on the whole R^n, (SD 1 ) and (SD 8 ) are not relevant.
Observe that most axioms discussed in the next proposition hold in a stronger form than required by Ioffe's axioms. We highlight this in the statement (see [5,Proposition 4.2] for DC functions).
We apply [3, Proposition 4.11] to estimate the norm of directed sets by the Demyanov metric, which is expressed with the help of the Demyanov difference (see [6, (2)]). With these tools the norm of the directed subdifferential can be estimated. Since property (SD 4 ) holds for Clarke's subdifferential (see [12, Propositions 2.1.1 and 2.1.2]), the proof is finished. The same estimate holds for the Rubinov subdifferential by (12)-(14). Property (SD 5 ) is shown in Proposition 4.3, the proof of (SD 6 ) follows from Theorem 3.2 and Proposition 3.11, and finally, (SD 7 ) is shown in Lemma 3.10.

Conclusions
This two-part work extends the notion of the directed subdifferential to a class of QD functions more general than DC functions. The directed subdifferential has the same exact calculus rules as the quasidifferential, and all relevant axioms from Ioffe's list are fulfilled, while the non-uniqueness and the "inflation in size" in the representation of the quasidifferential are avoided. However, this exciting subject is far from being exhausted. It seems plausible that the directed subdifferential can be expressed via directional derivatives alone, without using any DC representation. We are also curious about the perspective of studying essentially non-Lipschitz functions, e.g. ones taking infinite values, as this would allow us to incorporate the first and eighth of Ioffe's axioms into the theory.