The best approximation of a given function in $L^2$-norm by Lipschitz functions with gradient constraint

The starting point of this paper is the study of the asymptotic behavior, as $p\to\infty$, of the following minimization problem $$ \min\left\{\frac1{p}\int|\nabla v|^{p}+\frac12\int(v-f)^2 \,, \quad \ v\in W^{1,p} (\Omega)\right\}. $$ We show that the limit problem provides the best approximation, in the $L^2$-norm, of the datum $f$ among all Lipschitz functions with Lipschitz constant less than or equal to one. Moreover, such an approximation satisfies a suitable PDE in the viscosity sense. After the analysis of the model problem above, we consider the asymptotic behavior of a related family of nonvariational equations and, finally, we also deal with some functionals involving the $(N-1)$-dimensional Hausdorff measure of the jump set of the function.


INTRODUCTION
Let us assume that Ω ⊂ R^N is a bounded open set with smooth, say C^1, boundary. The main goal of this work is to study the optimal approximation in the L^2-norm of a given function f ∈ L^2(Ω) by functions in W^{1,∞}(Ω) with a constraint on the gradient. Namely, we consider the following minimization problem (1.1) inf { (1/2)∫_Ω (v − f)^2 dx : v ∈ X(Ω) }, where the set X(Ω) is given by X(Ω) := { v ∈ W^{1,∞}(Ω) : |Dv| ≤ 1 for a.e. x ∈ Ω }.
In order to obtain a minimizer of (1.1) one can argue by direct methods (taking a minimizing sequence), or rather notice that such a problem appears naturally as the Γ-limit, as n → ∞, of the classical energy functional (1.2) J_n(v) := (1/p_n)∫_Ω |∇v|^{p_n} dx + (1/2)∫_Ω (v − f)^2 dx, where p_n is a sequence of numbers that diverges to +∞. Denoting by u_n the minimizer of J_n, the heuristic of the limiting process is that the measure of the set {|∇u_n| > 1} has to go to zero in order to keep the energy bounded (notice that J_n(u_n) ≤ J_n(0) = (1/2)‖f‖²_{L²(Ω)}). Therefore, we expect that u_n converges to the unique minimizer u ∈ X(Ω) of J_∞ (see Proposition 2.2 below).
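This heuristic can be observed numerically in one dimension. The sketch below is entirely our own illustration (the discretization, the datum f(x) = 2|x| and the exponent p = 16 are illustrative choices, not taken from the paper): it minimizes a discretized J_n and inspects the slopes of the minimizer.

```python
import numpy as np
from scipy.optimize import minimize

# Discretize J_p(v) = (1/p)∫|v'|^p + (1/2)∫(v-f)^2 on Omega = (-1,1).
# The datum f(x) = 2|x| has slope 2 > 1, so the gradient penalization is active.
p = 16                               # a "large" even exponent (keeps |v'|^p smooth)
x = np.linspace(-1.0, 1.0, 201)
h = x[1] - x[0]
f = 2.0 * np.abs(x)

def energy_and_grad(u):
    g = np.diff(u) / h               # forward-difference slopes
    E = h * ((g**p).sum() / p + 0.5 * ((u - f)**2).sum())
    grad = h * (u - f)               # derivative of the fidelity term
    gp = g**(p - 1)
    grad[:-1] -= gp                  # derivative of the gradient term w.r.t. u_i
    grad[1:] += gp                   # ... and w.r.t. u_{i+1}
    return E, grad

res = minimize(energy_and_grad, np.zeros_like(x), jac=True, method="L-BFGS-B")
u = res.x
slopes = np.diff(u) / h
print("max |u'| =", np.abs(slopes).max())
```

The maximal slope stays bounded (close to one) although f has slope two: this is exactly the mechanism that forces the limit function into X(Ω).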
An interesting feature of the minimizer of (1.1) that we find here is that it satisfies a kind of representation formula by cones inside the region {u ≠ f}.
Before stating our first result, in order to gain some intuition, let us consider three concrete examples in the special case Ω = (−1, 1).
• Let f_1(x) = kχ_{(−r,r)} with r ∈ (0, 1] and k ∈ R. Observe that if r = 1 then f_1 ∈ X(−1, 1) and so u_1 ≡ f_1 is the minimizer. Otherwise, it is not hard to see that u_1(x) = min{(r + k/2 − |x|)_+, k} is the minimizer. In Figure 1 the case r > k/2 is represented.
• Let f_2(x) = 2|x|; then the minimizer is given by u_2(x) = |x| + 1/2. See Figure 2. • Let f_3(x) = √|x|; then it is not hard to see that the minimizer is |x| + 2/9 for |x| ≤ 4/9 and √|x| for |x| ≥ 4/9. See Figure 3. In the three examples above (see Section 6 for the explicit computations) the behavior of the solutions u_i, i = 1, 2, 3, follows the same idea: each u_i tries to be as close as possible to the datum f_i as long as f_i is smooth and has gradient bounded by one. Otherwise, the best that the solution can do in order to minimize the L^2-norm of the difference is to grow as much as it is allowed, that is, as a line with slope ±1.
More in general we have the following result.
In addition, the following representation formulas hold true: u(x) = max_{y∈∂A^+} { u(y) − d_Ω(x, y) } in A^+ and u(x) = min_{y∈∂A^-} { u(y) + d_Ω(x, y) } in A^-. After considering the problem from a variational viewpoint, we want to address it from a PDE perspective. Since it is not clear a priori which equation the minimizer of (1.1) solves, we go back to the approximating sequence of minimizers of J_n (see (1.2)). The minimizers of (1.2) are solutions to (1.5) −∆_{p_n} u_n + u_n = f in Ω, ∂u_n/∂ν(x) = 0 on ∂Ω, where ∆_p v = div(∇v|∇v|^{p−2}), with p > 1, is the p-Laplacian and we denote by ν the external unit normal to ∂Ω. As usual, we consider such solutions in the weak sense (see Definition 2.1). Our aim is now to pass to the limit in (1.5) to find the PDE solved by the limit function u.
Usually, the limit as p → ∞ of solutions to equations related to the p-Laplacian leads to equations that involve the infinity Laplacian, namely −∆_∞ u := −⟨D²u ∇u, ∇u⟩. In particular we refer to the pioneering paper by T. Bhattacharya, E. DiBenedetto and J. Manfredi (see [9]), which first studied this type of problem. We recall that the infinity Laplacian is a second order differential operator (in nondivergence form) that appears in many contexts. For instance, infinity harmonic functions (solutions to −∆_∞ u = 0) appear naturally as limits of p-harmonic functions (solutions to −∆_p u = −div(|Du|^{p−2} Du) = 0) and they have applications to optimal transport problems, image processing, etc. See, among others, [4], [9], [25] and the references therein. Moreover, the infinity Laplacian plays a fundamental role in the calculus of variations of L^∞ functionals, see e.g. [3], [6], [7], [18], [21], [22], [23], [27] and the survey [4]. Notice that such an operator is degenerate elliptic (that is, non-degenerate only in the direction of the gradient).
Before stating our next result, let us introduce some notation: here and in the rest of the paper, for u, f ∈ C(Ω), we use the sets Ω^±_u and (∂Ω)^±_u. Now we are ready to write the problem solved by u.
Theorem 1.2. Assume f ∈ C(Ω); then the unique minimizer u ∈ X(Ω) of (1.1) is a viscosity solution to (1.6). Moreover, it satisfies the boundary conditions (1.7). Quite surprisingly, in the limit the second order operator is somehow lost: there is no trace of the infinity Laplacian and, in fact, u satisfies a first order equation. Actually, equation (1.6) is the proper one to reflect the information carried by (1.3): in the region {u − f < 0} the function u looks like a cone pointing upwards and therefore it solves the eikonal equation (and analogously in {u − f > 0}, with the eikonal equation with the reverse sign). As far as the boundary conditions are concerned, it is natural to wonder whether it would be possible to extend the equation in (1.6) up to the closure of Ω. However, the answer is negative and, to get convinced that (1.7) are the natural conditions to impose on ∂Ω, it is enough to look at Example 2 above.
For the sake of clarity, notice that in Theorem 1.2 f is continuous and the open set Ω^+_u coincides with the interior of A^+. We will provide an explanation for the absence of second order operators in (1.6) in Proposition 1.6 below.
It is also worth mentioning explicitly that the regions Ω^+_u and Ω^-_u (and their boundary counterparts) depend on the solution u itself. A consequence of this fact is that, even if the minimizer of (1.1) is unique, problem (1.6)-(1.7) does not possess a unique solution. In fact, we can construct minimal and maximal solutions to problem (1.6)-(1.7) that, in general, do not coincide with the minimizer of J_∞.
Theorem 1.3. Assume that f ∈ C(Ω); then there exists a unique maximal (viscosity) solution u ∈ X(Ω) to the obstacle problem (1.8) with boundary conditions (1.9). Clearly the solution u provided by Theorem 1.3 solves (1.6)-(1.7), but it is not the minimizer of J_∞ (unless f ∈ X(Ω)). Indeed, as the intuition suggests, since the minimizer u has to be close to f in L², u − f has to change sign in Ω (see Figure 4 and Proposition 2.4 for more details). The limit procedure shown above can, in fact, be generalized to a larger class of differential equations that involve lower order terms with polynomial growth with respect to the gradient. More precisely, we consider the following problem (1.10), where the Hamiltonian term H_n ∈ C(Ω × R^N) satisfies the growth conditions (1.11). These two types of lower order terms, accounting for the "linear (p − 1) growth" and the "natural growth", are thoroughly studied in the literature (for fixed p). Typically, the presence of the Hamiltonian affects the coercivity of the operator and some specific techniques are required to obtain a priori estimates: the first case can be dealt with via symmetrization procedures [8], log-type estimates [10], slicing techniques [16] or by arguing by contradiction [11]. For the case of natural growth we mention the exponential test function approach developed in [12]–[14], again symmetrization in [1], and an argument based on the Schauder fixed point theorem in [31].
Let us notice that, under assumption (1.11), for a smooth f and a fixed n, the existence of a solution v_n ∈ W^{1,p_n}(Ω) ∩ L^∞(Ω) to problem (1.10) is a straightforward consequence of the results contained in [14] (see Theorem 3.1 below for more details). We also explicitly point out that the presence of the zeroth order term allows us to solve (1.10) without any smallness assumption on the datum f.
The main difficulty when one wants to pass to the limit as p diverges in (1.10) is to obtain a priori estimates for v_n that are stable with respect to n. To the best of our knowledge, the only related result for quasilinear equations with gradient lower order terms is contained in [17], where a large solution problem is considered.
Our main result concerning problem (1.10) is the following. Theorem 1.5. Assume (1.11)-(1.12) and let f ∈ C(Ω). Then there exists a function v which solves (in the viscosity sense) the equation (1.14), together with the boundary conditions (1.15) and (1.16). At first glance we notice that the structure of (1.14) is more involved than that of (1.6). First of all, v does not in general belong to X(Ω), since the bound on ∇v depends also on the Hamiltonian (see Section 3). Moreover, as expected, the presence of the Hamiltonian term changes both the limit equation and the boundary conditions.
Let us now go back to the minimizer u ∈ X(Ω) of (1.1) and notice that it also solves the equation (1.14) with H ∞ ≡ 0. Apparently, this provides some more information about −∆ ∞ u that is missing in (1.6).However, we have the following result.
Proposition 1.6. Assume that f ∈ C(Ω) and that w ∈ X(Ω). Then w solves (1.6) if and only if it solves the equation with the additional −∆_∞ term appearing in (1.14) with H_∞ ≡ 0. Roughly speaking, this is due to the bound |∇w| ≤ 1 and to the fact that cones are solutions to the eikonal equation and are also infinity harmonic away from their singular points.
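For the reader's convenience, the fact recalled in the last sentence can be checked by hand; the short computation below is standard (c and y are our own notation for a model cone):

```latex
% Model cone with vertex y: c(x) = a - |x - y|, smooth for x \neq y.
\[
\nabla c(x) = -\frac{x-y}{|x-y|}, \qquad |\nabla c(x)| = 1 ,
\]
% Differentiating the identity |\nabla c|^2 \equiv 1 gives D^2c\,\nabla c = 0, hence
\[
-\Delta_\infty c(x) = -\big\langle D^2 c(x)\,\nabla c(x), \nabla c(x)\big\rangle = 0
\qquad \text{for } x \neq y .
\]
```

So the cone solves the eikonal equation everywhere and is infinity harmonic away from its vertex, which is exactly the mechanism behind Proposition 1.6.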
Finally, we include some considerations about a very well known family of functionals that allow free discontinuities. Unfortunately, we are able to provide only partial results in this case, and we leave a more complete analysis of the limiting behavior of such functionals as p → ∞ for future research. Let us consider the functional I_p : SBV(Ω) → R ∪ {+∞} defined in (1.17). Here SBV(Ω) is the space of Special Functions of Bounded Variation, ∇v is the absolutely continuous part of Dv with respect to the Lebesgue measure, S_v is the singular set of v and H^{N−1} is the (N − 1)-dimensional Hausdorff measure (more details and formal definitions can be found in Section 5). The functional I_p(v) was introduced in the seminal paper by De Giorgi, Carriero and Leaci ([19]) to provide a weak formulation of the Mumford–Shah image segmentation problem (1.18). An in-depth overview of the history of these functionals, of their relation with applications, and of the huge impact that their study has had in the field of the Calculus of Variations is definitely beyond the scope of this contribution. We refer the interested reader to the original paper [19], the classical monograph [2], and to the nice review [26]. For our aims, we simply recall that the main difficulty in dealing directly with (1.18) is that the Hausdorff measure is not lower semicontinuous with respect to any reasonable metric the collection of closed subsets of Ω can be equipped with. Therefore, the strategy of [19] was to find a minimizer u ∈ SBV(Ω) of I_p and then show that (S̄_u, u) was also a minimizer of (1.18). The most delicate step in their argument was to prove that H^{N−1}(S̄_u ∩ Ω) = H^{N−1}(S_u), and this crucial step was achieved by a lower bound on the density of S_u, namely (1.19) (see Theorems 7.15, 7.21 and 7.22 in [2]).
At this point it is natural to wonder what happens with the limiting minimization problem relative to (1.18) as n → ∞. The answer is given by the following proposition.
Proposition 1.7. Assume that f ∈ L^∞(Ω) and that p_n → +∞; then the sequence of functionals I_{p_n} Γ-converges to the functional in (1.20). The next step would be to prove that a minimizer of (1.20) is also a minimizer of the corresponding limit of (1.18). Unfortunately, we are not able to show such a property, since we do not know how to obtain a lower density bound, as in (1.19), for the limit function u.
We leave this as an interesting open problem.

NOTATIONS AND BASIC DEFINITIONS
Let us introduce the notion of geodesic distance (see [32], for instance): for x, y ∈ Ω we define d_Ω(x, y) as the infimum of the lengths of the curves contained in Ω joining x and y. The geodesic distance is the natural quantity to consider in our setting, since the condition |Dv| ≤ 1 a.e. in Ω is equivalent to |v(x) − v(y)| ≤ d_Ω(x, y) for every x, y ∈ Ω. Now, let us introduce some extra notation that will be used in the rest of the paper. Let us recall that (see Proposition 4.17 in [15]) the support of a general function f : Ω → R is defined as the complement of the union of the open sets on which f vanishes a.e. By definition, supp(f) is a closed set with respect to the subset topology relative to Ω. For any f ∈ L^2(Ω) and for any continuous function u we define the sets A^+ := supp((u − f)^+) and A^- := supp((u − f)^-). For future use we need to give the precise definition of a solution to a partial differential equation in the viscosity sense. Definition 1.8. Let D be a locally compact subset of R^N and let F be a degenerate elliptic operator. We say that u ∈ C(D) is a viscosity sub (super) solution to (1.24) F(x, u, ∇u, D²u) = 0 if, whenever a C² function ϕ touches u from above (below) at a point x_0 ∈ D, then F(x_0, u(x_0), ∇ϕ(x_0), D²ϕ(x_0)) ≤ 0 (≥ 0). Finally, u ∈ C(D) is a viscosity solution of (1.24) if it is both a sub and a supersolution.
In the sequel we will use the following result.
Proposition 1.9. Any function in X(Ω) is a viscosity subsolution of |∇u| − 1 = 0 and a viscosity supersolution of 1 − |∇u| = 0; these are the two inequalities in (1.25).
Proof of Proposition 1.9. Let us observe that (1.25) is equivalent to saying that the following inequalities hold true at any x_0 ∈ Ω: every C² test function touching u at x_0, from above or from below, has gradient of norm at most one at x_0. In order to prove the first inequality in (1.25), let x_0 ∈ Ω and assume that there exists ϕ, a C² function in a neighborhood of x_0, such that u − ϕ has a strict local maximum at x_0, with u(x_0) = ϕ(x_0). Suppose by contradiction that |∇ϕ(x_0)| > 1. Taking the first order Taylor expansion of ϕ at x_0, it follows that u decreases at a rate strictly larger than one along the direction of −∇ϕ(x_0). This contradicts u ∈ X(Ω).
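The contradiction can be spelled out with the standard computation below (our notation; q denotes the normalized gradient direction):

```latex
% Assume |\nabla\varphi(x_0)| > 1 and set q := \nabla\varphi(x_0)/|\nabla\varphi(x_0)|.
% Since u \le \varphi near x_0 and u(x_0) = \varphi(x_0), for small t > 0:
\[
u(x_0) - u(x_0 - t q) \;\ge\; \varphi(x_0) - \varphi(x_0 - t q)
\;=\; t\,|\nabla\varphi(x_0)| + o(t) \;>\; t ,
\]
% which is impossible, because |\nabla u| \le 1 a.e. gives
\[
|u(x_0) - u(x_0 - t q)| \;\le\; |t q| \;=\; t .
\]
```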
The second inequality in (1.25) follows in the same way.

THE BEST LIPSCHITZ APPROXIMATION WITH GRADIENT CONSTRAINT
Let p_n be an increasing sequence such that p_n → ∞ as n → ∞. Let us set, for any f ∈ L²(Ω), the functional (2.1) J_n(v) := (1/p_n)∫_Ω |∇v|^{p_n} dx + (1/2)∫_Ω (v − f)² dx, v ∈ W^{1,p_n}(Ω). For any fixed n, it is not hard to see (exploiting the convexity of (2.1)) that there exists a unique minimizer u_n ∈ W^{1,p_n}(Ω) of (2.1). Thus, from direct variational arguments, the minimizer u_n turns out to be a weak solution to (2.2) −∆_{p_n} u_n + u_n = f in Ω, with homogeneous Neumann boundary conditions. Let us recall here such a formulation.
Definition 2.1. A weak solution u_n to (2.2) is a function u_n ∈ W^{1,p_n}(Ω) that satisfies ∫_Ω |∇u_n|^{p_n−2} ∇u_n · ∇φ dx + ∫_Ω (u_n − f) φ dx = 0 for every φ ∈ W^{1,p_n}(Ω). The aim of this section is to deal with the limiting behaviour of these three objects (the minimizer, the functional, and the equation) as p_n diverges.
We start with the following proposition, which provides a limit for the sequence of minimizers together with a limiting minimization problem associated to the limit of (2.1). Proposition 2.2. Assume f ∈ L²(Ω) and let {u_n} ⊂ W^{1,p_n}(Ω) be the sequence of minimizers of (2.1); then there exists u ∈ X(Ω) such that, up to subsequences, u_n → u uniformly in Ω. Moreover, u is the unique minimizer of the functional (2.4) J_∞(v) := (1/2)∫_Ω (v − f)² dx, v ∈ X(Ω). Proof. The minimality of u_n gives us that J_n(u_n) ≤ J_n(0). This implies that (2.5) ∫_Ω |∇u_n|^{p_n} dx ≤ (p_n/2)‖f‖²_{L²(Ω)}. For N < p_n, let us recall that Morrey's inequality (see [15, Corollary 9.14]) implies that, for any x, y ∈ Ω, |u_n(x) − u_n(y)| ≤ C_M |x − y|^{1−N/p_n} ‖∇u_n‖_{L^{p_n}(Ω)}, where C_M = C_M(N, Ω) can be chosen independently of p (see in particular formula (28) in the proof of [15, Theorem 9.12]). Therefore, we deduce the bound (2.8). Combining (2.5) with (2.8) we conclude (2.9). On the other hand, using Hölder's inequality, it follows that, for any q such that N < q < p_n, ‖∇u_n‖_{L^q(Ω)} ≤ |Ω|^{1/q − 1/p_n} ‖∇u_n‖_{L^{p_n}(Ω)}. Thanks to the two estimates above, we deduce that the sequence {u_n} is bounded in W^{1,q}(Ω); then, up to a (not relabeled) subsequence, u_n → u weakly in W^{1,q}(Ω) and uniformly in Ω. The lower semicontinuity of the norm also implies a bound on ‖∇u‖_{L^q(Ω)} for any q ≥ 1. Passing to the limit as q → ∞, we conclude that u ∈ X(Ω).
In order to show that u is, in fact, a minimizer of (2.4), let us notice that, by definition, u_n satisfies J_n(u_n) ≤ J_n(v) for every v ∈ X(Ω). We can now pass to the limit in the previous inequality (recall that u_n → u uniformly in Ω), and we deduce that u is a minimizer. The uniqueness follows by the convexity of J_∞.
Remark 2.3. When f ∈ L^∞(Ω) we have an explicit uniform bound for u_n, namely ‖u_n‖_{L^∞(Ω)} ≤ ‖f‖_{L^∞(Ω)}. Indeed, for k = ‖f‖_{L^∞(Ω)}, the truncation T_k(u_n) is a competitor for u_n in the energy (2.1), and the uniqueness of the minimizer of J_n implies T_k(u_n) = u_n. In other words, the minimizer of (2.4) is the best approximation of f among all the functions in X(Ω). The gradient constraint imposes a specific geometric structure on the minimizer. The goal of the following results is to clarify such a structure. Proposition 2.4. Assume that f ∈ L²(Ω) and let u ∈ X(Ω) be the minimizer of (2.4). Then u − f cannot have constant sign, unless f ∈ X(Ω). Proof. We observe that if f ∈ X(Ω) then u ≡ f, hence we assume f ∉ X(Ω) and argue by contradiction. Expanding J_∞ along a suitable competitor, the last term in the right hand side is negative for ε small enough. This contradicts the minimality of u.
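The truncation argument of Remark 2.3 can be illustrated on a grid; the sketch below is our own illustration (the discrete energy, the datum and the random competitor are illustrative choices), checking that truncating at level k = ‖f‖_∞ does not increase a discretized J_n.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 401)
h = x[1] - x[0]
p = 8

f = np.sin(3 * x)                   # datum with ||f||_inf <= 1
k = np.abs(f).max()                 # truncation level k = ||f||_inf
u = np.cumsum(rng.uniform(-2, 2, x.size)) * h   # some competitor, may overshoot +-k

def T(v, k):                        # truncation at levels -k and k
    return np.clip(v, -k, k)

def J(v):                           # discrete (1/p)∫|v'|^p + (1/2)∫(v-f)^2
    g = np.diff(v) / h
    return h * (np.abs(g)**p).sum() / p + 0.5 * h * ((v - f)**2).sum()

print(J(T(u, k)), "<=", J(u))
```

Since T_k is 1-Lipschitz, truncation cannot increase the discrete gradient term, and it moves the competitor pointwise closer to f; hence both terms of the energy decrease, matching the competitor argument of the remark.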
Remark 2.5. In fact, the proof shows a slightly stronger statement; note, however, that for the proposition it is enough to have f ∉ X(Ω). Next we deduce a representation formula for the unique minimizer of (2.4).
Theorem 2.6. Assume f ∈ L²(Ω) and let u be the unique minimizer of (2.4); then the following representation formulas hold true: (2.10) u(x) = max_{y∈∂A^+} { u(y) − d_Ω(x, y) } for every x ∈ A^+, (2.11) u(x) = min_{y∈∂A^-} { u(y) + d_Ω(x, y) } for every x ∈ A^-. Proof. We assume that f ∉ X(Ω), otherwise u ≡ f and there is nothing to prove. Proposition 2.4 ensures that both A^+ and A^- are nontrivial; then the max and the min in (2.10) and (2.11) are well defined. We start by proving the second formula above. Let us define v(x) := min_{y∈∂A^-} { u(y) + d_Ω(x, y) } for x ∈ A^- and v := u otherwise. The strategy of the proof is to show that such a v belongs to X(Ω) and that it is a competitor for u in the minimization of (2.4); then, uniqueness of the minimizer implies that v ≡ u. The first step is to show that (2.12) holds. By definition we have that (2.13) holds for some x, z ∈ ∂A^-. If x ≠ z the desired estimate follows directly; on the other hand, if x = z, we get (2.14). Thus, by (2.13)-(2.14), we deduce the claim and (2.12) follows.
Next we prove that v ≡ u on ∂A − .
To conclude the proof we need to show that v ≡ u in A^-. If we assume that there exists x ∈ A^- such that u(x) > v(x), then, by the definition of v, we obtain a contradiction with the fact that u ∈ X(Ω). Therefore, let us assume, arguing again by contradiction, that there exists x ∈ A^- such that u(x) < v(x). We set D := {u < v} ∩ A^- and we define ũ := max{u, v} in D, ũ := u otherwise. Since both u and v are continuous, it follows that ũ is continuous in Ω, and moreover ũ ∈ X(Ω) (since both u and v are). Observe now that ũ satisfies ũ < f in A^-: in A^- \ D this follows by the definition of A^-, while in D it follows by construction. Moreover, the energies of ũ and u coincide in Ω \ D, while |ũ − f| < |u − f| in D. Then we have J_∞(ũ) < J_∞(u), which contradicts the fact that u minimizes J_∞.
Thus, we have that v(x) = u(x) in Ω and in particular v(x) = u(x) in A − and consequently in A − the representation formula given by (2.11) holds true.
Analogously one can prove that also (2.10) is in force.
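In one dimension the cone representation can be visualized directly. The sketch below is our own illustration (the interval, the boundary values and the specific formula u(x) = min_{y∈∂A^-}{u(y) + d(x, y)}, which on an interval reduces to the minimum of two upward cones, are our rendering of the representation): it checks that the formula produces the largest 1-Lipschitz function with the prescribed boundary values.

```python
import numpy as np

# Interval playing the role of A^-, with prescribed values at the two endpoints.
a, b = -0.5, 0.7
u_a, u_b = 0.2, -0.1                 # compatible data: |u_a - u_b| <= b - a
x = np.linspace(a, b, 501)
h = x[1] - x[0]

# Candidate from the cone formula: minimum over the boundary of u(y) + |x - y|
v = np.minimum(u_a + np.abs(x - a), u_b + np.abs(x - b))

slopes = np.diff(v) / h
print("1-Lipschitz:", np.abs(slopes).max() <= 1 + 1e-9)

# Any 1-Lipschitz competitor with the same boundary values stays below v;
# for instance, the straight line joining the two boundary values:
w = u_a + (u_b - u_a) * (x - a) / (b - a)
print("v >= line:", np.all(v >= w - 1e-12))
```

This is the classical McShane-type maximal extension: the minimum of upward cones through the boundary data dominates every 1-Lipschitz function with the same boundary values.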
Proof of Theorem 1.1. The proof follows immediately by combining the results of Proposition 2.2 and Theorem 2.6.
Remark 2.7. Let us observe that a similar version of the previous results can be obtained working in W^{1,p_n}_0(Ω), i.e. considering homogeneous Dirichlet boundary conditions. Actually, this latter case can be recovered as a sort of corollary of the W^{1,p_n}(Ω) one. To get convinced of this fact, notice that the limit space for the Dirichlet problem is X_0(Ω) := { v ∈ W^{1,∞}(Ω) : |Dv| ≤ 1 a.e. in Ω and v = 0 on ∂Ω }.
We consider the two minimization problems inf { (1/2)∫_Ω (v − f)² dx : v ∈ X_0(Ω) } and inf { (1/2)∫_Ω (v − f̃)² dx : v ∈ X_0(Ω) }, where f̃(x) = max{−δ(x), min{f(x), δ(x)}}. We claim that the minimizer u ∈ X_0(Ω) of the first functional coincides with the minimizer ũ ∈ X_0(Ω) of the second one. Indeed, thanks to the definition of f̃, one easily deduces that ũ ∈ X_0(Ω). Moreover, if u and ũ differed in some region, one could build a competitor for one of the two, contradicting uniqueness.

THE PROBLEM WITH GRADIENT LOWER ORDER TERM
In this section we address the limit, as n → ∞, of the family of problems (1.10).
Our first result provides existence and uniqueness of a solution to problem (1.10), for any fixed p_n ∈ (1, ∞), together with some estimates useful in order to study the asymptotic behaviour as n → ∞. Theorem 3.1. Let us assume (1.11) and f ∈ L^∞(Ω). Then, for any fixed n ∈ N, there exists a unique solution v_n ∈ W^{1,p_n}(Ω) ∩ L^∞(Ω) to problem (1.10) in the weak sense of (3.1). Moreover, the estimates (3.2) hold true. Proof. The existence of a bounded weak solution to (3.1) follows by [12]–[14], while the uniqueness is a consequence of [28, Theorem 1.2].
In order to prove that ‖v_n‖_{L^∞(Ω)} ≤ A, assume by contradiction that this is not the case; therefore, there exists k > A such that the set {x ∈ Ω : |v_n(x)| > k} has positive measure. Let us consider µ_n = p_n K_1 + 1 and let us choose, for k ≥ 0, a suitable test function in (3.1). Using assumption (1.11), we obtain (3.3). If we choose any k > A we get a contradiction, since the left hand side turns out to be strictly positive. Consider now k = 0 in (3.3); then we get a gradient estimate. Gathering together the above estimates, we obtain (3.2). Now we are ready to prove the main result of the section.
Proof of Theorem 1.5. Thanks to (3.2), we have a uniform bound on v_n. Using Hölder's inequality, it follows that for any N < q < p_n, ‖∇v_n‖_{L^q(Ω)} ≤ |Ω|^{1/q − 1/p_n} ‖∇v_n‖_{L^{p_n}(Ω)}; thanks to (3.2) and to the L^∞ bound, we deduce that the sequence {v_n} is bounded in W^{1,q}(Ω) and, up to a (not relabeled) subsequence, v_n → v weakly in W^{1,q}(Ω) and uniformly in Ω. The lower semicontinuity of the norm also implies a bound on the gradient of the limit. Therefore we conclude that v satisfies the limiting gradient bound. Let us now focus on the equation solved by v. We start by proving (3.4). Indeed, let us pick any x_0 ∈ Ω^+_v and any ϕ ∈ C²(Ω) such that v − ϕ has a strict local maximum at x_0 with v(x_0) − ϕ(x_0) = 0. We need to check the subsolution inequality for ϕ at x_0. Let us recall that v is the (uniform) limit of solutions to (3.1), so that there exist a sequence of real numbers ε_n → 0 and a sequence of points x_n → x_0 such that ϕ + ε_n touches v_n from above at x_n. Since both f and H_∞ belong to C⁰(Ω), then (see Theorem 1 in [29]) any v_n belongs to C^{1,α}(Ω) for some α ∈ (0, 1) (see also [20]), and thus v_n turns out to be also a viscosity solution to (1.10). Hence we have that (3.5) holds. If |∇ϕ(x_0)| > 1, the zeroth order term in (3.5) is strictly positive and we can drop it from inequality (3.5). Hence, we divide (3.5) by (p_n − 2)|∇ϕ(x_n)|^{p_n−4} and take the limit with respect to n: exploiting (1.12), we deduce the desired inequality. Conversely, if |∇ϕ(x_0)| < 1, we can directly pass to the limit in (3.5), and we get ϕ(x_0) − f(x_0) = v(x_0) − f(x_0) ≤ 0, which contradicts the case under consideration.
Next we prove (3.6). Consider any x_0 ∈ Ω^+_v and take any ϕ ∈ C²(Ω) such that v − ϕ has a strict local minimum at x_0 with v(x_0) − ϕ(x_0) = 0. We want to prove that for such a ϕ the supersolution inequality holds. If 1 − |∇ϕ(x_0)| ≥ 0, the inequality is trivially satisfied, so we deal with the case |∇ϕ(x_0)| > 1. As before, we recall that there exist a sequence of real numbers ε_n → 0 and a sequence of points x_n → x_0 such that ϕ − ε_n touches v_n from below at x_n. Dividing the corresponding inequality by (p_n − 2)|∇ϕ(x_n)|^{p_n−4} and taking the limit with respect to n, we conclude. Gathering (3.4) and (3.6), we conclude that v is a viscosity solution in Ω^+_v. The proof of the analogous equation in Ω^-_v follows similarly to the previous case.
Let us focus now on the boundary conditions. For the sake of brevity, we only prove the first of the four inequalities in (1.15)-(1.16), since the proofs of the other ones follow in the same way.
Assume that there exists ϕ ∈ C²(Ω) such that v − ϕ has a strict local maximum at x_0 ∈ ∂Ω, with v(x_0) = ϕ(x_0). Since {v_n} converges uniformly in Ω to v, there exist a sequence of real numbers ε_n → 0 and a sequence of points x_n → x_0 such that ϕ + ε_n touches v_n from above at x_n. Let us assume that (up to a subsequence) {x_n} ⊂ (∂Ω)^+_v; we recall that, since any v_n belongs to C^{1,α}(Ω), then ∂v_n(x)/∂ν = 0 for all x ∈ ∂Ω.
Using that ϕ + ε_n touches v_n from above at x_n, we have that ∂ϕ(x_n)/∂ν ≤ 0. Passing to the limit as n → ∞, it follows that ∂ϕ(x_0)/∂ν ≤ 0.
On the contrary, if we assume that {x_n} ⊂ Ω^+_v, we take advantage of the fact that v_n is a viscosity subsolution to (2.2), i.e. inequality (3.5) holds true. Going back to the proof of Theorem 1.5, we observe that |∇ϕ(x_0)| < 1 yields a contradiction and that |∇ϕ(

THE LIMIT EQUATION FOR THE MODEL PROBLEM
Now, let us go back to the limit as p → ∞ of (1.10) with H ≡ 0.
Proof of Theorem 1.2. Since u ∈ X(Ω), by Proposition 1.9 we have that both inequalities of Proposition 1.9 hold in the viscosity sense. The reverse inequalities in Ω^+_u and Ω^-_u follow directly from Theorem 1.5. As far as the boundary conditions are concerned, notice that one inequality follows directly from Theorem 1.5. In order to prove that (4.1) 1 − |∇u| ≤ 0 on (∂Ω)^+_u, assume by contradiction that there exists ϕ ∈ C² that touches u from above at x_0 and that satisfies |∇ϕ(x_0)| < 1. Take a small r such that B_r(x_0) ∩ ∂Ω ⊂ (∂Ω)^+ and |∇ϕ(y)| < 1 for all y ∈ B_r(x_0) ∩ Ω. Therefore, set µ and η accordingly and consider, for ε := min{µ, η}, a suitable modification ũ of u near x_0. Thanks to the definition of ε, ũ is continuous, ũ > f in A^+ and ũ(x_0) < u(x_0). Moreover, the choice of r also implies that ũ ∈ X(Ω). This would imply J_∞(ũ) < J_∞(u), which contradicts the minimality of u. Therefore (4.1) holds true.
To deal with the boundary condition on (∂Ω) − , we can follow exactly the same strategy and hence we omit the details.
Even if the minimizer of (2.4) is unique, in general problem (1.6)-(1.7) possesses more than one solution.
In the following we provide a solution to (1.6), that satisfies the same boundary condition, and that differs from the minimizer of J ∞ (unless f ∈ X(Ω)).
Proof of Theorem 1.3. Let us consider the set M and notice that it is not empty, since the constant function ‖f‖_{L^∞(Ω)} belongs to M; let us set α accordingly. By definition of M, we have that α ≥ ∫_Ω f, and we consider a sequence {v_n} ⊂ M whose integrals over Ω converge to α. Observe that for any n ∈ N we have v_n ≥ f; moreover, one can assume, without loss of generality, that the sequence is uniformly bounded. Moreover, since {v_n} ⊂ X(Ω), the sequence is also equicontinuous. Thus, by the Ascoli–Arzelà Theorem, v_n converges uniformly in Ω to some u ∈ X(Ω) that satisfies α = ∫_Ω u.
Next, we prove that u turns out to be a viscosity solution to (1.8). Thanks to Proposition 1.9, we immediately deduce that u satisfies one of the two inequalities. Hence, to conclude the proof, we take any x_0 ∈ Ω^+_u ∪ (∂Ω)^+_u and assume by contradiction that there exists ϕ ∈ C² that touches u from above at x_0 and that satisfies |∇ϕ(x_0)| < 1. Since ϕ is smooth, there exists a small r > 0 such that |∇ϕ(y)| < 1 for all y ∈ B_r(x_0) ∩ Ω. Therefore, we define µ, η and ε := min{µ, η} as in the proof of Theorem 1.2. Thus, thanks to the definition of ε, ũ is continuous, ũ > f in Ω^+_u and ũ(x_0) < u(x_0). Moreover, the choice of r also implies that ũ ∈ X(Ω). However, by construction this would imply ∫_Ω ũ < α, which yields a contradiction. Hence, we obtain that 1 − |∇ϕ(x_0)| ≤ 0.
Finally, the uniqueness of the solution of the obstacle problem follows from the minimality of u.
Arguing as before, we have that also u solves (1.6).
Remark 4.2. As already observed, we have that in general u ≥ f in Ω, while the variational solution u is such that u − f cannot have constant sign (unless f ∈ X(Ω)).
Proof of Proposition 1.6. Let us define v^+ and v^- through the cone representation formulas. We prove, at first, that v^+ solves 1 − |∇v^+| = 0 and that −∆_∞ v^+ ≤ 0 in Ω^+_w. Thanks to Theorem 2.6 (we recall that v^+ ∈ X(Ω^+)), we already know that 1 − |∇v^+| ≥ 0. Notice now that, for any y ∈ ∂Ω^+_w, the function x ↦ w(y) − d_Ω(x, y) is locally a cone (recall that the geodesic distance coincides locally with the Euclidean one). Therefore, it solves the eikonal equation and it is infinity harmonic outside its vertex. Since v^+ is defined as a maximum of such functions, the stability properties of viscosity solutions (see Proposition 4.3 of [24]) imply that −∆_∞ v^+ ≤ 0 in Ω^+_w. Consequently, we find that v^+ solves both equations. Let us assume now that w solves (4.4). Thanks to the comparison principle (see Theorem 2.1 of [27]), we have that w ≡ v^± in Ω^±_w, respectively. Thanks to the properties of v^±, we deduce that w is also a solution to (4.5). On the other hand, if w is a solution to (4.5), then again by the comparison principle (see Theorem 5.9 of [5]), w ≡ v^± in Ω^±_w and then it also solves (4.4).

A FUNCTIONAL ALLOWING JUMPS
We start this section by recalling some basic facts about the space of special functions of bounded variation SBV(Ω). For more details we refer the interested reader to the overview [26] or the classical monograph [2]. Let us recall that a function v ∈ L¹(Ω) belongs to BV(Ω) (bounded variation) if and only if there exists a vector valued Radon measure Dv representing the distributional derivative of v. Thanks to the Radon–Nikodym Theorem, we can uniquely decompose Dv as Dv = Dv^a + Dv^s, with Dv^a ≪ H^N and Dv^s ⊥ H^N.
To give a better description of the absolutely continuous part and the singular one, we need to introduce some more concepts. For any x ∈ Ω, we say that v ∈ BV(Ω) admits an approximate limit ṽ(x) ∈ R if (5.1) holds. It can be proved (see Theorem 3.83 of [2]) that any v ∈ BV(Ω) is approximately differentiable a.e. and that ∇v ∈ (L¹(Ω))^N is the density of the absolutely continuous part of Dv with respect to the Lebesgue measure, namely Dv^a = ∇v H^N.
In order to describe also the singular part of Dv, let us consider the set C_v ⊂ Ω of all points where (5.1) holds true and denote S_v = Ω \ C_v. The set S_v is called the set of approximate discontinuity points of v, and it is countably (N − 1)-rectifiable. Now, we introduce the decomposition of Dv^s into Dv^j and Dv^c. The measure Dv^j takes into account the jumps of the function v, while Dv^c is the Cantor part of Dv. We notice that in [2] Dv^j is defined through the slightly smaller set J_v, the set of approximate jump points of v; since H^{N−1}(S_v \ J_v) = 0, we keep using S_v for the sake of simplicity (see also [26]). At this point it is easy to define the space of special functions of bounded variation as SBV(Ω) := { v ∈ BV(Ω) : Dv^c = 0 }. Now we are ready to define our functional I : SBV(Ω) → R as in (5.2). Comparing (5.2) with (1.18), we note that the new functional framework allows one to pass from a functional of two variables to a functional of a single variable, where the discontinuity set K is replaced by the singular (jump) set S_v.
We also stress that BV(Ω) would not be a good ambient space for (5.2), since the functional is not coercive with respect to the Cantor part of the gradient measure. To be convinced that SBV(Ω) is the right ambient space in which to settle the minimization of I_p, it is enough to look at the following result.
Then there exists v ∈ SBV(Ω) such that, up to a subsequence, v_n → v strongly in L¹(Ω), ∇v_n ⇀ ∇v weakly in L^p(Ω), Dv_n ⇀ Dv weakly star in the sense of measures, and the energy is lower semicontinuous along the sequence. Theorem 5.1 easily implies the existence of a minimizer u_p ∈ SBV(Ω) for (5.2). Indeed, it is easy to see that any minimizing sequence v_n can be chosen so that ‖v_n‖_{L^∞(Ω)} ≤ ‖f‖_{L^∞(Ω)}, basically because truncation at a large constant always decreases the energy of the functional.
What is not clear is whether the pair (S̄_{u_p}, u_p) is a minimizer of (1.18). The main difficulty here is that in general the singular set of an SBV function need not be closed and can even be dense in Ω. What it is possible to prove is that any minimizer u_p ∈ SBV(Ω) of (5.2) satisfies H^{N−1}(S̄_{u_p} \ S_{u_p}) = 0 and that indeed (S̄_{u_p}, u_p) is a minimizer of (1.18). We gather all this information in the following result.
Then there exists a minimizer u_p ∈ SBV(Ω) of (5.2). Moreover, there exists θ, depending on N and p, such that the lower density bound (1.19) holds for any x ∈ S̄_{u_p} and ρ > 0.
This lower bound implies that H^{N−1}(S̄_u \ S_u) = 0. Thus the pair given by K = S̄_{u_p} and u ∈ W^{1,p}(Ω \ S̄_{u_p}) is a minimizer of the classical Mumford–Shah problem (1.18). Our next step is to prove that any sequence {u_n} ⊂ SBV(Ω) of minimizers of (5.2), associated to a sequence p_n → ∞, converges (up to a subsequence) to a minimizer of (5.3). Theorem 5.3. Let {u_n} ⊂ SBV(Ω) be a sequence of minimizers of the functional (5.2) relative to a diverging sequence of real numbers p_n. Then any u_∞ ∈ SBV(Ω), accumulation point of {u_n} in L¹(Ω), is a minimizer of (5.3).
Proof. Recalling that M_{p_n}(u_n) ≤ M_{p_n}(0) and that ‖u_n‖_{L^∞(Ω)} ≤ ‖f‖_{L^∞(Ω)}, it follows that the energies are uniformly bounded. Using Hölder's inequality, we get a bound in L^q for any q < p_n. Theorem 5.1 and a diagonal argument then imply that there exists u_∞ ∈ SBV(Ω), with |∇u_∞| ≤ 1, such that u_n → u_∞ strongly in any L^q(Ω), ∇u_n ⇀ ∇u_∞ weakly in any (L^q(Ω))^N with q ∈ [1, ∞), and Du_n ⇀ Du_∞ weakly star in the sense of measures.

Now we claim that (5.4) holds.
This concludes the proof since, by a simple truncation argument, the infimum of J_∞ in SBV(Ω) ∩ L^∞(Ω), if attained, has to be bounded. By the definition of u_n we have (5.5). Taking the liminf with respect to n, we obtain the desired result.
Notice that in the proof of Theorem 5.3 we have proved that the functional M_{p_n} Γ-converges to the functional M_∞. For completeness, let us recall the definition of Γ-convergence. Definition 5.4. Let F_n : X → R be a sequence of functionals. We say that F_n Γ-converges to F : X → R if i) for any x ∈ X and any x_n → x, we have that F(x) ≤ liminf_{n→+∞} F_n(x_n); ii) for any x ∈ X, there exists {x_n} ⊂ X such that x_n → x and limsup_{n→+∞} F_n(x_n) ≤ F(x).
In our case i) has been proved in (5.4) while ii) is trivially obtained taking the limit in the right hand side of (5.5).

EXAMPLES
In this section we collect some explicit examples that clarify the behavior of the minimizers of some of the functionals presented in this manuscript.
Case 1. Choose f_1(x) = kχ_{(−r,r)}(x) with r ∈ (0, 1] and k ∈ R (see Figure 1). We first observe that if r = 1 then f_1 ∈ X(−1, 1) and thus the choice u_1(x) ≡ f_1(x) is allowed. Conversely, if r ∈ (0, 1), then f_1 ∉ X(−1, 1), and we look for a solution u_1(x) of the form u_h(x) = min{(h − |x|)_+, k}, with h to be determined. Thus J_∞(u_h) can be explicitly computed and, minimizing its value with respect to h, we find that the minimum is achieved at h* = r + k/2. The heuristic that motivates this example is that in a neighborhood where f is not smooth the solution is a line with the maximum slope allowed (i.e. ±1), and far away it tries to be as close as possible to f. Case 2. Consider f_2(x) = 2|x| (see Figure 2).
In this case the picture is even clearer. Since f_2 ∉ X(−1, 1), the closest function to f_2 in X(−1, 1) is a line with slope +1 in (0, 1] and with slope −1 in [−1, 0]. Since f_2 is even, u_2 inherits the same property, so that J_∞(u_h) can be computed explicitly. Case 3. Consider f_3(x) = √|x| (see Figure 3). By symmetry it follows that the solution is even and that near the origin it is a line with slope ±1, glued to f_3 at |x| = s. Therefore we deduce that the minimum is achieved for s = 4/9. Example 6.2. Let us underline the importance of minimizing J_∞ in SBV(Ω) by showing that for certain f, when jumps are allowed, the minimizer does jump.
In order to avoid technicalities, we first present full details in dimension 1, the extension of these ideas to any dimension being quite similar.
As already observed in the previous example, for suitable parameters we have I(f) < J_∞(u*_h), and we conclude that the minimizer indeed has jumps.
As far as the case of dimension N ≥ 2 is concerned, we repeat the same argument: in the domain Ω = B_R(0) we set f(x) = kχ_{B_r(0)}(x) with r > 0 to be chosen large enough.