A dynamical approach to the variational inequality on modified elastic graphs

Abstract. We consider the variational inequality on modified elastic graphs. Since the variational inequality is derived from the minimization problem for the modified elastic energy defined on graphs subject to a unilateral constraint, a solution to the variational inequality can be constructed by the direct method of calculus of variations. In this paper we prove the existence of solutions to the variational inequality via a dynamical approach. More precisely, we construct an L²-type gradient flow corresponding to the variational inequality and prove the existence of solutions to the variational inequality by studying the limit of the flow.


Introduction
In this paper we are interested in a dynamical approach to the obstacle problem for modified elastic graphs: find u : [0, 1] → R solving the minimization problem (1.1) for the modified elastic energy E_λ := λL + W, where L denotes the length functional. The functional W is the integral of the squared curvature κ_v of the curve γ(x) := (x, v(x)) with respect to the arc length parameter of γ, and is called the bending energy, the elastic energy, or the one-dimensional Willmore functional. One strategy to find a solution to (1.1) is the direct method of calculus of variations. Indeed, [3, 13] proved the existence of minimizers of (1.1) with λ = 0. If u is a minimizer of (1.1), then u satisfies the variational inequality (1.6). The purpose of this paper is to prove the existence of solutions to (1.6) via a dynamical approach. Although problem (1.1) can be solved by the direct method of calculus of variations, it is significant to prove the existence of solutions to (1.6) by another strategy, because it is not clear whether the set of all minimizers of (1.1) coincides with the set of all solutions to (1.6), due to the lack of convexity of the functional λL + W.
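For a graph γ(x) = (x, v(x)) the curvature and the arc length element ds = (1 + v'²)^{1/2} dx are explicit, so L and W can be written out; the following identities are a routine computation from the definitions above:

```latex
L(v) = \int_0^1 \sqrt{1 + v'(x)^2}\,dx,
\qquad
\kappa_v = \frac{v''}{\bigl(1 + v'^2\bigr)^{3/2}},
\qquad
W(v) = \int_0^1 \kappa_v^2 \, ds
     = \int_0^1 \frac{v''(x)^2}{\bigl(1 + v'(x)^2\bigr)^{5/2}}\,dx,
\qquad
E_\lambda(v) = \lambda L(v) + W(v).
```

The exponent 5/2 arises from κ_v² ds = (v'')² (1 + v'²)^{-3} (1 + v'²)^{1/2} dx.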
Recently M. Müller [14] gave a dynamical approach to the existence of solutions to (1.6) with λ = 0. In [14] an H²-type gradient flow for the minimization problem (1.1) with λ = 0 was constructed, and it was proved that a solution of the flow subconverges to a solution of (1.6) as t → ∞. We are interested in an L²-type gradient flow for the minimization problem (1.1). As the Willmore flow, which is the L²-gradient flow for the Willmore functional, was inspired by the Willmore conjecture (e.g., see [11, 12, 20]), it is generally natural to construct L²-gradient flows corresponding to variational problems on the elastic energy (e.g., see [5, 7, 16, 21]).
To this end, we consider a variational inequality of parabolic type, denoted by (P). Following the motivation stated above, we prove the following: (i) problem (P) possesses a unique local-in-time solution with an L²-gradient structure for λL + W; (ii) for suitable initial data u_0, problem (P) possesses a unique global-in-time solution and the solution converges to a solution of (1.6) as t → ∞. The first main result of this paper is concerned with (i):

Theorem 1.1. Let ψ : [0, 1] → R satisfy (1.5). Then for each u_0 ∈ K_ψ there exists T = T(u_0) > 0 such that (P) possesses a unique solution u ∈ K_T. Moreover, E_λ(u(t)) is well-defined for all t ∈ [0, T] and satisfies

E_λ(u(t_2)) − E_λ(u(t_1)) ≤ − ∫_{t_1}^{t_2} ∫_I |∂_t u|² dx dt   for 0 ≤ t_1 ≤ t_2 ≤ T. (1.7)

We infer from Theorem 1.1 that problem (P) is solvable for all natural initial data and that the energy E_λ is nonincreasing along the orbit of the solution to (P). Moreover, we obtain regularity properties of the solutions to (P) (see Theorem 5.1). In order to prove (ii), we need a certain restriction on λ, ψ, and u_0. Indeed, even if λ = 0, problem (1.6) has no solution for some ψ satisfying (1.5) (e.g., see [3, 13]). Let c_* denote the threshold constant defined in (1.8). Then the second main result of this paper is stated as follows:

Theorem 1.2. Let 0 < λ < c_* and assume that ψ satisfies (1.5) and (1.9). Then for each u_0 ∈ K_ψ satisfying E_λ(u_0) ≤ c_*, problem (P) possesses a unique global-in-time solution u ∈ K_∞. Moreover, the solution u subconverges to a solution u_* ∈ K_ψ of (1.6) as t → ∞.
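In weak form, the parabolic variational inequality associated with E_λ on the convex set K_ψ has the following typical structure (a sketch, not a verbatim quotation of (P); δE_λ(u)[φ] denotes the first variation of E_λ at u in the direction φ):

```latex
u(t) \in K_\psi \ \text{for a.e. } t \in (0, T), \qquad u(0) = u_0,
```
```latex
\int_0^T \!\!\int_I \partial_t u \,(v - u)\, dx\, dt
  + \int_0^T \delta E_\lambda(u)\,[\,v - u\,]\, dt \;\ge\; 0
\qquad \text{for all } v \in K_T.
```

Testing with v = u would make both terms vanish; the inequality encodes that the velocity −∂_t u is the L²-steepest admissible descent direction for E_λ compatible with the constraint u ≥ ψ.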
If the obstacle ψ is small enough, then assumption (1.9) is verified (see Remark 6.5). Theorem 1.2 means that, for ψ satisfying (1.9), a solution of (1.6) can be constructed as a limit of a solution to (P). For the case λ = 0, we can obtain the same assertion as in Theorem 1.2 under an assumption stronger than (1.9) (see Remark 6.6).
We also prove that a solution to problem (P) may blow up (see Theorem 6.2 and Lemma 6.3). It would be interesting to study the behavior of blow-up solutions to problem (P). Moreover, if uniqueness of solutions to problem (1.6) fails, it may be interesting to study a stability notion for solutions to (1.6) via the parabolic problem (P). This paper is organized as follows: in Section 2 we collect notation and inequalities used throughout the paper; in Section 3 we prove the uniqueness of solutions to (P); in Section 4 we discuss a family of approximate solutions constructed via minimizing movements: existence, regularity, and convergence. We prove Theorems 1.1 and 1.2 in Sections 5 and 6, respectively.

Preliminaries
From now on, we denote by I the open interval (0, 1). We denote by H(I) the Hilbert space H²(I) ∩ H¹₀(I), equipped with a scalar product whose induced norm ‖·‖_{H(I)} is equivalent to ‖·‖_{H²(I)} (see e.g. [9, Theorem 2.31]); i.e., there exists a positive constant c_H such that (2.1) holds. We collect the interpolation inequalities used in this paper.
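One standard choice consistent with the description above (an assumption, since several equivalent scalar products are possible on H²(I) ∩ H¹₀(I)) is:

```latex
(u, v)_{H(I)} := \int_0^1 u''(x)\,v''(x)\,dx,
\qquad
\|v\|_{H(I)} = \|v''\|_{L^2(I)},
```
```latex
% equivalence (2.1): by Poincaré-type inequalities on H^1_0(I),
c_H\,\|v\|_{H^2(I)} \;\le\; \|v\|_{H(I)} \;\le\; \|v\|_{H^2(I)}
\qquad \text{for all } v \in H(I).
```

The upper bound is trivial; the lower bound follows since the boundary conditions v(0) = v(1) = 0 let the L² norms of v and v' be controlled by ‖v''‖_{L²(I)}.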

Proposition 2.2.
Let Ω ⊂ R^N be a bounded open set satisfying the cone condition, and let k, l, and m be integers. As in (1.2), we let ⟨s⟩ := (1 + s²)^{1/2}. By the mean value theorem we have, for s, t ∈ R and m > 0, an estimate of the form |⟨s⟩^{−m} − ⟨t⟩^{−m}| ≤ C|s − t|, where the constant C depends only on m. From now on, the letter C denotes a generic positive constant which may take different values, even within the same line.
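The mean value theorem step can be made explicit: since |s| ≤ ⟨s⟩ and ⟨s⟩ ≥ 1, the derivative of s ↦ ⟨s⟩^{−m} is uniformly bounded, which yields the Lipschitz estimate with C depending only on m:

```latex
\frac{d}{ds}\,\langle s\rangle^{-m} = -\,m\,s\,\langle s\rangle^{-m-2},
\qquad
\Bigl|\frac{d}{ds}\,\langle s\rangle^{-m}\Bigr|
 \le m\,\frac{|s|}{\langle s\rangle}\,\langle s\rangle^{-m-1} \le m,
```
```latex
\bigl|\langle s\rangle^{-m} - \langle t\rangle^{-m}\bigr| \le m\,|s - t|
\qquad (s, t \in \mathbb{R},\ m > 0).
```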

Uniqueness
In this section we prove the uniqueness of solutions to (P).
Proof. Assume that (3.1) does not hold. Then there exist 0 ≤ τ_1 < τ_2 ≤ T and v ∈ K_T such that (3.2) holds. We first consider the case 0 < τ_1 < τ_2 < T. For 0 < ε < 1, we define η_ε ∈ H¹(0, T) as a piecewise linear cut-off which equals 1 on (τ_1 + ε, τ_2 − ε) and vanishes outside (τ_1, τ_2), and set v_ε := (1 − η_ε)u + η_ε v. Since v_ε ∈ K_T for ε > 0, taking v_ε as the test function in (P), we obtain an inequality whose error terms we denote by J_1 and J_2. We claim that J_1 ≤ C√ε. Since u, v ∈ K_T, we infer from Hölder's inequality an estimate on the transition layers (τ_1, τ_1 + ε) and (τ_2 − ε, τ_2), where we used the fact that u, v ∈ C([0, T]; W^{1,∞}(I)). Estimating the remaining part similarly, we obtain J_1 ≤ C√ε. Along the same lines, we have J_2 ≤ C√ε. Taking ε > 0 small enough, we see that (3.2) contradicts J < 0. Similarly, we derive a contradiction if τ_1 = 0 or τ_2 = T. Therefore Lemma 3.1 follows.
Theorem 3.2. Let u_1, u_2 ∈ K_T be solutions to (P) for some T > 0 and assume that u_1|_{t=0} = u_2|_{t=0} in I. Then u_1 = u_2 in L^∞(0, T; H(I)) ∩ H¹(0, T; L²(I)).
Proof. Fix τ ∈ [0, T] arbitrarily. It follows from Lemma 3.1 that (3.3) holds for v ∈ K_T and i = 1, 2. Setting v = u_2 in (3.3) for u_1 and v = u_1 in (3.3) for u_2, adding the resulting inequalities, and using Fubini's theorem together with u_1|_{t=0} = u_2|_{t=0} in I, we arrive at an estimate whose right-hand side we split into two terms J_1 and J_2. We estimate J_1 by (2.3), Hölder's inequality, Lemma 2.1, and Young's inequality with ε > 0. Next we turn to J_2. Since ⟨s⟩ ≥ 1 and u_1, u_2 ∈ C([0, T]; W^{1,∞}(I)), along the same lines as in the estimate on J_1 we obtain, for τ > 0, a bound to which Gronwall's inequality applies, yielding (3.6). Combining (3.5) with (3.6), we see that u_1 = u_2 in L^∞(0, T; H(I)). Moreover, since u_1 = u_2 in L²(0, T; L²(I)), we deduce from the uniqueness of weak derivatives that u_1 = u_2 in H¹(0, T; L²(I)). This completes the proof.
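The Gronwall step at the end of the proof has the standard form; with y(t) := ‖(u_1 − u_2)(t)‖²_{L²(I)} (an assumption about the precise quantity being estimated), the differential inequality derived above yields:

```latex
y'(t) \le C\,y(t) \ \ \text{for a.e. } t \in (0, T), \qquad y(0) = 0
\quad\Longrightarrow\quad
y(t) \le y(0)\,e^{Ct} = 0 \ \ \text{for all } t \in [0, T].
```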

Approximate solutions
In this section we construct a family of approximate solutions to (P) via De Giorgi's minimizing movements (e.g., see [2]). We divide Section 4 into three subsections: existence and uniform estimates, regularity, and convergence.

4.1. Existence and uniform estimates
We construct a family of approximate solutions of a local-in-time solution to (P). For this purpose, we first define a constant T > 0 small enough to satisfy (4.1). Under this setting, we define the discrete approximate solutions. For T defined by (4.1) and n ∈ N we set τ_n := T/n.
We define {u_{i,n}} := {u_{i,n}}_{i=0}^{n} inductively as follows: let u_{0,n}(x) := u_0(x), and for each i = 1, . . . , n, let u_{i,n} be a solution of the minimization problem (M_{i,n}). The existence of such a solution is shown by the following argument.

Proof. Let {v_j}_{j∈N} ⊂ K̄_ψ be a minimizing sequence for problem (M_{i,n}), that is, a sequence along which G_{i,n} tends to its infimum as in (4.3). Since the infimum is finite, we may assume that (4.4) holds. Recalling (2.1), we see that {v_j}_j is uniformly bounded in H²(I). Thus, extracting a subsequence, we find ū ∈ H²(I) such that v_j ⇀ ū weakly in H²(I). This clearly implies that ū(0) = ū(1) = 0, ū ≥ ψ in Ī, and max_{x∈Ī} |ū'(x)| ≤ M. Thus we have ū ∈ K̄_ψ. We prove that ū is a minimizer of G_{i,n} in K̄_ψ. It follows from (4.3) and (4.4), together with weak lower semicontinuity and Hölder's inequality, that G_{i,n}(ū) ≤ lim inf_{j→∞} G_{i,n}(v_j), so that ū is a minimizer. This completes the proof.
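The minimizing-movement construction above can be illustrated numerically. The sketch below is not the paper's scheme: the modified elastic energy E_λ is replaced by a discrete Dirichlet energy (so that each incremental problem is convex and a projected gradient method converges), while the obstacle constraint v ≥ ψ, the boundary conditions v(0) = v(1) = 0, and the incremental functional E(v) + ‖v − u_{i−1}‖²_{L²}/(2τ) are kept. All function names and parameter values are illustrative.

```python
import numpy as np

def energy(u, h):
    """Discrete Dirichlet energy E(u) = 0.5 * int_0^1 |u'|^2 dx."""
    return 0.5 * np.sum(np.diff(u) ** 2) / h

def mm_step(u_prev, psi, h, tau, lr=1e-3, iters=2000):
    """Approximately solve one incremental problem
       argmin_{v >= psi, v(0)=v(1)=0}  E(v) + ||v - u_prev||_{L^2}^2 / (2 tau)
       by projected gradient descent (the feasible set is a box, so the
       projection is an exact coordinatewise clamp)."""
    v = u_prev.copy()
    for _ in range(iters):
        grad = np.zeros_like(v)
        # gradient of the discrete Dirichlet energy (a discrete -u''):
        grad[1:-1] = (2.0 * v[1:-1] - v[:-2] - v[2:]) / h
        # gradient of the penalization term (L^2 inner product => weight h):
        grad += h * (v - u_prev) / tau
        v -= lr * grad
        v = np.maximum(v, psi)     # projection onto {v >= psi}
        v[0] = v[-1] = 0.0         # Dirichlet boundary conditions
    return v

def minimizing_movement(u0, psi, h, tau, n_steps):
    """De Giorgi scheme: iterate the incremental minimization."""
    us = [u0]
    for _ in range(n_steps):
        us.append(mm_step(us[-1], psi, h, tau))
    return us

N = 51
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
psi = 0.3 * np.sin(np.pi * x) - 0.1     # obstacle, negative at the boundary
u0 = np.maximum(0.0, psi)               # admissible initial datum
us = minimizing_movement(u0, psi, h, tau=0.01, n_steps=10)
energies = [energy(u, h) for u in us]
```

With the step size below 1/L for the smoothness constant L of the convex incremental functional, each inner iteration is nonincreasing, so the energy along the outer steps inherits the telescoping monotonicity E(u_i) ≤ G_i(u_i) ≤ G_i(u_{i−1}) = E(u_{i−1}).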
In the following, we define the discrete velocity V_{i,n} : Ī → R by V_{i,n} := (u_{i,n} − u_{i−1,n})/τ_n. We define the piecewise linear interpolation u_n of {u_{i,n}} in time, and we also make use of the piecewise constant interpolations ū_n and V̄_n of {u_{i,n}} and {V_{i,n}}, respectively. We derive the following uniform estimates on {u_{i,n}} and V̄_n:

Proof. It suffices to consider the case 1 ≤ i ≤ n. By the minimality of u_{i,n} we have, for each i = 1, . . . , n, G_{i,n}(u_{i,n}) ≤ G_{i,n}(u_{i−1,n}), where we used the fact that u_{i−1,n} ∈ K̄_ψ. This clearly implies a uniform energy bound for i = 1, . . . , n, which together with (2.1) yields a uniform bound in H²(I). We turn to the estimate on V̄_n, which follows from (4.7). It follows from (2.1) and Lemma 4.4 that (4.11) holds. Since ∂u_n/∂t = V̄_n, we observe from Fubini's theorem and Lemma 4.4 that (4.12) holds. Plugging (4.11) and (4.12) into (4.10), we obtain (4.13), which together with (4.1) implies the desired bound. In particular, taking t_2 = iτ_n and t_1 = 0 in (4.13), we obtain the bound for i = 1, . . . , n. Thus Lemma 4.5 follows.
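Assuming the standard minimizing-movement form G_{i,n}(v) = E_λ(v) + ‖v − u_{i−1,n}‖²_{L²(I)}/(2τ_n), the minimality comparison against u_{i−1,n} gives the telescoping estimate behind the uniform bounds:

```latex
E_\lambda(u_{i,n}) + \frac{1}{2\tau_n}\,\|u_{i,n} - u_{i-1,n}\|_{L^2(I)}^2
 \;=\; G_{i,n}(u_{i,n}) \;\le\; G_{i,n}(u_{i-1,n}) \;=\; E_\lambda(u_{i-1,n}),
```
```latex
\text{hence, summing over } i \text{ and using } V_{i,n} = (u_{i,n}-u_{i-1,n})/\tau_n,
\qquad
E_\lambda(u_{k,n}) + \frac{1}{2}\sum_{i=1}^{k} \tau_n\,\|V_{i,n}\|_{L^2(I)}^2
 \;\le\; E_\lambda(u_0)
\quad (k = 1, \dots, n).
```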

4.2. Regularity
In this subsection we discuss the regularity of the approximate solutions with respect to the space variable. Fix a nonnegative function φ ∈ C^∞_c(I) arbitrarily. Then it follows from Lemma 4.5 that u_{i,n} + εφ ∈ K̄_ψ for i = 1, . . . , n and ε > 0 small enough. Hence, by the minimality of u_{i,n} we have (4.14). The inequality reduces to a one-sided bound for φ ∈ C^∞_c(I) with φ ≥ 0. We observe from (4.14) that L_{i,n} is a nonnegative distribution on C^∞_c(I). Therefore, thanks to the Riesz representation theorem (e.g., see [18, Theorem 2.14]), we find a nonnegative Radon measure µ_{i,n} such that (4.15) holds for all φ ∈ C^∞_c(I) and i = 1, . . . , n.
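Schematically, the chain of implications is (a sketch, writing L_{i,n} for the nonnegative first-variation functional appearing in (4.14)–(4.15)):

```latex
G_{i,n}(u_{i,n}) \le G_{i,n}(u_{i,n} + \varepsilon\varphi)
\ \ (\varphi \ge 0,\ \varepsilon > 0 \text{ small})
\quad\Longrightarrow\quad
L_{i,n}(\varphi) \ge 0,
```
```latex
L_{i,n}(\varphi) = \int_I \varphi \, d\mu_{i,n}
\qquad \text{for all } \varphi \in C_c^\infty(I),
```

the second identity being the Riesz representation of the nonnegative distribution L_{i,n}.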
Definition 4.6. For each i = 1, . . . , n, we define the non-coincidence set N_{i,n} ⊂ I by N_{i,n} := {x ∈ I | u_{i,n}(x) > ψ(x)}. We note that, since u_{i,n}, ψ ∈ C(Ī), the set N_{i,n} is open. For any n ∈ N and each i = 1, . . . , n, the measure µ_{i,n} is supported on I \ N_{i,n}, that is, µ_{i,n}(N_{i,n}) = 0. Indeed, for any φ ∈ C^∞_c(I) with supp φ ⊂ N_{i,n}, we have u_{i,n} ± εφ ≥ ψ in Ī for ε > 0 small enough. Thus we have L_{i,n}(φ) = 0, and then (4.15) yields the claim.

Proof. We prove the existence of the constant a > 0. Since ψ ∈ C(Ī) and ψ(0) < 0, there exists δ_1 ∈ I such that (4.18) holds for n ∈ N and i = 1, . . . , n. Since u_{i,n}(0) = 0, we observe from (4.19) that u_{i,n} stays above ψ near x = 0. Hence, taking δ_2 > 0 small enough to satisfy δ_2 < −ψ(0)/C_L, we obtain (4.20). Thus, by (4.18) and (4.20) we see that a := min{δ_1, δ_2} satisfies (0, a) ⊂ N_{i,n}. We note that (4.19) implies that a is independent of i and n. Along the same lines, the existence of b follows. This completes the proof.

Thus it suffices to estimate µ_{i,n}([a, b]). Fix ζ ∈ C^∞_c(I) with ζ ≡ 1 on [a, b] and 0 ≤ ζ ≤ 1 in I. We deduce from (4.15) and Hölder's inequality an estimate on µ_{i,n}([a, b]). Then φ_1, φ_2 ∈ H²(I) and there exists C > 0 such that the bounds of Lemma 4.9 hold. The proof of Lemma 4.9 follows by a direct modification of the arguments in [3, 4]. For φ ∈ C^∞_c(I), combining (4.15) with (4.24), we obtain (4.25); by a density argument we see that (4.25) also holds for φ ∈ H(I). Fix η ∈ C^∞_c(I) arbitrarily and define φ_1 as in Lemma 4.9. Since φ_1 = η + α + βx, taking φ_1 as φ in (4.25), we obtain (4.26), where we used |u'_{i,n}| ≤ ⟨u'_{i,n}⟩ in I. Thus (4.26) reduces to (4.27). We turn to the next uniform estimate on u_{i,n}. Fix η ∈ C^∞_c(I) arbitrarily and define φ_2 as in Lemma 4.9. Since φ_2'(x) = η'(x) + (−1 + x) ∫_I η(y) dy, taking φ_2 as φ in (4.25), we reduce (4.25) to a sum of five terms, =: J_1 + J_2 + J_3 + J_4 + J_5. (4.30) It follows from (4.29) and Lemmas 4.4 and 4.9 that these terms can be estimated uniformly, and hence (4.30) reduces to the desired bound.

4.3. Convergence
In this subsection we prove the convergence of the approximate solutions. To begin with, along the standard argument in minimizing movements, we obtain the convergences of Lemma 4.11 as n → ∞, up to a subsequence. Thanks to (4.36), combining (4.41) with the uniqueness of weak limits, we see that ũ = u in L^p(0, T; L²(I)). Hence, by ũ ∈ L^p(0, T; W^{1, p/(p−1)}(I)) we obtain (4.38). We turn to (4.39), which follows from the interpolation inequality of Lemma 2.1.
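Under the uniform bounds of Lemmas 4.4 and 4.5, a typical set of convergences extracted in such schemes is the following (a sketch of the kind of statement proved here, not a verbatim quotation of Lemma 4.11):

```latex
u_n \rightharpoonup u \ \ \text{weakly-}*\text{ in } L^\infty(0, T; H^2(I)),
\qquad
\partial_t u_n \rightharpoonup \partial_t u \ \ \text{weakly in } L^2(0, T; L^2(I)),
```
```latex
u_n \to u, \quad \bar{u}_n \to u
\ \ \text{uniformly in space-time, e.g. in } C([0, T]; C^1(\bar{I})),
```

the strong convergence following from the weak bounds by an Aubin–Lions-type compactness argument.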

Proof of Theorem 1.1
To begin with, we prove the existence of a local-in-time solution to (P) together with its regularity properties: (i) the regularity property (5.1) holds; (ii) the distribution (5.3) defines a Radon measure on I and satisfies (5.4). Proof. We divide the proof into three steps. From now on, we denote by u the limit obtained in Lemma 4.11.
Step 3. We discuss (i) and (ii). Since Lemmas 4.12 and 4.13 clearly imply (5.1), property (i) follows. We turn to property (ii). We first prove that the distribution (5.3) defines a measure on I. Fix a nonnegative φ ∈ C^∞_c(I) arbitrarily and define ⟨µ_t, φ⟩ via (5.3) for a.e. t ∈ (0, T). Then it is easy to check that ⟨µ_t, φ⟩ ∈ L¹(0, T). Fix τ ∈ (0, T) arbitrarily. For sufficiently small ε > 0, let η ∈ C^∞_c(0, T) be a cut-off function concentrated near τ. This together with the Lebesgue differentiation theorem implies that ⟨µ_τ, φ⟩ ≥ 0 for a.e. τ ∈ (0, T).
Thus µ_t is a nonnegative distribution on C^∞_c(I) for a.e. t ∈ (0, T). Then it follows from Riesz's theorem that µ_t defines a Radon measure on I. Thanks to Lemma 4.13, we see that the non-coincidence set {x ∈ I | u(x, t) > ψ(x)} is open in I for t ∈ [0, T]. Hence, similarly to (4.16), we obtain (5.4).
In the rest of this section we derive the L²-gradient structure of E_λ(u(t)) in a weak sense, where u denotes the weak solution to (P) obtained in Theorem 5.1.

Lemma 5.2. Let u be the solution to (P) obtained in Theorem 5.1.
Proof. Since u_{0,n}(·) = u_0(·) and u_n(·, T) = u_{n,n}(·), we deduce the desired identity from (4.9). On the other hand, thanks to (4.11), we also find a subsequence {u_{n_k}} ⊂ {u_n} and v_T ∈ H²(I) such that u_{n_k}(·, T) ⇀ v_T weakly in H²(I). This together with (5.18) implies that u(T) = v_T in C^{1,α}(Ī). Then, along the same lines as in the proof of Lemma 5.2, we have u(T) = v_T in H²(I). On the other hand, by Lemma 3.1 we see that u|_{[0,s]} also satisfies (P). Thus we observe from Theorem 3.2 that ũ = u|_{[0,s]} in L^∞(0, s; H(I)) ∩ H¹(0, s; L²(I)). Since ũ(·, s) = u(·, s) in L²(I), along the same lines as in the proof of Lemma 5.2, we have ũ(s) = u(s) in H²(I), which implies that E_λ(u(s)) = E_λ(ũ(s)). Thus we obtain (5.22).
We are in a position to prove Theorem 1.1. We define ρ > 0 as a constant small enough to satisfy (5.23). Let n ∈ N and set ρ_n := ρ/n. We define a family of functions {w_{i,n}} inductively: let w_{0,n} := u(τ), and for i = 1, . . . , n, define w_{i,n} ∈ argmin_{v ∈ K̄_ψ} Ĝ_{i,n}(v), where Ĝ_{i,n} is defined analogously to G_{i,n} with time step ρ_n and with w_{i−1,n} in place of u_{i−1,n}. By the minimality of w_{i,n} we have Ĝ_{i,n}(w_{i,n}) ≤ Ĝ_{i,n}(w_{i−1,n}), and then E_λ(w_{i,n}) ≤ E_λ(w_{0,n}) = E_λ(u(τ)) for i = 1, . . . , n. This together with Lemma 5.4 implies a uniform bound on E_λ(w_{i,n}). Then, similarly to the proof of Theorem 5.1, we obtain the desired solution w_τ.
This together with (5.24) implies the desired estimate. By induction we complete the proof.

Proof of Theorem 1.2
By Theorem 1.1 we have the existence of local-in-time solutions to (P). In this section we first give a characterization of the maximal existence time of solutions to (P), which is defined as follows:

Definition 6.1. Let u be the solution to (P) with initial data u_0. We define the maximal existence time T_M(u_0) of u by

T_M = T_M(u_0) := sup{τ > 0 | u can be uniquely extended to a solution to (P) in I × [0, τ] with u(0) = u_0}.
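A characterization of T_M of the usual blow-up-alternative type is what one expects here (a hedged sketch of the dichotomy, in the spirit of Theorem 6.2 and Lemma 6.3; the precise blow-up quantity is the norm controlled by the local existence theory):

```latex
\text{either } T_M(u_0) = \infty,
\qquad \text{or} \qquad
\limsup_{t \uparrow T_M(u_0)} \|u(t)\|_{H^2(I)} = \infty.
```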
We give a characterization of the maximal existence time. In the proof we consider a sequence of times t_k along which the bounds (6.7) and (6.8) hold for k ∈ N. Therefore, by the same argument as in Lemma 4.7 we find constants 0 < a_* < b_* < 1, independent of k ∈ N, such that µ_{t_k}(I) = µ_{t_k}([a_*, b_*]). Thus, along the same lines as in the derivation of (5.14), we have

µ_{t_k}(I) ≤ C + ‖u(t_k)‖_{H(I)} + ‖∂_t u(t_k)‖_{L²(I)} for k ∈ N. (6.9)

Plugging (6.7) and (6.8) into (6.9), and extracting a subsequence, we find a constant C > 0 such that

µ_{t_k}(I) ≤ C for k ∈ N. (6.10)

Similarly to the proof of Lemma 4.10, we deduce from (6.8) and (6.10) that

‖u'''(t_k)‖_{L²(I)} ≤ C + ‖∂_t u(t_k)‖_{L²(I)} + µ_{t_k}(I) ≤ C for k ∈ N.
Therefore, extracting a subsequence, we find u_* ∈ H³(I) such that u(t_k) ⇀ u_* weakly in H³(I).
This together with the Schwarz inequality implies the desired pointwise estimate. Since G is an odd function, the same bound follows for the remaining case, and the same estimate holds for x ∈ (x_0, 1]. Then it follows from W(v) < c_* that (6.13) holds.