A note on the Kazdan-Warner equation on networks

We investigate the Kazdan-Warner equation on a network. In this setting, the differential equation is defined on each edge, while appropriate transition conditions of Kirchhoff type are prescribed at the vertices. We show that the Kazdan-Warner theory extends to the present setting, and we also study the critical case.


Introduction
The Kazdan-Warner equation

∆u = c − h e^u,    (1.1)

where c is a constant and h a given function, was introduced in [5] in connection with the problem of prescribing the Gaussian curvature of a compact manifold M. The solvability of (1.1) depends on the sign of c. Let h̄ denote the average of h on M. In [5] it is shown that, for c < 0 and h̄ < 0, there exists a constant c(h), with −∞ ≤ c(h) < 0, such that (1.1) is solvable for c(h) < c < 0 and unsolvable for c < c(h). The critical case c = c(h) is not included in the previous cases and deserves particular attention. It has been shown in [1] that, if c(h) > −∞, then (1.1) can also be solved for c = c(h).
The previous theory has been recently extended in [3,4] to the case of a connected, finite graph. Here the Laplacian is replaced by a finite difference operator, the so-called graph Laplacian, and most of the effort goes into reproducing, in a finite dimensional setting, some crucial properties such as the Maximum Principle and the Moser-Trudinger inequality.
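In the graph setting the equation becomes a finite dimensional system, which can be solved numerically. The following sketch is our own illustration (it is not taken from [3,4]; the graph, the Newton scheme and all parameter values are arbitrary choices): it applies Newton's method to the graph Kazdan-Warner equation −Lu = c − h e^u, where L is the graph Laplacian.

```python
import numpy as np

# Adjacency matrix of a small connected graph (a path on 4 vertices); illustrative only.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W        # graph Laplacian (positive semidefinite)

def kazdan_warner_graph(c, h, u0=None, tol=1e-12, maxit=50):
    """Newton iteration for the graph Kazdan-Warner equation -L u = c - h * exp(u)."""
    u = np.zeros(len(h)) if u0 is None else u0.copy()
    for _ in range(maxit):
        F = -L @ u - c + h * np.exp(u)          # residual of -L u - (c - h e^u) = 0
        J = -L + np.diag(h * np.exp(u))         # Jacobian; invertible here since h < 0
        step = np.linalg.solve(J, F)
        u -= step
        if np.max(np.abs(step)) < tol:
            break
    return u

c, h = -2.0, -np.ones(4)
u = kazdan_warner_graph(c, h)
print(u)   # converges to the constant vector log 2, since e^u = c/h = 2
```

For h ≡ −1 and c = −2 the solution is the constant vector log 2 (the zeroth-order equation e^u = c/h), which the iteration reaches from u = 0; this mirrors the fact that, for h < 0, the equation with c < 0 is uniquely and explicitly solvable by constants.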
An intermediate situation between a compact manifold and a finite graph is given by a network Γ, namely a finite collection of vertices connected by continuous non-self-intersecting edges. The differential equation (1.1) is defined on each edge, while appropriate transition conditions of Kirchhoff type are prescribed at the vertices. In this paper, we obtain the same conclusions as in the manifold and finite graph cases, showing that the Kazdan-Warner theory remains unchanged for rather different classes of underlying structures, including non-regular ones such as networks. To prove these results, we shall adapt the method of Kazdan and Warner [5, Thm 5.3] (see also [4, Thm 2]) and, for the critical case, some techniques of [1,3], together with some arguments specific to networks.
The paper is organized as follows. In Section 2, we introduce some notations and preliminary results. In Sections 3, 4 and 5, we study respectively the cases c = 0, c > 0 and c < 0. In Section 5, we also discuss the critical case c = c(h).

Notations, definitions and preliminary results
A network Γ = (V, E) is a finite collection of points V := {v_i}_{i∈I} in R^n connected by continuous non-self-intersecting edges E := {e_j}_{j∈J}, where any two edges can intersect only at a vertex. For i ∈ I we set

Inc_i := { j ∈ J : e_j is incident to v_i }.

A coordinate π_j : [0, l_j] → R^n, with l_j > 0, is chosen to parameterize e_j, i.e. e_j := π_j((0, l_j)). We assume that Γ is finite and connected, and we denote by |Γ| the sum of the lengths of the edges e_j, j ∈ J. For a function u : Γ → R we denote by u_j : [0, l_j] → R the restriction of u to e_j, i.e. u(x) = u_j(y) for x ∈ e_j, y = π_j^{-1}(x) ∈ (0, l_j). Given v_i ∈ V, we denote by ∂_j u(v_i) the oriented derivative at v_i along the edge e_j, defined by

∂_j u(v_i) := lim_{t→0+} [u_j(t) − u_j(0)]/t if π_j^{-1}(v_i) = 0,
∂_j u(v_i) := lim_{t→0+} [u_j(l_j − t) − u_j(l_j)]/t if π_j^{-1}(v_i) = l_j,

if the limit exists, where π_j is the parametrization of the edge e_j. For a function φ : Γ → R and A ⊂ Γ, we set

∫_A φ dx := Σ_{j∈J} ∫_{π_j^{-1}(A ∩ e_j)} φ_j(y) dy.

A function u is said to be continuous on Γ if it is continuous with respect to the subspace topology of Γ, i.e. u_j ∈ C([0, l_j]) for any j ∈ J and u_j(π_j^{-1}(v_i)) = u_k(π_k^{-1}(v_i)) for any i ∈ I, j, k ∈ Inc_i. We introduce some functional spaces for functions defined on the network. The space L^p(Γ), p ≥ 1, consists of the functions that are measurable and p-integrable on each edge e_j, j ∈ J. We set

‖u‖_{L^p(Γ)} := ( Σ_{j∈J} ‖u_j‖_{L^p(0,l_j)}^p )^{1/p}.

The space L^∞(Γ) consists of the functions that are measurable and bounded on each edge e_j, j ∈ J. We set

‖u‖_{L^∞(Γ)} := max_{j∈J} ‖u_j‖_{L^∞(0,l_j)}.

The Sobolev space W^{k,p}(Γ), k ∈ N and p ≥ 1, consists of all continuous functions on Γ that belong to W^{k,p}(e_j) for each j ∈ J. We set

‖u‖_{W^{k,p}(Γ)} := ( Σ_{j∈J} ‖u_j‖_{W^{k,p}(0,l_j)}^p )^{1/p};

as usual, we write H^1(Γ) := W^{1,2}(Γ). The space C^k(Γ), k ∈ N, consists of all continuous functions on Γ that belong to C^k(e_j) for each j ∈ J; it is a Banach space with the norm

‖u‖_{C^k(Γ)} := max_{j∈J} ‖u_j‖_{C^k([0,l_j])}.

The following lemma gives a Poincaré inequality for the network.

Lemma 2.1 Let f ∈ H^1(Γ) be such that ∫_Γ f dx = 0. Then
(i) ‖f‖_∞ ≤ |Γ|^{1/2} ‖∂f‖_2;
(ii) ‖f‖_2 ≤ |Γ| ‖∂f‖_2.

Proof By definition of H^1(Γ), the function f is continuous on Γ; since its integral vanishes, there exists a point x_0 ∈ Γ such that f(x_0) = 0. Since Γ is connected, for any point x ∈ Γ there exists an injective path γ : [0, r] → Γ on the network such that γ(0) = x_0, γ(r) = x, |γ′(s)| = 1 and r ≤ |Γ|. Hence, by the Cauchy-Schwarz inequality, we have

|f(x)| = |f(x) − f(x_0)| = | ∫_0^r (f ∘ γ)′(s) ds | ≤ r^{1/2} ‖∂f‖_2 ≤ |Γ|^{1/2} ‖∂f‖_2.

We deduce (i); point (ii) follows, since ‖f‖_2 ≤ |Γ|^{1/2} ‖f‖_∞.

✷
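As a sanity check of Lemma 2.1-(i), one can evaluate both sides of the inequality numerically on a concrete network. The following sketch is our own illustration (not part of the paper; the star-shaped network and the test function are arbitrary choices): it uses a three-edge star of total length |Γ| = 3, with all edges parametrized from the common vertex y = 0, and a zero-mean function that is continuous at the center.

```python
import numpy as np

def integral(f, dy):
    # composite trapezoidal rule on a uniform grid
    return dy * (f.sum() - 0.5 * (f[0] + f[-1]))

y = np.linspace(0.0, 1.0, 20001)
dy = y[1] - y[0]
omegas = [1.0, 2.0, 3.0]
# f_j(y) = cos(omega_j * y): continuous at the center since f_j(0) = 1 for every j
edges = [np.cos(w * y) for w in omegas]

total_length = 3.0                       # |Gamma| = 3 (three edges of unit length)
mean = sum(integral(f, dy) for f in edges) / total_length
edges = [f - mean for f in edges]        # now the integral of f over Gamma vanishes

sup_norm = max(np.max(np.abs(f)) for f in edges)
grad_l2 = np.sqrt(sum(integral((w * np.sin(w * y))**2, dy) for w in omegas))

print(sup_norm, np.sqrt(total_length) * grad_l2)   # left and right side of (i)
```

With these choices the left-hand side is about 1.44 and the right-hand side about 4.70, so the inequality holds with room to spare, as expected since the constant |Γ|^{1/2} is not sharp for this particular function.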
We also give an analogue of the Trudinger-Moser inequality for networks.
Lemma 2.2 For any β, δ ∈ R with δ > 0, there exists a constant C (depending only on β, δ and the network) such that, for all functions f ∈ H^1(Γ) with ∫_Γ f dx = 0 and ‖∂f‖_2 ≤ δ,

∫_Γ e^{β f^2} dx ≤ C.

Proof We adapt the arguments of [4, Lemma 7]. The case β ≤ 0 is obvious because Γ has bounded total length. Fix β > 0 and consider a function f as in the statement. By Lemma 2.1-(i) and by the assumption ‖∂f‖_2 ≤ δ, we get ‖f‖_∞ ≤ |Γ|^{1/2} δ; hence

∫_Γ e^{β f^2} dx ≤ |Γ| e^{β |Γ| δ^2}.

✷
We consider the Kazdan-Warner equation on the network Γ:

(2.1)    ∂²u = c − h e^u on e_j for every j ∈ J,    Σ_{j ∈ Inc_i} ∂_j u(v_i) = 0 for every i ∈ I,    u continuous on Γ,

where c is a given constant and h is a continuous function on Γ. Note that the Kazdan-Warner equation is defined on each edge, while at the vertices we impose the continuity of u and the Kirchhoff condition, a classical condition for differential equations defined on networks (see [6,7]).
Definition 2.1 (a) A strong solution to problem (2.1) is a function u ∈ C²(Γ) which satisfies (2.1) in a pointwise manner.
(b) A weak solution to problem (2.1) is a function u ∈ H¹(Γ) such that

(2.2)    ∫_Γ ∂u ∂φ dx + ∫_Γ (c − h e^u) φ dx = 0    for every φ ∈ H¹(Γ).
Remark 2.1 One can easily check that any strong solution of (2.1) is also a weak solution. Conversely, any weak solution of (2.1) is also a strong solution. Actually, a weak solution u fulfills ∂²u = c − h e^u in the distributional sense inside each edge e_j. The right-hand side of this equality is continuous; hence, by standard regularity theory, u ∈ C²(e_j) for every j ∈ J. Being a weak solution, u also belongs to H¹(Γ) and satisfies the continuity and the Kirchhoff conditions at the vertices; in conclusion, u ∈ C²(Γ) and it is a strong solution.
In the next three sections, we discuss the solvability of (2.1) in the cases c = 0, c > 0 and c < 0.

The case c = 0

Theorem 3.1 Assume h ≢ 0. Problem (2.1) with c = 0 admits a solution if and only if h changes sign and ∫_Γ h dx < 0.

Proof Assume that u is a solution to problem (2.1) with c = 0. We note that the hypothesis h ≢ 0 prevents u from being constant. We multiply the differential equation in (2.1) by φ ≡ 1 and integrate on Γ; taking advantage of the Kirchhoff condition, we get

∫_Γ h e^u dx = 0,

which implies that h must change sign. Multiplying e^{−u} ∂²u = −h by φ ≡ 1 and integrating on Γ, we get

∫_Γ e^{−u} ∂²u dx = −∫_Γ h dx.

Taking advantage of the Kirchhoff condition and of the continuity of u at each vertex, we obtain

∫_Γ h dx = −∫_Γ e^{−u} |∂u|² dx ≤ 0.

Since u cannot be constant, we deduce ∫_Γ h dx < 0.
Conversely, we prove that, for any h which changes sign and satisfies ∫_Γ h dx < 0, there exists a solution to (2.1). We define the set

B := { u ∈ H¹(Γ) : ∫_Γ h e^u dx = 0, ∫_Γ u dx = 0 }.

We claim that B is not empty. Since h changes sign, there exists a point x₀ ∈ Γ such that h(x₀) > 0. By the continuity of h, without any loss of generality, we can assume x₀ ∈ e_j̄ for some j̄ ∈ J; namely, there exist j̄ ∈ J and y₀ ∈ (0, l_j̄) such that h_j̄(y₀) > 0. Moreover, still by the continuity of h, there exists ε > 0 such that (y₀ − ε, y₀ + ε) ⊂ (0, l_j̄) and h_j̄(y) > h_j̄(y₀)/2 for all y ∈ (y₀ − ε, y₀ + ε). Consider a function w ∈ C²(Γ) such that: 0 ≤ w ≤ 1 on Γ, w ≡ 0 outside π_j̄((y₀ − ε, y₀ + ε)) and w ≡ 1 on π_j̄((y₀ − ε/2, y₀ + ε/2)). For ℓ ≥ 0, set w_ℓ := ℓw; then

∫_Γ h e^{w_ℓ} dx ≥ ∫_Γ h dx + (h_j̄(y₀)/2) ε (e^ℓ − 1) > 0,

provided that ℓ is sufficiently large. On the other hand, for ℓ = 0 we have w₀ ≡ 0 and, by assumption, ∫_Γ h e^{w₀} dx = ∫_Γ h dx < 0. Therefore, by continuity, there exists ℓ₀ > 0 such that ∫_Γ h e^{w_{ℓ₀}} dx = 0. Hence the function ŵ(·) := w_{ℓ₀}(·) − ∫_Γ w_{ℓ₀} dx/|Γ| belongs to B and the claim is proved. Consider the functional

J(u) := ∫_Γ |∂u|² dx,    u ∈ B.

Let {v_n}_{n∈N} be a minimizing sequence for J, i.e. lim_{n→+∞} J(v_n) = inf_B J. By Lemma 2.1-(ii), the functions v_n are uniformly bounded in H¹(Γ). We deduce that there exists ū ∈ H¹(Γ) such that, possibly passing to a subsequence, as n → +∞, v_n ⇀ ū weakly in H¹(Γ) and v_n → ū uniformly on Γ. In particular, we get that ū belongs to B and it is a minimizer of J on B; by the Lagrange multiplier rule, arguing as in the next section, ū is a solution of (2.1) with c = 0. ✷
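The intermediate-value argument showing that B is not empty can be reproduced numerically. The following sketch is our own illustration (the choices of h, of the bump w and of the bisection bounds are arbitrary and not taken from the paper): on a single edge of unit length, with h changing sign and of negative mean, it finds ℓ₀ such that ∫_Γ h e^{w_{ℓ₀}} dx = 0 by bisection.

```python
import numpy as np

def integral(f, dy):
    # composite trapezoidal rule on a uniform grid
    return dy * (f.sum() - 0.5 * (f[0] + f[-1]))

y = np.linspace(0.0, 1.0, 8001)
dy = y[1] - y[0]
h = 0.4 - y                                       # changes sign at y = 0.4, mean -0.1 < 0
w = np.clip(1.0 - (y / 0.3)**2, 0.0, None)**3     # C^2 bump supported where h > 0.1

def g(l):
    # the function l -> integral of h * exp(l * w), increasing since h > 0 on supp w
    return integral(h * np.exp(l * w), dy)

lo, hi = 0.0, 50.0                                # g(0) = -0.1 < 0 < g(50)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
l0 = 0.5 * (lo + hi)
print(l0, g(l0))                                  # g(l0) is numerically zero
```

Since the bump is supported inside {h > 0}, the map ℓ ↦ ∫ h e^{ℓw} is strictly increasing, so the bisection converges to the unique zero; shifting ℓ₀w by its average then produces an element of B, exactly as in the proof.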
The case c > 0

In this section we show that, for c > 0, problem (2.1) admits a solution whenever the set

B := { u ∈ H¹(Γ) : ∫_Γ h e^u dx = c|Γ| }

is not empty. Note that B ≠ ∅ requires h to be positive somewhere, and this condition is also necessary for solvability, since any solution u satisfies ∫_Γ h e^u dx = c|Γ| > 0. We consider the functional

J(u) := (1/2) ∫_Γ |∂u|² dx + c ∫_Γ u dx,    u ∈ B.

As a first step, let us prove that J is bounded from below on B. To this end, for any u ∈ B, we set ū := ∫_Γ u dx/|Γ| and v := u − ū. Note that ∫_Γ v dx = 0 and ∂v ≡ ∂u. Since u ∈ B, it holds ∫_Γ h e^v dx = c|Γ| e^{−ū}, which implies

ū = log(c|Γ|) − log( ∫_Γ h e^v dx );

replacing this equality in the definition of J, we get

(4.1)    J(u) = (1/2) ‖∂v‖₂² − c|Γ| log( ∫_Γ h e^v dx ) + c|Γ| log(c|Γ|).

Let us now estimate ∫_Γ h e^v dx. If v is constant then, by ∫_Γ v dx = 0, it must be v ≡ 0 and, in particular, ∫_Γ h e^v dx = ∫_Γ h dx, so that J(u) is bounded from below by a constant. For v nonconstant, it is expedient to introduce the function ṽ := v/‖∂v‖₂, which verifies: ṽ ∈ H¹(Γ), ∫_Γ ṽ dx = 0 and ‖∂ṽ‖₂ = 1. Lemma 2.1-(ii) and Lemma 2.2 guarantee that, for any β ∈ R, there exists a constant K_β (depending only on β and the network) such that

∫_Γ e^{β ṽ²} dx ≤ K_β.

For every ε positive, setting β_ε := 1/(4ε) and using the pointwise inequality v = ṽ ‖∂v‖₂ ≤ β_ε ṽ² + ε ‖∂v‖₂², there holds

∫_Γ h e^v dx ≤ ‖h‖_∞ e^{ε ‖∂v‖₂²} ∫_Γ e^{β_ε ṽ²} dx ≤ ‖h‖_∞ K_{β_ε} e^{ε ‖∂v‖₂²}.

Replacing this estimate in (4.1), we obtain

(4.2)    J(u) ≥ (1/2 − c|Γ|ε) ‖∂v‖₂² − c|Γ| log( ‖h‖_∞ K_{β_ε} ) + c|Γ| log(c|Γ|)

and, in particular, for ε₀ := 1/(4c|Γ|),

J(u) ≥ (1/4) ‖∂v‖₂² − C

for some constant C independent of u. Hence, the proof that J is bounded from below is accomplished. Let {u_n}_{n∈N} be a minimizing sequence for J; set ū_n := ∫_Γ u_n dx/|Γ| and v_n := u_n − ū_n; hence ∂u_n ≡ ∂v_n and, by estimate (4.2), ‖∂v_n‖₂ is bounded uniformly in n. By Lemma 2.1-(ii), v_n is also uniformly bounded in L²(Γ) and, therefore, the functions v_n are uniformly bounded in H¹(Γ). Moreover, by the definition of J, the integrals ∫_Γ u_n dx are uniformly bounded and consequently so are the averages ū_n. Being u_n = v_n + ū_n, the functions u_n are uniformly bounded in H¹(Γ). Possibly passing to a subsequence, there exists u ∈ H¹(Γ) such that, as n → +∞, u_n ⇀ u in the weak topology of H¹(Γ) and u_n → u uniformly; moreover u ∈ B and J(u) = min_B J.
We claim that u is a solution to (2.1). By standard Lagrange multiplier theory, there exists λ ∈ R such that, for every φ ∈ H¹(Γ),

∫_Γ ∂u ∂φ dx + c ∫_Γ φ dx = λ ∫_Γ h e^u φ dx.

Choosing φ ≡ 1, we get c|Γ| = λ ∫_Γ h e^u dx; since u ∈ B, we get λ = 1. In conclusion, the above relation with λ = 1 amounts to the weak formulation (2.2); hence u is a weak and, by Remark 2.1, also a strong solution to (2.1). ✷

The case c < 0

Theorem 5.1 Let c < 0. Then:
(i) if problem (2.1) admits a solution, then h must be negative somewhere in Γ;
(ii) if ∫_Γ h dx < 0, then there exists c(h), with −∞ ≤ c(h) < 0, such that (2.1) admits a solution for every c ∈ (c(h), 0) and no solution for c < c(h);
(iii) c(h) = −∞ if and only if h ≤ 0 in Γ.

We introduce the definition of upper and lower solution to (2.1).
Definition 5.1 A function u ∈ C²(Γ) is said to be a lower (respectively, an upper) solution of (2.1) if

∂²u ≥ c − h e^u on e_j for every j ∈ J,    Σ_{j ∈ Inc_i} ∂_j u(v_i) ≥ 0 for every i ∈ I

(resp., with both inequalities reversed). In order to prove Theorem 5.1, we need some preliminary results.
Lemma 5.1 If there exist a lower solution u⁻ and an upper solution u⁺ of (2.1) such that u⁻ ≤ u⁺, then there exists a solution u of (2.1) such that u⁻ ≤ u ≤ u⁺.
Proof Set k₁(x) := max{1, −h(x)} and k(x) := k₁(x) e^{u⁺(x)}, and consider the sequence of functions {u_n}_{n∈N} defined inductively by u₀ = u⁺ and, for n ≥ 1, u_n the solution of

(5.1)    ∂²u_n − k u_n = c − h e^{u_{n−1}} − k u_{n−1} on each edge,    Σ_{j ∈ Inc_i} ∂_j u_n(v_i) = 0 for every i ∈ I.

We first observe that the sequence {u_n}_{n∈N} is well defined: indeed, since k(x) ≥ e^{−‖u⁺‖_∞} > 0, (5.1) admits a unique strong solution u_n for any n ∈ N (see [2, Prop.10]). Moreover, we claim that

(5.2)    u⁻ ≤ … ≤ u_{n+1} ≤ u_n ≤ … ≤ u₀ = u⁺ on Γ.

The inequality u₁ ≤ u₀ = u⁺ on Γ follows immediately by the Maximum Principle (see [2, Prop.12]). Assuming inductively that u_n ≤ u_{n−1} ≤ u⁺, we have, for x ∈ e_j, j ∈ J,

∂²(u_{n+1} − u_n) − k(u_{n+1} − u_n) = (h e^{ξ} + k)(u_{n−1} − u_n) ≥ 0,

where ξ(x) ∈ [u_n(x), u_{n−1}(x)]; indeed, h e^{ξ} + k ≥ 0 because, by induction, u_{n−1} ≤ u⁺ and k = max{1, −h} e^{u⁺}. Recalling the condition at the vertices, we get

Σ_{j ∈ Inc_i} ∂_j (u_{n+1} − u_n)(v_i) = 0 for every i ∈ I,

and we conclude again by the Maximum Principle that u_{n+1} ≤ u_n in Γ. We finally observe that, arguing as before, we have

∂²(u⁻ − u_{n+1}) − k(u⁻ − u_{n+1}) ≥ 0 on each edge,    Σ_{j ∈ Inc_i} ∂_j (u⁻ − u_{n+1})(v_i) ≥ 0 for every i ∈ I,

and therefore u⁻ ≤ u_{n+1} on Γ for all n. Hence the claim (5.2) is proved. By [2, Prop.10] there exists a positive constant C (independent of n) such that ‖u_n‖_{H¹} ≤ C and, in particular, ‖u_n‖_∞ ≤ C for every n ∈ N. By the equation (5.1) and the bounds (5.2), we deduce ‖u_n‖_{H²} ≤ C. The Ascoli-Arzelà Theorem yields that, up to passing to a subsequence, {u_n} converges uniformly to a function u ∈ H¹(Γ) which is a weak solution to (2.1) with u⁻ ≤ u ≤ u⁺. Finally, by Remark 2.1, u is a strong solution to (2.1). ✷ In the next lemma, we show that (2.1) admits a lower solution u⁻ for any c < 0.
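The monotone scheme (5.1) used in the proof of Lemma 5.1 is easy to visualize numerically. The following sketch is our own illustration (the single-interval geometry, the finite-difference discretization and the data c, h, u⁺ are arbitrary choices, not taken from the paper): it runs the iteration on one edge with homogeneous Neumann conditions, which is what the Kirchhoff condition reduces to at vertices of degree one.

```python
import numpy as np

# Discretize a single edge [0,1]; Neumann rows encode the Kirchhoff (zero-flux) condition.
N = 51
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]

D2 = np.zeros((N, N))
for i in range(1, N - 1):
    D2[i, i - 1], D2[i, i], D2[i, i + 1] = 1.0, -2.0, 1.0
D2[0, 0], D2[0, 1] = -2.0, 2.0          # second-order Neumann closure at y = 0
D2[-1, -1], D2[-1, -2] = -2.0, 2.0      # and at y = 1
D2 /= dx**2

c = -2.0
h = -np.ones(N)                          # h < 0, so every c < 0 is admissible
u_plus = np.ones(N)                      # upper solution: 0 = (u+)'' <= c - h e^{u+} = -2 + e
k = np.maximum(1.0, -h) * np.exp(u_plus)     # k(x) = max{1, -h(x)} e^{u+(x)} as in the proof

u = u_plus.copy()
history = [u.copy()]
for _ in range(60):
    rhs = c - h * np.exp(u) - k * u          # right-hand side of the linear problem (5.1)
    u = np.linalg.solve(D2 - np.diag(k), rhs)
    history.append(u.copy())

print(u.max())   # the iterates decrease monotonically to the constant solution log 2
```

Because h and u⁺ are constant here, each iterate is a constant function and the scheme reduces to a scalar fixed-point iteration whose limit solves e^u = c/h, i.e. u ≡ log 2; the successive iterates decrease, exactly as claimed in (5.2).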
Lemma 5.2 If c < 0, there exists a lower solution u − of (2.1).
Proof Set u⁻ ≡ −A for some constant A > 0. Then the function u⁻ fulfills the Kirchhoff condition in (2.1) and also

∂²u⁻ = 0 ≥ c − h(x) e^{−A} on each edge

for A sufficiently large, because c < 0 and h e^{−A} → 0 uniformly as A → +∞. Hence u⁻ is a lower solution to (2.1). ✷ Proof of Theorem 5.1 Assume that there exists a solution u of (2.1). Then, multiplying (2.1) by the test function φ ≡ 1, integrating on Γ and taking advantage of the Kirchhoff condition and the continuity of u at the vertices, we get

∫_Γ h e^u dx = c|Γ| < 0,

and therefore (i).
We now prove (ii); assume ∫_Γ h dx < 0. Recall that, by Lemmas 5.1 and 5.2, (2.1) has a solution if and only if there exists an upper solution u⁺ to the problem. Moreover, it is easy to see that, if u⁺ is an upper solution for a given c̄ < 0, then it is also an upper solution for any c such that c̄ ≤ c < 0. Hence it follows that there exists a constant c(h) with −∞ ≤ c(h) ≤ 0 such that (2.1) admits a solution for c(h) < c < 0 and no solution for c < c(h). We show that c(h) < 0. Let h̄ := ∫_Γ h dx/|Γ| and let m ∈ C²(Γ) be a solution of

(5.3)    ∂²m = h̄ − h on each edge,    Σ_{j ∈ Inc_i} ∂_j m(v_i) = 0 for every i ∈ I

(such an m exists because the right-hand side has zero average). For a > 0 sufficiently small, we define b := ln(a), c̄ := (1/2) a h̄ and u⁺(x) := a m(x) + b. Then c̄ < 0 and

∂²u⁺ = a(h̄ − h) ≤ c̄ − h e^{u⁺} on each edge;

indeed, dividing by a and using e^{u⁺} = a e^{am}, the inequality amounts to (1/2) h̄ ≤ −h(e^{am} − 1), whose right-hand side tends to 0 uniformly on Γ as a → 0⁺, while the left-hand side is a negative constant. Moreover, by (5.3), u⁺ is continuous and verifies the Kirchhoff condition because m enjoys the same properties. Hence u⁺ is an upper solution and therefore we conclude that c(h) ≤ c̄ < 0. We finally prove (iii). Note that ∫_Γ h dx < 0 ensures h ≢ 0. We first show that, if h ≤ 0 in Γ, then (2.1) is solvable for any c < 0 and therefore c(h) = −∞. Fixed c < 0, let m be a solution of (5.3) and choose two constants a, b > 0 such that a h̄ < c and e^{am(x)+b} − a > 0 for x ∈ Γ. We show that the function u⁺(x) = a m(x) + b is an upper solution of (2.1). Indeed, there holds

∂²u⁺ − (c − h e^{u⁺}) = (a h̄ − c) + h (e^{u⁺} − a) ≤ 0,

since a h̄ < c, h ≤ 0 and e^{u⁺} − a > 0, while the continuity and the Kirchhoff conditions for u⁺ come again from those of m. Hence u⁺ is an upper solution to (2.1) and therefore, for any c < 0, there exists a solution to (2.1).
Conversely, let us prove that c(h) = −∞ implies h ≤ 0 in Γ. To this end, as in [3, Thm 2.3], we argue by contradiction, assuming that {h > 0} is not empty. For any c < 0, let u be a solution to (2.1) (whose existence is ensured by c(h) = −∞) and let φ_c ∈ C²(Γ) be a solution to the problem

(5.4)    ∂²φ + c φ = h on each edge,    Σ_{j ∈ Inc_i} ∂_j φ(v_i) = 0 for every i ∈ I

(whose existence is ensured by [2, Prop.10]). We claim

φ_c ≥ e^{−u} > 0 on Γ.

In order to prove this relation, by the Maximum Principle ([2, Prop.12]), it suffices to prove that e^{−u} is a lower solution to (5.4). Actually, there holds

∂²(e^{−u}) = e^{−u}(|∂u|² − ∂²u) = e^{−u}|∂u|² − c e^{−u} + h ≥ h − c e^{−u};

moreover, e^{−u} is continuous and satisfies the Kirchhoff condition because u does. Hence, our claim is proved. Letting c → −∞ and arguing as in [3, Thm 2.3], one checks that φ_c must become negative somewhere in the open set {h > 0}, which contradicts the claim; hence h ≤ 0 in Γ. ✷ We conclude by studying the critical case.

Proposition 5.1 Assume ∫_Γ h dx < 0 and c(h) > −∞. Then problem (2.1) with c = c(h) admits a solution.

Proof Note that Theorem 5.1-(iii) ensures that h changes sign (and, obviously, h ≢ 0). Given a decreasing sequence {c_k}_{k∈N} with c(h) < c_k < 0 converging to c(h) as k → +∞, we consider

(5.5)    ∂²u = c_k − h e^u on each edge,    Σ_{j ∈ Inc_i} ∂_j u(v_i) = 0 for every i ∈ I.

The idea is to show that a sequence of solutions u_k of (5.5), appropriately chosen, converges as k → +∞ to a solution of (2.1) with c = c(h).

Lemma 5.3
For each k ∈ N, there exist a constant lower solution φ_k ≡ −A (with A > 0 independent of k) and an upper solution ψ_k to (5.5) such that ψ_k > φ_k.
Proof To show the existence of a lower solution, it suffices to argue as in Lemma 5.2, choosing A sufficiently large so that

(5.6)    ‖h‖_∞ e^{−A} ≤ −c₁

(recall that c_k ≤ c₁ < 0 for every k, so that h e^{−A} ≥ c_k on Γ). For the upper solution, we choose ψ_k as a solution to (2.1) with c replaced by any c̄_k ∈ (c(h), c_k) (whose existence is established in Theorem 5.1). Finally, it remains to prove the inequality ψ_k > −A. Denote by x̄ a minimum point of ψ_k on Γ; we claim that ψ_k(x̄) > −A. Assume first that x̄ ∈ e_j for some j ∈ J. The first equation in (2.1) yields

0 ≤ ∂²ψ_k(x̄) = c̄_k − h(x̄) e^{ψ_k(x̄)},

hence h(x̄) e^{ψ_k(x̄)} ≤ c̄_k < 0 and, in particular, h(x̄) < 0.
On the other hand, by (5.6), the function φ_k ≡ −A satisfies

h(x̄) e^{−A} ≥ c_k > c̄_k.

The last three relations give h(x̄)( e^{ψ_k(x̄)} − e^{−A} ) < 0 and, since h(x̄) < 0, e^{ψ_k(x̄)} − e^{−A} > 0, which is equivalent to ψ_k(x̄) > −A. Assume now x̄ = v_i for some i ∈ I and, for later contradiction, ψ_k(v_i) ≤ −A. We observe that, for any j ∈ Inc_i, the restriction of ψ_k to e_j attains its minimum at v_i and, consequently, ∂_j ψ_k(v_i) ≥ 0. Taking into account the Kirchhoff condition in (2.1), we deduce

∂_j ψ_k(v_i) = 0 for every j ∈ Inc_i.

On the other hand, by (5.6) and since c̄_k < c₁, there exists η > 0 such that

(5.7)    c̄_k − h(x) e^{s} < 0 for every x ∈ Γ and every s ≤ −A + η.

Moreover, by the continuity of ψ_k and ψ_k(v_i) ≤ −A, (5.7) ensures

∂²ψ_k(x) = c̄_k − h(x) e^{ψ_k(x)} < 0

for any x ∈ e_j sufficiently near v_i. In conclusion, near v_i, the function ∂_j ψ_k is strictly decreasing with ∂_j ψ_k(v_i) = 0; therefore ψ_k is strictly decreasing moving away from v_i. This fact contradicts the assumption that ψ_k attains its minimum at v_i. ✷

Lemma 5.4 Fix k ∈ N. The minimum of the problem

(5.8)    min { I_k(v) : v ∈ H¹(Γ), −A ≤ v ≤ ψ_k },    where I_k(v) := (1/2) ∫_Γ |∂v|² dx + ∫_Γ ( c_k v − h e^v ) dx,

is attained by some function ū with

(5.9)    −A < ū < ψ_k,

and ū is a strong solution of (5.5).

Proof
Let {v_n}_n be a minimizing sequence for I_k. Then there holds

I_k(v_n) ≤ C

for some constant C (independent of n). Moreover, we have

(5.10)    (1/2) ‖∂v_n‖₂² = I_k(v_n) − ∫_Γ ( c_k v_n − h e^{v_n} ) dx ≤ I_k(v_n) + C′,

where the last inequality is due to the constraint −A ≤ v_n ≤ ψ_k. We deduce that the norms ‖∂v_n‖₂ are uniformly bounded; on the other hand, also the norms ‖v_n‖_∞ are uniformly bounded. Therefore, the sequence {v_n}_n is uniformly bounded in H¹(Γ). We infer that, possibly passing to a subsequence, there exists ū ∈ H¹(Γ) with −A ≤ ū ≤ ψ_k such that v_n → ū uniformly and v_n ⇀ ū weakly in H¹(Γ). By the lower semicontinuity of I_k, we get I_k(ū) ≤ lim inf_n I_k(v_n); hence ū is a minimizer for (5.8).
The inequality (5.9) is a consequence of the Maximum Principle. Finally, since by (5.9) the constraints in (5.8) are not active, for every φ ∈ H¹(Γ) we have d/dt I_k(ū + tφ)|_{t=0} = 0, from which we get the weak formulation (2.2) with c = c_k. Arguing as in Remark 2.1, we conclude that ū is a strong solution to (5.5). ✷ We can now conclude the proof of Proposition 5.1. Denote by u_k, k ∈ N, a solution of (5.5) given by Lemma 5.4. Assume for the moment that the sequence {u_k}_k is bounded in H¹(Γ). Then there exists u ∈ H¹(Γ) such that, as k → +∞, up to a subsequence, u_k ⇀ u in the weak topology of H¹(Γ) and u_k → u uniformly. Passing to the limit in the weak formulation of (5.5), we get that u is a weak, and therefore also a strong, solution to (2.1) with c = c(h).
It remains to prove that {u_k}_k is bounded in H¹(Γ). To this end, fix 0 < δ < max_Γ h, a closed interval D inside some edge e_j such that D ⊂ {h ≥ δ}, and a point x̄ ∈ D; by the same arguments as in [1, pag.743] (note that we can use [1, Lemma 2.1] because any solution of the equation in D is also a solution in a 2-dimensional domain), we get that the u_k's are uniformly bounded in D. Therefore, the functions w_k(x) := u_k(x) − u_k(x̄) satisfy w_k(x̄) = 0, and there exists C₁ > 0 such that |u_k(x̄)| ≤ C₁ for any k. Arguing as in Lemma 2.1-(i), we get ‖w_k‖_∞ ≤ |Γ|^{1/2} ‖∂w_k‖₂ = |Γ|^{1/2} ‖∂u_k‖₂, and we deduce

(5.11)    ‖u_k‖_∞ ≤ |u_k(x̄)| + ‖w_k‖_∞ ≤ C₁ + |Γ|^{1/2} ‖∂u_k‖₂.

On the other hand, choosing φ ≡ 1 as test function in the weak formulation of (5.5), we get

(5.12)    ∫_Γ h e^{u_k} dx = c_k |Γ|.

Since the c_k are negative and bounded (recall that c(h) > −∞), and since I_k(u_k) ≤ I_k(−A) ≤ C uniformly in k (the constant −A being admissible in (5.8)), relations (5.10) with v_n = u_k and (5.12) entail

(1/2) ‖∂u_k‖₂² = I_k(u_k) − c_k ∫_Γ u_k dx + ∫_Γ h e^{u_k} dx ≤ C + |c(h)| |Γ| ‖u_k‖_∞ + c_k |Γ| ≤ C′ + |c(h)| |Γ|^{3/2} ‖∂u_k‖₂,

where the last inequality is due to (5.11). Hence, the norms ‖∂u_k‖₂ are uniformly bounded; by (5.11), the u_k's are uniformly bounded in L^∞(Γ) and consequently also in H¹(Γ). ✷
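As a final illustration, the construction of the upper solution u⁺ = am + b used in the proof of Theorem 5.1-(ii) can be checked numerically. The sketch below is ours (not part of the paper; the single-edge geometry and the data h, a are arbitrary choices): on one edge of unit length, m solves (5.3) explicitly, and we verify the pointwise differential inequality defining an upper solution for c̄ = (1/2) a h̄ < 0, which yields the bound c(h) ≤ c̄ < 0.

```python
import numpy as np

# One edge [0,1], so |Gamma| = 1; h changes sign and has negative mean hbar = -0.3.
y = np.linspace(0.0, 1.0, 2001)
h = np.cos(2 * np.pi * y) - 0.3
hbar = -0.3

# m solves m'' = hbar - h = -cos(2 pi y) with m'(0) = m'(1) = 0 (Kirchhoff/Neumann):
m = np.cos(2 * np.pi * y) / (4 * np.pi**2)

a = 0.1                       # a > 0 small, as required by the proof
b = np.log(a)
cbar = 0.5 * a * hbar         # cbar = (a/2) * hbar < 0
u_plus = a * m + b

# Upper-solution inequality: (u+)'' = a(hbar - h) <= cbar - h e^{u+} pointwise.
residual = a * (hbar - h) - (cbar - h * np.exp(u_plus))
print(cbar, residual.max())   # residual.max() is negative: the inequality holds strictly
```

The residual stays below zero by a margin of order a·h̄/2, confirming that, for small a, the perturbation term h(e^{am} − 1) cannot overcome the fixed negative constant (1/2) h̄, exactly as in the proof.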