A singularity as a break point for the multiplicity of solutions to quasilinear elliptic problems

We study a boundary value elliptic problem having a lower order nonlinear term with subquadratic growth in the gradient of the solution, possibly singular where the solution vanishes. If the singularity is mild enough (and even in the absence of a singularity), we prove an existence and multiplicity result. By contrast, we prove an existence and uniqueness result for strong singularities.


Introduction
In this paper we deal with the following boundary value problem:

(P_λ)   −∆u = λu + µ(x) |∇u|^q / u^α + f(x) in Ω,   u > 0 in Ω,   u = 0 on ∂Ω.

Here, Ω is a bounded domain of R^N (N ≥ 3) with boundary ∂Ω smooth enough, 0 ≤ µ ∈ L^∞(Ω) with µ ≢ 0, 0 ≤ f ∈ L^{p_0}(Ω) with f ≢ 0 for some p_0 > N/2, 1 < q < 2, 0 ≤ α ≤ 1 and λ ∈ R. A solution to (P_λ) is a function 0 < u ∈ H^1_0(Ω) ∩ L^∞(Ω) which satisfies the equation in (P_λ) in the usual weak sense (we will be more precise about the concept of solution in Definition 3.1 below). Observe that, if α > 0, then the lower order term presents a singularity as u approaches zero, i.e., as x approaches ∂Ω. Our goal is to study the existence, nonexistence, uniqueness and multiplicity of solutions to (P_λ), especially for λ > 0.
The first motivation for dealing with this problem comes from the non-singular case α = 0, i.e., the problem

(R_λ)   −∆u = λu + µ(x)|∇u|^q + f(x) in Ω,   u > 0 in Ω,   u = 0 on ∂Ω.
It is well-known from classical results (see [10,12]) that problem (R_λ) admits at least one solution for all λ < 0. Concerning the uniqueness of solution, it was first dealt with in [7], and the results there have been improved in several directions since then (see [3] and references therein). In particular, it has been recently proved in [3] that uniqueness holds for all λ ≤ 0. However, the existence of solution for λ = 0 is not always guaranteed. Roughly speaking, if ‖f‖_{L^{p_0}(Ω)} is small enough, then there exists a unique solution to (R_0), as is shown for instance in [20] (see also [24] and references therein). Conversely, it is proved in [1] (see also [25]) that, if f is large in some sense, there exists no solution to (R_0); in consequence, λ = 0 is a bifurcation point from infinity. Concerning this last case, a very precise description of the blow-up of the solutions at λ = 0, and also a necessary and sufficient condition for the existence of solution to (R_0) in terms of the corresponding ergodic problem, are given in [29] under slightly stronger hypotheses on f and µ.
The scenario in which (R_0) has a solution is not so well understood, and has raised interest in recent years. In this case one expects to find solutions to (R_λ) for small λ > 0 by a continuation argument. However, the uniqueness and multiplicity problems are harder to deal with for λ > 0, and very few results are known in this direction. In fact, to our knowledge, the literature contains results concerning only the quadratic case q = 2. In this regard, the first advances can be found in [27] for µ > 0 constant. Shortly after that, some improvements appeared in [26], where λ = λ(x) is allowed to change sign but µ is still constant. These two works employ variational techniques. Going further, topological degree and bifurcation arguments are used in [4] to handle problem (R_λ) with λ > 0 and µ ∈ L^∞(Ω) such that µ_1 ≤ µ ≤ µ_2 for some constants µ_2 > µ_1 > 0. We also quote [31], where functions 0 ≤ µ ∈ L^∞(Ω) vanishing on ∂Ω, and even with compact support, are permitted at the expense of imposing N ≤ 3 (the cases N = 4, 5 are also handled provided λ = λ(x) satisfies extra hypotheses). Very recently, a problem similar to (R_λ) with the p-Laplacian as principal operator has been considered in [18], while sign-changing coefficients (including µ) are allowed in [19].
In all these works, the authors prove that, if there is a solution to (R_0), then problem (R_λ) admits at least two different solutions for all λ > 0 small enough, and it was first shown in [4] that the branch of positive solutions bifurcates from infinity to the right of the axis λ = 0 (see [17] for a more complete picture when different sign conditions on f are imposed). We stress again that all the mentioned papers have in common the assumption q = 2. Indeed, the techniques employed for q = 2 usually involve exponential test functions which somehow remove the dependence on the gradient in the equation. For instance, this idea allows the authors of [27] to study the problem variationally, while in [4] it is essential in order to find a priori estimates for λ > 0. However, this idea fails for 1 < q < 2, as the gradient term cannot be removed when one looks for a priori estimates satisfied by supersolutions to (R_λ). To our knowledge, the multiplicity or uniqueness of solutions for λ > 0 is an open problem if 1 < q < 2.
Turning back to (P_λ), another motivation for studying this problem comes from the very recent paper [16]. In Remark 6.1 of that paper the authors observe that, if q = 2 and 0 < α < q − 1, the techniques in [4] can be adapted to derive again a multiplicity result for λ > 0. Hence, roughly speaking, mild singularities at zero do not alter the behavior of the solutions, as far as the multiplicity for λ > 0 is concerned. Nonetheless, the main result in that paper shows that multiplicity fails for α = q − 1 (see [5] for q = 2 and µ constant). To be precise, the authors prove under natural hypotheses on µ and f that, if α = q − 1, there exists λ* ∈ (0, λ_1] (where λ_1 = inf_{v ∈ H^1_0(Ω)\{0}} ∫_Ω |∇v|² / ∫_Ω v²) such that problem (P_λ) has a solution if and only if λ < λ*, and in this case, the solution is unique (see also [15] for a similar existence result when f and u may change sign). In particular, one has existence and uniqueness for λ > 0 small. Since this result is true for 1 < q ≤ 2, it is natural to wonder whether α = q − 1 is a break point for the multiplicity of solutions not only in the case q = 2, but also for 1 < q < 2.
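The variational characterization of λ_1 recalled above can be illustrated numerically. The following finite-difference sketch evaluates the discrete Rayleigh quotient ∫|v′|²/∫v² on the unit interval; it is purely illustrative (the paper assumes N ≥ 3, while here N = 1, where λ_1 = π²).

```python
import math

# Finite-difference illustration of lambda_1 = inf ∫|∇v|^2 / ∫ v^2 over
# H^1_0 \ {0}, on the interval (0, 1) with zero Dirichlet boundary values.
n = 200                       # number of interior grid points
h = 1.0 / (n + 1)

def rayleigh(v):
    """Discrete Rayleigh quotient of an interior-node vector v."""
    w = [0.0] + v + [0.0]     # enforce zero boundary values
    num = sum((w[i + 1] - w[i]) ** 2 for i in range(n + 1)) / h
    den = sum(x * x for x in v) * h
    return num / den

# The infimum is attained (up to discretization error) at the first
# eigenfunction sin(pi x); any other admissible v gives a larger quotient.
v1 = [math.sin(math.pi * (i + 1) * h) for i in range(n)]
v2 = [min((i + 1) * h, 1 - (i + 1) * h) for i in range(n)]  # tent function
lam1 = rayleigh(v1)
print(lam1, rayleigh(v2))     # lam1 close to pi**2 ≈ 9.87; the tent gives more
```

The tent function is admissible but not an eigenfunction, so its quotient (≈ 12) lies strictly above the infimum.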
In the present work we contribute to these topics by proving that, if there is a solution to (P_0), then there are at least two different solutions to (P_λ) for all λ > 0 small enough, provided q and α satisfy certain relations involving also the dimension N. We also prove that the branch of positive solutions bifurcates from infinity to the right of the axis λ = 0.
Observe that µ is bounded away from zero but not necessarily constant. We introduce here the main result of this paper:

Theorem 1.1. Assume that (H1) holds and that (P_0) admits a solution u_0. If q > N/(N−1), suppose also that (1.1) holds. Furthermore, if q ≥ 1 + 2/N, assume additionally that (1.2) holds. Then, there exists λ̄ ∈ (0, λ_1) such that problem (P_λ) admits at least two different solutions for all λ ∈ (0, λ̄]. Moreover, zero is the unique bifurcation point from infinity to problem (P_λ).
Even though this result deals only with the range λ > 0, in order to make a more complete picture we will gather and prove in Section 3 some existence, nonexistence and uniqueness results about problem (P λ ) for λ ≤ 0. We stress that the uniqueness result for λ ≤ 0, apart from being new in the literature, shows that λ = 0 is a critical point beyond which the nature of the problem changes drastically, as in the well-known case q = 2 and α = 0.
Concerning the proof of Theorem 1.1, the idea is to derive a priori estimates, independent of λ, of the solutions to (P_λ) for all λ > λ_0, for any fixed λ_0 > 0. This idea first appeared in [4] for q = 2 and α = 0, but the approach used there for deriving the estimates does not work in our framework. For our purposes, it is more convenient to use the arguments developed in [31], which allow us to find L^p estimates of supersolutions. After that, we establish a bootstrap argument, which works thanks to some results in [24], and yields an L^∞ estimate. Actually, these results are valid only in the nonsingular case α = 0, so we will extend some parts of them to our singular framework.
Hypotheses (1.1) and (1.2) in Theorem 1.1 deserve some comments. They appear in the proof as a result of the combination of the mentioned techniques from [31] and the bootstrap from [24]. However, we presume that these are technical assumptions forced by the tools we employed, so the theorem might admit some improvements. In order to clarify the meaning of these two conditions, we derive some corollaries below in which simpler conditions assuring (1.1) are imposed. For instance, if we consider the sequence {Q_n} defined in (1.3) below, then Theorem 1.1 applies for every q ∈ (1, Q_N], with no extra hypotheses on α apart from 0 ≤ α < q − 1 (see Corollary 3.19). Observe that Q_n > 1 for all n, but lim_{n→∞} Q_n = 1. This means that, if N is large, then q has to be chosen close to 1. However, one would expect a multiplicity result for any q ∈ (1, 2) and any N. This still remains an open problem. In any case, Corollary 3.19 represents a remarkable advance, in particular concerning the nonsingular problem (R_λ). Changing the point of view, we give in Corollaries 3.21 and 3.22 below conditions on α and N that are sufficient for applying Theorem 1.1 even for q close to 2.
With the aim of having a deeper insight into problem (P_λ), we also consider in this work the case q − 1 < α ≤ 1. In contrast with the previous situation (0 ≤ α < q − 1), we will prove that existence and uniqueness hold for λ > 0 small enough. For this purpose, we will need an assumption on Ω, denoted by (A). Note that, if ∂Ω is Lipschitz, then Ω satisfies (A) (see [3]), so this represents only a mild restriction. The precise hypotheses that we need are gathered in (H2). We emphasize that µ is allowed to vanish in subsets of Ω with nonzero measure.
The statement of the main result in the case q − 1 < α ≤ 1 is the following:

Theorem 1.2. Assume that (H2) holds. Then there exists a solution to (P_λ) for all λ < λ_1, and there exists no solution to (P_λ) for any λ ≥ λ_1. Moreover, the solution is unique for all λ ≤ 0 and, if f satisfies that

∀ω ⊂⊂ Ω ∃c_ω > 0 : f ≥ c_ω in ω,

then the solution is unique for all λ < λ_1. Finally, λ_1 is the unique bifurcation point from infinity to problem (P_λ).
Even though we are especially interested in the uniqueness part, the existence statement in Theorem 1.2 also deserves attention. Observe that one has existence of solution if and only if λ < λ_1. This suggests that the nonlinear term does not play an essential role in this case, since the situation is analogous to that of the linear problem (µ ≡ 0). Recall that this is not the case when α = q − 1, for which one has existence if and only if λ < λ*, where λ* < λ_1 provided µ > 0 (see [16, Remark 6.3]).
The proof of the existence of solution in Theorem 1.2 is performed by passing to the limit in a certain family of approximate nonsingular problems. We will derive uniform a priori estimates in Hölder norms on the solutions to such a family, which will allow us to pass to the limit. For proving such estimates, the assumption α ≤ 1 is essential (see Remark 3.3 below). Moreover, the continuity of the solutions is also essential to prove their uniqueness. Indeed, we state and prove in Section 2 two comparison principles valid for continuous lower and upper solutions to singular equations. As far as we know, these two results are new, and they are interesting by themselves, as only a few uniqueness results for singular equations are known (see [2,5,6,14,16]). We follow in their proofs the arguments in [3] and [16].
In summary, our results contribute to the theory of equations with subquadratic growth in the gradient, extending what is known about the multiplicity of solutions in the quadratic case. On the other hand, they can be seen as a link between the singular and nonsingular theories, in the sense that they show that the presence of a singularity is decisive only if the singularity is strong enough. Finally, new existence and uniqueness results are given for strong singularities, where the uniqueness part is especially remarkable.
We organize the paper as follows: in Section 2 we deal with the mentioned comparison principles; we devote Section 3 to prove Theorem 1.1 as well as some auxiliary results and some consequences of the mentioned theorem; Section 4 contains the proof of Theorem 1.2, and Section 5 is an appendix where we prove a continuation result needed in the proof of Theorem 1.1.
Acknowledgments. The author wants to thank warmly T. Leonori and J. Carmona for their helpful contributions to this work.

Notation.
• For every x ∈ R^N, the distance from x to ∂Ω will be denoted by δ(x). Furthermore, for p ≥ 1 we will denote by L^{p,δ}(Ω) the space of functions u : Ω → R such that ∫_Ω |u|^p δ dx < ∞, identifying functions equal up to a set of zero measure.
• For p ≥ 1, we will denote the usual Marcinkiewicz space by M^p(Ω), i.e., the space of functions u : Ω → R for which there exists c > 0 such that |{|u| > k}| k^p ≤ c for all k > 0. In this case, we denote ‖u‖_{M^p(Ω)} := (inf{c > 0 : |{|u| > k}| k^p ≤ c for all k > 0})^{1/p}.
• For k ≥ 0, the usual truncation functions will be written as T_k(s) = max{−k, min{s, k}} and G_k(s) = s − T_k(s) for all s ∈ R.
• The principal eigenvalue of the operator −∆ under zero Dirichlet boundary conditions will be denoted by λ_1, and ϕ_1 will denote the corresponding eigenfunction with ‖ϕ_1‖_{L^∞(Ω)} = 1.
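The truncation functions above can be sketched as follows (a minimal illustration of the definitions, not part of the paper's arguments):

```python
# Truncations from the Notation section:
# T_k(s) = max{-k, min{s, k}} cuts s off at the levels ±k;
# G_k(s) = s - T_k(s) keeps only the excess over those levels.
def T(k, s):
    return max(-k, min(s, k))

def G(k, s):
    return s - T(k, s)

print(T(2, 5), G(2, 5))    # 2 3
print(T(2, -5), G(2, -5))  # -2 -3
print(T(2, 1), G(2, 1))    # 1 0
```

In particular, T_k(s) + G_k(s) = s for every s, a splitting used repeatedly in the a priori estimates below.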

Comparison principles
We start with a comparison principle valid for singular equations. The proof basically follows the steps of a similar result in [3]. However, to our knowledge this is the first time that a comparison result is proved which includes a general positive singular lower order term on the right-hand side of the equation (see the comparison results in [16], where a specific 1-homogeneous singular term is considered).
Remark 2.2. Theorem 2.1 is valid for a wide class of lower order terms. For instance, the model example is the term µ(x)|∇u|^q/u^α for any α > 0 and 0 ≤ µ ∈ L^∞_loc(Ω). In particular, the growth of the singularity is irrelevant in the proof. Nonetheless, the comparison principle does not work for λ > 0. Indeed, as we pointed out in the Introduction, if the singularity is mild enough in some sense, then a multiplicity phenomenon appears for λ > 0. Thus, for the model case, the comparison result is sharp in terms of the sign of λ.
Proof of Theorem 2.1. Let us denote w = u − v. For k > 0, we consider the function φ = (w − k)^+, and we also denote A_k = {x ∈ Ω : w(x) ≥ k}. Notice that supp(φ) ⊂ A_k. Moreover, condition (2.3) implies that A_k ⊂⊂ Ω, so φ has compact support. In particular, φ ∈ H^1_0(Ω) ∩ L^∞(Ω), so it can be taken as test function in (2.1) and (2.2). Since λ ≤ 0, we deduce that (2.6) holds. Assume, in order to achieve a contradiction, that w^+ ≢ 0, and let k_0 ∈ (0, ‖w^+‖_{L^∞(Ω)}). Let also ω ⊂⊂ Ω be an open set such that A_{k_0} ⊂ ω. Observe that A_k ⊂ A_{k_0} for all k ≥ k_0. Then, using the properties of g, from (2.6) we deduce (2.7). For every j ∈ R, let us denote Ω_j = {x ∈ Ω : |w(x)| = j}, and consider also the set J = {j ∈ R : |Ω_j| > 0}. Since |Ω| < ∞, J is at most countable, which implies that the set ∪_{j∈J} Ω_j is measurable. Hence, if we define the set Z = Ω \ ∪_{j∈J} Ω_j, we deduce (2.8) from (2.7). Taking into account that u, v ∈ W^{1,N}_loc(Ω) and that A_k ⊂⊂ Ω, from (2.8) we derive (2.9). Let us now consider the function F appearing in (2.9); it is clear that F is nonincreasing and continuous. Thus, choosing k close enough to ‖w^+‖_{L^∞(Ω)}, we deduce from (2.9) that (w − k)^+ ≡ 0. That is to say, w ≤ k in Ω. But this is not possible, since k < ‖w^+‖_{L^∞(Ω)} = sup_Ω w. In conclusion, we have proved that w^+ ≡ 0, i.e., w ≤ 0 in Ω.
The next theorem is another comparison principle, which works for λ > 0. In exchange, one has to impose stronger hypotheses on g and h. The proof is similar to the one above, combined with some ideas in [16].
Proof. For every ε > 0, let us consider the function w_ε. We claim that w_ε^+ ≡ 0 for any ε > 0. Suppose by contradiction that there exists ε_0 > 0 such that w_{ε_0}^+ ≢ 0. Let us fix k_0 ∈ (0, ‖w_{ε_0}^+‖_{L^∞(Ω)}) and ε ∈ (0, ε_0), the latter to be chosen small enough later. Notice that supp((w_ε − k)^+) ⊂ A_k. By (2.11), we also have a lim sup condition near ∂Ω which implies that A_k ⊂⊂ Ω. Then, the function (w_ε − k)^+ has compact support and, in particular, (w_ε − k)^+ ∈ H^1_0(Ω) ∩ L^∞(Ω). Therefore, we may take (w_ε − k)^+ u as test function and, using that g ≥ 0, derive the corresponding inequality. Let ω ⊂⊂ Ω be an open set such that A_{k_0} ⊂ ω, and observe that A_k ⊂ A_{k_0} for all k ≥ k_0. Estimating the resulting terms in ω, where c_ω is the constant given by (2.10), and choosing ε suitably, it is straightforward to deduce that (2.14) holds again.
Since α < q − 1, we have q − α > 1. That is to say, the lower order term has superlinear homogeneity.
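Indeed, writing the model lower order term as µ(x)|∇u|^q/u^α, its homogeneity can be read off from a one-line scaling computation:

```latex
% Scaling of the lower order term: for every t > 0,
\frac{|\nabla(tu)|^{q}}{(tu)^{\alpha}}
  = t^{\,q-\alpha}\,\frac{|\nabla u|^{q}}{u^{\alpha}} ,
% so the term is homogeneous of degree q - \alpha. It matches the degree of
% the linear term \lambda u exactly when \alpha = q - 1, is superlinear when
% \alpha < q - 1, and is sublinear when q - 1 < \alpha \le 1.
```

Thus α = q − 1 is exactly the 1-homogeneous, linear-like case, in agreement with the break-point picture described in the Introduction.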
The concept of solution we will adopt is gathered in the following definition.

Remark 3.2.
Arguing as in [16, Appendix], it can be proved that a definition of subsolution, supersolution and solution to (P_λ) using test functions in H^1_0(Ω) ∩ L^∞(Ω) is equivalent to Definition 3.1. Moreover, even the concepts of supersolution and solution to problem (P_λ) with test functions only in H^1_0(Ω) ∩ L^∞(Ω) are equivalent to the corresponding concepts in Definition 3.1.

Remark 3.3. Assume that (H1) holds. By taking ϕ_1 as test function in the weak formulation of (P_λ) one easily deduces that, if u is a solution to (P_λ), then λ < λ_1. Furthermore, since α ∈ [0, 1], it can be proved as in [16, Appendix], which follows the ideas in [28], that every solution u to (P_λ), for any λ < λ_1, satisfies u ∈ C^{0,η}(Ω) for some η ∈ (0, 1). Finally, since the solutions to (P_λ) are positive in compact subsets of Ω, it can be seen again as in the mentioned appendix that u ∈ W^{1,N}_loc(Ω) for every solution to (P_λ), for any λ < λ_1.

Our first result is concerned with the existence and uniqueness of solution to (P_λ) for λ ≤ 0. The existence is well-known from the works quoted in the proof below. However, a precise statement for unbounded datum f is required for our purposes. In any case, the uniqueness is, to our knowledge, new.
Proposition 3.4. Assume that (H1) holds. Then, problem (P λ ) has a unique solution for all λ < 0. Moreover, assume additionally that either α > 0 or the following smallness condition holds: Then (P 0 ) has a unique solution.
Proof. The result for α = 0 and λ ≤ 0 is well-known. Indeed, the existence of solution for λ < 0 is proved in [10,12], the existence for λ = 0 under the smallness condition is proved in [20], and the uniqueness for λ ≤ 0, in [3]. Thus, we assume that α ∈ (0, q − 1). Observe now that, by Young's inequality, there exist C_1, C_2 > 0 such that the lower order term can be conveniently estimated. Then, the hypotheses of [23, Proposition 4.1] are fulfilled, so there exists a solution u_0 ∈ H^1_0(Ω) ∩ L^∞(Ω) to (P_0) in some sense weaker than Definition 3.1. Nonetheless, since f ≢ 0 in Ω, the strong maximum principle implies that u_0 > 0 in Ω, so u_0 is in fact a solution to (P_0) in the sense of Definition 3.1.
Concerning the existence for λ < 0, we argue by approximation as follows. For all n ∈ N, let us consider the problem (3.3). Since (3.1) and (3.2) hold, we know from [22] that there exists a solution u_n ∈ H^1_0(Ω) ∩ L^∞(Ω). Hence, Theorem 2.1 applies (see Remark 3.3) and yields a bound independent of n; in other words, {u_n} is bounded in L^∞(Ω). By taking u_n as test function in the weak formulation of (3.3), we immediately deduce that {u_n} is also bounded in H^1_0(Ω). Hence, there exists u ∈ H^1_0(Ω) ∩ L^∞(Ω) such that, passing to a subsequence, u_n ⇀ u weakly in H^1_0(Ω) and u_n → u strongly in L^p(Ω) for any p ∈ [1, ∞).
Now, the strong maximum principle applied to z implies the corresponding lower bound. Thus, by virtue of [9, Theorem 2.1], ∇u_n → ∇u strongly in L^q(Ω)^N, up to a subsequence. The convergences we have proved for {u_n} and {∇u_n} are enough to pass to the limit in (3.3). The proof is standard; we refer to the proof of [16, Proposition 5.2] for further details. In sum, u is a solution to (P_λ).
The uniqueness of u is a direct consequence of Theorem 2.1 and Remark 3.3.
The next result shows that, if α = 0, then the existence of solution to (P_0) may fail if f or µ is too large in some sense, in contrast with the case α > 0. Thus, the smallness assumption in Proposition 3.4 is justified. This result is essentially contained in [1, Theorem 2.1]. We include the statement and proof in our context for completeness.

Proposition 3.5. Assume that (H1) holds with α = 0, and suppose that (P_λ) admits a solution for some λ ≥ 0. Then, the corresponding smallness condition on f and µ necessarily holds.

Proof. Let u be a solution to (P_λ); a suitable function of u belongs to H^1_0(Ω) ∩ L^∞(Ω), so it can be taken as test function in the weak formulation of (P_λ) to obtain, after using Young's inequality, the desired bound. Hence, the result follows.
Our aim in the next two subsections is to prove, for a fixed λ 0 > 0, an L ∞ estimate for the solutions to (P λ ) for all λ > λ 0 . Such an estimate implies that zero is the only possible bifurcation point from infinity to problem (P λ ). This fact will be the key to prove multiplicity of solutions to (P λ ) for λ > 0 small enough.
This subsection is devoted to proving an L p estimate on the supersolutions to (P λ ) for λ > 0. The techniques employed here have been taken from [31].
The first result of the subsection provides an apparently weak local estimate on the solutions to (P_λ). Notwithstanding, this is the starting point for proving the L^∞ estimate we are aiming at. Concerning the proof, we will argue similarly to Proposition 3.5.
Lemma 3.6. Assume that (H1) holds. Then, for every λ_0 > 0 and ω ⊂⊂ Ω there exists C > 0 such that (3.4) holds for every solution to (P_λ) with λ > λ_0.

Proof. Taking φ^β as test function in (P_λ), where φ is a cut-off function with φ = 1 in ω and β > 1 is to be chosen, and using Young's inequality twice, we obtain an inequality whose last term is bounded upon taking β = q/(q − 1 − α). Therefore, (3.4) follows by taking into account that φ = 1 in ω.
The following is a slightly more general version of [13, Lemma 3.2].
Proof. Let us consider, for all n ∈ N, the following problem: It is well-known that it has a unique solution v_n ∈ C^{1,ν}_0(Ω) for all ν ∈ (0, 1). Moreover, [13, Lemma 3.2] yields an estimate with a constant C > 0 depending only on Ω; in particular, C does not depend on n.
On the other hand, by comparison, the desired inequality clearly holds for each n. We conclude the proof by letting n tend to infinity.
The next lemma is an immediate consequence of Lemma 3.7.
Combining Lemmas 3.6 and 3.8, we obtain in the following result some estimates in weighted Lebesgue spaces. Lemma 3.9. Assume that (H1) holds. Then, for every λ_0 > 0 there exists C > 0 such that the corresponding weighted estimates hold for every supersolution u to (P_λ) with λ > λ_0.
We finish the subsection with the best L^p estimate for supersolutions that we obtain with these techniques. Lemma 3.10. Assume that (H1) holds. Then, for every λ_0 > 0 there exists C > 0 such that ‖u‖_{L^m(Ω)} ≤ C, with m = b(1 − α/q), for every supersolution u to (P_λ) with λ > λ_0. Proof. Let us denote v = u^{1−α/q}. Since 1 − α/q > 1/2, we can argue as in [16, Lemma 2.6] to prove that v ∈ H^1_0(Ω). Then, [31, Proposition 2] together with Lemma 3.9 yields (3.7) and (3.8). It is easy to check that, in fact, γ = 0. Therefore, recalling that m = b(1 − α/q), from (3.8) and (3.7) we conclude that the result holds true.
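For completeness, the inequality 1 − α/q > 1/2 used in the proof follows at once from the standing assumptions 0 ≤ α < q − 1 and 1 < q < 2:

```latex
1-\frac{\alpha}{q} \;>\; 1-\frac{q-1}{q} \;=\; \frac{1}{q} \;>\; \frac{1}{2},
% where the first inequality uses \alpha < q-1 and the last one uses q < 2.
```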
In this subsection we will show how to obtain L^∞ estimates on the solutions to (P_λ) for λ > 0 by combining the L^p estimate given by Lemma 3.10 and a bootstrap argument. We will make use of several results in [24]. In fact, the ideas in that paper will also be used to derive some new results which provide analogous estimates in our singular framework.
We start the subsection with the easier case α = 0, which is interesting in itself; we will deal with the singular case α ∈ (0, q − 1) later. Thus we state and prove the following: Proposition 3.11. Assume that (H1) holds with α = 0, and consider the sequence {Q_n} defined by (1.3). Then, for every q ∈ (1, Q_N] \ {2} and every λ_0 > 0, there exists C > 0 such that ‖u‖_{L^∞(Ω)} ≤ C for every solution u to (P_λ) with λ > λ_0.
Proof. In this proof, C denotes a positive constant, independent of u and λ, whose value may vary from line to line. We start by assuming that 1 < q < N/(N−1). Observe that N/(N−1) < Q_N, so q ≤ Q_N is not a restriction in this case.
Let us assume now that m = N/2. Then, [24, Theorem 5.8, item (ii)] implies that ‖u‖_{L^p(Ω)} ≤ C for all p < ∞. In particular, ‖h‖_{L^{p_0}(Ω)} ≤ C. Since p_0 > N/2, item (i) of the same theorem then yields the L^∞ estimate.
Suppose now that (2^*)′ < m < N/2. Let us define the sequence {m_n} inductively, starting from m_0 = m. This is clearly an increasing sequence. Moreover, using one more time [24, Theorem 5.8, item (iii)], it is easy to see that ‖u‖_{L^{m_n}(Ω)} ≤ C for n ∈ N as long as m_n < N/2. In particular, the same holds for h.
Assume by contradiction that m_n < N/2 for all n ∈ N. Since {m_n} is increasing and bounded from above, there exists l ≤ N/2 such that, passing to a not relabeled subsequence, m_n → l. Consequently, passing to the limit in the recursion, we obtain an equality from which we deduce that l = 0. But this is a contradiction, because m_0 > 0 and the sequence is increasing. Therefore, m_n ≥ N/2 for some n ∈ N, so the previous cases imply that ‖u‖_{L^∞(Ω)} ≤ C.
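The structure of this contradiction argument can be illustrated with a toy gain map. The map below, m ↦ Nm/(N − 2m), is the classical Sobolev-type improvement for −∆u = h ∈ L^m(Ω) with m < N/2; it is not the paper's exact recursion (which also involves q and α), but it shares the relevant features: it is increasing along iterates and its only fixed point is m = 0, so the sequence cannot stagnate below N/2.

```python
# Toy bootstrap: each application of the gain map improves integrability.
# Solving N*m/(N - 2*m) = m gives 2*m**2 = 0, i.e. m = 0 is the only fixed
# point, mirroring the limit l = 0 obtained in the contradiction argument.
N = 10

def gain(m):
    return N * m / (N - 2 * m)

m = 1.7          # any starting integrability exponent with 0 < m < N/2
steps = [m]
while m < N / 2: # iterate until the threshold m >= N/2 is reached
    m = gain(m)
    steps.append(m)

print(steps)     # strictly increasing, escaping past N/2 in finitely many steps
```

Since the iterates increase and the only possible limit of a stagnating sequence would be the fixed point 0 < m_0, the bootstrap must terminate.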
It only remains to consider the case 1 < m ≤ (2^*)′. Now, item (iv) of the same theorem applies. On the other hand, it is straightforward to prove that, for any a ∈ (0, 1), there exists a constant b > 0 such that the corresponding interpolation inequality holds. Then, with m_n = m_{n−1}^{**} and m_0 = m, we argue as before. In particular, ‖h‖_{L^{m_1}(Ω)} ≤ C. It can be proved inductively that ‖u‖_{L^{m_n}(Ω)} ≤ C as long as m_n ≤ (2^*)′. Arguing as above, we deduce that {m_n} is increasing and divergent. Hence, m_n > (2^*)′ for some n ∈ N, and the proof concludes using the previous cases.
We now turn to the range N/(N−1) < q < 2. The procedure is the same as above, but in this case, instead of Theorem 5.8, one has to apply (a finite number of times) either [24, Theorem 4.9] or [24, Theorem 3.8], depending on the value of q. In both cases, one has to verify in the first step of the bootstrap that h ∈ L^{(q−1)N/q}(Ω), so that the hypotheses of both theorems are satisfied. We know by virtue of Lemma 3.10 that h ∈ L^m(Ω), so we have to impose that m ≥ (q−1)N/q. One can easily check that this inequality is satisfied if and only if q ≤ Q_N. It is left to consider the case q = N/(N−1). Since N/(N−1) < Q_N, we can take ε > 0 small enough so that N/(N−1) + ε ≤ Q_N. Moreover, by Young's inequality, |∇u|^{N/(N−1)} ≤ |∇u|^{N/(N−1)+ε} + C_ε for some C_ε > 0, so the previous case applies with h(x) = (λ + 1)u + f(x) replaced by h_ε(x) = h(x) + C_ε, and the proof concludes.
We deal now with the singular case. For this purpose, it is necessary to derive results similar to the ones from [24] mentioned in the previous proof, but valid for singular equations. Even though our results are not proper extensions in full generality (in [24] the solutions are weaker than ours, and the terms in their equation are not explicit and only satisfy growth restrictions), they are new in that they allow singular terms.
The mentioned results will be concerned with the auxiliary problem (3.10), posed in Ω, where the parameters satisfy (3.11). Let us define the functions r, σ : [0, q − 1) → R as in (3.12). Observe that both r and σ are decreasing functions. We will not write their dependence on θ when no confusion arises.
The following result provides estimates on solutions to (3.10) when q is large and h has enough summability.

Thus, Hölder's and Sobolev's inequalities imply that
Let us define the function F : [0, +∞) → R. Since q < 2, it is easy to see that F is a concave function, positive near zero, negative far from zero, and has a unique maximum F* > 0 with a corresponding unique maximizer Z* > 0. Let us now consider k* = inf{k > 0 : ‖h χ_{{|h| ≥ βk}}‖_{L^r(Ω)} < F*}.
Hence, for any δ > 0, the equation F(Y) = ‖h χ_{{|h| ≥ β(k*+δ)}}‖_{L^r(Ω)} has two roots Z_1 and Z_2 such that Z_1 < Z* < Z_2. By virtue of inequality (3.16), it holds that, for every k ≥ k* + δ, either Y_k ≤ Z_1 or Y_k ≥ Z_2. But the function k ↦ Y_k is continuous and tends to zero as k tends to infinity. Hence, Y_{k*+δ} ≤ Z_1 < Z*. Letting δ tend to zero, we obtain the corresponding bound for Y_{k*}. Now we take T_{k*+1}(u) as test function in the weak formulation of (3.10); since q < 2, this yields a bound on ‖u‖_{H^1_0(Ω)}. Note that, in principle, this last constant depends on k*, which may in turn depend on ‖h‖_{L^r(Ω)}. However, since ‖h‖_{L^r(Ω)} ≤ C, the absolute continuity of the integral implies that k* ≤ k_0 for some k_0 > 0 independent of ‖h‖_{L^r(Ω)}. Summarizing, the first part of item (1) is proved, and the second assertion follows as well. Thus, the proof of item (1) is concluded.
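The geometry exploited in this argument can be visualized with a toy model. The function F below is hypothetical and purely illustrative (concave, positive near zero, negative far from zero), not the F of the proof; it shows how any level strictly below the maximum F* is attained at exactly two roots Z_1 < Z* < Z_2:

```python
import math

# Toy concave function with the shape used in the proof; F* = F(Z*) = 1/2
# is its maximum, attained at the unique maximizer Z* = 1.
def F(Y):
    return math.sqrt(Y) - Y / 2

Z_star = 1.0          # from F'(Y) = 1/(2*sqrt(Y)) - 1/2 = 0
F_star = F(Z_star)

def bisect(f, a, b, tol=1e-12):
    """Plain bisection for a sign change of f on [a, b]."""
    fa = f(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b = m
        else:
            a, fa = m, fm
        if b - a < tol:
            break
    return 0.5 * (a + b)

level = 0.3           # any level strictly below F* gives two roots
assert level < F_star
Z1 = bisect(lambda Y: F(Y) - level, 1e-12, Z_star)  # root left of Z*
Z2 = bisect(lambda Y: F(Y) - level, Z_star, 10.0)   # root right of Z*
print(Z1, Z2)         # Z1 < Z* < Z2
```

A continuous quantity Y_k that avoids the interval (Z_1, Z_2) and tends to zero must therefore stay below Z_1, which is the mechanism of the proof.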
Proof of (2). Arguing as above, we take G_k(u)^{2τ−1} as test function in the weak formulation of (3.10) for some k > 0. In order to estimate the nonlinear term, notice that it splits with three exponents; hence, we can use Hölder's inequality with those three exponents and deduce the corresponding bound, which is possible thanks to part (1) of the theorem and the absolute continuity of the integral. Then, removing the positive linear term which contains β and using Hölder's inequality in the term with h, we derive the next estimate. Finally, using that u is bounded in H^1_0(Ω) (from item (1)), we deduce the desired bound. This proves part (2) of the theorem.
Proof of (3). Since τ → +∞ as p → N/2, part (3) is a clear consequence of part (2). Proof of (4). Let us take G_k(u) as test function in the weak formulation of (3.10) for some k > 0; removing the term with β, we obtain the corresponding inequality. Next, as in part (2), we take k ≥ k_0, with k_0 independent of u, so that ‖G_k(u)^{q−1}‖_{L^{2^*σ(0)}(Ω)} is small enough. We conclude by applying Stampacchia's method directly.
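For the reader's convenience, we recall the classical iteration lemma behind Stampacchia's method, stated in its standard form:

```latex
% Stampacchia's lemma: let \varphi : [k_0, \infty) \to [0, \infty) be
% nonincreasing and suppose that there exist C, \delta > 0 and \gamma > 1
% such that
\varphi(h) \;\le\; \frac{C\,\varphi(k)^{\gamma}}{(h-k)^{\delta}}
\qquad \text{for all } h > k \ge k_0 .
% Then \varphi(k_0 + d) = 0, where
d^{\delta} \;=\; C\,\varphi(k_0)^{\gamma-1}\,2^{\delta\gamma/(\gamma-1)} .
```

Applied to the level-set function φ(k) = |{u > k}|, the lemma yields the desired L^∞ bound.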
The following result is analogous to Proposition 3.12, but it is valid for a lower range of q. The proof is similar to the one above, but there are relevant differences, so we include it for the convenience of the reader.
and let us denote by τ the corresponding exponent. Then, for all C > 0 and p > 1, there exists M > 0 such that, for any h ∈ L^p(Ω) with ‖h‖_{L^p(Ω)} ≤ C and for any solution u to problem (3.10), the following holds: (2) if r(0) < p ≤ (2^*)′, then u(1 + u)^{τ−1} satisfies the corresponding estimate.

Remark 3.15. Concerning the hypothesis in item (1) in the previous result, it is easy to see, using that r is decreasing, r(b(q)) = 1 and r(0) = N(q−1)/q, that it is equivalent to h ∈ L^p(Ω) for a suitable range of p. This assumption is obviously weaker than p = N(q−1)/q, which is imposed in [24, Theorem 5.8]. Actually, if α ≥ b(q), then it is enough to impose that p > 1. Notice also that, if p = r(θ), then τ = σ(θ). In consequence, since the function p ↦ τ(p) is increasing, it holds that τ > σ(0) if p > r(0).
Proof of Proposition 3.14. Proof of (1). Let us consider the functions defined for every t ≥ 0 in (3.20), where ζ > 0 will be fixed later.
First of all, observe the elementary properties (3.21) and (3.22). For k > 0, let us take Φ_2(G_k(u)) ∈ H^1_0(Ω) ∩ L^∞(Ω) as test function in the weak formulation of (3.10), so that we obtain (3.23). Let us now estimate the nonlinear term: thanks to (3.21), we derive the corresponding bound. We now focus on the last term in (3.23): using (3.22) and then Young's inequality, we obtain a further estimate. Let us define the function F : [0, +∞) → R. Since q < 2, it is easy to see that F is a concave function, positive near zero, negative far from zero, and has a unique maximum F* > 0 with a corresponding unique maximizer Z* > 0.
Letting now δ tend to zero, we obtain the analogous bound, and hence (3.25). Fix k ≥ k* + 1 independent of ‖h‖_{L^r(Ω)}. Note again that this can be done since ‖h‖_{L^r(Ω)} ≤ C, so ‖h χ_{{|h| ≥ βk}}‖_{L^r(Ω)} → 0 uniformly in ‖h‖_{L^r(Ω)} as k → ∞. Then, estimate (3.25) implies the corresponding control. We claim now that (3.26) holds. Indeed, let us define the real functions y and z for all s ≥ 0. It is easy to see that y′(s) + s y(s) = z(s) for all s ≥ 0, and also that y(s) ≤ C z(s) for all s ≥ 0, for some C > 0.
Now we take T_k(u) y(u) as test function in the weak formulation of (3.10) and get (3.27). Concerning the right-hand side of (3.27), observe the estimates (3.28) and (3.29). Gathering (3.27), (3.28) and (3.29) together, we finally arrive at (3.26) by using Young's inequality, the fact that z is a bounded function, and (3.25). Thus, we have proved item (1). Proof of (2). Let us consider the following functions defined on [0, +∞). It can be easily proved that (3.30) and (3.31) hold. Moreover, since 2τ − 1 = 2^*τ/p′, it is easy to prove the corresponding identity. For some k > 0 we take Ψ_2(G_k(u)) as test function in the weak formulation of (3.10), so that we obtain (3.32). Concerning the singular term, using (3.30) and Hölder's and Sobolev's inequalities, we deduce the corresponding estimate.

Now we claim that
for some k > 0 large enough. Indeed, since q < 1 + 2/N , we have that and thus, for any k > 0, Therefore, by item (1), and the proof of the claim is done. As a consequence, it can be shown that the limit is uniform in u. Hence, from (3.33) we deduce that there exists k 0 > 0 independent of u such that Then, we derive from (3.32) that By virtue of (3.31) we immediately deduce the estimate We conclude the proof of part (2) of the proposition similarly to part (1).
Proof of (3). It follows the same steps as part (2), but considering this time Notice that this choice is valid, as τ > 1 whenever p > (2 * ) ′ .
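As a sanity check on the exponent bookkeeping (assuming, as suggested by the proof of part (2), that τ is determined by the relation 2τ − 1 = 2*τ/p′ with 2* = 2N/(N−2) the critical Sobolev exponent), solving for τ gives τ = p′/(2p′ − 2*), and one can verify numerically that τ > 1 for p sampled in ((2*)′, N/2), consistent with the requirement that τ > 1 whenever p > (2*)′:

```python
# Assumed relation from the proof of part (2): 2*tau - 1 = (ts * tau) / pp,
# where ts = 2N/(N-2) (critical Sobolev exponent) and pp = p/(p-1).
# Solving for tau: tau = pp / (2*pp - ts).  We check tau > 1 for p
# sampled in ((2*)', N/2), where (2*)' = 2N/(N+2).
for N in range(3, 11):
    ts = 2 * N / (N - 2)
    conj = ts / (ts - 1)                # (2*)' = 2N/(N+2)
    lo, hi = conj, N / 2
    for k in range(1, 50):
        p = lo + (hi - lo) * k / 50     # (2*)' < p < N/2
        pp = p / (p - 1)
        tau = pp / (2 * pp - ts)
        assert tau > 1
```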
We prove now a result analogous to Propositions 3.12 and 3.14 for q small. Proposition 3.16. Assume that q, α, β, µ satisfy (3.11) and let r, σ be defined as in (3.12). Assume also that Then, for all C > 0 and p ≥ 1, there exist M, k > 0 such that, for any h ∈ L p (Ω) with ‖h‖ L p (Ω) ≤ C and for any solution u to problem (3.10), the following holds: Proof of (1). For j, k > 0, let us take T j (G k (u)) ∈ H 1 0 (Ω) ∩ L ∞ (Ω) as test function in the weak formulation of (3.10). Thus we obtain On the one hand, it is clear that On the other hand, concerning the right hand side of (3.34), we obtain that In sum, we deduce that Then, we apply [8, Lemma 4.2], so that we deduce that Therefore, We now consider the function F : [0, ∞) → R defined as . The proof of this part concludes as in the previous proposition.
Proof of (2)-(5). The proofs of the rest of the items follow the steps of the corresponding ones from Proposition 3.14. The only part which is not completely straightforward is the proof of the estimate However, since q < N/(N − 1), we have q − 1 < 1/(N − 1) and hence N (q − 1) < N/(N − 1), so we deduce that . Therefore, the estimate holds by virtue of part (1).
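The elementary exponent comparison used in this step can be double-checked numerically: whenever q < N/(N−1), the exponent N(q − 1) stays below the threshold N/(N−1):

```python
# If q < N/(N-1), then q - 1 < 1/(N-1) and hence N*(q-1) < N/(N-1),
# which is the integrability threshold needed to invoke part (1).
for N in range(3, 30):
    thr = N / (N - 1)
    for k in range(1, 100):
        q = 1 + (thr - 1) * k / 100    # sample 1 < q < N/(N-1)
        assert N * (q - 1) < thr
```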
The same arguments of the proof of Proposition 3.11 (but using Propositions 3.12, 3.14 and 3.16 instead of the results in [24]) are also valid for proving the main result of this subsection.
Remark 3.18. Notice that, in principle, one cannot apply Proposition 3.14 or Proposition 3.16 to prove Proposition 3.17 in the case q = N/(N − 1). However, for ε > 0 small, we have that N/(N − 1) + ε < 1 + 2/N and |∇u| for any k > 0 and any solution u to (P λ ). Hence, the conclusions of Proposition 3.14 hold for q = N/(N − 1) + ε.
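The inequality behind this remark is elementary: for every N ≥ 3 one has N/(N−1) < 1 + 2/N, with gap exactly (N − 2)/(N(N − 1)), so N/(N−1) + ε < 1 + 2/N as soon as ε is below that gap. A quick numeric confirmation:

```python
# Check: (1 + 2/N) - N/(N-1) = (N-2)/(N*(N-1)) > 0 for N >= 3, so for
# eps smaller than this gap, N/(N-1) + eps < 1 + 2/N as used in the remark.
for N in range(3, 1000):
    gap = (1 + 2 / N) - N / (N - 1)
    assert gap > 0
    assert abs(gap - (N - 2) / (N * (N - 1))) < 1e-12
```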

Proof of the main result and consequences.
We prove now the main result of the paper.
We have proved that there exists a sequence {(λ n , u n )} ⊂ Σ + such that λ n → 0 and ‖u n ‖ L ∞ (Ω) → +∞ as n → +∞. We will show now that this fact and the connectedness of Σ + are enough to prove multiplicity of solutions for all λ > 0 small enough. Indeed, assume by contradiction that there exists another sequence {(µ n , v n )} ⊂ Σ + such that µ n → 0 as n → ∞ and (P µn ) admits no solution other than v n for all n. On the other hand, using that (0, u 0 ) ∈ Σ + and Σ + is connected, it is clear that Σ + ∩ B r ((0, u 0 )) \ {(0, u 0 )} ≠ ∅ for all r > 0, where B r ((0, u 0 )) denotes the open ball in R × L ∞ (Ω) centered at (0, u 0 ) with radius r. Hence, since v n is unique and µ n → 0, we have that, for all r > 0, there exists n r ∈ N such that, if n ≥ n r , then (µ n , v n ) ∈ Σ + ∩ B r ((0, u 0 )) \ {(0, u 0 )}. In other words, v n → u 0 in L ∞ (Ω) as n → ∞. Let us now take a subsequence (not relabeled) {(µ n , v n )} such that µ n+1 < λ n < µ n for all n. Let us also fix η > ‖u 0 ‖ L ∞ (Ω) , and take n large enough so that max{‖v n ‖ L ∞ (Ω) , ‖v n+1 ‖ L ∞ (Ω) } < η < ‖u n ‖ L ∞ (Ω) . We claim that there exists (ν n , w n ) ∈ Σ + such that ν n ∈ (µ n+1 , µ n ) and ‖w n ‖ L ∞ (Ω) = η.
Indeed, let us consider the set Arguing by contradiction, assume that Σ + ∩ A n,η = ∅. Let us define also On the one hand, the uniqueness of v n and the fact that max{‖v n ‖ L ∞ (Ω) , ‖v n+1 ‖ L ∞ (Ω) } < η imply that Σ + ∩ B n,η = ∅. On the other hand, if we consider the set then it is clear that U n,η is open in Σ + , (λ n , u n ) ∈ U n,η and ∂U n,η = A n,η ∪ B n,η . Hence, denoting V n,η = Σ + \ U n,η , we deduce that V n,η is also nonempty and open in Σ + , U n,η ∩ V n,η = ∅ and Σ + = U n,η ∪ V n,η . This contradicts the fact that Σ + is connected. Therefore, we have found a sequence {(ν n , w n )} ⊂ Σ + such that ν n → 0 as n → +∞ and ‖w n ‖ L ∞ (Ω) = η for all n large enough. In particular, {w n } is bounded in L ∞ (Ω). Then, we can argue as in the proof of Proposition 3.4 in order to pass to the limit in (P νn ). Thus, there exists w ∈ H 1 0 (Ω) ∩ L ∞ (Ω) such that w n ⇀ w weakly in H 1 0 (Ω), w n → w strongly in L ∞ (Ω) and w is a solution to (P 0 ). But ‖w‖ L ∞ (Ω) = η > ‖u 0 ‖ L ∞ (Ω) . This is a contradiction, as u 0 is unique by virtue of Theorem 2.1 and Remark 3.3. The proof is now concluded.
We conclude the section by stating and proving three corollaries of Theorem 1.1. The first one provides multiplicity of solutions for q small, but for any α ∈ [0, q − 1).
If N ≤ 5, assume also that q < 1 + 2/N . Then, the conclusions of Theorem 1.1 hold true.
Remark 3.20. Observe that Q N > 1 + 2/N for all N ≤ 5, while Q N < 1 + 2/N otherwise. That is why we need to introduce an additional restriction in Corollary 3.19 for low dimensions. We will make a more detailed study of the case q ≥ 1 + 2/N for dimensions N = 3, 4, 5 in Corollary 3.22 below.
The second corollary gives multiplicity of solutions for a wider range of q at the expense of taking α somehow close to q − 1.
Then, the conclusions of Theorem 1.1 hold true.
Proof. One only has to notice that, if q > N/(N − 1) and α ≥ N −1 That is to say, (1.1) holds and Theorem 1.1 can be applied. Finally, the last consequence of Theorem 1.1 provides multiplicity of solutions for q close to 2, but in this case more restrictive conditions have to be imposed on α, and even on N.
In this case, α > q − 1, so q − α < 1. That is to say, the lower order term has sublinear homogeneity. We will prove the existence of a solution to (P λ ) by deriving a priori estimates for an approximate problem and then passing to the limit; the limit will be the solution we seek. Thus, consider the following approximate problem: in Ω, u n = 0 on ∂Ω.
In the next lemma we show that problem (4.1) admits a solution.
We now prove the key estimates for the existence of a solution to problem (P λ ).

Proof.
Step 1: H 1 0 estimate. Let us take u n as test function in the weak formulation of (4.1). Then, by using Poincaré's and Hölder's inequalities, we obtain that Hence, we can apply Sobolev's inequality to get that .
Assume now, in order to achieve a contradiction, that {‖u n ‖ L ∞ (Ω) } n∈N is unbounded, and choose a divergent subsequence (not relabeled). Then, the function v n = u n /‖u n ‖ L ∞ (Ω) satisfies Notice that ‖v n ‖ L ∞ (Ω) = 1 for all n, and also that Then, it is standard to prove that ‖v n ‖ C 0,η (Ω) ≤ C for all n and for some η ∈ (0, 1) independent of n, following the arguments in [28] (see [16, Appendix]). Hence, by the Arzelà-Ascoli theorem, there exists v ∈ C(Ω) such that, up to a subsequence, v n → v uniformly in Ω. Necessarily, ‖v‖ L ∞ (Ω) = 1, so v ≢ 0. Moreover, by suitably applying the strong maximum principle, v > 0 in Ω. This last fact combined with the uniform convergence implies that, for every ω ⊂⊂ Ω, there exists c ω > 0 such that v n ≥ c ω in ω.
See the proof of [16, Proposition 5.2] for more details.
Let now φ ∈ C 1 c (Ω) be such that supp(φ) ⊂ ω for some open set ω ⊂⊂ Ω. Then, from (4.3) we deduce that Using now that {v n } n∈N is bounded in H 1 0 (Ω), we conclude that Finally, we pass to the limit in (4.2) and obtain that This contradicts the fact that λ < λ 1 .
We are ready now to prove the main theorem of this section. Finally, arguments similar to those in the proof of Step 2 in Proposition 4.3 can be used to prove that λ 1 is the only possible bifurcation point from infinity. Actually, reasoning by contradiction and using that there is no solution to (P λ1 ), it is also standard to prove that λ 1 is indeed a bifurcation point from infinity.
We will prove next that K is a completely continuous operator, i.e., it is continuous and maps bounded sets to relatively compact sets.
Proposition 5.1. Assume that (H1) holds. Then, the operator K is completely continuous.
Now we can argue as in [16, Appendix] to prove that {u n } is, in fact, bounded in C 0,η (Ω) for some η ∈ (0, 1). Therefore, the Arzelà-Ascoli theorem implies that {u n } admits a uniformly convergent subsequence. Say, up to a subsequence (not relabeled), u n → u uniformly in Ω for some u ∈ C(Ω).
Using that {u n } and {(λ n , w n )} are bounded in L ∞ (Ω) and in R × L ∞ (Ω), and also that α < q − 1 < 1, the previous equality clearly implies that {u n } is bounded in H 1 0 (Ω). Then, u ∈ H 1 0 (Ω) and, up to a new subsequence, u n ⇀ u in H 1 0 (Ω). Moreover, by [9], ∇u n → ∇u strongly in L q (Ω) N . Furthermore, a lower local estimate on {u n } can be derived by comparison in the usual way. With all these estimates and convergences, passing to the limit in (5.1) is standard. Therefore, u ∈ H 1 0 (Ω) ∩ L ∞ (Ω) is the unique solution to (5.1). This means that K(λ, w) = u. Thus, we have proved that, up to a subsequence, K(λ n , w n ) → K(λ, w) strongly in L ∞ (Ω). Actually, since (λ, w) was fixed from the beginning, the whole sequence, and not just a subsequence, converges to K(λ, w). That is to say, K is continuous.
It is left to prove that K maps bounded sets to relatively compact sets. In other words, for every sequence {(λ n , w n )} bounded in R × L ∞ (Ω), there exists (λ, w) ∈ R × L ∞ (Ω) such that, up to a subsequence, K(λ n , w n ) → K(λ, w) strongly in L ∞ (Ω). Indeed, it is well-known that, up to a subsequence, λ n → λ in R and w n → w weakly-* in L ∞ (Ω) for some (λ, w) ∈ R × L ∞ (Ω). This convergence is enough to pass to the limit in the term with w n . In the rest of the terms, we pass to the limit arguing as above. Thus, up to a subsequence, K(λ n , w n ) → K(λ, w), and the proof is finished.
Proof of Proposition 5.2. By virtue of Proposition 5.1, K is completely continuous. Moreover, since (P 0 ) admits at most one solution (by virtue of [3]), then u 0 is the unique solution to Φ(0, u) = 0 (see Remark 5.3). In particular, it is isolated. We will prove now that i(Φ(0, ·), u 0 ) = 0 by using the properties of the Leray-Schauder degree.
In conclusion, we can now apply [4, Theorem 2.2], which is essentially [30,Theorem 3.2], and the proof is finished.