Existence and uniqueness of solution for a singular elliptic differential equation

Abstract: In this article, we are concerned with the existence, uniqueness, and nonexistence of the positive solution for the singular elliptic problem

$$-\Delta u - \frac{1}{2}x\cdot\nabla u = \mu h(x)u^{-q} + \lambda u - u^{p} \ \text{ in } \mathbb{R}^{N}, \qquad u>0, \qquad u(x)\to 0 \ \text{ as } |x|\to+\infty,$$

where N ⩾ 3, 0 < q < 1, λ > 0, p > 1, μ > 0 is a parameter, and the function h(x) satisfies certain conditions. To start with, based on a variational argument and a perturbation method, we obtain the existence and uniqueness of the positive solution for the aforementioned singular elliptic differential equation when λ > N/2. In addition, there is no solution when λ ⩽ N/2. Later, from an experimental point of view, we give the numerical solution of the aforementioned singular elliptic differential equation by means of a neural network in some special cases, which enriches the theoretical results. Our conclusions partially extend the results corresponding to the nonsingular case.

Introduction
The one-parameter elliptic partial differential equation

$$-\Delta u = \lambda a(x)u - b(x)u^{p} \ \text{ in } \mathbb{R}^{N} \tag{1}$$

arose in mathematical biology and Riemannian geometry, where p > 1, N > 2, λ ∈ ℝ is a parameter, and a(·) and b(·) are both smooth functions in ℝ^N. Equation (1) has important research value in mathematics and applications, and there is a vast literature concerned with this kind of problem. In the context of mathematical biology, it was studied in [2,12,35]. For example, when the coefficients are positive for all x ∈ ℝ^N, Afrouzi and Brown [2] proved that equation (1) has a unique positive solution when λ > λ₁ and has no positive solution in other cases by constructing sub- and supersolutions, where λ₁ is the principal eigenvalue of the corresponding linearized equation. In addition, in [12], Du and Ma, under different assumptions on a and b, discussed the existence, uniqueness, and nonexistence for (1) again. Moreover, Du and Ma further obtained the properties of the solutions of (1) under other conditions in [13]. It is easy to see that the results remain valid if Δ is replaced by a uniformly elliptic operator. In the area of Riemannian geometry, for example, del Pino [11] proved that there exists a unique solution of equation (1) on a compact Riemannian manifold under certain conditions. For various other cases, one can refer to [4,21,22,24,28].
When a perturbation term is included, many articles have also studied the existence, uniqueness, and nonexistence of the positive solution for problem (1). For example, in 2019, Delgado et al. [9] added a perturbation to (1) with a(x) = b(x) = K(x); namely, they obtained the existence of positive solutions for a perturbed equation in which N ⩾ 1, λ, α ∈ ℝ, p > 0, and g(·) and K(·) are functions satisfying some conditions. Then, in 2021, under more general conditions on a(x) and b(x), Delgado et al. [10] discussed a further perturbed equation, in which M(x, y) satisfies certain conditions. In addition, using bifurcation theory, they discussed the properties of the solutions under various conditions on α, p, and r.
On the other hand, singular elliptic differential equations are an important part of PDEs. They are a significant tool for describing natural phenomena and explaining natural laws. The study of singular elliptic differential equations originated in the middle of the last century [1,27,34]. It aroused the interest of many mathematicians, who laid the groundwork, such as [8,18,33]. From a practical point of view, their applications involve reaction-diffusion, heat conduction, fluid dynamics, non-Newtonian fluids, and many other areas [7,20,23,25,30,32]. From a mathematical point of view, the study of singular elliptic equations not only enriches the theory of differential equations but also promotes the development of other mathematical and applied branches. Up until now, much attention has been focused on singular elliptic differential problems.
It is natural to ask: if one adds a singular perturbation term to (1), does the result in [13] still hold? With respect to this question, partial answers have been given in some special cases. For example, assuming a(x) = 1 and b(x) = 0, Hai [19] studied the regularity of the solutions for a singular perturbation problem of (1). If a(x) is a nonnegative function belonging to a suitable Lebesgue space and b(x) = 0, Durastanti and Oliva [14] obtained the existence and uniqueness of the positive solution when adding a singular term to (1). The conclusions in [19] and [14] are both obtained on a bounded domain Ω ⊂ ℝ^N, with conditions imposed on the boundary of Ω. Inspired by the aforementioned references, in this article, we consider equation (1) again with a more general singular perturbation, and the Laplacian operator is replaced by the uniformly elliptic operator Lu := −Δu − (1/2)x·∇u; namely, we pay attention to the existence, uniqueness, and nonexistence of the positive solution for the following singular elliptic differential equation:

$$-\Delta u - \frac{1}{2}x\cdot\nabla u = \mu h(x)u^{-q} + \lambda u - u^{p} \ \text{ in } \mathbb{R}^{N}, \qquad u>0, \qquad u(x)\to 0 \ \text{ as } |x|\to+\infty, \tag{2}$$

where N ⩾ 3, 0 < q < 1, p > 1, λ > 0, μ > 0 is a parameter, and h satisfies conditions stated in terms of the weighted spaces L¹_K(ℝ^N) and L²_K(ℝ^N) defined in Section 2.
In fact, elliptic equations involving the uniformly elliptic operator L := −Δ − (1/2)x·∇ have a long research history. As observed by Escobedo and Kavian in [15], this operator arises when one deals with the nonlinear heat equation

$$v_{t} - \Delta v = |v|^{p-1}v \ \text{ in } (0,+\infty)\times\mathbb{R}^{N}. \tag{4}$$

It is well known that if v ≢ 0 is a solution, then for λ > 0, one can define a family of solutions v_λ of (4) by

$$v_{\lambda}(t,x) := \lambda^{2/(p-1)}\, v(\lambda^{2}t, \lambda x), \tag{5}$$

and one can look for solutions v of (4) that are invariant under the "similarity" operation defined by (5), i.e., solutions v such that v_λ = v for all λ > 0. When looking for these so-called "self-similar" solutions, one finds that, writing v(t,x) = t^{−1/(p−1)} w(x/√t), the profile w has to satisfy a related elliptic equation, namely,

$$-\Delta w - \frac{1}{2}x\cdot\nabla w - \frac{1}{p-1}w = |w|^{p-1}w \ \text{ in } \mathbb{R}^{N}.$$

So numerous researchers have studied this operator extensively and achieved a series of remarkable results [3,6,16,26,29]. Furthermore, to test the rationality of the theoretical results, we conduct some comparative experiments to validate the uniqueness of the positive solution for (2) according to the agreement between the ground truth and the predictions of a neural network on validation points. Nowadays, neural networks have triggered changes in many fields. Due to their powerful fitting ability, neural networks can not only learn complex nonlinear mappings in computer vision and other research areas, but also solve some mathematical problems, such as nonlinear differential equations [31].
The rest of this article is organized as follows. Section 2 introduces some preliminary results that will be applied to prove the main results. Section 3 is devoted to the existence, uniqueness, and nonexistence of the positive solution for (2). Finally, we apply the method of neural networks to illustrate the aforementioned conclusions under a certain condition.

Preliminary results
In this section, we give some preliminary results that will be applied to prove the main conclusions.
In order to make equation (2) have a variational structure, one multiplies both sides of equation (2) by the weight

$$K(x) := e^{|x|^{2}/4},$$

and thus obtains its equivalent equation in divergence form:

$$-\operatorname{div}(K(x)\nabla u) = K(x)\left(\mu h(x)u^{-q} + \lambda u - u^{p}\right) \ \text{ in } \mathbb{R}^{N}. \tag{6}$$

We are going to work in the space X, which is the completion of C₀^∞(ℝ^N) with respect to the norm

$$\|u\| := \left(\int_{\mathbb{R}^{N}} K(x)|\nabla u|^{2}\,\mathrm{d}x\right)^{1/2}.$$

Through Propositions 1.1 and 1.12 in [15], it is clear that X is a Hilbert space and is continuously embedded into the weighted Lebesgue space

$$L_{K}^{2}(\mathbb{R}^{N}) := \left\{u : \int_{\mathbb{R}^{N}} K(x)u^{2}\,\mathrm{d}x < +\infty\right\}.$$

The linearization problem of equation (6) is

$$-\operatorname{div}(K(x)\nabla u) = \lambda K(x) u \ \text{ in } \mathbb{R}^{N}. \tag{7}$$

By Proposition 2.3 in [15], the principal eigenvalue of the linear equation (7) is

$$\lambda_{1} = \frac{N}{2}. \tag{8}$$

By (8), it is easy to obtain the following Poincaré-type inequality:

$$\frac{N}{2}\int_{\mathbb{R}^{N}} K(x)u^{2}\,\mathrm{d}x \leq \int_{\mathbb{R}^{N}} K(x)|\nabla u|^{2}\,\mathrm{d}x, \qquad u \in X. \tag{9}$$

To describe the energy functional of (6), consider the limit equation of (6), i.e., (6) with μ = 0:

$$-\operatorname{div}(K(x)\nabla u) = K(x)\left(\lambda u - u^{p}\right) \ \text{ in } \mathbb{R}^{N}. \tag{10}$$

Because one is seeking a positive solution, one replaces u with u⁺ directly and defines the associated C¹-functional of (10) as

$$I_{0}(u) := \frac{1}{2}\int_{\mathbb{R}^{N}} K|\nabla u|^{2}\,\mathrm{d}x - \frac{\lambda}{2}\int_{\mathbb{R}^{N}} K(u^{+})^{2}\,\mathrm{d}x + \frac{1}{p+1}\int_{\mathbb{R}^{N}} K(u^{+})^{p+1}\,\mathrm{d}x,$$

so that any critical point of I₀ is a positive solution of (10).
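The divergence-form equivalence used here (multiplying by K(x) = e^{|x|²/4} turns −Δu − (1/2)x·∇u into −div(K∇u)/K, since ∇K = (x/2)K) can be checked numerically. A minimal sketch follows; the test function u(x) = e^{−|x|²} and the sample point are arbitrary illustrative choices, not from the article:

```python
import numpy as np

def K(x):
    """Weight K(x) = exp(|x|^2 / 4)."""
    return np.exp(np.dot(x, x) / 4.0)

def u(x):
    """Arbitrary smooth test function u(x) = exp(-|x|^2)."""
    return np.exp(-np.dot(x, x))

def grad_u(x):
    return -2.0 * x * u(x)

def lap_u(x):
    # For u = exp(-|x|^2) in dimension N: Laplacian = (4|x|^2 - 2N) u
    return (4.0 * np.dot(x, x) - 2.0 * len(x)) * u(x)

def div_K_grad_u(x, h=1e-5):
    """Numerical divergence of K(x) grad u(x) via central differences."""
    total = 0.0
    for j in range(len(x)):
        e = np.zeros_like(x)
        e[j] = h
        total += (K(x + e) * grad_u(x + e)[j]
                  - K(x - e) * grad_u(x - e)[j]) / (2.0 * h)
    return total

x = np.array([0.3, -0.7, 1.1])
lhs = -div_K_grad_u(x)                                  # -div(K grad u), numeric
rhs = K(x) * (-lap_u(x) - 0.5 * np.dot(x, grad_u(x)))   # K * (-Δu - x·∇u / 2)
assert abs(lhs - rhs) < 1e-6
```

The two sides agree up to finite-difference error, which is the identity underlying equation (6).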
For any u ∈ X, through Hölder's inequality and Sobolev's inequality, one obtains the estimate (11). Moreover, through Condition (3) and the interpolation inequality, one obtains a further estimate. Next, let us give the definition of the positive solution for equation (6).
Definition 1. We say that u ∈ X is a (weak) positive solution of equation (6) if: (i) u > 0 a.e. in ℝ^N; (ii) for any φ ∈ X, the weak formulation (12) holds. In the following, we show that the condition λ > λ₁ = N/2 is necessary for the existence of a positive solution to (6) when μ is small enough.
Lemma 1. Assume that μ is small enough. If equation (6) has a positive solution u on X, then λ > N/2.
Proof. Multiplying equation (6) by u ∈ X and then integrating by parts, one obtains

$$\int_{\mathbb{R}^{N}} K|\nabla u|^{2}\,\mathrm{d}x = \mu\int_{\mathbb{R}^{N}} K h(x) u^{1-q}\,\mathrm{d}x + \lambda\int_{\mathbb{R}^{N}} K u^{2}\,\mathrm{d}x - \int_{\mathbb{R}^{N}} K u^{p+1}\,\mathrm{d}x. \tag{13}$$

Hence, combining (9) and (13), one arrives at

$$\frac{N}{2}\int_{\mathbb{R}^{N}} K u^{2}\,\mathrm{d}x \leq \mu\int_{\mathbb{R}^{N}} K h(x) u^{1-q}\,\mathrm{d}x + \lambda\int_{\mathbb{R}^{N}} K u^{2}\,\mathrm{d}x - \int_{\mathbb{R}^{N}} K u^{p+1}\,\mathrm{d}x. \tag{14}$$

By (11), the singular integral is controlled. Taking the limit μ → 0⁺ on both sides of (14) and using the sign-preserving property of limits, one concludes that λ > N/2 when μ is small enough. □

Theoretical proof
First, it will be shown that the functional I_μ(u) is bounded below on the ball B_R(0) with R large enough.
Lemma 2. There exists μ* > 0 such that, for any μ ∈ (0, μ*), the functional I_μ is bounded below on B_R(0).

Proof. By Corollary 1.11 in [15], for a given λ, there exists a constant C > 0 such that (15) holds. Hence, by (15) and (11), one arrives at a lower bound for I_μ(u) for R large enough. □

Through Lemma 2, the functional I_μ(u) is bounded below and coercive. Thus, m_μ := inf_{B_R(0)} I_μ is well defined, and hence one obtains the following lemma.
Proof. By the fact that I_μ maps bounded sets to bounded sets, the conclusion follows easily.

□
Remark 2. The singular term causes difficulties even though the infimum of I_μ(u) can be attained in B_R(0). In fact, since 0 < q < 1, the singular term is continuous but not differentiable. So we do not know a priori whether the minimizers of I_μ(u) are positive solutions of (6) or not. However, the following lemma gives a positive answer to this question; namely, the minimizers of I_μ(u) are indeed positive solutions of (6).
Lemma 4. If u ∈ B_R(0) satisfies I_μ(u) = m_μ, then u is a positive solution of equation (6).
Proof. Note that u and u⁺ are interchangeable in the functionals I₀(u) and I_μ(u). We claim that u > 0 a.e. in ℝ^N. Indeed, suppose, on the contrary, that the set Ω₀ := {x ∈ ℝ^N : u(x) = 0} has positive measure. Choosing r > 0 suitably, it can be seen that the perturbed function u + tφ remains in B_R(0) for any t > 0 small enough. Since I_μ(u) is the infimum on B_R(0), it yields (16). Using a direct calculation, dividing both sides of (16) by t > 0, and passing to the limit through Fatou's lemma and (17), one reaches a contradiction. Hence, Ω₀ has zero measure and u > 0 a.e. in ℝ^N, so Condition (i) of Definition 1 holds.

Next, one proves that Condition (ii) of Definition 1 is also true. Taking an arbitrary nonnegative function ψ ∈ X and arguing as in (17), one arrives at (18). Since ψ ⩾ 0, the corresponding inequality holds for any t > 0. Then, using (18) and Fatou's lemma, one derives (19). Consider the auxiliary function T(s); since −1 < s₀ < 0, T attains its minimum at s = 0, and by direct computation the corresponding estimate follows. On the other hand, for any ϕ ∈ X and δ > 0, one tests with u + δϕ; the inequalities below are not affected. Through (19),

where the terms compare I_μ(u) with I_μ((u + δϕ)⁺) and involve the singular integrals μ∫ K(x)h(x)(·)^{1−q} dx; this yields (20) and (21). Combining (20) and (21), one obtains (22). Recalling (3), the singular term is integrable. Hence, using u > 0 a.e. in ℝ^N, one can deduce that 1_{Ω⁺_δ} → 1 a.e. in ℝ^N, where 1_{Ω⁺_δ} represents the characteristic function of the set Ω⁺_δ. Hence, dividing (22) by δ > 0 and taking the limit as δ → 0⁺, the desired inequality holds by Lebesgue's theorem. This inequality still holds with −ϕ in place of ϕ, so u ∈ X satisfies (12). □

In addition, since the functional I_μ is not differentiable, one cannot apply the standard minimization arguments directly. To overcome this difficulty, we consider a perturbation argument. Specifically speaking, for each k ∈ ℕ, one truncates the singular nonlinearity and defines the auxiliary functional I_{μ,k} of equation (6).

Remark 3. The technique for dealing with singular perturbation problems is mainly inspired by [16].
Next, we show that I_{μ,k} attains its minimum at some u_k ∈ B_R(0), and the desired positive solution will be obtained by passing to the limit as k → +∞. Based on this idea, one obtains the following result.
Proposition 1. For any μ ∈ (0, μ*) and k ∈ ℕ, the functional I_{μ,k} attains its infimum m_{μ,k} on B_R(0).

Proof. By the fact that 0 < q < 1, the truncated singular term admits the bound (23); hence, I_{μ,k}(u) is bounded below for any u ∈ X and k ∈ ℕ. Through Remark 1, one can define m_{μ,k} := inf_{B_R(0)} I_{μ,k}. Then, applying the Ekeland variational principle, there exists a minimizing sequence (u_n). Up to a subsequence, u_n converges a.e. and is dominated, for any 1 < γ ⩽ 2*, by some g_γ ∈ L^γ_K(ℝ^N). For s ⩾ 0, inequality (24) holds. Thus, based on (23) and (24), the singular terms pass to the limit by Lebesgue's theorem. Using a simple calculation, one acquires (25); hence, by (23), one obtains (26), where o_n(1) denotes a quantity approaching zero as n → +∞, and this combines with (26) to imply (27). Passing to the limit as n → +∞ and using that m_{μ,k} is the infimum of I_{μ,k}, one deduces that the limit u_k is a minimizer of I_{μ,k} with u_k > 0 a.e. in ℝ^N, where 0 < q < 1. By the assumptions on h and φ, using pointwise convergence and Lebesgue's theorem, one passes to the limit in the singular term; in addition, by (23) and a standard density argument, the weak formulation (28) follows. Hence, u_k solves the truncated problem. To pass to the limit in k, we may assume w_n ⩾ 0, replacing w_n by w_n⁺ if necessary. By direct calculation, for fixed n ∈ ℕ, from the definition of the truncation and w_n ⩾ 0, one receives (29). Combining this formula with (29), taking the limsup as k → +∞, and then passing to the limit as n → +∞, one gains the desired estimate.

Now, it remains to prove that the infimum m_μ can be attained. Since (u_k) is a bounded sequence in X, one takes a subsequence, still denoted by (u_k). Similar to (25), the singular terms converge; then, using (28) and an argument similar to the one above, the conclusion follows. □

Proof of Lemma 5. First, based on Sobolev's inequality, one obtains an a priori bound. Then, from equation (6) and Theorem 8.17 of [17], it follows that u is locally bounded on the balls B_r(x) = {y : |y − x| < r}. □

Theorem 2. For any 0 < μ < μ*, there exists at most one positive solution of equation (6) on X for any 0 < λ < +∞.
Proof. Suppose, on the contrary, that u₁ and u₂ are both positive solutions of equation (6) with u₁ ≠ u₂, satisfying (30) and its analogue, respectively. Using Lemma 5, one finds that v_i ∈ X and v_i has compact support, where i = 1, 2. If one multiplies (30) by v₁ and integrates by parts, then equality (31) holds. Similarly, one concludes (32). Subtracting (32) from (31), one receives (33). Define the set Ω(ε); according to the definition of Ω(ε), (33) is reduced to (34). Using a simple calculation, the left-hand side of (34) can be computed explicitly, which leads to a contradiction. Hence, u₁ = u₂. □

Theorem 3. For any 0 < μ < μ*, equation (6) has no positive solution on X when 0 < λ ⩽ N/2.
Proof. Through Lemma 1, it follows immediately that equation (6) has no positive solution on X when 0 < λ ⩽ N/2. □

Numerical experiments
In this section, by applying the method of neural networks, one obtains the approximate solution of equation (2) under the condition N = 3 and p = 1.5. In the first step, one sets μ = 0. To avoid the trivial solution, a regularizer is added according to Theorem 3.12 in [15]. During this step, we also verify the influence of λ on the existence of the positive solution by setting λ = 1 and λ = 3, respectively.
In the second step, one sets q = 0.5 and μ to a small negative power of ten, generates the initial and boundary values by the network trained in the first step, and then trains the network with a physics-informed procedure [31]; in this process, we also provide examples to illustrate that the approximate solution can be obtained starting from different initial weights. The neural network one uses is an eight-layer multilayer perceptron, with each hidden layer containing 32 neurons; nonlinear mapping is achieved by embedding sigmoid and tanh activation functions between the linear layers. Defining σ as the activation function of the neural network, the nonlinear mapping shown in Figure 1 can be expressed as

$$x_{i+1} = \sigma(W_{i}x_{i} + b_{i}),$$

where i denotes the index of the hidden layer, m and n denote the dimensions of the input and output vectors, respectively, and W_i ∈ ℝ^{n×m}, b_i ∈ ℝ^n. Define φ as the uniformly elliptic operator and F to be given by the right-hand term of (2), namely,

$$F(u, h, \mu, \lambda, p, q) := \mu h(x)u^{-q} + \lambda u - u^{p}.$$
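The architecture described above can be sketched as follows. This is a NumPy illustration with randomly initialized weights, reading "eight-layer" as eight 32-neuron hidden layers plus a linear output, with sigmoid on the first seven hidden layers and tanh on the last, as stated in the text; the initialization and all other details are hypothetical:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_mlp(dims, seed=0):
    """Randomly initialize (W, b) pairs for an MLP with layer sizes `dims`."""
    rng = np.random.default_rng(seed)
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / n), np.zeros(m))
            for n, m in zip(dims[:-1], dims[1:])]

def forward(params, x):
    """Forward pass: sigmoid on early hidden layers, tanh on the last hidden layer."""
    for i, (W, b) in enumerate(params):
        z = W @ x + b
        if i == len(params) - 1:        # output layer: linear
            x = z
        elif i == len(params) - 2:      # last hidden layer: tanh
            x = np.tanh(z)
        else:                           # earlier hidden layers: sigmoid
            x = sigmoid(z)
    return x

# N = 3 input coordinates -> scalar u(x); eight hidden layers of 32 neurons
dims = [3] + [32] * 8 + [1]
params = init_mlp(dims)
u = forward(params, np.array([0.5, 1.0, 2.0]))
print(u.shape)  # (1,)
```

The trained network plays the role of net(·) in the loss described next.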
The loss function consists of two parts, where x_i^j denotes the ith random sample point at the jth epoch and net(x_i^j) denotes the prediction of the neural network. Here, one takes k = 1, because for k = 1 the loss function has already converged. The results of the uniformly elliptic operator φ(net(x_i^j)) are derived by applying the chain rule for differentiating compositions of functions using automatic differentiation.
Wang et al. [36] argued that the commonly used L² loss is not suitable for training physics-informed neural networks, while the L^∞ loss is a better choice for some high-dimensional PDEs. Therefore, for the sake of a balance between efficiency and numerical-solution accuracy, the neural network is trained by minimizing the 8-norm with the Adam optimizer.
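A minimal sketch of a two-part loss with the 8-norm in place of the usual L² loss follows; the residual values below are stand-in random numbers, whereas in practice they would come from automatic differentiation of the network:

```python
import numpy as np

def p_norm_loss(residuals, p=8):
    """p-norm of pointwise residuals; large p emphasizes the worst points (close to sup-norm)."""
    r = np.abs(np.asarray(residuals, dtype=float))
    return float((r ** p).mean() ** (1.0 / p))

# Stand-in residuals at randomly sampled points (placeholders for autodiff output):
rng = np.random.default_rng(1)
res_pde = rng.normal(scale=0.05, size=1024)   # interior equation residual, e.g. φ(net) - F(net)
res_bc = rng.normal(scale=0.01, size=128)     # initial/boundary mismatch

loss = p_norm_loss(res_pde, p=8) + p_norm_loss(res_bc, p=8)  # two-part loss
```

Compared with the mean-square loss, this 8-norm lies between the L² average and the maximum residual, so a few badly fitted points dominate the gradient signal.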
In the first step, the initial learning rate is set to 0.001 and halved every 10,000 epochs. With k = 1, the procedure converges after 50,000 epochs. In the second step, the initial learning rate is set to 0.01 and halved every 2,000 epochs. With k = 0, the procedure converges after 10,000 epochs. Specifically, considering the bivariate function net(x₁, x₂, 0) and using a heatmap to show its function values on the domain x₁ ∈ [0, 5] and x₂ ∈ [0, 5], it is found that networks trained with different initial weights produce approximately the same function values (Figures 2 and 3).
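The stepwise halving schedules described above can be sketched as follows (a hypothetical helper; only the initial rates and halving intervals come from the text):

```python
def learning_rate(epoch, initial_lr, half_every):
    """Step decay: the learning rate is halved every `half_every` epochs."""
    return initial_lr * 0.5 ** (epoch // half_every)

# First step: start at 0.001, halve every 10,000 epochs.
rates = [learning_rate(e, 0.001, 10_000) for e in (0, 9_999, 10_000, 20_000)]
print(rates)  # [0.001, 0.001, 0.0005, 0.00025]
```

The second step uses the same rule with an initial rate of 0.01 halved every 2,000 epochs.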
One derives the function values and differential results for all grid points from the trained convergent neural network on this subset, and examines the quality of the learned solution u by visualizing the point-wise error between the left and right sides of equation (2). The evaluation metric is the equation loss.
When λ = 1, the absolute error is more than 0.1 over most of the domain, as shown in Figure 4. When λ = 3, it is less than 0.01 in the whole domain, as shown in Figures 5 and 6. It is clearly the case that the equation loss of equation (2) is smaller when λ = 3, which means the numerical solution one obtains is closer to the true solution.
In this special case, to further demonstrate the influence of the parameter λ on the existence of a positive solution of the equation, one sets λ = 1 and λ = 3, respectively, and traces the 1-norm and infinity norm at each epoch during the first step of training. It can be found that the overall results of both are convergent, which shows that our algorithm is effective. In addition, as shown in Figures 7 and 8, when λ = 1 the oscillation amplitude is obviously larger than when λ = 3, and the equation loss is also larger. So the convergence result for λ = 1 is clearly not as good as that for λ = 3.

Theorem 1. Equation (6) has at least one positive solution on X when λ > N/2.

Proof. Through Lemmas 1 and 4 and Proposition 1 (considering the limit in k), it is easy to obtain that (6) has at least one positive solution on the space X when λ > N/2. □

Lemma 5. If u is a positive solution of (6) with λ > 0, then u satisfies a bound with a constant C_γ independent of x and u; clearly, |u(x)| ⩽ C₀ for all x ∈ ℝ^N according to the embedding theorem, where C₀ is another constant. □

Next, let us show the uniqueness of the positive solution for equation (6).

Figure 1: The nonlinear mapping of the neural network (should be located in Section 4).
Here, x_i ∈ ℝ^n denotes the output of the ith hidden layer, and ⊗ denotes the vector dot multiplication. From the first to the seventh layer, we use sigmoid as the activation function; since the objective is to find a positive solution, in the last hidden layer one uses tanh as the activation function. During the first step, without loss of generality, one samples 1,024 points uniformly each epoch in the open ball of radius 10 as a batch to feed the network. During the second step, we sample 1,024 points uniformly each epoch in the open ball of radius 5 as a batch to feed the network, and take the initial and boundary values on the open ball of radius 5, generated by the neural network trained in the first stage, as the initial and boundary conditions for equation (2).
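Uniform sampling in an open ball, as used for the training batches above, is commonly done by drawing a random direction and scaling it by radius × U^{1/N}; a sketch (assuming NumPy; only the batch size 1,024, the dimension 3, and the radius 10 come from the text):

```python
import numpy as np

def sample_ball(n_points, dim, radius, rng):
    """Uniform samples in an open ball: random directions scaled by radius * U^(1/dim)."""
    v = rng.standard_normal((n_points, dim))
    v /= np.linalg.norm(v, axis=1, keepdims=True)        # uniform directions on the sphere
    r = radius * rng.random(n_points) ** (1.0 / dim)      # inverse of the radial CDF r^dim
    return v * r[:, None]

rng = np.random.default_rng(0)
batch = sample_ball(1024, 3, 10.0, rng)   # one training batch in the radius-10 ball
print(batch.shape)  # (1024, 3)
```

The exponent 1/dim compensates for the fact that the volume of a ball of radius r grows like r^dim, so plain uniform radii would oversample points near the center.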