The classical reaction-diffusion equation is a nonlinear partial differential equation frequently used to describe the evolution of numerous natural quantities (chemical concentrations, temperatures, populations, etc.). These phenomena combine local dynamics (via the reaction function f) with spatial dynamics (via diffusion). It is well known that solutions to reaction-diffusion systems can exhibit rich behavior, such as the existence of traveling waves or the formation of spatial patterns.
Naturally, equations (1.1) and (1.2) are also interesting from the standpoint of numerical mathematics, since they correspond to semi- or full discretizations of the original reaction-diffusion equation.
The literature dealing with equations (1.1) and (1.2) mainly studies dynamical properties such as asymptotic behavior [5, 33, 34], existence of traveling wave solutions [8, 9, 10, 21, 35, 36, 37] and pattern formation [6, 7, 8], in particular for specific nonlinearities (e.g., the Fisher or Nagumo equation). A growing number of studies deal with these questions in nonautonomous cases [17, 24]. In this paper, we study (1.1)–(1.2) with a general time- and space-dependent nonlinearity f. Our focus lies on existence, uniqueness, continuous dependence (both on the initial condition and on the underlying time structure/numerical discretization), and a priori bounds in the form of weak and strong maximum principles. Note that both continuous dependence and maximum principles are key assumptions in proofs of the existence of traveling waves [21, 35]. Our goal is to explore and describe them in full generality.
In order to consider both (1.1) and (1.2) at once, and motivated by convergence issues and continuous dependence of solutions on the time discretization, we use the language of the time scale calculus [4, 16]. We do not restrict ourselves to symmetric diffusion (see the following paragraph) and consider the nonautonomous reaction-diffusion processes

u^Δ(x, t) = a u(x+1, t) + b u(x−1, t) − (a+b) u(x, t) + f(u(x, t), x, t), x ∈ ℤ, t ∈ 𝕋, (1.3)

where a, b ∈ ℝ, 𝕋 is a time scale, and the symbol u^Δ denotes the delta derivative with respect to time. Our results are new even in the special cases 𝕋 = ℝ (when u^Δ becomes the partial derivative ∂u/∂t) and 𝕋 = ℤ (when u^Δ is the partial difference with respect to t).
If a = b > 0, then (1.3) becomes the symmetric lattice reaction-diffusion equation. The asymmetric case a, b > 0, a ≠ b, corresponds to the lattice reaction-advection-diffusion equation. Next, if a > 0 and b = 0, or if a = 0 and b > 0, then (1.3) reduces to the lattice reaction-transport equation. For more details and other special cases, see [29, Section 1].
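The special cases above can be explored numerically. The following is a minimal sketch of ours (not code from the paper): it assumes the lattice right-hand side a u(x+1, t) + b u(x−1, t) − (a+b) u(x, t) + f(u(x, t)) and truncates the lattice to a finite window with constant boundary extension.

```python
import numpy as np

def lattice_rd_step(u, a, b, f, eps):
    """One explicit Euler step of the lattice reaction-diffusion equation
    u^Delta(x,t) = a*u(x+1,t) + b*u(x-1,t) - (a+b)*u(x,t) + f(u(x,t)),
    truncated to a finite window with constant extension at the boundary
    (the truncation is a simplification made for this illustration)."""
    right = np.empty_like(u)
    right[:-1], right[-1] = u[1:], u[-1]   # u(x+1), constant at right edge
    left = np.empty_like(u)
    left[1:], left[0] = u[:-1], u[0]       # u(x-1), constant at left edge
    return u + eps * (a * right + b * left - (a + b) * u + f(u))

# Symmetric diffusion (a == b) with a logistic reaction term.
u = np.zeros(21)
u[10] = 0.5                                # localized initial population
for _ in range(100):
    u = lattice_rd_step(u, a=0.25, b=0.25, f=lambda v: v * (1.0 - v), eps=0.1)
```

With this step size the iterates remain in [0, 1] and spread outward from the initial peak, in line with the weak maximum principle discussed in Section 4.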
In Section 2, we formulate (1.3) as an abstract nonautonomous dynamic equation and prove the local existence of solutions. In comparison with the existing literature [5, 33, 34], we do not work in the Hilbert space ℓ²(ℤ) or in weighted spaces but in the Banach space ℓ∞(ℤ), which is a much more natural choice. We also prove the uniqueness of bounded solutions. In Section 3, we use techniques from the Kurzweil–Stieltjes integration theory to show the continuous dependence of solutions on the time scale (time discretization). In the special case, this implies the convergence of solutions of (1.2) to the solution of (1.1) as the time discretization step tends to zero. Following ideas from earlier work on initial-boundary-value problems on finite subsets of ℤ, we provide weak maximum and minimum principles in Section 4. These a priori bounds, as usual, depend strongly on the time structure. Combined with the local existence results, they enable us to prove the global existence of bounded solutions to (1.3). We illustrate our findings on the autonomous logistic and bistable nonlinearities (Fisher and Nagumo equations) and on a nonautonomous logistic population model with a variable carrying capacity. Finally, in Section 5, we conclude with the strong maximum principle. In the linear case f = 0, the weak maximum principle was already proved in [29, Theorem 4.7], but the strong maximum principle is new even for linear equations.
2 Local existence and uniqueness of solutions
In this section, we study the local existence and global uniqueness of solutions to the initial-value problem

u^Δ(x, t) = a u(x+1, t) + b u(x−1, t) − (a+b) u(x, t) + f(u(x, t), x, t), x ∈ ℤ, t ∈ [t₀, ∞)_𝕋,
u(x, t₀) = u⁰(x), x ∈ ℤ, (2.1)

where u⁰ = {u⁰(x)}_{x∈ℤ} is a bounded real sequence, 𝕋 is a time scale, and t₀ ∈ 𝕋. We use the notation [t₀, ∞)_𝕋 = [t₀, ∞) ∩ 𝕋, and denote by ℓ∞(ℤ) the Banach space of bounded real sequences with the supremum norm.
We impose the following conditions on the function f:

(H1) f is bounded on each set B × ℤ × [t₀, ∞)_𝕋, where B ⊂ ℝ is bounded.

(H2) f is Lipschitz-continuous in the first variable on each set B × ℤ × [t₀, ∞)_𝕋, where B ⊂ ℝ is bounded.

(H3) For each bounded set B ⊂ ℝ and each choice of t ∈ [t₀, ∞)_𝕋 and ε > 0 there exists a δ > 0 such that if s ∈ [t₀, ∞)_𝕋 and |t − s| < δ, then |f(u, x, t) − f(u, x, s)| < ε for all u ∈ B, x ∈ ℤ.
We begin with a local existence result. Given a function U : [t₀, ∞)_𝕋 → ℓ∞(ℤ), the symbol U_x(t) denotes the x-th component of the sequence U(t); it should not be confused with the derivative of U with respect to x (which never appears in this paper).
Theorem 2.1 (Local existence).
Assume that the function satisfies (H1)–(H3). Then for each the initial-value problem (2.1) has a bounded local solution defined on , where and . The solution is obtained by letting , where is a solution of the abstract dynamic equation
with being given by
Condition (H1) guarantees that Φ indeed takes values in . Choose an arbitrary and denote
Note that if , then for all . If L is the Lipschitz constant for the function f on , we get
This means that Φ is Lipschitz-continuous in the first variable on .
Next, we observe that Φ is bounded on . Indeed, let M be the boundedness constant for the function on . For each we have for each , and consequently
Finally, we claim that Φ is continuous on . To see this, consider an arbitrary and a fixed pair . Let be the corresponding number from (H3). Then for all with and we have
which proves that Φ is continuous at the point .
By [4, Theorem 8.16], the initial-value problem
has a local solution defined on , where and . Letting , , we see that u is a solution of the initial-value problem (2.1). ∎
Note that even in the linear case the solutions of (2.1) are not unique in general (see, e.g., [29, Section 3]) and the uniqueness can be expected only in the class of bounded solutions. In the next theorem, we tackle this issue for an initial-value problem which generalizes (2.1).
Theorem 2.2 (Uniqueness).
Assume that the function φ satisfies the following conditions:
φ is bounded on each set , where is bounded.
φ is Lipschitz-continuous in the first variable on each set , where is bounded.
Then for each the initial-value problem
has at most one bounded solution .
Assume that , are two bounded solutions that do not coincide on ; let
We claim that for every . If , the statement is true. If and t is left-dense, then the statement follows from the continuity of solutions with respect to the time variable. Finally, if and t is left-scattered, then , and the statement follows from the fact that .
If t is right-scattered, then and imply , a contradiction to the definition of t. Hence, t is right-dense. Since the functions , , , are bounded, their values are contained in a bounded set . By the first assumption, there is a constant such that on . We have
(the last integral exists at least in the Henstock–Kurzweil sense; see [23, Theorem 2.3]). It follows that
i.e., the functions , are continuous on .
By the second assumption, the mapping φ is Lipschitz-continuous in the first variable on ; let L be the corresponding Lipschitz constant. Then
(the last integral exists since is continuous). Consequently, for each we have
Since t is right-dense, there is a point with and . Substituting this inequality into the previous estimate, we arrive at a contradiction. ∎
The uniqueness of bounded solutions to the initial-value problem (2.1) is now a simple consequence of the previous theorem.
Theorem 2.3 (Global uniqueness).
Assume that satisfies (H1) and (H2). Then for each the initial-value problem (2.1) has at most one bounded solution .
Hence, it is enough to verify that the two conditions in Theorem 2.2 are satisfied.
Given an arbitrary bounded set , there exists a bounded set such that implies , . Hence, the first condition in Theorem 2.2 is an immediate consequence of (H1). To verify the second condition let L be the Lipschitz constant for the function f on . Then, for each pair of sequences u, , we have
which means that φ is Lipschitz-continuous in the first variable on . ∎
3 Continuous dependence results
This section is devoted to the study of continuous dependence of solutions to abstract dynamic equations with respect to the choice of the time scale. The results are also applicable to (2.1), whose solutions (as we know from Theorem 2.1) are obtained from solutions to a certain abstract dynamic equation.
We begin by proving a continuous dependence theorem for so-called measure differential equations, i.e., integral equations with the Kurzweil–Stieltjes integral (also known as the Perron–Stieltjes integral) on the right-hand side. For readers who are not familiar with this concept, it is sufficient to know that the integral has the usual properties of linearity and additivity with respect to adjacent subintervals. The main advantage over the Riemann–Stieltjes integral is that the class of Kurzweil–Stieltjes integrable functions is much larger. For example, if g : [a, b] → ℝ has bounded variation, then the integral ∫_a^b f dg exists for each regulated function f : [a, b] → X, where X is a Banach space (see [26, Proposition 15]).
Theorem 3.1.
Let X be a Banach space and . Consider a sequence of nondecreasing left-continuous functions , , such that on . Assume that is Lipschitz-continuous in the first variable. Let , , be a sequence of functions satisfying
and . Suppose finally that the function , , is regulated. Then on .
Since and , the sequences and are necessarily bounded. Hence, there exists a constant such that
The Kurzweil–Stieltjes integral exists because is regulated and has bounded variation. Since , it follows from [22, Theorem 2.2] that
uniformly with respect to . Thus, for an arbitrary there exists an such that
Moreover, the index can be chosen in such a way that for each .
Consequently, the following inequalities hold for each and :
where L is the Lipschitz constant for the function Φ. Using Grönwall’s inequality for the Kurzweil–Stieltjes integral (see, e.g., [25, Corollary 1.43]), we get
which completes the proof. ∎
We now use the relation between measure differential equations and dynamic equations to obtain a continuous dependence theorem for the latter type of equations. Since we need to compare solutions defined on different time scales (whose intersection might be empty), we introduce the following definitions.
Consider an interval and a time scale with , . Let be given by
Each function can be extended to a function by letting
Note that coincides with x on , and is constant on each interval , where . We will refer to as the piecewise constant extension of x, see Figure 1.
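The piecewise constant extension is straightforward to implement. The sketch below is our own and assumes the usual backward-jump convention x*(t) = x(t*), where t* = max{s ∈ 𝕋 : s ≤ t}, on a finite time scale; function names are our choosing.

```python
import bisect

def piecewise_constant_extension(ts, values):
    """Extend x: T -> X, given on the sorted finite time scale `ts` with
    values `values`, to the whole interval [ts[0], ts[-1]] by
    x*(t) = x(t*), where t* = max{s in ts : s <= t}."""
    def x_star(t):
        if not ts[0] <= t <= ts[-1]:
            raise ValueError("t lies outside the interval spanned by the time scale")
        # index of the largest grid point not exceeding t
        i = bisect.bisect_right(ts, t) - 1
        return values[i]
    return x_star

# A three-point time scale {0, 0.5, 2} inside [0, 2].
x = piecewise_constant_extension([0.0, 0.5, 2.0], [1.0, 4.0, 9.0])
```

The extension coincides with x on the time scale itself and is constant on each gap, e.g. x(0.4) = x(0.0) and x(1.99) = x(0.5).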
We are now ready to prove a theorem dealing with continuous dependence of solutions to abstract dynamic equations with respect to the choice of the time scale and initial condition.
Theorem 3.2 (Continuous dependence).
Let X be a Banach space and . Consider an interval and a sequence of time scales such that and for each and on . Denote
Suppose that is continuous on its domain and Lipschitz-continuous with respect to the first variable. Let , , be a sequence of functions satisfying
and . Then the sequence of piecewise constant extensions is uniformly convergent to the piecewise constant extension on . In particular, for every there exists an such that for all , .
According to the assumptions, we have
Let be given by
Note that for each we have
Because is continuous on , its piecewise constant extension is regulated on (see [27, Lemma 4]). Moreover, its one-sided limits at each point of are elements of (note that is compact because is continuous and is compact). The function is the piecewise constant extension of the identity function from to ; therefore (again by [27, Lemma 4]), is regulated on . Consequently, the function is also regulated on , and its one-sided limits have values in . The continuity of Φ on implies that is regulated on . According to Theorem 3.1, we have on . ∎
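As a sanity check of the special case mentioned in the introduction (convergence of the discretizations (1.2) to the solution of (1.1) as the step tends to zero), one can compare Euler iterates on the discrete time scales (1/n)ℤ ∩ [0, 1] with the exact solution of the scalar test equation x′ = −x. This toy example is ours, not part of the paper's framework.

```python
import math

def euler_on_grid(n, T=1.0):
    """Forward Euler for x' = -x, x(0) = 1, on the discrete time scale
    (1/n)Z intersected with [0, T]; returns the value at t = T."""
    x, h = 1.0, 1.0 / n
    for _ in range(int(round(T * n))):
        x += h * (-x)          # the delta derivative equals -x on the grid
    return x

# Errors against the continuous solution exp(-1) shrink as the grid refines.
errors = [abs(euler_on_grid(n) - math.exp(-1.0)) for n in (10, 100, 1000)]
```

The decrease of the errors with n illustrates the uniform convergence of the piecewise constant extensions guaranteed by Theorem 3.2.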
The problem of continuous dependence of solutions to dynamic equations with respect to the choice of time scale has been studied by several authors; see, e.g., [1, 3, 13, 14, 15, 20]. Our approach is close to the one taken in  or ; it relies on the continuous dependence result for measure differential equations from Theorem 3.1, which is similar in spirit to [3, Theorem 5.1]. In this context, it seems appropriate to include a few remarks:
Although the statement of [3, Theorem 5.1] is essentially correct, the proof provided there is based on an erroneous estimate of the form , where , are certain functions whose norm is bounded by M, and , are nondecreasing.
The assumption that the Hausdorff distance between and tends to zero is never used in the proof of [3, Theorem 5.1], and can be omitted. On the other hand, the assumption that the above-mentioned integral exists is missing.
The next result shows that each time scale can be approximated by a sequence of discrete time scales in such a way that the assumptions of Theorem 3.2 are satisfied. We introduce the following notation:
Theorem 3.4.
If 𝕋 is a time scale with , there exists a sequence of discrete time scales with , , , and such that on .
Moreover, if , then ; otherwise, if , then the sequence can be chosen so that for all .
We start by proving that for each there exists a left-continuous nondecreasing step function such that , , and .
Given an , let be a partition of such that , . We begin the construction of the step function by letting . Then we proceed by induction in the backward direction and define on . At the same time, we are going to check that on these subintervals, and also ensure that whenever ; this will guarantee that .
Assume that is already defined at and we want to extend it to . We distinguish between two possibilities:
If , then, by the definition of , we have for each . Let , . Then , where the last inequality follows from the induction hypothesis.
If is nonempty, let be its supremum. Define
Note that might coincide with . In this case, we necessarily have , and therefore, by the induction hypothesis, ; this guarantees that is left-continuous at .
For each we have . Hence, there holds , which in turn means that . For each it follows from the definition of that , and therefore .
Observe that the function constructed in this way has the property that , and observe that implies .
Choosing , , we get a sequence of left-continuous nondecreasing step functions such that on . For each consider the set
Clearly, and T are elements of , and . Moreover, is finite since is a step function and therefore its graph has only finitely many intersections with the graph of the identity function. Thus, is a discrete time scale. It follows from the definition of that , and therefore on .
To prove the final part of the theorem, we distinguish between two cases:
Assume that . Let , and construct a sequence of points using the recursive formula
Since the graininess of never exceeds , the set whose supremum is being considered is never empty. Also, note that (otherwise, the point would have been chosen directly after ). Thus, the recursive procedure always terminates by reaching the point for some .
In the construction of the function described at the beginning of this proof, we can always assume that the points are among . The construction then guarantees that for each . Consequently, the points are contained in all of the time scales , , and
On the other hand, since , we have , which in turn means that .
Assume that . If μ is the graininess function of an arbitrary time scale with and , observe that if , and if . Hence, we have
Since on , the Moore–Osgood theorem implies that on , and therefore
4 Weak maximum principle and global existence
A natural task in the analysis of diffusion-type equations is to establish maximum principles. Given an initial condition u⁰ ∈ ℓ∞(ℤ), let

m = inf_{x∈ℤ} u⁰(x),  M = sup_{x∈ℤ} u⁰(x).
We introduce the following conditions, which will be useful for our purposes:
(H4) a, b ∈ ℝ are such that a ≥ 0 and b ≥ 0.

(H5) μ(t)(a + b) ≤ 1 for all t ∈ [t₀, ∞)_𝕋.
(H6) There exist r, R ∈ ℝ such that r ≤ m, M ≤ R, and one of the following statements holds:
and for all , .
for all , , .
Let us notice the following:
If (H4)–(H5) are not satisfied, then the maximum principle does not hold even in the linear case with f = 0; see [29, Section 4].
If (H5) holds, there exists a function f satisfying (H6); indeed, the linear functions
have identical nonpositive slopes, and the constant term of is less than or equal to the constant term of . If or , then (H6) is equivalent to for all , and . Finally, if and , there does not exist any function satisfying (H6).
If (H6) holds in the continuous case 𝕋 = ℝ, the following lemma shows that (H6) is also satisfied for all sufficiently fine time scales (specifically, for almost all of the discrete approximating time scales from Theorem 3.4).
Assume that and (H2) and (H6) hold. Then there exists such that for all the following inequalities hold:
Let be the Lipschitz constant for the function f on the set . Then for all , and we obtain
Since , the two inequalities in (4.1) will be satisfied if , i.e., for all . ∎
The following lemma represents a weak maximum principle for time scales containing no right-dense points; it will be a key tool in the proof of the general weak maximum principle.
Assume that does not contain any right-dense points, conditions (H4)–(H6) hold and is a solution of (2.1) with . Then
We show the statement via the induction principle [4, Theorem 1.7] in the variable t. For a fixed we have to distinguish among three cases:
For we obtain from the definitions of m and M and from (H6) that
Let be left-dense and assume that for all and . Then the continuity of the function on implies
Let be right-scattered, i.e., necessarily , and
We have to show that
Notice that from (H5) and from the fact that we get
Consequently, (H6) yields
for each . The former inequality in (4.4) can be shown in a similar way.
We do not have to consider the case when t is right-dense since does not contain any such point. Therefore, the induction principle yields that (4.2) holds for all , . ∎
We now proceed to the general weak maximum principle for (2.1), where is an arbitrary time scale (i.e., allowing right-dense points). The basic idea of the proof is to use the continuous dependence results from Theorems 3.2 and 3.4 to approximate the solution of (2.1) on any time scale by solutions of (2.1) defined on discrete time scales, for which we can apply Lemma 4.3.
Theorem 4.4 (Weak maximum principle).
Assume that (H1)–(H6) hold. If is a bounded solution of (2.1), then
where is given by
According to Theorem 3.4, there exists a sequence of discrete time scales such that , , , and . Moreover, we have either and , or for all . In any case, using (H5), we get the existence of an such that
If , it follows from Lemma 4.2 that can be chosen in such a way that the inequalities
hold for each . If , the same inequalities hold for each because of (H6) and the fact that .
Therefore, because are discrete time scales, Lemma 4.3 yields that the corresponding solutions of (2.1) satisfy
i.e., for we have
Since the solution U is bounded, there is an such that for each . Let
As in the proof of Theorem 2.1, one can show that the restriction of the mapping Φ to is continuous on its domain and Lipschitz-continuous in the first variable. Therefore, if we let , the assumptions of Theorem 3.2 are satisfied (recall that for all and from (4.7), and for all immediately from the definition of ), and hence on .
From the definition of the piecewise constant extension and from (4.7) it is obvious that
Since on , inequalities (4.8) imply
In particular, we must have
which proves that (4.6) holds. ∎
In connection with the previous theorem, we point out the following facts:
The classical maximum principle guarantees that m ≤ u ≤ M, i.e., it corresponds to the case when r = m and R = M. However, for this choice of r and R, condition (H6) need not be satisfied. Choosing r ≤ m and R ≥ M, we can soften (H6) and obtain the weaker estimate r ≤ u ≤ R.
An examination of the proofs of Lemma 4.3 and Theorem 4.4 reveals that if we are interested only in the upper bound , it is sufficient to assume that . Symmetrically, to get the lower bound , it is enough to suppose that .
As an application of the weak maximum principle, we obtain the following global existence theorem. Since we consider a general class of nonlinearities f, the result is new even in the special case .
Theorem 4.6 (Global existence).
If and (H1)–(H6) hold, then (2.1) has a unique bounded solution .
Moreover, the solution depends continuously on in the following sense: For every there exists a such that if , for all , and , then the unique bounded solution of (2.1) corresponding to the initial condition satisfies for all , .
with being given by
Thus, it is enough to prove that (4.9) has a solution on the whole interval .
Let be the set of all such that (4.9) has a solution on , and denote . By Theorem 2.1, we have . Let us prove that . The statement is obvious if is a left-scattered maximum of ; therefore, we can assume that is left-dense. It follows from the definition of that (4.9) has a solution U defined on . According to the weak maximum principle, the solution U takes values in the bounded set . As in the proof of Theorem 2.1, one can show that Φ is continuous on its domain and Lipschitz-continuous in the first variable and bounded on ; let C be the boundedness constant for . Since U is a solution of (4.9), we have
for each . Note also that for all . Thus, the Cauchy condition for the existence of the limit is satisfied. If we extend U to by letting , we see that (4.10) holds also for . Since the mapping is continuous on , it follows that U is a solution of (4.9) on , i.e., .
If , we can use Theorem 2.1 to extend the solution U from to a larger interval. However, this contradicts the fact that . Hence, the only possibility is , and the proof of the existence is complete.
To obtain continuous dependence of the solution on the initial condition, it is enough to show the following statement: If for , in and is the unique solution of the initial-value problem
then on . Since we know that the solutions in fact take values in , the statement is an immediate consequence of Theorem 3.2 where we take for each . ∎
Let us illustrate the application of the weak maximum principle and the global existence theorem on the following special cases of (2.1).
Consider the logistic nonlinearity , , , , where is a parameter. In this case, problem (2.1) becomes a Fisher-type reaction-diffusion equation:
Obviously, the function f satisfies (H1)–(H3). Suppose that , , , and , i.e., (H4) and (H5) hold. Consider an arbitrary nonnegative initial condition , i.e., . We now distinguish between the cases and :
If , let and . Then and , i.e., (H6) holds and there exists a unique global solution u of (4.11). Moreover, the solution u satisfies for all and . In particular, nonnegative initial conditions always lead to nonnegative solutions.
If , Lemma 4.2 together with the analysis of the previous case guarantee that (H6) holds with and whenever is sufficiently small. For example, if , consider the linear functions
from (H6). We have for , i.e., the first inequality in (H6) is satisfied. The graphs of and meet at the point . Therefore, the second inequality in (H6) will be satisfied for if and only if , i.e., if and only if . The last condition is equivalent to , which holds if (note that ). Under these assumptions, condition (H6) holds and there exists a unique bounded global solution u of (4.11). Moreover, the solution u satisfies for all and .
Consider the so-called bistable nonlinearity , , , , where . In this case, problem (2.1) becomes a Nagumo-type reaction-diffusion equation:
Obviously, the function f satisfies (H1)–(H3). Suppose that , , , and , i.e., (H4) and (H5) hold. Consider an arbitrary initial condition . Again, we distinguish between the cases and :
If , let
Then and , i.e., (H6) holds and there exists a unique bounded global solution u of (4.12). Moreover, the solution u satisfies for all and . In particular, nonnegative/nonpositive initial conditions always lead to nonnegative/nonpositive solutions.
If , Lemma 4.2 together with the analysis of the previous case guarantee that (H6) holds whenever is sufficiently small. For example, if , one can follow the computations from [31, Section 8] to conclude that there exists a unique global solution u of (4.12) satisfying
We have no a priori bounds for .
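The bistable case can also be checked numerically. The sketch below is our own and assumes the standard Nagumo normalization f(u) = u(1 − u)(u − α) with α ∈ (0, 1), which may differ in scaling from the nonlinearity used above.

```python
import numpy as np

def nagumo_step(u, a, b, alpha, eps):
    """Explicit Euler step for the lattice Nagumo equation with the standard
    bistable reaction f(u) = u*(1-u)*(u-alpha) (an assumed normalization),
    on a finite window with constant boundary extension."""
    right = np.empty_like(u)
    right[:-1], right[-1] = u[1:], u[-1]
    left = np.empty_like(u)
    left[1:], left[0] = u[:-1], u[0]
    reaction = u * (1.0 - u) * (u - alpha)
    return u + eps * (a * right + b * left - (a + b) * u + reaction)

# Sub-threshold initial data (below alpha) decays toward the stable state 0.
u = np.full(21, 0.1)
for _ in range(200):
    u = nagumo_step(u, a=0.25, b=0.25, alpha=0.25, eps=0.1)
```

The iterates stay in [0, 1], and the sub-threshold state decays toward 0, matching the bistable picture and the sign structure required by (H6).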
Consider the nonautonomous nonlinearity , , , , where and . In this case, problem (2.1) has the form
This equation can be interpreted as the logistic population model where the carrying capacity d depends on position and time. Assume that d has the following properties:
d is bounded.
For each choice of and there exists a such that if , then for all .
Then the function f satisfies (H1)–(H3). Indeed, let D be the boundedness constant for . If is bounded, it is contained in a ball of radius ρ centered at the origin. Consequently, for all , , , we get the estimates
which imply that (H1)–(H3) hold.
As an example, let us mention the model of population dynamics with a shifting habitat, which was described by Hu and Li in [17]. There, the authors considered problem (4.13) with , , (i.e., symmetric diffusion), and , where and is continuous, nondecreasing, and bounded. It follows that e is uniformly continuous on : Given an , there exists a such that implies . Thus, we get
whenever and ; this shows that d satisfies our assumptions. (We remark that some of the results presented in [17] can be found in our earlier paper [28]. In particular, the fundamental solution of the linear lattice diffusion equation was derived in [28, Example 3.1], and [17, Corollary 2.1] is a consequence of our superposition principle from [28, Theorem 2.2].)
Another simple example is obtained by letting , where is a continuous periodic function; this choice corresponds to a population model with a periodically changing habitat. Since e is necessarily bounded and uniformly continuous on , it is obvious that d satisfies our assumptions.
Suppose now that , , , and , i.e., (H4) and (H5) hold. For simplicity, let us restrict ourselves to the case when d is a positive function, and let
Consider an arbitrary nonnegative initial condition , i.e., . Take and . Then and for all and . This means that (H6) holds if , or (by Lemma 4.2) if is positive and sufficiently small. In these cases, problem (4.13) possesses a unique global solution u, and for all and .
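A quick numerical experiment with a periodically changing habitat is instructive. This illustration is our own and assumes the concrete logistic form f(u, x, t) = u(d(x, t) − u), which is only one possible reading of the nonlinearity above.

```python
import numpy as np

def shifting_logistic_step(u, x, t, a, b, eps, d):
    """Explicit Euler step for the lattice logistic model with a position-
    and time-dependent carrying capacity d(x, t); the reaction u*(d - u)
    is an assumed concrete form of the nonlinearity."""
    right = np.empty_like(u)
    right[:-1], right[-1] = u[1:], u[-1]
    left = np.empty_like(u)
    left[1:], left[0] = u[:-1], u[0]
    return u + eps * (a * right + b * left - (a + b) * u + u * (d(x, t) - u))

# Periodically changing habitat: d oscillates between 0.5 and 1.
d = lambda x, t: 0.75 + 0.25 * np.sin(0.1 * x + t)
x = np.arange(21)
u = np.full(21, 0.2)
for k in range(300):
    u = shifting_logistic_step(u, x, 0.1 * k, a=0.25, b=0.25, eps=0.1, d=d)
```

The population stays positive, never exceeds the upper habitat bound 1, and grows from the initial level toward the oscillating carrying capacity, in agreement with the weak maximum principle.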
5 Strong maximum principle
In the rest of the paper, we focus on the strong maximum principle for (2.1). We need the following stronger versions of (H4)–(H6):
(H4′) a, b ∈ ℝ are such that a > 0 and b > 0.
There exist such that , and the following statements hold for all and :
If , then
If , then
The next lemma analyzes the situation when a solution of (2.1) attains its maximum at a left-scattered point.
Assume that (H1), (H2), (H3), (H4′), (H5′), and (H6′) hold, and is a bounded solution of (2.1). If for some and a left-scattered point , then for each .
We consider the case when ; the case can be treated in a similar way. Denote . We have
By the weak maximum principle (which holds because (H4′)–(H6′) imply (H4)–(H6)), the values of u cannot exceed R. If at least one of the values , is smaller than R and , then
which contradicts the fact that . If , then
which is a contradiction again. Thus, the only possibility is that
as desired. ∎
We now turn our attention to the case when the maximum is attained at a left-dense point.
Assume that (H1), (H2), (H3), (H4′), (H5′), and (H6′) hold, and is a bounded solution of (2.1). If for some and a left-dense point , then for all and .
We consider the case when ; the case can be treated in a similar way. We begin by proving that
Assume that there exists a such that . Let be the Lipschitz constant for f on the set . Choose a partition such that and for each we have either or . We will use induction with respect to i to show that for each ; this will be a contradiction to the fact that .
For , we know that . By the weak maximum principle (which holds because (H4′)–(H6′) imply (H4)–(H6)), the values of u cannot exceed R. If is such that , then the induction hypothesis and Lemma 5.1 imply that . Otherwise, we have . For each we get
Notice that for all . Therefore, Grönwall’s inequality [4, Theorem 6.1] yields
which completes the proof by induction and confirms that (5.1) holds.
Let us prove that for all . Assume that there exists a such that at least one of the values is smaller than R. The fact that is a constant function on implies that (note that if , then t is necessarily left-dense). On the other hand,
i.e., , a contradiction.
Once we know that for all , it follows by induction with respect to that for all and . ∎
With the help of the previous two lemmas, we derive the strong maximum principle.
Theorem 5.3 (Strong maximum principle).
Assume that (H1), (H2), (H3), (H4′), (H5′), and (H6′) hold with and is a bounded solution of (2.1). If for some and , then the following statements hold:
If contains only isolated points, i.e., for some , and
then for all .
Otherwise, if contains a point which is not isolated, then u is constant on .
In order to prevent any confusion, we emphasize that whether a point is isolated or not is understood with respect to the time scale interval , not the entire time scale 𝕋. In other words, the statement distinguishes between the cases in which the interval is a finite set (part (a)) or at least countable (part (b)).
Proof of Theorem 5.3.
We consider the case when ; the case can be treated in a similar way. We prove the statement by analyzing two different cases: Case (1): Let there be a left-dense point in . Denote
and . Given the definition of the supremum and the fact that is a closed set, we obtain . To show that is left-dense let us assume by contradiction that is left-scattered. Thus, and immediately from the definition of the supremum we get a contradiction. From the proofs of Lemmas 5.1 and 5.2 we obtain that for all , and particularly . Furthermore, since is left-dense, Lemma 5.2 yields that
Case (2): Let us assume that does not contain any left-dense point. Subcase (i): If does not contain any right-dense point, i.e., contains only isolated points, then part (a) of the theorem follows immediately from Lemma 5.1. Subcase (ii): Let there exist a right-dense point in . Denote
and . From the fact that is left-scattered and from the definition of the supremum we obtain . Moreover, since is closed, there is . Further, we show that is right-dense as well. Indeed, let us assume that is right-scattered, i.e., . Then is an unattained supremum of and there exists a sequence such that . This would imply that is left-dense, a contradiction. Thus, is right-dense.
From the definition of , the sequence of predecessors of , namely
is well-defined and satisfies . Let us assume that is arbitrary but fixed, i.e., or for some . We consider the case ; the other case is similar. Lemma 5.1 implies that for all there is
Then the continuity of the function yields that
and since is arbitrary, there is for all .
Now we prove that for and . We use the backward induction principle in the variable t (see [4, Theorem 1.7 and Remark 1.8]):
Above we have shown that for there is for all .
Let be left-scattered and for all . Then Lemma 5.1 immediately implies that for all .
Let be right-dense and for all and . Then again from the continuity of the functions we obtain
We do not have to consider the case when is left-dense, since we assume that does not contain any such point.
The backward induction principle implies that for all and .
Finally, it remains to prove that for and . Since for all , there is and, analogously to above, we can use Theorem 4.4 (weak maximum principle) to show that
Assume that (H1), (H2), (H3), (H4′), (H5′), and (H6′) hold with . Suppose that is a bounded solution of (2.1). If there is a point that is not isolated and if the initial condition is not constant, then
Assume by contradiction that there exist , such that . Since and is not isolated, part (b) of Theorem 5.3 yields that u is constant on , a contradiction to the assumption that is not constant. ∎
The following remarks explain why the original conditions (H4)–(H6) are not sufficient to establish the strong maximum principle and had to be replaced by their stronger counterparts (H4′)–(H6′).
(H4) is too weak for the strong maximum principle; we need the constants a and b to be strictly positive. Indeed, let us consider the linear transport equation
Thus, the strong maximum principle does not hold.
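A concrete instance of this counterexample (with constants of our choosing): for a = 1, b = 0 and 𝕋 = ℤ, the equation reduces to the shift u(x, t+1) = u(x+1, t), so a compactly supported initial condition keeps attaining its maximum while never becoming constant.

```python
def transport_step(u):
    """One step of u(x, t+1) = u(x+1, t): the lattice transport equation
    with a = 1, b = 0 on the time scale Z (graininess 1), stored as a
    dict indexed by lattice sites; sites outside the support are zero."""
    return {x: u.get(x + 1, 0.0) for x in range(min(u) - 1, max(u) + 1)}

u = {x: 0.0 for x in range(-5, 6)}
u[0] = 1.0                      # maximum attained at x = 0, t = 0
v = transport_step(u)           # maximum now attained at x = -1
```

The maximum value 1 is attained at every time step, yet the solution is never constant, so the conclusion of the strong maximum principle fails when b = 0.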
To see that (H5) does not suffice, consider the time scale and the following linear equation ():
which corresponds to (2.1) with , and . This equation holds if and only if
For the initial condition
which violates the strong maximum principle.
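One concrete normalization of this phenomenon (again our choice of constants): on 𝕋 = ℤ with a = b = 1/2, so that μ(t)(a + b) = 1, the scheme averages the two neighbors, and an alternating initial pattern merely flips back and forth.

```python
import numpy as np

def averaging_step(u):
    """u(x, t+1) = (u(x+1, t) + u(x-1, t)) / 2: the lattice diffusion
    scheme with a = b = 1/2 on the time scale Z, i.e. mu(t)*(a+b) = 1.
    Periodic boundary conditions keep the example finite."""
    return 0.5 * (np.roll(u, -1) + np.roll(u, 1))

u0 = np.array([1.0, 0.0] * 5)   # alternating pattern on a cycle of length 10
u1 = averaging_step(u0)         # the pattern is inverted
u2 = averaging_step(u1)         # and restored
```

The maximum 1 is attained at every time, yet the solution only oscillates between two non-constant patterns, which is incompatible with the strong maximum principle; under the strict graininess condition this cannot happen.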
M. Bohner, M. Federson and J. G. Mesquita, Continuous dependence for impulsive functional dynamic equations involving variable time scales, Appl. Math. Comput. 221 (2013), 383–393.
M. Bohner and A. Peterson, Dynamic Equations on Time Scales: An Introduction with Applications, Birkhäuser, Boston, 2001.
S.-N. Chow, Lattice dynamical systems, Dynamical Systems, Lecture Notes in Math. 1822, Springer, Berlin (2003), 1–102.
M. Federson, J. G. Mesquita and A. Slavík, Basic results for functional differential and dynamic equations involving impulses, Math. Nachr. 286 (2013), no. 2–3, 181–204.
M. Friesl, A. Slavík and P. Stehlík, Discrete-space partial dynamic equations on time scales and applications to stochastic processes, Appl. Math. Lett. 37 (2014), 86–90.
B. M. Garay, S. Hilger and P. E. Kloeden, Continuous dependence in time scale dynamics, Proceedings of the Sixth International Conference on Difference Equations (Augsburg 2001), CRC Press, Boca Raton (2004), 279–287.
N. T. Ha, N. H. Du, L. C. Loi and D. D. Thuan, On the convergence of solutions to dynamic equations on time scales, Qual. Theory Dyn. Syst. (2015), DOI 10.1007/s12346-015-0166-8.
P. E. Kloeden, A Gronwall-like inequality and continuous dependence on time scales, Nonlinear Analysis and Applications: To V. Lakshmikantham on his 80th Birthday, Kluwer Academic, Dordrecht (2003), 645–659.
J. Mallet-Paret, Traveling waves in spatially discrete dynamical systems of diffusive type, Dynamical Systems, Lecture Notes in Math. 1822, Springer, Berlin (2003), 231–298.
G. A. Monteiro and M. Tvrdý, Generalized linear differential equations in a Banach space: Continuous dependence on a parameter, Discrete Contin. Dyn. Syst. 33 (2013), no. 1, 283–303.
Š. Schwabik, Generalized Ordinary Differential Equations, World Scientific, River Edge, 1992.
Š. Schwabik, Abstract Perron–Stieltjes integral, Math. Bohem. 121 (1996), 425–447.
V. Volpert, Elliptic Partial Differential Equations. Volume 2: Reaction-Diffusion Equations, Springer Monogr. Math. 104, Springer, Basel, 2014.
Published Online: 2017-03-17
All three authors acknowledge the support by the Czech Science Foundation, grant number GA15-07690S.
Citation Information: Advances in Nonlinear Analysis, Volume 8, Issue 1, Pages 303–322, ISSN (Online) 2191-950X, ISSN (Print) 2191-9496, DOI: https://doi.org/10.1515/anona-2016-0116.
© 2019 Walter de Gruyter GmbH, Berlin/Boston. This work is licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0).