

Open Access | Online ISSN 2191-950X
Volume 8, Issue 1

# Well-posedness and maximum principles for lattice reaction-diffusion equations

Antonín Slavík (corresponding author)
• Faculty of Mathematics and Physics, Charles University, Sokolovská 83, 186 75 Praha 8, Czech Republic

Petr Stehlík
• Department of Mathematics and NTIS, Faculty of Applied Sciences, University of West Bohemia, Univerzitní 8, 306 14 Plzeň, Czech Republic

Jonáš Volek
• Department of Mathematics and NTIS, Faculty of Applied Sciences, University of West Bohemia, Univerzitní 8, 306 14 Plzeň, Czech Republic

Published Online: 2017-03-17 | DOI: https://doi.org/10.1515/anona-2016-0116

## Abstract

Existence, uniqueness and continuous dependence results together with maximum principles represent key tools in the analysis of lattice reaction-diffusion equations. In this paper, we study these questions in full generality by considering nonautonomous reaction functions, possibly nonsymmetric diffusion and continuous, discrete or mixed time. First, we prove the local existence and global uniqueness of bounded solutions, as well as the continuous dependence of solutions on the underlying time structure and on initial conditions. Next, we obtain the weak maximum principle which enables us to get the global existence of solutions. Finally, we provide the strong maximum principle which exhibits an interesting dependence on the time structure. Our results are illustrated by the autonomous Fisher and Nagumo lattice equations and a nonautonomous logistic population model with a variable carrying capacity.

MSC 2010: 34A33; 34A34; 34N05; 35A01; 35B50; 35F25; 39A14; 65M12

## 1 Introduction

The classical reaction-diffusion equation ${\partial }_{t}u=k{\partial }_{xx}u+f\left(u\right)$ is a nonlinear partial differential equation frequently used to describe the evolution of numerous natural quantities (chemical concentrations, temperatures, populations, etc.). These phenomena combine local dynamics (via the reaction function f) with spatial dynamics (via diffusion). It is well known that solutions to reaction-diffusion systems can exhibit rich behavior such as the existence of traveling waves or the formation of spatial patterns [32].

Motivated by applications in biology, chemistry and kinematics [2, 10, 12, 19], various authors have considered the lattice reaction-diffusion equation (see [7, 8, 36, 37])

${\partial }_{t}u\left(x,t\right)=k\left(u\left(x+1,t\right)-2u\left(x,t\right)+u\left(x-1,t\right)\right)+f\left(u\left(x,t\right)\right),x\in ℤ,t\in \left[0,\mathrm{\infty }\right),$(1.1)

as well as the discrete reaction-diffusion equation (see [9, 8, 18])

$u\left(x,t+1\right)-u\left(x,t\right)=k\left(u\left(x+1,t\right)-2u\left(x,t\right)+u\left(x-1,t\right)\right)+f\left(u\left(x,t\right)\right),x\in ℤ,t\in {ℕ}_{0}.$(1.2)

Naturally, equations (1.1) and (1.2) are also interesting from the standpoint of numerical mathematics, since they correspond to the spatial semidiscretization and the full discretization of the original reaction-diffusion equation, respectively [18].
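As an illustration (ours, not taken from the paper), one step of the full discretization (1.2) with a general time step dt is a single explicit update on the lattice. The following minimal Python sketch iterates it for the Fisher nonlinearity f(u) = u(1 − u); the window size, step sizes, initial condition and periodic wrapping of the finite window are arbitrary choices made only for the demonstration:

```python
import numpy as np

def lattice_step(u, k, f, dt):
    """One explicit step of (1.2) with time step dt (dt = 1 recovers (1.2)
    literally); np.roll wraps the finite window periodically to stand in
    for the infinite lattice Z."""
    lap = np.roll(u, -1) - 2.0 * u + np.roll(u, 1)  # u(x+1,t) - 2u(x,t) + u(x-1,t)
    return u + dt * (k * lap + f(u))

f = lambda u: u * (1.0 - u)    # Fisher (logistic) nonlinearity
u = np.zeros(101)
u[50] = 0.5                    # localized initial condition
for _ in range(200):
    u = lattice_step(u, k=0.25, f=f, dt=0.1)
```

With these step sizes the iterates remain in [0, 1], in line with the weak maximum principles discussed later, and the initially localized mass spreads along the lattice.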

The literature dealing with equations (1.1) and (1.2) studies mainly the dynamical properties such as the asymptotic behavior [5, 33, 34], existence of traveling wave solutions [9, 8, 10, 21, 35, 36, 37] and pattern formation [6, 7, 8], in particular for specific nonlinearities (e.g., the Fisher or Nagumo equation). A growing number of studies have dealt with those questions in nonautonomous cases [17, 24]. In this paper, we study (1.1)–(1.2) with a general time- and space-dependent nonlinearity f. Our focus lies on the existence, uniqueness, continuous dependence (both on the initial condition as well as on the underlying time structure/numerical discretization), and a priori bounds in the form of weak and strong maximum principles. Note that both continuous dependence and maximum principles are key assumptions in the proofs of the existence of traveling waves [21, 35]. Our goal is to explore and describe them in full generality.

In order to consider both (1.1) and (1.2) at once and motivated by convergence issues and continuous dependence of solutions on the time discretization, we use the language of the time scale calculus [4, 16]. We do not restrict ourselves to symmetric diffusion (see the following paragraph) and consider the nonautonomous reaction-diffusion processes

${u}^{\mathrm{\Delta }}\left(x,t\right)=au\left(x+1,t\right)+bu\left(x,t\right)+cu\left(x-1,t\right)+f\left(u\left(x,t\right),x,t\right),x\in ℤ,t\in 𝕋,$(1.3)

where $a,b,c\in ℝ$, $𝕋\subseteq ℝ$ is a time scale, and the symbol ${u}^{\mathrm{\Delta }}$ denotes the delta derivative with respect to time. Our results are new even in the special cases $𝕋=ℝ$ (when ${u}^{\mathrm{\Delta }}$ becomes the partial derivative ${\partial }_{t}u$) and $𝕋=ℤ$ (when ${u}^{\mathrm{\Delta }}$ is the partial difference $u\left(x,t+1\right)-u\left(x,t\right)$).

If $a=c$ and $b=-2a$, then (1.3) becomes the symmetric lattice reaction-diffusion equation. The asymmetric case $a\ne c$, $b=-\left(a+c\right)$ corresponds to the lattice reaction-advection-diffusion equation. Next, if $c=0$ and $b=-a$, or if $a=0$ and $b=-c$, then (1.3) reduces to the lattice reaction-transport equation. For more details and other special cases see [29, Section 1].
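For orientation, the coefficient regimes listed above can be summarized by a small helper function (an illustrative sketch of ours, not from the paper; the returned labels are informal):

```python
def classify(a, b, c):
    """Classify the coefficient triple (a, b, c) in (1.3) according to
    the special cases listed above."""
    if a == c and b == -2 * a:
        # a = c, b = -2a: symmetric lattice reaction-diffusion equation
        return "symmetric reaction-diffusion"
    if (c == 0 and b == -a) or (a == 0 and b == -c):
        # one-sided coupling: lattice reaction-transport equation
        return "reaction-transport"
    if b == -(a + c):
        # a != c, b = -(a + c): lattice reaction-advection-diffusion equation
        return "reaction-advection-diffusion"
    return "general"
```

Note that the transport case must be tested before the advection-diffusion case, since c = 0, b = −a also satisfies b = −(a + c).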

In Section 2, we formulate (1.3) as an abstract nonautonomous dynamic equation and prove the local existence of solutions. In comparison with the existing literature [5, 33, 34], we do not work in the Hilbert space ${\mathrm{\ell }}^{2}\left(ℤ\right)$ or in the weighted spaces ${\mathrm{\ell }}_{\delta }^{2}\left(ℤ\right)$ but in the Banach space ${\mathrm{\ell }}^{\mathrm{\infty }}\left(ℤ\right)$; as explained in [12], this is a much more natural choice. We also prove the uniqueness of bounded solutions. In Section 3, we use techniques from the Kurzweil–Stieltjes integration theory to show the continuous dependence of solutions on the time scale (time discretization). In the special case, this implies the convergence of solutions of (1.2) to the solution of (1.1) as the time discretization step tends to zero. Following the ideas from [31] (which deals with initial-boundary-value problems on finite subsets of $ℤ$), we provide weak maximum and minimum principles in Section 4. These a priori bounds, as usual, depend strongly on the time structure. Combined with the local existence results, they enable us to prove the global existence of bounded solutions to (1.3). We illustrate our findings on the autonomous logistic and bistable nonlinearities (Fisher and Nagumo equations) and a nonautonomous logistic population model with a variable carrying capacity. Finally, in Section 5, we conclude with the strong maximum principle. In the linear case $f\equiv 0$, the weak maximum principle was already proved in [29, Theorem 4.7], but the strong maximum principle is new even for linear equations.

## 2 Local existence and uniqueness of solutions

In this section, we study the local existence and global uniqueness of solutions to the initial-value problem

${u}^{\mathrm{\Delta }}\left(x,t\right)=au\left(x+1,t\right)+bu\left(x,t\right)+cu\left(x-1,t\right)+f\left(u\left(x,t\right),x,t\right),\mathit{ }x\in ℤ,\mathit{ }t\in {\left[{t}_{0},T\right]}_{𝕋}^{\kappa },\mathit{ }u\left(x,{t}_{0}\right)={u}_{x}^{0},\mathit{ }x\in ℤ,$(2.1)

where ${\left\{{u}_{x}^{0}\right\}}_{x\in ℤ}$ is a bounded real sequence, $a,b,c\in ℝ$, $𝕋\subseteq ℝ$ is a time scale and ${t}_{0},T\in 𝕋$. We use the notation ${\left[\alpha ,\beta \right]}_{𝕋}=\left[\alpha ,\beta \right]\cap 𝕋$, $\alpha ,\beta \in ℝ$, and ${\left[\alpha ,\beta \right]}_{𝕋}^{\kappa }={\left[\alpha ,\beta \right]}_{𝕋}$ if $\beta$ is left-dense, while ${\left[\alpha ,\beta \right]}_{𝕋}^{\kappa }={\left[\alpha ,\rho \left(\beta \right)\right]}_{𝕋}$ if $\beta$ is left-scattered.
We impose the following conditions on the function $f:ℝ×ℤ×{\left[{t}_{0},T\right]}_{𝕋}\to ℝ$:

• (H1)

f is bounded on each set $B×ℤ×{\left[{t}_{0},T\right]}_{𝕋}$, where $B\subset ℝ$ is bounded.

• (H2)

f is Lipschitz-continuous in the first variable on each set $B×ℤ×{\left[{t}_{0},T\right]}_{𝕋}$, where $B\subset ℝ$ is bounded.

• (H3)

For each bounded set $B\subset ℝ$ and each choice of $\epsilon >0$ and $t\in {\left[{t}_{0},T\right]}_{𝕋}$ there exists a $\delta >0$ such that if $s\in \left(t-\delta ,t+\delta \right)\cap {\left[{t}_{0},T\right]}_{𝕋}$, then $|f\left(u,x,t\right)-f\left(u,x,s\right)|<\epsilon$ for all $u\in B$, $x\in ℤ$.

We begin with a local existence result. Given a function $U:𝕋\to {\mathrm{\ell }}^{\mathrm{\infty }}\left(ℤ\right)$, the symbol $U{\left(t\right)}_{x}$ denotes the x-th component of the sequence $U\left(t\right)$, and should not be confused with the derivative of U with respect to x (which never appears in this paper).

#### Theorem 2.1 (Local existence).

Assume that the function $f\mathrm{:}\mathrm{R}\mathrm{×}\mathrm{Z}\mathrm{×}{\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}T\mathrm{\right]}}_{\mathrm{T}}\mathrm{\to }\mathrm{R}$ satisfies (H1)(H3). Then for each ${u}^{\mathrm{0}}\mathrm{\in }{\mathrm{\ell }}^{\mathrm{\infty }}\mathit{}\mathrm{\left(}\mathrm{Z}\mathrm{\right)}$ the initial-value problem (2.1) has a bounded local solution defined on $\mathrm{Z}\mathrm{×}{\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}{t}_{\mathrm{0}}\mathrm{+}\delta \mathrm{\right]}}_{\mathrm{T}}$, where $\delta \mathrm{>}\mathrm{0}$ and $\delta \mathrm{\ge }\mu \mathit{}\mathrm{\left(}{t}_{\mathrm{0}}\mathrm{\right)}$. The solution is obtained by letting $u\mathit{}\mathrm{\left(}x\mathrm{,}t\mathrm{\right)}\mathrm{=}U\mathit{}{\mathrm{\left(}t\mathrm{\right)}}_{x}$, where $U\mathrm{:}{\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}{t}_{\mathrm{0}}\mathrm{+}\delta \mathrm{\right]}}_{\mathrm{T}}\mathrm{\to }{\mathrm{\ell }}^{\mathrm{\infty }}\mathit{}\mathrm{\left(}\mathrm{Z}\mathrm{\right)}$ is a solution of the abstract dynamic equation

${U}^{\mathrm{\Delta }}\left(t\right)=\mathrm{\Phi }\left(U\left(t\right),t\right),U\left({t}_{0}\right)={u}^{0},$

with $\mathrm{\Phi }\mathrm{:}{\mathrm{\ell }}^{\mathrm{\infty }}\mathit{}\mathrm{\left(}\mathrm{Z}\mathrm{\right)}\mathrm{×}{\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}T\mathrm{\right]}}_{\mathrm{T}}\mathrm{\to }{\mathrm{\ell }}^{\mathrm{\infty }}\mathit{}\mathrm{\left(}\mathrm{Z}\mathrm{\right)}$ being given by

$\mathrm{\Phi }\left({\left\{{u}_{x}\right\}}_{x\in ℤ},t\right)={\left\{a{u}_{x+1}+b{u}_{x}+c{u}_{x-1}+f\left({u}_{x},x,t\right)\right\}}_{x\in ℤ}.$

#### Proof.

Condition (H1) guarantees that Φ indeed takes values in ${\mathrm{\ell }}^{\mathrm{\infty }}\left(ℤ\right)$. Choose an arbitrary $\rho >0$ and denote

$\mathcal{ℬ}=\left\{u\in {\mathrm{\ell }}^{\mathrm{\infty }}\left(ℤ\right):{\parallel u-{u}^{0}\parallel }_{\mathrm{\infty }}\le \rho \right\}\mathit{ }\text{and}\mathit{ }B=\left[\underset{x\in ℤ}{inf}{u}_{x}^{0}-\rho ,\underset{x\in ℤ}{sup}{u}_{x}^{0}+\rho \right]\subset ℝ.$

Note that if $u,v\in \mathcal{ℬ}$, then ${u}_{x},{v}_{x}\in B$ for all $x\in ℤ$. If L is the Lipschitz constant for the function f on $B×ℤ×{\left[{t}_{0},T\right]}_{𝕋}$, we get

${\parallel \mathrm{\Phi }\left(u,t\right)-\mathrm{\Phi }\left(v,t\right)\parallel }_{\mathrm{\infty }}$$\le {\parallel a{\left\{{u}_{x+1}-{v}_{x+1}\right\}}_{x\in ℤ}\parallel }_{\mathrm{\infty }}+{\parallel b{\left\{{u}_{x}-{v}_{x}\right\}}_{x\in ℤ}\parallel }_{\mathrm{\infty }}+{\parallel c{\left\{{u}_{x-1}-{v}_{x-1}\right\}}_{x\in ℤ}\parallel }_{\mathrm{\infty }}+{\parallel {\left\{f\left({u}_{x},x,t\right)-f\left({v}_{x},x,t\right)\right\}}_{x\in ℤ}\parallel }_{\mathrm{\infty }}$$\le \left(|a|+|b|+|c|\right){\parallel u-v\parallel }_{\mathrm{\infty }}+L{\parallel u-v\parallel }_{\mathrm{\infty }}.$

This means that Φ is Lipschitz-continuous in the first variable on $\mathcal{ℬ}×{\left[{t}_{0},T\right]}_{𝕋}$.

Next, we observe that Φ is bounded on $\mathcal{ℬ}×{\left[{t}_{0},T\right]}_{𝕋}$. Indeed, let M be an upper bound for $|f|$ on $B×ℤ×{\left[{t}_{0},T\right]}_{𝕋}$. For each $u\in \mathcal{ℬ}$ we have ${u}_{x}\in B$ for all $x\in ℤ$, and consequently

${\parallel \mathrm{\Phi }\left(u,t\right)\parallel }_{\mathrm{\infty }}\le {\parallel a{\left\{{u}_{x+1}\right\}}_{x\in ℤ}\parallel }_{\mathrm{\infty }}+{\parallel b{\left\{{u}_{x}\right\}}_{x\in ℤ}\parallel }_{\mathrm{\infty }}+{\parallel c{\left\{{u}_{x-1}\right\}}_{x\in ℤ}\parallel }_{\mathrm{\infty }}+{\parallel {\left\{f\left({u}_{x},x,t\right)\right\}}_{x\in ℤ}\parallel }_{\mathrm{\infty }}$$\le \left(|a|+|b|+|c|\right){\parallel u\parallel }_{\mathrm{\infty }}+M\le \left(|a|+|b|+|c|\right)\left({\parallel {u}^{0}\parallel }_{\mathrm{\infty }}+\rho \right)+M.$

Finally, we claim that Φ is continuous on $\mathcal{ℬ}×{\left[{t}_{0},T\right]}_{𝕋}$. To see this, consider an arbitrary $\epsilon >0$ and a fixed pair $\left(u,t\right)\in \mathcal{ℬ}×{\left[{t}_{0},T\right]}_{𝕋}$. Let $\delta >0$ be the corresponding number from (H3). Then for all $\left(v,s\right)\in \mathcal{ℬ}×{\left[{t}_{0},T\right]}_{𝕋}$ with ${\parallel u-v\parallel }_{\mathrm{\infty }}<\epsilon$ and $s\in \left(t-\delta ,t+\delta \right)\cap {\left[{t}_{0},T\right]}_{𝕋}$ we have

${\parallel \mathrm{\Phi }\left(u,t\right)-\mathrm{\Phi }\left(v,s\right)\parallel }_{\mathrm{\infty }}\le {\parallel \mathrm{\Phi }\left(u,t\right)-\mathrm{\Phi }\left(v,t\right)\parallel }_{\mathrm{\infty }}+{\parallel \mathrm{\Phi }\left(v,t\right)-\mathrm{\Phi }\left(v,s\right)\parallel }_{\mathrm{\infty }}$$\le \left(|a|+|b|+|c|+L\right){\parallel u-v\parallel }_{\mathrm{\infty }}+{\parallel {\left\{f\left({v}_{x},x,t\right)-f\left({v}_{x},x,s\right)\right\}}_{x\in ℤ}\parallel }_{\mathrm{\infty }}$$\le \left(|a|+|b|+|c|+L+1\right)\epsilon ,$

which proves that Φ is continuous at the point $\left(u,t\right)$.

By [4, Theorem 8.16], the initial-value problem

${U}^{\mathrm{\Delta }}\left(t\right)=\mathrm{\Phi }\left(U\left(t\right),t\right),U\left({t}_{0}\right)={u}^{0},$

has a local solution defined on ${\left[{t}_{0},{t}_{0}+\delta \right]}_{𝕋}$, where $\delta >0$ and $\delta \ge \mu \left({t}_{0}\right)$. Letting $u\left(x,t\right)=U{\left(t\right)}_{x}$, $x\in ℤ$, we see that u is a solution of the initial-value problem (2.1). ∎
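The map Φ and the Lipschitz estimate from the proof can be checked numerically on a truncated lattice. The sketch below is our illustration, not the paper's: it assumes a periodic wrap of a finite window in place of ${\mathrm{\ell }}^{\mathrm{\infty }}\left(ℤ\right)$ and the nonlinearity f(u) = sin u, which is Lipschitz in u with constant L = 1:

```python
import numpy as np

def Phi(u, t, a, b, c, f):
    """Phi({u_x}, t) = {a u_{x+1} + b u_x + c u_{x-1} + f(u_x, x, t)};
    np.roll wraps the finite window periodically."""
    x = np.arange(len(u))
    return a * np.roll(u, -1) + b * u + c * np.roll(u, 1) + f(u, x, t)

rng = np.random.default_rng(0)
a, b, c = 1.0, -2.0, 1.0               # symmetric diffusion coefficients
f = lambda u, x, t: np.sin(u)          # Lipschitz in u with constant L = 1
u = rng.uniform(-1.0, 1.0, 50)
v = rng.uniform(-1.0, 1.0, 50)
lhs = np.max(np.abs(Phi(u, 0.0, a, b, c, f) - Phi(v, 0.0, a, b, c, f)))
rhs = (abs(a) + abs(b) + abs(c) + 1.0) * np.max(np.abs(u - v))
```

The proof's bound ${\parallel \mathrm{\Phi }\left(u,t\right)-\mathrm{\Phi }\left(v,t\right)\parallel }_{\mathrm{\infty }}\le \left(|a|+|b|+|c|+L\right){\parallel u-v\parallel }_{\mathrm{\infty }}$ then reads `lhs <= rhs`.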

Note that even in the linear case $f\equiv 0$ the solutions of (2.1) are not unique in general (see, e.g., [29, Section 3]) and the uniqueness can be expected only in the class of bounded solutions. In the next theorem, we tackle this issue for an initial-value problem which generalizes (2.1).

#### Theorem 2.2.

Assume that $\phi \mathrm{:}{\mathrm{\ell }}^{\mathrm{\infty }}\mathit{}\mathrm{\left(}\mathrm{Z}\mathrm{\right)}\mathrm{×}\mathrm{Z}\mathrm{×}{\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}T\mathrm{\right]}}_{\mathrm{T}}\mathrm{\to }\mathrm{R}$ satisfies the following conditions:

• (i)

φ is bounded on each set $\mathcal{ℬ}×ℤ×{\left[{t}_{0},T\right]}_{𝕋}$ , where $\mathcal{ℬ}\subset {\mathrm{\ell }}^{\mathrm{\infty }}\left(ℤ\right)$ is bounded.

• (ii)

φ is Lipschitz-continuous in the first variable on each set $\mathcal{ℬ}×ℤ×{\left[{t}_{0},T\right]}_{𝕋}$ , where $\mathcal{ℬ}\subset {\mathrm{\ell }}^{\mathrm{\infty }}\left(ℤ\right)$ is bounded.

Then for each ${u}^{\mathrm{0}}\mathrm{\in }{\mathrm{\ell }}^{\mathrm{\infty }}\mathit{}\mathrm{\left(}\mathrm{Z}\mathrm{\right)}$ the initial-value problem

${u}^{\mathrm{\Delta }}\left(x,t\right)=\phi \left({\left\{u\left(x,t\right)\right\}}_{x\in ℤ},x,t\right),u\left(x,{t}_{0}\right)={u}_{x}^{0},x\in ℤ,t\in {\left[{t}_{0},T\right]}_{𝕋}^{\kappa },$(2.2)

has at most one bounded solution $u\mathrm{:}\mathrm{Z}\mathrm{×}{\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}T\mathrm{\right]}}_{\mathrm{T}}\mathrm{\to }\mathrm{R}$.

#### Proof.

Assume that ${u}_{1}$, ${u}_{2}$ are two bounded solutions that do not coincide on $ℤ×{\left({t}_{0},T\right]}_{𝕋}$; let

$t=inf\left\{\tau \in {\left({t}_{0},T\right]}_{𝕋}:{u}_{1}\left(x,\tau \right)\ne {u}_{2}\left(x,\tau \right)\text{ for some }x\in ℤ\right\}.$
We claim that ${u}_{1}\left(x,t\right)={u}_{2}\left(x,t\right)$ for every $x\in ℤ$. If $t={t}_{0}$, the statement is true. If $t>{t}_{0}$ and t is left-dense, then the statement follows from the continuity of solutions with respect to the time variable. Finally, if $t>{t}_{0}$ and t is left-scattered, then ${u}_{1}\left(x,\rho \left(t\right)\right)={u}_{2}\left(x,\rho \left(t\right)\right)$, and the statement follows from the fact that ${u}_{1}^{\mathrm{\Delta }}\left(x,\rho \left(t\right)\right)={u}_{2}^{\mathrm{\Delta }}\left(x,\rho \left(t\right)\right)$.

If t is right-scattered, then ${u}_{1}\left(x,t\right)={u}_{2}\left(x,t\right)$ and ${u}_{1}^{\mathrm{\Delta }}\left(x,t\right)={u}_{2}^{\mathrm{\Delta }}\left(x,t\right)$ imply ${u}_{1}\left(x,\sigma \left(t\right)\right)={u}_{2}\left(x,\sigma \left(t\right)\right)$, a contradiction to the definition of t. Hence, t is right-dense. Since the functions ${U}_{i}\left(\tau \right)={\left\{{u}_{i}\left(x,\tau \right)\right\}}_{x\in ℤ}$, $i\in \left\{1,2\right\}$, $\tau \in {\left[{t}_{0},T\right]}_{𝕋}$, are bounded, their values are contained in a bounded set $\mathcal{ℬ}\subset {\mathrm{\ell }}^{\mathrm{\infty }}\left(ℤ\right)$. By the first assumption, there is a constant $M\ge 0$ such that $|\phi |\le M$ on $\mathcal{ℬ}×ℤ×{\left[{t}_{0},T\right]}_{𝕋}$. We have

${u}_{i}\left(x,{t}_{2}\right)-{u}_{i}\left(x,{t}_{1}\right)={\int }_{{t}_{1}}^{{t}_{2}}{u}_{i}^{\mathrm{\Delta }}\left(x,\tau \right)\mathrm{\Delta }\tau ={\int }_{{t}_{1}}^{{t}_{2}}\phi \left({U}_{i}\left(\tau \right),x,\tau \right)\mathrm{\Delta }\tau ,i\in \left\{1,2\right\},{t}_{1},{t}_{2}\ge {t}_{0},x\in ℤ$

(the last integral exists at least in the Henstock–Kurzweil sense; see [23, Theorem 2.3]). It follows that

$|{u}_{i}\left(x,{t}_{2}\right)-{u}_{i}\left(x,{t}_{1}\right)|\le |{t}_{2}-{t}_{1}|M,i\in \left\{1,2\right\},{t}_{1},{t}_{2}\ge {t}_{0},x\in ℤ,$

and therefore

${\parallel {U}_{i}\left({t}_{2}\right)-{U}_{i}\left({t}_{1}\right)\parallel }_{\mathrm{\infty }}\le |{t}_{2}-{t}_{1}|M,i\in \left\{1,2\right\},{t}_{1},{t}_{2}\ge {t}_{0},$

i.e., the functions ${U}_{1}$, ${U}_{2}$ are continuous on ${\left[{t}_{0},T\right]}_{𝕋}$.

By the second assumption, the mapping φ is Lipschitz-continuous in the first variable on $\mathcal{ℬ}×ℤ×{\left[{t}_{0},T\right]}_{𝕋}$; let L be the corresponding Lipschitz constant. Then

${u}_{1}\left(x,r\right)-{u}_{2}\left(x,r\right)={\int }_{t}^{r}\phi \left({U}_{1}\left(\tau \right),x,\tau \right)-\phi \left({U}_{2}\left(\tau \right),x,\tau \right)\mathrm{\Delta }\tau ,$$\mathrm{ }r\ge t,$${\parallel {U}_{1}\left(r\right)-{U}_{2}\left(r\right)\parallel }_{\mathrm{\infty }}\le {\int }_{t}^{r}L{\parallel {U}_{1}\left(\tau \right)-{U}_{2}\left(\tau \right)\parallel }_{\mathrm{\infty }}\mathrm{\Delta }\tau ,$$\mathrm{ }r\ge t$

(the last integral exists since ${U}_{1}-{U}_{2}$ is continuous). Consequently, for each $s\in {\left[t,T\right]}_{𝕋}$ we have

$\underset{\tau \in \left[t,s\right]}{sup}{\parallel {U}_{1}\left(\tau \right)-{U}_{2}\left(\tau \right)\parallel }_{\mathrm{\infty }}\le \left(s-t\right)L\underset{\tau \in \left[t,s\right]}{sup}{\parallel {U}_{1}\left(\tau \right)-{U}_{2}\left(\tau \right)\parallel }_{\mathrm{\infty }}.$

Since t is right-dense, there is a point $s\in {\left[t,T\right]}_{𝕋}$ with $s>t$ and $\left(s-t\right)L<1$. The previous estimate then forces ${sup}_{\tau \in \left[t,s\right]}{\parallel {U}_{1}\left(\tau \right)-{U}_{2}\left(\tau \right)\parallel }_{\mathrm{\infty }}=0$, i.e., ${u}_{1}$ and ${u}_{2}$ coincide on $ℤ×{\left[t,s\right]}_{𝕋}$, which contradicts the definition of t. ∎

The uniqueness of bounded solutions to the initial-value problem (2.1) is now a simple consequence of the previous theorem.

#### Theorem 2.3 (Global uniqueness).

Assume that $f\mathrm{:}\mathrm{R}\mathrm{×}\mathrm{Z}\mathrm{×}{\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}T\mathrm{\right]}}_{\mathrm{T}}\mathrm{\to }\mathrm{R}$ satisfies (H1) and (H2). Then for each ${u}^{\mathrm{0}}\mathrm{\in }{\mathrm{\ell }}^{\mathrm{\infty }}\mathit{}\mathrm{\left(}\mathrm{Z}\mathrm{\right)}$ the initial-value problem (2.1) has at most one bounded solution $u\mathrm{:}\mathrm{Z}\mathrm{×}{\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}T\mathrm{\right]}}_{\mathrm{T}}\mathrm{\to }\mathrm{R}$.

#### Proof.

Note that (2.1) is a special case of (2.2) with the function $\phi :{\mathrm{\ell }}^{\mathrm{\infty }}\left(ℤ\right)×ℤ×{\left[{t}_{0},T\right]}_{𝕋}\to ℝ$ being given by

$\phi \left({\left\{{u}_{x}\right\}}_{x\in ℤ},x,t\right)=a{u}_{x+1}+b{u}_{x}+c{u}_{x-1}+f\left({u}_{x},x,t\right).$

Hence, it is enough to verify that the two conditions in Theorem 2.2 are satisfied.

Given an arbitrary bounded set $\mathcal{ℬ}\subset {\mathrm{\ell }}^{\mathrm{\infty }}\left(ℤ\right)$, there exists a bounded set $B\subset ℝ$ such that $u\in \mathcal{ℬ}$ implies ${u}_{x}\in B$, $x\in ℤ$. Hence, the first condition in Theorem 2.2 is an immediate consequence of (H1). To verify the second condition let L be the Lipschitz constant for the function f on $B×ℤ×{\left[{t}_{0},T\right]}_{𝕋}$. Then, for each pair of sequences u, $v\in \mathcal{ℬ}\subset {\mathrm{\ell }}^{\mathrm{\infty }}\left(ℤ\right)$, we have

$\begin{array}{cc}\hfill |\phi \left(u,x,t\right)-\phi \left(v,x,t\right)|& \le \left(|a|+|b|+|c|\right)\cdot {\parallel u-v\parallel }_{\mathrm{\infty }}+|f\left({u}_{x},x,t\right)-f\left({v}_{x},x,t\right)|\hfill \\ & \le \left(|a|+|b|+|c|+L\right)\cdot {\parallel u-v\parallel }_{\mathrm{\infty }},\hfill \end{array}$

which means that φ is Lipschitz-continuous in the first variable on $\mathcal{ℬ}×ℤ×{\left[{t}_{0},T\right]}_{𝕋}$. ∎

## 3 Continuous dependence results

This section is devoted to the study of continuous dependence of solutions to abstract dynamic equations with respect to the choice of the time scale. The results are also applicable to (2.1), whose solutions (as we know from Theorem 2.1) are obtained from solutions to a certain abstract dynamic equation.

We begin by proving a continuous dependence theorem for the so-called measure differential equations, i.e., integral equations with the Kurzweil–Stieltjes integral (also known as the Perron–Stieltjes integral) on the right-hand side. For readers who are not familiar with this concept it is sufficient to know that the integral has the usual properties of linearity and additivity with respect to adjacent subintervals. The main advantage with respect to the Riemann–Stieltjes integral is that the class of Kurzweil–Stieltjes integrable functions is much larger. For example, if $g:\left[a,b\right]\to ℝ$ has bounded variation, then the integral ${\int }_{a}^{b}f\left(t\right)dg\left(t\right)$ exists for each regulated function $f:\left[a,b\right]\to X$, where X is a Banach space (see [26, Proposition 15]).

The statement as well as the proof of the next theorem are closely related to [3, Theorem 5.1]; for more details, see Remark 3.3.

#### Theorem 3.1.

Let X be a Banach space and $\mathcal{B}\mathrm{\subseteq }X$. Consider a sequence of nondecreasing left-continuous functions ${g}_{n}\mathrm{:}\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}T\mathrm{\right]}\mathrm{\to }\mathrm{R}$, $n\mathrm{\in }{\mathrm{N}}_{\mathrm{0}}$, such that ${g}_{n}\mathrm{⇉}{g}_{\mathrm{0}}$ on $\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}T\mathrm{\right]}$. Assume that $\mathrm{\Phi }\mathrm{:}\mathcal{B}\mathrm{×}\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}T\mathrm{\right]}\mathrm{\to }X$ is Lipschitz-continuous in the first variable. Let ${x}_{n}\mathrm{:}\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}T\mathrm{\right]}\mathrm{\to }\mathcal{B}$, $n\mathrm{\in }{\mathrm{N}}_{\mathrm{0}}$, be a sequence of functions satisfying

${x}_{n}\left(t\right)={x}_{n}\left({t}_{0}\right)+{\int }_{{t}_{0}}^{t}\mathrm{\Phi }\left({x}_{n}\left(s\right),s\right)d{g}_{n}\left(s\right),t\in \left[{t}_{0},T\right],n\in {ℕ}_{0},$

and ${x}_{n}\mathit{}\mathrm{\left(}{t}_{\mathrm{0}}\mathrm{\right)}\mathrm{\to }{x}_{\mathrm{0}}\mathit{}\mathrm{\left(}{t}_{\mathrm{0}}\mathrm{\right)}$. Suppose finally that the function $s\mathrm{↦}\mathrm{\Phi }\mathit{}\mathrm{\left(}{x}_{\mathrm{0}}\mathit{}\mathrm{\left(}s\mathrm{\right)}\mathrm{,}s\mathrm{\right)}$, $s\mathrm{\in }\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}T\mathrm{\right]}$, is regulated. Then ${x}_{n}\mathrm{⇉}{x}_{\mathrm{0}}$ on $\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}T\mathrm{\right]}$.

#### Proof.

Since ${g}_{n}\left({t}_{0}\right)\to {g}_{0}\left({t}_{0}\right)$ and ${g}_{n}\left(T\right)\to {g}_{0}\left(T\right)$, the sequences ${\left\{{g}_{n}\left({t}_{0}\right)\right\}}_{n=1}^{\mathrm{\infty }}$ and ${\left\{{g}_{n}\left(T\right)\right\}}_{n=1}^{\mathrm{\infty }}$ are necessarily bounded. Hence, there exists a constant $M\ge 0$ such that

$\underset{t\in \left[{t}_{0},T\right]}{\mathrm{var}}{g}_{n}\left(t\right)={g}_{n}\left(T\right)-{g}_{n}\left({t}_{0}\right)\le M,n\in ℕ.$

The Kurzweil–Stieltjes integral ${\int }_{{t}_{0}}^{T}\mathrm{\Phi }\left({x}_{0}\left(s\right),s\right)\mathrm{d}\left({g}_{n}-{g}_{0}\right)\left(s\right)$ exists because $s↦\mathrm{\Phi }\left({x}_{0}\left(s\right),s\right)$ is regulated and ${g}_{n}-{g}_{0}$ has bounded variation. Since ${g}_{n}-{g}_{0}⇉0$, it follows from [22, Theorem 2.2] that

$\underset{n\to \mathrm{\infty }}{lim}{\int }_{{t}_{0}}^{t}\mathrm{\Phi }\left({x}_{0}\left(s\right),s\right)\mathrm{d}\left({g}_{n}-{g}_{0}\right)\left(s\right)=0$

uniformly with respect to $t\in \left[{t}_{0},T\right]$. Thus, for an arbitrary $\epsilon >0$ there exists an ${n}_{0}\in ℕ$ such that

$\parallel {\int }_{{t}_{0}}^{t}\mathrm{\Phi }\left({x}_{0}\left(s\right),s\right)\mathrm{d}\left({g}_{n}-{g}_{0}\right)\left(s\right)\parallel \le \epsilon ,n\ge {n}_{0},t\in \left[{t}_{0},T\right].$

Moreover, the index ${n}_{0}$ can be chosen in such a way that $\parallel {x}_{n}\left({t}_{0}\right)-{x}_{0}\left({t}_{0}\right)\parallel \le \epsilon$ for each $n\ge {n}_{0}$.

Consequently, the following inequalities hold for each $n\ge {n}_{0}$ and $t\in \left[{t}_{0},T\right]$:

$\parallel {x}_{n}\left(t\right)-{x}_{0}\left(t\right)\parallel \le \parallel {x}_{n}\left({t}_{0}\right)-{x}_{0}\left({t}_{0}\right)\parallel +\parallel {\int }_{{t}_{0}}^{t}\mathrm{\Phi }\left({x}_{n}\left(s\right),s\right)d{g}_{n}\left(s\right)-{\int }_{{t}_{0}}^{t}\mathrm{\Phi }\left({x}_{0}\left(s\right),s\right)d{g}_{0}\left(s\right)\parallel$$\le \epsilon +\parallel {\int }_{{t}_{0}}^{t}\left(\mathrm{\Phi }\left({x}_{n}\left(s\right),s\right)-\mathrm{\Phi }\left({x}_{0}\left(s\right),s\right)\right)d{g}_{n}\left(s\right)\parallel +\parallel {\int }_{{t}_{0}}^{t}\mathrm{\Phi }\left({x}_{0}\left(s\right),s\right)\mathrm{d}\left({g}_{n}-{g}_{0}\right)\left(s\right)\parallel$$\le 2\epsilon +{\int }_{{t}_{0}}^{t}\parallel \mathrm{\Phi }\left({x}_{n}\left(s\right),s\right)-\mathrm{\Phi }\left({x}_{0}\left(s\right),s\right)\parallel d{g}_{n}\left(s\right)$$\le 2\epsilon +L{\int }_{{t}_{0}}^{t}\parallel {x}_{n}\left(s\right)-{x}_{0}\left(s\right)\parallel d{g}_{n}\left(s\right),$

where L is the Lipschitz constant for the function Φ. Using Grönwall’s inequality for the Kurzweil–Stieltjes integral (see, e.g., [25, Corollary 1.43]), we get

$\parallel {x}_{n}\left(t\right)-{x}_{0}\left(t\right)\parallel \le 2\epsilon {e}^{L\left({g}_{n}\left(t\right)-{g}_{n}\left({t}_{0}\right)\right)}\le 2\epsilon {e}^{LM},n\ge {n}_{0},t\in \left[{t}_{0},T\right],$

which completes the proof. ∎

We now use the relation between measure differential equations and dynamic equations to obtain a continuous dependence theorem for the latter type of equations. Since we need to compare solutions defined on different time scales (whose intersection might be empty), we introduce the following definitions.

Consider an interval $\left[{t}_{0},T\right]\subset ℝ$ and a time scale $𝕋$ with ${t}_{0}\in 𝕋$, $sup𝕋\ge T$. Let ${g}_{𝕋}:\left[{t}_{0},T\right]\to ℝ$ be given by

${g}_{𝕋}\left(t\right)=inf\left\{s\in {\left[{t}_{0},T\right]}_{𝕋}:s\ge t\right\},t\in \left[{t}_{0},T\right].$

Each function $x:{\left[{t}_{0},T\right]}_{𝕋}\to X$ can be extended to a function ${x}^{*}:\left[{t}_{0},T\right]\to X$ by letting

${x}^{*}\left(t\right)=x\left({g}_{𝕋}\left(t\right)\right),t\in \left[{t}_{0},T\right].$(3.1)

Note that ${x}^{*}$ coincides with x on ${\left[{t}_{0},T\right]}_{𝕋}$, and is constant on each interval $\left(u,v\right]$, where $\left(u,v\right)\cap 𝕋=\mathrm{\varnothing }$. We will refer to ${x}^{*}$ as the piecewise constant extension of x; see Figure 1.
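For a finite time scale stored as a sorted list of points, ${g}_{𝕋}$ and the piecewise constant extension (3.1) can be sketched as follows (our illustration with an arbitrary three-point time scale; valid for arguments t inside $\left[{t}_{0},T\right]$):

```python
import bisect

def g_T(t, ts):
    """g_T(t) = inf{ s in [t0, T]_T : s >= t } for a time scale given as
    a sorted list ts of points (assumes t0 <= t <= max(ts))."""
    return ts[bisect.bisect_left(ts, t)]

def extend(x, ts):
    """Piecewise constant extension x*(t) = x(g_T(t)) from (3.1)."""
    return lambda t: x(g_T(t, ts))

ts = [0.0, 0.5, 1.0]                    # the time scale {0, 1/2, 1}
x_star = extend(lambda t: t * t, ts)    # extension of x(t) = t^2
```

As in the text, `x_star` agrees with x on the time scale and is constant on the gap (0, 1/2]: for instance `x_star(0.3) == x_star(0.5) == 0.25`.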

We are now ready to prove a theorem dealing with continuous dependence of solutions to abstract dynamic equations with respect to the choice of the time scale and initial condition.

#### Theorem 3.2 (Continuous dependence).

Let X be a Banach space and $\mathcal{B}\mathrm{\subseteq }X$. Consider an interval $\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}T\mathrm{\right]}\mathrm{\subset }\mathrm{R}$ and a sequence of time scales ${\mathrm{\left\{}{\mathrm{T}}_{n}\mathrm{\right\}}}_{n\mathrm{=}\mathrm{0}}^{\mathrm{\infty }}$ such that ${t}_{\mathrm{0}}\mathrm{\in }{\mathrm{T}}_{n}$ and $T\mathrm{\in }{\mathrm{T}}_{n}$ for each $n\mathrm{\in }{\mathrm{N}}_{\mathrm{0}}$ and ${g}_{{\mathrm{T}}_{n}}\mathrm{⇉}{g}_{{\mathrm{T}}_{\mathrm{0}}}$ on $\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}T\mathrm{\right]}$. Denote

$𝕋=\overline{\bigcup _{n=0}^{\mathrm{\infty }}{𝕋}_{n}}.$

Suppose that $\mathrm{\Phi }\mathrm{:}\mathcal{B}\mathrm{×}{\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}T\mathrm{\right]}}_{\mathrm{T}}\mathrm{\to }X$ is continuous on its domain and Lipschitz-continuous with respect to the first variable. Let ${x}_{n}\mathrm{:}{\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}T\mathrm{\right]}}_{{\mathrm{T}}_{n}}\mathrm{\to }\mathcal{B}$, $n\mathrm{\in }{\mathrm{N}}_{\mathrm{0}}$, be a sequence of functions satisfying

${x}_{n}^{\mathrm{\Delta }}\left(t\right)=\mathrm{\Phi }\left({x}_{n}\left(t\right),t\right),t\in {\left[{t}_{0},T\right]}_{{𝕋}_{n}}^{\kappa },n\in {ℕ}_{0},$

and ${x}_{n}\mathit{}\mathrm{\left(}{t}_{\mathrm{0}}\mathrm{\right)}\mathrm{\to }{x}_{\mathrm{0}}\mathit{}\mathrm{\left(}{t}_{\mathrm{0}}\mathrm{\right)}$. Then the sequence of piecewise constant extensions ${\mathrm{\left\{}{x}_{n}^{\mathrm{*}}\mathrm{\right\}}}_{n\mathrm{=}\mathrm{1}}^{\mathrm{\infty }}$ is uniformly convergent to the piecewise constant extension ${x}_{\mathrm{0}}^{\mathrm{*}}$ on $\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}T\mathrm{\right]}$. In particular, for every $\epsilon \mathrm{>}\mathrm{0}$ there exists an ${n}_{\mathrm{0}}\mathrm{\in }\mathrm{N}$ such that $\mathrm{\parallel }{x}_{n}\mathit{}\mathrm{\left(}t\mathrm{\right)}\mathrm{-}{x}_{\mathrm{0}}\mathit{}\mathrm{\left(}t\mathrm{\right)}\mathrm{\parallel }\mathrm{<}\epsilon$ for all $n\mathrm{\ge }{n}_{\mathrm{0}}$, $t\mathrm{\in }{\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}T\mathrm{\right]}}_{{\mathrm{T}}_{n}}\mathrm{\cap }{\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}T\mathrm{\right]}}_{{\mathrm{T}}_{\mathrm{0}}}$.

#### Proof.

According to the assumptions, we have

${x}_{n}\left(t\right)={x}_{n}\left({t}_{0}\right)+{\int }_{{t}_{0}}^{t}\mathrm{\Phi }\left({x}_{n}\left(s\right),s\right)\mathrm{\Delta }s,t\in {\left[{t}_{0},T\right]}_{{𝕋}_{n}},n\in {ℕ}_{0}.$

For each $n\in {ℕ}_{0}$ let ${x}_{n}^{*}:\left[{t}_{0},T\right]\to X$ be the piecewise constant extension of ${x}_{n}$. Using the relation between Δ-integrals and Kurzweil–Stieltjes integrals (see [27, Theorem 5] or [11, Theorem 4.5]), we conclude that ${x}_{n}^{*}$ satisfy

${x}_{n}^{*}\left(t\right)={x}_{n}^{*}\left({t}_{0}\right)+{\int }_{{t}_{0}}^{t}\mathrm{\Phi }\left({x}_{n}^{*}\left(s\right),{g}_{{𝕋}_{n}}\left(s\right)\right)d{g}_{{𝕋}_{n}}\left(s\right),t\in \left[{t}_{0},T\right],n\in {ℕ}_{0}.$(3.2)

Let ${\mathrm{\Phi }}^{*}:\mathcal{ℬ}×\left[{t}_{0},T\right]\to X$ be given by

${\mathrm{\Phi }}^{*}\left(x,t\right)=\mathrm{\Phi }\left(x,{g}_{𝕋}\left(t\right)\right),x\in \mathcal{ℬ},t\in \left[{t}_{0},T\right].$

Note that for each $s\in {\left[{t}_{0},T\right]}_{{𝕋}_{n}}$ we have

$\mathrm{\Phi }\left({x}_{n}^{*}\left(s\right),{g}_{{𝕋}_{n}}\left(s\right)\right)=\mathrm{\Phi }\left({x}_{n}^{*}\left(s\right),s\right)=\mathrm{\Phi }\left({x}_{n}^{*}\left(s\right),{g}_{𝕋}\left(s\right)\right)={\mathrm{\Phi }}^{*}\left({x}_{n}^{*}\left(s\right),s\right).$

Thus, by [11, Theorem 5.1], the integral equation (3.2) is equivalent to

${x}_{n}^{*}\left(t\right)={x}_{n}^{*}\left({t}_{0}\right)+{\int }_{{t}_{0}}^{t}{\mathrm{\Phi }}^{*}\left({x}_{n}^{*}\left(s\right),s\right)d{g}_{{𝕋}_{n}}\left(s\right),t\in \left[{t}_{0},T\right],n\in {ℕ}_{0}.$

Because ${x}_{0}$ is continuous on ${\left[{t}_{0},T\right]}_{{𝕋}_{0}}$, its piecewise constant extension ${x}_{0}^{*}$ is regulated on $\left[{t}_{0},T\right]$ (see [27, Lemma 4]). Moreover, its one-sided limits at each point of $\left[{t}_{0},T\right]$ are elements of $\mathcal{ℬ}$ (note that ${x}_{0}^{*}\left(\left[{t}_{0},T\right]\right)={x}_{0}\left({\left[{t}_{0},T\right]}_{{𝕋}_{0}}\right)$ is compact because ${x}_{0}$ is continuous and ${\left[{t}_{0},T\right]}_{{𝕋}_{0}}$ is compact). The function ${g}_{𝕋}$ is the piecewise constant extension of the identity function from ${\left[{t}_{0},T\right]}_{𝕋}$ to $\left[{t}_{0},T\right]$; therefore (again by [27, Lemma 4]), ${g}_{𝕋}$ is regulated on $\left[{t}_{0},T\right]$. Consequently, the function $s↦\left({x}_{0}^{*}\left(s\right),{g}_{𝕋}\left(s\right)\right)$ is also regulated on $\left[{t}_{0},T\right]$, and its one-sided limits have values in $\mathcal{ℬ}×{\left[{t}_{0},T\right]}_{𝕋}$. The continuity of Φ on $\mathcal{ℬ}×{\left[{t}_{0},T\right]}_{𝕋}$ implies that $s↦\mathrm{\Phi }\left({x}_{0}^{*}\left(s\right),{g}_{𝕋}\left(s\right)\right)={\mathrm{\Phi }}^{*}\left({x}_{0}^{*}\left(s\right),s\right)$ is regulated on $\left[{t}_{0},T\right]$. According to Theorem 3.1, we have ${x}_{n}^{*}⇉{x}_{0}^{*}$ on $\left[{t}_{0},T\right]$. ∎

Figure 1

The piecewise constant extension ${x}^{*}$ (gray) of a function x (black); see (3.1).
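For a discrete time scale stored as a sorted list, the extension and the function ${g}_{𝕋}$ can be sketched in a few lines (a minimal illustration; we assume the convention ${x}^{*}\left(t\right)=x\left({g}_{𝕋}\left(t\right)\right)$ with ${g}_{𝕋}\left(t\right)=\mathrm{min}\left\{s\in 𝕋:s\ge t\right\}$, consistent with the left-continuity of these extensions, and the function names are ours):

```python
import bisect

def g(ts, t):
    """g_T(t) = min{s in T : s >= t} for a sorted finite time scale ts."""
    return ts[bisect.bisect_left(ts, t)]

def extend(ts, x, t):
    """Piecewise constant extension x*(t) = x(g_T(t)) of x : T -> R."""
    return x[g(ts, t)]

ts = [0.0, 0.5, 2.0]                    # a discrete time scale
x = {0.0: 1.0, 0.5: -1.0, 2.0: 3.0}     # values of x on the time scale
# x* is constant on (0.0, 0.5] and (0.5, 2.0], as in the gray graph of Figure 1
```

Note that ${g}_{𝕋}$ is itself the piecewise constant extension of the identity function, so `g` doubles as an implementation of it.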

#### Remark 3.3.

The problem of continuous dependence of solutions to dynamic equations with respect to the choice of time scale has been studied by several authors; see, e.g., [1, 3, 13, 14, 15, 20]. Our approach is close to the one taken in [3] or [13]; it relies on the continuous dependence result for measure differential equations from Theorem 3.1, which is similar in spirit to [3, Theorem 5.1]. In this context, it seems appropriate to include a few remarks:

• Although the statement of [3, Theorem 5.1] is essentially correct, the proof provided there is based on an erroneous estimate of the form $\parallel {\int }_{{t}_{0}}^{t}{f}_{n}d{g}_{n}-{\int }_{{t}_{0}}^{t}{f}_{n}d{g}_{0}\parallel \le {\int }_{{t}_{0}}^{T}M\mathrm{d}\left({g}_{n}-{g}_{0}\right)$, where ${f}_{n}$, ${f}_{0}$ are certain functions whose norm is bounded by M, and ${g}_{n}$, ${g}_{0}$ are nondecreasing.

• The assumption that the Hausdorff distance between ${𝕋}_{n}$ and ${𝕋}_{0}$ tends to zero is never used in the proof of [3, Theorem 5.1], and can be omitted. On the other hand, the assumption that the above-mentioned integral ${\int }_{{t}_{0}}^{T}{f}_{n}d{g}_{0}$ exists is missing.

• The result [3, Theorem 5.1] deals with measure functional differential equations; our Theorem 3.1 and its proof can be easily adapted to this type of equation.

The next result shows that each time scale can be approximated by a sequence of discrete time scales in such a way that the assumptions of Theorem 3.2 are satisfied. We introduce the following notation:

${\overline{\mu }}_{𝕋}=\underset{t\in {\left[{t}_{0},T\right)}_{𝕋}}{\mathrm{max}}\mu \left(t\right).$
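For a discrete time scale given as a sorted list, ${\overline{\mu }}_{𝕋}$ is simply the largest gap between consecutive points; a one-function sketch (name ours):

```python
def mu_bar(ts):
    """Maximal graininess of a discrete time scale given as a sorted list."""
    return max(t2 - t1 for t1, t2 in zip(ts, ts[1:]))

mu_bar([0.0, 0.5, 2.0])   # gaps are 0.5 and 1.5, so the result is 1.5
```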

#### Theorem 3.4.

If ${\mathrm{T}}_{\mathrm{0}}\mathrm{\subset }\mathrm{R}$ is a time scale with ${t}_{\mathrm{0}}\mathrm{,}T\mathrm{\in }{\mathrm{T}}_{\mathrm{0}}$, there exists a sequence of discrete time scales ${\mathrm{\left\{}{\mathrm{T}}_{n}\mathrm{\right\}}}_{n\mathrm{=}\mathrm{1}}^{\mathrm{\infty }}$ with ${\mathrm{T}}_{n}\mathrm{\subseteq }{\mathrm{T}}_{\mathrm{0}}$, $\mathrm{min}\mathit{}{\mathrm{T}}_{n}\mathrm{=}{t}_{\mathrm{0}}$, $\mathrm{max}\mathit{}{\mathrm{T}}_{n}\mathrm{=}T$, and such that ${g}_{{\mathrm{T}}_{n}}\mathrm{⇉}{g}_{{\mathrm{T}}_{\mathrm{0}}}$ on $\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}T\mathrm{\right]}$.

Moreover, if ${\overline{\mu }}_{{\mathrm{T}}_{\mathrm{0}}}\mathrm{=}\mathrm{0}$, then ${\mathrm{lim}}_{n\mathrm{\to }\mathrm{\infty }}\mathit{}{\overline{\mu }}_{{\mathrm{T}}_{n}}\mathrm{=}\mathrm{0}$; otherwise, if ${\overline{\mu }}_{{\mathrm{T}}_{\mathrm{0}}}\mathrm{>}\mathrm{0}$, then the sequence ${\mathrm{\left\{}{\mathrm{T}}_{n}\mathrm{\right\}}}_{n\mathrm{=}\mathrm{1}}^{\mathrm{\infty }}$ can be chosen so that ${\overline{\mu }}_{{\mathrm{T}}_{n}}\mathrm{=}{\overline{\mu }}_{{\mathrm{T}}_{\mathrm{0}}}$ for all $n\mathrm{\in }\mathrm{N}$.

#### Proof.

We start by proving that for each $\epsilon >0$ there exists a left-continuous nondecreasing step function ${g}_{\epsilon }:\left[{t}_{0},T\right]\to ℝ$ such that ${g}_{\epsilon }\left({t}_{0}\right)={t}_{0}$, ${g}_{\epsilon }\left(T\right)=T$, and ${\parallel {g}_{\epsilon }-{g}_{{𝕋}_{0}}\parallel }_{\mathrm{\infty }}\le \epsilon$.

Given an $\epsilon >0$, let ${t}_{0}={x}_{0}<{x}_{1}<\mathrm{\cdots }<{x}_{m}=T$ be a partition of $\left[{t}_{0},T\right]$ such that ${x}_{i}-{x}_{i-1}\le \epsilon$, $i\in \left\{1,\mathrm{\dots },m\right\}$. We begin the construction of the step function ${g}_{\epsilon }:\left[{t}_{0},T\right]\to ℝ$ by letting ${g}_{\epsilon }\left(T\right)=T$. Then we proceed by induction in the backward direction and define ${g}_{\epsilon }$ on $\left[{x}_{m-1},{x}_{m}\right),\mathrm{\dots },\left[{x}_{0},{x}_{1}\right)$. At the same time, we are going to check that ${\parallel {g}_{{𝕋}_{0}}-{g}_{\epsilon }\parallel }_{\mathrm{\infty }}\le \epsilon$ on these subintervals, and also ensure that ${g}_{\epsilon }\left({x}_{i}\right)={x}_{i}$ whenever ${x}_{i}\in {𝕋}_{0}$; this will guarantee that ${g}_{\epsilon }\left({t}_{0}\right)={t}_{0}$.

Assume that ${g}_{\epsilon }$ is already defined at ${x}_{i}$ and we want to extend it to $\left[{x}_{i-1},{x}_{i}\right)$. We distinguish between two possibilities:

• If ${𝕋}_{0}\cap \left[{x}_{i-1},{x}_{i}\right)=\mathrm{\varnothing }$, then, by the definition of ${g}_{{𝕋}_{0}}$, we have ${g}_{{𝕋}_{0}}\left(t\right)={g}_{{𝕋}_{0}}\left({x}_{i}\right)$ for each $t\in \left[{x}_{i-1},{x}_{i}\right)$. Let ${g}_{\epsilon }\left(t\right)={g}_{\epsilon }\left({x}_{i}\right)$, $t\in \left[{x}_{i-1},{x}_{i}\right)$. Then $|{g}_{\epsilon }\left(t\right)-{g}_{{𝕋}_{0}}\left(t\right)|=|{g}_{\epsilon }\left({x}_{i}\right)-{g}_{{𝕋}_{0}}\left({x}_{i}\right)|\le \epsilon$, where the last inequality follows from the induction hypothesis.

• If ${𝕋}_{0}\cap \left[{x}_{i-1},{x}_{i}\right)$ is nonempty, let ${t}_{i}$ be its supremum. Define ${g}_{\epsilon }\left(t\right)={t}_{i}$ for $t\in \left[{x}_{i-1},{t}_{i}\right]\cap \left[{x}_{i-1},{x}_{i}\right)$, and ${g}_{\epsilon }\left(t\right)={g}_{\epsilon }\left({x}_{i}\right)$ for $t\in \left({t}_{i},{x}_{i}\right)$.

Note that ${t}_{i}$ might coincide with ${x}_{i}$. In this case, we necessarily have ${x}_{i}\in {𝕋}_{0}$, and therefore, by the induction hypothesis, ${g}_{\epsilon }\left({x}_{i}\right)={x}_{i}$; this guarantees that ${g}_{\epsilon }$ is left-continuous at ${x}_{i}$.

For each $t\in \left[{x}_{i-1},{t}_{i}\right]$ we have ${x}_{i-1}\le t\le {g}_{{𝕋}_{0}}\left(t\right)\le {t}_{i}$. Hence, there holds $0\le {t}_{i}-{g}_{{𝕋}_{0}}\left(t\right)\le {t}_{i}-{x}_{i-1}\le \epsilon$, which in turn means that $|{g}_{\epsilon }\left(t\right)-{g}_{{𝕋}_{0}}\left(t\right)|\le \epsilon$. For each $t\in \left({t}_{i},{x}_{i}\right)$ it follows from the definition of ${g}_{{𝕋}_{0}}$ that ${g}_{{𝕋}_{0}}\left(t\right)={g}_{{𝕋}_{0}}\left({x}_{i}\right)$, and therefore $|{g}_{\epsilon }\left(t\right)-{g}_{{𝕋}_{0}}\left(t\right)|=|{g}_{\epsilon }\left({x}_{i}\right)-{g}_{{𝕋}_{0}}\left({x}_{i}\right)|\le \epsilon$.

Observe that the function ${g}_{\epsilon }$ constructed in this way satisfies ${g}_{\epsilon }\left(t\right)\ge t$, and that ${g}_{\epsilon }\left(t\right)=t$ implies $t\in {𝕋}_{0}$.

Choosing $\epsilon =1/n$, $n\in ℕ$, we get a sequence of left-continuous nondecreasing step functions ${\left\{{g}_{1/n}\right\}}_{n=1}^{\mathrm{\infty }}$ such that ${g}_{1/n}⇉{g}_{{𝕋}_{0}}$ on $\left[{t}_{0},T\right]$. For each $n\in ℕ$ consider the set

${𝕋}_{n}=\left\{t\in \left[{t}_{0},T\right]:{g}_{1/n}\left(t\right)=t\right\}.$

Clearly, ${t}_{0}$ and T are elements of ${𝕋}_{n}$, and ${𝕋}_{n}\subseteq {𝕋}_{0}$. Moreover, ${𝕋}_{n}$ is finite since ${g}_{1/n}$ is a step function and therefore its graph has only finitely many intersections with the graph of the identity function. Thus, ${𝕋}_{n}$ is a discrete time scale. It follows from the definition of ${𝕋}_{n}$ that ${g}_{{𝕋}_{n}}={g}_{1/n}$, and therefore ${g}_{{𝕋}_{n}}⇉{g}_{{𝕋}_{0}}$ on $\left[{t}_{0},T\right]$.
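When ${𝕋}_{0}$ is itself stored as a finite sorted set, the construction above reduces to keeping, in every partition cell of mesh at most ε, the largest point of ${𝕋}_{0}$ that the cell contains (that point is a fixed point of ${g}_{\epsilon }$). A sketch under this finiteness assumption, with names of our choosing:

```python
import bisect

def g(ts, t):
    """g_T(t) = min{s in T : s >= t} for a sorted time scale ts."""
    return ts[bisect.bisect_left(ts, t)]

def approximate(ts0, eps):
    """Discrete time scale T_n inside T_0 with sup|g_{T_n} - g_{T_0}| <= eps."""
    t0, T = ts0[0], ts0[-1]
    m = int((T - t0) / eps) + 1                  # partition of mesh <= eps
    cells = [t0 + k * (T - t0) / m for k in range(m + 1)]
    tn = {t0, T}
    for a, b in zip(cells, cells[1:]):
        pts = [s for s in ts0 if a <= s < b]     # T_0 intersected with [x_{i-1}, x_i)
        if pts:
            tn.add(max(pts))                     # keep the supremum t_i
    return sorted(tn)

ts0 = [0.0, 0.1, 0.15, 0.4, 0.45, 0.8, 1.0]
tn = approximate(ts0, 0.25)
```

On such sample data the bound ${\parallel {g}_{{𝕋}_{n}}-{g}_{{𝕋}_{0}}\parallel }_{\mathrm{\infty }}\le \epsilon$ can be verified by sampling, since both step functions are determined by the stored points.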

To prove the final part of the theorem, we distinguish between two cases:

• Assume that ${\overline{\mu }}_{{𝕋}_{0}}>0$. Let ${y}_{0}={t}_{0}$, and construct a sequence of points ${y}_{1}<\mathrm{\cdots }<{y}_{k}=T$ using the recursive formula

${y}_{i}=sup\left({y}_{i-1},{y}_{i-1}+{\overline{\mu }}_{{𝕋}_{0}}\right]\cap {\left[{t}_{0},T\right]}_{{𝕋}_{0}}.$

Since the graininess of ${𝕋}_{0}$ never exceeds ${\overline{\mu }}_{{𝕋}_{0}}$, the set whose supremum is being considered is never empty. Also, note that ${y}_{i+1}-{y}_{i-1}\ge {\overline{\mu }}_{{𝕋}_{0}}$ (otherwise, the point ${y}_{i+1}$ would have been chosen directly after ${y}_{i-1}$). Thus, the recursive procedure always terminates by reaching the point ${y}_{k}=T$ for some $k\in ℕ$.

In the construction of the function ${g}_{\epsilon }$ described at the beginning of this proof, we can always assume that the points ${y}_{0},\mathrm{\dots },{y}_{k}$ are among ${x}_{0},\mathrm{\dots },{x}_{m}$. The construction then guarantees that ${g}_{\epsilon }\left({y}_{i}\right)={y}_{i}$ for each $i\in \left\{0,\mathrm{\dots },k\right\}$. Consequently, the points ${y}_{0},\mathrm{\dots },{y}_{k}$ are contained in all of the time scales ${𝕋}_{n}$, $n\in ℕ$, and

${\overline{\mu }}_{{𝕋}_{n}}\le \underset{1\le i\le k}{\mathrm{max}}\left({y}_{i}-{y}_{i-1}\right)\le {\overline{\mu }}_{{𝕋}_{0}}.$

On the other hand, since ${𝕋}_{n}\subseteq {𝕋}_{0}$, we have ${\overline{\mu }}_{{𝕋}_{0}}\le {\overline{\mu }}_{{𝕋}_{n}}$, which in turn means that ${\overline{\mu }}_{{𝕋}_{n}}={\overline{\mu }}_{{𝕋}_{0}}$.

• Assume that ${\overline{\mu }}_{{𝕋}_{0}}=0$. If μ is the graininess function of an arbitrary time scale $𝕋$ with $\mathrm{min}𝕋={t}_{0}$ and $sup𝕋\ge T$, observe that ${g}_{𝕋}\left(t+\right)-{g}_{𝕋}\left(t\right)=\mu \left(t\right)$ if $t\in {\left[{t}_{0},T\right)}_{𝕋}$, and ${g}_{𝕋}\left(t+\right)-{g}_{𝕋}\left(t\right)=0$ if $t\in \left[{t}_{0},T\right)\setminus 𝕋$. Hence, we have

${\overline{\mu }}_{𝕋}=\underset{t\in {\left[{t}_{0},T\right)}_{𝕋}}{sup}\mu \left(t\right)=\underset{t\in \left[{t}_{0},T\right)}{sup}\left({g}_{𝕋}\left(t+\right)-{g}_{𝕋}\left(t\right)\right).$

Since ${g}_{{𝕋}_{n}}⇉{g}_{{𝕋}_{0}}$ on $\left[{t}_{0},T\right]$, the Moore–Osgood theorem implies that ${g}_{{𝕋}_{n}}\left(t+\right)-{g}_{{𝕋}_{n}}\left(t\right)⇉{g}_{{𝕋}_{0}}\left(t+\right)-{g}_{{𝕋}_{0}}\left(t\right)$ on $\left[{t}_{0},T\right)$, and therefore

$\underset{n\to \mathrm{\infty }}{lim}{\overline{\mu }}_{{𝕋}_{n}}=\underset{n\to \mathrm{\infty }}{lim}\left(\underset{t\in \left[{t}_{0},T\right)}{sup}\left({g}_{{𝕋}_{n}}\left(t+\right)-{g}_{{𝕋}_{n}}\left(t\right)\right)\right)=\underset{t\in \left[{t}_{0},T\right)}{sup}\left({g}_{{𝕋}_{0}}\left(t+\right)-{g}_{{𝕋}_{0}}\left(t\right)\right)={\overline{\mu }}_{{𝕋}_{0}}=0.\mathit{∎}$

## 4 Weak maximum principle and global existence

A natural task in the analysis of diffusion-type equations is to establish maximum principles. Given an initial condition ${u}^{0}\in {\mathrm{\ell }}^{\mathrm{\infty }}\left(ℤ\right)$, let

$m=\underset{x\in ℤ}{inf}{u}_{x}^{0},M=\underset{x\in ℤ}{sup}{u}_{x}^{0}.$

We introduce the following conditions, which will be useful for our purposes:

• (H4)

$a,b,c\in ℝ$ are such that $a,c\ge 0$, $b<0$ and $a+b+c=0$.

• (H5)

$b<0$ and ${\overline{\mu }}_{𝕋}\le -1/b$.

• (H6)

There exist $r,R\in ℝ$ such that $r\le m\le M\le R$, and one of the following statements holds:

• ${\overline{\mu }}_{𝕋}=0$ and $f\left(R,x,t\right)\le 0\le f\left(r,x,t\right)$ for all $x\in ℤ$, $t\in {\left[{t}_{0},T\right]}_{𝕋}$.

• ${\overline{\mu }}_{𝕋}>0$ and

$\frac{1+{\overline{\mu }}_{𝕋}b}{{\overline{\mu }}_{𝕋}}\left(r-u\right)\le f\left(u,x,t\right)\le \frac{1+{\overline{\mu }}_{𝕋}b}{{\overline{\mu }}_{𝕋}}\left(R-u\right)$

for all $u\in \left[r,R\right]$, $x\in ℤ$, $t\in {\left[{t}_{0},T\right]}_{𝕋}$.

#### Remark 4.1.

Let us notice the following:

• If (H4)–(H5) are not satisfied, then the maximum principle does not hold even in the linear case with $f\equiv 0$; see [29, Section 4].

• (H6) defines forbidden areas that the graph of the function $f\left(\cdot ,x,t\right)$ cannot intersect for any $x\in ℤ$, $t\in {\left[{t}_{0},T\right]}_{𝕋}$, similarly to [31] (see Figure 2).

• If (H5) holds, there exists a function f satisfying (H6); indeed, the linear functions

${\psi }_{1}\left(u\right)=\frac{1+{\overline{\mu }}_{𝕋}b}{{\overline{\mu }}_{𝕋}}\left(r-u\right)\mathit{ }\text{and}\mathit{ }{\psi }_{2}\left(u\right)=\frac{1+{\overline{\mu }}_{𝕋}b}{{\overline{\mu }}_{𝕋}}\left(R-u\right)$

have identical nonpositive slopes, and the constant term of ${\psi }_{1}$ is less than or equal to the constant term of ${\psi }_{2}$. If ${\overline{\mu }}_{𝕋}=-1/b$ or $r=R$, then (H6) is equivalent to $f\left(u,x,t\right)=0$ for all $u\in \left[r,R\right]$, $x\in ℤ$ and $t\in {\left[{t}_{0},T\right]}_{𝕋}$. Finally, if ${\overline{\mu }}_{𝕋}>-1/b$ and $r<R$, there does not exist any function satisfying (H6).
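For ${\overline{\mu }}_{𝕋}>0$, condition (H6) is a pointwise comparison of $f\left(\cdot ,x,t\right)$ with the two linear barriers ${\psi }_{1}$, ${\psi }_{2}$, so it can be checked numerically on a grid. A sketch for an autonomous nonlinearity (the helper name and grid resolution are ours; the logistic f anticipates Example 4.7 below):

```python
def satisfies_H6(f, r, R, b, mu_bar, n=1000):
    """Grid check of (H6) for mu_bar > 0: psi1(u) <= f(u) <= psi2(u) on [r, R]."""
    slope = (1 + mu_bar * b) / mu_bar
    for k in range(n + 1):
        u = r + (R - r) * k / n
        psi1, psi2 = slope * (r - u), slope * (R - u)
        if not (psi1 <= f(u) <= psi2):
            return False
    return True

lam, b = 1.0, -2.0
f = lambda u: lam * u * (1 - u)   # logistic nonlinearity, autonomous in (x, t)
# with r = 0, R = 1: (H6) holds for mu_bar = 1/3 but fails for mu_bar = 0.4
```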

Figure 2

Illustration of (H6). The values r, R are chosen so that the function $f\left(\cdot ,x,t\right)$ does not intersect the gray forbidden areas. The slope of the boundary dashed lines is determined by the values of ${\overline{\mu }}_{𝕋}$.

If (H6) holds in the continuous case ${\overline{\mu }}_{𝕋}=0$, the following lemma shows that (H6) is also satisfied for all sufficiently fine time scales (specifically, for almost all of the discrete approximating time scales ${𝕋}_{n}$ from Theorem 3.4).

#### Lemma 4.2.

Assume that ${\overline{\mu }}_{\mathrm{T}}\mathrm{=}\mathrm{0}$ and (H2) and (H6) hold. Then there exists ${\epsilon }_{\mathrm{0}}\mathrm{>}\mathrm{0}$ such that for all $\epsilon \mathrm{\in }\mathrm{\left(}\mathrm{0}\mathrm{,}{\epsilon }_{\mathrm{0}}\mathrm{\right]}$ the following inequalities hold:

$\frac{1+\epsilon b}{\epsilon }\left(r-u\right)\le f\left(u,x,t\right)\le \frac{1+\epsilon b}{\epsilon }\left(R-u\right),u\in \left[r,R\right],x\in ℤ,t\in \left[{t}_{0},T\right].$(4.1)

#### Proof.

Let $L\ge 0$ be the Lipschitz constant for the function f on the set $\left[r,R\right]×ℤ×\left[{t}_{0},T\right]$. Then for all $u\in \left[r,R\right]$, $x\in ℤ$ and $t\in \left[{t}_{0},T\right]$ we obtain

$f\left(u,x,t\right)\le f\left(u,x,t\right)-f\left(R,x,t\right)\le |f\left(u,x,t\right)-f\left(R,x,t\right)|\le L|u-R|=L\left(R-u\right),$$f\left(u,x,t\right)\ge f\left(u,x,t\right)-f\left(r,x,t\right)\ge -|f\left(u,x,t\right)-f\left(r,x,t\right)|\ge -L|u-r|=L\left(r-u\right).$

Since $L\left(r-u\right)\le f\left(u,x,t\right)\le L\left(R-u\right)$, the two inequalities in (4.1) will be satisfied if $1/\epsilon +b\ge L$, i.e., for all $\epsilon \in \left(0,1/\left(L-b\right)\right]$. ∎

The following lemma represents a weak maximum principle for time scales containing no right-dense points; it will be a key tool in the proof of the general weak maximum principle.

#### Lemma 4.3.

Assume that ${\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}T\mathrm{\right)}}_{\mathrm{T}}$ does not contain any right-dense points, conditions (H4)–(H6) hold and $u\mathrm{:}\mathrm{Z}\mathrm{×}{\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}T\mathrm{\right]}}_{\mathrm{T}}\mathrm{\to }\mathrm{R}$ is a solution of (2.1) with ${u}^{\mathrm{0}}\mathrm{\in }{\mathrm{\ell }}^{\mathrm{\infty }}\mathit{}\mathrm{\left(}\mathrm{Z}\mathrm{\right)}$. Then

$r\le u\left(x,t\right)\le R,x\in ℤ,t\in {\left[{t}_{0},T\right]}_{𝕋}.$(4.2)

#### Proof.

We show the statement via the induction principle [4, Theorem 1.7] in the variable t. For a fixed $t\in {\left[{t}_{0},T\right]}_{𝕋}$ we have to distinguish among three cases:

• For $t={t}_{0}$ we obtain from the definitions of m and M and from (H6) that

$r\le m\le u\left(x,{t}_{0}\right)\le M\le R,x\in ℤ.$

• Let $t\in {\left({t}_{0},T\right]}_{𝕋}$ be left-dense and assume that $r\le u\left(x,s\right)\le R$ for all $s\in {\left[{t}_{0},t\right)}_{𝕋}$ and $x\in ℤ$. Then the continuity of the function $u\left(x,\cdot \right)$ on ${\left[{t}_{0},T\right]}_{𝕋}$ implies

$r\le u\left(x,t\right)=\underset{s\to t-}{lim}u\left(x,s\right)\le R,x\in ℤ.$

• Let $t\in {\left[{t}_{0},T\right)}_{𝕋}$ be right-scattered, i.e., necessarily ${\overline{\mu }}_{𝕋}>0$, and

$r\le u\left(x,s\right)\le R,x\in ℤ,s\in {\left[{t}_{0},t\right]}_{𝕋}.$(4.3)

We have to show that

$r\le u\left(x,t+{\mu }_{𝕋}\left(t\right)\right)\le R,x\in ℤ.$(4.4)

Notice that from (H5) and from the fact that ${\overline{\mu }}_{𝕋}\ge {\mu }_{𝕋}\left(t\right)>0$ we get

$0\le \frac{1+{\overline{\mu }}_{𝕋}b}{{\overline{\mu }}_{𝕋}}=\frac{1}{{\overline{\mu }}_{𝕋}}+b\le \frac{1}{{\mu }_{𝕋}\left(t\right)}+b=\frac{1+{\mu }_{𝕋}\left(t\right)b}{{\mu }_{𝕋}\left(t\right)}.$

Consequently, (H6) yields

$\frac{1+{\mu }_{𝕋}\left(t\right)b}{{\mu }_{𝕋}\left(t\right)}\left(r-u\right)\le f\left(u,x,t\right)\le \frac{1+{\mu }_{𝕋}\left(t\right)b}{{\mu }_{𝕋}\left(t\right)}\left(R-u\right),u\in \left[r,R\right].$(4.5)

Let us prove the latter inequality in (4.4). Using the equation in (2.1), we obtain the estimate

$u\left(x,t+{\mu }_{𝕋}\left(t\right)\right)={\mu }_{𝕋}\left(t\right)au\left(x+1,t\right)+\left(1+{\mu }_{𝕋}\left(t\right)b\right)u\left(x,t\right)+{\mu }_{𝕋}\left(t\right)cu\left(x-1,t\right)$$+{\mu }_{𝕋}\left(t\right)f\left(u\left(x,t\right),x,t\right)$$\le {\mu }_{𝕋}\left(t\right)\left(a+c\right)R+\left(1+{\mu }_{𝕋}\left(t\right)b\right)u\left(x,t\right)+{\mu }_{𝕋}\left(t\right)f\left(u\left(x,t\right),x,t\right)$$\mathrm{ }\text{(by (H4) and (4.3))}$$=-{\mu }_{𝕋}\left(t\right)bR+\left(1+{\mu }_{𝕋}\left(t\right)b\right)u\left(x,t\right)+{\mu }_{𝕋}\left(t\right)f\left(u\left(x,t\right),x,t\right)$$\mathrm{ }\text{(by (H4))}$$\le -{\mu }_{𝕋}\left(t\right)bR+\left(1+{\mu }_{𝕋}\left(t\right)b\right)u\left(x,t\right)+\left(1+{\mu }_{𝕋}\left(t\right)b\right)\left(R-u\left(x,t\right)\right)$$\mathrm{ }\text{(by (4.3) and (4.5))}$$=R$

for each $x\in ℤ$. The former inequality in (4.4) can be shown in a similar way.

We do not have to consider the case when t is right-dense since $𝕋$ does not contain any such point. Therefore, the induction principle yields that (4.2) holds for all $x\in ℤ$, $t\in {\left[{t}_{0},T\right]}_{𝕋}$. ∎
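On a discrete time scale, the right-scattered step in the proof is exactly one step of the explicit scheme $u\left(x,t+{\mu }_{𝕋}\left(t\right)\right)={\mu }_{𝕋}\left(t\right)au\left(x+1,t\right)+\left(1+{\mu }_{𝕋}\left(t\right)b\right)u\left(x,t\right)+{\mu }_{𝕋}\left(t\right)cu\left(x-1,t\right)+{\mu }_{𝕋}\left(t\right)f\left(u\left(x,t\right),x,t\right)$. A sketch of one such step (we truncate ℤ to a periodic finite window purely to keep the example finite; the wrap-around is our simplification, not part of (2.1)):

```python
def step(u, a, b, c, mu, f):
    """One right-scattered step of (2.1) on a periodic finite window of Z."""
    n = len(u)
    return [mu * a * u[(x + 1) % n] + (1 + mu * b) * u[x]
            + mu * c * u[(x - 1) % n] + mu * f(u[x]) for x in range(n)]

a, b, c = 1.0, -2.0, 1.0    # (H4): a, c >= 0, b < 0, a + b + c = 0
mu = 0.25                   # (H5): mu <= -1/b = 0.5
f = lambda u: 0.0           # pure diffusion; (H6) holds trivially
u0 = [0.0, 1.0, 0.5, 0.2]
u1 = step(u0, a, b, c, mu, f)
# every component of u1 stays in [min(u0), max(u0)] = [0, 1]
```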

We now proceed to the general weak maximum principle for (2.1), where $𝕋$ is an arbitrary time scale (i.e., allowing right-dense points). The basic idea of the proof is to use the continuous dependence results from Theorems 3.2 and 3.4 to approximate the solution of (2.1) on any time scale by solutions of (2.1) defined on discrete time scales, for which we can apply Lemma 4.3.

#### Theorem 4.4 (Weak maximum principle).

Assume that (H1)–(H6) hold. If $u\mathrm{:}\mathrm{Z}\mathrm{×}{\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}T\mathrm{\right]}}_{\mathrm{T}}\mathrm{\to }\mathrm{R}$ is a bounded solution of (2.1), then

$r\le u\left(x,t\right)\le R,x\in ℤ,t\in {\left[{t}_{0},T\right]}_{𝕋}.$(4.6)

#### Proof.

From Theorems 2.1 and 2.3 we obtain that u is unique and that $U\left(t\right)={\left\{u\left(x,t\right)\right\}}_{x\in ℤ}$ is the unique solution of the abstract initial-value problem

${U}^{\mathrm{\Delta }}\left(t\right)=\mathrm{\Phi }\left(U\left(t\right),t\right),U\left({t}_{0}\right)={u}^{0},$

where $\mathrm{\Phi }:{\mathrm{\ell }}^{\mathrm{\infty }}\left(ℤ\right)×{\left[{t}_{0},T\right]}_{𝕋}\to {\mathrm{\ell }}^{\mathrm{\infty }}\left(ℤ\right)$ is given by

$\mathrm{\Phi }\left({\left\{{u}_{x}\right\}}_{x\in ℤ},t\right)={\left\{a{u}_{x+1}+b{u}_{x}+c{u}_{x-1}+f\left({u}_{x},x,t\right)\right\}}_{x\in ℤ}.$

According to Theorem 3.4, there exists a sequence ${\left\{{𝕋}_{n}\right\}}_{n=1}^{\mathrm{\infty }}$ of discrete time scales such that ${𝕋}_{n}\subseteq 𝕋$, $\mathrm{min}{𝕋}_{n}={t}_{0}$, $\mathrm{max}{𝕋}_{n}=T$, and ${g}_{{𝕋}_{n}}⇉{g}_{𝕋}$. Moreover, we have either ${\overline{\mu }}_{𝕋}=0$ and ${\overline{\mu }}_{{𝕋}_{n}}\to 0$, or ${\overline{\mu }}_{{𝕋}_{n}}={\overline{\mu }}_{𝕋}$ for all $n\in ℕ$. In any case, using (H5), we get the existence of an ${n}_{0}\in ℕ$ such that
${\overline{\mu }}_{{𝕋}_{n}}\le -1/b,n>{n}_{0}.$
If ${\overline{\mu }}_{𝕋}=0$, it follows from Lemma 4.2 that ${n}_{0}$ can be chosen in such a way that the inequalities
$\frac{1+{\overline{\mu }}_{{𝕋}_{n}}b}{{\overline{\mu }}_{{𝕋}_{n}}}\left(r-u\right)\le f\left(u,x,t\right)\le \frac{1+{\overline{\mu }}_{{𝕋}_{n}}b}{{\overline{\mu }}_{{𝕋}_{n}}}\left(R-u\right),u\in \left[r,R\right],x\in ℤ,t\in {\left[{t}_{0},T\right]}_{{𝕋}_{n}},$
hold for each $n>{n}_{0}$. If ${\overline{\mu }}_{𝕋}>0$, the same inequalities hold for each $n\in ℕ$ because of (H6) and the fact that ${\overline{\mu }}_{{𝕋}_{n}}={\overline{\mu }}_{𝕋}$.

Therefore, because ${𝕋}_{n}$ are discrete time scales, Lemma 4.3 yields that the corresponding solutions ${u}_{n}:ℤ×{\left[{t}_{0},T\right]}_{{𝕋}_{n}}\to ℝ$ of (2.1) satisfy
$r\le {u}_{n}\left(x,t\right)\le R,x\in ℤ,t\in {\left[{t}_{0},T\right]}_{{𝕋}_{n}},$
i.e., for ${U}_{n}\left(t\right)={\left\{{u}_{n}\left(x,t\right)\right\}}_{x\in ℤ}$ we have

$r\le {\left({U}_{n}\left(t\right)\right)}_{x}\le R,x\in ℤ,t\in {\left[{t}_{0},T\right]}_{{𝕋}_{n}}.$(4.7)

Since the solution U is bounded, there is an $S>0$ such that ${\parallel U\left(t\right)\parallel }_{\mathrm{\infty }}\le S$ for each $t\in {\left[{t}_{0},T\right]}_{𝕋}$. Let

$\mathcal{ℬ}=\left\{V\in {\mathrm{\ell }}^{\mathrm{\infty }}\left(ℤ\right):{\parallel V\parallel }_{\mathrm{\infty }}\le \mathrm{max}\left(|r|,|R|,S\right)\right\}.$

As in the proof of Theorem 2.1, one can show that the restriction of the mapping Φ to $\mathcal{ℬ}×{\left[{t}_{0},T\right]}_{𝕋}$ is continuous on its domain and Lipschitz-continuous in the first variable. Therefore, if we let ${𝕋}_{0}=𝕋$, the assumptions of Theorem 3.2 are satisfied (recall that ${U}_{n}\left(t\right)\in \mathcal{ℬ}$ for all $t\in {𝕋}_{n}$ and $n>{n}_{0}$ from (4.7), and $U\left(t\right)\in \mathcal{ℬ}$ for all $t\in 𝕋$ immediately from the definition of $\mathcal{ℬ}$), and hence ${U}_{n}^{*}⇉{U}^{*}$ on $\left[{t}_{0},T\right]$.

From the definition of the piecewise constant extension ${U}_{n}^{*}$ and from (4.7) it is obvious that

$r\le {\left({U}_{n}^{*}\left(t\right)\right)}_{x}\le R,x\in ℤ,t\in \left[{t}_{0},T\right].$(4.8)

Since ${U}_{n}^{*}⇉{U}^{*}$ on $\left[{t}_{0},T\right]$, inequalities (4.8) imply
$r\le {\left({U}^{*}\left(t\right)\right)}_{x}\le R,x\in ℤ,t\in \left[{t}_{0},T\right].$
In particular, we obtain
$r\le u\left(x,t\right)\le R,x\in ℤ,t\in {\left[{t}_{0},T\right]}_{𝕋},$
which proves that (4.6) holds. ∎

#### Remark 4.5.

In connection with the previous theorem, we point out the following facts:

• The classical maximum principle guarantees that $m\le u\left(x,t\right)\le M$, i.e., it corresponds to the case when $r=m$ and $R=M$. However, for this choice of r and R, condition (H6) need not be satisfied. Choosing $r<m$ and $R>M$, we can relax (H6) and obtain the weaker estimate $r\le u\left(x,t\right)\le R$.

• An examination of the proofs of Lemma 4.3 and Theorem 4.4 reveals that if we are interested only in the upper bound $u\left(x,t\right)\le R$, it is sufficient to assume that $a+b+c\le 0$. Symmetrically, to get the lower bound $u\left(x,t\right)\ge r$, it is enough to suppose that $a+b+c\ge 0$.

As an application of the weak maximum principle, we obtain the following global existence theorem. Since we consider a general class of nonlinearities f, the result is new even in the special case $𝕋=ℝ$.

#### Theorem 4.6 (Global existence).

If ${u}^{\mathrm{0}}\mathrm{\in }{\mathrm{\ell }}^{\mathrm{\infty }}\mathit{}\mathrm{\left(}\mathrm{Z}\mathrm{\right)}$ and (H1)–(H6) hold, then (2.1) has a unique bounded solution $u\mathrm{:}\mathrm{Z}\mathrm{×}{\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}T\mathrm{\right]}}_{\mathrm{T}}\mathrm{\to }\mathrm{R}$.

Moreover, the solution depends continuously on ${u}^{\mathrm{0}}$ in the following sense: For every $\epsilon \mathrm{>}\mathrm{0}$ there exists a $\delta \mathrm{>}\mathrm{0}$ such that if ${v}^{\mathrm{0}}\mathrm{\in }{\mathrm{\ell }}^{\mathrm{\infty }}\mathit{}\mathrm{\left(}\mathrm{Z}\mathrm{\right)}$, $r\mathrm{\le }{v}_{x}^{\mathrm{0}}\mathrm{\le }R$ for all $x\mathrm{\in }\mathrm{Z}$, and ${\mathrm{\parallel }{u}^{\mathrm{0}}\mathrm{-}{v}^{\mathrm{0}}\mathrm{\parallel }}_{\mathrm{\infty }}\mathrm{<}\delta$, then the unique bounded solution $v\mathrm{:}\mathrm{Z}\mathrm{×}{\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}T\mathrm{\right]}}_{\mathrm{T}}\mathrm{\to }\mathrm{R}$ of (2.1) corresponding to the initial condition ${v}^{\mathrm{0}}$ satisfies $\mathrm{|}u\mathit{}\mathrm{\left(}x\mathrm{,}t\mathrm{\right)}\mathrm{-}v\mathit{}\mathrm{\left(}x\mathrm{,}t\mathrm{\right)}\mathrm{|}\mathrm{<}\epsilon$ for all $x\mathrm{\in }\mathrm{Z}$, $t\mathrm{\in }{\mathrm{\left[}{t}_{\mathrm{0}}\mathrm{,}T\mathrm{\right]}}_{\mathrm{T}}$.

#### Proof.

We know from Theorems 2.1 and 2.3 that bounded solutions to (2.1) are unique, and that they correspond to solutions of the initial-value problem

${U}^{\mathrm{\Delta }}\left(t\right)=\mathrm{\Phi }\left(U\left(t\right),t\right),t\in {\left[{t}_{0},T\right]}_{𝕋}^{\kappa },U\left({t}_{0}\right)={u}^{0},$(4.9)

with $\mathrm{\Phi }:{\mathrm{\ell }}^{\mathrm{\infty }}\left(ℤ\right)×{\left[{t}_{0},T\right]}_{𝕋}\to {\mathrm{\ell }}^{\mathrm{\infty }}\left(ℤ\right)$ being given by

$\mathrm{\Phi }\left({\left\{{u}_{x}\right\}}_{x\in ℤ},t\right)={\left\{a{u}_{x+1}+b{u}_{x}+c{u}_{x-1}+f\left({u}_{x},x,t\right)\right\}}_{x\in ℤ}.$

Thus, it is enough to prove that (4.9) has a solution on the whole interval ${\left[{t}_{0},T\right]}_{𝕋}$.

Let $\mathcal{𝒮}$ be the set of all $s\in {\left[{t}_{0},T\right]}_{𝕋}$ such that (4.9) has a solution on ${\left[{t}_{0},s\right]}_{𝕋}$, and denote ${t}_{1}=sup\mathcal{𝒮}$. By Theorem 2.1, we have ${t}_{1}>{t}_{0}$. Let us prove that ${t}_{1}\in \mathcal{𝒮}$. The statement is obvious if ${t}_{1}$ is a left-scattered maximum of $\mathcal{𝒮}$; therefore, we can assume that ${t}_{1}$ is left-dense. It follows from the definition of ${t}_{1}$ that (4.9) has a solution U defined on ${\left[{t}_{0},{t}_{1}\right)}_{𝕋}$. According to the weak maximum principle, the solution U takes values in the bounded set $\mathcal{ℬ}=\left\{V\in {\mathrm{\ell }}^{\mathrm{\infty }}\left(ℤ\right):r\le {V}_{x}\le R\mathit{ }\text{for all}\mathit{ }x\in ℤ\right\}$. As in the proof of Theorem 2.1, one can show that Φ is continuous on its domain, Lipschitz-continuous in the first variable, and bounded on $\mathcal{ℬ}×{\left[{t}_{0},T\right]}_{𝕋}$; let C be the boundedness constant for ${\parallel \mathrm{\Phi }\parallel }_{\mathrm{\infty }}$. Since U is a solution of (4.9), we have

$U\left(t\right)=U\left({t}_{0}\right)+{\int }_{{t}_{0}}^{t}\mathrm{\Phi }\left(U\left(s\right),s\right)\mathrm{\Delta }s$(4.10)

for each $t\in {\left[{t}_{0},{t}_{1}\right)}_{𝕋}$. Note also that ${\parallel U\left({s}_{1}\right)-U\left({s}_{2}\right)\parallel }_{\mathrm{\infty }}\le C|{s}_{1}-{s}_{2}|$ for all ${s}_{1},{s}_{2}\in {\left[{t}_{0},{t}_{1}\right)}_{𝕋}$. Thus, the Cauchy condition for the existence of the limit $U\left({t}_{1}-\right)={lim}_{s\to {t}_{1}-}U\left(s\right)$ is satisfied. If we extend U to ${\left[{t}_{0},{t}_{1}\right]}_{𝕋}$ by letting $U\left({t}_{1}\right)=U\left({t}_{1}-\right)$, we see that (4.10) holds also for $t={t}_{1}$. Since the mapping $s↦\mathrm{\Phi }\left(U\left(s\right),s\right)$ is continuous on ${\left[{t}_{0},{t}_{1}\right]}_{𝕋}$, it follows that U is a solution of (4.9) on ${\left[{t}_{0},{t}_{1}\right]}_{𝕋}$, i.e., ${t}_{1}\in \mathcal{𝒮}$.

If ${t}_{1}<T$, we can use Theorem 2.1 to extend the solution U from ${\left[{t}_{0},{t}_{1}\right]}_{𝕋}$ to a larger interval. However, this contradicts the fact that ${t}_{1}=sup\mathcal{𝒮}$. Hence, the only possibility is ${t}_{1}=T$, and the proof of existence is complete.

To obtain continuous dependence of the solution on the initial condition, it is enough to show the following statement: If ${u}^{n}\in \mathcal{ℬ}$ for $n\in ℕ$, ${u}^{n}\to {u}^{0}$ in ${\mathrm{\ell }}^{\mathrm{\infty }}\left(ℤ\right)$ and ${U}_{n}:{\left[{t}_{0},T\right]}_{𝕋}\to {\mathrm{\ell }}^{\mathrm{\infty }}\left(ℤ\right)$ is the unique solution of the initial-value problem

${U}_{n}^{\mathrm{\Delta }}\left(t\right)=\mathrm{\Phi }\left({U}_{n}\left(t\right),t\right),t\in {\left[{t}_{0},T\right]}_{𝕋}^{\kappa },{U}_{n}\left({t}_{0}\right)={u}^{n},$

then ${U}_{n}⇉U$ on ${\left[{t}_{0},T\right]}_{𝕋}$. Since we know that the solutions ${U}_{n}$ in fact take values in $\mathcal{ℬ}$, the statement is an immediate consequence of Theorem 3.2 where we take ${𝕋}_{n}=𝕋$ for each $n\in {ℕ}_{0}$. ∎

Let us illustrate the application of the weak maximum principle and the global existence theorem on the following special cases of (2.1).

#### Example 4.7.

Consider the logistic nonlinearity $f\left(u,x,t\right)=\lambda u\left(1-u\right)$, $u\in ℝ$, $x\in ℤ$, $t\in {\left[{t}_{0},T\right]}_{𝕋}$, where $\lambda >0$ is a parameter. In this case, problem (2.1) becomes a Fisher-type reaction-diffusion equation:

${u}^{\mathrm{\Delta }}\left(x,t\right)=au\left(x+1,t\right)+bu\left(x,t\right)+cu\left(x-1,t\right)+\lambda u\left(x,t\right)\left(1-u\left(x,t\right)\right),$$x\in ℤ,t\in {\left[{t}_{0},T\right]}_{𝕋}^{\kappa },$$u\left(x,{t}_{0}\right)={u}_{x}^{0},$$x\in ℤ.$(4.11)

Obviously, the function f satisfies (H1)–(H3). Suppose that $a,c\ge 0$, $b<0$, $a+b+c=0$, and ${\overline{\mu }}_{𝕋}\le -1/b$, i.e., (H4) and (H5) hold. Consider an arbitrary nonnegative initial condition ${u}^{0}\in {\mathrm{\ell }}^{\mathrm{\infty }}\left(ℤ\right)$, i.e., $m\ge 0$. We now distinguish between the cases ${\overline{\mu }}_{𝕋}=0$ and ${\overline{\mu }}_{𝕋}>0$:

• If ${\overline{\mu }}_{𝕋}=0$, let $r=\mathrm{min}\left(m,1\right)$ and $R=\mathrm{max}\left(M,1\right)$. Then $f\left(R,x,t\right)\le 0$ and $f\left(r,x,t\right)\ge 0$, i.e., (H6) holds and there exists a unique global solution u of (4.11). Moreover, the solution u satisfies $r\le u\left(x,t\right)\le R$ for all $x\in ℤ$ and $t\in {\left[{t}_{0},T\right]}_{𝕋}$. In particular, nonnegative initial conditions always lead to nonnegative solutions.

• If ${\overline{\mu }}_{𝕋}>0$, Lemma 4.2 together with the analysis of the previous case guarantee that (H6) holds with $r=\mathrm{min}\left(m,1\right)$ and $R=\mathrm{max}\left(M,1\right)$ whenever ${\overline{\mu }}_{𝕋}$ is sufficiently small. For example, if $M\le 1$, consider the linear functions

${\psi }_{1}\left(u\right)=\frac{1+{\overline{\mu }}_{𝕋}b}{{\overline{\mu }}_{𝕋}}\left(r-u\right)\mathit{ }\text{and}\mathit{ }{\psi }_{2}\left(u\right)=\frac{1+{\overline{\mu }}_{𝕋}b}{{\overline{\mu }}_{𝕋}}\left(R-u\right)$

from (H6). We have ${\psi }_{1}\left(u\right)\le 0\le f\left(u,x,t\right)$ for $u\in \left[r,R\right]$, i.e., the first inequality in (H6) is satisfied. The graphs of ${\psi }_{2}$ and $f\left(\cdot ,x,t\right)$ meet at the point $\left(1,0\right)$. Therefore, the second inequality $f\left(u,x,t\right)\le {\psi }_{2}\left(u\right)$ in (H6) will be satisfied for $u\in \left[r,R\right]$ if and only if $\frac{\partial f}{\partial u}\left(1,x,t\right)\ge {\psi }_{2}^{\prime }\left(1\right)$, i.e., if and only if $-\lambda \ge -\left(1/{\overline{\mu }}_{𝕋}+b\right)$. The last condition is equivalent to $\lambda -b\le 1/{\overline{\mu }}_{𝕋}$, which holds if ${\overline{\mu }}_{𝕋}\le 1/\left(\lambda -b\right)$ (note that $b<0<\lambda$). Under these assumptions, condition (H6) holds and there exists a unique bounded global solution u of (4.11). Moreover, the solution u satisfies $m=r\le u\left(x,t\right)\le R=1$ for all $x\in ℤ$ and $t\in {\left[{t}_{0},T\right]}_{𝕋}$.
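The restriction ${\overline{\mu }}_{𝕋}\le 1/\left(\lambda -b\right)$ can be exercised numerically: iterating the explicit scheme for the Fisher lattice equation above with this largest admissible constant graininess keeps all values in $\left[0,1\right]$, as the weak maximum principle predicts. A sketch (ℤ again truncated to a periodic window, which is our simplification):

```python
lam, a, b, c = 1.0, 1.0, -2.0, 1.0    # (H4) holds and lam > 0
mu = 1.0 / (lam - b)                  # largest admissible graininess, here 1/3
fisher = lambda u: lam * u * (1 - u)  # logistic reaction term

u = [0.0, 0.9, 0.1, 0.4]              # nonnegative initial data with M <= 1
for _ in range(30):                   # 30 steps of the discrete-time scheme
    n = len(u)
    u = [mu * a * u[(x + 1) % n] + (1 + mu * b) * u[x]
         + mu * c * u[(x - 1) % n] + mu * fisher(u[x]) for x in range(n)]
# all iterates remain in [m, 1] = [0, 1]
```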

#### Example 4.8.

Consider the so-called bistable nonlinearity $f\left(u,x,t\right)=\lambda u\left(1-{u}^{2}\right)$, $u\in ℝ$, $x\in ℤ$, $t\in {\left[{t}_{0},T\right]}_{𝕋}$, where $\lambda >0$. In this case, problem (2.1) becomes a Nagumo-type reaction-diffusion equation:

${u}^{\mathrm{\Delta }}\left(x,t\right)=au\left(x+1,t\right)+bu\left(x,t\right)+cu\left(x-1,t\right)+\lambda u\left(x,t\right)\left(1-u{\left(x,t\right)}^{2}\right),$$x\in ℤ,t\in {\left[{t}_{0},T\right]}_{𝕋}^{\kappa },$$u\left(x,{t}_{0}\right)={u}_{x}^{0},$$x\in ℤ.$(4.12)

Obviously, the function f satisfies (H1)–(H3). Suppose that $a,c\ge 0$, $b<0$, $a+b+c=0$, and ${\overline{\mu }}_{𝕋}\le -1/b$, i.e., (H4) and (H5) hold. Consider an arbitrary initial condition ${u}^{0}\in {\mathrm{\ell }}^{\mathrm{\infty }}\left(ℤ\right)$. Again, we distinguish between the cases ${\overline{\mu }}_{𝕋}=0$ and ${\overline{\mu }}_{𝕋}>0$:

• If ${\overline{\mu }}_{𝕋}=0$, let $r=\mathrm{min}\left(m,1\right)$ if $m\ge 0$ and $r=\mathrm{min}\left(m,-1\right)$ if $m<0$, and let $R=\mathrm{max}\left(M,1\right)$ if $M>0$ and $R=\mathrm{max}\left(M,-1\right)$ if $M\le 0$.

Then $f\left(R,x,t\right)\le 0$ and $f\left(r,x,t\right)\ge 0$, i.e., (H6) holds and there exists a unique bounded global solution u of (4.12). Moreover, the solution u satisfies $r\le u\left(x,t\right)\le R$ for all $x\in ℤ$ and $t\in {\left[{t}_{0},T\right]}_{𝕋}$. In particular, nonnegative/nonpositive initial conditions always lead to nonnegative/nonpositive solutions.

• If ${\overline{\mu }}_{𝕋}>0$, Lemma 4.2 together with the analysis of the previous case guarantees that (H6) holds whenever ${\overline{\mu }}_{𝕋}$ is sufficiently small. For example, if ${\parallel {u}^{0}\parallel }_{\mathrm{\infty }}\le 1$, one can follow the computations from [31, Section 8] to conclude that there exists a unique global solution u of (4.12) satisfying

$-\stackrel{~}{R}\le u\left(x,t\right)\le \stackrel{~}{R}\mathit{ }\text{for all}\mathit{ }x\in ℤ\mathit{ }\text{and}\mathit{ }t\in {\left[{t}_{0},T\right]}_{𝕋},$

where

$\stackrel{~}{R}=\frac{2\lambda {\overline{\mu }}_{𝕋}{\left(1/3+\left(1+2b{\overline{\mu }}_{𝕋}\right)/\left(3\lambda {\overline{\mu }}_{𝕋}\right)\right)}^{3/2}}{1+b{\overline{\mu }}_{𝕋}}.$

We have no a priori bounds for ${\overline{\mu }}_{𝕋}>2/\left(\lambda -2b\right)$.
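The behaviour described in this example can be observed numerically. The sketch below assumes the purely discrete time scale $𝕋=\mu {ℕ}_{0}$ (so each delta derivative becomes an explicit Euler step) and spatially periodic initial data, which allows the infinite lattice to be sampled on a finite ring; the parameters $a=c=1/2$, $b=-1$, $\lambda =1$, $\mu =0.1$ are illustrative.

```python
import numpy as np

# Explicit Euler steps for (4.12) on the discrete time scale T = mu*N_0.
# Spatially periodic initial data make the infinite-lattice solution
# computable on a finite ring (np.roll realizes the shifts x -> x +/- 1).
a, b, c = 0.5, -1.0, 0.5            # a, c >= 0, b < 0, a + b + c = 0
lam, mu = 1.0, 0.1                  # mu <= -1/b, small enough for (H6)

rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, size=64)          # ||u^0||_inf <= 1

for _ in range(500):
    diff = a * np.roll(u, -1) + b * u + c * np.roll(u, 1)
    u = u + mu * (diff + lam * u * (1.0 - u ** 2))
    # Weak maximum principle: the solution stays in [-1, 1].
    assert np.max(np.abs(u)) <= 1.0 + 1e-12
```

For larger values of μ the Euler step can leave the interval $\left[-1,1\right]$, which is consistent with the absence of a priori bounds for large graininess.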

#### Example 4.9.

Consider the nonautonomous nonlinearity $f\left(u,x,t\right)=\lambda u\left(d\left(x,t\right)-u\right)$, $u\in ℝ$, $x\in ℤ$, $t\in {\left[{t}_{0},T\right]}_{𝕋}$, where $\lambda >0$ and $d:ℤ×{\left[{t}_{0},T\right]}_{𝕋}\to ℝ$. In this case, problem (2.1) has the form

${u}^{\mathrm{\Delta }}\left(x,t\right)=au\left(x+1,t\right)+bu\left(x,t\right)+cu\left(x-1,t\right)+\lambda u\left(x,t\right)\left(d\left(x,t\right)-u\left(x,t\right)\right),$$x\in ℤ,t\in {\left[{t}_{0},T\right]}_{𝕋}^{\kappa },$$u\left(x,{t}_{0}\right)={u}_{x}^{0},$$x\in ℤ.$

This equation can be interpreted as the logistic population model where the carrying capacity d depends on position and time. Assume that d has the following properties:

• d is bounded.

• For each choice of $\epsilon >0$ and $t\in {\left[{t}_{0},T\right]}_{𝕋}$ there exists a $\delta >0$ such that if $s\in \left(t-\delta ,t+\delta \right)\cap {\left[{t}_{0},T\right]}_{𝕋}$, then $|d\left(x,t\right)-d\left(x,s\right)|<\epsilon$ for all $x\in ℤ$.

Then the function f satisfies (H1)–(H3). Indeed, let D be the boundedness constant for $|d|$. If $B\subset ℝ$ is bounded, it is contained in a ball of radius ρ centered at the origin. Consequently, for all $u,v\in B$, $x\in ℤ$, $t,s\in {\left[{t}_{0},T\right]}_{𝕋}$, we get the estimates

$|f\left(u,x,t\right)|\le \lambda |u|\left(|d\left(x,t\right)|+|u|\right)\le \lambda \rho \left(D+\rho \right),$$|f\left(u,x,t\right)-f\left(v,x,t\right)|=\lambda |\left(u-v\right)\left(d\left(x,t\right)-u-v\right)|\le \lambda |u-v|\left(D+2\rho \right),$$|f\left(u,x,t\right)-f\left(u,x,s\right)|=\lambda |u\left(d\left(x,t\right)-d\left(x,s\right)\right)|\le \lambda \rho |d\left(x,t\right)-d\left(x,s\right)|,$

which imply that (H1)–(H3) hold.
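The three estimates can be spot-checked by random sampling; the constants $\lambda =1.5$, $D=2$, $\rho =3$ below are illustrative choices, not values from the text.

```python
import numpy as np

lam, D, rho = 1.5, 2.0, 3.0            # illustrative constants
f = lambda w, d: lam * w * (d - w)     # logistic nonlinearity with capacity d

rng = np.random.default_rng(1)
for _ in range(10_000):
    u, v = rng.uniform(-rho, rho, size=2)   # u, v in a ball of radius rho
    dt, ds = rng.uniform(-D, D, size=2)     # values d(x, t), d(x, s), bounded by D
    assert abs(f(u, dt)) <= lam * rho * (D + rho) + 1e-12                        # (H1)
    assert abs(f(u, dt) - f(v, dt)) <= lam * abs(u - v) * (D + 2 * rho) + 1e-12  # (H2)
    assert abs(f(u, dt) - f(u, ds)) <= lam * rho * abs(dt - ds) + 1e-12          # (H3)

all_estimates_hold = True
```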

As an example, let us mention the model of population dynamics with a shifting habitat, which was described by Hu and Li in [17]. There, the authors considered problem (4.13) with $𝕋=ℝ$, $a=c$, $b=-2a$ (i.e., symmetric diffusion), and $d\left(x,t\right)=e\left(x-\gamma t\right)$, where $\gamma >0$ and $e:ℝ\to ℝ$ is continuous, nondecreasing, and bounded. It follows that e is uniformly continuous on $ℝ$: Given an $\epsilon >0$, there exists a $\delta >0$ such that $|{t}_{1}-{t}_{2}|<\delta$ implies $|e\left({t}_{1}\right)-e\left({t}_{2}\right)|<\epsilon$. Thus, we get

$|d\left(x,t\right)-d\left(x,s\right)|=|e\left(x-\gamma t\right)-e\left(x-\gamma s\right)|<\epsilon$

whenever $|t-s|<\delta /\gamma$ and $x\in ℤ$; this shows that d satisfies our assumptions. (We remark that some of the results presented in [17] can be found in our earlier paper [28]. In particular, the fundamental solution of the linear lattice diffusion equation was derived in [28, Example 3.1], and [17, Corollary 2.1] is a consequence of our superposition principle from [28, Theorem 2.2].)

Another simple example is obtained by letting $d\left(x,t\right)=e\left(t\right)$, where $e:ℝ\to ℝ$ is a continuous periodic function; this choice corresponds to a population model with a periodically changing habitat. Since e is necessarily bounded and uniformly continuous on $ℝ$, it is obvious that d satisfies our assumptions.

Suppose now that $a,c\ge 0$, $b<0$, $a+b+c=0$, and ${\overline{\mu }}_{𝕋}\le -1/b$, i.e., (H4) and (H5) hold. For simplicity, let us restrict ourselves to the case when d is a positive function, and let

${d}_{\mathrm{min}}=\underset{\left(x,t\right)\in ℤ×{\left[{t}_{0},T\right]}_{𝕋}}{inf}d\left(x,t\right),{d}_{\mathrm{max}}=\underset{\left(x,t\right)\in ℤ×{\left[{t}_{0},T\right]}_{𝕋}}{sup}d\left(x,t\right).$

Consider an arbitrary nonnegative initial condition ${u}^{0}\in {\mathrm{\ell }}^{\mathrm{\infty }}\left(ℤ\right)$, i.e., $m\ge 0$. Take $r=\mathrm{min}\left(m,{d}_{\mathrm{min}}\right)$ and $R=\mathrm{max}\left(M,{d}_{\mathrm{max}}\right)$. Then $f\left(r,x,t\right)\ge 0$ and $f\left(R,x,t\right)\le 0$ for all $x\in ℤ$ and $t\in {\left[{t}_{0},T\right]}_{𝕋}$. This means that (H6) holds if ${\overline{\mu }}_{𝕋}=0$, or (by Lemma 4.2) if ${\overline{\mu }}_{𝕋}$ is positive and sufficiently small. In these cases, problem (4.13) possesses a unique global solution u, and $r\le u\left(x,t\right)\le R$ for all $x\in ℤ$ and $t\in {\left[{t}_{0},T\right]}_{𝕋}$.
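As a numerical illustration of the last paragraph, the sketch below takes the time-periodic habitat $d\left(x,t\right)=1+0.5\mathrm{sin}t$ (one admissible choice), the discrete time scale $𝕋=\mu {ℕ}_{0}$ with a small step, and periodic initial data on a ring, and verifies that the solution stays in $\left[r,R\right]$.

```python
import numpy as np

# Logistic lattice equation (4.13) with the time-periodic carrying capacity
# d(x, t) = 1 + 0.5*sin(t), simulated by Euler steps on T = mu*N_0 with
# spatially periodic initial data on a ring.
a, b, c = 0.5, -1.0, 0.5
lam, mu = 1.0, 0.05

rng = np.random.default_rng(2)
u = rng.uniform(0.2, 1.2, size=64)     # nonnegative initial condition
d_min, d_max = 0.5, 1.5
r = min(u.min(), d_min)                # r = min(m, d_min)
R = max(u.max(), d_max)                # R = max(M, d_max)

t = 0.0
for _ in range(1000):
    d = 1.0 + 0.5 * np.sin(t)
    diff = a * np.roll(u, -1) + b * u + c * np.roll(u, 1)
    u = u + mu * (diff + lam * u * (d - u))
    t += mu
    assert r - 1e-12 <= u.min() and u.max() <= R + 1e-12
```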

## 5 Strong maximum principle

In the rest of the paper, we focus on the strong maximum principle for (2.1). We need the following stronger versions of (H4)–(H6):

• ($\overline{\mathrm{H4}}$)

a, b, $c\in ℝ$ are such that $a,c>0$, $b<0$ and $a+b+c=0$.

• ($\overline{\mathrm{H5}}$)

$b<0$ and ${\overline{\mu }}_{𝕋}<-1/b$.

• ($\overline{\mathrm{H6}}$)

There exist $r,R\in ℝ$ such that $r\le m\le M\le R$, and the following statements hold for all $x\in ℤ$ and $t\in {\left[{t}_{0},T\right]}_{𝕋}$:

• $f\left(R,x,t\right)\le 0\le f\left(r,x,t\right)$.

• If ${\overline{\mu }}_{𝕋}>0$, then $f\left(u,x,t\right)<\frac{1+{\overline{\mu }}_{𝕋}b}{{\overline{\mu }}_{𝕋}}\left(R-u\right)$ for all $u\in \left[r,R\right)$.

• If ${\overline{\mu }}_{𝕋}>0$, then $f\left(u,x,t\right)>\frac{1+{\overline{\mu }}_{𝕋}b}{{\overline{\mu }}_{𝕋}}\left(r-u\right)$ for all $u\in \left(r,R\right]$.

The next lemma analyzes the situation when a solution of (2.1) attains its maximum at a left-scattered point.

#### Lemma 5.1.

Assume that (H1), (H2), (H3), ($\overline{\mathrm{H4}}$), ($\overline{\mathrm{H5}}$), and ($\overline{\mathrm{H6}}$) hold, and $u:ℤ×{\left[{t}_{0},T\right]}_{𝕋}\to ℝ$ is a bounded solution of (2.1). If $u\left(\overline{x},\overline{t}\right)\in \left\{r,R\right\}$ for some $\overline{x}\in ℤ$ and a left-scattered point $\overline{t}\in {\left({t}_{0},T\right]}_{𝕋}$, then $u\left(x,{\rho }_{𝕋}\left(\overline{t}\right)\right)=u\left(\overline{x},\overline{t}\right)$ for each $x\in \left\{\overline{x}-1,\overline{x},\overline{x}+1\right\}$.

#### Proof.

We consider the case when $u\left(\overline{x},\overline{t}\right)=R$; the case $u\left(\overline{x},\overline{t}\right)=r$ can be treated in a similar way. Denote $\overline{s}={\rho }_{𝕋}\left(\overline{t}\right)$. We have

$u\left(\overline{x},\overline{t}\right)={\mu }_{𝕋}\left(\overline{s}\right)au\left(\overline{x}+1,\overline{s}\right)+\left(1+{\mu }_{𝕋}\left(\overline{s}\right)b\right)u\left(\overline{x},\overline{s}\right)+{\mu }_{𝕋}\left(\overline{s}\right)cu\left(\overline{x}-1,\overline{s}\right)+{\mu }_{𝕋}\left(\overline{s}\right)f\left(u\left(\overline{x},\overline{s}\right),\overline{x},\overline{s}\right).$

By the weak maximum principle (which holds because ($\overline{\mathrm{H4}}$)–($\overline{\mathrm{H6}}$) imply (H4)–(H6)), the values of u cannot exceed R. If at least one of the values $u\left(\overline{x}+1,\overline{s}\right)$, $u\left(\overline{x}-1,\overline{s}\right)$ is smaller than R and $u\left(\overline{x},\overline{s}\right)=R$, then

$u\left(\overline{x},\overline{t}\right)\stackrel{\text{(}\overline{\mathrm{H4}}\text{)}}{<}{\mu }_{𝕋}\left(\overline{s}\right)\left(a+c\right)R+\left(1+{\mu }_{𝕋}\left(\overline{s}\right)b\right)R+{\mu }_{𝕋}\left(\overline{s}\right)f\left(R,\overline{x},\overline{s}\right)\stackrel{\text{(}\overline{\mathrm{H4}}\text{)}}{=}R+{\mu }_{𝕋}\left(\overline{s}\right)f\left(R,\overline{x},\overline{s}\right)\stackrel{\text{(}\overline{\mathrm{H6}}\text{)}}{\le }R,$

which contradicts the fact that $u\left(\overline{x},\overline{t}\right)=R$. If $u\left(\overline{x},\overline{s}\right)<R$, then

$u\left(\overline{x},\overline{t}\right)\le {\mu }_{𝕋}\left(\overline{s}\right)\left(a+c\right)R+\left(1+{\mu }_{𝕋}\left(\overline{s}\right)b\right)u\left(\overline{x},\overline{s}\right)+{\mu }_{𝕋}\left(\overline{s}\right)f\left(u\left(\overline{x},\overline{s}\right),\overline{x},\overline{s}\right)$$<{\mu }_{𝕋}\left(\overline{s}\right)\left(a+c\right)R+\left(1+{\mu }_{𝕋}\left(\overline{s}\right)b\right)u\left(\overline{x},\overline{s}\right)+{\mu }_{𝕋}\left(\overline{s}\right)\frac{1+{\overline{\mu }}_{𝕋}b}{{\overline{\mu }}_{𝕋}}\left(R-u\left(\overline{x},\overline{s}\right)\right)$$\mathrm{ }\text{(by (}\overline{\mathrm{H4}}\text{) and (}\overline{\mathrm{H6}}\text{))}$$\le {\mu }_{𝕋}\left(\overline{s}\right)\left(a+c\right)R+\left(1+{\mu }_{𝕋}\left(\overline{s}\right)b\right)u\left(\overline{x},\overline{s}\right)+\left(1+{\mu }_{𝕋}\left(\overline{s}\right)b\right)\left(R-u\left(\overline{x},\overline{s}\right)\right)$$=R$$\mathrm{ }\text{(by (}\overline{\mathrm{H4}}\text{))},$

which is a contradiction again. Thus, the only possibility is that

$u\left(\overline{x}+1,\overline{s}\right)=u\left(\overline{x},\overline{s}\right)=u\left(\overline{x}-1,\overline{s}\right)=R,$

as desired. ∎
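Read contrapositively, Lemma 5.1 says that on a purely discrete time scale a strict deficit $u<R$ spreads to the nearest neighbours one time step at a time. A small numerical sketch with $f\equiv 0$, which satisfies ($\overline{\mathrm{H6}}$) for $r=0$, $R=1$ whenever $1+{\overline{\mu }}_{𝕋}b>0$; the parameters are illustrative.

```python
import numpy as np

a, b, c, mu = 0.25, -0.5, 0.25, 1.0   # a, c > 0, a + b + c = 0, mu < -1/b = 2
N = 32
u = np.ones(N)                         # u = R = 1 everywhere ...
x0 = N // 2
u[x0] = 0.0                            # ... except one site strictly below R

for n in range(1, 6):
    u = mu * a * np.roll(u, -1) + (1 + mu * b) * u + mu * c * np.roll(u, 1)
    # Contrapositive of Lemma 5.1: u < R on the whole cone |x - x0| <= n.
    assert np.all(u[x0 - n: x0 + n + 1] < 1.0 - 1e-9)
    # Outside the cone nothing has changed yet.
    assert np.all(u[: x0 - n] == 1.0) and np.all(u[x0 + n + 1:] == 1.0)
```

The cone of sites with $u<R$ is exactly the set $\mathcal{𝒟}$ appearing in part (a) of Theorem 5.3 below, run backwards in time.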

We now turn our attention to the case when the maximum is attained at a left-dense point.

#### Lemma 5.2.

Assume that (H1), (H2), (H3), ($\overline{\mathrm{H4}}$), ($\overline{\mathrm{H5}}$), and ($\overline{\mathrm{H6}}$) hold, and $u:ℤ×{\left[{t}_{0},T\right]}_{𝕋}\to ℝ$ is a bounded solution of (2.1). If $u\left(\overline{x},\overline{t}\right)\in \left\{r,R\right\}$ for some $\overline{x}\in ℤ$ and a left-dense point $\overline{t}\in {\left({t}_{0},T\right]}_{𝕋}$, then $u\left(x,t\right)=u\left(\overline{x},\overline{t}\right)$ for all $x\in ℤ$ and $t\in {\left[{t}_{0},\overline{t}\right]}_{𝕋}$.

#### Proof.

We consider the case when $u\left(\overline{x},\overline{t}\right)=R$; the case $u\left(\overline{x},\overline{t}\right)=r$ can be treated in a similar way. We begin by proving that

(5.1) $u\left(\overline{x},t\right)=R\mathit{ }\text{for all}\mathit{ }t\in {\left[{t}_{0},\overline{t}\right]}_{𝕋}.$

Assume that there exists a $\overline{s}\in {\left[{t}_{0},\overline{t}\right)}_{𝕋}$ such that $u\left(\overline{x},\overline{s}\right)<R$. Let $L\ge 0$ be the Lipschitz constant for f on the set $\left[r,R\right]×ℤ×{\left[{t}_{0},T\right]}_{𝕋}$. Choose a partition $\overline{s}={s}_{0}<{s}_{1}<\mathrm{\cdots }<{s}_{k}=\overline{t}$ such that ${s}_{0},\mathrm{\dots },{s}_{k}\in 𝕋$ and for each $i\in \left\{1,\mathrm{\dots },k\right\}$ we have either ${s}_{i}-{s}_{i-1}<1/\left(L-b\right)$ or ${s}_{i}={\sigma }_{𝕋}\left({s}_{i-1}\right)$. We will use induction with respect to i to show that $u\left(\overline{x},{s}_{i}\right)<R$ for each $i\in \left\{0,\mathrm{\dots },k\right\}$; this will be a contradiction to the fact that $u\left(\overline{x},{s}_{k}\right)=u\left(\overline{x},\overline{t}\right)=R$.

For $i=0$, we know that $u\left(\overline{x},{s}_{0}\right)=u\left(\overline{x},\overline{s}\right)<R$. By the weak maximum principle (which holds because ($\overline{\mathrm{H4}}$)–($\overline{\mathrm{H6}}$) imply (H4)–(H6)), the values of u cannot exceed R. If $i\in \left\{0,\mathrm{\dots },k-1\right\}$ is such that ${s}_{i+1}={\sigma }_{𝕋}\left({s}_{i}\right)$, then the induction hypothesis $u\left(\overline{x},{s}_{i}\right)<R$ and Lemma 5.1 imply that $u\left(\overline{x},{s}_{i+1}\right)<R$. Otherwise, we have ${s}_{i+1}-{s}_{i}<1/\left(L-b\right)$. For each $t\in {\left[{s}_{i},{s}_{i+1}\right)}_{𝕋}$ we get

${\left(u\left(\overline{x},t\right)-R\right)}^{\mathrm{\Delta }}=au\left(\overline{x}+1,t\right)+bu\left(\overline{x},t\right)+cu\left(\overline{x}-1,t\right)+f\left(u\left(\overline{x},t\right),\overline{x},t\right)$$\le \left(a+c\right)R+bu\left(\overline{x},t\right)+f\left(u\left(\overline{x},t\right),\overline{x},t\right)-f\left(R,\overline{x},t\right)+f\left(R,\overline{x},t\right)$$\mathrm{ }\text{(by (}\overline{\mathrm{H4}}\text{) and Theorem 4.4)}$$\le -b\left(R-u\left(\overline{x},t\right)\right)+f\left(u\left(\overline{x},t\right),\overline{x},t\right)-f\left(R,\overline{x},t\right)$$\mathrm{ }\text{(by (}\overline{\mathrm{H4}}\text{) and (}\overline{\mathrm{H6}}\text{))}$$\le -b\left(R-u\left(\overline{x},t\right)\right)+|f\left(u\left(\overline{x},t\right),\overline{x},t\right)-f\left(R,\overline{x},t\right)|$$\le -b\left(R-u\left(\overline{x},t\right)\right)+L|u\left(\overline{x},t\right)-R|$$=\left(b-L\right)\left(u\left(\overline{x},t\right)-R\right)$$\mathrm{ }\text{(by Theorem 4.4)}.$

Notice that $1+{\mu }_{𝕋}\left(t\right)\left(b-L\right)>0$ for all $t\in {\left[{s}_{i},{s}_{i+1}\right)}_{𝕋}$. Therefore, Grönwall’s inequality [4, Theorem 6.1] yields

$u\left(\overline{x},{s}_{i+1}\right)-R\le \underset{<0}{\underset{⏟}{\left(u\left(\overline{x},{s}_{i}\right)-R\right)}}\underset{>0}{\underset{⏟}{{\mathrm{e}}_{b-L}\left({s}_{i+1},{s}_{i}\right)}}<0,$

which completes the proof by induction and confirms that (5.1) holds.

Let us prove that $u\left(\overline{x}±1,t\right)=R$ for all $t\in {\left[{t}_{0},\overline{t}\right]}_{𝕋}$. Assume that there exists a $t\in {\left[{t}_{0},\overline{t}\right]}_{𝕋}$ such that at least one of the values $u\left(\overline{x}±1,t\right)$ is smaller than R. The fact that $u\left(\overline{x},\cdot \right)$ is a constant function on ${\left[{t}_{0},\overline{t}\right]}_{𝕋}$ implies that ${u}^{\mathrm{\Delta }}\left(\overline{x},t\right)=0$ (note that if $t=\overline{t}$, then t is necessarily left-dense). On the other hand,

${u}^{\mathrm{\Delta }}\left(\overline{x},t\right)=au\left(\overline{x}+1,t\right)+bu\left(\overline{x},t\right)+cu\left(\overline{x}-1,t\right)+f\left(u\left(\overline{x},t\right),\overline{x},t\right)<\left(a+b+c\right)R+f\left(R,\overline{x},t\right)\le 0,$

i.e., ${u}^{\mathrm{\Delta }}\left(\overline{x},t\right)<0$, a contradiction.

Once we know that $u\left(\overline{x}±1,t\right)=R$ for all $t\in {\left[{t}_{0},\overline{t}\right]}_{𝕋}$, it follows by induction with respect to $x\in ℤ$ that $u\left(x,t\right)=R$ for all $x\in ℤ$ and $t\in {\left[{t}_{0},\overline{t}\right]}_{𝕋}$. ∎

With the help of the previous two lemmas, we derive the strong maximum principle.

#### Theorem 5.3 (Strong maximum principle).

Assume that (H1), (H2), (H3), ($\overline{\mathrm{H4}}$), ($\overline{\mathrm{H5}}$), and ($\overline{\mathrm{H6}}$) hold with $r=m\le M=R$ and $u:ℤ×{\left[{t}_{0},T\right]}_{𝕋}\to ℝ$ is a bounded solution of (2.1). If $u\left(\overline{x},\overline{t}\right)\in \left\{r,R\right\}$ for some $\overline{x}\in ℤ$ and $\overline{t}\in {\left({t}_{0},T\right]}_{𝕋}$, then the following statements hold:

• (a)

If ${\left[{t}_{0},\overline{t}\right]}_{𝕋}$ contains only isolated points, i.e., ${t}_{0}={\rho }_{𝕋}^{k}\left(\overline{t}\right)$ for some $k\in ℕ$, and

$\mathcal{𝒟}\left(\overline{x},\overline{t}\right)=\left\{\left(x,{\rho }_{𝕋}^{j}\left(\overline{t}\right)\right):j\in \left\{0,\mathrm{\dots },k\right\},|x-\overline{x}|\le j\right\},$

then $u\left(x,t\right)=u\left(\overline{x},\overline{t}\right)$ for all $\left(x,t\right)\in \mathcal{𝒟}\left(\overline{x},\overline{t}\right)$.

• (b)

Otherwise, if ${\left[{t}_{0},\overline{t}\right]}_{𝕋}$ contains a point which is not isolated, then u is constant on $ℤ×{\left[{t}_{0},T\right]}_{𝕋}$.

#### Remark 5.4.

In order to prevent any confusion, we emphasize that whether a point is isolated or not is considered with respect to the time scale interval ${\left[{t}_{0},\overline{t}\right]}_{𝕋}$, not the entire time scale $𝕋$. In other words, the statement distinguishes between the cases in which the interval ${\left[{t}_{0},\overline{t}\right]}_{𝕋}$ is a finite set (part (a)) or an infinite set (part (b)).

#### Proof of Theorem 5.3.

We consider the case when $u\left(\overline{x},\overline{t}\right)=R$; the case $u\left(\overline{x},\overline{t}\right)=r$ can be treated in a similar way. We prove the statement by analyzing two cases.

Case (1): Let there be a left-dense point in ${\left[{t}_{0},\overline{t}\right]}_{𝕋}$. Denote

${\mathcal{𝒫}}_{\mathrm{ld}}=\left\{t\in {\left[{t}_{0},\overline{t}\right]}_{𝕋}:t\mathit{ }\text{is left-dense}\right\}$

and ${t}_{\mathrm{ld}}=sup{\mathcal{𝒫}}_{\mathrm{ld}}$. Since $𝕋$ is a closed set, we have ${t}_{\mathrm{ld}}\in 𝕋$. Moreover, ${t}_{\mathrm{ld}}$ is left-dense: if it were left-scattered, then ${t}_{\mathrm{ld}}\notin {\mathcal{𝒫}}_{\mathrm{ld}}$, and every point of ${\mathcal{𝒫}}_{\mathrm{ld}}$ would be at most ${\rho }_{𝕋}\left({t}_{\mathrm{ld}}\right)<{t}_{\mathrm{ld}}$, contradicting the definition of the supremum. From the proofs of Lemmas 5.1 and 5.2 we obtain that $u\left(\overline{x},t\right)=R$ for all $t\in {\left[{t}_{0},\overline{t}\right]}_{𝕋}$, and in particular $u\left(\overline{x},{t}_{\mathrm{ld}}\right)=R$. Furthermore, since ${t}_{\mathrm{ld}}$ is left-dense, Lemma 5.2 yields that

(5.2) $u\left(x,t\right)=R\mathit{ }\text{for all}\mathit{ }x\in ℤ\mathit{ }\text{and}\mathit{ }t\in {\left[{t}_{0},{t}_{\mathrm{ld}}\right]}_{𝕋}.$

It remains to prove the statement for $t\in {\left[{t}_{\mathrm{ld}},T\right]}_{𝕋}$. From (5.2) we get that $u\left(x,{t}_{0}\right)={u}_{x}^{0}=R$ for all $x\in ℤ$, and thus $r=m=M=R$. Consequently, since (H6) holds with $r=m=M=R$, Theorem 4.4 (weak maximum principle) yields that $u\left(x,t\right)=R$ for all $x\in ℤ$ and $t\in {\left[{t}_{0},T\right]}_{𝕋}$.

Case (2): Let us assume that ${\left[{t}_{0},\overline{t}\right]}_{𝕋}$ does not contain any left-dense point.

Subcase (i): If ${\left[{t}_{0},\overline{t}\right)}_{𝕋}$ does not contain any right-dense point, i.e., ${\left[{t}_{0},\overline{t}\right]}_{𝕋}$ contains only isolated points, then part (a) of the theorem follows immediately from Lemma 5.1.

Subcase (ii): Let there exist a right-dense point in ${\left[{t}_{0},\overline{t}\right)}_{𝕋}$. Denote

${\mathcal{𝒫}}_{\mathrm{rd}}=\left\{t\in {\left[{t}_{0},\overline{t}\right)}_{𝕋}:t\mathit{ }\text{is right-dense}\right\}$

and ${t}_{\mathrm{rd}}=sup{\mathcal{𝒫}}_{\mathrm{rd}}$. From the fact that $\overline{t}$ is left-scattered and from the definition of the supremum we obtain ${t}_{\mathrm{rd}}<\overline{t}$. Moreover, since $𝕋$ is closed, we have ${t}_{\mathrm{rd}}\in 𝕋$. Further, the point ${t}_{\mathrm{rd}}$ is right-dense: if it were right-scattered, i.e., ${t}_{\mathrm{rd}}\notin {\mathcal{𝒫}}_{\mathrm{rd}}$, then ${t}_{\mathrm{rd}}$ would be an unattained supremum of ${\mathcal{𝒫}}_{\mathrm{rd}}$, and there would exist a sequence ${\left\{{t}_{n}\right\}}_{n=1}^{\mathrm{\infty }}\subset {\mathcal{𝒫}}_{\mathrm{rd}}$ with ${t}_{n}↗{t}_{\mathrm{rd}}$. This would imply that ${t}_{\mathrm{rd}}$ is left-dense, a contradiction. Thus, ${t}_{\mathrm{rd}}$ is right-dense.

From the definition of ${t}_{\mathrm{rd}}$, the sequence of predecessors of $\overline{t}$, namely

${\left\{{\rho }_{𝕋}^{j}\left(\overline{t}\right)\right\}}_{j=1}^{\mathrm{\infty }}\subseteq {\left({t}_{\mathrm{rd}},\overline{t}\right]}_{𝕋},$

is well-defined and satisfies ${\rho }_{𝕋}^{j}\left(\overline{t}\right)↘{t}_{\mathrm{rd}}$. Let $x\in ℤ$ be arbitrary but fixed, i.e., $x=\overline{x}+{i}_{0}$ or $x=\overline{x}-{i}_{0}$ for some ${i}_{0}\in {ℕ}_{0}$. We consider the case $x=\overline{x}+{i}_{0}$; the other case is similar. A repeated application of Lemma 5.1 implies that for all $j\ge {i}_{0}$ we have

$u\left(x,{\rho }_{𝕋}^{j}\left(\overline{t}\right)\right)=u\left(\overline{x}+{i}_{0},{\rho }_{𝕋}^{j}\left(\overline{t}\right)\right)=R.$

Then the continuity of the function $u\left(x,\cdot \right)$ yields that

$R=\underset{j\to \mathrm{\infty }}{lim}u\left(x,{\rho }_{𝕋}^{j}\left(\overline{t}\right)\right)=u\left(x,{t}_{\mathrm{rd}}\right),$

and since $x\in ℤ$ was arbitrary, we conclude that $u\left(x,{t}_{\mathrm{rd}}\right)=R$ for all $x\in ℤ$.

Now we prove that $u\left(x,t\right)=R$ for $x\in ℤ$ and $t\in {\left[{t}_{0},{t}_{\mathrm{rd}}\right]}_{𝕋}$. We use the backward induction principle in the variable t (see [4, Theorem 1.7 and Remark 1.8]):

• We have shown above that $u\left(x,{t}_{\mathrm{rd}}\right)=R$ for all $x\in ℤ$, i.e., the statement holds for $t={t}_{\mathrm{rd}}$.

• Let $t\in {\left({t}_{0},{t}_{\mathrm{rd}}\right]}_{𝕋}$ be left-scattered and $u\left(x,t\right)=R$ for all $x\in ℤ$. Then Lemma 5.1 immediately implies that $u\left(x,{\rho }_{𝕋}\left(t\right)\right)=R$ for all $x\in ℤ$.

• Let $t\in {\left[{t}_{0},{t}_{\mathrm{rd}}\right)}_{𝕋}$ be right-dense and $u\left(x,s\right)=R$ for all $x\in ℤ$ and $s\in {\left(t,{t}_{\mathrm{rd}}\right]}_{𝕋}$. Then again from the continuity of the functions $u\left(x,\cdot \right)$ we obtain $u\left(x,t\right)=\underset{s\to t+}{lim}u\left(x,s\right)=R$ for all $x\in ℤ$.

• We do not have to consider the case when $t\in {\left({t}_{0},{t}_{\mathrm{rd}}\right]}_{𝕋}$ is left-dense, since we assume that ${\left[{t}_{0},{t}_{\mathrm{rd}}\right]}_{𝕋}$ does not contain any such point.

The backward induction principle implies that $u\left(x,t\right)=R$ for all $x\in ℤ$ and $t\in {\left[{t}_{0},{t}_{\mathrm{rd}}\right]}_{𝕋}$.

Finally, it remains to prove that $u\left(x,t\right)=R$ for $x\in ℤ$ and $t\in {\left[{t}_{\mathrm{rd}},T\right]}_{𝕋}$. Since $u\left(x,{t}_{0}\right)={u}_{x}^{0}=R$ for all $x\in ℤ$, we have $r=m=M=R$ and, analogously to Case (1), we can use Theorem 4.4 (weak maximum principle) to show that $u\left(x,t\right)=R$ for all $x\in ℤ$ and $t\in {\left[{t}_{0},T\right]}_{𝕋}$. ∎

#### Corollary 5.5.

Assume that (H1), (H2), (H3), ($\overline{\mathrm{H4}}$), ($\overline{\mathrm{H5}}$), and ($\overline{\mathrm{H6}}$) hold with $r=m\le M=R$. Suppose that $u:ℤ×{\left[{t}_{0},T\right]}_{𝕋}\to ℝ$ is a bounded solution of (2.1). If there is a point ${t}_{d}\in {\left[{t}_{0},T\right)}_{𝕋}$ that is not isolated and if the initial condition ${u}^{0}$ is not constant, then $r<u\left(x,t\right)<R$ for all $x\in ℤ$ and $t\in {\left({t}_{d},T\right]}_{𝕋}$.

#### Proof.

Assume by contradiction that there exist $\overline{x}\in ℤ$, $\overline{t}\in {\left({t}_{d},T\right]}_{𝕋}$ such that $u\left(\overline{x},\overline{t}\right)\in \left\{r,R\right\}$. Since ${t}_{d}\in {\left[{t}_{0},\overline{t}\right)}_{𝕋}$ and ${t}_{d}$ is not isolated, part (b) of Theorem 5.3 yields that u is constant on $ℤ×{\left[{t}_{0},T\right]}_{𝕋}$, a contradiction to the assumption that ${u}^{0}$ is not constant. ∎

The following remarks explain why the original conditions (H4)–(H6) are not sufficient to establish the strong maximum principle, and had to be replaced by their stronger counterparts ($\overline{\mathrm{H4}}$)–($\overline{\mathrm{H6}}$).

#### Remark 5.6.

(H4) is too weak for the strong maximum principle; we need the constants $a,c\in ℝ$ to be strictly positive. Indeed, let us consider the linear transport equation

$\frac{\partial u}{\partial t}\left(x,t\right)=-u\left(x,t\right)+u\left(x-1,t\right),x\in ℤ,t\in \left[0,T\right],$$u\left(x,0\right)=\left\{\begin{array}{cc}1,\hfill & x\ge 0,\hfill \\ 0,\hfill & x<0,\hfill \end{array}$

i.e., the initial-value problem (2.1) with $a=0$, $b=-1$, $c=1$, and $f\equiv 0$. Then the unique bounded solution is given by (see [30, Corollary 4.3])

$u\left(x,t\right)=\left\{\begin{array}{cc}\sum _{j=0}^{x}\frac{{t}^{j}}{j!}{e}^{-t},\hfill & x\ge 0,t\in \left[0,T\right],\hfill \\ 0,\hfill & x<0,t\in \left[0,T\right].\hfill \end{array}$

Thus, the strong maximum principle does not hold.
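The explicit solution can be verified directly: writing ${S}_{x}\left(t\right)={\sum }_{j=0}^{x}{t}^{j}/j!$, one has ${S}_{x}^{\prime }={S}_{x-1}$, hence ${\partial }_{t}\left({e}^{-t}{S}_{x}\right)=-u\left(x,t\right)+u\left(x-1,t\right)$. A numerical residual check (finite differences approximate the time derivative):

```python
import math

def u(x, t):
    """Explicit solution of the transport lattice equation (a Poisson CDF)."""
    if x < 0:
        return 0.0
    return math.exp(-t) * sum(t ** j / math.factorial(j) for j in range(x + 1))

# Residual of u_t = -u(x, t) + u(x - 1, t), with a central difference in t.
h = 1e-6
for x in range(-2, 6):
    for t in (0.5, 1.0, 2.0):
        ut = (u(x, t + h) - u(x, t - h)) / (2.0 * h)
        assert abs(ut + u(x, t) - u(x - 1, t)) < 1e-8

# u(x, 0) = 1 = R for x >= 0 and u(x, t) = 0 = r for x < 0, yet u is not
# constant: the strong maximum principle fails (here a = 0 is the culprit).
assert u(3, 0.0) == 1.0 and 0.0 < u(3, 1.0) < 1.0 and u(-1, 1.0) == 0.0
```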

#### Remark 5.7.

To see that (H5) does not suffice, consider the time scale $𝕋={ℕ}_{0}$ and the following linear equation ($f\equiv 0$):

${u}^{\mathrm{\Delta }}\left(x,t\right)=\frac{1}{2}u\left(x+1,t\right)-u\left(x,t\right)+\frac{1}{2}u\left(x-1,t\right),x\in ℤ,t\in {ℕ}_{0},$

which corresponds to (2.1) with $a=c=\frac{1}{2}$, $b=-1$ and $f\equiv 0$. This equation holds if and only if

$u\left(x,t+1\right)=\frac{1}{2}u\left(x+1,t\right)+\frac{1}{2}u\left(x-1,t\right),x\in ℤ,t\in {ℕ}_{0}.$

For the initial condition

${u}_{x}^{0}=\frac{1+{\left(-1\right)}^{x}}{2},\mathit{ }x\in ℤ,$

we obtain

$u\left(x,t\right)=\frac{1+{\left(-1\right)}^{x+t}}{2},\mathit{ }x\in ℤ,t\in {ℕ}_{0},$

which violates the strong maximum principle: for $r=0$ and $R=1$, the solution attains both extremal values at every time $t\in {ℕ}_{0}$, but it is not constant.
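A direct simulation of the averaging recursion with an alternating 0/1 initial condition (one choice that exhibits the failure) confirms that the solution keeps attaining both extremal values without ever being constant:

```python
import numpy as np

N = 64                                  # even ring size, emulating Z
x = np.arange(N)
u = (1 + (-1) ** x) / 2                 # alternating 0/1 initial condition

for t in range(1, 11):
    u = 0.5 * np.roll(u, -1) + 0.5 * np.roll(u, 1)
    # The pattern merely flips each step: u(x, t) = (1 + (-1)^(x + t)) / 2.
    assert np.array_equal(u, (1 + (-1) ** (x + t)) / 2)
    # Both extremal values r = 0 and R = 1 are attained at every time,
    # although the solution is never constant.
    assert u.min() == 0.0 and u.max() == 1.0
```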

#### Remark 5.8.

Finally, let $a,b,c$ be an arbitrary triple satisfying ($\overline{\mathrm{H4}}$), and $𝕋=\mu {ℕ}_{0}=\left\{0,\mu ,2\mu ,\mathrm{\dots }\right\}$, where $\mu >0$ satisfies ($\overline{\mathrm{H5}}$). Consider problem (2.1) with

${u}_{x}^{0}=\left\{\begin{array}{cc}1,\hfill & x\ne 0,\hfill \\ 0,\hfill & x=0,\hfill \end{array}\mathit{ }\text{and}\mathit{ }f\left(u,x,t\right)=\left(b+\frac{1}{\mu }\right)\left(1-u\right).$

We have $m=0$ and $M=1$. For $r=0$ and $R=1$ the function f satisfies (H6), but not ($\overline{\mathrm{H6}}$). Using (2.1), we calculate

$u\left(0,\mu \right)=\mu au\left(1,0\right)+\left(1+\mu b\right)u\left(0,0\right)+\mu cu\left(-1,0\right)+\mu f\left(u\left(0,0\right),0,0\right)=\mu \left(a+c\right)+\left(1+\mu b\right)\stackrel{\text{(}\overline{\mathrm{H4}}\text{)}}{=}1.$

Therefore, $u\left(0,\mu \right)=1=R$ although $u\left(0,0\right)=0$, i.e., the solution attains the extremal value R at a positive time without being constant, so the strong maximum principle fails.
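A one-step numerical check of this computation, with the illustrative choices $a=c=1/2$, $b=-1$, $\mu =1/2$:

```python
import numpy as np

a, b, c = 0.5, -1.0, 0.5        # a, c > 0, a + b + c = 0, i.e., (H4-bar)
mu = 0.5                         # mu < -1/b = 1, i.e., (H5-bar)

def f(u):
    # Satisfies (H6) with equality, but not the strict inequality of (H6-bar).
    return (b + 1.0 / mu) * (1.0 - u)

N = 16
u = np.ones(N)
u[0] = 0.0                       # u^0 = 1 except at a single site

u1 = mu * a * np.roll(u, -1) + (1 + mu * b) * u + mu * c * np.roll(u, 1) + mu * f(u)
# The depressed site jumps back to R = 1 after one step, although u is not
# constant at time 0 -- the strong maximum principle cannot hold here.
assert abs(u1[0] - 1.0) < 1e-12
assert np.all((u1 >= -1e-12) & (u1 <= 1.0 + 1e-12))   # weak maximum principle
```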

## References

• [1]

L. Adamec, A note on continuous dependence of solutions of dynamic equations on time scales, J. Difference Equ. Appl. 17 (2011), 647–656.

• [2]

J. Bell, Some threshold results for models of myelinated nerves, Math. Biosci. 54 (1981), 181–190.

• [3]

M. Bohner, M. Federson and J. G. Mesquita, Continuous dependence for impulsive functional dynamic equations involving variable time scales, Appl. Math. Comput. 221 (2013), 383–393.

• [4]

M. Bohner and A. Peterson, Dynamic Equations on Time Scales: An Introduction with Applications, Birkhäuser, Boston, 2001.  Google Scholar

• [5]

T. Caraballo, F. Morillas and J. Valero, Asymptotic behaviour of a logistic lattice system, Discrete Contin. Dyn. Syst. 34 (2014), no. 10, 4019–4037.

• [6]

S.-N. Chow, Lattice dynamical systems, Dynamical Systems, Lecture Notes in Math. 1822, Springer, Berlin (2003), 1–102.  Google Scholar

• [7]

S.-N. Chow and J. Mallet-Paret, Pattern formation and spatial chaos in lattice dynamical systems, IEEE Trans. Circuits Syst. 42 (1995), 746–751.

• [8]

S.-N. Chow, J. Mallet-Paret and W. Shen, Traveling waves in lattice dynamical systems, J. Differential Equations 149 (1998), 248–291.

• [9]

S.-N. Chow and W. Shen, Dynamics in a discrete Nagumo equation: Spatial topological chaos, SIAM J. Appl. Math. 55 (1995), 1764–1781.

• [10]

T. Erneux and G. Nicolis, Propagating waves in discrete bistable reaction diffusion systems, Phys. D 67 (1993), 237–244.

• [11]

M. Federson, J. G. Mesquita and A. Slavík, Basic results for functional differential and dynamic equations involving impulses, Math. Nachr. 286 (2013), no. 2–3, 181–204.

• [12]

A. Feintuch and B. Francis, Infinite chains of kinematic points, Automatica J. IFAC 48 (2012), no. 5, 901–908.

• [13]

M. Friesl, A. Slavík and P. Stehlík, Discrete-space partial dynamic equations on time scales and applications to stochastic processes, Appl. Math. Lett. 37 (2014), 86–90.

• [14]

B. M. Garay, S. Hilger and P. E. Kloeden, Continuous dependence in time scale dynamics, Proceedings of the Sixth International Conference on Difference Equations (Augsburg 2001), CRC Press, Boca Raton (2004), 279–287.  Google Scholar

• [15]

N. T. Ha, N. H. Du, L. C. Loi and D. D. Thuan, On the convergence of solutions to dynamic equations on time scales, Qual. Theory Dyn. Syst. (2015), 10.1007/s12346-015-0166-8.

• [16]

S. Hilger, Analysis on measure chains – A unified approach to continuous and discrete calculus, Results Math. 18 (1990), 18–56.

• [17]

C. Hu and B. Li, Spatial dynamics for lattice differential equations with a shifting habitat, J. Differential Equations 259 (2015), 1967–1989.

• [18]

H. Hupkes and E. Van Vleck, Travelling waves for complete discretizations of reaction diffusion systems, J. Dynam. Differential Equations 28 (2016), 955–1006.

• [19]

J. P. Keener, Propagation and its failure in coupled systems of discrete excitable cells, SIAM J. Appl. Math. 47 (1987), 556–572.

• [20]

P. E. Kloeden, A Gronwall-like inequality and continuous dependence on time scales, Nonlinear Analysis and Applications: To V. Lakshmikantham on his 80th Birthday, Kluwer Academic, Dordrecht (2003), 645–659.  Google Scholar

• [21]

J. Mallet-Paret, Traveling waves in spatially discrete dynamical systems of diffusive type, Dynamical Systems, Lecture Notes in Math. 1822, Springer, Berlin (2003), 231–298.  Google Scholar

• [22]

G. A. Monteiro and M. Tvrdý, Generalized linear differential equations in a Banach space: Continuous dependence on a parameter, Discrete Contin. Dyn. Syst. 33 (2013), no. 1, 283–303.

• [23]

A. Peterson and B. Thompson, Henstock–Kurzweil delta and nabla integrals, J. Math. Anal. Appl. 323 (2006), 162–178.

• [24]

C. Pötzsche, Geometric Theory of Discrete Nonautonomous Dynamical Systems, Springer, Berlin, 2010.

• [25]

Š. Schwabik, Generalized Ordinary Differential Equations, World Scientific, River Edge, 1992.  Google Scholar

• [26]

Š. Schwabik, Abstract Perron–Stieltjes integral, Math. Bohem. 121 (1996), 425–447.  Google Scholar

• [27]

A. Slavík, Dynamic equations on time scales and generalized ordinary differential equations, J. Math. Anal. Appl. 385 (2012), 534–550.

• [28]

A. Slavík and P. Stehlík, Explicit solutions to dynamic diffusion-type equations and their time integrals, Appl. Math. Comput. 234 (2014), 486–505.

• [29]

A. Slavík and P. Stehlík, Dynamic diffusion-type equations on discrete-space domains, J. Math. Anal. Appl. 427 (2015), no. 1, 525–545.

• [30]

P. Stehlík and J. Volek, Transport equation on semidiscrete domains and Poisson–Bernoulli processes, J. Difference Equ. Appl. 19 (2013), no. 3, 439–456.

• [31]

P. Stehlík and J. Volek, Maximum principles for discrete and semidiscrete reaction-diffusion equation, Discrete Dyn. Nat. Soc. 2015 (2015), Article ID 791304.

• [32]

V. Volpert, Elliptic Partial Differential Equations: Volume 2 Reaction-Diffusion Equations, Springer Monogr. Math. 104, Springer, Basel, 2014.  Google Scholar

• [33]

B. Wang, Dynamics of systems of infinite lattices, J. Differential Equations 221 (2006), 224–245.

• [34]

B. Wang, Asymptotic behavior of non-autonomous lattice systems, J. Math. Anal. Appl. 331 (2007), 121–136.

• [35]

H. F. Weinberger, Long time behavior of a class of biological models, SIAM J. Math. Anal. 13 (1982), 353–396.

• [36]

B. Zinner, Existence of traveling wavefront solutions for the discrete Nagumo equation, J. Differential Equations 96 (1992), 1–27.

• [37]

B. Zinner, G. Harris and W. Hudson, Traveling wavefronts for the discrete Fisher’s equation, J. Differential Equations 105 (1993), 46–62.

Revised: 2016-10-03

Accepted: 2017-02-16


Funding Source: Grantová Agentura České Republiky

Award identifier / Grant number: GA15-07690S

All three authors acknowledge the support by the Czech Science Foundation, grant number GA15-07690S.

Citation Information: Advances in Nonlinear Analysis, Volume 8, Issue 1, Pages 303–322, ISSN (Online) 2191-950X, ISSN (Print) 2191-9496,
