We analyze a second-order accurate finite difference method for a spatially periodic convection-diffusion problem. The method is a time stepping method based on the Strang splitting of the spatially semidiscrete solution, in which the diffusion part uses the Crank–Nicolson method and the convection part the explicit forward Euler approximation on a shorter time interval. When the diffusion coefficient is small, the forward Euler method may also be used for the diffusion term.
1 Introduction
In this paper we shall consider the numerical solution of the following convection-diffusion problem in the cube $\Omega = (0,2\pi)^d$:

(1.1) $\dfrac{\partial U}{\partial t} = \operatorname{div}(a\nabla U) + b\cdot\nabla U$ in $\Omega$ for $t>0$, with $U(0)=V$,

under periodic boundary conditions, where the positive definite $d\times d$ matrix $a(x) = (a_{ij}(x))$ and the vector $b = b(x) = (b_1,\dots,b_d)$ are periodic and smooth.
Equation (1.1) is a special case of the initial-value problem for the operator equation

(1.2) $\dfrac{dU}{dt} = -\mathcal{A}U + \mathcal{B}U$ for $t\ge 0$, with $U(0)=V$,

where $\mathcal{A}$ and $\mathcal{B}$ represent different physical processes, in our case
$\mathcal{A}U = -\operatorname{div}(a\nabla U)$, $\mathcal{B}U = b\cdot\nabla U$,
with $\mathcal{A}$ representing a slow and $\mathcal{B}$ a fast physical process.
The solution of (1.2) may be formally expressed as
$U(t) = \mathcal{E}(t)V = e^{-t(\mathcal{A}-\mathcal{B})}V$ for $t\ge 0$.
To discretize such an equation in time, a common approach is to split $\mathcal{A}-\mathcal{B}$ into $\mathcal{A}$ and $-\mathcal{B}$. With $k$ a time step, one introduces $t_n = nk$, and one may then use the second-order symmetric Strang splitting [8, 7] on each time interval $(t_{n-1}, t_n)$,

(1.3) $\mathcal{E}(k) = e^{-k(\mathcal{A}-\mathcal{B})} \approx e^{\frac12 k\mathcal{B}}\,e^{-k\mathcal{A}}\,e^{\frac12 k\mathcal{B}}$,

which thus locally involves solutions of $U_t = -\mathcal{A}U$ and $U_t = \mathcal{B}U$; see, e.g., Hundsdorfer and Verwer [5] and references therein.
The three exponentials on the right in (1.3) are then approximated by rational functions of $\mathcal{A}$ and $\mathcal{B}$, respectively, such as the Crank–Nicolson method $r_0(k\mathcal{A})$, with $r_0(\lambda) = (1-\frac12\lambda)/(1+\frac12\lambda)$, for the middle factor. The choice of approximation involving $\mathcal{B}$ is less obvious. It has been suggested in the context of numerical weather prediction, e.g. in Baldauf [1], that time steps of different lengths could be used for the two processes, with shorter time intervals for the fast process and longer ones for the slow one. Gassmann and Herzog [3] also discuss the difficulties associated with splitting in such situations. For the case of reaction-diffusion equations, see Estep, Ginting, Ropp, Shadid and Tavener [2] and references therein. Our aim in this work is to discuss this problem in a somewhat rigorous fashion for our simple model problem.
We note that if $\mathcal{A}$ and $\mathcal{B}$ commute, which holds for (1.1) when $a$ and $b$ are independent of $x$, then
$e^{-k(\mathcal{A}-\mathcal{B})} = e^{-k\mathcal{A}}e^{k\mathcal{B}} = e^{\frac12 k\mathcal{B}}e^{-k\mathcal{A}}e^{\frac12 k\mathcal{B}}$,
so that the error in (1.3) is zero. When $\mathcal{A}$ and $\mathcal{B}$ do not commute, then formally, by Taylor expansion,
$e^{-k(\mathcal{A}-\mathcal{B})} - e^{-k\mathcal{A}}e^{k\mathcal{B}} = O(k^2)$ and $e^{-k(\mathcal{A}-\mathcal{B})} - e^{\frac12 k\mathcal{B}}e^{-k\mathcal{A}}e^{\frac12 k\mathcal{B}} = O(k^3)$.
Error estimates for the time splitting, depending on the regularity of the initial values, may be found in Jahnke and Lubich [6], Hansen and Ostermann [4] and references therein.
In the method that we study in this paper, we begin by discretizing (1.1) in the spatial variables. We let $h = 2\pi/M$, where $M$ is a positive integer, and define a corresponding uniform mesh

(1.4) $\Omega_h = \{x = x_j = jh : j = (j_1,\dots,j_d),\ 1\le j_l\le M,\ l = 1,\dots,d\}$.
For $M$-periodic vectors $u$ with elements $u_j$, corresponding to the mesh point $x_j$, and with $u_{j+Me_l} = u_j$, we consider the following simple second-order finite difference approximation of (1.1):

(1.5) $\dfrac{du}{dt} = \sum_{i,j=1}^d \bar\partial_j(\acute a_{ij}\,\partial_i u) + \sum_{j=1}^d \tfrac12 b_j(\partial_j + \bar\partial_j)u$ in $\Omega_h$ for $t>0$, with $u(0) = v$.

Here $\partial_j,\bar\partial_j$ are forward and backward difference quotients in the direction of $x_j$, $\acute a_{ij}(x_l) = a_{ij}(x_l + \frac12 he_i)$, and $v$ is the restriction of $V$ to $\Omega_h$. Problem (1.5) may be written as a system of ODEs in time,

(1.6) $\dfrac{du}{dt} = -Au + Bu$ for $t>0$, with $u(0) = v$,

where the $M^d\times M^d$ matrices $A$ and $B$ correspond to the differential operators $\mathcal{A}$ and $\mathcal{B}$. It is then to (1.6) that we will apply the splitting approach.
In the one-dimensional case we may take, with $h = 2\pi/M$ the mesh width and $x_l = lh$,

$A = \dfrac{1}{h^2}\begin{bmatrix} d(x_1) & -a(x_{1.5}) & 0 & \cdots & -a(x_{0.5}) \\ -a(x_{1.5}) & d(x_2) & -a(x_{2.5}) & \cdots & 0 \\ 0 & -a(x_{2.5}) & d(x_3) & \cdots & 0 \\ \vdots & & \ddots & \ddots & -a(x_{M-0.5}) \\ -a(x_{0.5}) & 0 & \cdots & -a(x_{M-0.5}) & d(x_M) \end{bmatrix}$,

where $d(x_l) = a(x_l + 0.5h) + a(x_l - 0.5h)$ (recall $a(x_{M+0.5}) = a(x_{0.5})$), and

$B = \dfrac{1}{2h}\begin{bmatrix} 0 & b(x_1) & 0 & \cdots & -b(x_1) \\ -b(x_2) & 0 & b(x_2) & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & -b(x_{M-1}) & 0 & b(x_{M-1}) \\ b(x_M) & 0 & \cdots & -b(x_M) & 0 \end{bmatrix}$.
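As an illustration, the one-dimensional periodic matrices above are straightforward to assemble. The sketch below (NumPy; the helper name `build_matrices` and the sample coefficients are illustrative choices, not from the paper) transcribes the formulas and checks two structural properties used later: both operators annihilate constant vectors, and $A$ is symmetric positive semidefinite.

```python
import numpy as np

def build_matrices(a, b, M):
    """Assemble the 1D periodic finite difference matrices A and B.

    A discretizes -(a u_x)_x using the shifted coefficients a(x_{l+1/2});
    B discretizes b u_x with centered differences; the mesh is x_l = l*h,
    h = 2*pi/M, with periodic indexing.
    """
    h = 2 * np.pi / M
    x = h * np.arange(1, M + 1)          # mesh points x_1, ..., x_M
    ah = a(x + 0.5 * h)                   # a(x_{l+1/2}); periodicity is automatic
    A = np.zeros((M, M))
    B = np.zeros((M, M))
    for l in range(M):
        lm, lp = (l - 1) % M, (l + 1) % M
        A[l, l] = ah[l] + ah[lm]          # d(x_l) = a(x_{l+1/2}) + a(x_{l-1/2})
        A[l, lp] -= ah[l]
        A[l, lm] -= ah[lm]
        B[l, lp] += b(x[l])
        B[l, lm] -= b(x[l])
    return A / h**2, B / (2 * h), x

M = 40
A, B, x = build_matrices(lambda y: 1 + 0.5 * np.cos(y),
                         lambda y: 1 + 0.5 * np.sin(y), M)
# Row sums vanish: both operators annihilate constants.
print(np.abs(A @ np.ones(M)).max() < 1e-12, np.abs(B @ np.ones(M)).max() < 1e-12)
# A is symmetric positive semidefinite.
print(np.allclose(A, A.T), np.linalg.eigvalsh(A).min() > -1e-10)
```

The skew/symmetric split of $B$ used in Section 2 can be formed from this output as `0.5 * (B - B.T)` and `0.5 * (B + B.T)`.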
The solution of (1.6), the spatially semidiscrete solution, is
$u(t) = E(t)v = e^{-t(A-B)}v$,
and we shall see that the error in this approximation is $O(h^2)$, under appropriate regularity assumptions. For the time discretization we shall work with basic time intervals of length $k = 1/N$, where $N = 2p$ is an even positive integer, and then apply the Strang splitting (1.3) on each of these intervals, so that

(1.7) $E(k) = e^{-k(A-B)} \approx e^{\frac12 kB}\,e^{-kA}\,e^{\frac12 kB}$,

and finally approximate the three exponential factors.

We would like the time discretization error to match that of the discretization in space. For a second-order time discretization method, this requires $k = O(h)$. For $k\le\gamma h^2$, with $\gamma$ appropriate, the problem may be solved by explicit approximations, but since we prefer $N$ to be relatively small, we will consider methods with $h$ and $k$ of the same order.
For the approximation of $e^{-kA}$ in (1.7) we shall use the Crank–Nicolson method. Then, in order to approximate $e^{\frac12 kB}$, we would like to use the forward Euler method on a time interval of length $k_1 < k$. Assuming for the moment that $b$ is constant, and thus $B$ skew-symmetric, we note that
$\|I + k_1B\| = (1 + k_1^2\|B\|^2)^{1/2} \le 1 + Ck_1^2h^{-2}$.
Here and below, $\|\cdot\|$ denotes the standard matrix norm induced by the $\ell_2$ vector inner product. Stability therefore holds if $k_1^2h^{-2} \le Ck_1$, i.e. if $k_1 \le Ch^2$. Since $k$ should be of the same order as $h$, this makes it natural to choose $k_1 = k^2$. We thus subdivide the time intervals of length $k$ into $N$ subintervals of length $k_2 = k/N = k^2$ and apply an explicit forward Euler approximation on each of these. As we shall see, this approximation matches the second-order accuracy of the Crank–Nicolson method.
Thus the diffusion part of the equation is approximated on intervals of length $k$ and the convection part on intervals of length $k_2$. We consider thus the time discrete solution at time $t_n = nk$,

(1.8) $u^n = E_k^nv$, where $E_k = \mathcal{B}_kr_0(kA)\mathcal{B}_k$, with $\mathcal{B}_k = (I + k^2B)^p$.

In fact, in the successive time stepping, only the matrix $\tilde{\mathcal{B}}_k = \mathcal{B}_k^2 = (I + k^2B)^N$ is used, except in the first and last steps. The method proposed thus replaces the solution at each time step of a nonsymmetric problem by the solution of a symmetric problem, plus applications of an explicit method, repeated successively $N/2 = p$ times before and after the diffusion approximation, thus covering an interval of length $Nk_2 = k$. We re-emphasize that the splitting is applied to the spatially semidiscrete problem and not to the continuous problem.
The analysis sketched above is carried out in Section 2; it uses discrete Sobolev norms. After this, in Section 3, we discuss the case when equation (1.1) contains a small diffusion coefficient $\varepsilon$. In this case we are able to show that if $\varepsilon\le\gamma h$ with $\gamma$ sufficiently small, then the approximation of $e^{-k\varepsilon A}$ can be done by the forward Euler method, and, with the convection part treated as before, we obtain a purely explicit second-order approximation method. We close the paper by presenting some numerical illustrations in Section 4.
2 Basic Error Analysis
For the periodic problem in $\Omega = (0,2\pi)^d$, with $\Omega_h$ as in (1.4), we introduce the discrete inner product and norm
$(v,w)_h = h^d\sum_{x_j\in\Omega_h}v_jw_j$ and $\|v\|_h = (v,v)_h^{1/2}$.
Further, we set
$\partial_ju(x) = \dfrac1h\,(u(x + he_j) - u(x))$ and $\partial^\alpha u = \partial_1^{\alpha_1}\cdots\partial_d^{\alpha_d}u$ for $\alpha = (\alpha_1,\dots,\alpha_d)$,
and define the discrete Sobolev norm, with $|\alpha| = \alpha_1 + \dots + \alpha_d$,
$\|u\|_{h,s} = \Bigl(\sum_{|\alpha|\le s}\|\partial^\alpha u\|_h^2\Bigr)^{1/2}$ for $s\ge 0$.
We shall also use $\bar\partial_ju(x) = \frac1h\,(u(x) - u(x - he_j))$. We define
$\|w\|_{\mathcal{C}^s(R)} = \max_{|\alpha|\le s}\sup_{x\in R}|D^\alpha w(x)|$, where $D^\alpha = (\partial/\partial x_1)^{\alpha_1}\cdots(\partial/\partial x_d)^{\alpha_d}$.
We shall write $\mathcal{C}^s$ for $\mathcal{C}^s(\Omega)$, and $\mathcal{C}$ for $\mathcal{C}^0$. We note that, defining $U_h$ to be the restriction to the mesh $\Omega_h$ of a smooth function $U$, i.e. $(U_h)_j = U(x_j)$, we have $\|U_h\|_{h,s} \le C\|U\|_{\mathcal{C}^s}$.
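These discrete norms are straightforward to realize on a one-dimensional mesh. The minimal sketch below (NumPy; helper names are illustrative) implements $(v,w)_h$, the forward difference quotient, and $\|\cdot\|_{h,s}$, and checks them on $U = \sin x$, for which $\|U_h\|_h = \sqrt\pi$ exactly on the uniform mesh, and $\|U_h\|_{h,1}\le\sqrt{2\pi}\,\|U\|_{\mathcal{C}^1}$.

```python
import numpy as np

M = 32
h = 2 * np.pi / M
x = h * np.arange(1, M + 1)

def ip_h(v, w):                  # discrete inner product (v, w)_h = h * sum v_j w_j
    return h * np.sum(v * w)

def dplus(v):                    # forward difference quotient (periodic)
    return (np.roll(v, -1) - v) / h

def norm_hs(v, s):               # discrete Sobolev norm ||v||_{h,s}
    total, w = 0.0, v.copy()
    for _ in range(s + 1):       # accumulate ||d^j v||_h^2 for j = 0, ..., s
        total += ip_h(w, w)
        w = dplus(w)
    return np.sqrt(total)

v = np.sin(x)                    # restriction of U(x) = sin x to the mesh
print(abs(np.sqrt(ip_h(v, v)) - np.sqrt(np.pi)) < 1e-12)   # ||U_h||_h = sqrt(pi)
print(norm_hs(v, 1) <= np.sqrt(2 * np.pi) + 1e-12)         # bounded by C ||U||_{C^1}
```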
Consider now the matrices $A$ and $B$ in our convection-diffusion problem (1.2). They satisfy, with $C$ independent of $h$,

(2.1) $\|Au\|_h \le C\|u\|_{h,2}$ and $\|Bu\|_h \le C\|u\|_{h,1}$.
Setting
$Q(x) = \{y : |y_s - x_s|\le h,\ s = 1,\dots,d\}$,
we have, since the terms in $AU_h$ are symmetric difference quotients of $U$ at the mesh points $x_j$, and the terms in $(\mathcal{A}U)_h$ are the corresponding derivatives of $U$ at $x_j$,

(2.2) $|(AU_h)(x_j) - (\mathcal{A}U)(x_j)| \le Ch^2\|U\|_{\mathcal{C}^4(Q(x_j))}$

and

(2.3) $|(BU_h)(x_j) - (\mathcal{B}U)(x_j)| \le Ch^2\|U\|_{\mathcal{C}^3(Q(x_j))}$,

expressing, in particular, that (1.5) is a second-order approximation of (1.1).
We note that, for $|\alpha| = s$,

(2.4) $\|\partial^\alpha Au - A\partial^\alpha u\|_h \le C\|u\|_{h,s+1}$ and $\|\partial^\alpha Bu - B\partial^\alpha u\|_h \le C\|u\|_{h,s}$,

and we may conclude from (2.1) that

(2.5) $\|Au\|_{h,s} \le C\|u\|_{h,s+2}$ and $\|Bu\|_{h,s} \le C\|u\|_{h,s+1}$.
The matrix $A$ is positive semidefinite, with

(2.6) $\|v\|_{h,1}^2 \le C\bigl((Av,v)_h + \|v\|_h^2\bigr)$.

Further, it follows easily from the definition of $A$ that

(2.7) $\|Av\|_h \le \alpha h^{-2}\|v\|_h$, where $\alpha = 4d\,\lambda_{\max}(a)$.
For $B = (b_{il})$ we have
$b_{il} = \pm b_j(x_i)$ if $l = i\pm e_j$, $j = 1,\dots,d$, and $b_{il} = 0$ for other $l$,
and then $b_{i+e_j,\,i} = -b_j(x_{i+e_j})$. It follows from this that $B = B_0 + B_1$, with
$B_0 = \tfrac12(B - B^T)$, $B_1 = \tfrac12(B + B^T)$.
Here $B_0$ is skew-symmetric and

(2.8) $\|B_0\| \le h^{-1}\beta_0$, $\beta_0 = \sum_{j=1}^d\|b_j\|_{\mathcal{C}}$, and $\|B_1\| \le \beta_1 = \sum_{j=1}^d\|\partial b_j/\partial x_j\|_{\mathcal{C}}$.

Note also that $(B_0v,v) = 0$ for all $v$.
We begin with the straightforward standard analysis of the spatially semidiscrete problem (1.6), which we include for completeness. We first show the stability of the solution operator of (1.6) in discrete Sobolev norms.

Lemma 2.1.
Let $E(t) = e^{-t(A-B)}$. Then, for any $s\ge 0$, with $C = C_s$ independent of $h$,
$\|E(t)v\|_{h,s} \le e^{Ct}\|v\|_{h,s}$ for $t\ge 0$.
Proof.
Let $s\ge 0$ and let $|\alpha| = s$. From (1.6) we find, for $u(t) = E(t)v$,

(2.9) $(\partial^\alpha u_t,\partial^\alpha u)_h + (\partial^\alpha Au,\partial^\alpha u)_h = (\partial^\alpha Bu,\partial^\alpha u)_h$.

Here, by (2.4),

(2.10) $|(\partial^\alpha Au,\partial^\alpha u)_h - (A\partial^\alpha u,\partial^\alpha u)_h| \le C\|u\|_{h,s+1}\|u\|_{h,s}$.

Further, since $B = B_0 + B_1$ with $B_0$ skew-symmetric,
$|(\partial^\alpha Bu,\partial^\alpha u)_h| \le |(B\partial^\alpha u,\partial^\alpha u)_h| + C\|u\|_{h,s}^2 \le C\|u\|_{h,s}^2$.
Therefore, by (2.9),
$\tfrac12\dfrac{d}{dt}\|\partial^\alpha u\|_h^2 + (A\partial^\alpha u,\partial^\alpha u)_h + \|\partial^\alpha u\|_h^2 \le C\|u\|_{h,s+1}\|u\|_{h,s}$.
Hence, using (2.6) and summing over $|\alpha|\le s$, we find, with $c>0$,
$\dfrac{d}{dt}\|u\|_{h,s}^2 + c\|u\|_{h,s+1}^2 \le C\|u\|_{h,s+1}\|u\|_{h,s} \le c\|u\|_{h,s+1}^2 + C\|u\|_{h,s}^2$,
or, with $C = C_s$,

(2.11) $\dfrac{d}{dt}\|u\|_{h,s}^2 \le C\|u\|_{h,s}^2$,

from which the lemma follows. ∎
Note that the special case of $e^{-tA}$ is included by taking $B = 0$.
As a consequence, we have the following second-order error estimate.

Theorem 2.1.
We have, for the solutions of (1.6) and (1.1), with $v = V_h$ and $C = C_T$,
$\|u(t) - U_h(t)\|_h \le Ch^2\|V\|_{\mathcal{C}^4}$ for $0\le t\le T < \infty$.
Proof.
Setting $\omega = u - U_h$, we find
$\dfrac{d\omega}{dt} = -A\omega + B\omega + \rho$ in $\Omega_h$ for $t>0$, with $\omega(0) = 0$,
where $\rho = ((\mathcal{A}U)_h - AU_h) - ((\mathcal{B}U)_h - BU_h)$. Here, by (2.2) and (2.3),
$\|\rho(t)\|_h \le \|((\mathcal{A}U)_h - AU_h)(t)\|_h + \|((\mathcal{B}U)_h - BU_h)(t)\|_h \le Ch^2\|U(t)\|_{\mathcal{C}^4}$,
and hence
$\omega(t) = \int_0^tE(t-s)\rho(s)\,ds$, where $\|\rho(s)\|_h \le Ch^2\|U(s)\|_{\mathcal{C}^4}$.
Hence, by Lemma 2.1, since $\|U(s)\|_{\mathcal{C}^4} \le C\|V\|_{\mathcal{C}^4}$,
$\|\omega(t)\|_h \le C\int_0^t\|\rho(s)\|_h\,ds \le Ch^2\int_0^t\|U(s)\|_{\mathcal{C}^4}\,ds \le Ch^2\|V\|_{\mathcal{C}^4}$. ∎
Turning to the analysis of the time discretization, we first show the stability of $e^{tB}$.

Lemma 2.2.
For any $s\ge 0$ we have, with $C_s$ independent of $h$,

(2.12) $\|e^{tB}v\|_{h,s} \le e^{C_st}\|v\|_{h,s}$ for $t\ge 0$.

Here we may choose $C_0 = \beta_1$, with $\beta_1$ as in (2.8).
Proof.
Let $s\ge 0$ and let $|\alpha| = s$. Then, for the solution of (1.6) with $A = 0$,
$(\partial^\alpha u_t,\partial^\alpha u)_h = (\partial^\alpha Bu,\partial^\alpha u)_h = (B\partial^\alpha u,\partial^\alpha u)_h + Q_\alpha$,
where $|Q_\alpha| \le C_s\|u\|_{h,s}^2$. Since
$(Bu,u)_h = (B_1u,u)_h \le \beta_1\|u\|_h^2$,
we conclude that (2.11) holds, which shows (2.12). For $s = 0$ we have $Q_\alpha = 0$, and hence (2.11) holds with $C = 2\beta_1$, which implies (2.12) with $C_0 = \beta_1$. ∎
We now show the following error estimate for the Strang splitting.

Lemma 2.3.
We have, with $C$ independent of $h$ and $k$,
$\|e^{-k(A-B)}v - e^{\frac12 kB}e^{-kA}e^{\frac12 kB}v\|_h \le Ck^3\|v\|_{h,6}$.
Proof.
Setting $F(k) = e^{-k(A-B)} - e^{\frac12 kB}e^{-kA}e^{\frac12 kB}$ and noting that $F(0) = F'(0) = F''(0) = 0$, we may use Taylor's formula to obtain
$\|F(k)v\|_h = \|(F(k) - F(0) - kF'(0) - \tfrac12 k^2F''(0))v\|_h \le \tfrac16 k^3\sup_{s\le k}\|F'''(s)v\|_h$.
Here, for $s\le k$, using (2.1) and Lemmas 2.1 and 2.2,
$\|F'''(s)v\|_h \le \|(A-B)^3e^{-s(A-B)}v\|_h + C\sum_{i_1+i_2+i_3=3}\|B^{i_1}e^{\frac12 sB}A^{i_2}e^{-sA}B^{i_3}e^{\frac12 sB}v\|_h \le C\Bigl(\|v\|_{h,6} + \sum_{i_1+i_2+i_3=3}\|v\|_{h,\,i_1+2i_2+i_3}\Bigr) \le C\|v\|_{h,6}$,
which shows the lemma. ∎
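The $O(k^3)$ behavior of the symmetric splitting error can be observed directly on small matrices. The sketch below (NumPy; the truncated-Taylor matrix exponential and the parameter choices are assumptions made for the illustration) uses the variable coefficients $a = 1 + \frac12\cos x$, $b = 1 + \frac12\sin x$ of Section 4, for which $A$ and $B$ do not commute, and halves $k$ once; the error ratio should then be near $2^3 = 8$.

```python
import numpy as np

def expm_taylor(X, terms=60):
    """Matrix exponential by plain Taylor series; adequate for ||X|| of order one."""
    E = np.eye(X.shape[0])
    T = np.eye(X.shape[0])
    for n in range(1, terms):
        T = T @ X / n
        E = E + T
    return E

M = 8
h = 2 * np.pi / M
x = h * np.arange(1, M + 1)
ah = 1 + 0.5 * np.cos(x + 0.5 * h)     # a(x_{l+1/2}) for a(x) = 1 + cos(x)/2
bv = 1 + 0.5 * np.sin(x)               # b(x) = 1 + sin(x)/2
A = np.zeros((M, M))
B = np.zeros((M, M))
for l in range(M):
    lm, lp = (l - 1) % M, (l + 1) % M
    A[l, l] = ah[l] + ah[lm]
    A[l, lp], A[l, lm] = -ah[l], -ah[lm]
    B[l, lp], B[l, lm] = bv[l], -bv[l]
A /= h**2
B /= 2 * h

def split_err(k):
    exact = expm_taylor(-k * (A - B))
    strang = expm_taylor(0.5 * k * B) @ expm_taylor(-k * A) @ expm_taylor(0.5 * k * B)
    return np.linalg.norm(exact - strang, 2)

e1, e2 = split_err(1.0 / 64), split_err(1.0 / 128)
print(e1 / e2)   # approaches 2^3 = 8 for an O(k^3) splitting error
```

With constant coefficients instead, $A$ and $B$ commute and `split_err` returns values at rounding level, matching the remark after (1.3).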
We now turn to the time stepping operator $E_k$ defined in (1.8) and begin with the following stability result.

Lemma 2.4.
Let $k\le\gamma h$. Then, with $\beta_0,\beta_1$ as in (2.8),

(2.13) $\|\mathcal{B}_k\| = \|(I + k^2B)^p\| \le e^{\frac12\mu k}$, where $\mu = \tfrac12(\gamma\beta_0)^2 + \beta_1$.

Further, $E_k = \mathcal{B}_kr_0(kA)\mathcal{B}_k$ is stable, and

(2.14) $\|E_k^n\| \le e^{\mu T}$ for $nk\le T$.
Proof.
Since $B_0$ is skew-symmetric, we have
$\|I + k^2B_0\|^2 = 1 + k^4\|B_0\|^2 \le 1 + k^4h^{-2}\beta_0^2 \le 1 + (\gamma\beta_0)^2k^2 \le e^{(\gamma\beta_0)^2k^2}$.
Hence

(2.15) $\|I + k^2B\| \le \|I + k^2B_0\| + k^2\|B_1\| \le e^{\frac12(\gamma\beta_0)^2k^2} + \beta_1k^2 \le e^{\mu k^2}$,

which shows (2.13) since $2pk = 1$. Then (2.14) follows, since $\|r_0(kA)\| \le 1$. ∎
We start the analysis of the time discretization error with the following.

Lemma 2.5.
Let $M$ be a square matrix, and assume that $\|e^{sM}\|\le C$ for $s\le t$. Then
$\|(e^{tM} - (I + tM))v\|_h \le Ct^2\|M^2v\|_h$
and

(2.16) $\|(e^{tM} - (I + tM + \tfrac12 t^2M^2))v\|_h \le Ct^3\|M^3v\|_h$.

If also $\|(I + \tfrac12 sM)^{-1}\|\le C$ for $s\le t$, then

(2.17) $\|(e^{-tM} - r_0(tM))v\|_h \le Ct^3(\|M^3v\|_h + \|v\|_h)$.
Proof.
By Taylor expansion we have
$e^{tM} = I + tM + \int_0^t(t-s)e^{sM}M^2\,ds$,
and hence
$\|(e^{tM} - (I + tM))v\|_h \le C\int_0^t(t-s)\|M^2v\|_h\,ds = \tfrac12 Ct^2\|M^2v\|_h$.
Estimate (2.16) follows analogously. For (2.17), we use (2.16) together with
$r_0(tM) = I - tM + \tfrac12 t^2M^2 + \tfrac12\int_0^t(t-s)^2r_0'''(sM)M^3\,ds$,
where $r_0'''(\lambda) = -\tfrac32(1 + \tfrac12\lambda)^{-4}$, to complete the proof. ∎
We now show the following error estimate for $\mathcal{B}_k$.

Lemma 2.6.
Let $k\le\gamma h$. Then we have, with $C = C_\gamma$,
$\|(e^{\frac12 kB} - \mathcal{B}_k)v\|_h \le Ck^3\|v\|_{h,2}$.
Proof.
Since $pk^2 = \frac12 k$, we may write
$e^{\frac12 kB}v - (I + k^2B)^pv = \sum_{j=0}^{p-1}e^{(p-j-1)k^2B}\bigl(e^{k^2B} - (I + k^2B)\bigr)(I + k^2B)^jv$.
By Lemma 2.5, we have
$\|(e^{k^2B} - (I + k^2B))v\|_h \le Ck^4\|B^2v\|_h$.
By Lemma 2.2, $\|e^{(p-j-1)k^2B}\| \le e^{\frac12 k\beta_1}$ for $j\le p-1$. Using also (2.15), we find
$\|e^{\frac12 kB}v - (I + k^2B)^pv\|_h \le Ck^4\sum_{j=0}^{p-1}\|B^2(I + k^2B)^jv\|_h \le Ck^4\sum_{j=0}^{p-1}e^{\mu jk^2}\|B^2v\|_h \le Ck^4p\,e^{\frac12\mu k}\|B^2v\|_h \le Ck^3\|v\|_{h,2}$,
which completes the proof. ∎
We now show the following error estimate for $E_k$.

Lemma 2.7.
Let $k\le\gamma h$. Then we have, with $C = C_\gamma$,
$\|E(k)v - E_kv\|_h \le Ck^3\|v\|_{h,6}$.
Proof.
In view of Lemma 2.3, it remains to show
$\|e^{\frac12 kB}e^{-kA}e^{\frac12 kB}v - E_kv\|_h \le Ck^3\|v\|_{h,6}$.
We have
$e^{\frac12 kB}e^{-kA}e^{\frac12 kB} - \mathcal{B}_kr_0(kA)\mathcal{B}_k = (e^{\frac12 kB} - \mathcal{B}_k)e^{-kA}e^{\frac12 kB} + \mathcal{B}_k(e^{-kA} - r_0(kA))e^{\frac12 kB} + \mathcal{B}_kr_0(kA)(e^{\frac12 kB} - \mathcal{B}_k) = J_1 + J_2 + J_3$.
Here, by the above lemmas,
$\|J_1v\|_h \le Ck^3\|e^{-kA}e^{\frac12 kB}v\|_{h,2} \le Ck^3\|v\|_{h,2}$,
$\|J_2v\|_h \le Ck^3(\|A^3e^{\frac12 kB}v\|_h + \|e^{\frac12 kB}v\|_h) \le Ck^3\|e^{\frac12 kB}v\|_{h,6} \le Ck^3\|v\|_{h,6}$,
$\|J_3v\|_h \le Ck^3\|v\|_{h,2}$,
which completes the proof. ∎
We can now prove the following error estimate.

Theorem 2.2.
Let $k\le\gamma h$. Then we have, for the solutions of (1.8) and (1.6), with $C = C_{\gamma,T}$,
$\|u^n - u(nk)\|_h \le Ck^2\|v\|_{h,6}$ for $nk\le T$.
Proof.
We write

(2.18) $u^n - u(nk) = E_k^nv - E(nk)v = \sum_{j=0}^{n-1}E_k^{n-1-j}(E_k - E(k))E(jk)v$.

Using Lemmas 2.4, 2.7 and 2.1, we obtain, for $nk\le T$,
$\|E_k^nv - E(nk)v\|_h \le C\sum_{j=0}^{n-1}\|(E_k - E(k))E(jk)v\|_h \le Ck^3\sum_{j=0}^{n-1}\|E(jk)v\|_{h,6} \le Cnk^3\|v\|_{h,6} \le Ck^2\|v\|_{h,6}$. ∎
Since $\|V_h\|_{h,6} \le C\|V\|_{\mathcal{C}^6}$, we immediately obtain from Theorems 2.1 and 2.2 the following total error estimate.

Theorem 2.3.
Let $v = V_h$ and $k\le\gamma h$. Then we have, for the solutions of (1.8) and (1.1), with $C = C_{\gamma,T}$,
$\|u^n - U_h(nk)\|_h \le Ch^2\|V\|_{\mathcal{C}^6}$ for $nk\le T$.
3 The Case of a Small Diffusion Coefficient
In this section we consider the variant of problem (1.1) with a small diffusion coefficient $\varepsilon > 0$,
$\dfrac{\partial U}{\partial t} = \varepsilon\operatorname{div}(a\nabla U) + b\cdot\nabla U$ in $\Omega$ for $t>0$, with $U(0) = V$.
The corresponding semidiscrete system (1.6) may then be written

(3.1) $\dfrac{du}{dt} = -\varepsilon Au + Bu$ for $t>0$, with $u(0) = v$,

where $A$ and $B$ are as before. We shall see that (3.1) is stable and satisfies an $O(h^2)$ error estimate, independently of $\varepsilon$. Further, for $\varepsilon$ and $k$ small, or more precisely if $\max(\varepsilon,k)\le\gamma h$ and $k\varepsilon\le 2h^2/\alpha$, we will be able to show an $O(k^2)$ estimate for the time discretization error, even though we use the less accurate forward Euler method for the $A$ part of the time stepping operator, and with weaker regularity requirements than earlier. Also, we do not need to use the symmetric Strang splitting, and consider now, with $r_1(\lambda) = 1 - \lambda$,
$u^n = \tilde E_k^nv$, where $\tilde E_k = r_1(k\varepsilon A)\tilde{\mathcal{B}}_k$, with $\tilde{\mathcal{B}}_k = \mathcal{B}_k^2 = (I + k^2B)^{2p}$.
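A sketch of the fully explicit method of this section (NumPy; the parameters mirror the small-diffusion experiment of Section 4 and are otherwise illustrative assumptions): one step applies the $2p$ forward Euler convection substeps followed by $r_1(k\varepsilon A) = I - k\varepsilon A$ for the diffusion part, under the stated stability condition $k\varepsilon\le 2h^2/\alpha$.

```python
import numpy as np

M, N = 20, 4
eps = 0.01
h, k = 2 * np.pi / M, 1.0 / N
p, k2 = N // 2, (1.0 / N) ** 2

I = np.eye(M)
S = np.roll(I, 1, axis=1)              # periodic shift: (S u)_j = u_{j+1}
A = (2 * I - S - S.T) / h**2           # a = 1; the factor eps is kept separate
B = (S - S.T) / (2 * h)                # b = 1

assert k * eps <= 2 * h**2 / 4         # stability condition with alpha = 4 (d = 1)
Bk2 = np.linalg.matrix_power(I + k2 * B, 2 * p)   # \tilde{B}_k = (I + k^2 B)^{2p}
Ek = (I - k * eps * A) @ Bk2                      # r_1(k*eps*A) \tilde{B}_k

x = h * np.arange(1, M + 1)
u = np.sin(x)
for _ in range(N):                     # march to t = N*k = 1
    u = Ek @ u
exact = np.exp(-eps) * np.sin(x + 1.0) # U(x, t) = e^{-eps t} sin(x + t) at t = 1
err = np.sqrt(h * np.sum((u - exact) ** 2))
print(err < 0.15)                      # Table 3 reports a total error 0.06237 here
```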
We note the inverse inequality $h\|u\|_{h,s}\le C\|u\|_{h,s-1}$, and hence

(3.2) $\varepsilon\|Au\|_{h,s} \le C\|u\|_{h,s+1}$ for $\varepsilon\le\gamma h$, $\gamma > 0$.
As in Section 2, we first attend to the spatially semidiscrete problem.

Lemma 3.1.
Let $\tilde E(t) = e^{-t(\varepsilon A-B)}$. Then, for any $s\ge 0$, we have, with $C = C_s$ independent of $\varepsilon > 0$ and $h > 0$,
$\|\tilde E(t)v\|_{h,s} \le e^{Ct}\|v\|_{h,s}$ for $t\ge 0$.
Proof.
Following the steps in the proof of Lemma 2.1, we have, for $u(t) = \tilde E(t)v$ and $|\alpha| = s$,

(3.3) $(\partial^\alpha u_t,\partial^\alpha u)_h + \varepsilon(\partial^\alpha Au,\partial^\alpha u)_h = (\partial^\alpha Bu,\partial^\alpha u)_h$.

Here, as in the proof of Lemma 2.1, $|(\partial^\alpha Bu,\partial^\alpha u)_h| \le C\|u\|_{h,s}^2$. Hence, by (3.3) and (2.10),
$\tfrac12\dfrac{d}{dt}\|\partial^\alpha u\|_h^2 + \varepsilon(A\partial^\alpha u,\partial^\alpha u)_h + \varepsilon\|\partial^\alpha u\|_h^2 \le C\varepsilon\|u\|_{h,s+1}\|u\|_{h,s} + C\|u\|_{h,s}^2$,
and thus, by (2.6), with $c>0$,
$\dfrac{d}{dt}\|u\|_{h,s}^2 + \varepsilon c\|u\|_{h,s+1}^2 \le \varepsilon c\|u\|_{h,s+1}^2 + C\|u\|_{h,s}^2$.
This implies (2.11), with $C$ independent of $\varepsilon$ and $h$, and thus completes the proof. ∎
In the same way as in Section 2, this stability shows the following error estimate.

Theorem 3.1.
We have, for the solutions of (3.1) and (1.1), with $C = C_T$ independent of $\varepsilon$,
$\|u(t) - U_h(t)\|_h \le Ch^2\|V\|_{\mathcal{C}^4}$ for $0\le t\le T$.
In the analysis of the time discretization we begin with the analogue of Lemma 2.3.

Lemma 3.2.
We have, with $C = C_\gamma$ independent of $h$ and $k$,
$\|\tilde E(k)v - e^{-k\varepsilon A}e^{kB}v\|_h \le C(\varepsilon k^2 + k^3)\|v\|_{h,3}$ for $\varepsilon\le\gamma h$.
Proof.
With $F(k) = e^{-k(\varepsilon A-B)} - e^{-k\varepsilon A}e^{kB}$, we have $F(0) = F'(0) = 0$ and hence, by Taylor's formula,
$\|F(k)v\|_h = \|(F(k) - F(0) - kF'(0))v\|_h \le \tfrac12 k^2\sup_{s\le k}\|F''(s)v\|_h$.
Here
$F''(s) = e^{-s(\varepsilon A-B)}(\varepsilon A - B)^2 - e^{-s\varepsilon A}(\varepsilon^2A^2 - 2\varepsilon AB + B^2)e^{sB} = e^{-s(\varepsilon A-B)}(\varepsilon^2A^2 - \varepsilon AB - \varepsilon BA) - e^{-s\varepsilon A}(\varepsilon^2A^2 - 2\varepsilon AB)e^{sB} + (e^{-s(\varepsilon A-B)} - e^{-s\varepsilon A}e^{sB})B^2 = G_1(s) + G_2(s) + G_3(s)$.
Using (3.2), (2.5) and the boundedness of the exponentials, we find
$\|G_1(s)v + G_2(s)v\|_h \le C\varepsilon\|v\|_{h,3}$ for $s\le k$.
Further, $G_3(0) = 0$, and hence
$\|G_3(s)v\|_h \le s\sup_{\sigma\le s}\|G_3'(\sigma)v\|_h \le Ck\|v\|_{h,3}$ for $s\le k$.
Together these estimates complete the proof of the lemma. ∎
We now turn to the time stepping operator $\tilde E_k$ and begin with its stability.

Lemma 3.3.
If $k\varepsilon\le 2h^2/\alpha$, then

(3.4) $\|r_1(k\varepsilon A)\| = \|I - k\varepsilon A\| \le 1$.

If also $k\le\gamma h$, then $\tilde E_k$ is stable, and, with $\mu$ as in Lemma 2.4,
$\|\tilde E_k^n\| \le e^{\mu T}$ for $nk\le T$.
Proof.
We note that, since $A$ is positive semidefinite,
$\|I - k\varepsilon A\| \le 1$ if $k\varepsilon\|A\| \le 2$,
and thus (3.4) holds by (2.7). Hence, by Lemma 2.4,
$\|\tilde E_k\| \le \|r_1(k\varepsilon A)\|\,\|\mathcal{B}_k\|^2 \le e^{\mu k}$. ∎
We now turn to the error analysis and show the following.

Lemma 3.4.
If $\max(\varepsilon,k)\le\gamma h$ and $k\varepsilon\le 2h^2/\alpha$, we have
$\|\tilde E(k)v - \tilde E_kv\|_h \le C(\varepsilon k^2 + k^3)\|v\|_{h,3}$.
Proof.
In view of Lemma 3.2, it remains to show
$\|e^{-k\varepsilon A}e^{kB}v - \tilde E_kv\|_h \le C(\varepsilon k^2 + k^3)\|v\|_{h,3}$.
We first note that, by Lemma 2.5,

(3.5) $\|(e^{-k\varepsilon A} - r_1(k\varepsilon A))v\|_h \le C\varepsilon^2k^2\|A^2v\|_h \le C\varepsilon k^2\|v\|_{h,3}$.

We write
$e^{-k\varepsilon A}e^{kB} - \tilde E_k = (e^{-k\varepsilon A} - r_1(k\varepsilon A))e^{kB} + r_1(k\varepsilon A)(e^{kB} - \tilde{\mathcal{B}}_k) = J_1 + J_2$.
Here, by (3.5) and Lemma 2.5, and by (3.4) and Lemma 2.4,
$\|J_1v\|_h \le C\varepsilon k^2\|e^{kB}v\|_{h,3} \le C\varepsilon k^2\|v\|_{h,3}$ and $\|J_2v\|_h \le Ck^3\|v\|_{h,2}$,
which completes the proof. ∎
The following is the resulting error estimate.

Theorem 3.2.
If $\max(\varepsilon,k)\le\gamma h$ and $k\varepsilon\le 2h^2/\alpha$, we have, with $C = C_{\gamma,T}$ independent of $h$, $k$ and $\varepsilon$,
$\|u^n - u(nk)\|_h \le C(\varepsilon k + k^2)\|v\|_{h,3}$ for $nk\le T$.
Proof.
Using the analogue of (2.18), we find
$\|\tilde E_k^nv - \tilde E(nk)v\|_h \le C\sum_{j=0}^{n-1}\|(\tilde E_k - \tilde E(k))\tilde E(jk)v\|_h \le C(\varepsilon k^2 + k^3)\sum_{j=0}^{n-1}\|\tilde E(jk)v\|_{h,3} \le CT(\varepsilon k + k^2)\|v\|_{h,3}$. ∎
As in Section 2, our error estimates in Theorems 3.1 and 3.2 together show a total error estimate.

Theorem 3.3.
With $v = V_h$, and for $\max(\varepsilon,k)\le\gamma h$ and $k\varepsilon\le 2h^2/\alpha$, we have, with $C = C_{\gamma,T}$ independent of $h$, $k$ and $\varepsilon$,
$\|u^n - U_h(nk)\|_h \le Ch^2\|V\|_{\mathcal{C}^4}$ for $nk\le T$.
4 Numerical Illustrations
In this section we present some numerical computations to illustrate our error estimates. We restrict ourselves to the one-dimensional version of (1.1),

(4.1) $U_t = (aU_x)_x + bU_x$ for $x\in(0,2\pi)$, $t>0$, with $U(x,0) = \sin x$.

As before we choose $h = 2\pi/M$, $k = 1/N$, where $M$ and $N$ are positive integers, and study the effect of doubling these integers.
We begin with the simple case $a = b = 1$, in which the exact solution is $U(x,t) = e^{-t}\sin(x+t)$. In Table 1 we compile the errors in the numerical solution at $t = 1$: first in the spatial discretization, then when our time stepping method is applied to the semidiscrete solution, and finally the total error. We use $N = 4, 8, 16, 32, 64$ and $M = 5N$, so that $k/h = 5/(2\pi) \approx 0.7958$. The successive ratios of the total errors are given in the last column and confirm the second-order convergence resulting from Theorems 2.1–2.3.
Table 1. Numerical results for constant coefficients. Here $h = 2\pi/M$, $k = 1/N$.

  M     N     $\|(u-U_h)(\cdot,1)\|_h$   $\|u^N-u(\cdot,1)\|_h$   $\|u^N-U_h(\cdot,1)\|_h$   Ratio
  20    4     0.01670                    0.01199                  0.02494                    --
  40    8     0.00423                    0.00300                  0.00621                    4.01
  80    16    0.00106                    0.00075                  0.00155                    4.01
  160   32    0.00027                    0.00019                  0.00039                    3.97
  320   64    0.00007                    0.00005                  0.00010                    3.90
We recall that when $a$ and $b$ are constant, the matrices $A$ and $B$ involved in our method commute, and consequently the splitting error given in Lemma 2.3 vanishes. In order to also consider a situation where this does not happen, we let $a(x) = 1 + \frac12\cos x$ and $b(x) = 1 + \frac12\sin x$.
To indicate that the matrices $A$ and $B$ do not commute in this case, we consider the corresponding continuous operators
$\mathcal{A}U = -((1 + \tfrac12\cos x)U_x)_x$, $\mathcal{B}U = (1 + \tfrac12\sin x)U_x$,
and find, after some effort,
$(\mathcal{A}\mathcal{B} - \mathcal{B}\mathcal{A})U = -(a(bU_x)_x)_x + b(aU_x)_{xx} = \bigl(\tfrac14\sin x\cos x + \tfrac12\sin x - \tfrac12\cos x\bigr)U_x - \bigl(\tfrac12 + \tfrac12\sin x - \tfrac14\sin^2x + \cos x\bigr)U_{xx}$.
Thus $\mathcal{A}$ and $\mathcal{B}$ do not commute, and therefore neither can $A$ and $B$. The reference solution $U$ is taken to be the semidiscrete solution with $M = 2560$, $N = 512$. The errors are presented in Table 2. Again we see that the errors are of second order, which agrees with the error bounds in Theorems 2.1–2.3.
Table 2. Numerical results for variable coefficients. Here $h = 2\pi/M$, $k = 1/N$.

  M     N     $\|(u-U_h)(\cdot,1)\|_h$   $\|u^N-u(\cdot,1)\|_h$   $\|u^N-U_h(\cdot,1)\|_h$   Ratio
  20    4     0.02641                    0.01419                  0.03323                    --
  40    8     0.00651                    0.00356                  0.00817                    4.07
  80    16    0.00163                    0.00089                  0.00203                    4.02
  160   32    0.00041                    0.00022                  0.00051                    3.98
  320   64    0.00010                    0.00005                  0.00013                    3.92
We finally consider a numerical example for Section 3, for which we use (4.1) with $a = \varepsilon = 0.01$, $b = 1$. Here
$U(x,t) = e^{-\varepsilon t}\sin(x+t)$,
and $u^n = \tilde E_k^nv$ with $\tilde E_k = r_1(k\varepsilon A)\tilde{\mathcal{B}}_k$. Note that the condition $\varepsilon k\le 2h^2/\alpha$, with $\alpha = 4$, now reduces to $\varepsilon\le(\pi/5)h$, i.e. $\varepsilon\le\pi^2/800 \approx 0.0123$ for $N = 64$, which is satisfied for our choice of $\varepsilon$. The results are given in Table 3 and agree with the error bounds of Section 3.
Table 3. Numerical results for constant coefficients and with small diffusion. Here $a = 0.01$, $b = 1$, $h = 2\pi/M$, $k = 1/N$.

  M     N     $\|(u-U_h)(\cdot,1)\|_h$   $\|u^N-u(\cdot,1)\|_h$   $\|u^N-U_h(\cdot,1)\|_h$   Ratio
  20    4     0.02872                    0.05381                  0.06237                    --
  40    8     0.00721                    0.01365                  0.01555                    4.01
  80    16    0.00180                    0.00342                  0.00388                    4.00
  160   32    0.00045                    0.00085                  0.00097                    4.00
  320   64    0.00011                    0.00021                  0.00024                    4.04