
# Journal of Applied Analysis

Editor-in-Chief: Fechner, Włodzimierz / Ciesielski, Krzysztof

Managing Editor: Gajek, Leslaw

CiteScore 2018: 0.45

SCImago Journal Rank (SJR) 2018: 0.181
Source Normalized Impact per Paper (SNIP) 2018: 0.845

Mathematical Citation Quotient (MCQ) 2018: 0.20

Online ISSN: 1869-6082
Volume 25, Issue 1

# Two-step collocation methods for two-dimensional Volterra integral equations of the second kind

Seyed Mousa Torabi
/ Abolfazl Tari
Published Online: 2019-05-21 | DOI: https://doi.org/10.1515/jaa-2019-0001

## Abstract

In this paper, we develop two-step collocation (2-SC) methods to solve two-dimensional nonlinear Volterra integral equations (2D-NVIEs) of the second kind. Here we convert a 2D-NVIE of the second kind to a one-dimensional case, and then we solve the resulting equation numerically by two-step collocation methods. We also study the convergence and stability analysis of the method. At the end, the accuracy and efficiency of the method is verified by solving two test equations which are stiff. In examples, we use the well-known differential transform method to obtain starting values.

MSC 2010: 65R20

## 1 Introduction

Many problems in applied mathematics, physics and engineering give rise to the nonlinear two-dimensional Volterra integral equation of the form

$y\left(x,t\right)=g\left(x,t\right)+{\int }_{0}^{t}{\int }_{0}^{x}K\left(x,t,z,s,y\left(z,s\right)\right)dzds,$(1.1)

where g and K are given, sufficiently smooth functions on $D:=\left[0,X\right]\times\left[0,T\right]$ and $D\times D\times\mathbb{R}$, respectively.

The numerical solution of equations of form (1.1) has been considered in several works. For example, in [13], a block-by-block method was studied. In [2, 10], collocation and iterated collocation methods were proposed for two-dimensional nonlinear VIEs (2D-VIEs). In [17], the differential transform method was developed for linear and nonlinear 2D-VIEs. A new block-by-block method was presented for these equations in [12]. The Galerkin method has also been developed for two-dimensional VIEs: in [11], an extrapolation method based on the asymptotic expansion of iterated Galerkin solutions was studied for 2D-VIEs of the second kind, and in [15], a spectral Galerkin method was proposed for their numerical solution. On the other hand, many studies have addressed the numerical solution of one-dimensional Volterra integral equations; for example, in [14], the differential transform method was applied to VIEs. Recently, many new methods have been presented to solve various types of differential and integral equations [1, 8, 9]. Also, multi-step collocation methods have been proposed for one-dimensional VIEs of the second kind in several interesting works [4, 5, 6]. In this paper, we develop these methods for 2D-VIEs of the second kind. Stability is the main advantage of these methods over the majority of available numerical methods; consequently, the presented method can be applied to stiff equations, which are defined as follows.

#### Definition 1.1.

Integral equation (1.1) is said to be “stiff” in cases where $\partial K\left(x,t,z,s,y\right)/\partial y$ assumes a large negative value [18].

## 2 Two-step collocation methods

As mentioned above, in this paper, we extend the 2-SC method of [4] to equations of form (1.1). For the reader's convenience, we first recall the method of [4]. Consider the VIE

$y\left(t\right)=g\left(t\right)+{\int }_{{t}_{0}}^{t}K\left(t,\eta ,y\left(\eta \right)\right)d\eta ,t\in \left[{t}_{0},T\right],$(2.1)

where g and K are real-valued sufficiently smooth functions. For a given positive integer N, we set ${t}_{n}={t}_{0}+nh$, $n=0,1,\mathrm{\dots },N$, with $Nh=T-{t}_{0}$. First we rewrite equation (2.1) in the form

$y\left(t\right)={F}^{\left[n\right]}\left(t\right)+{\mathrm{\Phi }}^{\left[n+1\right]}\left(t\right)$

with the lag-term

${F}^{\left[n\right]}\left(t\right)=g\left(t\right)+{\int }_{{t}_{0}}^{{t}_{n}}K\left(t,\eta ,y\left(\eta \right)\right)d\eta$

and the increment term

${\mathrm{\Phi }}^{\left[n+1\right]}\left(t\right)={\int }_{{t}_{n}}^{t}K\left(t,\eta ,y\left(\eta \right)\right)d\eta .$

The 2-SC method provides a continuous approximation $P\left({t}_{n}+sh\right)$, $s\in \left[0,1\right]$, to the solution $y\left({t}_{n}+sh\right)$ of (2.1) in the interval $\left[{t}_{n},{t}_{n+1}\right]$, using information from the two consecutive steps:

$\left\{\begin{array}{cc}\hfill P\left({t}_{n}+sh\right)& ={\phi }_{0}\left(s\right){y}_{n-1}+{\phi }_{1}\left(s\right){y}_{n}+\sum _{j=1}^{m}{\chi }_{j}\left(s\right)P\left({t}_{n-1,j}\right)+\sum _{j=1}^{m}{\psi }_{j}\left(s\right)\left({F}_{h}^{\left[n\right]}\left({t}_{n,j}\right)+{\mathrm{\Phi }}_{h}^{\left[n+1\right]}\left({t}_{n,j}\right)\right),\hfill \\ \hfill {y}_{n+1}& =P\left({t}_{n+1}\right).\hfill \end{array}$(2.2)

Here ${t}_{n,j}={t}_{n}+{c}_{j}h$ are collocation points, and ${c}_{j}$ are collocation parameters, ${F}_{h}^{\left[n\right]}$ and ${\mathrm{\Phi }}_{h}^{\left[n+1\right]}$ are approximations to ${F}^{\left[n\right]}$ and ${\mathrm{\Phi }}^{\left[n+1\right]}$, which are computed by appropriate quadrature rules as

${F}_{h}^{\left[n\right]}\left(t\right)=g\left(t\right)+h\left[\sum _{v=1}^{n}{b}_{0}K\left(t,{t}_{v-1},{y}_{v-1}\right)+\sum _{j=1}^{m}{b}_{j}K\left(t,{t}_{v-1,j},P\left({t}_{v-1,j}\right)\right)+{b}_{m+1}K\left(t,{t}_{v},{y}_{v}\right)\right],$(2.3)${\mathrm{\Phi }}_{h}^{\left[n+1\right]}\left({t}_{n,i}\right)=h\left[{w}_{i,0}K\left({t}_{n,i},{t}_{n},{y}_{n}\right)+\sum _{j=1}^{m}{w}_{i,j}K\left({t}_{n,i},{t}_{n,j},P\left({t}_{n,j}\right)\right)+{w}_{i,m+1}K\left({t}_{n,i},{t}_{n+1},{y}_{n+1}\right)\right],$(2.4)

${\phi }_{0}$, ${\phi }_{1}$, ${\chi }_{j}$ and ${\psi }_{j}$, $j=1,\mathrm{\dots },m$, are polynomials chosen so that $P\left(t\right)$ is a continuous approximation to the solution $y\left(t\right)$ of (2.1) on each subinterval $\left[{t}_{n},{t}_{n+1}\right]$. The polynomial $P\left({t}_{n}+sh\right)$ is determined at each step by solving a system of equations in the values ${Y}_{i}^{\left[n+1\right]}:=P\left({t}_{n,i}\right)$ and ${y}_{n+1}$. For more details, see [4].

To discuss the order of the method, we recall the following theorem.

#### Theorem 2.1 ([4]).

Assume that, in (2.1), K and g are sufficiently smooth functions. If the polynomials ${\phi }_{\mathrm{0}}\mathit{}\mathrm{\left(}s\mathrm{\right)}$, ${\phi }_{\mathrm{1}}\mathit{}\mathrm{\left(}s\mathrm{\right)}$, ${\chi }_{j}\mathit{}\mathrm{\left(}s\mathrm{\right)}$ and ${\psi }_{j}\mathit{}\mathrm{\left(}s\mathrm{\right)}$, $j\mathrm{=}\mathrm{1}\mathrm{,}\mathrm{\dots }\mathrm{,}m$ satisfy the system of equations

$\left\{\begin{array}{cc}\hfill 1-{\phi }_{0}\left(s\right)-{\phi }_{1}\left(s\right)-\sum _{j=1}^{m}{\chi }_{j}\left(s\right)-\sum _{j=1}^{m}{\psi }_{j}\left(s\right)& =0,\hfill \\ \hfill {s}^{k}-{\left(-1\right)}^{k}{\phi }_{0}\left(s\right)-\sum _{j=1}^{m}{\left({c}_{j}-1\right)}^{k}{\chi }_{j}\left(s\right)-\sum _{j=1}^{m}{c}_{j}^{k}{\psi }_{j}\left(s\right)& =0\hfill \end{array}$(2.5)

for $s\mathrm{\in }\mathrm{\left[}\mathrm{0}\mathrm{,}\mathrm{1}\mathrm{\right]}$ and $k\mathrm{=}\mathrm{1}\mathrm{,}\mathrm{2}\mathrm{,}\mathrm{\dots }\mathrm{,}p$, then method (2.2) has the local discretization error of order p, i.e.,

$\eta \left({t}_{n}+sh\right)=O\left({h}^{p+1}\right),h\to 0,$

where

$\eta \left({t}_{n}+sh\right)=y\left({t}_{n}+sh\right)-{\phi }_{0}\left(s\right)y\left({t}_{n}-h\right)-{\phi }_{1}\left(s\right)y\left({t}_{n}\right)-\sum _{j=1}^{m}\left({\chi }_{j}\left(s\right)y\left({t}_{n}+\left({c}_{j}-1\right)h\right)+{\psi }_{j}\left(s\right)y\left({t}_{n}+{c}_{j}h\right)\right).$
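To make Theorem 2.1 concrete, the following sketch estimates the order of η numerically, using the explicit polynomials with the parameter choice of Section 4 ($m=2$, ${c}_{1}=3/5$, ${c}_{2}=4/5$, so $p=3$); the test function $y(t)={e}^{t}$ and the evaluation point $s=1/2$ are our own illustrative choices.

```python
import math

# Polynomials phi_0, phi_1, chi_j, psi_j for m = 2, c = (3/5, 4/5) (listed in Section 4)
def phi0(s): return (1 - s)*(s - 4/5)*(s - 3/5)
phi1 = phi0
def chi1(s): return (1/30)*(-301 + 151*s)*(s - 4/5)*(s - 3/5)
def chi2(s): return (1/20)*(242 - 67*s)*(s - 4/5)*(s - 3/5)
def psi1(s): return (s - 4/5)*(149/50 - (1303/100)*s - (9/20)*s**2)
def psi2(s): return (s - 3/5)*(-179/75 + (431/50)*s + (23/30)*s**2)

def eta(h, s=0.5, y=math.exp, tn=0.0):
    """Local discretization error of Theorem 2.1 for a smooth test function y."""
    c = (3/5, 4/5)
    return (y(tn + s*h) - phi0(s)*y(tn - h) - phi1(s)*y(tn)
            - chi1(s)*y(tn + (c[0]-1)*h) - chi2(s)*y(tn + (c[1]-1)*h)
            - psi1(s)*y(tn + c[0]*h) - psi2(s)*y(tn + c[1]*h))

# Halving h should shrink eta by roughly 2^(p+1) = 16 for p = 3
assert abs(eta(0.1)) < 1e-6
assert 10 < eta(0.1) / eta(0.05) < 22
```

The observed reduction factor sits near 16, consistent with $\eta = O\left({h}^{4}\right)$ for this parameter set.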

We choose ${\phi }_{0}\left(s\right)$ and ${\phi }_{1}\left(s\right)$ as the polynomials of degree at most $2m-1$, which satisfy the collocation conditions, that is,

${\phi }_{0}\left({c}_{i}\right)=0,{\phi }_{1}\left({c}_{i}\right)=0,i=1,2,\mathrm{\dots },m.$(2.6)

Thus we have

${\phi }_{0}\left(s\right)=\left({q}_{0}+{q}_{1}s+\mathrm{\cdots }+{q}_{m-1}{s}^{m-1}\right)\prod _{i=1}^{m}\left(s-{c}_{i}\right),$(2.7)${\phi }_{1}\left(s\right)=\left({p}_{0}+{p}_{1}s+\mathrm{\cdots }+{p}_{m-1}{s}^{m-1}\right)\prod _{i=1}^{m}\left(s-{c}_{i}\right),$(2.8)

where ${q}_{0},\mathrm{\dots },{q}_{m-1}$ and ${p}_{0},\mathrm{\dots },{p}_{m-1}$ are free parameters.
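As a quick check, the sketch below builds polynomials in the forms (2.7)–(2.8) and verifies the collocation conditions (2.6); the specific values of c, q and p are illustrative (they match the choice used later in Section 4).

```python
import math

def phi(free_params, c, s):
    """phi(s) = (a_0 + a_1 s + ... + a_{m-1} s^{m-1}) * prod_i (s - c_i), forms (2.7)/(2.8)."""
    poly = sum(a * s**k for k, a in enumerate(free_params))
    return poly * math.prod(s - ci for ci in c)

c = [3/5, 4/5]       # collocation parameters (m = 2)
q = [1.0, -1.0]      # free parameters q_0, q_1 of phi_0
p = [1.0, -1.0]      # free parameters p_0, p_1 of phi_1

# Collocation conditions (2.6): phi_0(c_i) = phi_1(c_i) = 0
for ci in c:
    assert abs(phi(q, c, ci)) < 1e-14
    assert abs(phi(p, c, ci)) < 1e-14

# With this choice, phi_0(s) = (1 - s)(s - 4/5)(s - 3/5), as used in Section 4
assert abs(phi(q, c, 0.0) - (1 - 0)*(0 - 4/5)*(0 - 3/5)) < 1e-14
```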

Here we give two lemmas which we need in the following.

#### Lemma 2.2 (Gronwall inequality [16]).

Let {${y}_{n}$} and {${g}_{n}$} be nonnegative sequences and C a nonnegative constant. If

${y}_{n}\le C+\sum _{k=0}^{n-1}{g}_{k}{y}_{k}\quad\text{for }n\ge 0,$

then

${y}_{n}\le C\prod _{k=0}^{n-1}\left(1+{g}_{k}\right)\le C\mathrm{exp}\left(\sum _{k=0}^{n-1}{g}_{k}\right).$
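A quick numerical illustration of Lemma 2.2, with example data of our own: the sequence defined by equality in the hypothesis attains the product bound exactly, and both bounds hold.

```python
import math
import random

random.seed(1)
C = 2.0
g = [random.uniform(0.0, 0.3) for _ in range(40)]   # nonnegative g_k

# Worst case of the hypothesis: y_n = C + sum_{k<n} g_k y_k (equality)
y = []
for n in range(40):
    y.append(C + sum(gk * yk for gk, yk in zip(g, y)))

# Lemma 2.2: y_n <= C prod_{k<n}(1 + g_k) <= C exp(sum_{k<n} g_k);
# here the first inequality is an equality by construction
for n in range(40):
    prod_bound = C * math.prod(1 + gk for gk in g[:n])
    exp_bound = C * math.exp(sum(g[:n]))
    assert abs(y[n] - prod_bound) < 1e-9 * prod_bound
    assert prod_bound <= exp_bound * (1 + 1e-12)
```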

#### Lemma 2.3 ([3]).

The determinant of the Vandermonde matrix of the form

$V=\left[\begin{array}{cccc}\hfill 1\hfill & \hfill 1\hfill & \hfill \mathrm{\dots }\hfill & \hfill 1\hfill \\ \hfill {x}_{0}\hfill & \hfill {x}_{1}\hfill & \hfill \mathrm{\dots }\hfill & \hfill {x}_{n}\hfill \\ \hfill \mathrm{⋮}\hfill & \hfill \mathrm{⋮}\hfill & \hfill \hfill & \hfill \mathrm{⋮}\hfill \\ \hfill {x}_{0}^{n}\hfill & \hfill {x}_{1}^{n}\hfill & \hfill \mathrm{\dots }\hfill & \hfill {x}_{n}^{n}\hfill \end{array}\right]$

is $\mathrm{det}\mathit{}\mathrm{\left(}V\mathrm{\right)}\mathrm{=}{\mathrm{\prod }}_{\mathrm{0}\mathrm{\le }j\mathrm{<}i\mathrm{\le }n}\mathrm{\left(}{x}_{i}\mathrm{-}{x}_{j}\mathrm{\right)}$.
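The identity of Lemma 2.3 is easy to verify mechanically; the sketch below checks it in exact rational arithmetic for a small set of nodes chosen by us.

```python
from fractions import Fraction
from itertools import permutations, combinations

def det(M):
    """Leibniz-formula determinant; fine for the small matrices used here."""
    n = len(M)
    total = Fraction(0)
    for perm in permutations(range(n)):
        inv = sum(1 for a in range(n) for b in range(a + 1, n) if perm[a] > perm[b])
        term = Fraction(1)
        for row in range(n):
            term *= M[row][perm[row]]
        total += -term if inv % 2 else term
    return total

xs = [Fraction(0), Fraction(1, 2), Fraction(2), Fraction(-1)]
V = [[x**i for x in xs] for i in range(len(xs))]      # rows 1, x, x^2, x^3

rhs = Fraction(1)
for (j, xj), (i, xi) in combinations(list(enumerate(xs)), 2):
    rhs *= xi - xj        # product of (x_i - x_j) over 0 <= j < i <= n

assert det(V) == rhs
```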

In the following theorem, we prove that system (2.5) has a unique solution.

#### Theorem 2.4.

Assume that ${c}_{i}\mathrm{\ne }{c}_{j}$ for $i\mathrm{\ne }j$, and that ${c}_{i}\mathrm{\ne }{c}_{j}\mathrm{-}\mathrm{1}$ and ${c}_{i}\mathrm{\ne }\mathrm{-}\mathrm{1}\mathrm{,}\mathrm{0}\mathrm{,}\mathrm{1}$ for all i, j. Then, choosing ${\phi }_{\mathrm{0}}\mathit{}\mathrm{\left(}s\mathrm{\right)}$ and ${\phi }_{\mathrm{1}}\mathit{}\mathrm{\left(}s\mathrm{\right)}$ as in (2.7) and (2.8), respectively, the system (2.5) has a unique solution, and

${\chi }_{j}\left({c}_{i}\right)=0,{\psi }_{j}\left({c}_{i}\right)={\delta }_{ij},i,j=1,\mathrm{\dots },m.$

#### Proof.

Setting $s={c}_{i}$ in (2.5) for $i=1,\mathrm{\dots },m$ and using the collocation conditions (2.6), we obtain

$\left\{\begin{array}{cc}\hfill \sum _{j=1}^{m}{\chi }_{j}\left({c}_{i}\right)+\sum _{j=1}^{m}{\psi }_{j}\left({c}_{i}\right)& =1,\hfill \\ \hfill \sum _{j=1}^{m}{\left({c}_{j}-1\right)}^{k}{\chi }_{j}\left({c}_{i}\right)+\sum _{j=1}^{m}{c}_{j}^{k}{\psi }_{j}\left({c}_{i}\right)& ={c}_{i}^{k},\quad k=1,2,\mathrm{\dots },2m-1.\hfill \end{array}$(2.9)

The coefficient matrix of this linear system is of the form

$A=\left[\begin{array}{cccccc}\hfill 1\hfill & \hfill \mathrm{\dots }\hfill & \hfill 1\hfill & \hfill 1\hfill & \hfill \mathrm{\dots }\hfill & \hfill 1\hfill \\ \hfill {c}_{1}-1\hfill & \hfill \mathrm{\dots }\hfill & \hfill {c}_{m}-1\hfill & \hfill {c}_{1}\hfill & \hfill \mathrm{\dots }\hfill & \hfill {c}_{m}\hfill \\ \hfill \mathrm{⋮}\hfill & \hfill \hfill & \hfill \mathrm{⋮}\hfill & \hfill \mathrm{⋮}\hfill & \hfill \hfill & \hfill \mathrm{⋮}\hfill \\ \hfill {\left({c}_{1}-1\right)}^{2m-1}\hfill & \hfill \mathrm{\dots }\hfill & \hfill {\left({c}_{m}-1\right)}^{2m-1}\hfill & \hfill {c}_{1}^{2m-1}\hfill & \hfill \mathrm{\dots }\hfill & \hfill {c}_{m}^{2m-1}\hfill \end{array}\right],$

which is of Vandermonde type.

By Lemma 2.3, $\mathrm{det}\left(A\right)={\prod }_{1\le j<i\le 2m}\left({x}_{i}-{x}_{j}\right)$ for $m\ge 2$, where $\left({x}_{1},\mathrm{\dots },{x}_{2m}\right)=\left({c}_{1}-1,\mathrm{\dots },{c}_{m}-1,{c}_{1},\mathrm{\dots },{c}_{m}\right)$ (for $m=1$, $\mathrm{det}\left(A\right)=1$). Thus, by the assumptions of the theorem, $\mathrm{det}\left(A\right)\ne 0$. On the other hand, it is obvious from (2.9) that

${\chi }_{j}\left({c}_{i}\right)=0,{\psi }_{j}\left({c}_{i}\right)={\delta }_{ij},i,j=1,\mathrm{\dots },m.$

So the theorem is proved. ∎
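For the parameters used later in Section 4 ($m=2$, ${c}_{1}=3/5$, ${c}_{2}=4/5$) the hypotheses of Theorem 2.4 hold, and the Vandermonde product below confirms that $\mathrm{det}\left(A\right)$ is nonzero; this check is our own sketch.

```python
from fractions import Fraction
from itertools import combinations

c = [Fraction(3, 5), Fraction(4, 5)]   # distinct, c_i != c_j - 1, c_i not in {-1, 0, 1}
nodes = [ci - 1 for ci in c] + c       # the 2m nodes c_1-1, ..., c_m-1, c_1, ..., c_m

# det(A) = prod_{j < i} (x_i - x_j) over the 2m nodes (Lemma 2.3)
detA = Fraction(1)
for (j, xj), (i, xi) in combinations(list(enumerate(nodes)), 2):
    detA *= xi - xj

assert detA == Fraction(24, 625)   # nonzero, so system (2.9) is uniquely solvable
```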

The next theorem investigates the order of convergence for the method (2.2).

#### Theorem 2.5 ([4]).

Let ${e}_{h}\mathit{}\mathrm{\left(}t\mathrm{\right)}\mathrm{:=}y\mathit{}\mathrm{\left(}t\mathrm{\right)}\mathrm{-}P\mathit{}\mathrm{\left(}t\mathrm{\right)}$ be the error of method (2.2). Suppose that the hypotheses of Theorem 2.1 are satisfied for $p\mathrm{=}\mathrm{2}\mathit{}m\mathrm{-}\mathrm{1}$ with ${\phi }_{\mathrm{0}}$ and ${\phi }_{\mathrm{1}}$ as in (2.7) and (2.8), respectively. Moreover, assume that

• (1)

${K}_{y}\left(t,\eta ,\cdot \right)$ exists and is bounded for ${t}_{0}\le \eta \le t\le T$,

• (2)

the quadrature formulas (2.3) and (2.4) are of order $O\left({h}^{q}\right)$,

• (3)

the starting error is ${\parallel {e}_{h}\parallel }_{\mathrm{\infty },\left[{t}_{0},{t}_{1}\right]}=O\left({h}^{d}\right)$.

Then the two-step collocation method (2.2) has the uniform order of convergence ${p}^{\mathrm{*}}\mathrm{=}\mathrm{min}\mathit{}\mathrm{\left\{}\mathrm{2}\mathit{}m\mathrm{,}q\mathrm{,}d\mathrm{+}\mathrm{1}\mathrm{\right\}}$ for any choice of $\mathrm{0}\mathrm{<}{c}_{\mathrm{1}}\mathrm{<}{c}_{\mathrm{2}}\mathrm{<}\mathrm{\cdots }\mathrm{<}{c}_{m}\mathrm{<}\mathrm{1}$, that is,

$\parallel {e}_{h}\parallel =O\left({h}^{{p}^{*}}\right),h\to 0.$

To analyze the stability of the presented method, we apply the method to the test equation

$y\left(t\right)=1+\lambda {\int }_{0}^{t}y\left(\eta \right)d\eta ,t\ge 0.$(2.10)

This leads to the following matrix recurrence relation [4]:

$\left[\begin{array}{c}\hfill {y}_{n+1}\hfill \\ \hfill {Y}^{\left[n+1\right]}\hfill \\ \hfill {F}_{h}^{\left[n\right]}\hfill \\ \hfill {y}_{n}\hfill \end{array}\right]=M\left(z\right)\left[\begin{array}{c}\hfill {y}_{n}\hfill \\ \hfill {Y}^{\left[n\right]}\hfill \\ \hfill {F}_{h}^{\left[n-1\right]}\hfill \\ \hfill {y}_{n-1}\hfill \end{array}\right],$

where $z=h\lambda$ and $M\left(z\right)$ is called stability matrix. The stability function of the method is defined as

$p\left(w,z\right)=det\left(wI-M\left(z\right)\right).$(2.11)

Denoting ${w}_{1},{w}_{2},\mathrm{\dots },{w}_{2m+2}$ as the roots of (2.11), the region of absolute stability of the method is defined by

$A=\left\{z\in C:|{w}_{i}\left(z\right)|<1,i=1,2,\mathrm{\dots },2m+2\right\}.$

Also, we say that the method is A-stable if

$\left\{z\in C:\mathrm{Re}\left(z\right)<0\right\}\subset A.$

## 3 Main results

In this section, we extend the 2-SC method described in the previous section to 2D-VIEs of form (1.1). To this end, let N and M be positive integers, and consider the uniform grids

${x}_{i}=ik,\quad i=0,1,\mathrm{\dots },M,\quad Mk=X,\qquad {t}_{i}=ih,\quad i=0,1,\mathrm{\dots },N,\quad Nh=T.$

We set $x={x}_{i}$ in (1.1), and thus we have

$y\left({x}_{i},t\right)=g\left({x}_{i},t\right)+{\int }_{0}^{t}{\int }_{0}^{{x}_{i}}K\left({x}_{i},t,z,s,y\left(z,s\right)\right)dzds.$(3.1)

Now, approximating the inner integral of (3.1) by an appropriate quadrature rule with nodes ${x}_{j}$, $j=0,1,\mathrm{\dots },i$, we obtain

${y}_{i}\left(t\right)={g}_{i}\left(t\right)+{\int }_{0}^{t}\sum _{j=0}^{i}{w}_{ij}{K}_{ij}\left(t,s,{y}_{j}\left(s\right)\right)ds,$(3.2)

which is a one-dimensional VIE of the second kind, and we solve it by the two-step collocation method described in the previous section. In equation (3.2), ${y}_{i}\left(t\right)$, ${g}_{i}\left(t\right)$ and ${K}_{ij}\left(t,s,\cdot \right)$ denote $y\left({x}_{i},t\right)$, $g\left({x}_{i},t\right)$ and $K\left({x}_{i},t,{x}_{j},s,\cdot \right)$, respectively, and ${w}_{ij}$ are quadrature weights. In this procedure, we use the values obtained from the previous steps.

First, setting $x={x}_{1}$ and using the trapezoidal rule, we have

${y}_{1}\left(t\right)={g}_{1}\left(t\right)+{\int }_{0}^{t}\frac{k}{2}\left[K\left({x}_{1},t,0,s,y\left(0,s\right)\right)+K\left({x}_{1},t,{x}_{1},s,{y}_{1}\left(s\right)\right)\right]ds,$

where it is obvious that $y\left(0,s\right)=g\left(0,s\right)$.

Therefore, applying the two-step collocation method to this equation, we obtain an approximate polynomial to ${y}_{1}\left(t\right)=y\left({x}_{1},t\right)$, namely, ${P}_{1}\left(t\right)$.

For $x={x}_{2}$, we use Simpson’s rule for the interior integral in (3.1). Thus we obtain

${y}_{2}\left(t\right)={g}_{2}\left(t\right)+{\int }_{0}^{t}\frac{k}{3}\left[K\left({x}_{2},t,0,s,y\left(0,s\right)\right)+4K\left({x}_{2},t,{x}_{1},s,y\left({x}_{1},s\right)\right)+K\left({x}_{2},t,{x}_{2},s,{y}_{2}\left(s\right)\right)\right]ds,$

where $y\left(0,s\right)$ and $y\left({x}_{1},s\right)$ are known, and therefore we obtain ${P}_{2}\left(t\right)$, the approximate polynomial of ${y}_{2}\left(t\right)$ using the two-step collocation method.

For $x={x}_{i}$, $i=3,4,\mathrm{\dots },M$, we use Simpson’s rule when i is even; when i is odd, we use Simpson’s rule combined with the trapezoidal rule on the last subinterval. From [7], this combined (Simpson plus trapezoidal) scheme is stable.
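The stagewise quadrature just described (trapezoidal rule for $i=1$, composite Simpson for even i, Simpson plus a trapezoidal correction on the final subinterval for odd i) can be sketched as follows; `stage_weights` is a hypothetical helper name, not from the paper.

```python
def stage_weights(i, k):
    """Weights w_{i,0..i} for the inner integral over [0, x_i] on the grid x_j = j*k."""
    if i == 1:                             # single subinterval: trapezoidal rule
        return [k / 2, k / 2]
    w = [0.0] * (i + 1)
    last_even = i if i % 2 == 0 else i - 1
    for left in range(0, last_even, 2):    # composite Simpson over pairs of subintervals
        w[left] += k / 3
        w[left + 1] += 4 * k / 3
        w[left + 2] += k / 3
    if i % 2 == 1:                         # odd i: trapezoidal rule on the final subinterval
        w[i - 1] += k / 2
        w[i] += k / 2
    return w

k = 0.1
assert stage_weights(2, k) == [k / 3, 4 * k / 3, k / 3]
# the weights integrate the constant 1 exactly: sum_j w_{ij} = x_i = i*k
for i in range(1, 8):
    assert abs(sum(stage_weights(i, k)) - i * k) < 1e-12
```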

To simplify the notation, we set

${K}_{i}\left(t,s,{y}_{i}\left(s\right)\right):=\sum _{j=0}^{i}{w}_{ij}{K}_{ij}\left(t,s,{y}_{j}\left(s\right)\right).$

To analyze the error of the presented method, we assume that the maximum error occurs at the ith stage, that is, for $x={x}_{i}$. At the ith stage, we have equation (3.1), and replacing the inner integral by a quadrature rule of order r, say, we obtain

${y}_{i}\left(t\right)={g}_{i}\left(t\right)+{\int }_{0}^{t}{K}_{i}\left(t,\eta ,{y}_{i}\left(\eta \right)\right)d\eta +{A}_{i}{k}^{r}t$(3.3)

with ${\parallel {A}_{i}\parallel }_{\mathrm{\infty }}\le {C}_{1}$ independent of k.

The next theorem investigates the error of the presented method.

#### Theorem 3.1.

Let ${e}_{i\mathrm{,}h}\mathrm{:=}{y}_{i}\mathit{}\mathrm{\left(}t\mathrm{\right)}\mathrm{-}{P}_{i}\mathit{}\mathrm{\left(}t\mathrm{\right)}$ be the error of the new method at stage i. Suppose that the hypotheses of Theorem 2.1 are satisfied for $p\mathrm{=}\mathrm{2}\mathit{}m\mathrm{-}\mathrm{1}$ for the ith stage, with ${\phi }_{\mathrm{0}}\mathit{}\mathrm{\left(}s\mathrm{\right)}$ and ${\phi }_{\mathrm{1}}\mathit{}\mathrm{\left(}s\mathrm{\right)}$ chosen according to (2.7) and (2.8), respectively. Moreover, assume that

• (i)

$\frac{\partial }{\partial y}{K}_{i}\left(t,s,\cdot \right)$ exists and is bounded for $0\le s\le t\le T$,

• (ii)

the quadrature formulas (2.3) and (2.4) at the ith stage are of order $O\left({h}^{q}\right)$,

• (iii)

the quadrature formula in (3.2) used for the ith stage is of order $O\left({k}^{r}\right)$,

• (iv)

the starting error is ${\parallel {e}_{i,h}\parallel }_{\mathrm{\infty },\left[0,{t}_{1}\right]}=O\left({h}^{d}\right)$.

Then the order of convergence of the method is $O\mathit{}\mathrm{\left(}{h}^{{p}^{\mathrm{*}}}\mathrm{+}{k}^{r}\mathrm{\right)}$, where ${p}^{\mathrm{*}}\mathrm{=}\mathrm{min}\mathit{}\mathrm{\left\{}d\mathrm{+}\mathrm{1}\mathrm{,}q\mathrm{,}\mathrm{2}\mathit{}m\mathrm{\right\}}$.

#### Proof.

From the previous section, the approximate polynomial for ${y}_{i}\left(t\right)$ at the ith stage is

${P}_{i}\left({t}_{n}+sh\right)={\phi }_{0}\left(s\right){y}_{i,n-1}+{\phi }_{1}\left(s\right){y}_{i,n}+\sum _{j=1}^{m}{\chi }_{j}\left(s\right){P}_{i}\left({t}_{n-1,j}\right)+\sum _{j=1}^{m}{\psi }_{j}\left(s\right)\left({F}_{i,h}^{\left[n\right]}\left({t}_{n,j}\right)+{\mathrm{\Phi }}_{i,h}^{\left[n+1\right]}\left({t}_{n,j}\right)\right).$(3.4)

Since the functions ${\phi }_{0}\left(s\right)$, ${\phi }_{1}\left(s\right)$, ${\chi }_{j}\left(s\right)$ and ${\psi }_{j}\left(s\right)$ satisfy the collocation conditions, setting ${Y}_{i,j}^{\left[n+1\right]}:={P}_{i}\left({t}_{n,j}\right)$, we have

${Y}_{i,j}^{\left[n+1\right]}={F}_{i,h}^{\left[n\right]}\left({t}_{n,j}\right)+{\mathrm{\Phi }}_{i,h}^{\left[n+1\right]}\left({t}_{n,j}\right),\quad i=0,1,\mathrm{\dots },M,\quad j=1,\mathrm{\dots },m.$

Hence polynomial (3.4) is of the form

${P}_{i}\left({t}_{n}+sh\right)={\phi }_{0}\left(s\right){y}_{i,n-1}+{\phi }_{1}\left(s\right){y}_{i,n}+\sum _{j=1}^{m}\left({\chi }_{j}\left(s\right){Y}_{i,j}^{\left[n\right]}+{\psi }_{j}\left(s\right){Y}_{i,j}^{\left[n+1\right]}\right).$(3.5)

It follows from Theorem 2.1 and equation (3.3) that

${y}_{i}\left({t}_{n}+sh\right)={\phi }_{0}\left(s\right){y}_{i}\left({t}_{n-1}\right)+{\phi }_{1}\left(s\right){y}_{i}\left({t}_{n}\right)+\sum _{j=1}^{m}\left({\chi }_{j}\left(s\right){y}_{i}\left({t}_{n-1,j}\right)+{\psi }_{j}\left(s\right){y}_{i}\left({t}_{n,j}\right)\right)+{h}^{p+1}{R}_{i,m,n}\left(s\right)+{k}^{r}{A}_{i,m,n}\left(s\right),$(3.6)

with ${\parallel {R}_{i,m,n}\parallel }_{\mathrm{\infty }}\le {C}_{2}$ independent of h. Thus, subtracting (3.6) from (3.5), we obtain

${e}_{i,h}\left({t}_{n}+sh\right)={\phi }_{0}\left(s\right){e}_{i,n-1}+{\phi }_{1}\left(s\right){e}_{i,n}+\sum _{j=1}^{m}\left({\chi }_{j}\left(s\right){e}_{i,n,j}+{\psi }_{j}\left(s\right){e}_{i,n+1,j}\right)+{h}^{p+1}{R}_{i,m,n}\left(s\right)+{k}^{r}{A}_{i,m,n}\left(s\right),$(3.7)

where ${e}_{i,n+1,j}={e}_{i,h}\left({t}_{n,j}\right)$ and ${e}_{i,n}={e}_{i,h}\left({t}_{n}\right)$. On the other hand, applying the mean value theorem, hypothesis (i) ensures that

$\begin{array}{cc}& {K}_{i}\left({t}_{n,j},{t}_{v-1}+sh,{y}_{i}\left({t}_{v-1}+sh\right)\right)-{K}_{i}\left({t}_{n,j},{t}_{v-1}+sh,{P}_{i}\left({t}_{v-1}+sh\right)\right)\hfill \\ & =\frac{\partial }{\partial y}{K}_{i}\left({t}_{n,j},{t}_{v-1}+sh,{z}_{i,v-1}\left(s\right)\right){e}_{i,h}\left({t}_{v-1}+sh\right),v=1,\mathrm{\dots },n+1,\hfill \end{array}$

where ${z}_{i,v-1}\left(s\right)$ is between ${y}_{i}\left({t}_{v-1}+sh\right)$ and ${P}_{i}\left({t}_{v-1}+sh\right)$.

Now by hypothesis (ii) it follows that

${F}_{i,h}^{\left[n\right]}\left({t}_{n,j}\right)+{\mathrm{\Phi }}_{i,h}^{\left[n+1\right]}\left({t}_{n,j}\right)-{g}_{i}\left({t}_{n,j}\right)-{\int }_{0}^{{t}_{n,j}}{K}_{i}\left({t}_{n,j},\eta ,{P}_{i}\left(\eta \right)\right)d\eta ={E}_{i,m,n}{h}^{q}$

with ${\parallel {E}_{i,m,n}\parallel }_{\mathrm{\infty }}\le {C}_{3}$ independent of h.

Also, from

${y}_{i}\left({t}_{n,j}\right)-{P}_{i}\left({t}_{n,j}\right)={F}_{i}^{\left[n\right]}\left({t}_{n,j}\right)+{\mathrm{\Phi }}_{i}^{\left[n+1\right]}\left({t}_{n,j}\right)-{F}_{i,h}^{\left[n\right]}\left({t}_{n,j}\right)-{\mathrm{\Phi }}_{i,h}^{\left[n+1\right]}\left({t}_{n,j}\right),$

it follows that

$\begin{array}{cc}\hfill {e}_{i,n+1,j}& ={\int }_{0}^{{t}_{n}}{K}_{i}\left({t}_{n,j},\eta ,{y}_{i}\left(\eta \right)\right)d\eta +{\int }_{{t}_{n}}^{{t}_{n,j}}{K}_{i}\left({t}_{n,j},\eta ,{y}_{i}\left(\eta \right)\right)d\eta \hfill \\ & -{\int }_{0}^{{t}_{n}}{K}_{i}\left({t}_{n,j},\eta ,{P}_{i}\left(\eta \right)\right)d\eta -{\int }_{{t}_{n}}^{{t}_{n,j}}{K}_{i}\left({t}_{n,j},\eta ,{P}_{i}\left(\eta \right)\right)d\eta -{E}_{i,m,n}{h}^{q}\hfill \\ & ={\int }_{0}^{{t}_{n}}\frac{\partial }{\partial y}{K}_{i}\left({t}_{n,j},\eta ,{z}_{i}\left(\eta \right)\right){e}_{i,h}\left(\eta \right)d\eta +{\int }_{{t}_{n}}^{{t}_{n,j}}\frac{\partial }{\partial y}{K}_{i}\left({t}_{n,j},\eta ,{z}_{i}\left(\eta \right)\right){e}_{i,h}\left(\eta \right)d\eta -{E}_{i,m,n}{h}^{q}\hfill \\ & =h\sum _{v=1}^{n}{\int }_{0}^{1}\frac{\partial }{\partial y}{K}_{i}\left({t}_{n,j},{t}_{v-1}+sh,{z}_{i}\left({t}_{v-1}+sh\right)\right){e}_{i,h}\left({t}_{v-1}+sh\right)ds\hfill \\ & +h{\int }_{0}^{{c}_{j}}\frac{\partial }{\partial y}{K}_{i}\left({t}_{n,j},{t}_{n}+sh,{z}_{n}\left(s\right)\right){e}_{i,h}\left({t}_{n}+sh\right)ds-{E}_{i,m,n}{h}^{q}.\hfill \end{array}$(3.8)

From hypothesis (iv),

${e}_{i,h}\left({t}_{0}+sh\right)={h}^{d}{V}_{i}\left(s\right)$(3.9)

with ${\parallel {V}_{i}\parallel }_{\mathrm{\infty }}\le {C}_{4}$ independent of h.

Substituting expressions (3.7) and (3.9) in equation (3.8), we obtain

$\begin{array}{cc}\hfill {e}_{i,n+1,j}& =h{\int }_{0}^{1}\frac{\partial }{\partial y}{K}_{i}\left({t}_{n,j},{t}_{0}+sh,{z}_{i}\left({t}_{0}+sh\right)\right){h}^{d}{V}_{i}\left(s\right)ds\hfill \\ & +h\sum _{v=2}^{n}{\int }_{0}^{1}\frac{\partial }{\partial y}{K}_{i}\left({t}_{n,j},{t}_{v-1}+sh,{z}_{i}\left({t}_{v-1}+sh\right)\right)\hfill \\ & \mathrm{×}\left\{{\phi }_{0}\left(s\right){e}_{i,v-2}+{\phi }_{1}\left(s\right){e}_{i,v-1}+\sum _{j=1}^{m}\left({\chi }_{j}\left(s\right){e}_{i,v-1,j}+{\psi }_{j}\left(s\right){e}_{i,v,j}\right)\hfill \\ & +{h}^{p+1}{R}_{i,m,n}\left(s\right)+{k}^{r}{A}_{i,m,n}\left(s\right)\right\}ds\hfill \\ & +h{\int }_{0}^{{c}_{j}}\frac{\partial }{\partial y}{K}_{i}\left({t}_{n,j},{t}_{n}+sh,{z}_{n}\left(s\right)\right)\hfill \\ & \mathrm{×}\left({\phi }_{0}\left(s\right){e}_{i,n-1}+{\phi }_{1}\left(s\right){e}_{i,n}+\sum _{j=1}^{m}\left({\chi }_{j}\left(s\right){e}_{i,n,j}+{\psi }_{j}\left(s\right){e}_{i,n+1,j}\right)\hfill \\ & +{h}^{p+1}{R}_{i,m,n}\left(s\right)+{k}^{r}{A}_{i,m,n}\left(s\right)\right)ds+E{}_{i,m,n}h{}^{q}.\hfill \end{array}$

On the other hand, setting $s=1$ in (3.7), we have

${e}_{i,n+1}={\phi }_{0}\left(1\right){e}_{i,n-1}+{\phi }_{1}\left(1\right){e}_{i,n}+\sum _{j=1}^{m}\left({\chi }_{j}\left(1\right){e}_{i,n,j}+{\psi }_{j}\left(1\right){e}_{i,n+1,j}\right)+{h}^{p+1}{R}_{i,m,n}\left(s\right)+{k}^{r}{A}_{i,m,n}\left(s\right).$(3.10)

Therefore, denoting

${\epsilon }_{i,v}=\left[\begin{array}{c}\hfill {e}_{i,v,1}\hfill \\ \hfill \mathrm{⋮}\hfill \\ \hfill {e}_{i,v,m}\hfill \end{array}\right],{𝐄}_{i,n}=\left[\begin{array}{c}\hfill {E}_{i,n,1}\hfill \\ \hfill \mathrm{⋮}\hfill \\ \hfill {E}_{i,n,m}\hfill \end{array}\right],{𝐀}_{i,n}=\left[\begin{array}{c}\hfill {A}_{i,n,1}\hfill \\ \hfill \mathrm{⋮}\hfill \\ \hfill {A}_{i,n,m}\hfill \end{array}\right],$

we obtain

$\begin{array}{cc}& \left(I-h{B}_{i}^{\left[n+1\right]}\right){\epsilon }_{i,n+1}-h{w}^{\left[n+1\right]}{e}_{i,n}\hfill \\ & =h\sum _{v=1}^{n}{B}_{n}^{\left[v\right]}{\epsilon }_{i,v}+h\sum _{v=1}^{n}{w}_{n}^{\left[v\right]}{e}_{i,v-1}+{h}^{p+2}\sum _{v=2}^{n}{\rho }_{n}^{\left[v\right]}+{h}^{p+2}{\rho }^{\left[n+1\right]}+{h}^{q}{𝐄}_{n}+{h}^{d+1}{𝐒}_{n}+{k}^{r}{𝐀}_{n},\hfill \end{array}$(3.11)

where the matrices ${B}_{i}^{\left[n+1\right]}$, ${B}_{n}^{\left[v\right]}$, ${w}^{\left[n+1\right]}$, ${w}_{n}^{\left[v\right]}$ and the vectors ${\rho }_{n}^{\left[v\right]}$, ${\rho }^{\left[n+1\right]}$, ${𝐒}_{n}$ involve the integrals over $\left[0,{c}_{j}\right]$ or $\left[0,1\right]$ of $\frac{\partial }{\partial y}{K}_{i}$ multiplied by ${\phi }_{0}$, ${\phi }_{1}$, ${\chi }_{j}$, ${\psi }_{j}$, ${R}_{i,m,n}$, ${V}_{i}$ and ${A}_{i,m,n}$.

Put

${ϵ}_{i,v}=\left[\begin{array}{c}\hfill {\epsilon }_{i,v+1}\hfill \\ \hfill {e}_{i,v}\hfill \end{array}\right].$

Then, from (3.10) and (3.11), it follows that

$\parallel {ϵ}_{i,n+1}\parallel \le h{D}_{1}\sum _{v=1}^{n}\parallel {ϵ}_{i,v}\parallel +{D}_{2}\sum _{v=n-2}^{n}\parallel {ϵ}_{i,v}\parallel +{\gamma }_{1}{h}^{d+1}+{\gamma }_{2}{h}^{q}+{\gamma }_{3}{h}^{2m}+{\gamma }_{4}{k}^{r},$

where ${D}_{1},{D}_{2},{\gamma }_{1},{\gamma }_{2},{\gamma }_{3},{\gamma }_{4}$ are upper bounds of the norm of vectors and matrices appearing in (3.11).

Hence, using the Gronwall inequality, it follows that

$\parallel {ϵ}_{i,n+1}\parallel \le C\gamma \left({h}^{{p}^{*}}+{k}^{r}\right)=O\left({h}^{{p}^{*}}+{k}^{r}\right),$

where $\gamma =\mathrm{max}\left\{{\gamma }_{1},\mathrm{\dots },{\gamma }_{4}\right\}$ and C is a constant independent of h and k. ∎

To analyze the stability of the method, we introduce the following notation for the ith stage of the method:

$\begin{array}{c}\hfill {b}_{i}=\left[\begin{array}{c}\hfill {b}_{i,1}\hfill \\ \hfill \mathrm{⋮}\hfill \\ \hfill {b}_{i,m}\hfill \end{array}\right],{w}_{i,0}=\left[\begin{array}{c}\hfill {w}_{i,1,0}\hfill \\ \hfill \mathrm{⋮}\hfill \\ \hfill {w}_{i,m,0}\hfill \end{array}\right],{w}_{i,m+1}=\left[\begin{array}{c}\hfill {w}_{i,1,m+1}\hfill \\ \hfill \mathrm{⋮}\hfill \\ \hfill {w}_{i,m,m+1}\hfill \end{array}\right],\hfill \\ \hfill {W}_{i}={\left({w}_{i,j,l}\right)}_{j,l=1}^{m},e=\left[1,\mathrm{\dots },1\right],A={\left[{\chi }_{j}\left({c}_{i}\right)\right]}_{i,j=1}^{m},B={\left[{\psi }_{j}\left({c}_{i}\right)\right]}_{i,j=1}^{m},\hfill \end{array}$

where ${b}_{i,j}$ and ${w}_{i,j,l}$ are the weights of the quadrature rules at the ith stage for ${F}_{i,h}^{\left[n\right]}$ and ${\mathrm{\Phi }}_{i,h}^{\left[n+1\right]}$, respectively.

The following theorem can be obtained analogously to the theorem of [4] for each stage of the presented method.

#### Theorem 3.2.

Applying the presented method to test equation (2.10) at each stage leads to the matrix recurrence relation

$\left[\begin{array}{c}\hfill {y}_{i,n+1}\hfill \\ \hfill {Y}_{i}^{\left[n+1\right]}\hfill \\ \hfill {F}_{i,h}^{\left[n\right]}\hfill \\ \hfill {y}_{i,n}\hfill \end{array}\right]={M}_{i}\left(z\right)\left[\begin{array}{c}\hfill {y}_{i,n}\hfill \\ \hfill {Y}_{i}^{\left[n\right]}\hfill \\ \hfill {F}_{i,h}^{\left[n-1\right]}\hfill \\ \hfill {y}_{i,n-1}\hfill \end{array}\right],$

where $z\mathrm{=}h\mathit{}\lambda$ and the stability matrix ${M}_{i}\mathit{}\mathrm{\left(}z\mathrm{\right)}$ is ${M}_{i}\mathit{}\mathrm{\left(}z\mathrm{\right)}\mathrm{=}{P}_{i}^{\mathrm{-}\mathrm{1}}\mathit{}\mathrm{\left(}z\mathrm{\right)}\mathit{}{Q}_{i}\mathit{}\mathrm{\left(}z\mathrm{\right)}$ with

${P}_{i}\left(z\right)=\left[\begin{array}{cccc}\hfill -zB{w}_{i,m+1}\hfill & \hfill I-zB{W}_{i}\hfill & \hfill -B\hfill & \hfill 0\hfill \\ \hfill 1-z{\psi }^{T}\left(1\right){w}_{i,m+1}\hfill & \hfill -z{\psi }^{T}\left(1\right){W}_{i}\hfill & \hfill -{\psi }^{T}\left(1\right)\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill I\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 1\hfill \end{array}\right],$${Q}_{i}\left(z\right)=\left[\begin{array}{cccc}\hfill {\phi }_{1}\left(c\right)+zB{w}_{i,0}\hfill & \hfill A\hfill & \hfill 0\hfill & \hfill {\phi }_{0}\left(c\right)\hfill \\ \hfill {\phi }_{1}\left(1\right)+z{\psi }^{T}\left(1\right){w}_{i,0}\hfill & \hfill {\chi }^{T}\left(1\right)\hfill & \hfill 0\hfill & \hfill {\phi }_{0}\left(1\right)\hfill \\ \hfill z{b}_{i,m+1}\text{𝑒}\hfill & \hfill z\text{𝑒}{b}_{i}^{T}\hfill & \hfill 1\hfill & \hfill z{b}_{i,0}\text{𝑒}\hfill \\ \hfill 1\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \end{array}\right].$

Therefore, defining

${V}_{i,n}=\left[\begin{array}{c}\hfill {y}_{i,n}\hfill \\ \hfill {Y}_{i}^{\left[n\right]}\hfill \\ \hfill {F}_{i,h}^{\left[n-1\right]}\hfill \\ \hfill {y}_{i,n-1}\hfill \end{array}\right]\mathit{ }\text{and}\mathit{ }{V}_{n}={\left[{V}_{1,n},\mathrm{\dots },{V}_{N,n}\right]}^{T},$

we obtain the matrix recurrence relation of the method in the general case as ${V}_{n+1}=M\left(z\right){V}_{n}$, where the stability matrix $M\left(z\right)$ is $M\left(z\right)=\mathrm{diag}\left({M}_{1}\left(z\right),\mathrm{\dots },{M}_{N}\left(z\right)\right)$.

## 4 Numerical examples

In this section, we give some examples to show the accuracy and stability of the presented method. In the examples, we apply the method with $m=2$, ${c}_{1}=\frac{3}{5}$, ${c}_{2}=\frac{4}{5}$ and ${q}_{0}={p}_{0}=1$, ${q}_{1}={p}_{1}=-1$ (for (2.7), (2.8)). So we have $p=2m-1=3$ and the polynomials ${\phi }_{0}$, ${\phi }_{1}$, ${\chi }_{j}$ and ${\psi }_{j}$, $j=1,2$, as

${\phi }_{0}\left(s\right)={\phi }_{1}\left(s\right)=\left(1-s\right)\left(s-\frac{4}{5}\right)\left(s-\frac{3}{5}\right),$$\begin{array}{cccc}\hfill {\chi }_{1}\left(s\right)& =\frac{1}{30}\left(-301+151s\right)\left(s-\frac{4}{5}\right)\left(s-\frac{3}{5}\right),\hfill & {\chi }_{2}\left(s\right)\hfill & \hfill =\frac{1}{20}\left(242-67s\right)\left(s-\frac{4}{5}\right)\left(s-\frac{3}{5}\right),\\ \hfill {\psi }_{1}\left(s\right)& =\left(s-\frac{4}{5}\right)\left(\frac{149}{50}-\frac{1303}{100}s-\frac{9}{20}{s}^{2}\right),\hfill & {\psi }_{2}\left(s\right)\hfill & \hfill =\left(s-\frac{3}{5}\right)\left(\frac{-179}{75}+\frac{431}{50}s+\frac{23}{30}{s}^{2}\right).\end{array}$
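As a sanity check on the polynomials above, the following sketch verifies the conclusions of Theorem 2.4, namely ${\chi }_{j}\left({c}_{i}\right)=0$ and ${\psi }_{j}\left({c}_{i}\right)={\delta }_{ij}$, at ${c}_{1}=3/5$, ${c}_{2}=4/5$.

```python
def chi1(s): return (1/30)*(-301 + 151*s)*(s - 4/5)*(s - 3/5)
def chi2(s): return (1/20)*(242 - 67*s)*(s - 4/5)*(s - 3/5)
def psi1(s): return (s - 4/5)*(149/50 - (1303/100)*s - (9/20)*s**2)
def psi2(s): return (s - 3/5)*(-179/75 + (431/50)*s + (23/30)*s**2)

c1, c2 = 3/5, 4/5

# chi_j vanishes at every collocation parameter
for s in (c1, c2):
    assert abs(chi1(s)) < 1e-12 and abs(chi2(s)) < 1e-12

# psi_j(c_i) = delta_ij
assert abs(psi1(c1) - 1) < 1e-12 and abs(psi1(c2)) < 1e-12
assert abs(psi2(c2) - 1) < 1e-12 and abs(psi2(c1)) < 1e-12
```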

Also, the stability polynomial is

$p\left(w,z\right)={w}^{2}\left({p}_{4}\left(z\right){w}^{4}+{p}_{3}\left(z\right){w}^{3}+{p}_{2}\left(z\right){w}^{2}+{p}_{1}\left(z\right)w+{p}_{0}\left(z\right)\right),$

where

${p}_{0}\left(z\right)=z\left(\frac{24}{625}-\frac{132}{3125}z\right),$${p}_{1}\left(z\right)=-z\left(\frac{432}{625}+\frac{144}{3125}z\right),$${p}_{2}\left(z\right)=-\frac{96}{25}+\frac{72}{625}z-\frac{156}{3125}{z}^{2},$${p}_{3}\left(z\right)=\frac{192}{25}-\frac{1344}{625}z+\frac{1872}{3125}{z}^{2},$${p}_{4}\left(z\right)=-\frac{96}{25}+\frac{336}{125}z-\frac{288}{625}{z}^{2}.$

By [4], the described method is A-stable for these choices of parameters and polynomials.
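The A-stability claim can be spot-checked numerically. Note that the coefficients ${p}_{0}\left(z\right),\mathrm{\dots},{p}_{4}\left(z\right)$ above sum to zero in each power of z, so $w=1$ is a root of the quartic factor for every z; for $\mathrm{Re}\phantom{\rule{0.2em}{0ex}}z<0$, the remaining roots should lie inside the unit disk. A minimal sketch in Python (the helper name `stability_roots` is ours):

```python
import numpy as np

def stability_roots(z):
    """Roots of the quartic factor of the stability polynomial p(w, z)
    for m = 2, c1 = 3/5, c2 = 4/5, with the coefficients p_i(z) above."""
    p0 = z * (24/625 - 132/3125 * z)
    p1 = -z * (432/625 + 144/3125 * z)
    p2 = -96/25 + 72/625 * z - 156/3125 * z**2
    p3 = 192/25 - 1344/625 * z + 1872/3125 * z**2
    p4 = -96/25 + 336/125 * z - 288/625 * z**2
    # numpy.roots expects coefficients from the highest degree down
    return np.roots([p4, p3, p2, p1, p0])

# sample points on the negative real axis
for z in [-0.1, -1.0, -10.0, -100.0]:
    rho = max(abs(w) for w in stability_roots(z))
    print(f"z = {z:8.1f}:  max |w| = {rho:.6f}")
```

For each of these samples the maximal root modulus is 1 up to rounding, attained only at the persistent root $w=1$, consistent with A-stability.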

As mentioned previously, in this paper we apply the well-known differential transform (DT) method [17] to obtain the required starting values. This method yields an approximation to the Taylor expansion of the solution at $\left({x}_{0},{t}_{0}\right)$, which is highly accurate near that point, making it well suited for computing the starting values.

#### Example 4.1.

Consider the integral equation

$y\left(x,t\right)=\frac{x\left(3+100{x}^{2}t\right)}{3\left(1+t\right)}-100{\int }_{0}^{t}{\int }_{0}^{x}{y}^{2}\left(z,s\right)dzds,x,t\in \left[0,3\right],$(4.1)

which is a stiff equation with the exact solution $y\left(x,t\right)=\frac{x}{1+t}$. Applying the two-dimensional DT method of [17] to equation (4.1), we obtain

$Y\left(m,n\right)=G\left(m,n\right)-\frac{100}{mn}\sum _{l=0}^{n-1}\sum _{k=0}^{m-1}Y\left(k,l\right)Y\left(m-k-1,n-l-1\right)\quad\text{for }m,n=1,2,\mathrm{\dots },$

which is a recurrence relation whose boundary coefficients follow from the equation on the axes: the double integral vanishes for $x=0$ or $t=0$, so $Y\left(m,0\right)=G\left(m,0\right)$ and $Y\left(0,n\right)=G\left(0,n\right)$ for $m,n=0,1,2,\mathrm{\dots }$, where $G\left(m,n\right)$ is the differential transform of $g\left(x,t\right)$; here this gives $Y\left(1,0\right)=1$, with all other boundary coefficients equal to zero. Therefore, the DT approximate solution of equation (4.1) is given by

${y}_{N}\left(x,t\right)=\sum _{m=0}^{N}\sum _{n=0}^{N}Y\left(m,n\right){x}^{m}{t}^{n}.$(4.2)

We use relation (4.2) to determine the required starting values. Table 1 shows the absolute errors of the presented method and the DT method at some points.

Table 1

Computational results of Example 4.1 at some nodes.
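The DT recurrence for this example can be verified symbolically: the exact solution $\frac{x}{1+t}$ has Taylor coefficients ${\delta }_{m,1}{\left(-1\right)}^{n}$. A Python sketch in exact rational arithmetic (the closed form of $G\left(m,n\right)$ below is read off from the geometric series $\frac{1}{1+t}=\sum _{n\ge 0}{\left(-1\right)}^{n}{t}^{n}$; the boundary values are taken from the equation on the axes, where the double integral vanishes, so $Y\left(1,0\right)=1$):

```python
from fractions import Fraction

N = 8  # truncation order

def G(m, n):
    """Differential transform of g(x,t) = x(3 + 100 x^2 t) / (3(1+t)),
    i.e. of x/(1+t) + (100/3) x^3 t/(1+t), via the geometric series."""
    c = Fraction(0)
    if m == 1:
        c += Fraction((-1) ** n)
    if m == 3 and n >= 1:
        c += Fraction(100, 3) * (-1) ** (n - 1)
    return c

Y = {}
# boundary coefficients: y(x,0) = g(x,0) = x and y(0,t) = g(0,t) = 0
for m in range(N + 1):
    Y[(m, 0)] = G(m, 0)
for n in range(N + 1):
    Y[(0, n)] = G(0, n)

for m in range(1, N + 1):
    for n in range(1, N + 1):
        conv = sum(Y[(k, l)] * Y[(m - k - 1, n - l - 1)]
                   for k in range(m) for l in range(n))
        Y[(m, n)] = G(m, n) - Fraction(100, m * n) * conv

# the exact solution x/(1+t) has coefficients delta_{m,1} (-1)^n
for m in range(N + 1):
    for n in range(N + 1):
        expected = Fraction((-1) ** n) if m == 1 else Fraction(0)
        assert Y[(m, n)] == expected
print("DT coefficients match the Taylor series of x/(1+t) up to order", N)
```

All computed coefficients agree exactly with the Taylor series of the exact solution, so the truncated sum (4.2) reproduces it up to the truncation order.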

#### Example 4.2.

As the second example, consider the stiff equation

$y\left(x,t\right)=g\left(x,t\right)-100{\int }_{0}^{t}{\int }_{0}^{x}zsy\left(z,s\right)dzds,x,t\in \left[0,3\right],$(4.3)

with $g\left(x,t\right)=x\mathrm{Ln}\left(1+t\right)+\frac{100}{3}{x}^{3}\left[\frac{1}{2}\left({t}^{2}-1\right)\mathrm{Ln}\left(1+t\right)+\frac{1}{2}t-\frac{1}{4}{t}^{2}\right]$ and the exact solution $y\left(x,t\right)=x\mathrm{Ln}\left(1+t\right)$.

Similar to the previous example, using the two-dimensional DT method, we obtain

$Y\left(m,n\right)=G\left(m,n\right)-\frac{100}{mn}\sum _{l=0}^{n-1}\sum _{k=0}^{m-1}{\delta }_{k,1}{\delta }_{l,1}Y\left(m-k-1,n-l-1\right)\quad\text{for }m,n=1,2,\mathrm{\dots },$

where $G\left(m,n\right)$ is the differential transform of $g\left(x,t\right)$ and $Y\left(m,0\right)=Y\left(0,n\right)=0$, $m,n=0,1,2,\mathrm{\dots }$. Therefore, the approximate solution of equation (4.3) is given by

${y}_{N}\left(x,t\right)=\sum _{m=0}^{N}\sum _{n=0}^{N}Y\left(m,n\right){x}^{m}{t}^{n},$

which gives us the required starting values. The absolute errors of the presented method and the DT method are given in Table 2.

Table 2

Computational results of Example 4.2 at some nodes.
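Here the Kronecker deltas collapse the double sum to the single shifted term $Y\left(m-2,n-2\right)$, and the recurrence can again be checked against the Taylor coefficients of the exact solution $x\mathrm{Ln}\left(1+t\right)$, namely $Y\left(m,n\right)={\delta }_{m,1}\frac{{\left(-1\right)}^{n+1}}{n}$. A sketch under the same conventions as before (the closed form of $G\left(m,n\right)$ is assembled from the series $\mathrm{Ln}\left(1+t\right)=\sum _{n\ge 1}\frac{{\left(-1\right)}^{n+1}}{n}{t}^{n}$):

```python
from fractions import Fraction

N = 9  # truncation order

def ln_coef(n):
    # Taylor coefficient of t^n in Ln(1+t): (-1)^(n+1)/n for n >= 1
    return Fraction((-1) ** (n + 1), n) if n >= 1 else Fraction(0)

def G(m, n):
    """Differential transform of
    g(x,t) = x Ln(1+t) + (100/3) x^3 [(1/2)(t^2-1) Ln(1+t) + t/2 - t^2/4]."""
    c = Fraction(0)
    if m == 1:
        c += ln_coef(n)
    if m == 3:
        # ln_coef(n-2) is the t^n coefficient of t^2 Ln(1+t)
        c += Fraction(100, 3) * (Fraction(1, 2) * (ln_coef(n - 2) - ln_coef(n))
                                 + Fraction(1, 2) * (n == 1)
                                 - Fraction(1, 4) * (n == 2))
    return c

Y = {}
for m in range(N + 1):
    for n in range(N + 1):
        if m == 0 or n == 0:
            Y[(m, n)] = Fraction(0)  # y vanishes on both axes here
        else:
            # the delta factors reduce the double sum to one shifted term
            conv = Y[(m - 2, n - 2)] if m >= 2 and n >= 2 else Fraction(0)
            Y[(m, n)] = G(m, n) - Fraction(100, m * n) * conv

# exact solution x Ln(1+t): Y(m,n) = delta_{m,1} (-1)^(n+1)/n
for m in range(N + 1):
    for n in range(1, N + 1):
        expected = ln_coef(n) if m == 1 else Fraction(0)
        assert Y[(m, n)] == expected
print("DT coefficients match the Taylor series of x*Ln(1+t) up to order", N)
```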

## 5 Conclusion

In this paper, we extended two-step collocation methods to two-dimensional nonlinear Volterra integral equations (2D-NVIEs) of the second kind. We converted the 2D-NVIE of the second kind to a one-dimensional VIE of the second kind and then solved the resulting equation by two-step collocation methods. The numerical results confirm the convergence and stability of the method.

## Acknowledgements

The authors would like to thank the anonymous referees for their valuable comments that helped the authors to improve the paper.

## References

• [1]

N. Bildik and S. Deniz, A new efficient method for solving delay differential equations and a comparison with other methods, Eur. Phys. J. Plus 132 (2017), no. 1, 51–61.

• [2]

H. Brunner and J.-P. Kauthen, The numerical solution of two-dimensional Volterra integral equations by collocation and iterated collocation, IMA J. Numer. Anal. 9 (1989), no. 1, 47–59.

• [3]

N. L. Carothers, A Short Course on Banach Space Theory, London Math. Soc. Stud. Texts 64, Cambridge University Press, Cambridge, 2005.

• [4]

D. Conte, Z. Jackiewicz and B. Paternoster, Two-step almost collocation methods for Volterra integral equations, Appl. Math. Comput. 204 (2008), no. 2, 839–853.

• [5]

D. Conte and B. Paternoster, Multistep collocation methods for Volterra integral equations, Appl. Numer. Math. 59 (2009), no. 8, 1721–1736.

• [6]

D. Conte and I. Del Prete, Fast collocation methods for Volterra integral equations of convolution type, J. Comput. Appl. Math. 196 (2006), no. 2, 652–663.

• [7]

L. M. Delves and J. L. Mohamed, Computational Methods for Integral Equations, Cambridge University Press, Cambridge, 1985.

• [8]

S. Deniz, Comparison of solutions of systems of delay differential equations using Taylor collocation method, Lambert W function and variational iteration method, Sci. Iranica Trans. D Comp. Sci. Engin. Elec. 22 (2015), no. 3, 1052–1058.

• [9]

S. Deniz and N. Bildik, A new analytical technique for solving Lane–Emden type equations arising in astrophysics, Bull. Belg. Math. Soc. Simon Stevin 24 (2017), no. 2, 305–320.

• [10]

H. Guoqiang and Z. Liqing, Asymptotic error expansion of two-dimensional Volterra integral equation by iterated collocation, Appl. Math. Comput. 61 (1994), no. 2–3, 269–285.

• [11]

G. Han and R. Wang, The extrapolation method for two-dimensional Volterra integral equations based on the asymptotic expansion of iterated Galerkin solutions, J. Integral Equations Appl. 13 (2001), no. 1, 15–34.

• [12]

R. Katani and S. Shahmorad, A new block by block method for solving two-dimensional linear and nonlinear Volterra integral equations of the first and second kinds, Bull. Iranian Math. Soc. 39 (2013), no. 4, 707–724.

• [13]

F. Mirzaee and Z. Rafei, The block by block method for the numerical solution of the nonlinear two-dimensional Volterra integral equations, J. King Saud Univ. Sci. 23 (2011), no. 2, 191–195.

• [14]

Z. M. Odibat, Differential transform method for solving Volterra integral equation with separable kernels, Math. Comput. Modelling 48 (2008), no. 7–8, 1144–1149.

• [15]

J. Saberi, O. Navid Samadi and E. Tohidi, Numerical solution of two-dimensional Volterra integral equations by spectral Galerkin method, J. Appl. Math. Bioinf. 1 (2011), 159–174.

• [16]

L. Tao and H. Yong, A generalization of discrete Gronwall inequality and its application to weakly singular Volterra integral equation of the second kind, J. Math. Anal. Appl. 282 (2003), no. 1, 56–62.

• [17]

A. Tari, M. Y. Rahimi, S. Shahmorad and F. Talati, Solving a class of two-dimensional linear and nonlinear Volterra integral equations by the differential transform method, J. Comput. Appl. Math. 228 (2009), no. 1, 70–76.

• [18]

P. J. van der Houwen and H. J. J. te Riele, Backward differentiation type formulas for Volterra integral equations of the second kind, Numer. Math. 37 (1981), no. 2, 205–217.

Revised: 2018-02-27

Accepted: 2018-04-12

Published Online: 2019-05-21

Published in Print: 2019-06-01

Citation Information: Journal of Applied Analysis, Volume 25, Issue 1, Pages 1–11, ISSN (Online) 1869-6082, ISSN (Print) 1425-6908.


© 2019 Walter de Gruyter GmbH, Berlin/Boston.