# Existence and uniqueness of solutions for the stochastic Volterra-Levin equation with variable delays

• Shoubo Jin
From the journal Open Mathematics

## Abstract

The Picard iteration method is used to study the existence and uniqueness of solutions for the stochastic Volterra-Levin equation with variable delays. Several sufficient conditions are specified to ensure that the equation has a unique solution. First, the stochastic Volterra-Levin equation is transformed into an integral equation. Then, to obtain the solution of the integral equation, the successive approximation sequences are constructed, and the existence and uniqueness of solutions for the stochastic Volterra-Levin equation are derived by the convergence of the sequences. Finally, two examples are given to demonstrate the validity of the theoretical results.

MSC 2010: 34K50; 60H10

## 1 Introduction

As stochastic modeling is used in fields such as physics, economics, and chemistry, scholars have paid more and more attention to stochastic differential equations. Therefore, the existence and uniqueness of solutions of such equations have become a hot topic in recent years. The Volterra equation is a significant differential equation, which has been applied to the circulating fuel nuclear reactor, neural networks, population projection, and other fields. In 1928, Volterra [1] first proposed the Volterra equation, i.e.,

$$\tag{1} x'(t) = -\int_{t-L}^{t} p(s-t)\, f(x(s))\,ds,$$

and Levin [2] obtained the asymptotical stability of (1). Burton investigated the stability of equation (1) by the contraction mapping principle in [3]. Zhao and Yuan [4] considered 3/2-stability of a generalized Volterra-Levin equation. The discrete Volterra equation describing the evolutionary process of the population was recently investigated in [5].

To analyze the Volterra equation, Levin [2] used a restrictive condition that is hard to check in practical applications, $\lim_{|x|\to\infty}\int_0^x f(u)\,du = \infty$, and the author also required that the function $p(t)$ have good properties, such as $\frac{dp}{dt}\le 0$, $\frac{d^2p}{dt^2}\ge 0$ and $\frac{d^3p(t)}{dt^3}\le 0$ for any $t\in(0,+\infty)$. Although the conditions on $f(x)$ were simplified by averages in [3], there were still many requirements on the function $f(x)$. In this paper, the constraints on $f(x)$ and $p(t)$ will be weakened for the stochastic Volterra-Levin equation with variable delays.

Let $\{\Omega, \mathcal{F}, P\}$ be a complete probability space equipped with a filtration $\{\mathcal{F}_t\}_{t\ge 0}$ satisfying the usual conditions, i.e., the filtration is right continuous and $\mathcal{F}_0$ contains all $P$-null sets. Let $\{W(t), t\ge 0\}$ denote a standard Brownian motion defined on $\{\Omega, \mathcal{F}, P\}$. We investigate the existence and uniqueness of solutions for the stochastic Volterra-Levin equation with variable delays, i.e.,

$$\tag{2}
\begin{cases}
dx(t) = -\left[\displaystyle\int_{t-L}^{t} p(s-t)\, f(x(s-\alpha(s)))\,ds\right]dt + g(x(t-\beta(t)))\,dW(t), & t\in[0,T],\\[4pt]
x(s) = \varphi(s)\in C([-L-\tau, 0]; \mathbb{R}),
\end{cases}$$

where $f(x)$ and $g(x)$ are known functions satisfying certain conditions, $L>0$ is a constant, $p(s)\in C([-L,0];\mathbb{R})$, $\mathbb{R}=(-\infty,+\infty)$, and $\alpha(t)$ and $\beta(t)$ are the variable delays, satisfying $\alpha(t), \beta(t)\in[0,\tau]$.
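For intuition, equation (2) can be simulated on a time grid. The following Euler-Maruyama sketch is a minimal illustration, not part of the analysis of this paper; the kernel `p`, the nonlinearities `f` and `g`, the delays `alpha` and `beta`, and the initial segment `phi` are all hypothetical example choices.

```python
import math
import random

def simulate_volterra_levin(T=1.0, L=0.5, tau=0.1, dt=0.001, seed=1,
                            p=lambda s: 1.0 + s,       # example kernel on [-L, 0]
                            f=lambda x: x,             # example drift nonlinearity
                            g=lambda x: 0.1 * x,       # example diffusion coefficient
                            alpha=lambda t: 0.05,      # example drift delay, in [0, tau]
                            beta=lambda t: 0.1,        # example diffusion delay, in [0, tau]
                            phi=lambda s: 1.0):        # example initial segment on [-L-tau, 0]
    """Euler-Maruyama sketch for
    dx(t) = -[ int_{t-L}^{t} p(s-t) f(x(s-alpha(s))) ds ] dt + g(x(t-beta(t))) dW(t)."""
    rng = random.Random(seed)
    n_hist = int(round((L + tau) / dt))   # grid points storing phi on [-L-tau, 0)
    n_steps = int(round(T / dt))
    # x[k] approximates x(t_k) with t_k = (k - n_hist) * dt
    x = [phi((k - n_hist) * dt) for k in range(n_hist)] + [phi(0.0)]
    m_sub = int(round(L / dt))
    for n in range(n_steps):
        t = n * dt
        # left-endpoint quadrature of the Volterra drift integral over [t-L, t]
        drift = 0.0
        for j in range(m_sub):
            s = t - L + j * dt
            idx = n_hist + int(round((s - alpha(s)) / dt))  # grid index of x(s - alpha(s))
            drift += p(s - t) * f(x[idx]) * dt
        dW = rng.gauss(0.0, math.sqrt(dt))
        idx_b = n_hist + int(round((t - beta(t)) / dt))     # grid index of x(t - beta(t))
        x.append(x[-1] - drift * dt + g(x[idx_b]) * dW)
    return x
```

The history of the path is kept on the same grid as the solution, so the delayed arguments $s-\alpha(s)$ and $t-\beta(t)$ are read off by index arithmetic rather than interpolation.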

Scholars have become increasingly interested in the stochastic Volterra-Levin equations. The equation has been applied to many special research fields, such as the population model of spatial heterogeneity [6], the predator-prey model [7], and the nonautonomous competitive model [8]. After reviewing and sorting out the literature, it is found that most scholars currently use the principle of contraction mapping to explore the equation. For example, Luo [9] analyzed the exponential stability of the classical stochastic Volterra-Levin equations. Zhao et al. [10] investigated the mean square asymptotic stability of the generalized stochastic Volterra-Levin equations, which improved the results in [9]. Li and Xu [11] demonstrated the existence and global attractiveness of periodic solutions for impulsive stochastic Volterra-Levin equations. In this paper, the Picard iteration method is directly used to prove the existence and uniqueness of solutions of the stochastic Volterra-Levin equations with variable delays, which can give a more intuitive approximate solution. Recently, for the case without delay, Jaber [12] proved the weak existence and uniqueness of affine stochastic Volterra equations. Dung [13] revealed Itô differential representation of the stochastic Volterra integral equations. For the case of constant delay, Guo and Zhu [14] used this approximate method to prove the existence of solutions of stochastic Volterra-Levin equations. Some delay Volterra integral problems on a half-line were analyzed in [15]. The qualitative properties of solutions of nonlinear Volterra equations without random disturbance were investigated in [16]. However, there are only a few results of the stochastic Volterra equations with variable delay.

Generally, a time delay is inevitable and variable in practical applications, and the future state of a system depends not only on its current state but also on its past [17,18,19]. When $f(x)=x$ in equation (2), Benhadri and Zeghdoudi [20] applied variable delays to the Volterra-Levin equation with Poisson jumps and obtained mean square stability by fixed-point theory. The authors in [5] discussed the linear discrete Volterra equation with infinite delay when $g(x)=0$, that is, without any random noise. In this paper, we investigate the Volterra-Levin equation with variable delays and a standard Brownian motion under more general conditions. Moreover, the Picard successive approximation method is used to prove the existence and uniqueness of the solution under certain sufficient conditions. Compared with [2] and [3], these conditions are easier to verify.

The rest of this paper is organized as follows. In Section 2, some necessary conditions and lemmas are established. In Section 3, the existence and uniqueness of solutions are proved. In Section 4, two examples are given to demonstrate the validity of the main results.

## 2 Assumptions and lemmas

To obtain the existence and uniqueness of the solutions for equation (2), the following assumptions are given in this paper.

1. $\lim_{x\to 0} \dfrac{f(x)}{x} = \beta > 0$.

2. $g(0) = f(0) = 0$, and there exists a constant $\mu > 0$ such that $\dfrac{f(x)}{x} > \mu$ for $x \ne 0$.

3. $\int_{-L}^{0} p(s)\,ds = m > 0$, $\int_{-L}^{0} |p(s)s|\,ds = m_1 > 0$ and $\max_{-L \le s \le 0} |p(s)| = m_2$.

4. There is a positive constant $K > 0$ such that $|f(x)-f(y)| \vee |g(x)-g(y)| \le K|x-y|$ for all $x, y \in \mathbb{R}$.

5. $5K^2\left[\dfrac{2}{\mu K}\left(1-e^{-m\mu T}\right) + 2m_1^2 + \dfrac{1}{2m\mu}\left(1-e^{-2m\mu T}\right)\right] < 1$ and $\left(K^2L^4m_2^2 + \dfrac{1}{2}K^2 + \dfrac{9}{10}\right)e^{mKT} < 1$.
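As a quick sanity check, the constants $m$, $m_1$, $m_2$ of Assumption 3 and the two smallness inequalities of Assumption 5 can be evaluated numerically for a candidate kernel. The helper below is an illustrative sketch, not part of the paper; the kernel $p(s)=1+s$ and the parameter values in the usage notes are hypothetical examples.

```python
import math

def volterra_levin_constants(p, L, n=100000):
    """Approximate m, m1, m2 from Assumption 3 for a kernel p on [-L, 0]
    by the midpoint rule."""
    h = L / n
    s_vals = [-L + (j + 0.5) * h for j in range(n)]
    m = sum(p(s) for s in s_vals) * h              # int_{-L}^0 p(s) ds
    m1 = sum(abs(p(s) * s) for s in s_vals) * h    # int_{-L}^0 |p(s) s| ds
    m2 = max(abs(p(s)) for s in s_vals)            # max_{-L <= s <= 0} |p(s)|
    return m, m1, m2

def check_H5(K, mu, m, m1, m2, L, T):
    """Evaluate the two inequalities of Assumption 5; both must be < 1."""
    c1 = 5 * K**2 * (2 / (mu * K) * (1 - math.exp(-m * mu * T))
                     + 2 * m1**2
                     + 1 / (2 * m * mu) * (1 - math.exp(-2 * m * mu * T)))
    c2 = (K**2 * L**4 * m2**2 + 0.5 * K**2 + 9 / 10) * math.exp(m * K * T)
    return c1 < 1 and c2 < 1
```

For example, for $p(s) = 1+s$ on $[-0.5, 0]$ one gets $m = 0.375$, $m_1 = 1/12$, $m_2 = 1$, and Assumption 5 holds for small enough $K$ and $T$ but fails for, say, $K = 2$.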

## Remark 2.1

Assumptions H1–H3 are common conditions for studying the Volterra-Levin equations. For instance, Luo [9] discussed the exponential stability of classical stochastic Volterra-Levin equations under Assumptions H1–H3. Zhao et al. [10] studied the mean square asymptotic stability of a class of generalized nonlinear stochastic Volterra-Levin equations under similar assumptions. Assumption H4 is the Lipschitz condition, which is the core condition ensuring the existence and uniqueness of solutions for the initial value problem.

Now, we transform (2) into the following form by using the properties of integrals.

## Lemma 2.1

Assume that H1–H3 hold. Then equation (2) can be transformed into

$$\tag{3}
\begin{aligned}
x(t) ={}& e^{-\int_0^t m a(u-\alpha(u))\,du}\left[\varphi(0) - \int_{-L}^{0} p(s)\int_s^0 f(\varphi(u-\alpha(u)))\,du\,ds\right]\\
&+ \int_0^t [x(v) - x(v-\alpha(v))]\, m a(v-\alpha(v))\, e^{-\int_v^t m a(u-\alpha(u))\,du}\,dv\\
&+ \int_{-L}^{0} p(s)\int_{t+s}^{t} f(x(u-\alpha(u)))\,du\,ds\\
&- \int_0^t e^{-\int_v^t m a(u-\alpha(u))\,du}\, m a(v-\alpha(v)) \int_{-L}^{0} p(s)\int_{v+s}^{v} f(x(u-\alpha(u)))\,du\,ds\,dv\\
&+ \int_0^t e^{-\int_s^t m a(u-\alpha(u))\,du}\, g(x(s-\beta(s)))\,dW(s).
\end{aligned}$$

## Proof

Let $a(t) = \begin{cases} \dfrac{f(x(t))}{x(t)}, & x(t)\ne 0,\\ \beta, & x(t) = 0; \end{cases}$ then $a(t)\in C([-\tau, T]; \mathbb{R}^+)$ by Assumptions H1 and H2. Using

$$\tag{4}
\begin{aligned}
\int_{t-L}^{t} p(s-t)\, f(x(s-\alpha(s)))\,ds &= \int_{-L}^{0} p(s)\, f(x(s+t-\alpha(s+t)))\,ds\\
&= \frac{d}{dt}\int_{-L}^{0} p(s)\int_0^{t+s} f(x(u-\alpha(u)))\,du\,ds\\
&= \frac{d}{dt}\int_{-L}^{0} p(s)\int_t^{t+s} f(x(u-\alpha(u)))\,du\,ds + m f(x(t-\alpha(t)))\\
&= -\frac{d}{dt}\left[\int_{-L}^{0} p(s)\int_{t+s}^{t} f(x(u-\alpha(u)))\,du\,ds\right] + m a(t-\alpha(t))\, x(t-\alpha(t)),
\end{aligned}$$

equation (2) can be transformed into

$$\tag{5}
dx(t) = -m a(t-\alpha(t))\, x(t-\alpha(t))\,dt + d\left[\int_{-L}^{0} p(s)\int_{t+s}^{t} f(x(u-\alpha(u)))\,du\,ds\right] + g(x(t-\beta(t)))\,dW(t).$$

Multiplying both sides of the above equation by $e^{\int_0^t m a(u-\alpha(u))\,du}$ and integrating from $0$ to $t$, using integration by parts and the following formula:

$$\tag{6}
d\left[x(t)\, e^{\int_0^t m a(u-\alpha(u))\,du}\right] = e^{\int_0^t m a(u-\alpha(u))\,du}\,dx(t) + m a(t-\alpha(t))\, x(t)\, e^{\int_0^t m a(u-\alpha(u))\,du}\,dt.$$

We obtain

$$\tag{7}
\begin{aligned}
&x(t)\, e^{\int_0^t m a(u-\alpha(u))\,du} - \varphi(0)\\
&\quad= \int_0^t [x(v) - x(v-\alpha(v))]\, m a(v-\alpha(v))\, e^{\int_0^v m a(u-\alpha(u))\,du}\,dv\\
&\qquad+ e^{\int_0^t m a(u-\alpha(u))\,du}\int_{-L}^{0} p(s)\int_{t+s}^{t} f(x(u-\alpha(u)))\,du\,ds - \int_{-L}^{0} p(s)\int_{s}^{0} f(\varphi(u-\alpha(u)))\,du\,ds\\
&\qquad- \int_0^t e^{\int_0^v m a(u-\alpha(u))\,du}\, m a(v-\alpha(v)) \int_{-L}^{0} p(s)\int_{v+s}^{v} f(x(u-\alpha(u)))\,du\,ds\,dv\\
&\qquad+ \int_0^t e^{\int_0^v m a(u-\alpha(u))\,du}\, g(x(v-\beta(v)))\,dW(v).
\end{aligned}$$

Multiplying both sides of the above equality by $e^{-\int_0^t m a(u-\alpha(u))\,du}$ and using

$$\tag{8}
e^{-\int_0^t m a(u-\alpha(u))\,du}\int_0^t e^{\int_0^v m a(u-\alpha(u))\,du}\, g(x(v-\beta(v)))\,dW(v) = \int_0^t e^{-\int_s^t m a(u-\alpha(u))\,du}\, g(x(s-\beta(s)))\,dW(s).$$

We can obtain equation (3).□

## Remark 2.2

The method of transforming the stochastic differential equation into an integral equation has been widely used. When the function $g(x)$ is independent of the variable $x$, Luo studied the exponential stability for a class of stochastic Volterra-Levin equations by this method in [9]. Zhao et al. investigated the mean square asymptotic stability of the generalized nonlinear stochastic Volterra-Levin equations [10]. Based on the semigroup of operators, Yang et al. transformed the heat conduction equation into a fractional Volterra integral equation in [21]. In this lemma, due to the appearance of variable delays, we need to handle the transformation more precisely.

## 3 Existence and uniqueness

Picard iteration is the most commonly used method for proving the existence of solutions to stochastic equations [20,21,22,23]. In this paper, the existence and uniqueness of solutions for equation (2) are proved by the Picard iteration method. An important characteristic of this method is that it is constructive, and bounds on the difference between the iterates and the solution are easily available. Such bounds are not only useful for the approximation of solutions but also necessary in the study of qualitative properties of solutions.

Now, let us briefly summarize the idea of the Picard iteration method. To obtain the solution of an integral equation $y(t) = y_0 + \int_0^t f(\tau, y(\tau))\,d\tau$, the Picard successive approximation sequence is constructed as follows:

$$y_{m+1}(t) = y_0 + \int_0^t f(\tau, y_m(\tau))\,d\tau.$$

If the sequence $\{y_m(t)\}$ converges uniformly to a continuous function $y(t)$ on some interval $J$, then we may pass to the limit on both sides of the above equation to obtain

$$y(t) = \lim_{m\to\infty} y_{m+1}(t) = y_0 + \lim_{m\to\infty}\int_0^t f(\tau, y_m(\tau))\,d\tau = y_0 + \int_0^t f(\tau, y(\tau))\,d\tau.$$

Thus, $y(t)$ is the desired solution.
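The scheme above can be sketched numerically. The following minimal example (an illustration only, with trapezoidal quadrature as an added assumption) applies Picard iteration to $y' = y$, $y(0) = 1$, i.e. $y(t) = 1 + \int_0^t y(\tau)\,d\tau$, whose solution is $e^t$.

```python
import math

def picard(f, y0, t_grid, iterations=20):
    """Picard successive approximation for y(t) = y0 + int_0^t f(tau, y(tau)) dtau,
    discretized on t_grid with the trapezoidal rule."""
    n = len(t_grid)
    y = [y0] * n                         # y_0(t) = y0 on the whole grid
    for _ in range(iterations):
        vals = [f(t, yt) for t, yt in zip(t_grid, y)]
        new = [y0]
        acc = 0.0
        for k in range(1, n):
            h = t_grid[k] - t_grid[k - 1]
            acc += 0.5 * (vals[k - 1] + vals[k]) * h   # trapezoidal increment
            new.append(y0 + acc)
        y = new                          # y_{m+1} built from y_m
    return y

# y' = y, y(0) = 1  =>  y(t) = e^t
ts = [i / 100 for i in range(101)]
approx = picard(lambda t, y: y, 1.0, ts)
```

After 20 iterations the approximation at $t=1$ agrees with $e$ up to the quadrature error of the grid.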

## Theorem 3.1

Suppose that Assumptions H1–H5 hold. Then equation (2) has a unique solution on $[0, T]$.

## Proof

The Picard iteration method is used in the proof of this theorem. Using Lemma 2.1, we construct the Picard iteration sequence:

$$\tag{9}
\begin{cases}
x_0(s) = \varphi(s), & -L-\tau \le s < 0,\\
x_0(t) = \varphi(0), & 0 \le t \le T,
\end{cases}
\qquad
\begin{cases}
x_n(s) = \varphi(s), & -L-\tau \le s < 0,\\
x_n(t) = I_0^{n-1}(t) + I_1^{n-1}(t) + I_2^{n-1}(t) + I_3^{n-1}(t) + I_4^{n-1}(t), & 0 \le t \le T,\ n \ge 1,
\end{cases}$$

where

$$\begin{aligned}
I_0^{n-1}(t) &= e^{-\int_0^t m a(u-\alpha(u))\,du}\left[\varphi(0) - \int_{-L}^{0} p(s)\int_s^0 f(\varphi(u-\alpha(u)))\,du\,ds\right],\\
I_1^{n-1}(t) &= \int_0^t [x_{n-1}(v) - x_{n-1}(v-\alpha(v))]\, m a(v-\alpha(v))\, e^{-\int_v^t m a(u-\alpha(u))\,du}\,dv,\\
I_2^{n-1}(t) &= \int_{-L}^{0} p(s)\int_{t+s}^{t} f(x_{n-1}(u-\alpha(u)))\,du\,ds,\\
I_3^{n-1}(t) &= -\int_0^t e^{-\int_v^t m a(u-\alpha(u))\,du}\, m a(v-\alpha(v)) \int_{-L}^{0} p(s)\int_{v+s}^{v} f(x_{n-1}(u-\alpha(u)))\,du\,ds\,dv,\\
I_4^{n-1}(t) &= \int_0^t e^{-\int_s^t m a(u-\alpha(u))\,du}\, g(x_{n-1}(s-\beta(s)))\,dW(s).
\end{aligned}$$

(1) We first verify the mean square boundedness of $x_n(t)$ $(n \ge 0)$, so we only need to prove that $E\big[\sup_{0\le t\le T}|x_n(t)|^2\big]$ is bounded.

It is obvious that $E|x_0(t)|^2 = E|\varphi(0)|^2 < +\infty$ for $n = 0$. Suppose that $E|x_{n-1}(t)|^2$ is bounded; we now prove that $E|x_n(t)|^2$ is bounded. Using formula (9), we obtain $E|x_n(t)|^2 \le 5\sum_{i=0}^{4} E|I_i^{n-1}(t)|^2$.
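The factor 5 comes from the elementary (Cauchy-Schwarz) inequality $(a_0+\cdots+a_4)^2 \le 5(a_0^2+\cdots+a_4^2)$. A quick numerical illustration of this bound:

```python
import random

def five_term_bound(a):
    """Check (a0+...+a4)^2 <= 5*(a0^2+...+a4^2) for a list of 5 numbers,
    with a tiny tolerance for floating-point roundoff."""
    return sum(a) ** 2 <= 5 * sum(x * x for x in a) + 1e-9

random.seed(0)
samples = [[random.uniform(-10, 10) for _ in range(5)] for _ in range(1000)]
```

Equality holds exactly when all five terms are equal, e.g. $a_i \equiv 1$.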

Using Assumptions H2 and H4, we obtain

$$\tag{10}
\begin{aligned}
E\Big[\sup_{0\le t\le T}|I_0^{n-1}(t)|^2\Big] &= E\sup_{0\le t\le T}\left|e^{-\int_0^t m a(u-\alpha(u))\,du}\left[\varphi(0) - \int_{-L}^{0} p(s)\int_s^0 f(\varphi(u-\alpha(u)))\,du\,ds\right]\right|^2\\
&\le 2E|\varphi(0)|^2 + 2E\left|\int_{-L}^{0} p(s)\int_s^0 f(\varphi(u-\alpha(u)))\,du\,ds\right|^2\\
&\le 2E|\varphi(0)|^2 + 2K^2 m_1^2\, E\Big[\sup_{-L-\tau\le t\le 0}|\varphi(t)|^2\Big] < +\infty
\end{aligned}$$

and

$$\tag{11}
\begin{aligned}
E\Big[\sup_{0\le t\le T}|I_1^{n-1}(t)|^2\Big] &= E\sup_{0\le t\le T}\left|\int_0^t [x_{n-1}(v) - x_{n-1}(v-\alpha(v))]\, m a(v-\alpha(v))\, e^{-\int_v^t m a(u-\alpha(u))\,du}\,dv\right|^2\\
&\le 2E\Big[\sup_{-L-\tau\le t\le T}|x_{n-1}(t)|^2\Big]\sup_{0\le t\le T}\left|\int_0^t m a(v-\alpha(v))\, e^{-\int_v^t m a(u-\alpha(u))\,du}\,dv\right|^2\\
&= 2E\Big[\sup_{-L-\tau\le t\le T}|x_{n-1}(t)|^2\Big]\sup_{0\le t\le T}\left|1 - e^{-\int_0^t m a(u-\alpha(u))\,du}\right|^2\\
&\le 2\left(1-e^{-mKT}\right)^2 E\Big[\sup_{-L-\tau\le t\le T}|x_{n-1}(t)|^2\Big] < +\infty.
\end{aligned}$$

Further, by using the Hölder inequality, we obtain

$$\tag{12}
\begin{aligned}
E\Big[\sup_{0\le t\le T}|I_2^{n-1}(t)|^2\Big] &\le K^2 E\sup_{0\le t\le T}\left|\int_{-L}^{0} |p(s)|\int_{t+s}^{t} |x_{n-1}(u-\alpha(u))|\,du\,ds\right|^2\\
&\le K^2 \max_{s\in[-L,0]}|p(s)|^2\, E\sup_{0\le t\le T}\left|\int_{-L}^{0}\int_{t+s}^{t} |x_{n-1}(u-\alpha(u))|\,du\,ds\right|^2\\
&\le K^2 \max_{s\in[-L,0]}|p(s)|^2\, E\sup_{0\le t\le T}\left|\int_{-L}^{0}\int_{t-L}^{t} |x_{n-1}(u-\alpha(u))|\,du\,ds\right|^2\\
&\le K^2 L^4 \max_{s\in[-L,0]}|p(s)|^2\, E\Big[\sup_{-L-\tau\le t\le T}|x_{n-1}(t)|^2\Big]\\
&\le K^2 L^4 m_2^2\Big[\sup_{-L-\tau\le s< 0}E|\varphi(s)|^2 + \sup_{0\le t\le T}E|x_{n-1}(t)|^2\Big] < +\infty
\end{aligned}$$

and

$$\tag{13}
\begin{aligned}
E\sup_{0\le t\le T}|I_3^{n-1}(t)|^2 &\le K^2 E\sup_{0\le t\le T}\left|\int_0^t e^{-\int_v^t m a(u-\alpha(u))\,du}\, m a(v-\alpha(v)) \int_{-L}^{0} |p(s)|\int_{v+s}^{v} |x_{n-1}(u-\alpha(u))|\,du\,ds\,dv\right|^2\\
&\le K^2 L^2 \max_{s\in[-L,0]}|p(s)|^2\, E\sup_{0\le t\le T}\left|\int_0^t e^{-\int_v^t m a(u-\alpha(u))\,du}\, m a(v-\alpha(v)) \int_{v-L}^{v} |x_{n-1}(u-\alpha(u))|\,du\,dv\right|^2\\
&\le K^2 L^2 \max_{s\in[-L,0]}|p(s)|^2\, E\sup_{0\le t\le T}\left[\int_0^t e^{-\int_v^t m a(u-\alpha(u))\,du}\, m a(v-\alpha(v))\,dv\right.\\
&\qquad\qquad \left.\times \int_0^t e^{-\int_v^t m a(u-\alpha(u))\,du}\, m a(v-\alpha(v))\left|\int_{v-L}^{v} |x_{n-1}(u-\alpha(u))|\,du\right|^2 dv\right]\\
&\le K^2 L^2 \max_{s\in[-L,0]}|p(s)|^2\, E\sup_{0\le t\le T}\left[\left(1-e^{-\int_0^t m a(u-\alpha(u))\,du}\right)\right.\\
&\qquad\qquad \left.\times \int_0^t e^{-\int_v^t m a(u-\alpha(u))\,du}\, m a(v-\alpha(v))\left|\int_{v-L}^{v} |x_{n-1}(u-\alpha(u))|\,du\right|^2 dv\right]\\
&\le K^2 L^3 \max_{s\in[-L,0]}|p(s)|^2\, E\sup_{0\le t\le T}\int_0^t e^{-\int_v^t m a(u-\alpha(u))\,du}\, m a(v-\alpha(v)) \int_{v-L}^{v} |x_{n-1}(u-\alpha(u))|^2\,du\,dv\\
&\le K^2 L^4 \max_{s\in[-L,0]}|p(s)|^2\, E\Big[\sup_{-L-\tau\le t\le T}|x_{n-1}(t)|^2\Big]\sup_{0\le t\le T}\int_0^t e^{-\int_v^t m a(u-\alpha(u))\,du}\, m a(v-\alpha(v))\,dv\\
&\le K^2 L^4 \max_{s\in[-L,0]}|p(s)|^2\, E\Big[\sup_{-L-\tau\le t\le T}|x_{n-1}(t)|^2\Big]\sup_{0\le t\le T}\left(1-e^{-\int_0^t m a(u-\alpha(u))\,du}\right)\\
&\le K^2 L^4 m_2^2\Big[\sup_{-L-\tau\le s<0}E|\varphi(s)|^2 + \sup_{0\le t\le T}E|x_{n-1}(t)|^2\Big] < +\infty.
\end{aligned}$$

From Assumptions H1 and H4, we know

$$\tag{14}
\begin{aligned}
E\sup_{0\le t\le T}|I_4^{n-1}(t)|^2 &= E\sup_{0\le t\le T}\left|\int_0^t e^{-\int_s^t m a(u-\alpha(u))\,du}\, g(x_{n-1}(s-\beta(s)))\,dW(s)\right|^2\\
&\le K^2 \sup_{0\le t\le T} E\int_0^t e^{-2\int_s^t m a(u-\alpha(u))\,du}\, |x_{n-1}(s-\beta(s))|^2\,ds\\
&\le K^2 \sup_{0\le t\le T} E\left[\left(1-e^{-2\int_0^t m a(u-\alpha(u))\,du}\right)\sup_{-L-\tau\le t\le T}|x_{n-1}(t)|^2\right]\\
&\le K^2 E\Big[\sup_{-L-\tau\le s\le 0}|\varphi(s)|^2 + \sup_{0\le t\le T}|x_{n-1}(t)|^2\Big] < +\infty.
\end{aligned}$$

So,

$$\tag{15}
\begin{aligned}
E\Big[\sup_{-L-\tau\le t\le T}|x_n(t)|^2\Big] &\le E\Big[\sup_{-L-\tau\le t\le 0}|\varphi(t)|^2\Big] + E\Big[\sup_{0\le t\le T}|x_n(t)|^2\Big]\\
&\le 10E|\varphi(0)|^2 + \left(1 + 10K^2m_1^2\right)E\Big[\sup_{-L-\tau\le s\le 0}|\varphi(s)|^2\Big]\\
&\quad + \left[10K^2L^4m_2^2 + 5K^2 + 10\left(1-e^{-mKT}\right)\right]\left(E\sup_{-L-\tau\le s\le 0}|\varphi(s)|^2 + E\sup_{0\le s\le T}|x_{n-1}(s)|^2\right) < +\infty.
\end{aligned}$$

(2) Verifying the mean square continuity of $x_n(t)$.

Suppose $t_1 > 0$ and $r$ is sufficiently small. We obtain the following estimate:

$$E|x_n(t_1+r) - x_n(t_1)|^2 \le 5\sum_{i=0}^{4} E|I_i^{n-1}(t_1+r) - I_i^{n-1}(t_1)|^2.$$

By Itô integration, we have

$$\tag{16}
E|I_0^{n-1}(t_1+r) - I_0^{n-1}(t_1)|^2 \le \left|e^{-\int_{t_1}^{t_1+r} m a(u-\alpha(u))\,du} - 1\right|^2 E\left|\varphi(0) - \int_{-L}^{0} p(s)\int_s^0 f(\varphi(u-\alpha(u)))\,du\,ds\right|^2 \to 0\ (r\to 0),$$

$$\tag{17}
\begin{aligned}
E|I_1^{n-1}(t_1+r) - I_1^{n-1}(t_1)|^2 &\le 2\left|e^{-\int_{t_1}^{t_1+r} m a(u-\alpha(u))\,du} - 1\right|^2 E\left|\int_0^{t_1} [x_{n-1}(v)-x_{n-1}(v-\alpha(v))]\, m a(v-\alpha(v))\,dv\right|^2\\
&\quad + 2E\left|\int_{t_1}^{t_1+r} [x_{n-1}(v)-x_{n-1}(v-\alpha(v))]\, m a(v-\alpha(v))\,dv\right|^2 \to 0\ (r\to 0),
\end{aligned}$$

$$\tag{18}
E|I_2^{n-1}(t_1+r) - I_2^{n-1}(t_1)|^2 \le 2E\left|\int_{-L}^{0} p(s)\int_{t_1+s}^{t_1+r+s} f(x_{n-1}(u-\alpha(u)))\,du\,ds\right|^2 + 2m^2 E\left|\int_{t_1}^{t_1+r} f(x_{n-1}(u-\alpha(u)))\,du\right|^2 \to 0\ (r\to 0),$$

$$\tag{19}
\begin{aligned}
E|I_3^{n-1}(t_1+r) - I_3^{n-1}(t_1)|^2 &\le 2\left(e^{-\int_{t_1}^{t_1+r} m a(u-\alpha(u))\,du} - 1\right)^2 E\left|\int_0^{t_1} e^{-\int_v^{t_1} m a(u-\alpha(u))\,du}\, m a(v-\alpha(v))\int_{-L}^{0} p(s)\int_{v+s}^{v} f(x_{n-1}(u-\alpha(u)))\,du\,ds\,dv\right|^2\\
&\quad + 2E\left|\int_{t_1}^{t_1+r} m a(v-\alpha(v))\int_{-L}^{0} p(s)\int_{v+s}^{v} f(x_{n-1}(u-\alpha(u)))\,du\,ds\,dv\right|^2 \to 0\ (r\to 0),
\end{aligned}$$

and

$$\tag{20}
\begin{aligned}
E|I_4^{n-1}(t_1+r) - I_4^{n-1}(t_1)|^2 &\le 2\left|e^{-\int_{t_1}^{t_1+r} m a(u-\alpha(u))\,du} - 1\right|^2 E\left|\int_0^{t_1} e^{-\int_s^{t_1} m a(u-\alpha(u))\,du}\, g(x_{n-1}(s-\beta(s)))\,dW(s)\right|^2\\
&\quad + 2E\left|\int_{t_1}^{t_1+r} e^{-\int_s^{t_1+r} m a(u-\alpha(u))\,du}\, g(x_{n-1}(s-\beta(s)))\,dW(s)\right|^2\\
&\le 2\left|e^{-\int_{t_1}^{t_1+r} m a(u-\alpha(u))\,du} - 1\right|^2 E\int_0^{t_1} e^{-2\int_s^{t_1} m a(u-\alpha(u))\,du}\, g^2(x_{n-1}(s-\beta(s)))\,ds\\
&\quad + 2E\int_{t_1}^{t_1+r} e^{-2\int_s^{t_1+r} m a(u-\alpha(u))\,du}\, g^2(x_{n-1}(s-\beta(s)))\,ds \to 0\ (r\to 0).
\end{aligned}$$

So $E|x_n(t_1+r) - x_n(t_1)|^2 \le 5\sum_{i=0}^{4} E|I_i^{n-1}(t_1+r) - I_i^{n-1}(t_1)|^2 \to 0$, and the mean square continuity of $x_n(t)$ is verified.

(3) This part proves the convergence of the sequences $\{x_n(t)\}_{n\ge 0}$.

By using a method similar to that of Step (1), we have

$$\begin{aligned}
&E\sup_{0\le t\le T}|x_{n+1}(t)-x_n(t)|^2\\
&\quad\le 5E\sup_{0\le t\le T}\left|\int_0^t (x_n(v)-x_{n-1}(v))\, m a(v-\alpha(v))\, e^{-\int_v^t m a(u-\alpha(u))\,du}\,dv\right|^2\\
&\qquad + 5E\sup_{0\le t\le T}\left|\int_0^t (x_n(v-\alpha(v))-x_{n-1}(v-\alpha(v)))\, m a(v-\alpha(v))\, e^{-\int_v^t m a(u-\alpha(u))\,du}\,dv\right|^2\\
&\qquad + 5K^2E\sup_{0\le t\le T}\left|\int_{-L}^{0} p(s)\int_{t+s}^{t} |x_n(u-\alpha(u))-x_{n-1}(u-\alpha(u))|\,du\,ds\right|^2\\
&\qquad + 5K^2E\sup_{0\le t\le T}\left|\int_0^t e^{-\int_v^t m a(u-\alpha(u))\,du}\, m a(v-\alpha(v))\int_{-L}^{0} p(s)\int_{v+s}^{v} |x_n(u-\alpha(u))-x_{n-1}(u-\alpha(u))|\,du\,ds\,dv\right|^2\\
&\qquad + 5K^2E\sup_{0\le t\le T}\left|\int_0^t e^{-\int_s^t m a(u-\alpha(u))\,du}\, |x_n(s-\beta(s))-x_{n-1}(s-\beta(s))|\,dW(s)\right|^2
\end{aligned}$$

$$\tag{21}
\begin{aligned}
&\quad\le 10E\Big[\sup_{-L-\tau\le t\le T}|x_n(t)-x_{n-1}(t)|^2\Big]\sup_{0\le t\le T}\left|\int_0^t m a(v-\alpha(v))\, e^{-\int_v^t m a(u-\alpha(u))\,du}\,dv\right|\\
&\qquad + 5K^2\left(\int_{-L}^{0}|p(s)s|\,ds\right)^2 E\Big[\sup_{-L-\tau\le t\le T}|x_n(t)-x_{n-1}(t)|^2\Big]\\
&\qquad + 5K^2 E\left[\sup_{0\le t\le T}\left(1-e^{-\int_0^t m a(u-\alpha(u))\,du}\right)\int_{-L}^{0}|p(s)s|\,ds\,\sup_{0\le t\le T}|x_n(t)-x_{n-1}(t)|^2\right]\\
&\qquad + 5K^2 E\left[\sup_{0\le t\le T}\int_0^t e^{-2m\mu(t-s)}\,ds\,\sup_{0\le t\le T}|x_n(t)-x_{n-1}(t)|^2\right]\\
&\quad\le \frac{10K}{\mu}\left(1-e^{-m\mu T}\right)E\sup_{0\le t\le T}|x_n(t)-x_{n-1}(t)|^2 + 5K^2\left[2m_1^2 + \frac{1}{2m\mu}\left(1-e^{-2m\mu T}\right)\right]E\sup_{0\le t\le T}|x_n(t)-x_{n-1}(t)|^2\\
&\quad= 5K^2\left[\frac{2}{\mu K}\left(1-e^{-m\mu T}\right) + 2m_1^2 + \frac{1}{2m\mu}\left(1-e^{-2m\mu T}\right)\right]E\sup_{0\le t\le T}|x_n(t)-x_{n-1}(t)|^2\\
&\quad\triangleq \delta\, E\sup_{0\le t\le T}|x_n(t)-x_{n-1}(t)|^2.
\end{aligned}$$

Here $\delta = 5K^2\left[\frac{2}{\mu K}\left(1-e^{-m\mu T}\right) + 2m_1^2 + \frac{1}{2m\mu}\left(1-e^{-2m\mu T}\right)\right]$, so, with $M = E\sup_{0\le t\le T}|x_1(t)-x_0(t)|^2$,

$$E\sup_{0\le t\le T}|x_{n+1}(t)-x_n(t)|^2 \le M\delta^n.$$

By the Chebyshev inequality, we obtain

$$P\left\{\sup_{0\le t\le T}|x_{n+1}(t)-x_n(t)| \ge \delta^{n/4}\right\} \le \frac{M\delta^n}{\delta^{n/2}} = M\delta^{n/2}.$$

From Assumption H5, we have $\delta < 1$. By the Borel-Cantelli lemma, it follows that for almost all $\omega\in\Omega$ there exists a positive integer $n_0 = n_0(\omega)$ such that

$$\sup_{0\le t\le T}|x_{n+1}(t)-x_n(t)| \le \delta^{n/4}$$

for any $n \ge n_0$.

Next we show that the $x_n(t)$ converge uniformly on $[-L-\tau, T]$. Since $x_n(t) = x_0(t) + \sum_{i=1}^{n}[x_i(t)-x_{i-1}(t)]$ can be regarded as a partial sum of the function series $x_0(t) + \sum_{i=1}^{\infty}[x_i(t)-x_{i-1}(t)]$, and $\sup_{0\le t\le T}|x_i(t)-x_{i-1}(t)| \le \delta^{(i-1)/4}$ $(i=1,2,\ldots)$, it follows from the convergence of the number series $\sum_{i=1}^{\infty}\delta^{(i-1)/4}$ and the Weierstrass M-test that the $x_n(t)$ converge uniformly on $[-L-\tau, T]$.
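The dominating series here is geometric: for $\delta < 1$, $\sum_{i\ge 1}\delta^{(i-1)/4} = 1/(1-\delta^{1/4})$. A quick numerical illustration (with an arbitrary example value of $\delta$):

```python
# Partial sum of sum_{i>=1} delta^((i-1)/4) versus the closed form 1/(1 - delta^(1/4))
delta = 0.5                     # example value; any 0 < delta < 1 works
partial = sum(delta ** ((i - 1) / 4) for i in range(1, 200))
closed = 1.0 / (1.0 - delta ** 0.25)
```

The tail beyond 200 terms is below machine precision for this choice of $\delta$, so the partial sum matches the closed form.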

Let $x(t)$ be the sum function; then the sequence $\{x_n(t)\}_{n\ge 0}$ converges uniformly to $x(t)$ on $[-L-\tau, T]$. Since the $x_n(t)$ are continuous and $\mathcal{F}_t$-adapted, the sum function $x(t)$ is also continuous and $\mathcal{F}_t$-adapted. Using inequality (15), $\{x_n(t)\}_{n\ge 0}$ is a Cauchy sequence in $L^2$, so

$$E|x_n(t)-x(t)|^2 \to 0\quad (n\to\infty).$$

On the other hand, by inequality (15), we have

$$\tag{22}
E\Big[\sup_{-L-\tau\le t\le T}|x(t)|^2\Big] \le \frac{10E|\varphi(0)|^2 + \left(10K^2m_1^2+2\right)E\Big[\sup_{-L-\tau\le s\le 0}|\varphi(s)|^2\Big]}{10e^{-mKT} - 10K^2L^4m_2^2 - 5K^2 - 9}.$$

So by Assumption H5, we obtain $E\big[\sup_{-L-\tau\le t\le T}|x(t)|^2\big] < +\infty$.

(4) This part proves that $x(t)$ is a solution of equation (2).

After a simple calculation, we have

$$\tag{23}
E\left|\int_0^t (x_n(v)-x(v))\, m a(v-\alpha(v))\, e^{-\int_v^t m a(u-\alpha(u))\,du}\,dv\right|^2 \le E\Big[\sup_{0\le t\le T}|x_n(t)-x(t)|^2\Big]\left(1-e^{-\int_0^t m a(u-\alpha(u))\,du}\right)^2 \to 0\ (n\to\infty)$$

and

$$\tag{24}
E\left|\int_0^t (x_n(v-\alpha(v))-x(v-\alpha(v)))\, m a(v-\alpha(v))\, e^{-\int_v^t m a(u-\alpha(u))\,du}\,dv\right|^2 \to 0\quad (n\to\infty).$$

Using the Hölder inequality, it follows that

(25) E