BY 4.0 license Open Access Published by De Gruyter Open Access August 25, 2022

Khasminskii-type theorem for a class of stochastic functional differential equations

  • Li Ma, Ru Wang and Liangqing Yan
From the journal Open Mathematics

Abstract

This paper is concerned with the existence and uniqueness theorems for stochastic functional differential equations with Markovian switching and jump, where the linear growth condition is replaced by more general Khasminskii-type conditions in terms of a pair of Lyapunov-type functions.

MSC 2010: 60H10; 60F99

1 Introduction

In order for a stochastic differential equation to have a unique global solution for any given initial data, the coefficients of the equation are generally required to satisfy the linear growth condition (see, e.g., [1]) or a non-Lipschitz condition together with the linear growth condition (see, e.g., [2,3,4]); the linear growth condition plays an important role in ruling out explosion in finite time. However, many important equations in practice, such as stochastic Lotka-Volterra systems (see, e.g., [5]), do not satisfy the linear growth condition, so it is necessary to establish more general existence-and-uniqueness theorems. There are many results on solutions of stochastic functional differential equations (SFDEs) without jumps. Xuerong and Rassias (see, e.g., [6]) examined the global solutions of SFDEs under a more general condition, which was introduced by Khasminskii. Following this idea, Yi et al. (see, e.g., [7]) considered existence-and-uniqueness theorems of global solutions to SFDEs. Qi et al. (see, e.g., [8]) established existence-and-uniqueness theorems of global solutions to SFDEs under the local Lipschitz condition and Khasminskii-type conditions. Minghui et al. (see, e.g., [9]) established existence-and-uniqueness theorems for SFDEs in which the linear growth condition is replaced by more general Khasminskii-type conditions in terms of a pair of Lyapunov-type functions. Then, Fuke (see, e.g., [10]) considered existence-and-uniqueness theorems of global solutions to neutral SFDEs under the local Lipschitz condition but without the linear growth condition, and later established Khasminskii-type theorems for SFDEs with finite delay. Quanxin (see, e.g., [11]) obtained the pth moment exponential stability of impulsive SFDEs with Markovian switching. Recently, Quanxin and his collaborators (see, e.g., [12]) studied a Razumikhin stability theorem for a class of impulsive stochastic delay differential systems.

For SFDEs with jumps, Wei et al. (see, e.g., [13]) obtained the existence and uniqueness of solutions to a general neutral SFDE with infinite delay and Lévy jumps in the phase space $C_g$ under local Carathéodory-type conditions and gave exponential and almost sure asymptotic estimates of the solutions. By using the Razumikhin method and Lyapunov functions, Quanxin (see, e.g., [14]) obtained several Razumikhin-type theorems on the pth moment exponential stability of the suggested system and further discussed the pth moment exponential stability of stochastic delay differential equations with Lévy noise and Markovian switching.

For a class of nonlinear stochastic differential delay equations with Poisson jumps and time-dependent delay, Haidan and Quanxin (see, e.g., [15]) proved that the considered stochastic system has a unique global solution and investigated the pth moment exponential stability and the almost sure exponential stability of solutions. Their analysis relies on the local Lipschitz condition and a new nonlinear growth condition, weaker than those in the previous literature, together with the Lyapunov function and the semi-martingale convergence theorem.

In this paper, we consider existence-and-uniqueness theorems of global solutions to neutral SFDEs with Markovian switching and Lévy jumps and establish a Khasminskii-type theorem in the spirit of Minghui et al. (see, e.g., [9]). The main difficulty comes from the neutral term and the Lévy jump term. After placing some assumptions on the neutral term and the jump term, we obtain the existence and uniqueness of the solution by an elementary inequality, the Gronwall inequality, the Burkholder-Davis-Gundy inequality, and the Itô formula.

This paper is organized as follows. We establish the Khasminskii-type existence-and-uniqueness theorems for neutral SFDEs with Markovian switching and Lévy jumps in Section 2. We then consider a special class of neutral SFDEs with Markovian switching and Lévy jumps, namely, neutral stochastic differential delay equations with variable delays, in Section 3. An example is given in Section 4 to illustrate our results.

2 The Khasminskii-type theorem for SFDEs with Markovian switching and jumps

Throughout this paper, unless otherwise specified, we use the following notation. Let $|x|$ be the Euclidean norm of a vector $x \in \mathbb{R}^n$. Let $\mathbb{R}_+$ be the family of nonnegative real numbers. If $A$ is a matrix, its trace norm is denoted by $|A| = \sqrt{\operatorname{trace}(A^TA)}$. Let $\tau > 0$ and let $C([-\tau,0];\mathbb{R}^n)$ be the family of continuous functions from $[-\tau,0]$ to $\mathbb{R}^n$ with the supremum norm $\|\varphi\| = \sup_{-\tau\le\theta\le 0}|\varphi(\theta)|$, which is a Banach space. Let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\ge 0}, P)$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t\ge 0}$ satisfying the usual conditions (i.e., it is increasing and right continuous, and $\mathcal{F}_0$ contains all $P$-null sets). Let $p \ge 1$ and let $L^p_{\mathcal{F}_t}([-\tau,0];\mathbb{R}^n)$ be the family of $\mathcal{F}_t$-measurable $C([-\tau,0];\mathbb{R}^n)$-valued random variables $\phi$ such that $\mathbb{E}\|\phi\|^p < \infty$. Let $W(t) = (W_1(t),\dots,W_m(t))^T$ be an $m$-dimensional Brownian motion defined on the probability space. Let $\bar p = \{\bar p(t), t \ge 0\}$ be a stationary $\mathbb{R}^n$-valued Poisson point process. Then, for $A \in \mathcal{B}(\mathbb{R}^n\setminus\{0\})$, where $\mathcal{B}(\mathbb{R}^n\setminus\{0\})$ denotes the Borel $\sigma$-field on $\mathbb{R}^n\setminus\{0\}$, we define the Poisson counting measure $N$ associated with $\bar p$ by

$$N((0,t]\times A) = \sum_{0 < s \le t} I_A(\bar p(s)).$$

For simplicity, we denote N ( t , A ) = N ( ( 0 , t ] × A ) . It is well known that there exists a σ -finite measure π , such that

$$\mathbb{E}[N(t,A)] = \pi(A)t, \qquad P(N(t,A)=n) = \exp(-\pi(A)t)\,\frac{(\pi(A)t)^n}{n!}.$$

This measure $\pi$ is called the Lévy measure. Moreover, by the Doob-Meyer decomposition theorem, there exist a unique $\{\mathcal{F}_t\}_{t\ge 0}$-adapted martingale $\tilde N(t,A)$ and a unique $\{\mathcal{F}_t\}_{t\ge 0}$-adapted natural increasing process $\hat N(t,A)$ such that

$$N(t,A) = \tilde N(t,A) + \hat N(t,A), \quad t > 0.$$

Here, $\tilde N(t,A)$ is called the compensated Lévy jump measure and $\hat N(t,A) = \pi(A)t$ is called the compensator.
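As a quick numerical aside (not part of the paper), the identity $\mathbb{E}[N(t,A)] = \pi(A)t$ can be spot-checked by Monte Carlo: when the restriction of the point process to a set $A$ has finite intensity $\pi(A)$, the count $N(t,A)$ is Poisson with mean $\pi(A)t$. The function name and parameter values below are purely illustrative.

```python
import random

def simulate_N(t, rate, trials=20000, seed=1):
    """Monte Carlo estimate of E[N(t, A)] when the restriction of the point
    process to a set A has intensity pi(A) = rate; N(t, A) is then a
    Poisson(rate * t) count.  Illustrative sketch only."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        # Count exponential inter-arrival times that fit inside (0, t].
        s, n = 0.0, 0
        while True:
            s += rng.expovariate(rate)
            if s > t:
                break
            n += 1
        total += n
    return total / trials

est = simulate_N(t=2.0, rate=1.5)   # compare with pi(A) * t = 3.0
print(est)
```

With 20,000 trials the sample mean should sit close to $\pi(A)t = 3$.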

Let $r(t)$, $t \ge 0$, be a right-continuous Markovian chain on the probability space taking values in a finite state space $S = \{1,2,\dots,N\}$ with generator $\Gamma = (\gamma_{ij})_{N\times N}$ given by

$$P\{r(t+\Delta)=j \mid r(t)=i\} = \begin{cases} \gamma_{ij}\Delta + o(\Delta), & \text{if } i \ne j,\\ 1+\gamma_{ii}\Delta + o(\Delta), & \text{if } i = j. \end{cases}$$

Here, $\Delta > 0$ and $\gamma_{ij} \ge 0$, $i \ne j$, is the transition rate of the Markovian chain from $i$ to $j$, while $\gamma_{ii} = -\sum_{j\ne i}\gamma_{ij}$. We assume that the Markovian chain, the Brownian motion, and the Lévy jumps are independent. For $Z \in \mathcal{B}(\mathbb{R}^n\setminus\{0\})$ with $\pi(Z) < \infty$, consider the nonlinear neutral SFDE with Markovian switching and Lévy jumps

(1) $$\mathrm{d}[x(t) - G(x_t, r(t))] = f(x_t, x(t), t, r(t))\,\mathrm{d}t + g(x_t, x(t), t, r(t))\,\mathrm{d}W(t) + \int_Z h(x_t, x(t), t, r(t), v)\,N(\mathrm{d}t, \mathrm{d}v),$$

with the initial data $x_0 = \xi \in C([-\tau,0];\mathbb{R}^n)$. Here, $G: C([-\tau,0];\mathbb{R}^n)\times S \to \mathbb{R}^n$, $f: C([-\tau,0];\mathbb{R}^n)\times\mathbb{R}^n\times\mathbb{R}_+\times S \to \mathbb{R}^n$, $g: C([-\tau,0];\mathbb{R}^n)\times\mathbb{R}^n\times\mathbb{R}_+\times S \to \mathbb{R}^{n\times m}$, $h: C([-\tau,0];\mathbb{R}^n)\times\mathbb{R}^n\times\mathbb{R}_+\times S\times Z \to \mathbb{R}^n$, and $x_t(\theta) = x(t+\theta)$ for $\theta\in[-\tau,0]$. Now we denote by $C^{1,2}([-\tau,\infty)\times\mathbb{R}^n;\mathbb{R}_+)$ the family of all continuous nonnegative functions $V(t,x)$ defined on $[-\tau,\infty)\times\mathbb{R}^n$ that are continuously twice differentiable in $x$ and once in $t$. Given $V \in C^{1,2}([-\tau,\infty)\times\mathbb{R}^n;\mathbb{R}_+)$, define the function $LV: \mathbb{R}_+\times C([-\tau,0];\mathbb{R}^n)\to\mathbb{R}$ by

$$\begin{aligned} LV(t,\varphi) ={}& V_t(t,\varphi(0)-G(\varphi,r(t))) + V_x(t,\varphi(0)-G(\varphi,r(t)))\,f(\varphi,\varphi(0),t,r(t))\\ &+ \tfrac{1}{2}\operatorname{trace}\big[g^T(\varphi,\varphi(0),t,r(t))\,V_{xx}(t,\varphi(0)-G(\varphi,r(t)))\,g(\varphi,\varphi(0),t,r(t))\big]\\ &+ \int_Z \big[V(t,\varphi(0)-G(\varphi,r(t))+h(\varphi,\varphi(0),t,r(t),v)) - V(t,\varphi(0)-G(\varphi,r(t)))\big]\,\pi(\mathrm{d}v), \end{aligned}$$

where

$$V_t(t,x) = \frac{\partial V(t,x)}{\partial t}, \quad V_x(t,x) = \left(\frac{\partial V(t,x)}{\partial x_1},\dots,\frac{\partial V(t,x)}{\partial x_n}\right), \quad V_{xx}(t,x) = \left(\frac{\partial^2 V(t,x)}{\partial x_i\,\partial x_j}\right)_{n\times n}.$$
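To build intuition for equation (1) (this sketch is not part of the paper), one can simulate a scalar special case with $G \equiv 0$ and a two-state chain by an Euler-Maruyama scheme, approximating the Brownian increment, the Markovian switching, and the Poisson jumps on each small step; at most one jump per step is kept, which is adequate for small step sizes, and every coefficient below is an illustrative stand-in.

```python
import math, random

def simulate_path(x0, T, dt, f, g, h, Gamma, lam, rng):
    """Euler-Maruyama path for a scalar special case of equation (1) with
    G == 0 and a two-state chain:
        dx = f(x, r) dt + g(x, r) dW + h(x, r) dN,
    where r(t) switches with generator Gamma and N is a Poisson process of
    rate lam.  Illustrative stand-in coefficients, not from the paper."""
    n = int(T / dt)
    x, r = x0, 0
    xs = [x0]
    for _ in range(n):
        # Markovian switching: leave state r with probability ~ -Gamma[r][r] * dt.
        if rng.random() < -Gamma[r][r] * dt:
            r = 1 - r                      # two-state chain: go to the other state
        dW = rng.gauss(0.0, math.sqrt(dt))
        dN = 1 if rng.random() < lam * dt else 0
        x = x + f(x, r) * dt + g(x, r) * dW + h(x, r) * dN
        xs.append(x)
    return xs

rng = random.Random(42)
path = simulate_path(
    x0=1.0, T=1.0, dt=0.001,
    f=lambda x, r: -(1.0 + r) * x,         # mean-reverting toy drift
    g=lambda x, r: 0.2,                    # constant toy diffusion
    h=lambda x, r: 0.1,                    # constant toy jump size
    Gamma=[[-1.0, 1.0], [2.0, -2.0]],
    lam=3.0, rng=rng,
)
```

For this mean-reverting toy system, the simulated path stays bounded, in line with the non-explosion conclusion of Theorem 2.1.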

Assumption 2.1

(Local Lipschitz condition) For any integer $m \ge 1$, there exists a positive constant $k_m$ such that

$$|f(\varphi,x,t,i)-f(\phi,y,t,i)|^2 \vee |g(\varphi,x,t,i)-g(\phi,y,t,i)|^2 \vee \int_Z |h(\varphi,x,t,i,v)-h(\phi,y,t,i,v)|^2\,\pi(\mathrm{d}v) \le k_m\big(\|\varphi-\phi\|^2 + |x-y|^2\big)$$

for any $\varphi,\phi \in C([-\tau,0];\mathbb{R}^n)$ and $x,y\in\mathbb{R}^n$ with $\|\varphi\|\vee\|\phi\|\vee|x|\vee|y| \le m$, any $i \in S$, and any $t\in\mathbb{R}_+$.

Assumption 2.2

(Contraction condition) For any $p \ge 1$, there exists a constant $\kappa \in (0, \tfrac12)$ such that for all $\varphi\in C([-\tau,0];\mathbb{R}^n)$ and $i\in S$,

$$\mathbb{E}\big(|G(\varphi,i)|^p\big) \le \kappa^p \sup_{-\tau\le\theta\le 0}\mathbb{E}\big(|\varphi(\theta)|^p\big).$$

Assumption 2.3

(Khasminskii-type condition) Let $p \ge 1$. There are two functions $V\in C^{1,2}([-\tau,\infty)\times\mathbb{R}^n;\mathbb{R}_+)$ and $U\in C([-\tau,\infty)\times\mathbb{R}^n;\mathbb{R}_+)$, a probability measure $\mu(\cdot)$ on $[-\tau,0]$, and positive constants $K$, $c_1$, $c_2$ such that for any $(t,x)\in\mathbb{R}_+\times\mathbb{R}^n$,

(2) $$c_1|x|^p \le V(t,x) \le c_2|x|^p,$$

and for any $(t,\varphi)\in\mathbb{R}_+\times C([-\tau,0];\mathbb{R}^n)$,

(3) $$LV(t,\varphi) \le K\Big[1+\sup_{-\tau\le\theta\le 0}V(t+\theta,\varphi(\theta))\Big] - U(t,\varphi(0)) + \int_{-\tau}^{0} U(t+\theta,\varphi(\theta))\,\mathrm{d}\mu(\theta).$$

Assumption 2.4

For the function $V$ stated in Assumption 2.3 and the constant $K$, we have

$$|V_x(t,\varphi(0)-G(\varphi,r(t)))\,g(\varphi,\varphi(0),t,r(t))| \le K\Big(1+\sup_{-\tau\le\theta\le 0}V(t+\theta,\varphi(\theta))\Big),$$

$$\int_Z \big|V(t,\varphi(0)-G(\varphi,r(t))+h(\varphi,\varphi(0),t,r(t),v)) - V(t,\varphi(0)-G(\varphi,r(t)))\big|^2\,\pi(\mathrm{d}v) \le K^2\Big(1+\sup_{-\tau\le\theta\le 0}V(t+\theta,\varphi(\theta))\Big)^2$$

for all $(t,\varphi)\in\mathbb{R}_+\times C([-\tau,0];\mathbb{R}^n)$.

Theorem 2.1

Under Assumptions 2.1-2.4, for any given initial data $x_0=\xi\in C([-\tau,0];\mathbb{R}^n)$, there is a unique global solution $x(t)$ of equation (1) on $t\in[-\tau,\infty)$. Moreover, for any $T \ge 0$, the solution has the property that

$$\mathbb{E}\sup_{-\tau\le t\le T}V(t,x(t)) \le C_4 e^{C_3 T},$$

where

$$\begin{aligned} C_1 &= \mathbb{E}V(0,x(0)-G(\xi,r(0))) + \int_{-\tau}^0 U(s,x(s))\,\mathrm{d}s,\\ C_2 &= 2\mathbb{E}\|\xi\|^p + \frac{2^p C_1}{c_1} + \frac{1-2^p\kappa^p}{c_2} + \left(\frac{2^p K}{c_1} + \frac{2^{2p+5}c_2K^2}{c_1^2(1-2^p\kappa^p)}\right)T,\\ C_3 &= \frac{2^p K}{c_1} + \frac{2^{2p+5}K^2c_2^2}{c_1^2(1-2^p\kappa^p)}, \qquad C_4 = c_2C_2. \end{aligned}$$
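For concreteness (an illustrative computation, not from the paper), the constants of Theorem 2.1 can be evaluated for sample parameter values; every number below is made up, and the quantity standing in for $\mathbb{E}\|\xi\|^p$ is hypothetical.

```python
import math

# Illustrative evaluation of the constants in Theorem 2.1 (made-up values).
p, c1, c2, kappa, K, C1, T = 2.0, 1.0, 2.0, 0.25, 0.01, 1.0, 1.0

denom = 1.0 - 2.0 ** p * kappa ** p    # 1 - 2^p kappa^p > 0 since kappa < 1/2
E_xi_p = 1.0                           # hypothetical stand-in for E||xi||^p

C2 = (2 * E_xi_p + 2 ** p * C1 / c1 + denom / c2
      + (2 ** p * K / c1 + 2 ** (2 * p + 5) * c2 * K ** 2 / (c1 ** 2 * denom)) * T)
C3 = 2 ** p * K / c1 + 2 ** (2 * p + 5) * K ** 2 * c2 ** 2 / (c1 ** 2 * denom)
C4 = c2 * C2
bound = C4 * math.exp(C3 * T)          # the moment bound C4 * exp(C3 * T)
print(C2, C3, C4, bound)
```

Note that $\kappa < 1/2$ guarantees $1-2^p\kappa^p > 0$, so all constants are well defined and positive.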

Proof

Similar to [16], Theorem 3.15, there is a unique maximal local solution $x(t)$ on $t\in[-\tau,\sigma)$, where $\sigma$ is the explosion time. To show that $x(t)$ is actually global, we need to show $\sigma=\infty$ a.s. Let $k_0>0$ be sufficiently large that $\|\xi\| < k_0$. For each integer $k\ge k_0$, define the stopping time

$$\sigma_k = \inf\{-\tau \le t < \sigma : |x(t)| \ge k\},$$

where, as usual, $\inf\varnothing = \infty$. Clearly, $\sigma_k$ is nondecreasing and $\lim_{k\to\infty}\sigma_k = \sigma_\infty \le \sigma$. The proof is complete if we can show $\sigma_\infty=\infty$ a.s. By the Itô formula ([17], Lemma 4.4.6),

(4) $$\begin{aligned} V(t, x(t)-G(x_t,r(t))) ={}& V(0, x(0)-G(x_0,r(0))) + \int_0^t LV(s,x_s)\,\mathrm{d}s\\ &+ \int_0^t V_x(s, x(s)-G(x_s,r(s)))\,g(x_s,x(s),s,r(s))\,\mathrm{d}W(s)\\ &+ \int_0^t\int_Z \big[V(s, x(s)-G(x_s,r(s))+h(x_s,x(s),s,r(s),v)) - V(s, x(s)-G(x_s,r(s)))\big]\,\tilde N(\mathrm{d}s,\mathrm{d}v). \end{aligned}$$

By (3), we have

(5) $$\begin{aligned} V(t, x(t)-G(x_t,r(t))) \le{}& V(0, x(0)-G(x_0,r(0))) + K\int_0^t\Big(1+\sup_{-\tau\le\theta\le 0}V(s+\theta, x(s+\theta))\Big)\mathrm{d}s\\ &- \int_0^t U(s,x(s))\,\mathrm{d}s + \int_0^t\int_{-\tau}^0 U(s+\theta, x(s+\theta))\,\mathrm{d}\mu(\theta)\,\mathrm{d}s\\ &+ \int_0^t V_x(s, x(s)-G(x_s,r(s)))\,g(x_s,x(s),s,r(s))\,\mathrm{d}W(s)\\ &+ \int_0^t\int_Z \big[V(s, x(s)-G(x_s,r(s))+h(x_s,x(s),s,r(s),v)) - V(s, x(s)-G(x_s,r(s)))\big]\,\tilde N(\mathrm{d}s,\mathrm{d}v). \end{aligned}$$

By the Fubini theorem, we have

(6) $$\int_0^t\int_{-\tau}^0 U(s+\theta,x(s+\theta))\,\mathrm{d}\mu(\theta)\,\mathrm{d}s = \int_{-\tau}^0\int_0^t U(s+\theta,x(s+\theta))\,\mathrm{d}s\,\mathrm{d}\mu(\theta) \le \int_{-\tau}^0\int_{-\tau}^t U(s,x(s))\,\mathrm{d}s\,\mathrm{d}\mu(\theta) \le \int_{-\tau}^t U(s,x(s))\,\mathrm{d}s.$$

Substituting (6) into (5) yields

$$\begin{aligned} V(t, x(t)-G(x_t,r(t))) \le{}& \bar C + K\int_0^t\Big(1+\sup_{-\tau\le\theta\le 0}V(s+\theta,x(s+\theta))\Big)\mathrm{d}s\\ &+ \int_0^t V_x(s,x(s)-G(x_s,r(s)))\,g(x_s,x(s),s,r(s))\,\mathrm{d}W(s)\\ &+ \int_0^t\int_Z \big[V(s,x(s)-G(x_s,r(s))+h(x_s,x(s),s,r(s),v)) - V(s,x(s)-G(x_s,r(s)))\big]\,\tilde N(\mathrm{d}s,\mathrm{d}v), \end{aligned}$$

where $\bar C = V(0,x(0)-G(x_0,r(0))) + \int_{-\tau}^0 U(s,x(s))\,\mathrm{d}s$. This implies that for any $k\ge k_0$ and $t < T$, where $T$ is an arbitrary positive constant,

$$\begin{aligned} V(t\wedge\sigma_k, x(t\wedge\sigma_k)-G(x_{t\wedge\sigma_k}, r(t\wedge\sigma_k))) \le{}& \bar C + K\int_0^{t\wedge\sigma_k}\Big(1+\sup_{-\tau\le\theta\le 0}V(s+\theta,x(s+\theta))\Big)\mathrm{d}s\\ &+ \int_0^{t\wedge\sigma_k} V_x(s,x(s)-G(x_s,r(s)))\,g(x_s,x(s),s,r(s))\,\mathrm{d}W(s)\\ &+ \int_0^{t\wedge\sigma_k}\int_Z \big[V(s,x(s)-G(x_s,r(s))+h(x_s,x(s),s,r(s),v)) - V(s,x(s)-G(x_s,r(s)))\big]\,\tilde N(\mathrm{d}s,\mathrm{d}v). \end{aligned}$$

Taking the supremum over $0\le t\le T$ and then the expectation in the above inequality, we obtain

(7) $$\begin{aligned} \mathbb{E}\sup_{0\le t\le T}V(t\wedge\sigma_k, x(t\wedge\sigma_k)-G(x_{t\wedge\sigma_k},r(t\wedge\sigma_k))) \le{}& \mathbb{E}\bar C + K\,\mathbb{E}\int_0^{T\wedge\sigma_k}\Big(1+\sup_{-\tau\le\theta\le 0}V(s+\theta,x(s+\theta))\Big)\mathrm{d}s\\ &+ \mathbb{E}\sup_{0\le t\le T}\left|\int_0^t I_{[0,\sigma_k]}(s)\,V_x(s,x(s)-G(x_s,r(s)))\,g(x_s,x(s),s,r(s))\,\mathrm{d}W(s)\right|\\ &+ \mathbb{E}\sup_{0\le t\le T}\left|\int_0^t\int_Z I_{[0,\sigma_k]}(s)\big[V(s,x(s)-G(x_s,r(s))+h(x_s,x(s),s,r(s),v)) - V(s,x(s)-G(x_s,r(s)))\big]\,\tilde N(\mathrm{d}s,\mathrm{d}v)\right|. \end{aligned}$$

By the Burkholder-Davis-Gundy inequality ([5], Theorem 1.7.3) and Assumption 2.4,

(8) $$\begin{aligned} &\mathbb{E}\sup_{0\le t\le T}\left|\int_0^t I_{[0,\sigma_k]}(s)\,V_x(s,x(s)-G(x_s,r(s)))\,g(x_s,x(s),s,r(s))\,\mathrm{d}W(s)\right|\\ &\quad\le \sqrt{32}\,\mathbb{E}\left[\int_0^T I_{[0,\sigma_k]}(s)\,\big|V_x(s,x(s)-G(x_s,r(s)))\,g(x_s,x(s),s,r(s))\big|^2\,\mathrm{d}s\right]^{1/2}\\ &\quad\le \sqrt{32}\,K\,\mathbb{E}\left[\int_0^T\Big(1+\sup_{-\tau\le\theta\le 0}V(s\wedge\sigma_k+\theta, x(s\wedge\sigma_k+\theta))\Big)^2\,\mathrm{d}s\right]^{1/2}\\ &\quad\le \sqrt{32}\,K\,\mathbb{E}\left[\int_0^T\Big(1+\sup_{-\tau\le t\le s}V(t\wedge\sigma_k, x(t\wedge\sigma_k))\Big)^2\,\mathrm{d}s\right]^{1/2}\\ &\quad\le \sqrt{32}\,K\,\mathbb{E}\left[\Big(1+\sup_{-\tau\le t\le T}V(t\wedge\sigma_k,x(t\wedge\sigma_k))\Big)\int_0^T\Big(1+\sup_{-\tau\le t\le s}V(t\wedge\sigma_k,x(t\wedge\sigma_k))\Big)\mathrm{d}s\right]^{1/2}\\ &\quad\le \frac{c_1(1-2^p\kappa^p)}{2^{p+1}c_2}\,\mathbb{E}\Big(1+\sup_{-\tau\le t\le T}V(t\wedge\sigma_k,x(t\wedge\sigma_k))\Big) + \frac{2^{p+4}c_2K^2}{c_1(1-2^p\kappa^p)}\,\mathbb{E}\int_0^T\Big(1+\sup_{-\tau\le t\le s}V(t\wedge\sigma_k,x(t\wedge\sigma_k))\Big)\mathrm{d}s. \end{aligned}$$

By Assumption 2.4 and, similarly to (8), by [5], Theorem 1.7.3,

(9) $$\begin{aligned} &\mathbb{E}\sup_{0\le t\le T}\left|\int_0^t\int_Z I_{[0,\sigma_k]}(s)\big(V(s,x(s)-G(x_s,r(s))+h(x_s,x(s),s,r(s),v)) - V(s,x(s)-G(x_s,r(s)))\big)\,\tilde N(\mathrm{d}s,\mathrm{d}v)\right|\\ &\quad\le \sqrt{32}\,\mathbb{E}\left[\int_0^T I_{[0,\sigma_k]}(s)\int_Z\big|V(s,x(s)-G(x_s,r(s))+h(x_s,x(s),s,r(s),v)) - V(s,x(s)-G(x_s,r(s)))\big|^2\,\pi(\mathrm{d}v)\,\mathrm{d}s\right]^{1/2}\\ &\quad\le \sqrt{32}\,K\,\mathbb{E}\left[\int_0^T\Big(1+\sup_{-\tau\le\theta\le 0}V(s\wedge\sigma_k+\theta,x(s\wedge\sigma_k+\theta))\Big)^2\,\mathrm{d}s\right]^{1/2}\\ &\quad\le \frac{c_1(1-2^p\kappa^p)}{2^{p+1}c_2}\,\mathbb{E}\Big(1+\sup_{-\tau\le t\le T}V(t\wedge\sigma_k,x(t\wedge\sigma_k))\Big) + \frac{2^{p+4}c_2K^2}{c_1(1-2^p\kappa^p)}\,\mathbb{E}\int_0^T\Big(1+\sup_{-\tau\le t\le s}V(t\wedge\sigma_k,x(t\wedge\sigma_k))\Big)\mathrm{d}s. \end{aligned}$$

Substituting (8) and (9) into (7),

(10) $$\begin{aligned} \mathbb{E}\sup_{0\le t\le T}V(t\wedge\sigma_k, x(t\wedge\sigma_k)-G(x_{t\wedge\sigma_k},r(t\wedge\sigma_k))) \le{}& C_1 + \left(K + \frac{2^{p+5}c_2K^2}{c_1(1-2^p\kappa^p)}\right)\mathbb{E}\int_0^T\Big(1+\sup_{-\tau\le t\le s}V(t\wedge\sigma_k,x(t\wedge\sigma_k))\Big)\mathrm{d}s\\ &+ \frac{c_1(1-2^p\kappa^p)}{2^pc_2}\,\mathbb{E}\Big(1+\sup_{-\tau\le t\le T}V(t\wedge\sigma_k,x(t\wedge\sigma_k))\Big), \end{aligned}$$

where $C_1 = \mathbb{E}V(0, x(0)-G(\xi,r(0))) + \int_{-\tau}^0 U(s,x(s))\,\mathrm{d}s$.

Recall the elementary inequality $|a+b|^p \le 2^{p-1}(|a|^p+|b|^p)$ for any $p\ge 1$ and $a,b\in\mathbb{R}^n$, which implies $|a|^p \ge \frac{1}{2^{p-1}}|a+b|^p - |b|^p$. Noting the relationship between $\sup$ and $\inf$, by Fatou's lemma, (2), and Assumptions 2.2-2.3, we have

(11) $$\begin{aligned} &\mathbb{E}\sup_{0\le t\le T}V(t\wedge\sigma_k, x(t\wedge\sigma_k)-G(x_{t\wedge\sigma_k},r(t\wedge\sigma_k)))\\ &\quad\ge c_1\,\mathbb{E}\sup_{0\le t\le T}|x(t\wedge\sigma_k)-G(x_{t\wedge\sigma_k},r(t\wedge\sigma_k))|^p\\ &\quad\ge c_1\left[\frac{1}{2^{p-1}}\,\mathbb{E}\sup_{0\le t\le T}|x(t\wedge\sigma_k)|^p - \mathbb{E}\sup_{0\le t\le T}|G(x_{t\wedge\sigma_k},r(t\wedge\sigma_k))|^p\right]\\ &\quad\ge c_1\left[\frac{1}{2^{p-1}}\,\mathbb{E}\sup_{0\le t\le T}|x(t\wedge\sigma_k)|^p - \kappa^p\,\mathbb{E}\sup_{0\le t\le T}\sup_{-\tau\le\theta\le 0}|x(t\wedge\sigma_k+\theta)|^p\right]\\ &\quad= c_1\left[\frac{1}{2^{p-1}}\,\mathbb{E}\sup_{0\le t\le T}|x(t\wedge\sigma_k)|^p - \kappa^p\,\mathbb{E}\sup_{-\tau\le t\le T}|x(t\wedge\sigma_k)|^p\right], \end{aligned}$$

and

(12) $$\mathbb{E}\int_0^T\Big(1+\sup_{-\tau\le t\le s}V(t\wedge\sigma_k,x(t\wedge\sigma_k))\Big)\mathrm{d}s \le \mathbb{E}\int_0^T\Big(1+c_2\sup_{-\tau\le t\le s}|x(t\wedge\sigma_k)|^p\Big)\mathrm{d}s.$$

Substituting (11) and (12) into (10) gives

(13) $$\begin{aligned} &c_1\left[\frac{1}{2^{p-1}}\,\mathbb{E}\sup_{0\le t\le T}|x(t\wedge\sigma_k)|^p - \kappa^p\,\mathbb{E}\sup_{-\tau\le t\le T}|x(t\wedge\sigma_k)|^p\right]\\ &\quad\le C_1 + \left(K+\frac{2^{p+5}c_2K^2}{c_1(1-2^p\kappa^p)}\right)\mathbb{E}\int_0^T\Big(1+c_2\sup_{-\tau\le t\le s}|x(t\wedge\sigma_k)|^p\Big)\mathrm{d}s\\ &\qquad+ \frac{c_1(1-2^p\kappa^p)}{2^pc_2} + \frac{c_1(1-2^p\kappa^p)}{2^p}\,\mathbb{E}\sup_{-\tau\le t\le T}|x(t\wedge\sigma_k)|^p. \end{aligned}$$

By (13), we have

(14) $$\begin{aligned} c_1\left(\frac{1}{2^{p-1}}-\kappa^p\right)\mathbb{E}\sup_{-\tau\le t\le T}|x(t\wedge\sigma_k)|^p \le{}& C_1 + \frac{c_1}{2^{p-1}}\,\mathbb{E}\|\xi\|^p + \left(K+\frac{2^{p+5}c_2K^2}{c_1(1-2^p\kappa^p)}\right)T\\ &+ \left(K+\frac{2^{p+5}c_2^2K^2}{c_1(1-2^p\kappa^p)}\right)\mathbb{E}\int_0^T\sup_{-\tau\le t\le s}|x(t\wedge\sigma_k)|^p\,\mathrm{d}s\\ &+ \frac{c_1(1-2^p\kappa^p)}{2^pc_2} + \frac{c_1(1-2^p\kappa^p)}{2^p}\,\mathbb{E}\sup_{-\tau\le t\le T}|x(t\wedge\sigma_k)|^p. \end{aligned}$$

So,

$$\frac{c_1}{2^p}\,\mathbb{E}\sup_{-\tau\le t\le T}|x(t\wedge\sigma_k)|^p \le C_1 + \frac{c_1}{2^{p-1}}\,\mathbb{E}\|\xi\|^p + \left(K+\frac{2^{p+5}c_2K^2}{c_1(1-2^p\kappa^p)}\right)T + \frac{c_1(1-2^p\kappa^p)}{2^pc_2} + \left(K+\frac{2^{p+5}c_2^2K^2}{c_1(1-2^p\kappa^p)}\right)\mathbb{E}\int_0^T\sup_{-\tau\le t\le s}|x(t\wedge\sigma_k)|^p\,\mathrm{d}s.$$

That is,

$$\mathbb{E}\sup_{-\tau\le t\le T}|x(t\wedge\sigma_k)|^p \le C_2 + C_3\int_0^T\mathbb{E}\sup_{-\tau\le t\le s}|x(t\wedge\sigma_k)|^p\,\mathrm{d}s,$$

where

$$C_2 = 2\mathbb{E}\|\xi\|^p + \frac{2^pC_1}{c_1} + \frac{1-2^p\kappa^p}{c_2} + \left(\frac{2^pK}{c_1}+\frac{2^{2p+5}c_2K^2}{c_1^2(1-2^p\kappa^p)}\right)T, \qquad C_3 = \frac{2^pK}{c_1} + \frac{2^{2p+5}K^2c_2^2}{c_1^2(1-2^p\kappa^p)}.$$

By the Gronwall inequality ([18], Lemma 2), we therefore obtain

(15) $$\mathbb{E}\sup_{-\tau\le t\le T}|x(t\wedge\sigma_k)|^p \le C_2e^{C_3T}.$$

So

$$k^pP(\sigma_k\le t) \le \mathbb{E}|x(t\wedge\sigma_k)|^p \le \mathbb{E}\sup_{-\tau\le s\le T}|x(s\wedge\sigma_k)|^p \le C_2e^{C_3T}.$$

Letting $k\to\infty$ gives $\lim_{k\to\infty}P(\sigma_k\le t)=0$, and hence $P(\sigma_\infty\le t)=0$ and $P(\sigma_\infty>t)=1$. Since $t<T$ and $T$ is arbitrary, we must have $\sigma_\infty=\infty$ a.s., and therefore $\sigma=\infty$ a.s. By (2) and (15), we have

(16) $$\mathbb{E}\sup_{-\tau\le t\le T}V(t\wedge\sigma_k, x(t\wedge\sigma_k)) \le c_2C_2e^{C_3T}.$$

Letting k in (16) yields

(17) $$\mathbb{E}\sup_{-\tau\le t\le T}V(t,x(t)) \le c_2C_2e^{C_3T} = C_4e^{C_3T}.$$

The proof is therefore completed.□
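As a side illustration (not part of the paper), the Gronwall step from the integral inequality to (15) can be mimicked on a grid: iterating $u(t)\le C_2 + C_3\int_0^t u(s)\,\mathrm{d}s$ with equality (the worst case) produces a discrete solution that stays below $C_2e^{C_3t}$. The constants below are arbitrary illustrative values, unrelated to those of the theorem.

```python
import math

# Discrete illustration of the Gronwall bound: take the integral
# inequality u(t) <= C2 + C3 * int_0^t u(s) ds with equality on a grid
# (explicit left-endpoint Riemann sums) and compare with C2 * exp(C3 * T).
C2, C3, T, n = 1.5, 0.8, 2.0, 2000
dt = T / n
u = [0.0] * (n + 1)
integral = 0.0
for k in range(n + 1):
    u[k] = C2 + C3 * integral     # worst case: inequality taken with equality
    integral += u[k] * dt
print(u[-1], C2 * math.exp(C3 * T))
```

The grid solution equals $C_2(1+C_3\,\mathrm{d}t)^n$, which increases toward $C_2e^{C_3T}$ as the grid is refined but never exceeds it, mirroring the continuous bound.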

Remark 2.1

From (15), we see that the $p$th moment of the solution grows at most exponentially with rate $C_3$. That is,

$$\limsup_{t\to\infty}\frac{1}{t}\log\mathbb{E}|x(t)|^p \le C_3.$$

Here, $C_3 = \frac{2^pK}{c_1} + \frac{2^{2p+5}K^2c_2^2}{c_1^2(1-2^p\kappa^p)}$.

The next theorem shows that the $p$th moment exponential estimate implies an almost sure asymptotic estimate, and it gives an upper bound for the sample Lyapunov exponent.

Theorem 2.2

Under Assumptions 2.1-2.4, for any given initial data $x_0=\xi\in C([-\tau,0];\mathbb{R}^n)$, we have

(18) $$\limsup_{t\to\infty}\frac{1}{t}\log|x(t)| \le \frac{2K}{c_1} + \frac{2^8K^2c_2^2}{c_1^2(1-4\kappa^2)} \quad\text{a.s.}$$

That is, the sample Lyapunov exponent of the solution is not greater than $\frac{2K}{c_1}+\frac{2^8K^2c_2^2}{c_1^2(1-4\kappa^2)}$.

Proof

For each $n=1,2,\dots$, it follows from (15) (taking $p=2$ and $T=n$) that

$$\mathbb{E}\sup_{n-1\le t\le n}|x(t)|^2 \le \beta_ne^{\gamma n},$$

where $\beta_n = 2\mathbb{E}\|\xi\|^2 + \frac{4C_1}{c_1} + \frac{1-4\kappa^2}{c_2} + \left(\frac{4K}{c_1}+\frac{2^9c_2K^2}{c_1^2(1-4\kappa^2)}\right)n$ and $\gamma = \frac{4K}{c_1}+\frac{2^9K^2c_2^2}{c_1^2(1-4\kappa^2)}$. Hence, for any $\varepsilon>0$, the Chebyshev inequality gives

$$P\Big\{\omega: \sup_{n-1\le t\le n}|x(t)|^2 > e^{(\gamma+\varepsilon)n}\Big\} \le \beta_ne^{-\varepsilon n}.$$

Since $\beta_n$ grows only linearly in $n$, we have $\sum_n \beta_ne^{-\varepsilon n} < \infty$, so by the Borel-Cantelli lemma there exists an integer $n_0$ such that

$$\sup_{n-1\le t\le n}|x(t)|^2 \le e^{(\gamma+\varepsilon)n} \quad\text{a.s. for } n\ge n_0.$$

Thus, for almost all $\omega\in\Omega$, if $n-1\le t\le n$ and $n\ge n_0$, then

(19) $$\frac{1}{t}\log|x(t)| = \frac{1}{2t}\log|x(t)|^2 \le \frac{(\gamma+\varepsilon)n}{2(n-1)} \quad\text{a.s.}$$

Taking the limsup in (19) leads to the almost sure exponential estimate

$$\limsup_{t\to\infty}\frac{1}{t}\log|x(t)| \le \frac{\gamma+\varepsilon}{2} \quad\text{a.s.}$$

The required assertion (18) follows because ε > 0 is arbitrary.□

3 Neutral SDDEs with variable delays

We now turn to neutral stochastic differential delay equations (SDDEs) with Markovian switching and Lévy jumps in which the delays are time-dependent. That is, we consider the following equation

(20) $$\mathrm{d}[x(t)-\bar G(x(t-\delta(t)), r(t))] = \bar f(x(t-\delta(t)), x(t), t, r(t))\,\mathrm{d}t + \bar g(x(t-\delta(t)), x(t), t, r(t))\,\mathrm{d}W(t) + \int_Z \bar h(x(t-\delta(t)), x(t), t, r(t), v)\,N(\mathrm{d}t,\mathrm{d}v),$$

on $t\ge 0$ with the initial data $x_0=\xi\in C([-\tau,0];\mathbb{R}^n)$, where $\delta:\mathbb{R}_+\to[0,\tau]$, $\bar G:\mathbb{R}^n\times S\to\mathbb{R}^n$, $\bar f:\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{R}_+\times S\to\mathbb{R}^n$, $\bar g:\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{R}_+\times S\to\mathbb{R}^{n\times m}$, and $\bar h:\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{R}_+\times S\times Z\to\mathbb{R}^n$ are all Borel measurable. Khasminskii-type theorems for SDDEs with constant delay were established in [7,9], but those results cannot be applied to SDDEs in which the delay is time-varying. If we define $f: C([-\tau,0];\mathbb{R}^n)\times\mathbb{R}^n\times\mathbb{R}_+\times S\to\mathbb{R}^n$, $g: C([-\tau,0];\mathbb{R}^n)\times\mathbb{R}^n\times\mathbb{R}_+\times S\to\mathbb{R}^{n\times m}$, $h: C([-\tau,0];\mathbb{R}^n)\times\mathbb{R}^n\times\mathbb{R}_+\times S\times Z\to\mathbb{R}^n$, and $G: C([-\tau,0];\mathbb{R}^n)\times S\to\mathbb{R}^n$ by

$$\begin{aligned} G(\varphi, r(t)) &= \bar G(\varphi(-\delta(t)), r(t)), & f(\varphi,\varphi(0),t,r(t)) &= \bar f(\varphi(-\delta(t)),\varphi(0),t,r(t)),\\ g(\varphi,\varphi(0),t,r(t)) &= \bar g(\varphi(-\delta(t)),\varphi(0),t,r(t)), & h(\varphi,\varphi(0),t,r(t),v) &= \bar h(\varphi(-\delta(t)),\varphi(0),t,r(t),v), \end{aligned}$$

then we can apply the theory established in the previous section to this neutral SDDE with Markovian switching and jumps. Let us proceed in this way and see what we can obtain. First, we can transfer Assumption 2.1 into the following one.

Assumption 3.1

For each integer $m\ge 1$, there is a positive constant $k_m$ such that

(21) $$|\bar f(y,x,t,i)-\bar f(\bar y,\bar x,t,i)|^2 \vee |\bar g(y,x,t,i)-\bar g(\bar y,\bar x,t,i)|^2 \vee \int_Z|\bar h(y,x,t,i,v)-\bar h(\bar y,\bar x,t,i,v)|^2\,\pi(\mathrm{d}v) \le k_m\big(|y-\bar y|^2 + |x-\bar x|^2\big)$$

for those $y,x,\bar y,\bar x\in\mathbb{R}^n$ with $|y|\vee|x|\vee|\bar y|\vee|\bar x|\le m$, any $i\in S$, and any $t\in\mathbb{R}_+$.
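Incidentally, the reduction of (20) to the functional form of Section 2 amounts to evaluating the segment $x_t$ at the lag point $-\delta(t)$. A toy Python sketch of this lifting (all function names and values here are illustrative, not from the paper):

```python
import math

def make_functional(f_bar, delta):
    """Lift a delay coefficient f_bar(y, x, t, r) as in (20) into a
    functional coefficient f(phi, x, t, r) via
        f(phi, x, t, r) = f_bar(phi(-delta(t)), x, t, r),
    where phi plays the role of the segment x_t.  Illustrative sketch."""
    def f(phi, x, t, r):
        return f_bar(phi(-delta(t)), x, t, r)
    return f

# Toy check with the path x(s) = s: the segment at time t0 is
# phi(theta) = t0 + theta, so phi(-delta(t0)) = t0 - delta(t0).
delta = lambda t: 0.25 * (1.0 + math.cos(t) ** 2)   # variable delay in [0.25, 0.5]
f_bar = lambda y, x, t, r: y + x                    # toy coefficient
f = make_functional(f_bar, delta)

t0 = 2.0
phi = lambda theta: t0 + theta
val = f(phi, t0, t0, 0)   # equals (t0 - delta(t0)) + t0
```

The lifted coefficient reproduces the delayed evaluation exactly, which is all the definitions above require.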

The following assumption corresponds to Assumption 2.2.

Assumption 3.2

(Contraction condition) For any $p\ge 1$, there exists a constant $\kappa\in(0,\tfrac12)$ such that for all $\varphi\in C([-\tau,0];\mathbb{R}^n)$ and $i\in S$,

$$\mathbb{E}|\bar G(\varphi(-\delta(t)), i)|^p \le \kappa^p\big[\mathbb{E}|\varphi(0)|^p + \mathbb{E}|\varphi(-\delta(t))|^p\big].$$

Comparing with Assumption 2.3, we can obtain the following. For $V\in C^{1,2}([-\tau,\infty)\times\mathbb{R}^n;\mathbb{R}_+)$, the operator $LV:\mathbb{R}_+\times C([-\tau,0];\mathbb{R}^n)\to\mathbb{R}$ takes the form

$$LV(t,\varphi) = \mathcal{V}(\varphi(-\delta(t)),\varphi(0),t,r(t)),$$

where $\mathcal{V}:\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{R}_+\times S\to\mathbb{R}$ is defined by

$$\begin{aligned} \mathcal{V}(y,x,t,i) ={}& V_t(t, x-\bar G(y,i)) + V_x(t, x-\bar G(y,i))\,\bar f(y,x,t,i)\\ &+ \tfrac12\operatorname{trace}\big[\bar g^T(y,x,t,i)\,V_{xx}(t,x-\bar G(y,i))\,\bar g(y,x,t,i)\big]\\ &+ \int_Z\big[V(t, x-\bar G(y,i)+\bar h(y,x,t,i,v)) - V(t,x-\bar G(y,i))\big]\,\pi(\mathrm{d}v). \end{aligned}$$

And clearly, (3) should become

(22) $$LV(t,\varphi) \le K\big[1 + V(t,\varphi(0)) + V(t-\delta(t),\varphi(-\delta(t)))\big] - U(t,\varphi(0)) + \int_{-\tau}^0 U(t+\theta,\varphi(\theta))\,\mathrm{d}\mu(\theta),$$

with

$$\int_{-\tau}^0 U(t+\theta,\varphi(\theta))\,\mathrm{d}\mu(\theta) = U(t-\delta(t),\varphi(-\delta(t))).$$

This implies that $\mu(\cdot)$ should be a point probability measure at $-\delta(t)$.