Published by De Gruyter Open Access, April 13, 2022. Open Access under the CC BY 4.0 license.

Strong convergence of a self-adaptive inertial Tseng's extragradient method for pseudomonotone variational inequalities and fixed point problems

  • Victor Amarachi Uzor, Timilehin Opeyemi Alakoya and Oluwatosin Temitope Mewomo
From the journal Open Mathematics

Abstract

In this paper, we study the problem of finding a common solution of the pseudomonotone variational inequality problem and fixed point problem for demicontractive mappings. We introduce a new inertial iterative scheme that combines Tseng’s extragradient method with the viscosity method together with the adaptive step size technique for finding a common solution of the investigated problem. We prove a strong convergence result for our proposed algorithm under mild conditions and without prior knowledge of the Lipschitz constant of the pseudomonotone operator in Hilbert spaces. Finally, we present some numerical experiments to show the efficiency of our method in comparison with some of the existing methods in the literature.

MSC 2010: 65K15; 47J25; 65J15; 90C33

1 Introduction

Let H be a real Hilbert space with inner product ⟨·,·⟩ and induced norm ‖·‖. In this paper, we consider the variational inequality problem (VIP) of finding a point p ∈ C such that

(1) ⟨Ap, x − p⟩ ≥ 0, ∀x ∈ C,

where C is a nonempty closed convex subset of H, and A : H → H is a nonlinear operator. We denote by VI(C, A) the solution set of the VIP (1).

Variational inequality theory, first introduced independently by Fichera [1] and Stampacchia [2], is a vital tool in mathematical analysis with a vast range of applications across several fields of study, such as optimisation theory, engineering, physics, operator theory, and economics (see [3,4,5,6] and references therein). Over the years, several iterative methods have been formulated and adopted for solving VIP (1) (see [7,8,9,10,11] and references therein). There are two common approaches to solving the VIP, namely, regularised methods and projection methods. These approaches usually require that the nonlinear operator A in VIP (1) satisfy a certain monotonicity condition. In this study, we adopt the projection method and consider the case in which the associated nonlinear operator is pseudomonotone (see the definition below), a larger class than that of monotone mappings.

Now, we review some nonlinear operators in nonlinear analysis.

Definition 1.1

A mapping A : H → H is said to be

  1. γ-strongly monotone on H if there exists a constant γ > 0 such that

    (2) ⟨Ax − Ay, x − y⟩ ≥ γ‖x − y‖², ∀x, y ∈ H.

  2. γ-inverse strongly monotone on H if there exists a constant γ > 0 such that

    ⟨Ax − Ay, x − y⟩ ≥ γ‖Ax − Ay‖², ∀x, y ∈ H.

  3. Monotone on H, if

    (3) ⟨Ax − Ay, x − y⟩ ≥ 0, ∀x, y ∈ H.

  4. γ-strongly pseudomonotone on H, if there exists a constant γ > 0 such that

    (4) ⟨Ay, x − y⟩ ≥ 0 ⟹ ⟨Ax, x − y⟩ ≥ γ‖x − y‖², ∀x, y ∈ H.

  5. Pseudomonotone on H, if

    (5) ⟨Ay, x − y⟩ ≥ 0 ⟹ ⟨Ax, x − y⟩ ≥ 0, ∀x, y ∈ H.

  6. Lipschitz continuous on H, if there exists a constant L > 0 such that

    (6) ‖Ax − Ay‖ ≤ L‖x − y‖, ∀x, y ∈ H.

    If L ∈ [0, 1), then A is said to be a contraction mapping.

  7. Sequentially weakly continuous on H, if for each sequence {x_n},

    x_n ⇀ x implies A x_n ⇀ A x, x ∈ H.

From the above definitions, we observe that (1) ⟹ (3) ⟹ (5) and (1) ⟹ (4) ⟹ (5). However, the converses are not generally true. Moreover, if A is γ-strongly monotone and L-Lipschitz continuous, then A is γ/L²-inverse strongly monotone (see [12,13]).
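As a concrete illustration of the gap between classes (3) and (5), the following sketch (my own toy example, not from the paper) checks numerically that A(x) = 1/(1 + |x|) is pseudomonotone on the real line but not monotone:

```python
import numpy as np

# Hypothetical 1-D illustration: A(x) = 1/(1 + |x|) is pseudomonotone on R
# but not monotone, since A > 0 everywhere while A decreases in |x|.
def A(x):
    return 1.0 / (1.0 + abs(x))

# Monotonicity (3) fails: (A(x) - A(y)) * (x - y) < 0 for x = 1, y = 0.
assert (A(1.0) - A(0.0)) * (1.0 - 0.0) < 0

# Pseudomonotonicity (5) holds: whenever A(y)(x - y) >= 0 (i.e. x >= y here,
# because A > 0), we also have A(x)(x - y) >= 0.
rng = np.random.default_rng(0)
for x, y in rng.uniform(-5, 5, size=(1000, 2)):
    if A(y) * (x - y) >= 0:
        assert A(x) * (x - y) >= 0
```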

The simplest known projection method for solving the VIP is the gradient method (GM), which involves a single projection onto the feasible set C per iteration. However, the method converges (weakly) only under the rather strict condition that the operator is strongly monotone or inverse strongly monotone, and it may fail to converge when A is merely monotone. The classical gradient projection algorithm proposed by Sibony [14] is given as follows:

(7) x_{n+1} = P_C(x_n − λAx_n),

where A is strongly monotone and L-Lipschitz continuous, with step size λ ∈ (0, 2/L).
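The scheme (7) can be sketched as follows; the operator A(x) = Mx + q, the ball constraint, and the step size are illustrative choices of mine satisfying the strong monotonicity and Lipschitz assumptions:

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Metric projection onto the closed Euclidean ball of given radius."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

# Example operator A(x) = M x + q (mine, for illustration): M symmetric
# positive definite gives gamma = lambda_min(M) and L = lambda_max(M).
M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, 2.0])
A = lambda x: M @ x + q

lam = 0.1                  # step size, small enough for convergence
x = np.zeros(2)
for _ in range(500):
    x = project_ball(x - lam * A(x))   # scheme (7): x_{n+1} = P_C(x_n - lam*A(x_n))

# x now approximates the solution of the VIP over the unit ball:
# it is (numerically) a fixed point of the projected gradient step.
assert np.linalg.norm(x - project_ball(x - lam * A(x))) < 1e-8
```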

Korpelevich [15] and Antipin [16] proposed the extragradient method (EGM) for solving VIP (1), thereby relaxing the conditions imposed in (7). The algorithm originally proposed by Korpelevich was employed for solving saddle point problems, but it was later extended to VIPs in both Euclidean spaces and infinite-dimensional Hilbert spaces. The EGM is given as follows:

(8) x_0 ∈ C, y_n = P_C(x_n − λAx_n), x_{n+1} = P_C(x_n − λAy_n),

where λ ∈ (0, 1/L), A is monotone and L-Lipschitz continuous, and P_C denotes the metric projection from H onto C. If the set VI(C, A) is nonempty, then the algorithm converges only weakly to an element of VI(C, A).
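The following sketch contrasts (8) with (7) on the classic rotation operator, a monotone but not strongly monotone example of my own choosing, for which the plain gradient step circles while the extragradient scheme converges:

```python
import numpy as np

# Classic monotone (but not strongly monotone) example: a 90-degree rotation.
# <Ax - Ay, x - y> = 0, so A is monotone with L = 1.
R = np.array([[0.0, -1.0], [1.0, 0.0]])
A = lambda x: R @ x
P_C = lambda x: x            # here C = H = R^2, so the projection is the identity

lam = 0.5                    # lam in (0, 1/L) with L = 1
x = np.array([1.0, 1.0])
for _ in range(200):
    y = P_C(x - lam * A(x))          # prediction step of (8)
    x = P_C(x - lam * A(y))          # correction step of (8)

# VI(C, A) = {0} here; the EGM drives the iterate to it
assert np.linalg.norm(x) < 1e-4
```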

Over the years, EGM has been of interest to several researchers. Also, many results and variants have been developed from this method, using the assumptions of Lipschitz continuity, monotonicity, and pseudomonotonicity, see [17,18, 19,20] and references therein.

Because the EGM requires two projections onto the closed convex set C per iteration, it can be computationally expensive. Censor et al. [8] therefore proposed the subgradient extragradient method (SEGM), in which the second projection onto C is replaced by a projection onto a half-space, making the computation easier and the convergence faster. The SEGM is presented as follows:

(9) y_n = P_C(x_n − λAx_n), T_n = {w ∈ H : ⟨x_n − λAx_n − y_n, w − y_n⟩ ≤ 0}, x_{n+1} = P_{T_n}(x_n − λAy_n), n ≥ 0,

where λ ∈ (0, 2/L). The authors obtained only a weak convergence result for the proposed method. However, they later introduced a hybrid SEGM in [7] and obtained a strong convergence result. Likewise, Tseng [21], in a bid to improve on the EGM, proposed Tseng’s extragradient method (TEGM), which requires only one projection per iteration:

(10) y_n = P_C(x_n − λAx_n), x_{n+1} = y_n + λ(Ax_n − Ay_n), n ≥ 0,

where A is monotone and L-Lipschitz continuous, and λ ∈ (0, 1/L). The TEGM (10) converges weakly to a solution of the VIP provided that VI(C, A) is nonempty. The TEGM is also known as the forward-backward method. Recently, some authors have carried out interesting work on the TEGM (see [22,23] and references therein).
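A minimal sketch of scheme (10) follows; the monotone Lipschitz operator and the ball constraint are toy choices of mine. Note that only one projection is computed per iteration:

```python
import numpy as np

def project_ball(x, radius=1.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

# Monotone Lipschitz test operator (mine): rotation plus a constant shift, L = 1.
R = np.array([[0.0, -1.0], [1.0, 0.0]])
q = np.array([0.5, 0.0])
A = lambda x: R @ x + q

lam = 0.5                        # lam in (0, 1/L)
x = np.array([1.0, -1.0])
for _ in range(2000):
    y = project_ball(x - lam * A(x))     # the single projection per iteration
    x = y + lam * (A(x) - A(y))          # scheme (10): no second projection

# stationarity check: x is (numerically) a fixed point of the projected step
assert np.linalg.norm(x - project_ball(x - lam * A(x))) < 1e-3
```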

In this work, we consider the inertial algorithm, a two-step iteration process and a technique for accelerating the convergence of iterative schemes. The inertial extrapolation technique was derived by Polyak [24] from a dynamical system known as the heavy ball with friction. Owing to its efficiency, the inertial technique has attracted considerable interest from researchers in this field, who have applied it to a variety of optimisation problems; see [25,26,27,28] and references therein.

Very recently, Tan and Qin [29] proposed the following Tseng’s extragradient algorithm for solving pseudomonotone VIP:

(11) s_n = x_n + δ_n(x_n − x_{n−1}), y_n = P_C(s_n − ψ_nAs_n), z_n = y_n − ψ_n(Ay_n − As_n), x_{n+1} = α_nf(z_n) + (1 − α_n)z_n,

with

δ_n = min{ε_n/‖x_n − x_{n−1}‖, δ} if x_n ≠ x_{n−1}, and δ_n = δ otherwise,

ψ_{n+1} = min{ϕ‖s_n − y_n‖/‖As_n − Ay_n‖, ψ_n} if As_n − Ay_n ≠ 0, and ψ_{n+1} = ψ_n otherwise,

where f is a contraction and A is a pseudomonotone, Lipschitz continuous, and sequentially weakly continuous mapping. The authors proved a strong convergence result for the proposed method under mild conditions on the control parameters.
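The self-adaptive step-size rule in the scheme above can be isolated as a small function; the test operator and all parameter values below are illustrative assumptions of mine, not taken from [29]:

```python
import numpy as np

def update_step(psi_n, phi, s_n, y_n, As_n, Ay_n):
    """One update of the self-adaptive step size: nonincreasing and bounded
    below by min(psi_1, phi/L), with no prior knowledge of L required."""
    diff = np.linalg.norm(As_n - Ay_n)
    if diff == 0.0:
        return psi_n
    return min(phi * np.linalg.norm(s_n - y_n) / diff, psi_n)

# quick sanity check with the 3-Lipschitz operator A(x) = 3x (my example)
A = lambda x: 3.0 * x
L, phi, psi = 3.0, 0.9, 1.0
rng = np.random.default_rng(1)
for _ in range(100):
    s, y = rng.standard_normal(2), rng.standard_normal(2)
    psi = update_step(psi, phi, s, y, A(s), A(y))

# the step size never falls below min(psi_1, phi/L)
assert psi >= min(1.0, phi / L) - 1e-12
```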

Another area of interest in this study is fixed point theory. Let U : H → H be a nonlinear map. The fixed point problem (FPP) is to find a point p ∈ H (called a fixed point of U) such that

(12) U p = p .

In this work, we denote the set of fixed points of U by F(U). Our interest in this study is to find a common element of the fixed point set F(U) and the solution set VI(C, A) of the variational inequality; that is, to find a point x ∈ H such that

(13) x ∈ VI(C, A) ∩ F(U).

Many algorithms have been proposed over the years, and in recent times, for solving the common solution problem (13) (see [30,31,32,33,34,35,36,37,38,39,40] and references therein). Common solution problems of this type have drawn the attention of researchers because of their potential application to mathematical models whose constraints can be expressed as an FPP and a VIP. Such models arise in areas like signal processing, image recovery, and network resource allocation. An instance of this is the network bandwidth allocation problem for two services in heterogeneous wireless access networks, in which the bandwidths of the services are mathematically related (see [37,41,42] and references therein).

Recently, Cai et al. [22] proposed the following inertial Tseng’s extragradient algorithm for approximating the common solution of pseudomonotone VIP and FPP for nonexpansive mappings in real Hilbert spaces:

(14) x_0, x_1 ∈ H, w_n = x_n + θ_n(x_n − x_{n−1}), y_n = P_C(w_n − ψAw_n), z_n = y_n − ψ(Ay_n − Aw_n), x_{n+1} = α_nf(x_n) + (1 − α_n)[β_nTz_n + (1 − β_n)z_n],

where f is a contraction, T is a nonexpansive mapping, A is pseudomonotone, L-Lipschitz continuous, and sequentially weakly continuous, and ψ ∈ (0, 1/L). They proved a strong convergence result for the proposed algorithm under some suitable conditions.

One of the major drawbacks of Algorithm (14) is the fact that the step size ψ of the algorithm depends on the Lipschitz constant of the cost operator. In many cases, this Lipschitz constant is unknown or even difficult to estimate. This makes it difficult to implement algorithms of this nature.

Very recently, Thong and Hieu [23] proposed an iterative scheme for finding a common element of the solution set of monotone variational inequality and set of fixed points of demicontractive mappings as follows:

(15) y_n = P_C(x_n − ψ_nAx_n), z_n = y_n − ψ_n(Ay_n − Ax_n), x_{n+1} = α_nf(x_n) + (1 − α_n)[β_nUz_n + (1 − β_n)z_n],

ψ_{n+1} = min{μ‖x_n − y_n‖/‖Ax_n − Ay_n‖, ψ_n} if Ax_n − Ay_n ≠ 0, and ψ_{n+1} = ψ_n otherwise,

where A is monotone and L -Lipschitz continuous, U is a demicontractive mapping such that I U is demiclosed at zero, and f is a contraction. The authors proved a strong convergence result under suitable conditions for the proposed method.

Motivated by the above results and ongoing research in this direction, our aim in this paper is to introduce an effective iterative technique that combines the inertial technique and the TEGM with the viscosity method for finding a common solution of the FPP for demicontractive mappings and the pseudomonotone VIP with a Lipschitz continuous, sequentially weakly continuous operator in Hilbert spaces. In line with this goal, we construct an algorithm with the following features:

  1. Our algorithm approximates the solution of a more general class of VIP and FPP.

  2. The proposed method requires only one projection onto the feasible set per iteration, which keeps the computational cost low.

  3. Moreover, our method is computationally efficient: it employs a self-adaptive step size technique, which makes the algorithm independent of the Lipschitz constant of the cost operator.

  4. We employ the combination of the inertial technique with the viscosity method, two efficient techniques for accelerating the convergence of iterative schemes.

  5. We prove a strong convergence theorem for the proposed algorithm without following the conventional “two-cases” approach often employed by researchers (e.g. see [22,23,29,43,44,45]). This makes our results more concise and precise.

Furthermore, by several numerical experiments, we demonstrate the efficiency of our proposed method over many other existing methods in related literature.

The remainder of this paper is organised as follows. In Section 2, useful definitions and lemmas employed in the study are presented. In Section 3, we present the proposed algorithm and highlight some of its notable features. Section 4 presents the convergence analysis of the proposed method. In Section 5, we carry out some numerical experiments to illustrate the computational advantage of our method over some of the existing methods in the literature. Finally, in Section 6 we give a concluding remark.

2 Preliminaries

Let H be a real Hilbert space and C be a nonempty closed convex subset of H. We denote by x_n ⇀ x and x_n → x the weak and strong convergence, respectively, of a sequence {x_n}_{n=1}^∞ to x as n → ∞.

The metric projection [46,47], P_C : H → C, is defined, for each x ∈ H, as the unique element P_Cx ∈ C such that

‖x − P_Cx‖ = inf{‖x − z‖ : z ∈ C}.

It is a known fact that P_C is nonexpansive, i.e. ‖P_Cx − P_Cy‖ ≤ ‖x − y‖, ∀x, y ∈ H. Also, the mapping P_C is firmly nonexpansive, i.e.

‖P_Cx − P_Cy‖² ≤ ⟨P_Cx − P_Cy, x − y⟩,

for all x , y H . Some results on the metric projection map are given below.

Lemma 2.1

[48] Let C be a nonempty closed convex subset of a real Hilbert space H, and let x ∈ H and z ∈ C. Then

z = P_Cx ⟺ ⟨x − z, z − y⟩ ≥ 0, for all y ∈ C.

Lemma 2.2

[48,49] Let C be a nonempty, closed, and convex subset of a real Hilbert space H, and let x ∈ H. Then:

  1. ‖P_Cx − P_Cy‖² ≤ ⟨x − y, P_Cx − P_Cy⟩, ∀y ∈ C.

  2. ‖x − P_Cx‖² + ‖y − P_Cx‖² ≤ ‖x − y‖², ∀y ∈ C.

  3. ‖(I − P_C)x − (I − P_C)y‖² ≤ ⟨x − y, (I − P_C)x − (I − P_C)y⟩, ∀y ∈ C.
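The projection inequalities above can be checked numerically; the closed Euclidean ball is a convenient set with a closed-form projection (the test dimensions and tolerances are my own choices):

```python
import numpy as np

def project_ball(x, radius=1.0):
    """P_C for C the closed Euclidean ball of the given radius."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

# numerically verify firm nonexpansiveness:
# ||P_C x - P_C y||^2 <= <P_C x - P_C y, x - y>
rng = np.random.default_rng(2)
for _ in range(1000):
    x, y = rng.uniform(-3, 3, size=(2, 4))
    px, py = project_ball(x), project_ball(y)
    lhs = np.linalg.norm(px - py) ** 2
    rhs = np.dot(px - py, x - y)
    assert lhs <= rhs + 1e-12   # nonexpansiveness then follows via Cauchy-Schwarz
```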

Definition 2.3

A mapping T : H → H is said to be

  1. Nonexpansive on H, if

    ‖Tx − Ty‖ ≤ ‖x − y‖, ∀x, y ∈ H.

  2. Quasi-nonexpansive on H, if F(T) ≠ ∅ and

    ‖Tx − p‖ ≤ ‖x − p‖, ∀p ∈ F(T), ∀x ∈ H.

  3. λ-strictly pseudocontractive on H with 0 ≤ λ < 1, if

    ‖Tx − Ty‖² ≤ ‖x − y‖² + λ‖(I − T)x − (I − T)y‖², ∀x, y ∈ H.

  4. β-demicontractive with 0 ≤ β < 1, if

    ‖Tx − p‖² ≤ ‖x − p‖² + β‖(I − T)x‖², ∀p ∈ F(T), ∀x ∈ H,

    or equivalently

    ⟨Tx − x, x − p⟩ ≤ ((β − 1)/2)‖x − Tx‖², ∀p ∈ F(T), ∀x ∈ H,

    or equivalently

    ⟨Tx − p, x − p⟩ ≤ ‖x − p‖² + ((β − 1)/2)‖x − Tx‖², ∀p ∈ F(T), ∀x ∈ H.

Remark 2.4

It is known that every strictly pseudocontractive mapping with a nonempty fixed point set is demicontractive. The class of demicontractive mappings includes all the other classes of mappings defined above (see [23]).

Next, we give some examples of the class of demicontractive mappings, as shown in [23,50].

Example 2.5

  1. Let H be the real line and C = [−1, 1]. Define T on C by

    Tx = (2/3)x sin(1/x) if x ≠ 0, and Tx = 0 if x = 0.

    Then T is demicontractive.

  2. Consider a mapping T : [−2, 1] → [−2, 1] defined by

    T x = x 2 x .

Then T is a demicontractive map that is neither quasi-nonexpansive nor strictly pseudocontractive.

We have the following lemmas which will be employed in our convergence analysis.

Lemma 2.6

[25] For all x, y ∈ H and δ ∈ ℝ, the following hold:

  1. ‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩;

  2. ‖x + y‖² = ‖x‖² + 2⟨x, y⟩ + ‖y‖²;

  3. ‖δx + (1 − δ)y‖² = δ‖x‖² + (1 − δ)‖y‖² − δ(1 − δ)‖x − y‖².
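Identity (3) of Lemma 2.6 can be verified numerically; the dimension and random draws below are arbitrary choices of mine:

```python
import numpy as np

# numerical check of Lemma 2.6(3):
# ||d*x + (1-d)*y||^2 = d*||x||^2 + (1-d)*||y||^2 - d*(1-d)*||x - y||^2
rng = np.random.default_rng(3)
for _ in range(100):
    x, y = rng.standard_normal((2, 5))
    d = rng.uniform()
    lhs = np.linalg.norm(d * x + (1 - d) * y) ** 2
    rhs = (d * np.linalg.norm(x) ** 2 + (1 - d) * np.linalg.norm(y) ** 2
           - d * (1 - d) * np.linalg.norm(x - y) ** 2)
    assert abs(lhs - rhs) < 1e-9
```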

Lemma 2.7

[51] Let {a_n} be a sequence of nonnegative real numbers, {α_n} a sequence in (0, 1) with ∑_{n=1}^∞ α_n = ∞, and {b_n} a sequence of real numbers. Assume that

a_{n+1} ≤ (1 − α_n)a_n + α_nb_n, for all n ≥ 1.

If lim sup_{k→∞} b_{n_k} ≤ 0 for every subsequence {a_{n_k}} of {a_n} satisfying lim inf_{k→∞}(a_{n_k+1} − a_{n_k}) ≥ 0, then lim_{n→∞} a_n = 0.

Lemma 2.8

[52] Let T : H → H be a nonlinear operator with F(T) ≠ ∅. The operator I − T is said to be demiclosed at zero if, for any sequence {x_n} in H, the following implication holds: x_n ⇀ x and (I − T)x_n → 0 imply x ∈ F(T).

Lemma 2.9

[53] Assume that D is a strongly positive bounded linear operator on a Hilbert space H with coefficient γ̄ > 0 and 0 < ρ ≤ ‖D‖⁻¹. Then ‖I − ρD‖ ≤ 1 − ργ̄.

Lemma 2.10

[54] Let U : H → H be β-demicontractive with F(U) ≠ ∅, and set U_λ = (1 − λ)I + λU, λ ∈ (0, 1 − β). Then,

  1. F(U) = F(U_λ).

  2. ‖U_λx − z‖² ≤ ‖x − z‖² − (1/λ)(1 − β − λ)‖(I − U_λ)x‖², ∀x ∈ H, z ∈ F(U).

  3. F ( U ) is a closed convex subset of H.

Lemma 2.11

[55] Consider VIP (1) with C a nonempty, closed, convex subset of a real Hilbert space H and A : C → H pseudomonotone and continuous. Then p ∈ C is a solution of VIP (1) if and only if

⟨Ax, x − p⟩ ≥ 0, ∀x ∈ C.

3 Proposed algorithm

In this section, we propose an inertial viscosity-type Tseng’s extragradient algorithm with self-adaptive step size and highlight some of its important features. We establish the convergence of the algorithm under the following conditions:

Condition A

  1. The feasible set C is closed, convex, and nonempty.

  2. The solution set, denoted by Ω = VI(C, A) ∩ F(U), is nonempty.

  3. The mapping A is pseudomonotone, L -Lipschitz continuous on H , and sequentially weakly continuous on C .

  4. The mapping U : H → H is a τ-demicontractive map such that I − U is demiclosed at zero.

  5. D : H → H is a strongly positive bounded linear operator with coefficient γ̄ > 0.

  6. f : H → H is a contraction with coefficient ρ ∈ (0, 1) such that 0 < γ < γ̄/ρ.

Condition B

  1. {α_n} ⊂ (0, 1) such that lim_{n→∞} α_n = 0 and ∑_{n=1}^∞ α_n = ∞.

  2. The positive sequence {ε_n} satisfies lim_{n→∞} ε_n/α_n = 0, and {β_n} ⊂ (a, 1 − τ) for some a > 0.

Now, the algorithm is presented as follows:

Algorithm 3.1

Inertial TEGM with self-adaptive stepsize

  1. Given δ > 0, ψ_1 > 0, ϕ ∈ (0, 1), select initial points x_0, x_1 ∈ H and set n = 1.

  2. Given the (n − 1)th and nth iterates, choose δ_n such that 0 ≤ δ_n ≤ δ̂_n, ∀n ∈ ℕ, with δ̂_n defined by

    (16) δ̂_n = min{ε_n/‖x_n − x_{n−1}‖, δ} if x_n ≠ x_{n−1}, and δ̂_n = δ otherwise.

  3. Compute

    r_n = x_n + δ_n(x_n − x_{n−1}).

  4. Compute

    y_n = P_C(r_n − ψ_nAr_n).

    If y_n = r_n, set z_n = r_n and go to Step 6; otherwise go to Step 5.

  5. Compute

    z_n = y_n − ψ_n(Ay_n − Ar_n).

  6. Compute

    x_{n+1} = α_nγf(r_n) + (I − α_nD)[(1 − β_n)z_n + β_nUz_n].

  7. Compute

    (17) ψ_{n+1} = min{ϕ‖r_n − y_n‖/‖Ar_n − Ay_n‖, ψ_n} if Ar_n ≠ Ay_n, and ψ_{n+1} = ψ_n otherwise.

    Set n ← n + 1 and return to Step 2.
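The following is a runnable sketch of Algorithm 3.1 on a toy problem; every piece of problem data (C, A, U, D, f and the parameter values) is my own illustrative choice satisfying Conditions A and B, not an example taken from the paper:

```python
import numpy as np

# Toy data: C is the unit ball, A(x) = R x + q is monotone (hence
# pseudomonotone) with L = 1, U = I is trivially 0-demicontractive,
# D = I (gamma_bar = 1), and f(x) = x/2 is a 1/2-contraction, so
# gamma = 1 satisfies 0 < gamma < gamma_bar / rho = 2.
R = np.array([[0.0, -1.0], [1.0, 0.0]])
q = np.array([0.5, 0.0])
A = lambda x: R @ x + q
U = lambda x: x
D = lambda x: x
f = lambda x: 0.5 * x
gamma = 1.0

def P_C(x):                       # projection onto the unit ball
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

delta, psi, phi = 0.3, 1.0, 0.5   # delta > 0, psi_1 > 0, phi in (0, 1)
beta = 0.5                        # beta_n in (a, 1 - tau) with tau = 0
x_prev, x = np.zeros(2), np.array([1.0, -1.0])

for n in range(1, 4000):
    alpha, eps = 1.0 / (n + 1), 1.0 / (n + 1) ** 2
    # Step 2: inertial parameter delta_n, per (16)
    diff = np.linalg.norm(x - x_prev)
    delta_n = min(eps / diff, delta) if diff > 0 else delta
    # Step 3: inertial extrapolation
    r = x + delta_n * (x - x_prev)
    # Step 4: the single projection per iteration
    y = P_C(r - psi * A(r))
    # Step 5: Tseng correction (z = r automatically when y = r)
    z = y - psi * (A(y) - A(r))
    # Step 6: viscosity step
    g = (1 - beta) * z + beta * U(z)
    x_prev, x = x, alpha * gamma * f(r) + (g - alpha * D(g))
    # Step 7: self-adaptive step size, per (17)
    d = np.linalg.norm(A(r) - A(y))
    if d > 0:
        psi = min(phi * np.linalg.norm(r - y) / d, psi)

# here Omega = VI(C, A) is the interior zero of A, namely (0, 0.5)
assert np.linalg.norm(x - np.array([0.0, 0.5])) < 5e-2
```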

Below are some of the interesting features of our proposed algorithm.

Remark 3.2

  1. Observe that Algorithm 3.1 involves only one projection onto the feasible set C per iteration, which makes the algorithm computationally efficient.

  2. The step size ψ n in (17) is self-adaptive and supports easy and simple computations, which makes it possible to implement our algorithm without prior knowledge of the Lipschitz constant of the cost operator.

  3. We also point out that in Step 2 of the algorithm, the inertial technique employed can easily be implemented in numerical computation, since the value of ‖x_n − x_{n−1}‖ is known prior to choosing δ_n.

Remark 3.3

It can easily be seen from (16) and condition (B1) that

lim_{n→∞} δ_n‖x_n − x_{n−1}‖ = 0 and lim_{n→∞} (δ_n/α_n)‖x_n − x_{n−1}‖ = 0.

4 Convergence analysis

First, we establish some lemmas which will be employed in the convergence analysis of our proposed algorithm.

Lemma 4.1

The sequence {ψ_n} generated by (17) is nonincreasing and lim_{n→∞} ψ_n = ψ ≥ min{ψ_1, ϕ/L}.

Proof

It follows from (17) that ψ_{n+1} ≤ ψ_n, ∀n ∈ ℕ. Hence, {ψ_n} is nonincreasing. Also, since A is Lipschitz continuous, we have

‖Ar_n − Ay_n‖ ≤ L‖r_n − y_n‖,

which implies that

‖r_n − y_n‖/‖Ar_n − Ay_n‖ ≥ 1/L, when Ar_n ≠ Ay_n.

Consequently, we obtain

ϕ‖r_n − y_n‖/‖Ar_n − Ay_n‖ ≥ ϕ/L, when Ar_n ≠ Ay_n.

Combining this with (17), we obtain

ψ_n ≥ min{ψ_1, ϕ/L}.

Since {ψ_n} is nonincreasing and bounded below, we conclude that

lim_{n→∞} ψ_n = ψ ≥ min{ψ_1, ϕ/L}.□

Lemma 4.2

Let {r_n} and {y_n} be two sequences generated by Algorithm 3.1, and suppose that conditions (A1)–(A3) hold. If there exists a subsequence {r_{n_k}} of {r_n} converging weakly to z ∈ H with lim_{k→∞} ‖r_{n_k} − y_{n_k}‖ = 0, then z ∈ VI(C, A).

Proof

Using the property of the projection map and y_n = P_C(r_n − ψ_nAr_n), we obtain

⟨r_{n_k} − ψ_{n_k}Ar_{n_k} − y_{n_k}, x − y_{n_k}⟩ ≤ 0, ∀x ∈ C,

which implies that

(1/ψ_{n_k})⟨r_{n_k} − y_{n_k}, x − y_{n_k}⟩ ≤ ⟨Ar_{n_k}, x − y_{n_k}⟩, ∀x ∈ C.

From this we obtain

(18) (1/ψ_{n_k})⟨r_{n_k} − y_{n_k}, x − y_{n_k}⟩ + ⟨Ar_{n_k}, y_{n_k} − r_{n_k}⟩ ≤ ⟨Ar_{n_k}, x − r_{n_k}⟩, ∀x ∈ C.

Since {r_{n_k}} converges weakly to z ∈ H, it is bounded. Then, from the Lipschitz continuity of A and ‖r_{n_k} − y_{n_k}‖ → 0, we obtain that {Ar_{n_k}} and {y_{n_k}} are also bounded. Since ψ_{n_k} ≥ min{ψ_1, ϕ/L}, from (18) it follows that

(19) lim inf_{k→∞} ⟨Ar_{n_k}, x − r_{n_k}⟩ ≥ 0, ∀x ∈ C.

Moreover, we have

(20) ⟨Ay_{n_k}, x − y_{n_k}⟩ = ⟨Ay_{n_k} − Ar_{n_k}, x − r_{n_k}⟩ + ⟨Ar_{n_k}, x − r_{n_k}⟩ + ⟨Ay_{n_k}, r_{n_k} − y_{n_k}⟩.

Since lim_{k→∞} ‖r_{n_k} − y_{n_k}‖ = 0, the Lipschitz continuity of A gives lim_{k→∞} ‖Ar_{n_k} − Ay_{n_k}‖ = 0. This together with (19) and (20) gives

lim inf_{k→∞} ⟨Ay_{n_k}, x − y_{n_k}⟩ ≥ 0.

Now, choose a decreasing sequence {θ_k} of positive numbers with θ_k → 0 as k → ∞. For each k, denote by N_k the smallest positive integer such that

(21) ⟨Ay_{n_j}, x − y_{n_j}⟩ + θ_k ≥ 0, ∀j ≥ N_k.

It is clear that the sequence {N_k} is increasing since {θ_k} is decreasing. Furthermore, for each k, since {y_{N_k}} ⊂ C, we may assume Ay_{N_k} ≠ 0 (otherwise, y_{N_k} is a solution) and set

υ_{N_k} = Ay_{N_k}/‖Ay_{N_k}‖².

Consequently, we have ⟨Ay_{N_k}, υ_{N_k}⟩ = 1 for each k. From (21), one easily deduces that

⟨Ay_{N_k}, x + θ_kυ_{N_k} − y_{N_k}⟩ ≥ 0, ∀k.

By the pseudomonotonicity of A, we have

⟨A(x + θ_kυ_{N_k}), x + θ_kυ_{N_k} − y_{N_k}⟩ ≥ 0,

which implies that

(22) ⟨Ax, x − y_{N_k}⟩ ≥ ⟨Ax − A(x + θ_kυ_{N_k}), x + θ_kυ_{N_k} − y_{N_k}⟩ − θ_k⟨Ax, υ_{N_k}⟩.

Next, we show that lim_{k→∞} θ_kυ_{N_k} = 0. Indeed, since r_{n_k} ⇀ z and lim_{k→∞} ‖r_{n_k} − y_{n_k}‖ = 0, we obtain y_{n_k} ⇀ z as k → ∞. Since {y_n} ⊂ C, we have z ∈ C. By the sequential weak continuity of A on C, we have Ay_{n_k} ⇀ Az. We may assume Az ≠ 0 (otherwise, z is a solution). Since the norm mapping is sequentially weakly lower semicontinuous, we have

0 < ‖Az‖ ≤ lim inf_{k→∞} ‖Ay_{n_k}‖.

Since {y_{N_k}} ⊂ {y_{n_k}} and θ_k → 0 as k → ∞, we obtain

0 ≤ lim sup_{k→∞} ‖θ_kυ_{N_k}‖ = lim sup_{k→∞} (θ_k/‖Ay_{N_k}‖) ≤ (lim sup_{k→∞} θ_k)/(lim inf_{k→∞} ‖Ay_{n_k}‖) = 0,

which implies that lim_{k→∞} ‖θ_kυ_{N_k}‖ = 0. Now, since A is Lipschitz continuous, the sequences {y_{N_k}} and {υ_{N_k}} are bounded, and lim_{k→∞} θ_kυ_{N_k} = 0, we conclude from (22) that

lim inf_{k→∞} ⟨Ax, x − y_{N_k}⟩ ≥ 0.

Consequently, we have

⟨Ax, x − z⟩ = lim_{k→∞} ⟨Ax, x − y_{N_k}⟩ = lim inf_{k→∞} ⟨Ax, x − y_{N_k}⟩ ≥ 0, ∀x ∈ C.

Thus, by Lemma 2.11, z ∈ VI(C, A) as required.□

Lemma 4.3

Let {z_n} and {y_n} be two sequences generated by Algorithm 3.1, and suppose that conditions (A1)–(A3) hold. Then, for all p ∈ Ω we have

(23) ‖z_n − p‖² ≤ ‖r_n − p‖² − (1 − ϕ²ψ_n²/ψ_{n+1}²)‖r_n − y_n‖²,

and

(24) ‖z_n − y_n‖ ≤ (ϕψ_n/ψ_{n+1})‖r_n − y_n‖.

Proof

By the definition of {ψ_n}, we have

(25) ‖Ar_n − Ay_n‖ ≤ (ϕ/ψ_{n+1})‖r_n − y_n‖, ∀n ∈ ℕ.

Clearly, if Ar_n = Ay_n, then inequality (25) holds. Otherwise, from (17) we have

ψ_{n+1} = min{ϕ‖r_n − y_n‖/‖Ar_n − Ay_n‖, ψ_n} ≤ ϕ‖r_n − y_n‖/‖Ar_n − Ay_n‖.

It then follows that

‖Ar_n − Ay_n‖ ≤ (ϕ/ψ_{n+1})‖r_n − y_n‖.

Thus, inequality (25) is valid whether Ar_n = Ay_n or not. Now, from the definition of z_n and applying Lemma 2.6, we have

(26) ‖z_n − p‖² = ‖y_n − ψ_n(Ay_n − Ar_n) − p‖² = ‖y_n − p‖² + ψ_n²‖Ay_n − Ar_n‖² − 2ψ_n⟨y_n − p, Ay_n − Ar_n⟩ = ‖r_n − p‖² + ‖y_n − r_n‖² + 2⟨y_n − r_n, r_n − p⟩ + ψ_n²‖Ay_n − Ar_n‖² − 2ψ_n⟨y_n − p, Ay_n − Ar_n⟩ = ‖r_n − p‖² + ‖y_n − r_n‖² − 2⟨y_n − r_n, y_n − r_n⟩ + 2⟨y_n − r_n, y_n − p⟩ + ψ_n²‖Ay_n − Ar_n‖² − 2ψ_n⟨y_n − p, Ay_n − Ar_n⟩ = ‖r_n − p‖² − ‖y_n − r_n‖² + 2⟨y_n − r_n, y_n − p⟩ + ψ_n²‖Ay_n − Ar_n‖² − 2ψ_n⟨y_n − p, Ay_n − Ar_n⟩.

Since y_n = P_C(r_n − ψ_nAr_n), the projection property gives

⟨y_n − r_n + ψ_nAr_n, y_n − p⟩ ≤ 0,

or equivalently,

(27) ⟨y_n − r_n, y_n − p⟩ ≤ −ψ_n⟨Ar_n, y_n − p⟩.

So, from (25), (26), and (27), we have

(28) ‖z_n − p‖² ≤ ‖r_n − p‖² − ‖y_n − r_n‖² − 2ψ_n⟨Ar_n, y_n − p⟩ + (ϕ²ψ_n²/ψ_{n+1}²)‖r_n − y_n‖² − 2ψ_n⟨y_n − p, Ay_n − Ar_n⟩ = ‖r_n − p‖² − (1 − ϕ²ψ_n²/ψ_{n+1}²)‖r_n − y_n‖² − 2ψ_n⟨y_n − p, Ay_n⟩.

Now, since p ∈ VI(C, A) and y_n ∈ C, we have

⟨Ap, y_n − p⟩ ≥ 0.

Then, by the pseudomonotonicity of A, we obtain

(29) ⟨Ay_n, y_n − p⟩ ≥ 0.

Combining (28) and (29), we have

‖z_n − p‖² ≤ ‖r_n − p‖² − (1 − ϕ²ψ_n²/ψ_{n+1}²)‖r_n − y_n‖².

Moreover, from the definition of z_n and (25), we obtain

‖z_n − y_n‖ = ψ_n‖Ay_n − Ar_n‖ ≤ (ϕψ_n/ψ_{n+1})‖r_n − y_n‖,

which completes the proof.□

Theorem 4.4

Assume that conditions (A) and (B) hold. Then, the sequence {x_n} generated by Algorithm 3.1 converges strongly to an element p ∈ Ω, where p = P_Ω(I − D + γf)(p) is a solution of the variational inequality

⟨(D − γf)p, p − q⟩ ≤ 0, ∀q ∈ Ω.

Proof

We divide the proof of Theorem 4.4 as follows:

Claim 1. The sequence { x n } generated by Algorithm 3.1 is bounded.

First, we show that P_Ω(I − D + γf) is a contraction on H. For all x, y ∈ H, we have

‖P_Ω(I − D + γf)(x) − P_Ω(I − D + γf)(y)‖ ≤ ‖(I − D + γf)(x) − (I − D + γf)(y)‖ ≤ ‖(I − D)x − (I − D)y‖ + γ‖fx − fy‖ ≤ (1 − γ̄)‖x − y‖ + γρ‖x − y‖ = (1 − (γ̄ − γρ))‖x − y‖.

This shows that P_Ω(I − D + γf) is a contraction. Thus, by the Banach contraction principle there exists an element p ∈ Ω such that p = P_Ω(I − D + γf)(p). Next, setting g_n = (1 − β_n)z_n + β_nUz_n and applying (23), we have

(30) ‖g_n − p‖² = ‖(1 − β_n)z_n + β_nUz_n − p‖² = ‖(1 − β_n)(z_n − p) + β_n(Uz_n − p)‖² = (1 − β_n)²‖z_n − p‖² + β_n²‖Uz_n − p‖² + 2(1 − β_n)β_n⟨Uz_n − p, z_n − p⟩

≤ (1 − β_n)²‖z_n − p‖² + β_n²[‖z_n − p‖² + τ‖z_n − Uz_n‖²] + 2(1 − β_n)β_n[‖z_n − p‖² − ((1 − τ)/2)‖z_n − Uz_n‖²] = ‖z_n − p‖² + β_n(β_nτ − (1 − β_n)(1 − τ))‖z_n − Uz_n‖² = ‖z_n − p‖² − β_n(1 − τ − β_n)‖z_n − Uz_n‖² ≤ ‖r_n − p‖² − (1 − ϕ²ψ_n²/ψ_{n+1}²)‖r_n − y_n‖² − β_n(1 − τ − β_n)‖Uz_n − z_n‖².

From this and the condition on β_n, we obtain

(31) ‖g_n − p‖² ≤ ‖r_n − p‖² − (1 − ϕ²ψ_n²/ψ_{n+1}²)‖r_n − y_n‖².

From Lemma 4.1, we have that

lim_{n→∞} (1 − ϕ²ψ_n²/ψ_{n+1}²) = 1 − ϕ² > 0.

This implies that there exists n_0 ∈ ℕ such that 1 − ϕ²ψ_n²/ψ_{n+1}² > 0 for all n ≥ n_0. Hence, from (31) we obtain

(32) ‖g_n − p‖² ≤ ‖r_n − p‖², ∀n ≥ n_0.

Also, by the definition of r_n and the triangle inequality,

(33) ‖r_n − p‖ = ‖x_n + δ_n(x_n − x_{n−1}) − p‖ ≤ ‖x_n − p‖ + δ_n‖x_n − x_{n−1}‖ = ‖x_n − p‖ + α_n(δ_n/α_n)‖x_n − x_{n−1}‖.

From Remark 3.3, we have (δ_n/α_n)‖x_n − x_{n−1}‖ → 0 as n → ∞. Thus, there exists a constant G_1 > 0 such that

(34) (δ_n/α_n)‖x_n − x_{n−1}‖ ≤ G_1, ∀n ≥ 1.

So, from (32), (33), and (34) we obtain

(35) ‖g_n − p‖ ≤ ‖r_n − p‖ ≤ ‖x_n − p‖ + α_nG_1, ∀n ≥ n_0.

Now, applying Lemma 2.6 and (35), for all n ≥ n_0 we have

‖x_{n+1} − p‖ = ‖α_nγf(r_n) + (I − α_nD)g_n − p‖ = ‖α_n(γf(r_n) − Dp) + (I − α_nD)(g_n − p)‖ ≤ α_n‖γf(r_n) − Dp‖ + (1 − α_nγ̄)‖g_n − p‖ ≤ α_n‖γf(r_n) − γf(p)‖ + α_n‖γf(p) − Dp‖ + (1 − α_nγ̄)(‖x_n − p‖ + α_nG_1) ≤ α_nγρ‖r_n − p‖ + α_n‖γf(p) − Dp‖ + (1 − α_nγ̄)(‖x_n − p‖ + α_nG_1) ≤ α_nγρ(‖x_n − p‖ + α_nG_1) + α_n‖γf(p) − Dp‖ + (1 − α_nγ̄)(‖x_n − p‖ + α_nG_1) = (1 − α_n(γ̄ − γρ))‖x_n − p‖ + α_n‖γf(p) − Dp‖ + (1 − α_n(γ̄ − γρ))α_nG_1 ≤ (1 − α_n(γ̄ − γρ))‖x_n − p‖ + α_n(γ̄ − γρ)[‖γf(p) − Dp‖/(γ̄ − γρ) + G_1/(γ̄ − γρ)] ≤ max{‖x_n − p‖, ‖γf(p) − Dp‖/(γ̄ − γρ) + G_1/(γ̄ − γρ)} ≤ ⋯ ≤ max{‖x_{n_0} − p‖, ‖γf(p) − Dp‖/(γ̄ − γρ) + G_1/(γ̄ − γρ)}.

Hence, the sequence {x_n} is bounded, and consequently {r_n}, {y_n}, and {z_n} are also bounded.

Claim 2. The following inequality holds for all p ∈ Ω and n ∈ ℕ:

‖x_{n+1} − p‖² ≤ (1 − 2α_n(γ̄ − γρ)/(1 − α_nγρ))‖x_n − p‖² + (2α_n(γ̄ − γρ)/(1 − α_nγρ))[(α_nγ̄²/(2(γ̄ − γρ)))G_3 + (3G_2((1 − α_nγ̄)² + α_nγρ)/(2(γ̄ − γρ)))(δ_n/α_n)‖x_n − x_{n−1}‖ + (1/(γ̄ − γρ))⟨γf(p) − Dp, x_{n+1} − p⟩] − ((1 − α_nγ̄)²/(1 − α_nγρ))[(1 − ϕ²ψ_n²/ψ_{n+1}²)‖r_n − y_n‖² + β_n(1 − τ − β_n)‖Uz_n − z_n‖²].

Using the Cauchy–Schwarz inequality and Lemma 2.6, we obtain

(36) ‖r_n − p‖² = ‖x_n + δ_n(x_n − x_{n−1}) − p‖² = ‖x_n − p‖² + δ_n²‖x_n − x_{n−1}‖² + 2δ_n⟨x_n − p, x_n − x_{n−1}⟩ ≤ ‖x_n − p‖² + δ_n²‖x_n − x_{n−1}‖² + 2δ_n‖x_n − x_{n−1}‖‖x_n − p‖ = ‖x_n − p‖² + δ_n‖x_n − x_{n−1}‖(δ_n‖x_n − x_{n−1}‖ + 2‖x_n − p‖) ≤ ‖x_n − p‖² + 3G_2δ_n‖x_n − x_{n−1}‖ = ‖x_n − p‖² + 3G_2α_n(δ_n/α_n)‖x_n − x_{n−1}‖,

where G_2 ≔ sup_{n∈ℕ}{‖x_n − p‖, δ_n‖x_n − x_{n−1}‖} > 0.

Now, applying Lemma 2.6, (30), and (36), we have

‖x_{n+1} − p‖² = ‖α_nγf(r_n) + (I − α_nD)g_n − p‖² = ‖α_n(γf(r_n) − Dp) + (I − α_nD)(g_n − p)‖² ≤ (1 − α_nγ̄)²‖g_n − p‖² + 2α_n⟨γf(r_n) − Dp, x_{n+1} − p⟩ ≤ (1 − α_nγ̄)²[‖r_n − p‖² − (1 − ϕ²ψ_n²/ψ_{n+1}²)‖r_n − y_n‖² − β_n(1 − τ − β_n)‖Uz_n − z_n‖²] + 2α_n⟨γf(r_n) − γf(p), x_{n+1} − p⟩ + 2α_n⟨γf(p) − Dp, x_{n+1} − p⟩ ≤ (1 − α_nγ̄)²[‖r_n − p‖² − (1 − ϕ²ψ_n²/ψ_{n+1}²)‖r_n − y_n‖² − β_n(1 − τ − β_n)‖Uz_n − z_n‖²] + α_nγρ(‖r_n − p‖² + ‖x_{n+1} − p‖²) + 2α_n⟨γf(p) − Dp, x_{n+1} − p⟩ ≤ (1 − α_nγ̄)²[‖x_n − p‖² + 3G_2α_n(δ_n/α