* A self adaptive inertial subgradient extragradient algorithm for variational inequality and common fixed point of multivalued mappings in Hilbert spaces

Abstract: We consider a new subgradient extragradient iterative algorithm with inertial extrapolation for approximating a common solution of variational inequality problems and fixed point problems of a multivalued demicontractive mapping in a real Hilbert space. We establish a strong convergence theorem for the proposed algorithm under suitable conditions and without prior knowledge of the Lipschitz constant of the underlying operator. We present numerical examples to show that the proposed algorithm performs better than some recent algorithms in the literature.


Introduction
Let C be a nonempty, closed and convex subset of a real Hilbert space H with inner product ⟨·, ·⟩ and norm || · ||. The Variational Inequality Problem (VIP) is defined as finding a point x* ∈ C satisfying

⟨Ax*, z − x*⟩ ≥ 0, for all z ∈ C, (1)

where A : C → H is a given mapping. The VIP is an important tool in studying a wide class of problems arising in pure and applied science, such as minimization problems, saddle point problems, partial differential equations, optimal control problems, economics, engineering and mathematical programming. We denote the set of solutions of the VIP by VI(C, A). This field is experiencing explosive growth in both theory and applications. Several iterative methods have been developed for solving the VIP and its related optimization problems; see [1][2][3][4][5] and references therein. It is well known that the VIP is equivalent to the following fixed point problem (see [6]):

x* ∈ VI(C, A) if and only if x* = P_C(x* − µAx*),

where µ > 0 is an arbitrary constant and P_C is the metric projection from H onto C. A well known algorithm for finding the solution of the VIP is the extragradient method (EgM) proposed by Korpelevich [7], which is given as follows:

x_0 ∈ C,
y_n = P_C(x_n − µAx_n),
x_{n+1} = P_C(x_n − µAy_n),

where C ⊆ R^n, A : C → R^n is a monotone and L-Lipschitz continuous operator and µ ∈ (0, 1/L).
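For a quick numerical illustration of the EgM iteration just described, here is a minimal Python sketch. The concrete choices below (a unit ball as C, a rotation operator as A, the parameter values) are ours for illustration only and are not taken from the paper; the rotation is monotone and 1-Lipschitz but not a gradient field, which is exactly the situation where the extragradient correction step matters.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Euclidean projection onto the closed ball {z : ||z|| <= radius}.
    n = np.linalg.norm(x)
    return x if n <= radius else radius * x / n

def extragradient(A, proj_C, x0, mu, iters=500):
    # Korpelevich's extragradient method (EgM):
    #   y_n     = P_C(x_n - mu * A(x_n))
    #   x_{n+1} = P_C(x_n - mu * A(y_n))
    x = x0
    for _ in range(iters):
        y = proj_C(x - mu * A(x))
        x = proj_C(x - mu * A(y))
    return x

# Rotation operator: monotone and 1-Lipschitz, so mu must lie in (0, 1).
A = lambda x: np.array([x[1], -x[0]])
x = extragradient(A, project_ball, np.array([1.0, 1.0]), mu=0.3)
```

Here the unique solution of the VIP is the origin, and the iterates contract toward it even though A is not the gradient of any function.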
If the solution set VI(C, A) is nonempty, then the sequence {x_n} generated by the EgM converges weakly to an element of VI(C, A). In recent years, the EgM has been further extended to infinite dimensional spaces (see [8][9][10][11][12]). Note that the EgM involves two projections onto the set C and two evaluations of A per iteration. A major improvement on the EgM is to minimize the number of evaluations of P_C per iteration. An attempt in this direction was initiated by Censor et al. [13], who modified the EgM by replacing the second projection with a projection onto a half-space. This new method, which thus involves only one projection onto C, is called the subgradient extragradient method and is given as follows:

Algorithm 1.2. (The Subgradient Extragradient Method (SEgM))

x_0 ∈ H,
y_n = P_C(x_n − µAx_n),
T_n = {w ∈ H : ⟨x_n − µAx_n − y_n, w − y_n⟩ ≤ 0},
x_{n+1} = P_{T_n}(x_n − µAy_n).
Censor et al. [13] showed that if the solution set VI(C, A) is nonempty, the sequence {x_n} generated by the SEgM converges weakly to an element p ∈ VI(C, A), where p = lim_{n→∞} P_{VI(C,A)}(x_n). Using only a single projection onto C, Maingé and Gobinddass [14] (see also Maingé [15]) also obtained a weak convergence result for an algorithm approximating solutions of the VIP in a real Hilbert space by means of a projected reflected gradient-type method [16] and inertial terms. Several other alternatives to the EgM and its modifications have also been proposed in the literature.
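The computational appeal of the SEgM is that the half-space projection has a closed form, so only one projection onto C is needed per iteration. The following Python sketch illustrates this; as before, the ball-shaped C, the rotation operator A and the parameter values are our own illustrative choices, not the paper's.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Euclidean projection onto the ball {z : ||z|| <= radius}.
    n = np.linalg.norm(x)
    return x if n <= radius else radius * x / n

def project_halfspace(u, normal, point):
    # Closed-form projection onto {w : <normal, w - point> <= 0}.
    t = np.dot(normal, u - point)
    return u if t <= 0 else u - (t / np.dot(normal, normal)) * normal

def seg_method(A, proj_C, x0, mu, iters=500):
    # SEgM: one projection onto C, one onto the supporting half-space T_n.
    x = x0
    for _ in range(iters):
        z = x - mu * A(x)
        y = proj_C(z)
        # T_n = {w : <z - y, w - y> <= 0} contains C by the projection
        # characterization; projecting onto T_n is cheap.
        x = project_halfspace(x - mu * A(y), z - y, y)
    return x

A = lambda x: np.array([x[1], -x[0]])  # monotone, 1-Lipschitz rotation
x = seg_method(A, project_ball, np.array([1.0, 1.0]), mu=0.3)
```

When the trial point z already lies in C, the half-space normal z − y vanishes and the half-space step reduces to an unconstrained update, which the guard `t <= 0` handles without division by zero.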
In solving optimization problems, strong convergence of iterative schemes is more desirable than weak convergence, as pointed out by Bauschke and Combettes in [17]. Therefore, it is beneficial to develop an algorithm that generates a strongly convergent sequence.
Let S be a nonlinear mapping and let F(S) denote the set of fixed points of S (i.e. F(S) := {x ∈ H : Sx = x}). It is important to study the problem of finding a common solution of both the VIP and a fixed point problem due to its possible applications to mathematical models whose constraints can be expressed as fixed point problems and variational inequalities. This happens, in particular, in practical problems such as signal processing, network resource allocation and image recovery; see for instance [18][19][20][21][22][23][24][25]. In [13], Censor et al. studied the approximation of a common solution to a variational inequality problem and a fixed point problem for a nonexpansive mapping; in particular, they proposed a modification of the SEgM and proved its weak convergence to a solution u* ∈ F(S) ∩ VI(C, A). Motivated by the work of [26], Thong and Hieu [27] proposed the following two algorithms for finding a common element of the set of solutions of the VIP and the set of fixed points of a demicontractive mapping.
Also, in 2000, Moudafi [28] introduced the viscosity approximation method for approximating fixed points of nonexpansive mappings. Let f be a contraction on H and start with an arbitrary x_0 ∈ H. A sequence {x_n} is then defined recursively by

x_{n+1} = λ_n f(x_n) + (1 − λ_n)Tx_n, (7)

where {λ_n} is a sequence in (0, 1). Xu [29] proved that if {λ_n} satisfies certain conditions, the sequence {x_n} generated by (7) converges strongly to the unique solution x ∈ F(T) of the variational inequality

⟨(I − f)x, z − x⟩ ≥ 0, for all z ∈ F(T).

Based on the heavy ball method for a second-order time dynamical system, Polyak [30] first proposed inertial extrapolation as an acceleration process for solving the smooth convex minimization problem. An inertial algorithm is a two-step iteration in which the next iterate is defined by making use of the previous two iterates. Recently, several researchers have constructed fast iterative algorithms by using inertial extrapolation; these include inertial proximal methods [31,32], inertial forward-backward methods [33], inertial split equilibrium methods [34], inertial proximal ADMM [35] and the fast iterative shrinkage-thresholding algorithms FISTA [36,37]. In 2008, using the technique of inertial extrapolation, Maingé [38] introduced the following inertial Mann algorithm:

y_n = x_n + θ_n(x_n − x_{n−1}),
x_{n+1} = (1 − w_n)y_n + w_n Ty_n,

for each n ≥ 1. He showed that the iterative sequence {x_n} converges weakly to a fixed point of T under conditions (A1)-(A3) on {θ_n} and {w_n}; in particular, condition (A2) requires that Σ_n θ_n ||x_n − x_{n−1}||² < ∞. To satisfy Condition (A2), it is necessary to first calculate θ_n at each step (see [32]). In 2015, Bot and Csetnek [39] removed condition (A2) and substituted (A1) and (A3) with the following conditions: (B1) for each n ≥ 1, {θ_n} ⊂ [0, α) is nondecreasing with θ_1 = 0 and 0 ≤ α < 1, and (B2) a bound on the relaxation parameters in terms of constants λ, σ, δ > 0, for each n ≥ 1.
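The inertial Mann scheme above is easy to experiment with numerically. The sketch below is our illustration only: constant θ and w are used (the theory allows variable θ_n, w_n subject to the conditions above), and the map T is a simple contraction, hence in particular nonexpansive, with fixed point 1.

```python
def inertial_mann(T, x0, x1, theta=0.2, w=0.5, iters=200):
    # Inertial Mann iteration (in the spirit of the scheme in [38]):
    #   y_n     = x_n + theta * (x_n - x_{n-1})   (inertial extrapolation)
    #   x_{n+1} = (1 - w) * y_n + w * T(y_n)      (Mann relaxation)
    x_prev, x = x0, x1
    for _ in range(iters):
        y = x + theta * (x - x_prev)
        x_prev, x = x, (1 - w) * y + w * T(y)
    return x

# Illustration: T(t) = (t + 1)/2 has the unique fixed point 1.
T = lambda t: 0.5 * (t + 1.0)
x = inertial_mann(T, 5.0, 4.0)
```

The extrapolation term θ(x_n − x_{n−1}) reuses the direction of the last displacement, which is the mechanism behind the acceleration observed for heavy-ball-type methods.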
Recently, Dong et al. [40] introduced an inertial extragradient algorithm (iEgA) by incorporating an inertial term into the EgM.
Observe that the stepsize µ of the extragradient, inertial extragradient and subgradient extragradient algorithms plays an essential role in the convergence properties of these iterative methods. The Lipschitz constant L is typically assumed to be known, or at least estimated a priori. In many cases, this parameter is unknown or difficult to approximate. Moreover, the stepsize defined by this constant is often very small and deteriorates the convergence rate. In practice, a larger stepsize can often be used and yields better numerical results. It is thus natural to ask the following question: Is it possible to have an inertial subgradient extragradient algorithm with a self-adaptive stepsize which converges strongly to a common solution of a variational inequality and a fixed point problem?
It is our aim, therefore, to provide an affirmative answer to this question. Motivated by the work of Censor et al. [26], Thong and Hieu [27] and Dong et al. [40], we introduce an inertial viscosity subgradient extragradient type algorithm with self-adaptive stepsize. We also prove a strong convergence theorem for approximating a common solution of the VIP and a finite family of multivalued λ-demicontractive mappings in a real Hilbert space under suitable conditions. Our algorithm combines a modified subgradient extragradient algorithm with viscosity approximation and inertial extrapolation. The stepsize is selected self-adaptively and hence does not require a prior estimate of the Lipschitz constant L. Furthermore, we present some numerical examples to show that our proposed algorithm is more efficient than some existing ones in the literature.

Preliminaries
Let C be a nonempty, closed and convex subset of a real Hilbert space H. We denote the strong and weak convergence of a sequence {x_n} ⊆ H to a point p ∈ H by x_n → p and x_n ⇀ p, respectively. For each x ∈ H, there exists a unique element P_C x in C such that

||x − P_C x|| ≤ ||x − y||, for all y ∈ C.

The mapping P_C is called the metric projection from H onto C. It is well known that P_C has the following characteristics:
(i) ⟨x − y, P_C x − P_C y⟩ ≥ ||P_C x − P_C y||², for every x, y ∈ H;
(ii) for x ∈ H and z ∈ C, z = P_C x if and only if ⟨x − z, z − y⟩ ≥ 0, for all y ∈ C; (10)
(iii) for x ∈ H and y ∈ C, ||P_C x − y||² ≤ ||x − y||² − ||x − P_C x||².
Given x, y ∈ H with y ≠ 0, let Q = {z ∈ H : ⟨y, z − x⟩ ≤ 0}. Then, for all u ∈ H, the projection P_Q(u) is given by

P_Q(u) = u − (max{0, ⟨y, u − x⟩}/||y||²) y,

which gives us an explicit formula for finding the projection of any point onto a half-space.
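The explicit half-space projection formula P_Q(u) = u − (max{0, ⟨y, u − x⟩}/||y||²) y is a standard fact and is simple to verify numerically against the variational characterization of P_C. The following sketch (our illustration, with random data) checks that the computed point lies in Q.

```python
import numpy as np

def project_halfspace(u, y, x):
    # P_Q(u) = u - max(0, <y, u - x>) / ||y||^2 * y
    # for the half-space Q = {z : <y, z - x> <= 0}, y != 0.
    t = np.dot(y, u - x)
    return u if t <= 0 else u - (t / np.dot(y, y)) * y

rng = np.random.default_rng(0)
y, x = rng.normal(size=3), rng.normal(size=3)
u = rng.normal(size=3)
p = project_halfspace(u, y, x)
# Feasibility: p lies in Q (equality holds when u was outside Q).
assert np.dot(y, p - x) <= 1e-10
```

One can also check minimality: for any other point z ∈ Q, ||u − p|| ≤ ||u − z||, which is exactly the defining property of the metric projection.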
The normal cone of a nonempty closed convex subset C of H at a point x ∈ C, denoted by N_C(x), is defined as

N_C(x) := {w ∈ H : ⟨w, y − x⟩ ≤ 0, for all y ∈ C}.

A monotone operator B : H → 2^H is said to be maximal monotone if its graph G(B) is not properly contained in the graph of any other monotone operator (see [41], Theorem 3.3).

It is clear that a monotone mapping B is maximal if and only if, for any (x, u) ∈ H × H, ⟨x − y, u − v⟩ ≥ 0 for every (y, v) ∈ G(B) implies that u ∈ Bx.
Let A : C → H be a monotone mapping and let B : H → 2^H be a mapping defined by

Bx = Ax + N_C(x), if x ∈ C, and Bx = ∅, if x ∉ C.

Then B is maximal monotone and x ∈ B^{−1}(0) if and only if x ∈ VI(C, A) (see, e.g., [41]). Let C be a nonempty subset of H and k ∈ (0, 1). A mapping F : C → C is called a k-contraction if ||Fx − Fy|| ≤ k||x − y|| for all x, y ∈ C. We now recall some basic definitions of multivalued mappings.

Definition 2.2. A multivalued mapping S : H → CB(H) is said to be λ-demicontractive, with λ ∈ [0, 1), if F(S) ≠ ∅ and, for all p ∈ F(S) and x ∈ H,

||u − p||² ≤ ||x − p||² + λ d(x, Sx)², for all u ∈ Sx.
We note that the class of λ-demicontractive mappings includes several other classes of nonlinear mappings, such as nonexpansive and quasi-nonexpansive mappings. The best approximation operator P_S for a multivalued mapping S : H → P(H) is defined by

P_S(x) := {y ∈ Sx : ||x − y|| = d(x, Sx)}.

One can easily prove that F(S) = F(P_S) and that P_S satisfies the endpoint condition. However, Song and Cho [42] gave an example of a best approximation operator P_S which is nonexpansive, but where S is not necessarily nonexpansive. The following results will be used in the sequel.

Lemma 2.4. [43,44] In a real Hilbert space H, the following inequalities hold for all x, y ∈ H:
(i) ||x + y||² = ||x||² + 2⟨x, y⟩ + ||y||²;
(ii) ||x + y||² ≤ ||x||² + 2⟨y, x + y⟩.

Lemma 2.5. [45] Let H be a real Hilbert space. Then the following identity holds for all x, y ∈ H and α ∈ [0, 1]:

||αx + (1 − α)y||² = α||x||² + (1 − α)||y||² − α(1 − α)||x − y||².

Lemma 2.8. [20] Let {a_n} be a sequence of real numbers such that there exists a subsequence {n_i} of {n} with a_{n_i} < a_{n_i+1} for all i ∈ N. Consider the sequence of integers {m_k} defined by

m_k := max{j ≤ k : a_j < a_{j+1}}.

Then {m_k} is a nondecreasing sequence verifying lim_{k→∞} m_k = ∞, and for all k ∈ N, the following estimates hold: a_{m_k} ≤ a_{m_k+1} and a_k ≤ a_{m_k+1}.
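The convexity identity ||αx + (1 − α)y||² = α||x||² + (1 − α)||y||² − α(1 − α)||x − y||² is a standard Hilbert-space fact used repeatedly in convergence proofs of this kind. As a sanity check (our illustration, in R^5 with random data):

```python
import numpy as np

# Numerical check of the identity
#   ||a*x + (1-a)*y||^2 = a||x||^2 + (1-a)||y||^2 - a(1-a)||x - y||^2
rng = np.random.default_rng(1)
ok = True
for _ in range(100):
    x, y = rng.normal(size=5), rng.normal(size=5)
    a = rng.uniform()
    lhs = np.linalg.norm(a * x + (1 - a) * y) ** 2
    rhs = (a * np.linalg.norm(x) ** 2 + (1 - a) * np.linalg.norm(y) ** 2
           - a * (1 - a) * np.linalg.norm(x - y) ** 2)
    ok = ok and abs(lhs - rhs) < 1e-9
```

The identity is what makes Mann-type relaxations (1 − α)y + αTy contractive "on average": the negative term −α(1 − α)||x − y||² quantifies the gain from averaging.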

Main results
In this section, we give a precise statement of our algorithm and discuss its strong convergence. Let C be a nonempty, closed and convex subset of a real Hilbert space H and A : C → H be a monotone and L-Lipschitz continuous mapping.
Observe that if w_n = u_n = x_n and x_n ∈ S_i x_n, then we are at a common solution of the variational inequality (1) and a common fixed point of S_i for all i = 1, 2, . . . , m. In our convergence analysis, we implicitly assume that this does not occur after finitely many iterations, so that Algorithm 3.1 generates an infinite sequence. The following result shows that the stepsize rule defined by (16) is well defined.

Lemma 3.3. [47] There exists a nonnegative integer m_n satisfying (16).

In order to establish our main result, we make the following assumption. Also, note that Step 1 in our Algorithm 3.1 is easily implemented in numerical computation, since the value of ||x_n − x_{n−1}|| is known a priori before choosing α_n.
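The stepsize rule (16) is not reproduced here, but self-adaptive rules of this kind are typically Armijo-type line searches. The sketch below shows an assumed standard variant, with parameter names σ, η, δ borrowed from the numerical section; the operator, feasible set and trial point are our own illustrative choices.

```python
import numpy as np

def linesearch_stepsize(A, proj_C, w, sigma=6.0, eta=0.7, delta=0.9, max_m=100):
    # Assumed Armijo-type rule: find the smallest nonnegative integer m with
    # mu = sigma * eta**m such that
    #   mu * ||A(w) - A(y)|| <= delta * ||w - y||,  y = P_C(w - mu * A(w)).
    # No Lipschitz constant of A is needed.
    Aw = A(w)
    mu, y = sigma, proj_C(w - sigma * Aw)
    for m in range(max_m):
        mu = sigma * eta ** m
        y = proj_C(w - mu * Aw)
        if mu * np.linalg.norm(Aw - A(y)) <= delta * np.linalg.norm(w - y):
            break
    return mu, y

proj_ball = lambda x: x if np.linalg.norm(x) <= 1.0 else x / np.linalg.norm(x)
A = lambda x: np.array([x[1], -x[0]])  # monotone, 1-Lipschitz (an isometry here)
mu, y = linesearch_stepsize(A, proj_ball, np.array([0.5, 0.5]))
```

When w − y = 0, both sides of the test vanish and the search stops immediately, mirroring the existence claim of Lemma 3.3; for an L-Lipschitz A the test always succeeds once µ ≤ δ/L, so the loop terminates after finitely many trials.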
We proceed to prove the following lemmas before proving the convergence of our main Algorithm 3.1. Proof. Let p ∈ Γ. From (11) and (15), we obtain the corresponding estimates. Since A is monotone, we have ⟨Aw_n − Ap, w_n − p⟩ ≥ 0 for all n ≥ 1, and hence ⟨Aw_n, w_n − p⟩ ≥ ⟨Ap, w_n − p⟩.

By induction, we get
Taking the lim sup of both sides in the last inequality (noting that c_n → 0) yields a contradiction with the fact that {s_n} is a nonnegative real sequence. Therefore, lim sup_{n→∞} b_n ≥ −1.
We next state and prove our main theorem.
Proof. Let p ∈ Γ and denote ||xn − p|| 2 by Φn. We consider the following two possible cases.
Since w_{n_j} = P_C(u_{n_j} − µ_{n_j}Au_{n_j}), by the characterization of P_C we obtain, from (45) and (46), the inequality (47). Passing to the limit in (47) (using the continuity of A and noting that lim inf_{j→∞} µ_{n_j} > 0), it follows from (38) that ⟨v − x, w⟩ ≥ 0.
As a consequence, we obtain that for all n ≥ n 0 , Hence, limn→∞ ||xn − p|| = 0. This implies that {xn} converges strongly to p. This completes the proof.
Recall that the class of quasi-nonexpansive mappings is 0-demicontractive. Thus, we can also obtain the following result for approximating a common solution of the VIP and a finite family of multivalued quasi-nonexpansive mappings.
(i) For suitable starting points, Algorithm 3.1 generates appropriate solutions which approximate the whole solution set Γ as guaranteed by Theorem 3.8. This is an interesting property which is different, for example, from the class of Tikhonov-type regularization approaches where the corresponding sequences always converge to the same solution. (ii) We emphasize here that Algorithm 3.1 does not require a prior estimate of the Lipschitz constant for its strong convergence.

Numerical example
In this section, we provide examples to compare our inertial viscosity subgradient extragradient Algorithm 3.1 with Algorithms (5) and (6) of Thong and Hieu [27] and with the iEgA (9) of Dong et al. [40]. All computations are carried out using Matlab version 2017a on an HP personal computer. We start with the following example of a multivalued λ-demicontractive mapping given by Jailoka and Suantai in [48]. Let H = R and, for each i ∈ N, define S_i : R → 2^R as in [48]. Then S_i is λ_i-demicontractive with λ_i = (4i² + 8i)/(4i² + 12i + 9) ∈ (0, 1).
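The claim that each λ_i lies in (0, 1) can be confirmed directly; the short check below (our illustration) also shows that λ_i increases toward 1 with i, so the constants never degenerate to the quasi-nonexpansive case λ = 0.

```python
# lambda_i = (4i^2 + 8i) / (4i^2 + 12i + 9); the denominator is (2i + 3)^2,
# which strictly exceeds the numerator 4i(i + 2) for every i >= 1.
lambdas = [(4 * i * i + 8 * i) / (4 * i * i + 12 * i + 9) for i in range(1, 8)]
```

For instance, λ_1 = 12/25 = 0.48 and λ_i → 1 as i → ∞.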
Example 4.1. Many problems arising in signal and image processing can be formulated as inverting the equation

b = Bx + e, (55)

where x ∈ R^N is the unknown original image or data to be recovered, b ∈ R^M is the vector of noisy observations, e is an additive noise with bounded variance and B : R^N → R^M is a bounded linear observation operator. In particular, we note that B is typically ill behaved because it models an acquisition process that encounters loss of information. When attempting to find sparse solutions to linear inverse problems of type (55), a successful model is the convex unconstrained minimization problem

min_{x ∈ R^N} (1/2)||Bx − b||² + ν||x||_1, (56)

where ν is a positive number, || · || is the Euclidean norm and || · ||_1 is the l_1 norm. The aim of the l_1 term, which is a convex sparsity-promoting penalty, is to drive the small components of x to zero. By means of convex analysis, one can show that a minimizer of (56) is actually a solution to the LASSO problem

min_{x ∈ R^N} (1/2)||Bx − b||² subject to ||x||_1 ≤ t, (57)

for some nonnegative real number t (see [49]). It is easy to see that the optimization problem (57) is a special case of the variational inequality problem (1), where A(x) = B^T(Bx − b) and C = {x : ||x||_1 ≤ t}. Hence, we can use the proposed Algorithm 3.1 to approximate a solution of (55). The projection onto the closed l_1 ball in R^N is computed through the soft thresholding operator, defined componentwise by

(S_λ(x))_i = sign(x_i) max{|x_i| − λ, 0},

for λ > 0. We set f(x) = x/16, D(x) = x, σ = 6, δ = 0.9, η = 0.7, δ_n = 1/(n+1), ϵ_n = 1/(n+1)⁴ and α = 3 in Algorithm 3.1, with the remaining parameters chosen for each n ∈ N and i ≥ 0. We set the image to go through a random blur and random noise, and choose the starting points x_0 = −0.5 * randn(N, 1) and x_1 = 2 * randn(N, 1). We then plot the graphs of the error term ||x_{n+1} − x_n|| against the number of iterations for Algorithm 3.1, THSEgM(i) and THSEgM(ii). The numerical results are shown in Table 1 and Figure 1.

Example 4.2. Here it is easy to verify that A is 1-Lipschitz continuous and monotone on H. Let C := {x ∈ H : ||x|| ≤ 1} be the unit ball.
It is known that P_C(x) = x/max{1, ||x||}. Clearly, S_i is 0-demicontractive and Γ = VI(C, A) ∩ ⋂_{i∈N} F(S_i) = {0}. We choose the following starting points: Case (i): x_0(t) = t² exp(7t) and x_1(t) = (1/6) sin(−3t); Case (ii): x_0(t) = 0.5 cos(t) and x_1(t) = cos(−10t); Case (iii): x_0(t) = 5 exp(t) and x_1(t) = (2/3) cos(t). Let f(x) = x(t)/2, D(x) = ∫_0^1 x(t)dt, σ = 4, η = 0.5, δ_n = 1/(n+1), ϵ_n = 1/(n+1)², α = 3, and β_{n,i} = 1/n for each i ∈ N ∪ {0}. Using ||x_{n+1} − x_n||/||x_2 − x_1|| < 10^{−6} as a stopping criterion, we plot the graphs of ||x_{n+1} − x_n|| against the number of iterations for both Algorithm 3.1 and the iEgA (9). The numerical results are shown in Table 2 and Figure 2.
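The l_1-ball projection used in Example 4.1 can be sketched in a few lines. The bisection on the threshold λ below is an assumed, simple implementation (not the paper's Matlab code); it exploits the fact that ||S_λ(x)||_1 is continuous and nonincreasing in λ.

```python
import numpy as np

def soft_threshold(x, lam):
    # Componentwise shrinkage: sign(x_i) * max(|x_i| - lam, 0).
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def project_l1_ball(x, t):
    # Projection onto C = {x : ||x||_1 <= t}: bisect on the threshold lam
    # until ||soft_threshold(x, lam)||_1 = t.
    if np.abs(x).sum() <= t:
        return x                     # already feasible
    lo, hi = 0.0, float(np.abs(x).max())
    for _ in range(100):
        lam = 0.5 * (lo + hi)
        if np.abs(soft_threshold(x, lam)).sum() > t:
            lo = lam                 # still outside the ball: shrink harder
        else:
            hi = lam
    return soft_threshold(x, 0.5 * (lo + hi))

# Example: projecting (3, -1, 0.5) onto the l1 ball of radius 2 keeps only
# the dominant component, illustrating the sparsity-promoting effect.
p = project_l1_ball(np.array([3.0, -1.0, 0.5]), 2.0)
```

The same shrinkage operator, applied with a fixed λ, is the proximal map of the l_1 penalty in (56), which is why soft thresholding appears in both the penalized and the constrained formulations.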

Concluding remarks
The paper has introduced and proved the strong convergence of a viscosity subgradient extragradient algorithm with inertial extrapolation for approximating common solutions of a variational inequality problem and fixed points of a finite family of demicontractive mappings in a real Hilbert space. Some numerical examples are also provided to show that the proposed scheme converges faster than some existing ones. We highlight some further contributions made in this paper as follows: (i) Unlike the results of Censor et al. [13], Kraikaew and Saejung [50], and Thong and Hieu [27], which require that the stepsize satisfies µ_n = µ ∈ (0, 1/L), where L is the Lipschitz constant of A, in this paper the stepsize µ_n is determined through a line search, which gives room for a larger stepsize value at each iteration. This also improves the corresponding results in [40] and [51]. (ii) The inertial technique in Step 1 of Algorithm 3.1 is new and efficient. It helps us to avoid imposing some strong conditions that have usually been used for inertial-type algorithms by many authors; see for instance [33,37,39] and references therein. Furthermore, it assists in obtaining the strong convergence of the sequence {x_n} generated by Algorithm 3.1. This improves the weak convergence results of inertial-type algorithms announced by, for instance, [33,40,52]. (iii) In [53], using a Mann-type algorithm, the authors proved a weak convergence result for an inertial subgradient extragradient algorithm, whereas in this paper we used a viscosity approximation method and proved a strong convergence result for an inertial subgradient extragradient algorithm in a real Hilbert space.