A strong convergence theorem for a zero of the sum of a finite family of maximally monotone mappings

Abstract The purpose of this article is to study a method of approximating zeros of the sum of a finite family of maximally monotone mappings and to prove strong convergence of the proposed approximation method under suitable conditions. The method of proof is of independent interest. In addition, we give some applications to minimization problems and provide a numerical example that supports our main result. Our theorems improve and unify most of the results that have been proved for this important class of nonlinear mappings.


Introduction
Let H be a real Hilbert space with inner product ⟨·,·⟩ and induced norm ‖·‖.
A multi-valued mapping A : H → 2^H is called monotone if ⟨x − y, u − v⟩ ≥ 0 for all x, y ∈ H, u ∈ Ax and v ∈ Ay, and it is called maximally monotone if it is monotone and the graph of A is not properly contained in the graph of any other monotone mapping. The resolvent of A with parameter λ > 0 is J_{λA} = (I + λA)^{−1}, where I is the identity mapping on H, and it enjoys the firmly nonexpansive property, that is, for any x, y ∈ ran(I + λA), we have

‖J_{λA}x − J_{λA}y‖² ≤ ⟨x − y, J_{λA}x − J_{λA}y⟩.

The problem of interest is the inclusion problem: find z ∈ H such that

0 ∈ F(z) := (A_1 + A_2 + ⋯ + A_m)(z),    (4)

where each A_i : H → 2^H is maximally monotone. This problem, which includes variational inequality problems, equilibrium problems, complementarity problems, minimization problems, nonlinear evolution equations and fixed point problems as special cases, is quite general. In fact, a number of problems arising in applied areas such as image recovery, machine learning and signal processing can be mathematically modeled as (4); see [1,2] and the references therein. To be more precise, a stationary solution to the initial value problem of the evolution equation can be formulated as (4) when the governing maximally monotone mapping F is of the form F := A_1 + A_2 + ⋯ + A_m (see, e.g., [3]). Furthermore, optimization problems often require (see, e.g., [4]) solving a minimization problem of the form

min_{x ∈ H} Σ_{i=1}^m g_i(x),    (5)

where g_i, i = 1, 2, …, m, are proper lower semicontinuous convex functions from H to the extended real line. The optimality condition for (5) can be written as

0 ∈ ∂g_1(x) + ∂g_2(x) + ⋯ + ∂g_m(x),    (6)

where ∂g_i is the subdifferential operator of g_i in the sense of convex analysis; then (6) is equivalent to (4) with A_i = ∂g_i. Consequently, considerable research efforts have been devoted to methods of finding approximate solutions (when they exist) of inclusions of the form (4) for a sum of a finite number of monotone mappings (see, e.g., [3,5]).
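As a concrete illustration (our own toy example, not one from the paper), the resolvent of A = ∂g for g(x) = |x| on the real line is the soft-thresholding map, and its firm nonexpansiveness can be checked numerically:

```python
# Toy illustration: the resolvent J_{λA} = (I + λA)^{-1} for A = ∂|·|
# is soft thresholding, and it is firmly nonexpansive, i.e.
#   ‖Jx − Jy‖² ≤ ⟨x − y, Jx − Jy⟩.
# We verify this inequality on random pairs of points.
import random

lam = 0.7  # resolvent parameter λ > 0

def resolvent(x):
    # J_{λ∂|·|}(x) = sign(x) * max(|x| − λ, 0)  (soft thresholding)
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    jx, jy = resolvent(x), resolvent(y)
    # firm nonexpansiveness in one dimension
    assert (jx - jy) ** 2 <= (x - y) * (jx - jy) + 1e-12
print("firm nonexpansiveness holds on all sampled pairs")
```

In particular, firm nonexpansiveness implies nonexpansiveness, which is what makes resolvent-based iterations well behaved.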
For the case where m = 2, the inclusion problem (4) reduces to the problem of finding z ∈ H such that

0 ∈ (A + B)(z),    (7)

where A and B are monotone mappings. For solving problem (7), several authors have studied different iterative schemes (see, e.g., [6–16] and the references therein). The most attractive methods for solving the inclusion problem (7) are the Peaceman–Rachford and Douglas–Rachford iterative methods. The nonlinear Peaceman–Rachford and Douglas–Rachford splitting iterative methods, introduced by Lions and Mercier [3], are given by

x_{n+1} = (2J_{λA} − I)(2J_{λB} − I)x_n    (8)

and

x_{n+1} = J_{λA}(2J_{λB} − I)x_n + (I − J_{λB})x_n,    (9)

respectively, where λ > 0 is a fixed scalar. The nonlinear Peaceman–Rachford algorithm (8) fails, in general, to converge (even in the weak topology in the infinite-dimensional setting). This is due to the fact that the generating mapping (2J_{λA} − I)(2J_{λB} − I) is merely nonexpansive. The nonlinear Douglas–Rachford algorithm (9) was initially proposed in [3] for finding a zero of the sum of two maximally monotone mappings and has been studied by many authors (see, e.g., [1,3,11,17,18] and the references therein). This method always converges in the weak topology to a solution of (7), since the generating operator J_{λA}(2J_{λB} − I) + (I − J_{λB}) for this algorithm is firmly nonexpansive (see, e.g., [11]). In 1979, Passty [11] studied the forward–backward splitting method, which is given by

x_{n+1} = J_{λ_n B}(x_n − λ_n A x_n),    (10)

where {λ_n} is a sequence of positive scalars and A and B are maximally monotone mappings. He proved that the sequence in (10) converges weakly to a solution of problem (7). Different authors have used algorithm (10) for the inclusion problem (7) when A is a single-valued α-inverse strongly monotone (or α-strongly monotone) mapping and B is a maximally monotone mapping defined on real Hilbert spaces (see, e.g., [18,19]).
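The Douglas–Rachford recursion (9) can be sketched in a few lines. The operators below (A z = z − b with b = 2, and B = ∂|·| on the real line) are our own hypothetical choices, for which the sum A + B has the unique zero z* = 1:

```python
# Sketch (our illustration, not the paper's example) of the
# Douglas–Rachford iteration
#   x_{n+1} = J_{λA}(2 J_{λB} − I) x_n + (I − J_{λB}) x_n
# for A z = z − b and B = ∂|·| on R.  The unique zero of A + B is
# z* = soft(b, 1); with b = 2 this gives z* = 1.
lam, b = 1.0, 2.0

def J_B(x):
    # resolvent of λ∂|·|: soft thresholding with threshold λ
    return max(abs(x) - lam, 0.0) * (1 if x >= 0 else -1)

def J_A(x):
    # resolvent of λA with A z = z − b: solve z + λ(z − b) = x
    return (x + lam * b) / (1.0 + lam)

x = 0.0
for _ in range(100):
    y = J_B(x)
    x = J_A(2 * y - x) + (x - y)

z = J_B(x)  # the shadow sequence J_{λB} x_n converges to z*
print(round(z, 6))  # → 1.0
```

Note that it is the shadow sequence J_{λB}x_n, not x_n itself, that approximates the zero of A + B.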
We remark that the aforementioned results provide only weak convergence. However, several authors have studied different iterative methods (see, e.g., [21–24] and the references therein) and proved strong convergence results for approximating zeros of the sum of monotone mappings A and B, where A : H → H is an α-inverse strongly monotone mapping and B : H → 2^H is a maximally monotone mapping, under certain conditions (see, e.g., [19,25–27]).
In 2012, Takahashi et al. [19] studied the following Halpern-type iteration in a Hilbert space setting:

x_{n+1} = α_n u + (1 − α_n)J_{λ_n B}(x_n − λ_n A x_n),    (11)

where u ∈ H is a fixed vector, A is an α-inverse strongly monotone single-valued mapping on H, and B is a maximally monotone mapping on H. They proved that the sequence {x_n} generated by (11) converges strongly to a point of (A + B)^{−1}(0). The authors of [15] constructed an algorithm that converges strongly to a solution of the sum of two maximally monotone mappings using a different technique. Svaiter [28] introduced the extended solution set associated with problem (4), a subset of H × H^m, and proved weak convergence results provided that H has finite dimension or … . We remark that the extended solution set is associated with the common fixed points of a countable family of nonexpansive mappings, and so methods of approximating fixed points can be used to approximate solutions of problem (4).
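A toy one-dimensional instance of a Halpern-type forward–backward step in the spirit of (11) can be run in a few lines; the operators and parameters below are our own assumptions, not those of [19]:

```python
# Toy instance (ours, not from [19]) of the Halpern-type iteration
#   x_{n+1} = α_n u + (1 − α_n) J_{λB}(x_n − λ A x_n),
# with A x = x − 2 (1-inverse strongly monotone), B = ∂|·|, λ = 0.5
# and anchor u = 0.  The unique zero of A + B is x* = 1.
def J_B(x, lam):
    # resolvent of λ∂|·|: soft thresholding
    return max(abs(x) - lam, 0.0) * (1 if x >= 0 else -1)

lam, u, x = 0.5, 0.0, 0.0
for n in range(20000):
    alpha = 1.0 / (n + 2)            # α_n → 0 and Σ α_n = ∞
    forward = x - lam * (x - 2.0)    # forward step x_n − λ A x_n
    x = alpha * u + (1 - alpha) * J_B(forward, lam)

print(round(x, 3))  # → 1.0
```

The anchor term α_n u is what upgrades weak convergence of the plain forward–backward method to strong (norm) convergence.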
Motivated and inspired by the above results, our purpose in this article is to construct a viscosity-type algorithm for finding zeros of the sum of a finite family of maximally monotone mappings via the extended solution set and to discuss its strong convergence. The viscosity method introduced by Moudafi [30] involves a contraction mapping f in the procedure and can be regarded as a regularization process for the solution of problem (4), which is designed to induce convergence in norm of the iterates. Another advantage of this method is that it allows one to select a particular solution point of (4) satisfying some variational inequality. The assumption that one of the mappings is α-inverse strongly monotone is dispensed with, giving an affirmative answer to the natural question of whether strong convergence can be obtained without it. Our method of proof is of independent interest. Our results improve and generalize several results in the literature.
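The viscosity mechanism can be illustrated on a minimal example. The following sketch (our own, not the paper's algorithm) runs Moudafi's iteration x_{n+1} = α_n f(x_n) + (1 − α_n)T x_n for a nonexpansive T and a contraction f, both chosen by us:

```python
# Minimal sketch (ours) of Moudafi's viscosity iteration
#   x_{n+1} = α_n f(x_n) + (1 − α_n) T x_n
# with T = projection onto [1, 3] (nonexpansive, Fix(T) = [1, 3]) and
# contraction f(x) = x/2.  The limit is the point x* ∈ Fix(T) solving
# ⟨(I − f)x*, x − x*⟩ ≥ 0 for all x ∈ Fix(T), namely x* = 1.
def T(x):
    # metric projection onto the interval [1, 3]
    return min(max(x, 1.0), 3.0)

def f(x):
    # contraction with constant 1/2
    return 0.5 * x

x = 3.0
for n in range(20000):
    alpha = 1.0 / (n + 2)
    x = alpha * f(x) + (1 - alpha) * T(x)

print(round(x, 3))  # → 1.0
```

This shows the selection property mentioned above: among all fixed points of T, the scheme singles out the one satisfying the variational inequality induced by f.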

Preliminaries
In this section, we recall some definitions and known results that will be used in the sequel.
Let C be a nonempty, closed and convex subset of a real Hilbert space H. A mapping T : C → C is said to be nonexpansive if ‖Tx − Ty‖ ≤ ‖x − y‖ for all x, y ∈ C, and it is said to be a contraction with constant α ∈ [0, 1) if ‖Tx − Ty‖ ≤ α‖x − y‖ for all x, y ∈ C.

Lemma 2.1. [29] Let C be a closed and convex subset of a real Hilbert space H, and … .

The following lemmas shall be used in the later sections.
is equivalent to solving (4). In addition, φ is affine on V. The function φ in Lemma 2.4 is called a decomposable separator.
Lemma 2.5. Let {a_n} be a sequence of nonnegative real numbers satisfying the relation

a_{n+1} ≤ (1 − β_n)a_n + β_n δ_n,  n ≥ n_0,

where {β_n} ⊂ (0, 1) with Σ_{n=1}^∞ β_n = ∞ and lim sup_{n→∞} δ_n ≤ 0. Then lim_{n→∞} a_n = 0.

Lemma 2.6. Let x, y ∈ H, where H is a real Hilbert space. Then the following inequality holds:

‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩.

Lemma 2.7. [33] Let C be a closed and convex subset of a real Hilbert space H and x ∈ H be given. The metric projection P_C of H onto C is characterized by the following: (i) ⟨x − P_C x, z − P_C x⟩ ≤ 0 for all z ∈ C; (ii) P_C is firmly nonexpansive and hence nonexpansive.
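As a hypothetical illustration of the metric projection lemma, the projection onto a half-space has a simple closed form, and the variational characterization (i) can be verified numerically (the half-space and test point below are our own choices):

```python
# Illustration of Lemma 2.7: the projection onto the half-space
#   C = {z ∈ R² : ⟨a, z⟩ ≤ b}
# is P_C(x) = x − max(0, (⟨a, x⟩ − b)/‖a‖²) a, and it satisfies
#   ⟨x − P_C x, z − P_C x⟩ ≤ 0 for all z ∈ C.
import random

a, b = (1.0, 2.0), 3.0

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def proj(x):
    # closed-form projection onto the half-space C
    t = max(0.0, (dot(a, x) - b) / dot(a, a))
    return (x[0] - t * a[0], x[1] - t * a[1])

x = (4.0, 5.0)                 # a point outside C
p = proj(x)
assert dot(a, p) <= b + 1e-9   # P_C x lies in C
random.seed(1)
for _ in range(1000):
    z = (random.uniform(-10, 10), random.uniform(-10, 10))
    if dot(a, z) <= b:         # sample z ∈ C
        assert dot((x[0] - p[0], x[1] - p[1]),
                   (z[0] - p[0], z[1] - p[1])) <= 1e-9
print("variational characterization verified")
```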

Main results
In this section, we introduce an algorithm for finding a point of the extended solution set, which will lead us to a solution of the sum of a finite family of maximally monotone mappings in a Hilbert space H, and discuss its strong convergence.
In what follows, let H be a real Hilbert space. We now propose the following algorithm, which basically uses Algorithm 3 of [28].
and compute … , where f is a contraction mapping with constant α. Set n ≔ n + 1 and go to Step 1 (see, e.g., [34]). By rearranging the equation in Step 1, one obtains the following equation: … , where for each k = 1, 2, …, m and n ≥ 0, … exists and is unique. Thus, the decomposable separator function φ in Lemma 2.4 is well defined.
Since φ_i is affine on V (see Lemma 2.4) and the half-space H_i is a closed and convex subset of V for all i ≥ 1, the projection of u_n onto H_i given by (17) exists and is firmly nonexpansive (see, e.g., [20,28]). Now, the fact that T_i is nonexpansive together with (18) yields … . Thus, it follows that … .

Proof. We proceed with the following steps.
Step 1. First, we show that … .

Step 2. We show that ‖T_i u_n − u_n‖ → 0 as n → ∞. Take u* = P(f(u*)). Note that … , which yields … .
…    (25)

Furthermore, from (18) and (25), we immediately obtain … .
Proof. By Lemma 3.4, there exists η > 0 such that … . This implies that φ_n(u_n) is always nonnegative, and from (17), we obtain … . The proof is complete. □

Remark 3.6. We observe that Algorithm 3.1 is equivalent to the following scheme: … .
be a contraction mapping with constant α. For arbitrary x_0, define an iterative algorithm by … . We note that if in Corollary 3.9 we take u = (0, 0, …, 0) ∈ V, then we obtain the following theorem for approximating the minimum-norm point of the extended solution set of the sum of a finite family of maximally monotone mappings in Hilbert spaces. In addition, we applied our main results to study the convex minimization problem. Finally, we provided a numerical example to support our results. Our results extend those of [28] in the sense that our theorems provide strong convergence in arbitrary Hilbert spaces. In particular, Theorem 3.5 extends Proposition 7 of Svaiter [28] from weak to strong convergence. Moreover, our theorems improve and unify most of the results that have been proved for this important class of nonlinear mappings.
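To give the flavor of such a numerical experiment (this sketch is ours and uses the classical proximal point method rather than Algorithm 3.1), consider problem (4) with m = 3 and A_i(x) = x − b_i on the real line:

```python
# Small numerical sketch (ours, not the paper's example): find the zero
# of F = A1 + A2 + A3 with A_i(x) = x − b_i, each maximally monotone
# on R.  F is itself maximally monotone, so the proximal point method
#   x_{n+1} = J_{λF}(x_n) = (x_n + λ(b1 + b2 + b3)) / (1 + 3λ)
# converges to the unique zero x* = (b1 + b2 + b3)/3.
b = (1.0, 2.0, 6.0)
lam = 1.0
x = 0.0
for _ in range(50):
    # resolvent step: solve z + λ(3z − sum(b)) = x for z
    x = (x + lam * sum(b)) / (1.0 + 3.0 * lam)

print(round(x, 6))  # → 3.0
```

Here the zero is simply the mean of the b_i, which makes the output easy to check against the theory.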