Strong convergence of an inertial extrapolation method for a split system of minimization problems


In this article, we propose an inertial extrapolation-type algorithm for solving a split system of minimization problems: finding a common minimizer of a finite family of proper, lower semicontinuous convex functions whose image under a linear transformation is also a common minimizer of another finite family of proper, lower semicontinuous convex functions. A strong convergence theorem is established in such a way that the step sizes of our algorithm are selected without the need for any prior information about the operator norm. The results obtained in this article improve and extend many recent ones in the literature. Finally, we give a numerical example to demonstrate the efficiency and implementation of our proposed algorithm.


Introduction
Throughout this article, unless otherwise stated, we assume that $H_1$, $H_2$ and $H$ are real Hilbert spaces, $A : H_1 \to H_2$ is a nonzero bounded linear operator, and $I$ denotes the identity operator on a Hilbert space. Assume $C_i$ ($i = 1, \ldots, N$) and $Q_j$ ($j = 1, \ldots, M$) are nonempty closed convex subsets of $H_1$ and $H_2$, respectively. The multiple-set split feasibility problem (MSSFP), introduced by Censor et al. [1], is formulated as finding a point
$$x^* \in \bigcap_{i=1}^{N} C_i \quad \text{such that} \quad Ax^* \in \bigcap_{j=1}^{M} Q_j. \tag{1}$$
In particular, if $N = M = 1$, then the MSSFP (1) reduces to the problem known as the split feasibility problem (SFP), which was first introduced by Censor and Elfving [2] for modeling inverse problems in finite-dimensional Hilbert spaces. The SFP and MSSFP arise in many fields in the real world, and numerous methods have been proposed to solve the SFP, see for example [3-5] and references therein, and the MSSFP, see for example [6-8] and references therein. Moreover, there are some studies of fixed point problems in the framework of the MSSFP, see for example [9-14].
One of the most important problems in optimization theory and nonlinear analysis is that of approximating a solution of the unconstrained minimization problem. This can be stated as follows: find $\bar{x} \in H$ such that
$$f(\bar{x}) = \min_{x \in H} f(x), \tag{2}$$
where $f : H \to (-\infty, +\infty]$ is a proper, lower semicontinuous convex function. Our goal is to introduce a strongly convergent iterative algorithm with an inertial effect for solving the MSSFP (1), where $C_i$ and $Q_j$ are solution sets of minimization problems of the form (2) for proper, lower semicontinuous convex functions $f_i$ and $g_j$, respectively. We denote by $\arg\min f$ the set of all minimizers of $f$ on $H$, i.e., $\arg\min f := \{\bar{x} \in H : f(\bar{x}) = \min_{x \in H} f(x)\}$.
If $f$ is a smooth function (for instance, twice continuously differentiable), one of the numerical methods for finding approximate solutions of (2) is the Newton method, see [15,16]. An analogous method for solving (2) with better properties in the non-smooth case is based on the notion of proximal mapping introduced by Moreau [17]: the proximal operator of the function $f$ with scaling parameter $\lambda > 0$ is the mapping $\mathrm{prox}_{\lambda f} : H \to H$ given by
$$\mathrm{prox}_{\lambda f}(x) := \arg\min_{u \in H} \left\{ f(u) + \frac{1}{2\lambda} \|u - x\|^2 \right\}.$$
The minimizers of $f$ (points solving problem (2)) are precisely the fixed points of the proximal operator of $f$. Thus, solving the optimization problem (2) can be interpreted as finding fixed points of the proximal operator of $f$, and proximal operators are firmly nonexpansive. This immediately suggests the most popular method, called proximal minimization or the proximal point algorithm, introduced by Martinet [18,19] and later by Rockafellar [20]. Let $f : H_1 \to (-\infty, +\infty]$ and $g : H_2 \to (-\infty, +\infty]$ be two proper, convex, lower semicontinuous functions, and let $g_\lambda$ denote the Moreau-Yosida approximate of $g$ [17]. In [21], Moudafi and Thakur introduced a weakly convergent algorithm solving the following minimization problem:
$$\min_{x \in H_1} \{ f(x) + g_\lambda(Ax) \}. \tag{3}$$
It should be noted that (3) is equivalent to the split minimization problem (SMP): finding a point $x^* \in H_1$ with the property
$$x^* \in \arg\min f \quad \text{and} \quad Ax^* \in \arg\min g. \tag{4}$$
The operator norm is a global invariant and is often difficult to estimate, see for example the theorem of Hendrickx and Olshevsky in [22]. However, for several split inverse problem types in the literature, the implementation of the proposed iterative method requires prior knowledge of the operator norm to determine the step sizes. To overcome this difficulty, López et al. [4] introduced a new way of selecting the step sizes for solving the SFP such that knowledge of the operator norm is not necessary. Moudafi and Thakur [21] used the idea of López et al.
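As a concrete illustration of the fixed-point principle behind the proximal point algorithm, the following minimal Python sketch iterates $x_{k+1} = \mathrm{prox}_{\lambda f}(x_k)$ for the model function $f(x) = |x|$, whose proximal operator has the well-known closed form (soft-thresholding). The function names and parameter values are illustrative, not from the article.

```python
import numpy as np

def prox_abs(x, lam):
    # Proximal operator of f(x) = |x| with parameter lam:
    # prox_{lam f}(x) = argmin_u { |u| + (1/(2*lam)) * (u - x)^2 },
    # which has the closed-form soft-thresholding solution.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def proximal_point(x0, lam=0.5, n_iter=50):
    # Proximal point algorithm: x_{k+1} = prox_{lam f}(x_k).
    # Fixed points of the prox are exactly the minimizers of f.
    x = x0
    for _ in range(n_iter):
        x = prox_abs(x, lam)
    return x

# Starting far from the unique minimizer x* = 0 of f(x) = |x|:
print(proximal_point(10.0))  # -> 0.0
```

Each iteration shrinks the iterate toward the minimizer, and once the fixed point is reached the iteration is stationary, mirroring the equivalence between minimizers and fixed points of the proximal operator.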
[4] to introduce a new way of selecting the step sizes and proposed the following split proximal algorithm. Set $h(x) := \frac{1}{2}\|(I - \mathrm{prox}_{\lambda g})Ax\|^2$ and $l(x) := \frac{1}{2}\|(I - \mathrm{prox}_{\lambda \mu f})x\|^2$, and let $\theta(x) := \sqrt{\|\nabla h(x)\|^2 + \|\nabla l(x)\|^2}$. From an initial point $x_1 \in H_1$, the algorithm generates
$$x_{n+1} = \mathrm{prox}_{\lambda \mu_n f}\big(x_n - \mu_n A^*(I - \mathrm{prox}_{\lambda g})Ax_n\big), \quad \mu_n := \rho_n \frac{h(x_n) + l(x_n)}{\theta^2(x_n)}, \quad 0 < \rho_n < 4. \tag{5}$$
If $\theta(x_n) = 0$, then $x_n$ is a solution of the SMP (4) and the iterative process stops; otherwise, we set $n := n + 1$ and go to (5). Based on Moudafi and Thakur [21], many iterative algorithms have been proposed for solving the SMP (4), see for example those by Abbas et al. in [23], Shehu et al. in [24], Shehu and Iyiola in [25-28] and Shehu and Ogbuisi in [29].
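The self-adaptive step-size idea of López et al. [4] can be sketched numerically. The toy example below applies a CQ-type iteration to a small SFP instance; the boxes $C = [0,1]^2$, $Q = [2,3]^2$ and the matrix $A = 2I$ are illustrative choices, not data from the article. Note that $\mu_n$ is computed from $h(x_n)$ and $\nabla h(x_n)$ alone, so $\|A\|$ is never needed.

```python
import numpy as np

def proj_box(x, lo, hi):
    # Metric projection onto the box [lo, hi]^n.
    return np.clip(x, lo, hi)

def cq_adaptive(x0, A, n_iter=200, rho=1.0):
    # CQ-type iteration for the SFP with the Lopez-style
    # self-adaptive step size that avoids computing ||A||:
    #   h(x)  = 0.5 * ||(I - P_Q) A x||^2
    #   mu_n  = rho * h(x_n) / ||grad h(x_n)||^2,  0 < rho < 4
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        Ax = A @ x
        r = Ax - proj_box(Ax, 2.0, 3.0)        # (I - P_Q) A x
        grad = A.T @ r                         # grad h(x)
        g2 = grad @ grad
        if g2 == 0.0:                          # A x already lies in Q
            return proj_box(x, 0.0, 1.0)
        mu = rho * 0.5 * (r @ r) / g2
        x = proj_box(x - mu * grad, 0.0, 1.0)  # P_C step
    return x

A = 2.0 * np.eye(2)
x = cq_adaptive([0.0, 0.0], A)
print(np.round(x, 4))   # approaches the unique solution (1, 1)
```

For this instance the unique point of $C$ whose image $2x$ lies in $Q$ is $(1,1)$, and the iterates converge to it without ever estimating the operator norm $\|A\| = 2$.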
An inertial algorithm is a two-step iterative method in which the next iterate is defined by making use of the previous two iterates. An inertial extrapolation-type algorithm, i.e., an algorithm combined with an inertial term, was first introduced by Polyak [30] as an acceleration process for solving a smooth convex minimization problem. It is well known that combining an algorithm with an inertial term speeds up, or accelerates, the convergence of the sequence generated by the algorithm. Consequently, a lot of research interest is now devoted to inertial extrapolation-type algorithms, see [31-34] and references therein. Very recently, Shehu and Iyiola [25] proposed an inertial extrapolation-type algorithm for solving the SMP (4) in the setting above, and established the following weak convergence result.
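To illustrate the acceleration effect of the inertial term, the following sketch compares plain gradient descent with Polyak's heavy-ball method on an ill-conditioned quadratic; the step size and momentum parameter are illustrative choices, not values from the article.

```python
import numpy as np

def gradient_descent(grad, x0, step, n_iter):
    # Plain one-step method: only the current iterate is used.
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = x - step * grad(x)
    return x

def heavy_ball(grad, x0, step, beta, n_iter):
    # Polyak's heavy-ball method: the inertial (momentum) term
    # beta * (x_n - x_{n-1}) makes use of the previous two iterates.
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    for _ in range(n_iter):
        x_next = x - step * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Ill-conditioned quadratic f(x) = 0.5 * x^T diag(1, 100) x,
# minimized at the origin.
grad = lambda x: np.array([1.0, 100.0]) * x
x0 = [1.0, 1.0]
plain = gradient_descent(grad, x0, step=0.01, n_iter=300)
inertial = heavy_ball(grad, x0, step=0.01, beta=0.9, n_iter=300)
print(np.linalg.norm(plain), np.linalg.norm(inertial))
```

After the same number of iterations, the inertial iterates are several orders of magnitude closer to the minimizer, which is the acceleration phenomenon the text refers to.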
Then the sequence $\{x_n\}$ generated by the iterative algorithm converges weakly to a point $\bar{x}$ solving the SMP (4).
Note that the proximal operator is a natural extension of the notion of a metric projection onto a closed convex set, i.e., $\mathrm{prox}_{\lambda \delta_Q} = P_Q$, where $\delta_Q$ is the indicator function of a closed convex subset $Q$ of $H$, and this perspective suggests various properties that we expect proximal operators to obey. However, there are properties that hold for projection operators but fail for proximal operators in general; for an example, consider the function $h$ on $H_2$ constructed in [35]. Motivated by the above theoretical observations, and inspired by the results in [1,21,25], in this article we establish a strong convergence theorem for an inertial extrapolation-type algorithm that incorporates a proximal operator, a viscosity method and an inertial term to solve the so-called split system of minimization problems (SSMP): find a point $x^* \in H_1$ with the property
$$x^* \in \bigcap_{i \in \Phi} \arg\min f_i \quad \text{and} \quad Ax^* \in \bigcap_{j \in \Psi} \arg\min g_j, \tag{7}$$
where $f_i : H_1 \to (-\infty, +\infty]$ and $g_j : H_2 \to (-\infty, +\infty]$ are proper, lower semicontinuous convex functions for $i \in \Phi$, $j \in \Psi$. Let $\Gamma$ be the solution set of the SSMP (7). If $\Phi$ and $\Psi$ are singletons, then problem (7) reduces to the SMP (4), which is the problem considered in [21,23-29]. The aims of this study are twofold: to improve the weak convergence result of the inertial extrapolation-type algorithm proposed by Shehu and Iyiola [25] to a strong convergence result for an approximation of a solution of the SMP (4), and to accelerate and improve the results in [9,10] in solving the SSMP (7).
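The identification $\mathrm{prox}_{\lambda \delta_Q} = P_Q$ can be checked numerically. In the sketch below (an illustration, assuming $Q = [0,1] \subset \mathbb{R}$), the proximal operator of the indicator function is evaluated by brute-force minimization and coincides with the metric projection, independently of $\lambda$.

```python
import numpy as np

def prox_indicator_interval(x, lo, hi, lam=1.0, grid=100001):
    # Brute-force evaluation of
    #   prox_{lam * delta_Q}(x) = argmin_u { delta_Q(u) + (1/(2*lam)) (u - x)^2 }.
    # The indicator delta_Q is +inf outside Q = [lo, hi], so the argmin
    # is restricted to Q and evaluated on a fine grid of Q.  Note that
    # the result does not depend on lam.
    u = np.linspace(lo, hi, grid)
    return u[np.argmin((u - x) ** 2)]

for x in (-0.7, 0.3, 2.5):
    p = prox_indicator_interval(x, 0.0, 1.0)
    print(p, np.clip(x, 0.0, 1.0))  # prox coincides with the projection P_Q
```

Points inside the interval are left fixed, while points outside are mapped to the nearest endpoint, exactly as the metric projection does.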
This article is organized in the following way. In Section 2, we collect some basic and useful definitions, lemmata, and theorems for further study. In Section 3, we propose an iterative method for the SSMP and analyze the strong convergence theorem of the proposed iterative method. In Section 4, we give a numerical example to discuss the performance of the proposed method. Finally, we give some conclusions.

Preliminary
In this section, in order to prove our results, we collect some facts and tools in a real Hilbert space $H$. The symbols "$\rightharpoonup$" and "$\to$" denote weak and strong convergence, respectively. Let $C$ be a nonempty closed convex subset of $H$. The metric projection onto $C$ is the mapping $P_C : H \to C$ that assigns to each $x \in H$ the unique point $P_C x \in C$ satisfying $\|x - P_C x\| = \min_{y \in C} \|x - y\|$. A mapping $T : H \to H$ is called Lipschitz continuous with constant $L > 0$ if $\|Tx - Ty\| \le L\|x - y\|$ for all $x, y \in H$. If $L \in (0, 1)$, then we call $T$ a contraction with constant $L$. If $L = 1$, then $T$ is called a nonexpansive mapping.
A set-valued mapping $T : H \to 2^H$ is called monotone if $\langle x - y, u - v \rangle \ge 0$ for all $u \in Tx$ and $v \in Ty$. A monotone mapping $T$ is maximal if its graph is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping $T$ is maximal if, and only if, for $(x, u) \in H \times H$, $\langle x - y, u - v \rangle \ge 0$ for every $(y, v)$ in the graph of $T$ implies $u \in Tx$. If $T : H \to 2^H$ is a maximal monotone set-valued mapping, then we define the resolvent operator $J_\lambda^T$ associated with $T$ and $\lambda > 0$ as follows: $J_\lambda^T x := (I + \lambda T)^{-1} x$, $x \in H$. It is well known that $J_\lambda^T$ is single-valued, nonexpansive (see, for example, [36,37]) and 1-inverse strongly monotone (firmly nonexpansive). Moreover, $0 \in T\bar{x}$ if and only if $\bar{x}$ is a fixed point of the resolvent operator $J_\lambda^T$ for all $\lambda > 0$; see [38].
Let $f : H \to (-\infty, +\infty]$ be a proper, lower semicontinuous convex function. We denote the subdifferential of $f$ at $x \in H$ by $\partial f(x)$, and it is given by
$$\partial f(x) := \{ z \in H : f(y) \ge f(x) + \langle z, y - x \rangle \ \text{for all } y \in H \}.$$
It is a classical result in operator theory that the subdifferential $\partial f$ is a maximal monotone operator and $\mathrm{prox}_{\lambda f} = (I + \lambda \partial f)^{-1}$; namely, for $x, u \in H$ we have the following equivalence between the subdifferential and the proximal operator:
$$u = \mathrm{prox}_{\lambda f}(x) \iff \frac{x - u}{\lambda} \in \partial f(u).$$
Hence, the convex minimization problem (2) can be formulated as finding a fixed point of the proximal operator.
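As a worked instance of the equivalence $u = \mathrm{prox}_{\lambda f}(x) \iff (x - u)/\lambda \in \partial f(u)$, the following derivation (a standard computation, not taken from the article) recovers the proximal operator of $f(x) = |x|$ directly from its subdifferential:

```latex
% Worked example: prox of f(u) = |u| on H = R via the subdifferential.
% Recall  u = prox_{\lambda f}(x)  <=>  (x - u)/\lambda \in \partial f(u),
% with \partial |u| = \{\operatorname{sign}(u)\} for u \neq 0
% and \partial |u| = [-1, 1] for u = 0.
%
% Case u > 0:  (x - u)/\lambda = 1   =>  u = x - \lambda  (requires x > \lambda);
% Case u < 0:  (x - u)/\lambda = -1  =>  u = x + \lambda  (requires x < -\lambda);
% Case u = 0:  x/\lambda \in [-1, 1] =>  |x| \le \lambda.
%
% Combining the three cases gives the soft-thresholding operator
\[
  \operatorname{prox}_{\lambda |\cdot|}(x)
  = \operatorname{sign}(x)\,\max\{|x| - \lambda,\, 0\},
\]
% whose unique fixed point u = 0 is the unique minimizer of f,
% as predicted by the fixed-point formulation of problem (2).
```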
Lemma 2.4. [40] Let $\{\Gamma_n\}$ be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence $\{\Gamma_{n_k}\}$ of $\{\Gamma_n\}$ such that $\Gamma_{n_k} < \Gamma_{n_k + 1}$ for all $k \ge 0$. Consider the sequence of integers $\{\varphi(n)\}_{n \ge n_0}$ defined by $\varphi(n) := \max\{k \le n : \Gamma_k < \Gamma_{k+1}\}$. Then $\{\varphi(n)\}_{n \ge n_0}$ is nondecreasing with $\varphi(n) \to \infty$ as $n \to \infty$, and for all $n \ge n_0$ the following two estimates hold:
$$\Gamma_{\varphi(n)} \le \Gamma_{\varphi(n)+1} \quad \text{and} \quad \Gamma_n \le \Gamma_{\varphi(n)+1}.$$
Let $D$ be a nonempty closed convex subset of $H$. Then we say that a bifunction $h : D \times D \to \mathbb{R}$ satisfies Condition CO on $D$ if the following assumptions are satisfied: (CO1) $h(x, x) = 0$ for all $x \in D$; (CO2) $h$ is monotone, i.e., $h(x, y) + h(y, x) \le 0$ for all $x, y \in D$; (CO3) $\limsup_{t \downarrow 0} h(tz + (1 - t)x, y) \le h(x, y)$ for all $x, y, z \in D$; (CO4) for each $x \in D$, the function $y \mapsto h(x, y)$ is convex and lower semicontinuous. The following lemma was given by Combettes and Hirstoaga in [42].
Lemma. [42] Let $D$ be a nonempty closed convex subset of $H$ and let $h : D \times D \to \mathbb{R}$ satisfy Condition CO on $D$. For $r > 0$ and $x \in H$, define the mapping $T_r^h : H \to D$ by
$$T_r^h(x) := \left\{ z \in D : h(z, y) + \frac{1}{r}\langle y - z, z - x \rangle \ge 0 \ \text{for all } y \in D \right\}.$$
Then $T_r^h$ satisfies the following conditions: (a) $T_r^h$ is single-valued and firmly nonexpansive; (b) the set of fixed points of $T_r^h$ coincides with the solution set of the equilibrium problem for $h$, and this set is closed and convex.

Main result
First we extend the setting introduced by Moudafi and Thakur [21]. Let $\lambda > 0$. Then, for $x \in H_1$, define
$$l_i(x) := \frac{1}{2}\|(I - \mathrm{prox}_{\lambda f_i})x\|^2, \qquad h_j(x) := \frac{1}{2}\|(I - \mathrm{prox}_{\lambda g_j})Ax\|^2,$$
for $i \in \Phi$ and $j \in \Psi$. Consider parameter sequences satisfying the following conditions (Assumption 1). Using $\nabla l_i$, $l_i$, $l$, $\nabla l$, $h_j$, $\nabla h_j$, $\theta_j$ given in (I)-(IV) and the step sizes given in Assumption 1, we are now in a position to state our inertial extrapolation-type algorithm and prove its strong convergence to a solution of the SSMP (7), assuming that the solution set $\Gamma$ is nonempty. Let $\alpha_n$, $\beta_n$ and $\xi_n^j$ ($j \in \Psi$) be real sequences satisfying Assumption 1.
Step 1. Given the iterates $x_{n-1}$ and $x_n$ ($n \ge 1$), choose $\beta_n$ such that $0 \le \beta_n \le \bar{\beta}_n$, where
$$\bar{\beta}_n := \begin{cases} \min\left\{\beta, \dfrac{\epsilon_n}{\|x_n - x_{n-1}\|}\right\}, & \text{if } x_n \ne x_{n-1}, \\ \beta, & \text{otherwise}. \end{cases}$$
Step 7. Set $n := n + 1$ and go to Step 1.
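Since only Steps 1 and 7 of Algorithm 1 are displayed above, the following Python sketch is not the authors' algorithm; it only illustrates the generic three-stage shape described in the text (inertial step, self-adaptive proximal step, viscosity step) on a toy instance. The data $C = [0,1]^2$, $Q = [2,3]^2$, $A = 2I$, the contraction $\varphi(v) = v/2$, and the sequences $\alpha_n = 1/(n+1)$ and $\beta_n$ are all illustrative assumptions.

```python
import numpy as np

def P(x, lo, hi):
    # Metric projection onto the box [lo, hi]^2.
    return np.clip(x, lo, hi)

def inertial_viscosity_sketch(x0, x1, A, n_iter=2000):
    # Schematic shape of an inertial viscosity proximal iteration:
    #   y_n     = x_n + beta_n (x_n - x_{n-1})           (inertial step)
    #   z_n     = P_C(y_n - mu_n * grad h(y_n))          (proximal step,
    #             self-adaptive mu_n, no operator norm needed)
    #   x_{n+1} = alpha_n phi(y_n) + (1 - alpha_n) z_n   (viscosity step)
    phi = lambda v: 0.5 * v            # illustrative contraction
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for n in range(1, n_iter + 1):
        alpha = 1.0 / (n + 1)
        d = np.linalg.norm(x - x_prev)
        beta = min(0.5, 1.0 / (n * n * d)) if d > 0 else 0.5
        y = x + beta * (x - x_prev)
        Ay = A @ y
        r = Ay - P(Ay, 2.0, 3.0)       # (I - P_Q) A y
        grad = A.T @ r
        g2 = grad @ grad
        mu = 0.5 * (r @ r) / g2 if g2 > 0 else 0.0
        z = P(y - mu * grad, 0.0, 1.0) # P_C step
        x_prev, x = x, alpha * phi(y) + (1.0 - alpha) * z
    return x

A = 2.0 * np.eye(2)   # toy split problem with unique solution (1, 1)
print(np.round(inertial_viscosity_sketch([0.0, 0.0], [0.1, 0.1], A), 2))
```

The vanishing viscosity weights $\alpha_n$ are what enforce strong convergence in schemes of this type, while the inertial weights $\beta_n$ are damped so that $\beta_n \|x_n - x_{n-1}\|$ remains summable.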
Remark. From Assumption 1 and Step 1 of Algorithm 1, we have that $\beta_n \|x_n - x_{n-1}\| \to 0$ as $n \to \infty$.
Remark. The solution set $\Gamma$ of problem (7) is a closed convex set, because the set of minimizers of any proper, lower semicontinuous convex function is closed and convex and $A$ is a bounded linear operator. Therefore, since we also assume that $\Gamma$ is nonempty, the metric projection $P_\Gamma$ is well defined.
Proof. Let $\bar{x} \in \Gamma$. From the definition of $y_n$, we get
$$\|y_n - \bar{x}\| \le \|x_n - \bar{x}\| + \beta_n \|x_n - x_{n-1}\|.$$
Since $\mathrm{prox}_{\lambda f_i}$ and $\mathrm{prox}_{\lambda g_j}$ are firmly nonexpansive, $I - \mathrm{prox}_{\lambda f_i}$ and $I - \mathrm{prox}_{\lambda g_j}$ are also firmly nonexpansive, and since $\bar{x}$ solves (7) (minimizers of any function are exactly the fixed points of its proximal mapping), we have $(I - \mathrm{prox}_{\lambda f_i})\bar{x} = 0$ and $(I - \mathrm{prox}_{\lambda g_j})A\bar{x} = 0$ for all $i \in \Phi$ and $j \in \Psi$.
From (10) and (11), in view of (12), (13) and (14), and observing that by (C1) of Assumption 1 and Remark 3, from (18) and (19) and since $0 \le \beta_n < 1$, the sequence $\{x_n\}$ is bounded. Using the definition of $x_{n+1}$, Lemma 2.2(ii) and (22), we obtain (26), and (28) together with (26) gives the desired estimate. Moreover, using the definition of $y_n$ and Remark 3, it follows from (35) and (23) that the conclusion holds for all $n \ge n_0$.
Tables 1 and 2 illustrate the execution time in seconds (CPU(s)) and the number of iterations (Iter(n)) of our algorithm when applied to this particular example; the stopping criterion in Tables 1 and 2 is defined by $\mathrm{error}_n = \|x_{n+1} - x_n\|$ falling below a prescribed tolerance. Table 3 presents the numerical results of our algorithm (Algorithm 1) in comparison with ProxAL-A and ProxAL-B. Figures 4 and 5 show $\mathrm{error}_n$ versus the number of iterations, while Table 2 reports the CPU time (CPU(s)) and the number of iterations (Iter(n)). From this preliminary numerical experiment, we observe that the behavior of our algorithm depends crucially on the step sizes, the starting points and the problem dimensions. Moreover, our proposed algorithm is efficient and easy to implement, and it outperforms the algorithms proposed in [9] and [10].

Conclusions
In this article, we introduced a strong convergence theorem for an inertial extrapolation-type algorithm for solving the SSMP (7). The problem considered in this article generalizes many of the problems considered in the literature on approximating solutions of unconstrained minimization problems, see for example [23-28]. Our result can also be applied to find a solution of the split system of inclusion problems, the MSSFP, and the split system of equilibrium problems. Furthermore, our result improves the inertial extrapolation-type algorithm proposed in [25] and also improves and accelerates the algorithms in [9,10].