Convergence analysis of M-iteration for G-nonexpansive mappings with directed graphs applicable in image deblurring and signal recovering problems

Abstract: In this article, weak and strong convergence theorems of the M-iteration method for G-nonexpansive mappings in a uniformly convex Banach space with a directed graph are established. Moreover, a weak convergence theorem is proved without making use of Opial's condition. The rate of convergence between the M-iteration and some other iteration processes in the literature is also compared. Specifically, our main result shows that the M-iteration converges faster than the Noor and SP iterations. Numerical examples comparing the convergence behavior of the M-iteration with the three-step Noor iteration and the SP-iteration are given. As applications, some numerical experiments on real-world problems are provided, focused on image deblurring and signal recovering problems.


Introduction
The renowned fixed-point result for contractive mappings in complete metric spaces, known as Banach's contraction principle, was introduced in 1922 by Banach [1]; it is an important instrument for solving existence problems for nonlinear mappings. Since then, various generalizations of this principle have been studied in many directions.
Jachymski [2] proposed the novel notion of G-contraction in 2008, establishing that it is a genuine extension of the Banach contraction principle in a metric space endowed with a directed graph. Using this notion, he gave a simple proof of the Kelisky-Rivlin theorem [3]. By combining graph theory and fixed-point theory, Aleomrainejad et al. [4] provided some iterative scheme results for G-contractive and G-nonexpansive mappings on graphs. In [5], Alfuraidan and Khamsi introduced the concept of G-monotone nonexpansive multivalued mappings on a metric space endowed with a graph. The existence of fixed-points of monotone nonexpansive mappings on a Banach space endowed with a directed graph was investigated by Alfuraidan [6].
In 2015, Tiammee et al. [7] presented Browder's convergence theorem and the Halpern iteration process for G-nonexpansive mappings in a Hilbert space endowed with a directed graph. After that, Tripak [8] introduced the Ishikawa iterative scheme to approximate common fixed-points of G-nonexpansive mappings defined on nonempty closed convex subsets of a uniformly convex Banach space endowed with a graph. Recently, various fixed-point iteration processes for G-nonexpansive mappings have been studied extensively by many authors (see, e.g., [9,10] and the references cited therein).
In 2000, Noor [11] studied the convergence criteria of the following three-step iteration method for solving general variational inequalities and related problems. The three-step Noor iteration is defined by
$$z_n = (1-\xi_n) w_n + \xi_n T w_n, \qquad y_n = (1-\varrho_n) w_n + \varrho_n T z_n, \qquad w_{n+1} = (1-\eta_n) w_n + \eta_n T y_n, \tag{1}$$
where $\{\eta_n\}$, $\{\varrho_n\}$, and $\{\xi_n\}$ are sequences in $(0,1)$. Glowinski and Le Tallec [12] used three-step iterative approaches to find solutions of problems in elastoviscoplasticity, eigenvalue computation, and the theory of liquid crystals. In [12], it was shown that the three-step iterative process yields better numerical results than the corresponding two-step and one-step iterations. In 1998, Haubruge et al. [13] studied the convergence analysis of the three-step methods of Glowinski and Le Tallec [12] and applied these methods to obtain new splitting-type algorithms for solving variational inequalities, separable convex programming, and minimization of a sum of convex functions. They also proved that three-step iterations lead to highly parallelized algorithms under certain conditions. As a result, the three-step approach plays an important and substantial role in the solution of numerous problems in pure and applied sciences.
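As a concrete sketch, the three-step Noor scheme (1) can be coded as follows. This is a toy illustration for a generic mapping $T$ on the real line; the sample contraction and the constant parameter choices are ours, not from the paper:

```python
def noor_iteration(T, w0, eta, rho, xi, n_iters):
    """Three-step Noor iteration:
    z_n     = (1 - xi_n)  w_n + xi_n  T(w_n)
    y_n     = (1 - rho_n) w_n + rho_n T(z_n)
    w_{n+1} = (1 - eta_n) w_n + eta_n T(y_n)."""
    w = w0
    for n in range(n_iters):
        z = (1 - xi(n)) * w + xi(n) * T(w)
        y = (1 - rho(n)) * w + rho(n) * T(z)
        w = (1 - eta(n)) * w + eta(n) * T(y)
    return w

# Toy contraction with fixed point 2 and constant parameters 0.9.
T = lambda w: w / 2 + 1
w_noor = noor_iteration(T, 0.0, lambda n: 0.9, lambda n: 0.9, lambda n: 0.9, 100)
```

Since the toy $T$ is a contraction with factor $1/2$, each full Noor step contracts the error toward the fixed point $2$, so the iterates approach $2$ rapidly.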
In 2011, Phuengrattana and Suantai [14] introduced the following new three-step iteration process, known as the SP-iteration:
$$z_n = (1-\xi_n) w_n + \xi_n T w_n, \qquad y_n = (1-\varrho_n) z_n + \varrho_n T z_n, \qquad w_{n+1} = (1-\eta_n) y_n + \eta_n T y_n, \tag{2}$$
where $\{\eta_n\}$, $\{\varrho_n\}$, and $\{\xi_n\}$ are sequences in $(0,1)$. In addition, they showed that the SP-iteration (2) converges faster than the Noor iteration (1) for the class of continuous nondecreasing functions.
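The SP-iteration (2) differs from the Noor scheme only in which point each convex combination is built from; a minimal sketch (toy mapping and parameters ours, not from the paper):

```python
def sp_iteration(T, w0, eta, rho, xi, n_iters):
    """SP-iteration: each step combines the *previous intermediate point*
    (not w_n) with its image under T:
    z_n     = (1 - xi_n)  w_n + xi_n  T(w_n)
    y_n     = (1 - rho_n) z_n + rho_n T(z_n)
    w_{n+1} = (1 - eta_n) y_n + eta_n T(y_n)."""
    w = w0
    for n in range(n_iters):
        z = (1 - xi(n)) * w + xi(n) * T(w)
        y = (1 - rho(n)) * z + rho(n) * T(z)
        w = (1 - eta(n)) * y + eta(n) * T(y)
    return w

# Same toy contraction with fixed point 2 as an illustration.
T = lambda w: w / 2 + 1
w_sp = sp_iteration(T, 0.0, lambda n: 0.9, lambda n: 0.9, lambda n: 0.9, 100)
```

Because each substep starts from the freshest point, the per-iteration contraction factor of SP is smaller than Noor's for the same parameters, which is the intuition behind its faster convergence.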
Recently, in 2018, Ullah and Arshad [15] introduced an iteration, called the M-iteration, defined by
$$z_n = (1-\xi_n) w_n + \xi_n T w_n, \qquad y_n = T z_n, \qquad w_{n+1} = T y_n, \tag{3}$$
where $\{\xi_n\}$ is a sequence in $(0,1)$. Ullah and Arshad [15] showed that the iteration process (3) is faster than the Picard-S iteration [16] and the S-iteration [17] for Suzuki generalized nonexpansive mappings. In this direction, notable studies were enhanced and conducted in many works, as seen in [18][19][20].
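The M-iteration (3) uses only one convex-combination step followed by two direct applications of $T$; a minimal sketch (toy mapping and parameter ours, not from the paper):

```python
def m_iteration(T, w0, xi, n_iters):
    """M-iteration: one averaged step, then two direct applications of T:
    z_n     = (1 - xi_n) w_n + xi_n T(w_n)
    y_n     = T(z_n)
    w_{n+1} = T(y_n)."""
    w = w0
    for n in range(n_iters):
        z = (1 - xi(n)) * w + xi(n) * T(w)
        y = T(z)
        w = T(y)
    return w

# Same toy contraction with fixed point 2 as an illustration.
T = lambda w: w / 2 + 1
w_m = m_iteration(T, 0.0, lambda n: 0.9, 50)
```

For a contraction, the two unaveraged applications of $T$ shrink the error by the full contraction factor twice per iteration, which is why the M-iteration tends to outpace schemes built entirely from convex combinations.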
The main purpose of this article is to prove some weak and strong convergence theorems of the M-iteration method (3) for G-nonexpansive mappings in a uniformly convex Banach space endowed with a graph. We also present numerical experiments supporting our main results and comparing the rate of convergence of the M-iteration (3) with the three-step Noor iteration (1) and the SP-iteration (2). Furthermore, we apply the M-iteration method to solve image deblurring and signal recovering problems.

Preliminaries
In this section, we recall a few basic notions concerning the connectivity of graphs. All of them can be found, e.g., in [21].
Let $C$ be a nonempty subset of a real Banach space $X$. We identify a directed graph $G$ with the pair $(V(G), E(G))$, where the set $V(G)$ of its vertices coincides with $C$ and $E(G)$ is the set of its edges. Also, $G$ is such that no two edges are parallel. A mapping $T : C \to C$ is said to be a G-contraction if $T$ preserves the edges of $G$ (or $T$ is edge-preserving), i.e., $(w, s) \in E(G)$ implies $(Tw, Ts) \in E(G)$, and $T$ decreases the weights of the edges of $G$ in the following way: there exists $\psi \in (0, 1)$ such that $\|Tw - Ts\| \le \psi \|w - s\|$ for all $(w, s) \in E(G)$. A mapping $T : C \to C$ is said to be G-nonexpansive (see [5], Definition 2.3 (iii)) if $T$ preserves the edges of $G$ and $T$ non-increases the weights of the edges of $G$ in the following way: $\|Tw - Ts\| \le \|w - s\|$ for all $(w, s) \in E(G)$. If $w$ and $s$ are vertices of a graph $G$, then a path in $G$ from $w$ to $s$ of length $N$ is a finite sequence $(w_i)_{i=0}^{N}$ of vertices with $w_0 = w$, $w_N = s$, and $(w_{i-1}, w_i) \in E(G)$ for $i = 1, \ldots, N$. In this article, we use $\to$ and $\rightharpoonup$ to denote strong convergence and weak convergence, respectively. A mapping $T : C \to C$ is said to be G-demiclosed at 0 if, for any sequence $\{w_n\}$ in $C$ such that $w_n \rightharpoonup w$ and $\|w_n - Tw_n\| \to 0$, we have $Tw = w$. A mapping $T : C \to C$ with $F(T) \ne \emptyset$ is said to satisfy Condition (A) [23] if there is a nondecreasing function $f : [0, \infty) \to [0, \infty)$ with $f(0) = 0$ and $f(r) > 0$ for $r > 0$ such that $\|w - Tw\| \ge f(d(w, F(T)))$ for all $w \in C$. Let $C$ be a subset of a metric space $(X, d)$. A mapping $T : C \to C$ is semi-compact [24] if every sequence $\{w_n\}$ in $C$ with $d(w_n, Tw_n) \to 0$ has a convergent subsequence. Let $C$ be a nonempty subset of a normed space $X$ and let $G$ be a directed graph such that $V(G) = C$. Then, $G$ is said to have Property G (see [25]) if, for each sequence $\{w_n\}$ in $C$ converging weakly to $w \in C$ with $(w_n, w_{n+1}) \in E(G)$, there is a subsequence $\{w_{n_k}\}$ such that $(w_{n_k}, w) \in E(G)$ for all $k$.
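On a finite graph, the two defining conditions of a G-nonexpansive map (edge preservation and non-increase of edge weights) can be checked directly; a minimal sketch with hypothetical helper names and toy graphs, not taken from the paper:

```python
def is_g_nonexpansive_on_edges(T, edges):
    """Check, over a finite directed edge list, that T is edge-preserving
    ((w, s) in E implies (Tw, Ts) in E) and non-increases edge weights
    (|Tw - Ts| <= |w - s|), i.e., that T is G-nonexpansive on this graph."""
    edge_set = set(edges)
    for (w, s) in edges:
        if (T(w), T(s)) not in edge_set:
            return False          # not edge-preserving
        if abs(T(w) - T(s)) > abs(w - s) + 1e-12:
            return False          # an edge weight increased
    return True

# A constant map is edge-preserving and weight non-increasing on this graph.
edges = [(0, 0), (0, 1), (1, 0), (1, 1)]
ok = is_g_nonexpansive_on_edges(lambda w: 0, edges)
# The doubling map sends the edge (1, 2) to (2, 4), which is not an edge.
bad = is_g_nonexpansive_on_edges(lambda w: 2 * w, [(0, 1), (1, 2)])
```

Note that G-nonexpansiveness only constrains $T$ along edges of $G$, which is why it is strictly weaker than ordinary nonexpansiveness.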

Main results
Throughout this section, let $C$ be a nonempty closed convex subset of a Banach space $X$ endowed with a directed graph $G$ such that $V(G) = C$ and $E(G)$ is convex. We also suppose that the graph $G$ is transitive. The mapping $T$ is G-nonexpansive from $C$ to $C$ with $F(T) \ne \emptyset$. For an arbitrary $w_0 \in C$, define the sequence $\{w_n\}$ by (3).
We start with proving the following useful results.
Then, since $T$ is edge-preserving, the iterates remain joined by edges of $G$, and hence the corresponding estimates hold; therefore, the result follows. Suppose that $c > 0$. Taking the lim sup on both sides of Inequality (5), letting $n \to \infty$ in Inequality (6), and then taking the lim sup on both sides of Inequality (4), we obtain (9). In addition, by the G-nonexpansiveness of $T$, we have $\|r_n - q\| \le \|\tilde{r}_n - q\|$; taking the lim sup on both sides of this inequality and using (9), we obtain (11). From (9) and (11), we obtain (12), and then (4) and (12) give the desired conclusion. This completes the proof. □ We now prove the weak convergence of the sequence generated by the M-iteration method (3) for a G-nonexpansive mapping in a uniformly convex Banach space satisfying Opial's condition. Theorem 3.3. Let $X$ be a uniformly convex Banach space that satisfies Opial's condition and has Property G.
then $\{w_n\}$ converges weakly to a fixed-point of $T$.
Since $X$ is uniformly convex and $\{w_n\}$ is bounded, we may assume without loss of generality that $w_n \rightharpoonup u$ as $n \to \infty$. By Lemma 2.2, we have $u \in F(T)$.
Suppose that subsequences $\{w_{n_k}\}$ and $\{w_{n_j}\}$ of $\{w_n\}$ converge weakly to $u$ and $v$, respectively. By Lemma 3.2 (ii), the limits $\lim_{n \to \infty} \|w_n - u\|$ and $\lim_{n \to \infty} \|w_n - v\|$ exist. It is worth noting that Opial's condition has remained crucial in proving weak convergence theorems. However, every $\ell_p$ space ($1 \le p < \infty$) satisfies Opial's condition, while the $L_p$ spaces do not have this property unless $p = 2$. Next, we deal with the weak convergence of the sequence $\{w_n\}$ generated by (3) for a G-nonexpansive mapping without assuming Opial's condition in a uniformly convex Banach space with a directed graph.
In addition, using Lemma 3.2 (ii), we have (15). Using Lemma 3.2 (ii) and (15), we have (16). Using (3) and (16), we have (17) and, in addition, (18). Using (16) and (17), we have (19). It follows from (14) that (20). Now, from (3) and (20), we obtain (21), and using (16) and (21), we have (22). Now, from (3) and (22), we obtain (23). Also, from (18) and (23), we have (24). It follows from (19) and (24) that the sequence $\{w_n\}$ satisfies the hypotheses of Lemma 2.6, which in turn implies that $\{w_n\}$ converges weakly to $q$, so that $p = q$. This completes the proof. □ The strong convergence of the sequence generated by the M-iteration method (3) for a G-nonexpansive mapping in a uniformly convex Banach space with a directed graph is discussed in the rest of this section. Arguing as above, $\{w_n\}$ is a Cauchy sequence and hence convergent, since $C$ is complete. This completes the proof. □ Then, $\lim_{n \to \infty} \|w_n - q\|$ exists by Theorem 3.5, and it follows, as in the proof of Theorem 3.5, that $\{w_n\}$ converges strongly to a fixed-point of $T$. This completes the proof. □

Rate of convergence and numerical examples
In this section, we show that the M-iteration process converges faster than the iterative schemes due to Phuengrattana and Suantai and due to Noor for the class of G-contraction mappings. Furthermore, we provide a concrete example, including numerical results, and compare the proposed algorithm (3) with the Noor (1) and SP (2) algorithms to demonstrate that our algorithm is more effective. All codes were written in Matlab 2019b.
The following definitions about the rate of convergence are due to Berinde [30].
In 2011, Phuengrattana and Suantai [14] showed that the Ishikawa iteration converges faster than the Mann iteration for a class of continuous functions on a closed interval of the real line. To study the order of convergence of a real sequence $\{\zeta_n\}$ converging to $\zeta$, we use the well-known terminology of numerical analysis (see, e.g., [31]).
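As a small illustration of this terminology, the asymptotic error ratios $|\zeta_{n+1} - \zeta|/|\zeta_n - \zeta|$ that diagnose linear convergence can be estimated numerically. The helper name and the sample sequence below are ours, not from the paper:

```python
def asymptotic_error_constants(seq, limit):
    """Ratios |zeta_{n+1} - zeta| / |zeta_n - zeta|; a limit of these ratios
    in (0, 1) indicates linear convergence with that asymptotic constant."""
    errs = [abs(z - limit) for z in seq]
    return [e1 / e0 for e0, e1 in zip(errs, errs[1:]) if e0 != 0]

# Example: zeta_n = 2 + 0.5**n converges linearly to 2 with constant 0.5.
ratios = asymptotic_error_constants([2 + 0.5**n for n in range(10)], 2.0)
```

A smaller asymptotic constant means faster (linear) convergence, which is exactly the quantity compared across the Noor, SP, and M-iterations below.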
Let $T$ be a G-nonexpansive mapping from $C$ to $C$ with $F(T)$ nonempty, where $C$ is a nonempty closed convex subset of a Banach space $X$ endowed with a directed graph $G$. Assume that $V(G) = C$, $E(G)$ is convex, and the graph $G$ is transitive.
The following propositions will be useful in this context. Proof. We proceed by induction. Since $T$ is edge-preserving and $E(G)$ is convex, the edges joining the iterates belong to $E(G)$; again, by the convexity of $E(G)$, the edges produced at the next step belong to $E(G)$ as well. Using a similar argument, we can show that all remaining edges are preserved. This completes the proof. □

Proof. First, by the Noor iteration (1),
this implies an estimate which, by repetition of the aforementioned process, gives the required inequality. Finally, by the SP-iteration (2) and the same argument as earlier, we can show that the sequence generated by the M-iteration (3) converges faster than the SP-iteration (2). This completes the proof. □ Now, we discuss a numerical experiment that supports our main results. Let $(w, s) \in E(G)$ if and only if $0.50 \le w \ne s \le 1.70$ or $w = s$. In this example, we present the numerical results for three possible mappings $T_1, T_2, T_3 : C \to C$, defined for any $w \in C$. It is easy to show that $T_1$, $T_2$, and $T_3$ are G-nonexpansive, but $T_1$, $T_2$, and $T_3$ are not nonexpansive. Figures 1 and 2 show the numerical solutions and the relative error behavior of the three comparative methods with the operators $T_1$, $T_2$, and $T_3$. It can be seen that all sequences generated by these three methods converge to $w = 1$. The errors of the three comparative methods also decrease to zero as the number of iterations increases. Figure 3 shows the tendency of the asymptotic error constant $\sigma$ of a sequence $\{\zeta_n\}$, computed from the ratio $|\zeta_{n+1} - \zeta|/|\zeta_n - \zeta|$, for the three-step Noor, SP, and M-iterations. Figure 3 shows that all methods are linearly convergent; this conclusion is made more precise by using Definition 4.3.
The asymptotic error constants of the three comparative methods with the operators $T_1$, $T_2$, and $T_3$ in Figure 3 show that the M-iteration has the smallest asymptotic error constant in all cases, and a smaller asymptotic error constant means faster convergence of the sequence under consideration. Figure 4 shows that the M-iteration consumes the least amount of time while producing results consistent with those obtained earlier. Evaluating our M-iteration, we observe that by changing only one parameter, we may improve the method's convergence rate. The relative errors and asymptotic error constants of the M-iteration as affected by the control parameter $\eta_n$ and the operators $T_1$, $T_2$, and $T_3$ are shown in Figures 5 and 6. We note from these two figures that bringing the parameter $\eta_n$ closer to 1 enhances the efficiency of our proposed technique for each operator under consideration. The minimization problem of the sum of two functions is to find a solution of
$$\min_{w \in \mathbb{R}^n} f(w) + h(w), \tag{26}$$
where $h : \mathbb{R}^n \to \mathbb{R} \cup \{\infty\}$ is a proper convex and lower semicontinuous function and $f : \mathbb{R}^n \to \mathbb{R}$ is a convex differentiable function whose gradient $\nabla f$ is $L$-Lipschitz continuous for some $L > 0$. The solution of (26) can be characterized by using Fermat's rule, Theorem 16.3 of Bauschke and Combettes [32], as follows:
$$0 \in \partial h(w^*) + \nabla f(w^*),$$
where $\partial h$ is the subdifferential of $h$ and $\nabla f$ is the gradient of $f$. The subdifferential of $h$ at $w^*$, denoted by $\partial h(w^*)$, is defined by
$$\partial h(w^*) = \{u \in \mathbb{R}^n : h(w) \ge h(w^*) + \langle u, w - w^* \rangle \text{ for all } w \in \mathbb{R}^n\}.$$
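The fixed-point characterization obtained from Fermat's rule can be sketched numerically for the common case $h = \lambda \|\cdot\|_1$, whose proximal operator is soft-thresholding (a standard fact). The helper names and toy data below are ours, not from the paper:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward_operator(A, b, lam, c):
    """T(w) = prox_{c*lam*||.||_1}(w - c * grad f(w)) with f(w) = 0.5||Aw - b||^2.
    By Fermat's rule, a fixed point of T minimizes f(w) + lam*||w||_1."""
    def T(w):
        grad = A.T @ (A @ w - b)
        return soft_threshold(w - c * grad, c * lam)
    return T

# Tiny illustration: A = I, so the minimizer is soft_threshold(b, lam).
A = np.eye(2)
b = np.array([2.0, 0.1])
T = forward_backward_operator(A, b, lam=1.0, c=0.5)
w = np.zeros(2)
for _ in range(200):   # plain Picard iteration of T, for illustration only
    w = T(w)
```

With $A = I$ the iterates converge to $\operatorname{soft\_threshold}(b, \lambda) = (1, 0)$, matching the subdifferential optimality condition componentwise.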
It is also well known that the solution of (26) is characterized by the fixed-point problem
$$w^* = \operatorname{prox}_{c h}(w^* - c \nabla f(w^*))$$
(see [33] for more details). It is also known that this forward-backward operator is nonexpansive when $c \in (0, 2/L)$. The key to obtaining the image restoration model is to rearrange the elements of the images $B$ and $W$ into column vectors by stacking the columns of these images into two long vectors $b$ and $w$, both of length $n = \tilde{m}\tilde{n}$. The image restoration problem can then be modeled in one-dimensional vector form by the following linear equation system:
$$M w = b, \tag{27}$$
where $w \in \mathbb{R}^n$ is the original image, $b \in \mathbb{R}^n$ is the observed image, $M \in \mathbb{R}^{n \times n}$ is the blurring operator, and $n = \tilde{m}\tilde{n}$. In order to approximate the original image from the observed image $b$, we solve the following least-squares problem:
$$\min_{w} \tfrac{1}{2}\|M w - b\|_2^2, \tag{28}$$
where $\|w\|_2 = (\sum_{i=1}^n |w_i|^2)^{1/2}$. By setting $q(w)$ as in equation (28), we apply our main results to solve the image deblurring problem (27) as follows. Let $M \in \mathbb{R}^{n \times n}$ be a degraded matrix and $b \in \mathbb{R}^n$. By applying the M-iteration (3), we obtain the proposed method (29) for finding the solution of the image deblurring problem, for all $n \in \mathbb{N}$ and for some $\delta$ in $(0, 1)$. The proposed algorithm (29) is used to solve the image deblurring problem (27) with the default parameter (25); then, $\{w_n\}$ converges to its solution. The goal of the image deblurring problem is to find the original image from the observed image without knowing the blurring matrix; however, the blurring matrix $M$ must be known in order to apply algorithm (29). The original RGB color image is shown in Figure 7. The performance of the comparing algorithms at $w_n$ in the image deblurring process is measured quantitatively by means of the peak signal-to-noise ratio (PSNR) (Figure 8).
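A minimal sketch of this idea, not the paper's exact algorithm (29): apply the M-iteration to the gradient-step operator of the least-squares objective (28), assuming the blurring matrix $M$ is known. All names and the toy data are illustrative:

```python
import numpy as np

def deblur_m_iteration(M, b, n_iters, xi=0.9):
    """M-iteration applied to T(w) = w - c * M^T (M w - b), the gradient-step
    operator of 0.5 * ||M w - b||_2^2.  Choosing c in (0, 2/L), where
    L = ||M^T M||_2, makes T nonexpansive."""
    L = np.linalg.norm(M.T @ M, 2)      # spectral norm of M^T M
    c = 1.0 / L
    T = lambda w: w - c * (M.T @ (M @ w - b))
    w = np.zeros(M.shape[1])
    for _ in range(n_iters):
        z = (1 - xi) * w + xi * T(w)    # z_n
        y = T(z)                        # y_n = T z_n
        w = T(y)                        # w_{n+1} = T y_n
    return w

# Tiny 2x2 "blur": recover w_true from the exact observation b = M @ w_true.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
w_true = np.array([1.0, -1.0])
w_rec = deblur_m_iteration(M, M @ w_true, n_iters=300)
```

On real images, $M$ is the (huge, structured) blurring matrix and the same scheme is applied to each stacked color channel; the toy system above only illustrates the update order.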
Next, we present the restoration of images that have been corrupted by the following blur types. Type I: Gaussian blur of filter size $9 \times 9$ with standard deviation $\sigma = 4$ (the original image is degraded by the blurring matrix $M_G$). Type II: out-of-focus (disk) blur with radius $r = 6$ (the original image is degraded by the blurring matrix $M_O$). Type III: motion blur with a motion length of 21 pixels (len = 21) and motion orientation $11^\circ$ ($\theta = 11$) (the original image is degraded by the blurring matrix $M_M$).
The red-green-blue components represent the image $W$, and the three different kinds of blurred image $B$ are shown in Figure 8. After that, we apply the proposed algorithms to obtain the solution of the deblurring problem with these three blurring matrices. Figures 9-11 show the RGB images reconstructed by the proposed algorithms for the three blurring matrices $M_G$, $M_O$, and $M_M$ at the 50th, 1,000th, and 20,000th iterations. It can be seen from these figures that the images restored by the proposed algorithms show quality improvements for all three types of degraded images.
Moreover, the behavior of the Cauchy error, the relative error, and the PSNR of the degraded RGB image over 100,000 iterations of the proposed algorithms is demonstrated. It is remarkable that the Cauchy and relative error plots of the proposed method decrease as the number of iterations increases; thus, these plots show the validity and confirm the convergence of the proposed methods. The PSNR plots shown in Figure 12 also increase as the number of iterations increases.

Application to signal recovering problems
In signal processing, compressed sensing can be modeled as the following underdetermined linear equation system: $b = Aw + \nu$, where $w \in \mathbb{R}^n$ is the original signal, $\nu$ is noise, and $b \in \mathbb{R}^m$ ($m < n$) is the observed signal degraded by the filter matrix $A \in \mathbb{R}^{m \times n}$. The quality of a recovered signal is measured by the signal-to-noise ratio $\mathrm{SNR}(w_n) = 20 \log_{10}(\|w\|_2 / \|w - w_n\|_2)$, where $w_n$ is the signal recovered at the $n$th iteration of the proposed method. The Cauchy error, signal relative error, and SNR quality of the proposed methods for recovering the degraded signal are shown in Figure 17.
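The SNR used to quantify recovery quality can be computed as below (a standard decibel definition; the helper name and toy vectors are ours, not from the paper):

```python
import numpy as np

def snr_db(original, recovered):
    """SNR = 20 * log10(||w||_2 / ||w - w_n||_2) in decibels; larger values
    indicate that the recovered signal is closer to the original one."""
    return 20.0 * np.log10(np.linalg.norm(original) /
                           np.linalg.norm(original - recovered))

# A recovery off by 10% in every entry gives an SNR of 20 dB.
snr = snr_db(np.ones(4), 0.9 * np.ones(4))
```

Tracking this value along the iterations produces exactly the kind of monotone SNR curves reported in the figures.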
It is remarkable that the Cauchy error plot of the proposed method decreases as the number of iterations increases, and the signal relative error plot decreases until it converges to a constant value. Thus, the Cauchy and relative error plots show the validity and confirm the convergence of the proposed methods. For the SNR quality plot, it can be seen that the SNR value increases until it converges to a constant value. From these results, it can be concluded that the solution of the signal recovering problem obtained by the proposed algorithm improves the quality of the observed signal. Figures 18-20 show the signals restored by the proposed algorithms with the groups of operators and noise $A_i$ and $\nu_i$, $i = 1, 2, 3$. The improvement of the SNR quality of the recovered signals at the 5,000th, 10,000th, and 20,000th iterations is also shown in Figures 18-20. It can be seen from these figures that the signals recovered by the proposed algorithms show quality improvements for all three types of degraded signals.

Conclusion
In this article, we have proved weak and strong convergence theorems of the M-iteration method for G-nonexpansive mappings in a uniformly convex Banach space with a directed graph. We have also proved a weak convergence theorem without using Opial's condition (see Theorem 3.4). The conditions for convergence of the method are established by systematic proofs. The M-iteration algorithm was found to be faster than the Noor and SP iterations for the class of G-contraction mappings (see Theorem 4.6). A numerical example illustrating the performance of the suggested algorithm was provided. All numerical experiments for a fixed-point solution using the M-iteration, the three-step Noor iteration, and the SP-iteration methods with the three operators are shown in Figures 1-4. The M-iteration technique was shown to be more efficient than the three-step Noor iteration and the SP-iteration approaches. As applications, we applied the M-iteration algorithm to solve image deblurring problems (Figures 7-12). We also applied the M-iteration algorithm to signal recovery in situations where the type of noise is unknown (Figures 13-17). We found that the M-iteration algorithm is flexible and performs well for common types of blur and noise effects in image deblurring and signal recovery problems.
Acknowledgement: The authors sincerely thank the anonymous reviewers for their valuable comments and suggestions that improved the original version of this article.
Funding information: This project is funded by the National Research Council of Thailand (NRCT) and the University of Phayao, Grant No. N42A660382. The authors also acknowledge the partial support provided by the University of Phayao and Thailand Science Research and Innovation under Project FF66-UoE015.
Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

Conflict of interest:
The authors state no conflict of interest.
Ethical approval: The conducted research is not related to either human or animal use.
Data availability statement: Data sharing is not applicable to this article as no datasets were generated or analyzed during this study.

Chonjaroen Chairatsiripong et al.