
Open Mathematics

formerly Central European Journal of Mathematics

Volume 14, Issue 1 (Jan 2016)


New modification of Maheshwari’s method with optimal eighth order convergence for solving nonlinear equations

Somayeh Sharifi / Massimiliano Ferrara
  • Department of Law and Economics, University Mediterranea of Reggio Calabria, Italy and CRIOS – Center for Research in Innovation, Organization and Strategy, Department of Management and Technology, Bocconi University, Milano, Italy
/ Mehdi Salimi
  • Corresponding author
  • MEDAlics, Research Centre at the University Dante Alighieri, Reggio Calabria, Italy
  • Center for Dynamics, Department of Mathematics, Technische Universität Dresden, Germany
/ Stefan Siegmund
Published Online: 2016-07-08 | DOI: https://doi.org/10.1515/math-2016-0041

Abstract

In this paper, we present a family of three-point methods with eighth-order convergence for finding simple roots of nonlinear equations, obtained by suitable approximations and a weight function based on Maheshwari’s method. Per iteration this method requires three evaluations of the function and one evaluation of its first derivative. This class of methods has efficiency index $8^{1/4} \approx 1.682$. We describe the analysis of the proposed methods along with numerical experiments, including a comparison with existing methods. Moreover, the attraction basins of the proposed methods are shown, with some comparisons to other existing methods.

Keywords: Multi-point iterative methods; Maheshwari’s method; Kung and Traub’s conjecture; Basin of attraction

MSC 2010: 65H05; 37F10

1 Introduction

Finding roots of nonlinear functions f(x) = 0 by using iterative methods is a classical problem which has interesting applications in different branches of science, in particular physics and engineering. Therefore, several numerical methods for approximating simple roots of nonlinear equations have been developed and analyzed in recent years using various techniques based on iterative methods. The second-order Newton-Raphson method $x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$ is one of the best-known iterative methods for finding approximate roots; it requires two evaluations per iteration step, one of f and one of f′ [1, 2].
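As a point of reference for the methods discussed below, the Newton-Raphson iteration can be sketched in a few lines of Python (an illustrative sketch, not code from the paper; the function name, tolerance and example are our own choices):

```python
# Newton-Raphson iteration: two evaluations per step, one of f and one of f'.
def newton(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if dfx == 0:
            raise ZeroDivisionError("derivative vanished at x = %r" % x)
        x_new = x - fx / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: a root of x^3 - 2x - 5 near x0 = 2
root = newton(lambda x: x**3 - 2 * x - 5, lambda x: 3 * x**2 - 2, 2.0)
```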

Kung and Traub [3] conjectured that no multi-point method without memory with n evaluations can have a convergence order larger than $2^{n-1}$. A multi-point method with convergence order $2^{n-1}$ is called optimal. The efficiency index provides a measure of the balance between these quantities, according to the formula $p^{1/n}$, where p is the convergence order of the method and n is the number of function evaluations per iteration.
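For example (an illustrative computation, not part of the original text), the efficiency indices quoted in this paper follow directly from this formula:

```python
# Efficiency index p**(1/n): Newton (p = 2, n = 2) vs. an optimal eighth-order method (p = 8, n = 4).
newton_index = 2 ** (1 / 2)   # ~ 1.414
eighth_index = 8 ** (1 / 4)   # ~ 1.682, the value quoted in the abstract
print(newton_index, eighth_index)
```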

Many such methods have been described in the literature; we note e.g. [2], [4-7]. Using inverse interpolation, Kung and Traub [3] constructed two general optimal classes without memory. Since then, there have been many attempts to construct optimal multi-point methods, utilizing e.g. weight functions [8-16].

In this paper, we construct a new class of methods with optimal eighth-order convergence based on Maheshwari’s method. This paper is organized as follows: Section 2 is devoted to the construction and convergence analysis of the new class. In Section 3, the new methods are compared with closely related competitors in a series of numerical examples. In addition, comparisons of the basins of attraction with other methods are illustrated in Section 3. Section 4 contains a short conclusion.

2 Description of the method and convergence analysis

2.1 Three-point method of optimal order of convergence

In this section we propose a new optimal three-point method based on Maheshwari’s method [6] for solving nonlinear equations. Maheshwari’s method is given by
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = x_n + \frac{1}{f'(x_n)}\left[\frac{f^2(x_n)}{f(y_n) - f(x_n)} - \frac{f^2(y_n)}{f(x_n)}\right], \qquad (n = 0, 1, \ldots), \tag{1}$$
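A single Maheshwari step (1) can be sketched in Python as follows (an illustrative sketch of the formula above, assuming a simple root and f′(x_n) ≠ 0; the helper name is ours):

```python
# One step of Maheshwari's fourth-order method (1); df is the derivative f'.
def maheshwari_step(f, df, x):
    fx, dfx = f(x), df(x)
    y = x - fx / dfx                                     # Newton predictor
    fy = f(y)
    return x + (fx**2 / (fy - fx) - fy**2 / fx) / dfx    # Maheshwari corrector
```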

where x_0 is an initial approximation of x*. The convergence order of (1) is four with three functional evaluations per iteration, so this method is optimal. We intend to increase the order of convergence of method (1) by an additional Newton step, giving
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad z_n = x_n + \frac{1}{f'(x_n)}\left[\frac{f^2(x_n)}{f(y_n) - f(x_n)} - \frac{f^2(y_n)}{f(x_n)}\right], \qquad x_{n+1} = z_n - \frac{f(z_n)}{f'(z_n)}. \tag{2}$$

Method (2) uses five function evaluations and has convergence order eight, so it is not optimal. In order to decrease the number of function evaluations, we approximate f′(z_n) by an expression based on f(x_n), f(y_n), f(z_n) and f′(x_n), namely
$$f'(z_n) \approx \frac{f'(x_n)}{F(x_n, y_n, z_n)\, H(s_n)},$$

where
$$F(x_n, y_n, z_n) = \frac{f^3(y_n)\,\bigl(f(x_n) - 10 f(y_n)\bigr) + 4 f^2(x_n)\,\bigl(f^2(y_n) + f(x_n) f(y_n)\bigr)}{f(x_n)\,\bigl(2 f(x_n) - f(y_n)\bigr)^2\,\bigl(f(y_n) - f(z_n)\bigr)}, \tag{3}$$

and H(·) is a weight function which will be specified later, and $s_n = \frac{f(z_n)}{f(x_n)}$.

We have
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad z_n = x_n + \frac{1}{f'(x_n)}\left[\frac{f^2(x_n)}{f(y_n) - f(x_n)} - \frac{f^2(y_n)}{f(x_n)}\right], \qquad x_{n+1} = z_n - \frac{f(z_n)}{f'(x_n)}\, F(x_n, y_n, z_n)\, H(s_n), \tag{4}$$

where F(x_n, y_n, z_n) and s_n are defined as above.
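A minimal sketch of one iteration of the family (4), using the formulas as reconstructed above (the helper name `proposed_step` and the argument `H` are our own; any weight function with H(0) = 1 and H′(0) = 2, as required by Theorem 2.1 below, can be passed in):

```python
# One step of the proposed family (4); H is a weight function with H(0) = 1, H'(0) = 2.
def proposed_step(f, df, x, H):
    fx, dfx = f(x), df(x)
    y = x - fx / dfx                                     # first step: Newton
    fy = f(y)
    z = x + (fx**2 / (fy - fx) - fy**2 / fx) / dfx       # second step: Maheshwari (1)
    fz = f(z)
    F = (fy**3 * (fx - 10 * fy) + 4 * fx**2 * (fy**2 + fx * fy)) / \
        (fx * (2 * fx - fy)**2 * (fy - fz))              # expression (3), as reconstructed
    s = fz / fx
    return z - (fz / dfx) * F * H(s)                     # third step of (4)
```

Note that one step uses only four evaluations: f(x_n), f(y_n), f(z_n) and f′(x_n).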

2.2 Convergence analysis

In the following theorem we provide sufficient conditions on the weight function H(sn), which imply that method (4) has convergence order eight.

Theorem 2.1. Assume that the function f : D → ℝ is eight times continuously differentiable on an interval D ⊂ ℝ and has a simple zero x* ∈ D, and that H is once continuously differentiable. If the initial approximation x_0 is sufficiently close to x*, then the class defined by (4) converges to x* with order of convergence eight under the conditions
$$H(0) = 1, \qquad H'(0) = 2,$$

with the error term
$$e_{n+1} = \frac{1}{2}\, c_2^2\, \bigl(4 c_2^2 - c_3\bigr)\bigl(c_2^3 - 8 c_2 c_3 + 2 c_4\bigr)\, e_n^8 + O\bigl(e_n^9\bigr),$$

where $e_n := x_n - x^*$ for n ∈ ℕ and $c_k := \frac{f^{(k)}(x^*)}{k!\, f'(x^*)}$ for k = 2, 3, ….

In what follows we give some concrete explicit representations of (4) by choosing different weight functions satisfying the conditions on the weight function H(s_n) in Theorem 2.1.

Choose the weight function H(s_n) as
$$H(s_n) = 1 + 2 s_n, \tag{12}$$

where $s_n = \frac{f(z_n)}{f(x_n)}$.

The function H(s_n) in (12) satisfies the assumptions of Theorem 2.1 and we get
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad z_n = x_n + \frac{1}{f'(x_n)}\left[\frac{f^2(x_n)}{f(y_n) - f(x_n)} - \frac{f^2(y_n)}{f(x_n)}\right], \qquad x_{n+1} = z_n - \frac{f(z_n)}{f'(x_n)}\, \frac{f(x_n) + 2 f(z_n)}{f(x_n)}\, F(x_n, y_n, z_n), \tag{13}$$

where F(xn, yn, zn) is evaluated by (3).
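Plugging the weight function (12) into the sketch of (4) above gives a complete iteration for method (13) (again an illustrative sketch; the stopping rule and driver name are our own):

```python
# Method (13): the family (4) with H(s) = 1 + 2s; reuses proposed_step defined above.
def method_13(f, df, x0, tol=1e-12, max_iter=20):
    H = lambda s: 1 + 2 * s
    x = x0
    for _ in range(max_iter):
        x_new = proposed_step(f, df, x, H)   # four evaluations: f(x), f(y), f(z), f'(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```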

Choose the weight function H(s_n) as
$$H(s_n) = \frac{1 + 4 s_n}{1 + 2 s_n}, \tag{14}$$

where $s_n = \frac{f(z_n)}{f(x_n)}$.

The function H(s_n) in (14) satisfies the assumptions of Theorem 2.1 and we obtain
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad z_n = x_n + \frac{1}{f'(x_n)}\left[\frac{f^2(x_n)}{f(y_n) - f(x_n)} - \frac{f^2(y_n)}{f(x_n)}\right], \qquad x_{n+1} = z_n - \frac{f(z_n)}{f'(x_n)}\, \frac{f(x_n) + 4 f(z_n)}{f(x_n) + 2 f(z_n)}\, F(x_n, y_n, z_n), \tag{15}$$

where F(xn, yn, zn) is evaluated by (3).
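The conditions of Theorem 2.1 are easy to check symbolically; for instance, for the weight function (14) (an illustrative check using SymPy, not part of the original text):

```python
import sympy as sp

s = sp.symbols('s')
H = (1 + 4 * s) / (1 + 2 * s)                   # weight function (14)
print(H.subs(s, 0))                             # 1, i.e. H(0) = 1
print(sp.simplify(sp.diff(H, s)).subs(s, 0))    # 2, i.e. H'(0) = 2
```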

Choose the weight function H(s_n) as
$$H(s_n) = \frac{1}{1 - 2 s_n}, \tag{16}$$

where $s_n = \frac{f(z_n)}{f(x_n)}$.

The function H in (16) satisfies the assumptions of Theorem 2.1 and we get
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad z_n = x_n + \frac{1}{f'(x_n)}\left[\frac{f^2(x_n)}{f(y_n) - f(x_n)} - \frac{f^2(y_n)}{f(x_n)}\right], \qquad x_{n+1} = z_n - \frac{f(z_n)}{f'(x_n)}\, \frac{f(x_n)}{f(x_n) - 2 f(z_n)}\, F(x_n, y_n, z_n), \tag{17}$$

where F(x_n, y_n, z_n) is evaluated by (3).

We apply the new methods (13), (15) and (17) to several benchmark examples and compare them with existing three-point methods which have the same convergence order r = 8 and the same computational efficiency index $r^{1/\theta} = 8^{1/4} \approx 1.682$, which is optimal for θ = 4 function evaluations per iteration [1, 2].

3 Numerical performance

In this section we test and compare our proposed methods with some existing methods. We compare methods (13), (15) and (17) with the following related three-point methods.

The method by Bi et al. [8] is given by
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad z_n = y_n - \frac{f(x_n) + \beta f(y_n)}{f(x_n) + (\beta - 2) f(y_n)}\, \frac{f(y_n)}{f'(x_n)}, \qquad x_{n+1} = z_n - \frac{f(z_n)}{f[z_n, y_n] + f[z_n, x_n, x_n](z_n - y_n)}\, H(t_n), \tag{18}$$

where β = 1/2 and the weight function is
$$H(t_n) = \frac{1}{(1 - \alpha t_n)^2}, \qquad \alpha = 1, \tag{19}$$

where $t_n = \frac{f(z_n)}{f(x_n)}$.

If x_0, x_1, …, x_n are points of D, the divided difference of order 1 is
$$f[x_0, x_1] = \frac{f(x_1) - f(x_0)}{x_1 - x_0},$$

and in general, the divided difference of order n is obtained recursively by
$$f[x_0, x_1, \ldots, x_n] = \frac{f[x_1, x_2, \ldots, x_n] - f[x_0, x_1, \ldots, x_{n-1}]}{x_n - x_0}.$$

In addition, for x_0 = x_1 = … = x_n = x, we write
$$f[x, x, \ldots, x] = \frac{f^{(n)}(x)}{n!}.$$
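The recursion above translates directly into code for pairwise-distinct nodes (a hypothetical helper of our own; confluent divided differences with repeated nodes, such as f[z_n, x_n, x_n], are handled with the derivative limit above instead):

```python
# Divided difference f[x_0, ..., x_n] by the recursion above (distinct nodes assumed).
def divided_difference(f, nodes):
    if len(nodes) == 1:
        return f(nodes[0])
    return (divided_difference(f, nodes[1:]) - divided_difference(f, nodes[:-1])) / \
           (nodes[-1] - nodes[0])
```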

The method by Chun and Lee [9] is given by
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad z_n = y_n - \frac{f(y_n)}{f'(x_n)}\, \frac{1}{\left(1 - \frac{f(y_n)}{f(x_n)}\right)^2}, \qquad x_{n+1} = z_n - \frac{f(z_n)}{f'(x_n)}\, \frac{1}{\bigl(1 - H(t_n) - J(s_n) - P(u_n)\bigr)^2}, \tag{20}$$

with weight functions
$$H(t_n) = -\beta - \gamma + t_n + \frac{t_n^2}{2} - \frac{t_n^3}{2}, \qquad J(s_n) = \beta - \frac{s_n}{2}, \qquad P(u_n) = \gamma + \frac{u_n}{2}, \tag{21}$$

where $t_n = \frac{f(y_n)}{f(x_n)}$, $s_n = \frac{f(z_n)}{f(x_n)}$, $u_n = \frac{f(z_n)}{f(y_n)}$, and β, γ ∈ ℝ.

The Sharma and Sharma method [16] is given by
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad z_n = y_n - \frac{f(y_n)}{f'(x_n)}\, \frac{f(x_n)}{f(x_n) - 2 f(y_n)}, \qquad x_{n+1} = z_n - \frac{f[x_n, y_n]\, f(z_n)}{f[x_n, z_n]\, f[y_n, z_n]}\, W(t_n), \tag{22}$$

with weight function
$$W(t_n) = \frac{1 + t_n}{1 + \alpha t_n}, \qquad \alpha = -1, \tag{23}$$

where $t_n = \frac{f(z_n)}{f(x_n)}$.

The three-point method (4) is tested on a number of nonlinear equations. To obtain high accuracy and avoid loss of significant digits, we employed multi-precision arithmetic with 7000 significant decimal digits in the programming package Mathematica 8 [17].

In order to test our proposed methods (13), (15) and (17), and to compare them with the methods (18), (20) and (22), we compute the error and the approximated computational order of convergence (ACOC) introduced by Cordero et al. [18],
$$\mathrm{ACOC} \approx \frac{\ln\bigl(|x_{n+1} - x_n| \,/\, |x_n - x_{n-1}|\bigr)}{\ln\bigl(|x_n - x_{n-1}| \,/\, |x_{n-1} - x_{n-2}|\bigr)}.$$
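Given the last four iterates, the ACOC can be computed as follows (an illustrative sketch; the computations in the paper were carried out in Mathematica with multi-precision arithmetic):

```python
import math

# ACOC from four successive iterates x_{n-2}, x_{n-1}, x_n, x_{n+1}.
def acoc(x_nm2, x_nm1, x_n, x_np1):
    return math.log(abs(x_np1 - x_n) / abs(x_n - x_nm1)) / \
           math.log(abs(x_n - x_nm1) / abs(x_nm1 - x_nm2))
```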

In Tables 1, 2, 3 and 4, the proposed methods (13), (15) and (17) and the methods (18), (20) and (22) are tested on different nonlinear equations. The results show that these methods are in accordance with the developed theory.

Table 1

Comparison for $f(x) = \ln(1 + x^2) + e^{x^2 - 3x}\sin(x)$, x* = 0, x_0 = 0.35, for different methods (M) and weight functions (W-F).

Table 2

Comparison for $f(x) = \ln(1 - x + x^2) + 4\sin(1 - x)$, x* = 1, x_0 = 1.1, for different methods (M) and weight functions (W-F).

Table 3

Comparison for $f(x) = x^4 + \sin\!\left(\frac{\pi}{x^2}\right) - 5$, $x^* = \sqrt{2}$, x_0 = 1.5, for different methods (M) and weight functions (W-F).

Table 4

Comparison for $f(x) = (x - 2)(x^{10} + x + 1)e^{x-1}$, x* = 2, x_0 = 2.1, for different methods (M) and weight functions (W-F).

3.1 Graphical comparison by means of attraction basins

We have already observed that all methods converge if the initial guess is chosen suitably. From the numerical point of view, the dynamical behavior of the rational function associated with an iterative method gives us important information about convergence and stability. Therefore, we now investigate the stability region. In other words, we numerically approximate the domain of attraction of the zeros as a qualitative measure of stability. To answer the important question on the dynamical behavior of the algorithms, we investigate the dynamics of the new methods and compare them with common and well-performing methods from the literature. In the following, we recall some basic concepts such as basin of attraction. For more details one can consult [19-22].

Let G : ℂ → ℂ be a rational map on the complex plane. The orbit of a point z ∈ ℂ is defined as
$$\mathrm{orb}(z) = \{z, G(z), G^2(z), \ldots\}.$$

A point z_0 ∈ ℂ is called a periodic point with minimal period m if G^m(z_0) = z_0, where m is the smallest integer with this property. A periodic point with minimal period 1 is called a fixed point. Moreover, a point z_0 is called attracting if |G′(z_0)| < 1, repelling if |G′(z_0)| > 1, and neutral otherwise. The Julia set of a nonlinear map G(z), denoted by J(G), is the closure of the set of its repelling periodic points. The complement of J(G) is the Fatou set F(G), where the basins of attraction of the different roots lie [23]. From the dynamical point of view, we take a 256 × 256 grid of the square [−3,3] × [−3,3] ⊂ ℂ and assign a color to each point z_0 ∈ D according to the simple root to which the corresponding orbit of the iterative method starting from z_0 converges; we mark the point as black if the orbit does not converge to a root, in the sense that after at most 100 iterations its distance to each of the roots is still larger than 10^{-3}. In this way, we distinguish the attraction basins of the different methods by their colors.

We use the basins of attraction to compare the iteration algorithms. The basin of attraction is a way to visually comprehend how an algorithm behaves as a function of the starting point. In the following figures, the roots of each function are drawn in a different color. Within the basins of attraction, the number of iterations needed to reach the solution is shown by darker or brighter shades. Black denotes lack of convergence to any of the roots or convergence to infinity.

We have tested several different examples, and the results on the performance of the tested methods were similar. Therefore we report the general observations here for the test problems p_1(z) = z^2 − 1 with roots −1, 1 and p_2(z) = z(z^2 + 1) with roots 0, i, −i.
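The basin pictures can be reproduced along the lines described above (256 × 256 grid on [−3,3] × [−3,3], at most 100 iterations, tolerance 10^{-3}); the following is an illustrative sketch of our own, not the authors' code, for p_1(z) = z^2 − 1 with method (13), reusing `proposed_step` from Section 2:

```python
import numpy as np
import matplotlib.pyplot as plt

p1, dp1 = lambda z: z**2 - 1, lambda z: 2 * z
roots = [-1.0, 1.0]
H12 = lambda s: 1 + 2 * s                         # weight function (12), i.e. method (13)

n = 256
xs, ys = np.linspace(-3, 3, n), np.linspace(-3, 3, n)
basin = np.zeros((n, n))                          # 0 = black: no convergence

for i, yv in enumerate(ys):
    for j, xv in enumerate(xs):
        z = complex(xv, yv)
        for _ in range(100):                      # at most 100 iterations
            try:
                z = proposed_step(p1, dp1, z, H12)
            except ZeroDivisionError:
                break
            d = [abs(z - r) for r in roots]
            if min(d) < 1e-3:                     # converged to one of the roots
                basin[i, j] = 1 + d.index(min(d))
                break

plt.imshow(basin, extent=(-3, 3, -3, 3), origin='lower')
plt.show()
```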

In Figures 1 and 2, the basins of attraction of methods (13), (18), (20) and (22) for the two test problems p_1(z) and p_2(z) are illustrated from left to right, respectively. In Figure 1 the basin of attraction of method (13) is similar to that of the other methods; in Figure 2, however, the first two methods appear to produce larger basins of attraction than the last two.

Fig. 1: Comparison of basins of attraction of methods (13), (18), (20) and (22) for test problem p_1(z) = z^2 − 1

Fig. 2: Comparison of basins of attraction of methods (13), (18), (20) and (22) for test problem p_2(z) = z^3 + z

4 Conclusion

We presented a new optimal class of three-point methods without memory for approximating a simple root of a given nonlinear equation. Our proposed methods use four function evaluations per iteration and achieve convergence order eight; therefore they support the Kung and Traub conjecture. Numerical examples show that our methods work well and can compete with other methods in the same class; in addition, we used basins of attraction to compare the iteration algorithms.

Acknowledgement

The authors would like to express their appreciation to MEDAlics, the Research Centre at the University Dante Alighieri, Reggio Calabria, Italy, for financial support.

References

[1] Traub J.F., Iterative Methods for the Solution of Equations, Prentice Hall, Englewood Cliffs, N.J., 1964
[2] Ostrowski A.M., Solution of Equations and Systems of Equations, Academic Press, New York, 1966
[3] Kung H.T., Traub J.F., Optimal order of one-point and multipoint iteration, J. Assoc. Comput. Mach., 1974, 21, 634-651
[4] Jarratt P., Some efficient fourth order multipoint methods for solving equations, BIT Numer. Math., 1969, 9, 119-124
[5] King R.F., A family of fourth order methods for nonlinear equations, SIAM J. Numer. Anal., 1973, 10, 876-879
[6] Maheshwari A.K., A fourth-order iterative method for solving nonlinear equations, Appl. Math. Comput., 2009, 211, 383-391
[7] Ferrara M., Sharifi S., Salimi M., Computing multiple zeros by using a parameter in Newton-Secant method, SeMA J., 2016
[8] Bi W., Ren H., Wu Q., Three-step iterative methods with eighth-order convergence for solving nonlinear equations, J. Comput. Appl. Math., 2009, 225, 105-112
[9] Chun C., Lee M.Y., A new optimal eighth-order family of iterative methods for the solution of nonlinear equations, Appl. Math. Comput., 2013, 223, 506-519
[10] Petković M.S., Neta B., Petković L.D., Džunić J., Multipoint Methods for Solving Nonlinear Equations, Elsevier/Academic Press, Amsterdam, 2013
[11] Lotfi T., Sharifi S., Salimi M., Siegmund S., A new class of three-point methods with optimal convergence order eight and its dynamics, Numer. Algor., 2015, 68, 261-288
[12] Salimi M., Lotfi T., Sharifi S., Siegmund S., Optimal Newton-Secant like methods without memory for solving nonlinear equations with its dynamics, 2016, submitted for publication
[13] Matthies G., Salimi M., Sharifi S., Varona J.L., An optimal three-point eighth-order iterative method without memory for solving nonlinear equations with its dynamics, 2016, submitted for publication
[14] Sharifi S., Salimi M., Siegmund S., Lotfi T., A new class of optimal four-point methods with convergence order 16 for solving nonlinear equations, Math. Comput. Simulation, 2016, 119, 69-90
[15] Sharifi S., Siegmund S., Salimi M., Solving nonlinear equations by a derivative-free form of the King's family with memory, Calcolo, 2016, 53, 201-215
[16] Sharma J.R., Sharma R., A new family of modified Ostrowski's methods with accelerated eighth order convergence, Numer. Algor., 2010, 54, 445-458
[17] Hazrat R., Mathematica: A Problem-Centered Approach, Springer-Verlag, 2010
[18] Cordero A., Torregrosa J.R., Variants of Newton's method using fifth-order quadrature formulas, Appl. Math. Comput., 2007, 190, 686-698
[19] Amat S., Busquier S., Plaza S., Dynamics of the King and Jarratt iterations, Aequationes Math., 2005, 69, 212-223
[20] Neta B., Chun C., Scott M., Basins of attraction for optimal eighth order methods to find simple roots of nonlinear equations, Appl. Math. Comput., 2014, 227, 567-592
[21] Varona J.L., Graphic and numerical comparison between iterative methods, Math. Intelligencer, 2002, 24, 37-46
[22] Vrscay E.R., Gilbert W.J., Extraneous fixed points, basin boundaries and chaotic dynamics for Schröder and König rational iteration functions, Numer. Math., 1988, 52, 1-16
[23] Babajee D.K.R., Cordero A., Soleymani F., Torregrosa J.R., On improved three-step schemes with high efficiency index and their dynamics, Numer. Algor., 2014, 65, 153-169

About the article

Received: 2016-01-06

Accepted: 2016-05-06

Published Online: 2016-07-08

Published in Print: 2016-01-01


Citation Information: Open Mathematics, ISSN (Online) 2391-5455, DOI: https://doi.org/10.1515/math-2016-0041.


© 2016 Sharifi et al., published by De Gruyter Open. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License. BY-NC-ND 3.0
