Explicit Runge–Kutta Methods Combined with Advanced Versions of the Richardson Extrapolation

Abstract: Richardson Extrapolation is a very general numerical procedure, which can be applied in the solution of many mathematical problems in an attempt to increase the accuracy of the results. It is assumed that this approach is used to handle non-linear systems of ordinary differential equations (ODEs), which arise often in the mathematical description of scientific and engineering models either directly or after the discretization of the spatial derivatives of partial differential equations (PDEs). The major topic is the analysis of eight advanced implementations of the Richardson Extrapolation. Two important properties are analyzed: (a) the possibility to achieve more accurate results and (b) the possibility to improve the stability properties of eight advanced versions of the Richardson Extrapolation. A two-parameter family of test-examples was constructed and used to check both the accuracy and the absolute stability of the different versions of the Richardson Extrapolation when these versions are applied together with several Explicit Runge–Kutta Methods (ERKMs).


Introduction
It is well known that the Richardson Extrapolation (introduced in [23], see also [7, 32-34, 36, 37]) is a very general approach, which can successfully be used during the approximate solution of different classes of mathematical problems in the efforts (a) to increase the accuracy of the selected numerical methods and/or (b) to control the stepsize when the treated mathematical problems are time-dependent. On the other hand, it is also well known that the application of the Richardson Extrapolation in connection with time-dependent problems sometimes causes problems related to the absolute stability of the computational process ([9, 34, 36]). Therefore, the absolute stability properties of the new numerical methods, which are combinations of the applied numerical algorithms for treating different mathematical problems with several versions of the Richardson Extrapolation, must be studied very carefully. In this paper we shall (A) introduce a unified approach for the application of the Richardson Extrapolation, (B) derive general formulae representing different highly accurate versions of the Richardson Extrapolation, (C) show that the application of more advanced versions of the Richardson Extrapolation leads to improved absolute stability properties when Explicit Runge-Kutta Methods (ERKMs) are used, (D) present a two-parameter family of meaningful examples which can and will be used in the verification of both the accuracy of the calculated results and the absolute stability, (E) discuss the obtained numerical results as well as the possibilities for achieving further improvements, and (F) finish the presentation with several general conclusions. We shall first introduce some convenient notations, which will after that be consistently used in this paper.

Notation
Assume that the mathematical problem solved is a non-linear initial value problem for systems of ordinary differential equations (ODEs) defined in the following manner:

dy/dt = f(t, y), t ∈ [a, b], y(a) = η. (2.1)

The exact solution of (2.1) is y(t) and we shall first consider the case where approximate values of this solution have to be calculated by applying some arbitrary one-step method ([5, 6, 13, 14, 18, 20, 22]) for solving systems of ODEs, either directly or in combination with some version of the Richardson Extrapolation. We shall assume that approximations of the exact solution of (2.1) are calculated on a set of equidistant grid-points:

t_0 = a, t_n = t_{n-1} + h (n = 1, 2, . . . , N), t_N = b, h = (b - a)/N, (2.2)

but this is done mainly in order to facilitate the presentation of the results. Most of the results are also valid when non-equidistant grids are used. Under the above assumptions, the following notations will be consistently used in the remaining part of this paper:

(A) The value of the numerical solution of (2.1) calculated by using directly the selected one-step method at the grid-point t_n (n = 1, 2, . . . , N) of (2.2) will always be denoted by y_n. It must be mentioned here that the fact that a one-step method is used in the computations means that the approximation y_n ≈ y(t_n) can be calculated, for any n = 1, 2, . . . , N, by using only the previous approximation y_{n-1} ≈ y(t_{n-1}).

(B) The value of the numerical solution of (2.1), which is found by the Classical Richardson Extrapolation (introduced in [23], see also [36, 37, 40]) and which is calculated at the grid-point t_n (n = 1, 2, . . . , N) of (2.2), will always be denoted by y^[0]_n ≈ y(t_n).
The q-times repeated Richardson Extrapolation will be defined and discussed in the next section. Moreover, it will be shown there how the terms in the right-hand side of the following relationship can be obtained (also for q = 0):

y^[q]_n = P^[q]_n / S^[q]. (2.3)

Description of Advanced Versions of the Richardson Extrapolation
Definition 3.1. Advanced versions of the Richardson Extrapolation can be introduced in the following manner. Let q ∈ {0, 1, 2, . . . } be some fixed integer, and assume that the approximation y^[q]_{n-1} ≈ y(t_{n-1}) is available. Consider the application of an arbitrary one-step method in the numerical solution of (2.1) on the grid-points defined by (2.2) and calculate q + 2 approximations z^[r]_n, r = 0, 1, 2, . . . , q + 1, under the following three conditions: (a) the starting approximation used in the calculation of all approximations z^[r]_n is y^[q]_{n-1}, (b) the number of steps needed to calculate z^[r]_n is 2^r, (c) the stepsize used in the calculation of z^[r]_n is h/2^r, where h is defined by (2.2). Use the approximations z^[r]_n, r = 0, 1, 2, . . . , q + 1, to calculate y^[q]_n ≈ y(t_n) and apply after that the same approach in the computation of the next approximation y^[q]_{n+1}.

Two comments are needed: (a) Definition 3.1 is applicable for q > 7, too, but only versions of the Richardson Extrapolation for 0 ≤ q ≤ 7 will be studied here, (b) it is necessary to explain how y^[q]_n ≈ y(t_n) can be calculated by using z^[r]_n, r = 0, 1, 2, . . . , q + 1. The answer to this important question will be postponed to the next section, where the order of accuracy of y^[q]_n will be studied.

Example 1. The well-known Classical Richardson Extrapolation, [10], is obtained by setting q = 0. It is necessary to calculate z^[0]_n by carrying out one step with a stepsize h and then z^[1]_n by performing two steps with a stepsize h/2. The starting approximation in the calculation of z^[0]_n and z^[1]_n is y^[0]_{n-1} ≈ y(t_{n-1}). The approximations z^[0]_n and z^[1]_n are used to calculate y^[0]_n ≈ y(t_n).
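The mechanics of Definition 3.1 for q = 0 (Example 1) can be sketched in a few lines of code. The forward Euler method is used here as an illustrative underlying one-step method, and the function names are ours, not the paper's; the combination formula in the last line of the step is the one derived in Section 4 for the Classical Richardson Extrapolation.

```python
import math

def euler_step(f, t, y, h):
    """One step of the underlying one-step method; here the forward Euler
    method (order p = 1), but any one-step method could be plugged in."""
    return y + h * f(t, y)

def classical_richardson_step(f, t, y, h, p=1):
    """One step of the Classical Richardson Extrapolation (q = 0):
    z0 is computed with one step of size h and z1 with two steps of size
    h/2, both started from the same approximation y ~ y(t_{n-1})."""
    z0 = euler_step(f, t, y, h)              # z_n^[0]: 2^0 = 1 step, stepsize h
    z1 = euler_step(f, t, y, h / 2)          # z_n^[1]: 2^1 = 2 steps, stepsize h/2
    z1 = euler_step(f, t + h / 2, z1, h / 2)
    return (2**p * z1 - z0) / (2**p - 1)     # combination derived in Section 4

# Usage on the scalar problem dy/dt = -y, y(0) = 1 (exact solution exp(-t)):
f = lambda t, y: -y
y, h = 1.0, 0.1
for n in range(10):
    y = classical_richardson_step(f, n * h, y, h)
error = abs(y - math.exp(-1.0))   # noticeably smaller than for plain Euler
```

Replacing `euler_step` by any other one-step method of order p (and passing that p) leaves the structure unchanged.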

Example 2.
The Repeated Richardson Extrapolation, [32], see also [7] and [34], is obtained by setting q = 1. It is necessary to calculate first z^[0]_n and z^[1]_n as in Example 1 and z^[2]_n by performing four steps with a stepsize h/4. The starting approximation in the calculation of z^[0]_n, z^[1]_n and z^[2]_n is y^[1]_{n-1} ≈ y(t_{n-1}). The approximations z^[0]_n, z^[1]_n and z^[2]_n are used to calculate y^[1]_n ≈ y(t_n).

Example 3.
The Two-times Repeated Richardson Extrapolation, [33], is obtained by setting q = 2. It is necessary to calculate the three approximations z^[0]_n, z^[1]_n and z^[2]_n as in Example 2 and z^[3]_n by performing eight steps with a stepsize h/8. The starting approximation in the calculation of z^[0]_n, z^[1]_n, z^[2]_n and z^[3]_n is y^[2]_{n-1} ≈ y(t_{n-1}). The approximations z^[0]_n, z^[1]_n, z^[2]_n and z^[3]_n are used to calculate y^[2]_n ≈ y(t_n).
Two important conclusions can be drawn from the above examples: (a) the process can easily be continued in an obvious way to calculate successively the Three-, Four-, Five-, Six- and Seven-times Repeated Richardson Extrapolation, and (b) the computational work increases very quickly, but it will be proved in the next section that the accuracy also increases very considerably.

Accuracy of the Different Versions of the Richardson Extrapolation
Theorem 4.1. Consider an arbitrary one-step method for solving the system of ODEs (2.1). Assume that (a) its order of accuracy is p ≥ 1, (b) it is used together with some version of the Richardson Extrapolation with 0 ≤ q ≤ 7.
Then the order of accuracy of the approximation calculated by the selected version of the Richardson Extrapolation is at least p + q + 1 when the right-hand-side f(t, y) of (2.1) is at least p + q + 1 times continuously differentiable.
Proof. The following q + 2 equalities hold when the conditions of Theorem 4.1 are satisfied:

y(t_n) - z^[r]_n = K_1 (h/2^r)^p + K_2 (h/2^r)^{p+1} + · · · + K_{q+1} (h/2^r)^{p+q} + O(h^{p+q+1}), r = 0, 1, . . . , q + 1, (4.1)

where K_j, j = 1, 2, . . . , q + 1, are constants which do not depend on the stepsize h. The theorem will be proved if all terms in the right-hand side of (4.1) that contain these constants are eliminated. It will be explained below how the elimination of the constants K_j, j = 1, 2, . . . , q + 1, can be performed for q = 0, 1, 2, . . . , 7.
Classical Richardson Extrapolation. The q + 2 relationships (4.1) are reduced to the following two relationships in the special case where q = 0:

y(t_n) - z^[0]_n = K_1 h^p + O(h^{p+1}), (4.2)
y(t_n) - z^[1]_n = K_1 (h/2)^p + O(h^{p+1}). (4.3)

Multiply (4.3) by 2^p and subtract (4.2) from the so-modified relationship (4.3). The result is

(2^p - 1) y(t_n) - (2^p z^[1]_n - z^[0]_n) = O(h^{p+1}). Denote

y^[0]_n = (2^p z^[1]_n - z^[0]_n) / (2^p - 1). (4.4)

Then

y(t_n) - y^[0]_n = O(h^{p+1}). (4.5)

Equality (4.4) shows how the approximation y^[0]_n can be calculated by using the already computed approximations z^[0]_n and z^[1]_n, while equality (4.5) tells us that the order of accuracy of the Classical Richardson Extrapolation is p + 1.

Repeated Richardson Extrapolation.
The q + 2 relationships (4.1) are reduced to the following three relationships in the special case where q = 1:

y(t_n) - z^[0]_n = K_1 h^p + K_2 h^{p+1} + O(h^{p+2}), (4.6)
y(t_n) - z^[1]_n = K_1 (h/2)^p + K_2 (h/2)^{p+1} + O(h^{p+2}), (4.7)
y(t_n) - z^[2]_n = K_1 (h/4)^p + K_2 (h/4)^{p+1} + O(h^{p+2}). (4.8)

Multiply (4.7) by 2^p and subtract (4.6) from the so-modified relationship (4.7). Multiply after that (4.8) by 4^p and subtract (4.6) from the so-modified relationship (4.8). In this way the first constant K_1 will be eliminated and two equalities which depend only on K_2 will be obtained. A quite similar procedure can be applied to eliminate also the second constant K_2. The final result is

(2^{2p+1} - 3 · 2^p + 1) y(t_n) - (2^{2p+1} z^[2]_n - 3 · 2^p z^[1]_n + z^[0]_n) = O(h^{p+2}). (4.9)

Denote

y^[1]_n = (2^{2p+1} z^[2]_n - 3 · 2^p z^[1]_n + z^[0]_n) / (2^{2p+1} - 3 · 2^p + 1). (4.10)

Then

y(t_n) - y^[1]_n = O(h^{p+2}). (4.11)

Equality (4.10) shows how the approximation y^[1]_n can be calculated by using the already computed approximations z^[0]_n, z^[1]_n and z^[2]_n, while equality (4.11) tells us that the order of accuracy of the Repeated Richardson Extrapolation is p + 2.
Two-times Repeated Richardson Extrapolation. The q + 2 relationships (4.1) are reduced to the following four relationships in the special case where q = 2:

y(t_n) - z^[r]_n = K_1 (h/2^r)^p + K_2 (h/2^r)^{p+1} + K_3 (h/2^r)^{p+2} + O(h^{p+3}), r = 0, 1, 2, 3. (4.12)-(4.15)

Multiply (4.13) by 2^p and subtract (4.12) from the so-modified relationship (4.13). Multiply after that (4.14) by 4^p and subtract (4.12) from the so-modified relationship (4.14). Finally, multiply (4.15) by 8^p and subtract (4.12) from the so-modified relationship (4.15). In this way the first constant K_1 will be eliminated and three equalities which depend only on K_2 and K_3 will be obtained. A quite similar procedure can be applied twice in order to eliminate first the second constant K_2 and then the third constant K_3. The final result is obtained by denoting

y^[2]_n = (2^{3p+3} z^[3]_n - 7 · 2^{2p+1} z^[2]_n + 7 · 2^p z^[1]_n - z^[0]_n) / (2^{3p+3} - 7 · 2^{2p+1} + 7 · 2^p - 1). (4.16)

Then

y(t_n) - y^[2]_n = O(h^{p+3}). (4.17)

As in the previous two cases, equality (4.16) shows how y^[2]_n can be calculated by using the already computed approximations z^[0]_n, z^[1]_n, z^[2]_n and z^[3]_n, while equality (4.17) tells us that the order of accuracy of the Two-times Repeated Richardson Extrapolation is p + 3.
Five More Advanced Versions of the Repeated Richardson Extrapolation. The number of constants is increased if these versions are used, being four when q = 3, five when q = 4, six when q = 5, seven when q = 6 and eight when q = 7, but the main idea remains the same: all these constants must be eliminated, and formulae for performing the Three-, Four-, Five-, Six- and Seven-times Repeated Richardson Extrapolation will be derived at the end of the elimination process for the appropriate value of q. It should be noted here that the main principle is very clear, quite straightforward and extremely simple, but the computational process becomes more and more complicated when the value of q is increased. The constants in (4.1) were successively eliminated for q = 3, 4, 5, 6, 7 and it has been shown that (2.3), which was satisfied for q = 0, 1, 2, is also valid for q = 3, 4, 5, 6, 7. Moreover, it is obvious that if we know S^[q] in some formula (2.3), then we can easily construct the numerator P^[q]_n by multiplying successively the terms in the denominator by z^[q+1]_n, z^[q]_n, . . . , z^[0]_n, respectively. This is why only the denominators S^[q] of formulae (2.3) for q = 3, 4, 5, 6, 7 that are obtained after the elimination of the constants in (4.1) are listed below (in factored form):

S^[3] = (2^p - 1)(2^{p+1} - 1)(2^{p+2} - 1)(2^{p+3} - 1), (4.18)
S^[4] = (2^{p+4} - 1) S^[3], (4.19)
S^[5] = (2^{p+5} - 1) S^[4], (4.20)
S^[6] = (2^{p+6} - 1) S^[5], (4.21)
S^[7] = (2^{p+7} - 1) S^[6]. (4.22)

This completes the proof of Theorem 4.1.
Formulae (4.18)-(4.22) were derived by eliminating directly constants in (4.1). The major idea is, as mentioned above, straight-forward, but the computational process is becoming too long and very difficult when q is large. However, there is also a much easier way for obtaining the above results.

Theorem 4.2.
Consider any version of the Richardson Extrapolation with 0 ≤ q ≤ 7. Then its denominator can be calculated by using the following generic formula:

S^[q] = (2^p - 1)(2^{p+1} - 1) · · · (2^{p+q} - 1). (4.23)

Moreover, the following relationship follows immediately from (4.23):

S^[q] = (2^{p+q} - 1) S^[q-1], q = 1, 2, . . . , 7. (4.24)

Proof. It can be seen from (4.4) that S^[0] = 2^p - 1, which coincides with (4.23) for q = 0. The application of formula (4.24) with q = 1 leads to

S^[1] = (2^{p+1} - 1)(2^p - 1) = 2^{2p+1} - 3 · 2^p + 1.

The last expression is precisely the denominator of the Repeated Richardson Extrapolation, see (4.9), which means that (4.24) is true for q = 1. Continuing in a quite similar manner, one can successively prove that (4.24) holds also for q = 2, 3, . . . , 7.
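Formulae (4.23) and (4.24) make it possible to generate the coefficients of the numerator P^[q]_n mechanically for any q: expand S^[q] as a polynomial in x = 2^p and attach the coefficient of x^r to z^[r]_n. A small sketch (the function names are ours, chosen for illustration):

```python
def extrapolation_coeffs(p, q):
    """Coefficients of z^[0]_n, ..., z^[q+1]_n in the numerator P^[q]_n.
    They come from expanding S^[q] = (2^p - 1)(2^{p+1} - 1)...(2^{p+q} - 1)
    as a polynomial in x = 2^p and multiplying in the powers of x."""
    c = [1]                                   # polynomial coefficients in x
    for j in range(q + 1):                    # multiply by (2^j * x - 1)
        nxt = [0] * (len(c) + 1)
        for i, a in enumerate(c):
            nxt[i] -= a
            nxt[i + 1] += 2**j * a
        c = nxt
    x = 2**p
    return [a * x**i for i, a in enumerate(c)]

def richardson_combine(z, p, q):
    """y^[q]_n from the q + 2 approximations z = [z^[0]_n, ..., z^[q+1]_n];
    the denominator S^[q] is the sum of the numerator coefficients."""
    c = extrapolation_coeffs(p, q)
    return sum(ci * zi for ci, zi in zip(c, z)) / sum(c)

# q = 2, p = 2 reproduces the coefficients of formula (4.16):
# 2^{3p+3} = 512, -7 * 2^{2p+1} = -224, 7 * 2^p = 28, -1.
coeffs = extrapolation_coeffs(2, 2)           # [-1, 28, -224, 512]
```

For q = 0 the routine reproduces formula (4.4) for any p, and the sum of the returned coefficients always equals the product (4.23).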

Corollary 4.3. If we know S^[q] for some q = 0, 1, . . . , 6, then S^[q+1] (and after that also P^[q+1]_n and the final formula for y^[q+1]_n) can be calculated by using (4.24).

Conjecture 1. The assertion of Theorem 4.1 is valid not only for 0 ≤ q ≤ 7, but also for any q > 7.

Conjecture 2. Theorem 4.2 and Corollary 4.3 are true for any q > 7.
The results presented at the end of this paper and obtained by performing runs with the Eight-times Repeated Richardson Extrapolation indicate that the two conjectures may be correct. The denominator of the Eight-times Repeated Richardson Extrapolation used in these experiments was calculated by using the generic formula S^[8] = (2^{p+8} - 1) S^[7].

Stability Properties of the Advanced Versions of the Richardson Extrapolation
Germund Dahlquist proposed in [9] to use a simple linear and scalar equation,

dy/dt = λy, t ∈ [0, ∞), y(0) = η, λ = α + βi ∈ ℂ, (5.1)

solved on an equidistant grid, in order to study the absolute stability properties of numerical methods for solving ODEs. The exact solution of (5.1) is y(t) = ηe^{λt} and it is bounded when the condition α ≤ 0 is satisfied. This implies that it is necessary to require that the selected numerical method produces arbitrarily long sequences of approximations {y_1, y_2, . . . , y_N, . . . } which are also bounded when α ≤ 0. The application of an arbitrary one-step numerical method in the treatment of (5.1) leads to the following recursive relation (see, for example, [22]):

y_n = R(ν) y_{n-1}, ν = hλ. (5.2)

The function R(ν) from (5.2) is called the stability function. This function is a polynomial when the selected one-step numerical method is explicit. R(ν) is a rational function (a ratio of two polynomials) when implicit one-step numerical methods are used. The selected one-step method will produce a bounded sequence of approximations {y_1, y_2, . . . , y_n, . . . } to the solution of (5.1) for the applied value h of the time-stepsize if the important inequality |R(ν)| ≤ 1 is satisfied for ν = hλ ∈ ℂ⁻, see (5.2). The one-step numerical method is called absolutely stable for this value of the parameter ν ∈ ℂ⁻, while the set of all values ν ∈ ℂ⁻ for which the relationship |R(ν)| ≤ 1 holds forms the absolute stability region of the chosen one-step numerical method ([9], see also [11, 13, 14, 20, 22, 26, 27]). It is important now to answer the following question: will it be possible to calculate the stability function R^[q](ν) of some version of the Richardson Extrapolation obtained for q ≥ 0 if we select some underlying one-step method with a known stability function R(ν)? The answer to this important question is positive for q ≤ 7. More precisely, the following theorem holds.

Theorem 5.1. Assume that an arbitrary one-step numerical method with a stability function R(ν) is used in the solution of (5.1).
Then the stability functions R^[q](ν) of any version of the Repeated Richardson Extrapolation with q = 0, 1, . . . , 7 can be calculated by using the formula

R^[q](ν) = P̄^[q](ν) / S^[q], (5.3)

where the denominators S^[q] are defined as in the previous section, while the functions P̄^[q](ν) are given by formulae (5.4)-(5.11), one for each of the eight values of parameter q used in (5.3). Each P̄^[q](ν) is obtained from the numerator P^[q]_n of (2.3) by replacing the approximation z^[r]_n with [R(ν/2^r)]^{2^r} for r = 0, 1, . . . , q + 1; for example, P̄^[0](ν) = 2^p [R(ν/2)]^2 - R(ν).

Proof. We shall prove the theorem for the Four-times Repeated Richardson Extrapolation, i.e., when q = 4. It is clear, see formula (5.2), that the following six relationships hold for the Four-times Repeated Richardson Extrapolation:

z^[r]_n = [R(ν/2^r)]^{2^r} y^[4]_{n-1}, r = 0, 1, . . . , 5. (5.12)

Multiply the equalities (5.12), for r = 0, 1, . . . , 5, with -1, 31 · 2^p, -155 · 2^{2p+1}, 155 · 2^{3p+3}, -31 · 2^{4p+6} and 2^{5p+10}, respectively. Form the sum of the modified relationships; by (2.3), the left-hand side of the resulting equality is S^[4] y^[4]_n. Divide both sides of the obtained equality with the expression S^[4] = 2^{5p+10} - 31 · 2^{4p+6} + 155 · 2^{3p+3} - 155 · 2^{2p+1} + 31 · 2^p - 1. The result is

y^[4]_n = (P̄^[4](ν) / S^[4]) y^[4]_{n-1} = R^[4](ν) y^[4]_{n-1}.
In this way the theorem is proved for q = 4. The same approach can obviously be applied to prove the theorem in the other seven cases.
Remark 5.2. The above proof of Theorem 5.1 for q = 4 shows that the stability polynomial of the Four-times Repeated Richardson Extrapolation is expressed in terms of the stability polynomial of the underlying one-step numerical method, but differs from it. This means that the two numerical methods, the underlying one-step method and its combination with the Four-times Repeated Richardson Extrapolation, will in general have different absolute stability properties. This implies that it is necessary to study carefully the absolute stability properties of the Four-times Repeated Richardson Extrapolation. The same conclusion can also be drawn for the other versions of the Richardson Extrapolation with q ≤ 7.
Conjecture 3. The assertion of Theorem 5.1 remains true for q > 7.
Remark 5.3. The stability results presented in this section were obtained by using, as in [9], the linear and scalar problem (5.1). Many attempts to generalize the original Dahlquist stability theory have been carried out after 1963 (see, for example, [6,13,14,20,22]). It can be shown that many of these extended stability definitions and results remain valid also for the different versions of the Richardson Extrapolation.

Using Explicit Runge-Kutta Methods with the Richardson Extrapolation
The class of the Runge-Kutta Methods (RKMs), introduced more than 100 years ago in [19,21,25] (by K. Heun, W. Kutta and C. Runge, respectively), is a sub-class of the one-step methods. This means that all results proved in the previous sections for the one-step methods are also valid for the RKMs. Explicit Runge-Kutta Methods (ERKMs) will be used in the remaining part of this paper in combination with the advanced versions of the Richardson Extrapolation to handle numerically a two-parameter family of numerical examples.

Introduction of the ERKMs
The m-stage Explicit Runge-Kutta Methods are one-step numerical methods for solving systems of ODEs (2.1) that are based on the following formula (see more details in [6, 13] and [22]):

y_n = y_{n-1} + h (c_1 k^n_1 + c_2 k^n_2 + · · · + c_m k^n_m).

The coefficients c_i are constants, while the stage vectors k^n_i are defined by

k^n_1 = f(t_{n-1}, y_{n-1}), k^n_i = f(t_{n-1} + h (b_{i1} + · · · + b_{i,i-1}), y_{n-1} + h (b_{i1} k^n_1 + · · · + b_{i,i-1} k^n_{i-1})),

where i = 2, 3, . . . , m and the b_{ij} are constants depending on the particular ERKM.

Stability Polynomials of Some ERKMs
We shall assume that the order of accuracy p of the chosen ERKM is equal to the number m of the stage vectors k^n_i, i = 1, 2, . . . , m. This is possible only if m ≤ 4 (the relationship p < m is always satisfied when m > 4). If p = m, then the stability polynomial R(ν) associated with the selected Explicit Runge-Kutta Method is given (see again [22, p. 202]) by

R(ν) = 1 + ν + ν^2/2! + · · · + ν^p/p!. (6.1)

The stability polynomials R^[q](ν) related to the Classical Richardson Extrapolation when q = 0 or to the different versions of the Repeated Richardson Extrapolation when 1 ≤ q ≤ 7 can easily be obtained by substituting the expression of R(ν) from (6.1) on the right-hand sides of formulae (5.4)-(5.11).

Absolute Stability Regions of ERKMs with p = m
The boundaries of the absolute stability regions are computed by a simple scanning procedure, which will be called Algorithm 1: points ν = hλ = ᾱ + β̄i with ᾱ ≤ 0 are traversed with a small increment ε, and the points satisfying the requirement |R(ν)| ≤ 1 are recorded. In this manner points located in the negative part of the complex plane and over the real axis will be calculated. Moreover, all these points are inside a part of the absolute stability region which is located to the left of the imaginary axis and over the negative part of the real axis. Therefore, we shall obtain, by connecting successively these points, a domain in which the selected RKM will produce stable results for the chosen stepsize when (5.1) is handled.
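On the real axis the scanning can be replaced by a bisection for the left endpoint of the absolute stability interval. The sketch below (illustrative names, not from the paper) builds R(ν) as in (6.1) and R^[q](ν) by the substitution used in the proof of Theorem 5.1, and confirms the kind of improvement reported in this section: the Second-order Two-stage ERKM is stable on (-2, 0), while its combination with the Classical Richardson Extrapolation is stable on a real interval more than twice as long.

```python
import math

def R(nu, p):
    """Stability polynomial (6.1) of an ERKM with p = m <= 4 stages."""
    return sum(nu**j / math.factorial(j) for j in range(p + 1))

def R_q(nu, p, q):
    """Stability function R^[q](nu): in formula (2.3) replace z^[r]_n by
    [R(nu / 2^r)]^(2^r), as in the proof of Theorem 5.1."""
    c = [1]                                   # expand S^[q] in powers of x = 2^p
    for j in range(q + 1):
        nxt = [0] * (len(c) + 1)
        for i, a in enumerate(c):
            nxt[i] -= a
            nxt[i + 1] += 2**j * a
        c = nxt
    x = 2.0**p
    num = sum(a * x**r * R(nu / 2**r, p)**(2**r) for r, a in enumerate(c))
    den = sum(a * x**r for r, a in enumerate(c))
    return num / den

def real_stability_endpoint(stab, lo=-100.0, hi=0.0, tol=1e-8):
    """Leftmost nu < 0 with |stab(nu)| <= 1, found by bisection (assumes a
    single crossing of the boundary on the negative real axis)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if abs(stab(mid)) <= 1.0 else (mid, hi)
    return hi

b_plain = real_stability_endpoint(lambda v: R(v, 2))       # close to -2
b_combined = real_stability_endpoint(lambda v: R_q(v, 2, 0))  # well below -5
```

For forward Euler (p = 1) and q = 0 the routine reproduces the known fact that the combined stability polynomial is 1 + ν + ν²/2.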
Remark 6.1. If the selected RKM is explicit, then there is as a rule no need to include requirements (b) and (c) in the definition of Algorithm 1, and the boundary of the part of the absolute stability region which is located to the left of the imaginary axis and over the negative part of the real axis will be obtained when the increment ε is sufficiently small. If the RKM is implicit (such methods will be studied in a paper which is in preparation), then Algorithm 1 with the application of requirements (b) and (c) may result in a huge domain in which the computational process will be stable when (5.1) is handled.

The absolute stability regions are symmetric with regard to the real axis, [22], and there is no need to repeat the computations for negative values of the imaginary part β̄ of ν = hλ = ᾱ + β̄i. There is no need to require bounded approximations of the solution if ᾱ > 0, because for positive values of ᾱ the exact solution of (5.1) is unbounded. Therefore, it is relevant to introduce the requirement |R(ν)| ≤ 1 and to study the absolute stability properties of the numerical methods only when ᾱ ≤ 0.
The results obtained for the ERKMs with p = m and m = 1, 2, 3, 4 by using Algorithm 1 are given in the four plots in Figure 1 (the boundaries of the absolute stability regions of the ERKMs are the red curves in these plots).
The same procedure can be used to obtain the absolute stability regions of all versions of the Richardson Extrapolation. It is only necessary to replace the stability polynomial R(ν) with the stability polynomials R [q] (ν), q = 0, 1, . . . , 7. The boundaries of the absolute stability regions of all eight versions of the Richardson Extrapolation are also given in Figure 1.
The results shown in Figure 1 can be considered as a graphical proof of the following theorem:

Theorem 6.2. Assume that an ERKM with p = m and m ∈ {1, 2, 3, 4} is combined with some version of the Richardson Extrapolation with 0 ≤ q ≤ 7. Then the absolute stability region of the combined method contains the absolute stability region of the underlying ERKM, i.e., the combined numerical methods have better absolute stability properties than the underlying ERKM.

The Forward Euler Formula
The ERKM with p = m = 1 can be written in the following form:

y_n = y_{n-1} + h f(t_{n-1}, y_{n-1}). (6.2)

The Improved Euler Formula
The ERKM with p = m = 2 can be written as

y_n = y_{n-1} + (h/2)(k^n_1 + k^n_2), k^n_1 = f(t_{n-1}, y_{n-1}), k^n_2 = f(t_{n-1} + h, y_{n-1} + h k^n_1). (6.3)

The Most Popular Runge-Kutta Method
This method (an ERKM with p = m = 4) is given below:

y_n = y_{n-1} + (h/6)(k^n_1 + 2 k^n_2 + 2 k^n_3 + k^n_4),
k^n_1 = f(t_{n-1}, y_{n-1}),
k^n_2 = f(t_{n-1} + h/2, y_{n-1} + (h/2) k^n_1),
k^n_3 = f(t_{n-1} + h/2, y_{n-1} + (h/2) k^n_2),
k^n_4 = f(t_{n-1} + h, y_{n-1} + h k^n_3). (6.5)
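Assuming that (6.5) denotes the classical fourth-order formula (the usual reading of "the most popular Runge-Kutta Method"), one step can be coded directly; the function name is illustrative.

```python
import math

def rk4_step(f, t, y, h):
    """One step of the classical Fourth-order Four-stage ERKM (p = m = 4)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# One step on dy/dt = y, y(0) = 1 reproduces exp(h) up to O(h^5):
approx = rk4_step(lambda t, y: y, 0.0, 1.0, 0.1)
```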

Introduction of a Two-Parameter Family for Testing Both the Accuracy and the Absolute Stability Properties
Consider the system of three linear ODEs

dy/dt = Ay, (7.1)

where the elements of the 3 × 3 matrix A depend on the two parameters β and γ. The curves representing the three components of the solution for γ = -750 and β = 8 are given in Figure 2. It should be mentioned here that the eigenvalues of matrix A from (7.1)-(7.4) are

μ_1 = γ, μ_2 = βi, μ_3 = -βi.

In order to ensure stable computations in the solution of this example, one should require all three points hμ_1, hμ_2 and hμ_3 to be inside the absolute stability region of the used method. It is clear that by increasing the absolute value of γ and keeping the value of β small, we are increasing the stability requirements. Moreover, the stability interval on the negative part of the real axis is important in this case (hμ_1 should be inside the absolute stability interval on the negative part of the real axis). Increasing the value of β leads to an increase of the oscillations and, thus, to an increase of the accuracy requirements. However, large values of β will also cause stability problems related to the height of the absolute stability region (the influence of the complex eigenvalues will become dominant when β is much larger in absolute value than γ). This short analysis shows clearly that it is possible to steer both the accuracy requirements and the stability requirements by selecting appropriate values of each of the two parameters.
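The stability restriction described above can be checked directly from the eigenvalues, without the entries of A. The sketch below (the helper names and default parameter values are ours) assumes, as the discussion of hμ_1 and of the complex pair suggests, the eigenvalues μ_1 = γ and μ_{2,3} = ±βi, and tests whether all three points hμ_i lie inside the absolute stability region of the Fourth-order Four-stage ERKM.

```python
import math

def R4(nu):
    """Stability polynomial (6.1) of the Fourth-order Four-stage ERKM."""
    return sum(nu**j / math.factorial(j) for j in range(5))

def stepsize_is_stable(h, gamma=-750.0, beta=8.0):
    """True when all three points h*mu_i are inside the absolute stability
    region, with the assumed eigenvalues mu_1 = gamma, mu_{2,3} = +/- beta*i."""
    eigenvalues = [complex(gamma, 0.0), complex(0.0, beta), complex(0.0, -beta)]
    return all(abs(R4(h * mu)) <= 1.0 for mu in eigenvalues)
```

With γ = -750 the real eigenvalue dominates: the real stability interval of the method ends near -2.785, so h must stay below roughly 2.785/750 ≈ 0.0037. With β = 8192 the complex pair additionally forces h below about 2√2/8192 ≈ 3.5 · 10^-4.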

Organization of the Computations
We used the four ERKMs listed in (6.2)-(6.5) and their combinations with all eight versions of the Richardson Extrapolation. Twelve runs with different stepsizes were successively performed for each ERKM and for each version of the Richardson Extrapolation. The first run was always performed with a stepsize h = 0.02048 and the stepsize was halved after each run. This means that the last (and also the smallest) stepsize was h = 10^-5. The integration interval [0, 13.1072] was divided into 128 equal sub-intervals and the accuracy of the results obtained by any of the selected numerical methods was evaluated at the end of each sub-interval. Assume that (A) t_j, j ∈ {1, 2, . . . , 128}, are the end-points of the 128 sub-intervals, and (B) ȳ_ij ≈ y_i(t_j) is an approximation calculated by any of the used numerical methods. Then the following formula was used to evaluate the accuracy achieved by the selected method at this point:

ERROR_j = max_{i=1,2,3} ( |ȳ_ij - y_i(t_j)| / max{|y_i(t_j)|, 1.0} ).

The global error is computed by using the formula ERROR = max_{j=1,2,...,128} (ERROR_j).
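This organization of the runs can be imitated on a problem with a known exact solution. In the sketch below the scalar problem dy/dt = λy stands in for (7.1), whose matrix entries are not reproduced here; the helper names and the relative-absolute error scaling are illustrative choices. The error is sampled at equally spaced checkpoints and the convergence rate is estimated from two runs with a halved stepsize, in the spirit of Tables 5-8.

```python
import math

def euler_step(f, t, y, h):                  # Forward Euler (6.2), p = 1
    return y + h * f(t, y)

def global_error(h, lam=-1.0, T=2.048, n_check=8):
    """ERROR = max over n_check equally spaced checkpoints of the scaled
    error; stand-in problem dy/dt = lam*y, y(0) = 1, solution exp(lam*t)."""
    f = lambda t, y: lam * y
    n_steps = round(T / h)
    check_every = n_steps // n_check
    y, err = 1.0, 0.0
    for n in range(1, n_steps + 1):
        y = euler_step(f, (n - 1) * h, y, h)
        if n % check_every == 0:
            exact = math.exp(lam * n * h)
            err = max(err, abs(y - exact) / max(abs(exact), 1.0))
    return err

# Convergence rate estimated from two runs with a halved stepsize:
e1, e2 = global_error(0.002048), global_error(0.001024)
rate = math.log2(e1 / e2)                    # approaches p = 1 for forward Euler
```

Swapping in a combination of an ERKM with a q-times repeated Richardson Extrapolation should push the estimated rate towards p + q + 1, which is how the rates in Tables 5-8 and 13-16 are read.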

Moderate Accuracy and Absolute Stability Requirements
The ERKMs defined by (6.2)-(6.5) were used in the solution of the example defined by setting β = 32 and γ = -750 in (7.1)-(7.4). The errors are given in Tables 1-4, while the corresponding convergence rates can be found in Tables 5-8. It should be mentioned here that a requirement to achieve at least two correct significant digits in the calculated solution was imposed in all tables given below, which implies that ERROR must be less than 10^-2. Several important conclusions can be drawn from Tables 1-8. The expected order of accuracy of the Two-times Repeated Richardson Extrapolation in combination with the Two-stage Second-order ERKM is five. The order which is actually achieved is clearly six (which is seen from the results given in the column under "Two-times REP" in Table 6). This means that the coefficient of the first term in O(h^5) is equal to zero for this particular example. A similar situation arises when the Six-times Repeated Richardson Extrapolation is run together with the Two-stage Second-order ERKM: the actually achieved order of accuracy for this particular example is approximately ten (not nine, as it should be). We use the following notation in Tables 1-8: • "N.S." means that the method is not stable.
• "V.L." means that the rate is very large (due presumably to the fact that ν is very close to the boundary of the absolute stability region).

Increasing the Accuracy and Stability Requirements
It is interesting to investigate what will happen if the accuracy requirements are increased, which can be done by simply increasing the value of β. We set β = 8192 instead of β = 32 and kept γ = -750 in a second series of runs. In this way the oscillations are increased and, thus, the accuracy requirements are also increased. It is necessary to point out, however, that not only are the accuracy requirements increased. The absolute stability requirements are also increased, because the absolute value of the complex eigenvalues of matrix A becomes more than ten times larger than the absolute value of the real eigenvalue when this choice is made. Therefore, both stability and accuracy problems will occur in this case. Results for this choice of the two parameters are given in Tables 9-12 for the computational errors and in Tables 13-16 for the convergence rates. Four additional conclusions can be drawn by studying the results given in Tables 9-16: (e) The results are becoming more and more accurate when q is increased. However, sufficiently accurate results cannot be obtained with the largest stepsizes, even when the Fourth-order Four-stage ERKM is used in combination with the Seven-times Repeated Richardson Extrapolation; the stability is causing greater problems in the treatment of this test-example. (f) The expected convergence rates are nearly always achieved when the runs were successful (i.e., neither the accuracy nor the absolute stability is causing problems), but now the results are not as accurate as the results presented in Tables 5-8. (g) The underlying ERKMs are always unstable or not sufficiently accurate (except the Fourth-order Four-stage ERKM when it is used with the smallest stepsize). (h) Also in these runs, the convergence rate actually achieved is sometimes greater than the expected convergence rate (see the results presented in Tables 14-16). We use the following notation in Tables 9-16: • "N.S." means that the method is not stable.
• "V.L." means very large (which is caused presumably by the fact that ν i = hμ i for i = 2, 3 is very close to the boundary of the absolute stability region). • The rounding errors are dominant in the grey zone.

Major Achievements and Plans for Future Research
The basic properties and the performance of eight versions of the Richardson Extrapolation were studied in the previous sections. The following important topics were mainly discussed.

Accuracy Properties of the Different Versions of the Richardson Extrapolation
It was shown (Theorem 4.1) that the order of accuracy of the combinations of eight versions of the Richardson Extrapolation with different one-step methods for solving systems of ODEs dy/dt = f(t, y) is always higher than that of the underlying method when the right-hand-side vector f is sufficiently smooth.

Stability Functions
It was proved (Theorem 5.1) that if we know the stability function of the selected one-step method for solving systems of ODEs that are represented by formula (2.1), then it will be possible to construct the stability functions of any of the studied versions of the Richardson Extrapolation.

Absolute Stability Regions Related to the ERKMs
If ERKMs with m = p and m = 1, 2, 3, 4 are used together with some version of the Richardson Extrapolation, then it was shown that the combined numerical methods always have better stability properties (i.e., larger absolute stability regions) than the underlying ERKM (see Figure 1 and Theorem 6.2).

Development of a Two-Parameter Family of Test-Examples
A two-parameter family, by which both the accuracy and the stability properties of the numerical methods can easily be controlled by varying in an appropriate way the parameters, was developed and a long series of experiments was carried out. All eight versions were very systematically tested in the cases where the underlying methods were Explicit Runge-Kutta Methods (ERKMs).

Use of More Complicated Examples
We are planning to apply advanced versions of the Richardson Extrapolation in the numerical treatment of an atmospheric chemical scheme. This scheme is described by a non-linear system of ODEs, which is very stiff, badly scaled and extremely ill-conditioned (it was found, by using subroutines from [3, 28, 38], that the condition numbers of the involved Jacobian matrices of the non-linear system of ODEs varied in the interval [4.56 · 10^8, 9.27 · 10^12]). The atmospheric chemical scheme is used in the Unified Danish Eulerian Model, which has been used in many studies related to the pollution levels and to the increases of some dangerous pollution levels as a result of the climatic changes, both in Europe as a whole ([4, 12, 29-31]) and in different parts of Europe, such as the UK ([1]), the Balkan Peninsula ([39]), Bulgaria ([35, 43, 44]), Denmark ([42]), Hungary ([17, 41]), several countries in Eastern and Central Europe ([8]) and the North Sea ([15]). The Unified Danish Eulerian Model and other large-scale European air pollution models were also studied in [16, 24].

Use of Implicit Runge-Kutta Methods
The Implicit Runge-Kutta Methods (IRKMs) deserve some special treatment, because it is not easy to study the absolute stability properties of the numerical methods resulting after the application of the advanced versions of the Richardson Extrapolation together with IRKMs. In fact, that is extremely difficult. The following example explains the difficulties. The investigation of the Seven-times Repeated Richardson Extrapolation together with the simplest IRKM, the first-order Backward Differentiation Formula, implies the study of stability functions which contain polynomials of degree 256 both in the numerator and in the denominator. Most of the difficulties are now successfully resolved and we are preparing a paper about the use of IRKMs together with the advanced versions of the Richardson Extrapolation, which will focus on the absolute stability properties of the resulting methods.

Linear Multistep Methods
The Richardson Extrapolation is not suitable for application together with linear multistep methods. Assume that a linear k-step method is to be used in the calculation of y_n. Then approximations y_{n-1}, y_{n-2}, ..., y_{n-k} of the unknown function are needed at the points x_{n-1} = x_n - h, x_{n-2} = x_n - 2h, ..., x_{n-k} = x_n - kh when the linear k-step method is used. The application of the Classical Richardson Extrapolation will additionally require (when the computations with a stepsize 0.5h are to be carried out) approximations y_{n-1/2}, y_{n-3/2}, ..., y_{n-k+1/2} at the points x_{n-1/2} = x_n - 0.5h, x_{n-3/2} = x_n - 1.5h, ..., x_{n-k+1/2} = x_n - (k - 0.5)h. This means that many extra evaluations of the unknown function are needed. The situation becomes much more complicated when repeated versions of the Richardson Extrapolation are to be used together with linear k-step methods. However, it should be mentioned that another efficient approach, the use of predictor-corrector schemes, is possible when linear multistep methods are to be handled. This possibility is discussed in Chapter 3 of [34].
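For a one-step method, by contrast, no stored back-values are needed at all, which is what makes the combination straightforward. A minimal sketch of the Classical Richardson Extrapolation with the forward Euler method (our own illustration; the paper's notation and formula numbers are not reproduced here):

```python
import math

def euler_step(f, x, y, h):
    # one forward-Euler step from (x, y)
    return y + h * f(x, y)

def classical_re_step(f, x, y, h, p=1):
    """One step of the Classical Richardson Extrapolation for a
    one-step method of order p (forward Euler here, p = 1): one step
    of size h is combined with two steps of size h/2.  Note that the
    half-sized steps start from (x, y) itself; a k-step method would
    instead need the unknown function at the half-grid points."""
    zc = euler_step(f, x, y, h)               # coarse: one step of size h
    w = euler_step(f, x, y, h / 2)            # fine: first half-step
    w = euler_step(f, x + h / 2, w, h / 2)    # fine: second half-step
    return (2**p * w - zc) / (2**p - 1)       # order p+1 combination

def solve(h, x_end=1.0):
    """Integrate y' = -y, y(0) = 1 on [0, x_end] with stepsize h."""
    f = lambda x, y: -y
    x, y = 0.0, 1.0
    for _ in range(round(x_end / h)):
        y = classical_re_step(f, x, y, h)
        x += h
    return y

# Halving h should divide the error by about 4 (order two):
err1 = abs(solve(0.01) - math.exp(-1.0))
err2 = abs(solve(0.005) - math.exp(-1.0))
print(err1 / err2)
```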

Efficient Implementation
We have not discussed the problems related to parallel implementations of the Richardson Extrapolation in computer codes, because the paper is already rather long, and we leave this issue for a future investigation. If the problem to be solved is large, then it will be necessary to implement the algorithms in an efficient way and/or to carry out the most time-consuming operations in parallel. Note that the most time-consuming operations arising when Explicit Runge-Kutta Methods are applied are basic linear algebra operations (mainly matrix-vector multiplications). Well-developed kernels for performing basic linear algebra operations in parallel are available on many computers (see, for example, [3]) and can easily be called in order to achieve parallelism and to improve the performance. Furthermore, the implementation of Explicit Runge-Kutta Methods should be carried out very carefully when large-scale problems are to be handled. For example, the stage value k_1^n has to be used eight times when the Seven-times Repeated Richardson Extrapolation is selected. It will be appropriate to develop a code in which this quantity is calculated only once and then used every time it is needed. Other similar devices should be applied during the development of an efficient code for the treatment of the different versions of the Richardson Extrapolation described in this paper.
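The reuse of k_1^n can be sketched as follows. This is a minimal illustration that assumes an Aitken-Neville-style combination of forward Euler runs with halved stepsizes; all runs start from the same (x_n, y_n), so the first stage value f(x_n, y_n) can be cached and shared (the helper names are ours, not the paper's):

```python
import math

def repeated_re_step(f, x, y, h, m, p=1):
    """One step combining m + 1 forward-Euler runs with 2**i substeps
    each via m extrapolation levels.  The first stage value
    k1 = f(x_n, y_n) is identical for every run and is computed once."""
    k1 = f(x, y)                     # computed only once, reused below

    def euler_run(n):
        # n forward-Euler substeps of size h/n, reusing the cached k1
        hs = h / n
        yi = y + hs * k1             # first substep: no new f-evaluation
        for s in range(1, n):
            yi = yi + hs * f(x + s * hs, yi)
        return yi

    T = [euler_run(2**i) for i in range(m + 1)]
    for j in range(1, m + 1):
        c = 2.0**(p + j - 1)
        T = [(c * T[i] - T[i - 1]) / (c - 1) for i in range(1, len(T))]
    return T[-1]                     # order p + m approximation

# Counting f-evaluations for m = 2: 5 instead of 1 + 2 + 4 = 7.
calls = [0]
def f_counted(x, y):
    calls[0] += 1
    return -y
repeated_re_step(f_counted, 0.0, 1.0, 0.1, 2)
print(calls[0])   # 5

# Quick order check on y' = -y, y(0) = 1 over [0, 1]:
def run(h, m=2):
    x, y = 0.0, 1.0
    for _ in range(round(1.0 / h)):
        y = repeated_re_step(lambda s, v: -v, x, y, h, m)
        x += h
    return y

e1 = abs(run(0.1) - math.exp(-1.0))
e2 = abs(run(0.05) - math.exp(-1.0))
print(e1 / e2)   # roughly 8: order p + m = 3
```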

Plans for Future Research
Accuracy and absolute stability results related to the application of eight versions of the Richardson Extrapolation together with Explicit Runge-Kutta Methods were established in this paper and verified by using appropriate sets of test-examples. The correctness of the proved results was illustrated for the case where explicit numerical methods were used. The next step will be the use of some implicit methods (a paper on this topic is in preparation). Furthermore, it will be useful, as mentioned above, to develop reliable and efficient software which will allow the users to solve their problems by automatically changing both the stepsize and the version of the Richardson Extrapolation according to the accuracy and absolute stability requirements of the scientific and engineering problems they are solving.

Conjectures
Several conjectures were formulated in this paper. Some experiments with the First-order One-stage Explicit Runge-Kutta Method in combination with the Eight-times Repeated Richardson Extrapolation have been carried out. We found that two conditions are satisfied when the Eight-times Repeated Richardson Extrapolation is used: (a) the coefficients of this version can be calculated by using the generic formula (4.24), and (b) the version of the Richardson Extrapolation derived in this way behaves as a numerical method of order ten (as should be expected according to Theorem 4.1 and Conjecture 1). This indicates that the statement made in this conjecture is true, but it will nevertheless be useful to prove all the conjectures made above rigorously, and an attempt to do so will be made.
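The order-ten behaviour can also be checked symbolically through stability polynomials. The sketch below rests on an assumption about the generic construction: that the Eight-times Repeated Richardson Extrapolation corresponds to nine Aitken-Neville extrapolation levels with stepsize halving (the Classical version being one level); formula (4.24) itself is not reproduced here. The combined stability polynomial then agrees with exp(z) through the z^10 term, exactly the behaviour of a method of order ten:

```python
import sympy as sp

z = sp.symbols('z')
p, m = 1, 9   # order-one base method, nine extrapolation levels
              # (assumption: "Eight-times Repeated" = nine levels,
              # since the Classical version corresponds to one level)

# Column 0 of the extrapolation table: forward Euler performed with
# 2**i substeps of size h/2**i has stability polynomial (1 + z/2**i)**(2**i).
T = [sp.expand((1 + z / sp.Integer(2)**i)**(2**i)) for i in range(m + 1)]
for j in range(1, m + 1):
    c = sp.Integer(2)**(p + j - 1)   # eliminates the next error term
    T = [sp.expand((c * T[i] - T[i - 1]) / (c - 1)) for i in range(1, len(T))]
R = T[-1]

# Agreement with exp(z) through z**10 means order ten:
print(all(R.coeff(z, k) == sp.Rational(1, sp.factorial(k)) for k in range(11)))
```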