Open Access. Published by De Gruyter, December 31, 2019, under the CC BY 4.0 license.

A preconditioned AOR iterative scheme for systems of linear equations with L-matrices

Hongjuan Wang
From the journal Open Mathematics

Abstract

In this paper we investigate, theoretically and numerically, a new preconditioned method for the accelerated over-relaxation (AOR) and successive over-relaxation (SOR) schemes, which are used to solve large sparse systems of linear equations. Since an iterative method is usually judged by its convergence rate, we focus on the convergence rates of the different preconditioned iterative methods. Our results indicate that the proposed method is highly effective in improving the convergence rate, and that it is the best of the three preconditioned methods considered, as the comparison theorems and the numerical experiment reveal.

MSC 2010: 15-xx; 15A06; 15A24

1 Introduction

With the development of the natural and social sciences, we frequently encounter large-scale problems that lead to sparse systems of linear equations. For instance, in numerical weather forecasting, nuclear explosion simulation, and oil and gas resource development, partial differential equations are used to establish mathematical models, which generate large sparse linear systems after a suitable finite difference or finite element discretization. The traditional Gaussian elimination method is often no longer applicable because it requires too much storage. Iterative methods are therefore used to compute approximate solutions of large sparse linear systems, and many effective iterative schemes have been developed, such as the Gauss-Seidel method, the Jacobi method, the AOR method, and the SOR and SOR-like methods [1, 2].

Usually the large sparse linear system can be expressed as:

$$Ax = b, \tag{1}$$

where $A \in \mathbb{R}^{n\times n}$ and $b \in \mathbb{R}^n$ are given and $x \in \mathbb{R}^n$ is to be determined. A can be expressed in terms of the identity matrix I and strictly lower and strictly upper triangular matrices L and U, respectively, namely $A = I - L - U$. Then the iteration matrix of the Gauss-Seidel method [3] for solving the linear system (1) is

$$H = (I - L)^{-1}U. \tag{2}$$

In order to improve the convergence of iterative methods, the AOR iterative scheme was proposed [4, 5, 6]. The iteration matrix of the AOR method is

$$L_{rw} = (I - rL)^{-1}[(1-w)I + (w-r)L + wU], \tag{3}$$

where w and r are real parameters with w ≠ 0. When w = r, the AOR scheme reduces to the SOR scheme. The iterative method converges when the spectral radius of its iteration matrix is less than 1, and a smaller spectral radius means a faster convergence rate.
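For concreteness, the construction above can be sketched in a few lines of NumPy. This is an illustrative sketch of ours, not code from the paper; it assumes A has unit diagonal so that A = I − L − U as above, and the helper names are our own.

```python
# Illustrative sketch (not from the paper): build the AOR iteration matrix
# L_rw = (I - rL)^{-1} [(1-w)I + (w-r)L + wU] for A = I - L - U with unit
# diagonal, and test convergence via the spectral radius.
import numpy as np

def aor_iteration_matrix(A, r, w):
    n = A.shape[0]
    I = np.eye(n)
    L = -np.tril(A, -1)   # L >= 0: negated strictly lower part of A
    U = -np.triu(A, 1)    # U >= 0: negated strictly upper part of A
    return np.linalg.solve(I - r * L, (1 - w) * I + (w - r) * L + w * U)

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

# The AOR iteration x_{k+1} = L_rw x_k + c converges iff spectral_radius < 1.
```

With w = r the same routine yields the SOR iteration matrix, and with w = 1, r = 0 it returns the Jacobi matrix L + U.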

In the calculation process, convergence depends not only on the iteration matrix and the parameters of the iterative method, but also on the equations themselves. We can therefore multiply both sides of system (1) by a nonsingular matrix to improve the efficiency of solving the equations. The original linear system (1) is then equivalent to the following preconditioned linear system (e.g., see [7, 8, 9, 10, 11]):

$$PAx = Pb,$$

where P is a nonsingular matrix. In order to accelerate the computation, the preconditioning matrix P can take different forms [12, 13, 14, 15, 16].

The following preconditioning matrix was proposed by Evans et al. [17]:

$$P = I + \tilde S,$$

where

$$\tilde S = \begin{pmatrix} 0 & 0 & \cdots & 0\\ \vdots & \vdots & & \vdots\\ 0 & 0 & \cdots & 0\\ -a_{n1} & 0 & \cdots & 0 \end{pmatrix}.$$

Then the system (1) is equivalent to the following preconditioned system:

$$\tilde A x = \tilde b, \tag{4}$$

where $\tilde A = (I + \tilde S)A$ and $\tilde b = (I + \tilde S)b$.

Another preconditioning matrix was presented by Gunawardena et al. [18]:

$$P = I + \bar S,$$

where

$$\bar S = \begin{pmatrix} 0 & -a_{12} & 0 & \cdots & 0\\ 0 & 0 & -a_{23} & \cdots & 0\\ \vdots & & \ddots & \ddots & \vdots\\ 0 & 0 & \cdots & 0 & -a_{n-1,n}\\ 0 & 0 & \cdots & 0 & 0 \end{pmatrix}.$$

So the system (1) becomes the following:

$$\bar A x = \bar b, \tag{5}$$

where $\bar A = (I + \bar S)A$ and $\bar b = (I + \bar S)b$.

In the spirit of this previous work, in this paper we consider the following preconditioned linear system:

$$A'x = b', \tag{6}$$

where $A' = (I + S')A$ and $b' = (I + S')b$ with

$$S' = \tilde S + \bar S = \begin{pmatrix} 0 & -a_{12} & 0 & \cdots & 0\\ 0 & 0 & -a_{23} & \cdots & 0\\ \vdots & & \ddots & \ddots & \vdots\\ 0 & 0 & \cdots & 0 & -a_{n-1,n}\\ -a_{n1} & 0 & \cdots & 0 & 0 \end{pmatrix}.$$
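For illustration, all three preconditioners can be assembled directly from the entries of A. The following NumPy sketch is ours (the helper names are hypothetical, not from the paper) and again assumes A is an L-matrix with unit diagonal:

```python
# Illustrative sketch: the preconditioners I + S~, I + S-bar and I + S'
# assembled from the entries of an L-matrix A with unit diagonal.
import numpy as np

def evans_S(A):
    """S~ of Evans et al. [17]: single entry -a_{n1} in position (n, 1)."""
    n = A.shape[0]
    S = np.zeros((n, n))
    S[n - 1, 0] = -A[n - 1, 0]
    return S

def gunawardena_S(A):
    """S-bar of Gunawardena et al. [18]: -a_{i,i+1} on the superdiagonal."""
    n = A.shape[0]
    S = np.zeros((n, n))
    idx = np.arange(n - 1)
    S[idx, idx + 1] = -A[idx, idx + 1]
    return S

def combined_S(A):
    """S' = S~ + S-bar, the preconditioner studied in this paper."""
    return evans_S(A) + gunawardena_S(A)
```

Note that all three matrices are nonnegative for an L-matrix, since the off-diagonal entries $a_{i,i+1}$ and $a_{n1}$ of A are nonpositive.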

In the present work, we study the modified preconditioned method introduced above via theoretical proofs and a numerical experiment. We describe the preconditioned approaches, including the AOR and SOR schemes, in Section 2. Our results and discussion are presented in Section 3. Our conclusions are summarized in Section 4.

2 Methods

2.1 Preconditioned AOR scheme

In (4), Ã is

$$\tilde A = \tilde D - \tilde L - \tilde U,$$

where

$$\tilde D = \mathrm{diag}(1, 1, \ldots, 1, 1 - a_{1n}a_{n1}), \tag{7}$$
$$\tilde L = \begin{pmatrix} 0 & & & &\\ -a_{21} & 0 & & &\\ -a_{31} & -a_{32} & 0 & &\\ \vdots & \vdots & \ddots & \ddots &\\ 0 & a_{n1}a_{12}-a_{n2} & \cdots & a_{n1}a_{1,n-1}-a_{n,n-1} & 0 \end{pmatrix}, \tag{8}$$
$$\tilde U = U = \begin{pmatrix} 0 & -a_{12} & -a_{13} & \cdots & -a_{1n}\\ & 0 & -a_{23} & \cdots & -a_{2n}\\ & & \ddots & \ddots & \vdots\\ & & & 0 & -a_{n-1,n}\\ & & & & 0 \end{pmatrix}. \tag{9}$$

So the AOR scheme becomes

$$\tilde L_{rw} = (\tilde D - r\tilde L)^{-1}[(1-w)\tilde D + (w-r)\tilde L + w\tilde U]. \tag{10}$$

In (5), $\bar A$ is

$$\bar A = \bar D - \bar L - \bar U,$$

where

$$\bar D = \mathrm{diag}(1 - a_{12}a_{21}, \ldots, 1 - a_{n-1,n}a_{n,n-1}, 1), \tag{11}$$
$$\bar L = \begin{pmatrix} 0 & & & &\\ -a_{21}+a_{23}a_{31} & 0 & & &\\ \vdots & \ddots & \ddots & &\\ -a_{n-1,1}+a_{n-1,n}a_{n1} & -a_{n-1,2}+a_{n-1,n}a_{n2} & \cdots & 0 &\\ -a_{n1} & -a_{n2} & \cdots & -a_{n,n-1} & 0 \end{pmatrix}, \tag{12}$$
$$\bar U = \begin{pmatrix} 0 & 0 & -a_{13}+a_{12}a_{23} & \cdots & -a_{1n}+a_{12}a_{2n}\\ & 0 & 0 & \cdots & -a_{2n}+a_{23}a_{3n}\\ & & \ddots & \ddots & \vdots\\ & & & 0 & 0\\ & & & & 0 \end{pmatrix}. \tag{13}$$

Then the corresponding AOR scheme is

$$\bar L_{rw} = (\bar D - r\bar L)^{-1}[(1-w)\bar D + (w-r)\bar L + w\bar U]. \tag{14}$$

In (6), the coefficient matrix can be written as

$$A' = D' - L' - U',$$

where

$$D' = \mathrm{diag}(1-a_{12}a_{21},\, 1-a_{23}a_{32},\, \ldots,\, 1-a_{n-1,n}a_{n,n-1},\, 1-a_{n1}a_{1n}), \tag{15}$$
$$L' = \begin{pmatrix} 0 & & & &\\ -a_{21}+a_{23}a_{31} & 0 & & &\\ \vdots & \ddots & \ddots & &\\ -a_{n-1,1}+a_{n-1,n}a_{n1} & -a_{n-1,2}+a_{n-1,n}a_{n2} & \cdots & 0 &\\ 0 & -a_{n2}+a_{n1}a_{12} & \cdots & -a_{n,n-1}+a_{n1}a_{1,n-1} & 0 \end{pmatrix}, \tag{16}$$
$$U' = \bar U = \begin{pmatrix} 0 & 0 & -a_{13}+a_{12}a_{23} & \cdots & -a_{1n}+a_{12}a_{2n}\\ & 0 & 0 & \cdots & -a_{2n}+a_{23}a_{3n}\\ & & \ddots & \ddots & \vdots\\ & & & 0 & 0\\ & & & & 0 \end{pmatrix}. \tag{17}$$

Here the AOR scheme becomes

$$L'_{rw} = (D' - rL')^{-1}[(1-w)D' + (w-r)L' + wU']. \tag{18}$$
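The splittings above can be verified mechanically: form $A' = (I + S')A$, read off $D'$, $L'$, $U'$, and assemble (18). A minimal sketch, reusing the hypothetical helpers from the earlier snippets:

```python
# Illustrative sketch: form the preconditioned coefficient matrix and the
# AOR iteration matrix (18); S_builder is one of the functions above.
import numpy as np

def preconditioned_aor_matrix(A, r, w, S_builder):
    Ap = (np.eye(A.shape[0]) + S_builder(A)) @ A   # A' = (I + S')A
    D = np.diag(np.diag(Ap))                       # diagonal part D'
    L = -np.tril(Ap, -1)                           # strictly lower part, negated
    U = -np.triu(Ap, 1)                            # strictly upper part, negated
    return np.linalg.solve(D - r * L, (1 - w) * D + (w - r) * L + w * U)
```

Passing a builder that returns the zero matrix recovers the unpreconditioned iteration matrix (3), since A has unit diagonal.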

2.2 Our lemma

Lemma 1

Let A and A′ be the coefficient matrices of the linear systems (1) and (6), respectively. Suppose $0 \le r \le w \le 1$ ($w \ne 0$, $r \ne 1$) and A is an irreducible L-matrix with $0 < a_{1n}a_{n1} < 1$ and $0 < a_{i,i+1}a_{i+1,i} < 1$, $i = 1, 2, \cdots, n-1$. (In fact, these product conditions already imply that A is irreducible, since all entries $a_{i,i+1}$ and $a_{i+1,i}$ are nonzero.)

Then the iteration matrices $L_{rw}$ and $L'_{rw}$ associated with the AOR method applied to the linear systems (1) and (6), respectively, are nonnegative and irreducible.

Proof

Since A is an L-matrix (i.e., $a_{ii} > 0$, $i = 1, \cdots, n$, and $a_{ij} \le 0$ for all $i \ne j$ [19]), $L \ge 0$ is a strictly lower triangular matrix and $U \ge 0$ is a strictly upper triangular matrix. So $(I - rL)^{-1} = I + rL + r^2L^2 + \cdots + r^{n-1}L^{n-1} \ge 0$.

By (3), we have

$$\begin{aligned} L_{rw} &= (I-rL)^{-1}[(1-w)I+(w-r)L+wU]\\ &= [I+rL+r^2L^2+\cdots+r^{n-1}L^{n-1}][(1-w)I+(w-r)L+wU]\\ &= (1-w)I+(w-r)L+wU+(1-w)rL+rL[(w-r)L+wU]\\ &\qquad +(r^2L^2+\cdots+r^{n-1}L^{n-1})[(1-w)I+(w-r)L+wU]\\ &= (1-w)I+w(1-r)L+wU+T, \end{aligned}$$

where

$$T = rL[(w-r)L+wU] + (r^2L^2+\cdots+r^{n-1}L^{n-1})[(1-w)I+(w-r)L+wU] \ge 0.$$

So $L_{rw}$ is nonnegative. Because $0 < a_{i,i+1}a_{i+1,i} < 1$, $i = 1, 2, \cdots, n-1$, A is irreducible (i.e., the directed graph of A is strongly connected). Hence $(1-w)I + w(1-r)L + wU$ is also irreducible, since it has the same off-diagonal nonzero pattern as A. So $L_{rw}$ is irreducible.

As for $L'_{rw}$, by (18), we have

$$\begin{aligned} L'_{rw} &= (D'-rL')^{-1}[(1-w)D'+(w-r)L'+wU']\\ &= (I-rD'^{-1}L')^{-1}[(1-w)I+(w-r)D'^{-1}L'+wD'^{-1}U']\\ &= (1-w)I+w(1-r)D'^{-1}L'+wD'^{-1}U'+T', \end{aligned}$$

where

$$T' = rD'^{-1}L'[(w-r)D'^{-1}L'+wD'^{-1}U'] + [r^2(D'^{-1}L')^2+\cdots+r^{n-1}(D'^{-1}L')^{n-1}][(1-w)I+(w-r)D'^{-1}L'+wD'^{-1}U'] \ge 0.$$

From $D'^{-1} \ge 0$, $L' \ge 0$ and $U' \ge 0$, we get $L'_{rw} \ge 0$.

Let

$$C = L' + U' = \begin{pmatrix} 0 & 0 & \cdots & -a_{1,n-1}+a_{12}a_{2,n-1} & -a_{1n}+a_{12}a_{2n}\\ -a_{21}+a_{23}a_{31} & 0 & \cdots & -a_{2,n-1}+a_{23}a_{3,n-1} & -a_{2n}+a_{23}a_{3n}\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ -a_{n-1,1}+a_{n-1,n}a_{n1} & -a_{n-1,2}+a_{n-1,n}a_{n2} & \cdots & 0 & 0\\ 0 & -a_{n2}+a_{n1}a_{12} & \cdots & -a_{n,n-1}+a_{n1}a_{1,n-1} & 0 \end{pmatrix}.$$

Because $0 < a_{1n}a_{n1} < 1$ and $0 < a_{i,i+1}a_{i+1,i} < 1$, $i = 1, 2, \cdots, n-1$, at least the following entries of the matrix C are nonzero:

$$c_{i,i+2} = -a_{i,i+2} + a_{i,i+1}a_{i+1,i+2} \ne 0, \quad i = 1, 2, \cdots, n-2,$$
$$c_{n-1,1} = -a_{n-1,1} + a_{n-1,n}a_{n1} \ne 0,$$

and

$$c_{n2} = -a_{n2} + a_{n1}a_{12} \ne 0.$$

That is to say, L′ + U′ is irreducible. Since w ≠ 0, r ≠ 1 and L′ + U′ is irreducible, $w(1-r)D'^{-1}L' + wD'^{-1}U'$ is irreducible. From $L'_{rw} = (1-w)I + w(1-r)D'^{-1}L' + wD'^{-1}U' + T'$ and $T' \ge 0$, we conclude that $L'_{rw}$ is irreducible.

3 Results and discussion

Theorem 1

Let $L_{rw}$ and $L'_{rw}$ be defined by (3) and (18), respectively. Under the hypotheses of Lemma 1, we have

  1. $\rho(L'_{rw}) < \rho(L_{rw})$, if $\rho(L_{rw}) < 1$;

  2. $\rho(L'_{rw}) = \rho(L_{rw})$, if $\rho(L_{rw}) = 1$;

  3. $\rho(L'_{rw}) > \rho(L_{rw})$, if $\rho(L_{rw}) > 1$.

Proof

From Lemma 1, we know that $L_{rw}$ and $L'_{rw}$ are nonnegative and irreducible matrices. Thus, by the results in [20] (a nonnegative irreducible matrix A has a positive real eigenvalue equal to its spectral radius ρ(A), with an associated eigenvector x > 0), there is a positive vector x such that $L_{rw}x = \lambda x$ with $\lambda = \rho(L_{rw})$, i.e.,

$$[(1-w)I+(w-r)L+wU]x = \lambda(I-rL)x. \tag{19}$$

Therefore, for this x > 0,

$$L'_{rw}x - \lambda x = (D'-rL')^{-1}[(1-w)D' + (w-r)L' + wU' - \lambda(D'-rL')]x. \tag{20}$$

Based on (15), (16) and (17), we get that

$$D' - L' = I - L - \bar SL + \tilde S - \tilde SU, \tag{21}$$
$$U' = \bar SU - \bar S + U. \tag{22}$$

Because of

$$\lambda(D'-rL')x = \lambda(1-r)D'x + \lambda r(D'-L')x, \tag{23}$$

we obtain the following formula from (21), (22), (23) and (20)

$$\begin{aligned} L'_{rw}x-\lambda x ={}& (D'-rL')^{-1}\big[(1-w)D'+(w-r)(D'-I+L+\bar SL-\tilde S+\tilde SU)\\ &+w(\bar SU-\bar S+U)-\lambda(1-r)D'-\lambda r(I-L-\bar SL+\tilde S-\tilde SU)\big]x\\ ={}& (D'-rL')^{-1}\big[(1-r)(1-\lambda)(D'-I)+(w-r)\bar SL-(w-r)\tilde S+(w-r)\tilde SU\\ &+w\bar SU-w\bar S+\lambda r\bar SL-\lambda r\tilde S+\lambda r\tilde SU\big]x. \end{aligned}$$

From (19), we have

$$wUx = (\lambda-1+w)x + (r-w-\lambda r)Lx. \tag{24}$$

Using (24), we get

$$L'_{rw}x-\lambda x = (D'-rL')^{-1}\big[(1-r)(1-\lambda)(D'-I)-(1-r)(1-\lambda)\tilde S-r\tilde SU+\lambda r\tilde SU+(r-w-\lambda r)\tilde SL+(\lambda-1)\bar S\big]x.$$

Since $\tilde SL = 0$, this can be written as

$$L'_{rw}x-\lambda x = (D'-rL')^{-1}\big[(1-r)(1-\lambda)(D'-I)-(1-r)(1-\lambda)\tilde S-(1-\lambda)\bar S-r(1-\lambda)\tilde SU\big]x. \tag{25}$$

Because

$$D'-I = \mathrm{diag}(-a_{12}a_{21},\, -a_{23}a_{32},\, \ldots,\, -a_{n-1,n}a_{n,n-1},\, -a_{n1}a_{1n}),$$
$$\tilde SU = \begin{pmatrix} 0 & 0 & 0 & \cdots & 0\\ \vdots & \vdots & \vdots & & \vdots\\ 0 & 0 & 0 & \cdots & 0\\ 0 & a_{n1}a_{12} & a_{n1}a_{13} & \cdots & a_{n1}a_{1n} \end{pmatrix}, \tag{26}$$

we have

$$D'-I \le 0, \qquad \tilde S \ge 0, \qquad \bar S \ge 0, \qquad \tilde SU \ge 0.$$

Let $B = (D'-rL')^{-1}[(1-r)(D'-I)-(1-r)\tilde S-\bar S-r\tilde SU]x$; then $B \le 0$. So (25) becomes

$$L'_{rw}x - \lambda x = (1-\lambda)B.$$

At the same time, from the results in [21] (for a nonnegative matrix A: if $\alpha x \le Ax$ for some nonnegative vector $x \ne 0$, then $\alpha \le \rho(A)$; if $Ax \le \beta x$ for some positive vector x, then $\rho(A) \le \beta$; furthermore, if A is irreducible and $0 \ne \alpha x \le Ax \le \beta x$ for some nonnegative vector x, then $\alpha \le \rho(A) \le \beta$ and x is a positive vector), we can get the following results:

  1. If 0 < λ < 1, then $\rho(L'_{rw}) < \lambda = \rho(L_{rw})$;

  2. If λ = 1, then $\rho(L'_{rw}) = \lambda = \rho(L_{rw})$;

  3. If λ > 1, then $\rho(L'_{rw}) > \lambda = \rho(L_{rw})$.

The following theorem compares the convergence rates of the AOR iterative scheme under two different preconditioners.

Theorem 2

Let $\tilde L_{rw}$ and $L'_{rw}$ be the iteration matrices of the AOR method defined by (10) and (18), respectively. Suppose $0 \le r \le w \le 1$ ($w \ne 0$, $r \ne 1$), A is an irreducible L-matrix with $0 < a_{1n}a_{n1} < 1$, and there exists a non-empty subset $\beta$ of $N = \{1, 2, \cdots, n-1\}$ such that

$$0 < a_{i,i+1}a_{i+1,i} < 1, \quad i \in \beta, \qquad a_{i,i+1}a_{i+1,i} = 0, \quad i \in N\setminus\beta.$$

Then we obtain

  1. $\rho(L'_{rw}) < \rho(\tilde L_{rw})$, if $\rho(\tilde L_{rw}) < 1$;

  2. $\rho(L'_{rw}) = \rho(\tilde L_{rw})$, if $\rho(\tilde L_{rw}) = 1$;

  3. $\rho(L'_{rw}) > \rho(\tilde L_{rw})$, if $\rho(\tilde L_{rw}) > 1$.

Proof

From Lemma 3.4 in [22] and Lemma 1, we know that $\tilde L_{rw}$ and $L'_{rw}$ are nonnegative and irreducible matrices. So there exists a positive vector x such that $\tilde L_{rw}x = \lambda x$ with $\lambda = \rho(\tilde L_{rw})$ (by the results in [20]).

From (10), we have

$$(\tilde D-r\tilde L)^{-1}[(1-w)\tilde D+(w-r)\tilde L+w\tilde U]x = \lambda x,$$

i.e.,

$$[(1-w)\tilde D+(w-r)\tilde L+w\tilde U]x = \lambda(\tilde D-r\tilde L)x. \tag{27}$$

Also, as in (20),

$$L'_{rw}x - \lambda x = (D'-rL')^{-1}[(1-w)D'+(w-r)L'+wU'-\lambda(D'-rL')]x, \tag{28}$$

where x is a positive vector.

Based on (7), (8) and (16), we get

$$D' - L' = \tilde D - \tilde L - \bar SL.$$

By (9) and (17), we can obtain

$$U' = \bar S\tilde U - \bar S + \tilde U.$$

From (27), we have

$$w\tilde Ux = (\lambda-1+w)\tilde Dx + (r-w-\lambda r)\tilde Lx.$$

Since

$$\lambda(D'-rL')x = \lambda(1-r)D'x + \lambda r(D'-L')x,$$

(28) can be written as

$$\begin{aligned} L'_{rw}x-\lambda x ={}& (D'-rL')^{-1}\big[(1-w)D'+(w-r)(D'-\tilde D+\tilde L+\bar SL)+w(\bar S\tilde U-\bar S+\tilde U)\\ &-\lambda(1-r)D'-\lambda r(\tilde D-\tilde L-\bar SL)\big]x\\ ={}& (D'-rL')^{-1}\big[(1-r)(1-\lambda)(D'-\tilde D)+(w-r)\bar SL+w\bar S\tilde U-w\bar S+\lambda r\bar SL\big]x. \end{aligned}$$

We can get the following from (9) and (24):

$$L'_{rw}x-\lambda x = (D'-rL')^{-1}\big[(1-r)(1-\lambda)(D'-\tilde D)+(\lambda-1)\bar S\big]x.$$

By (7) and (15), we have $D' - \tilde D \le 0$. Let $Q = (D'-rL')^{-1}[(1-r)(D'-\tilde D) - \bar S]x$. It is obvious that $Q \le 0$ and $L'_{rw}x - \lambda x = (1-\lambda)Q$. The rest of the proof is similar to that of Theorem 1.

Based on Theorem 3.5. in [22] and Theorem 2, we obtain the following corollary.

Corollary 1

Let $L_{rw}$, $\tilde L_{rw}$ and $L'_{rw}$ be defined by (3), (10) and (18), respectively. Under the hypotheses of Theorem 2, we have

  1. $\rho(L'_{rw}) < \rho(\tilde L_{rw}) < \rho(L_{rw})$, if $\rho(L_{rw}) < 1$;

  2. $\rho(L'_{rw}) = \rho(\tilde L_{rw}) = \rho(L_{rw})$, if $\rho(L_{rw}) = 1$;

  3. $\rho(L'_{rw}) > \rho(\tilde L_{rw}) > \rho(L_{rw})$, if $\rho(L_{rw}) > 1$.

If w = r in Corollary 1, we obtain the corresponding results for the SOR method, and if w = 1, r = 0, we get the corresponding results for the Jacobi method.

Theorem 3

Let $\bar L_{rw}$ and $L'_{rw}$ be defined by (14) and (18), respectively. Under the conditions of Theorem 2, we have

  1. $\rho(L'_{rw}) < \rho(\bar L_{rw})$, if $\rho(\bar L_{rw}) < 1$;

  2. $\rho(L'_{rw}) = \rho(\bar L_{rw})$, if $\rho(\bar L_{rw}) = 1$;

  3. $\rho(L'_{rw}) > \rho(\bar L_{rw})$, if $\rho(\bar L_{rw}) > 1$.

Proof

From Lemma 4 in [23] and Lemma 1, it is clear that $\bar L_{rw}$ and $L'_{rw}$ are nonnegative and irreducible matrices. So there exists a positive vector x such that $\bar L_{rw}x = \lambda x$, where $\lambda = \rho(\bar L_{rw})$. From (11), (12), (15) and (16), the following equality is easily verified:

$$D' - L' = \bar D - \bar L + \tilde S - \tilde SU$$

and

$$\begin{aligned} L'_{rw}x-\lambda x ={}& (D'-rL')^{-1}[(1-w)D'+(w-r)L'+wU'-\lambda(D'-rL')]x\\ ={}& (D'-rL')^{-1}\big[(1-w)D'+(w-r)(D'-\bar D+\bar L-\tilde S+\tilde SU)+w\bar U\\ &-\lambda(1-r)D'-\lambda r(\bar D-\bar L+\tilde S-\tilde SU)\big]x\\ ={}& (D'-rL')^{-1}\big[(1-w)D'+(w-r)(D'-\bar D+\bar L-\tilde S+\tilde SU)+(\lambda-1+w)\bar D\\ &+(r-w-\lambda r)\bar L-\lambda(1-r)D'-\lambda r(\bar D-\bar L+\tilde S-\tilde SU)\big]x\\ ={}& (D'-rL')^{-1}\big[(1-r)(1-\lambda)(D'-\bar D)+(r-w-\lambda r)\tilde S+(\lambda-1+w)\tilde S\tilde D\\ &+(r-w-\lambda r)\tilde S\tilde L+(\lambda r-r)\tilde SU\big]x, \end{aligned}$$

where x is a positive vector.

Since $\tilde S\tilde D = \tilde S$ and $\tilde S\tilde L = 0$,

$$L'_{rw}x-\lambda x = (D'-rL')^{-1}\big[(1-r)(1-\lambda)(D'-\bar D)-(1-r)(1-\lambda)\tilde S-r(1-\lambda)\tilde SU\big]x.$$

Let $K = (D'-rL')^{-1}[(1-r)(D'-\bar D)-(1-r)\tilde S-r\tilde SU]x$. Obviously, $K \le 0$ and $L'_{rw}x - \lambda x = (1-\lambda)K$. The rest of the proof is similar to that of Theorem 1.

From Theorem 2 in [23] and Theorem 3, we have the following corollary.

Corollary 2

Let $L_{rw}$, $\bar L_{rw}$ and $L'_{rw}$ be defined by (3), (14) and (18), respectively. Under the same hypotheses, we get

  1. $\rho(L'_{rw}) < \rho(\bar L_{rw}) < \rho(L_{rw})$, if $\rho(L_{rw}) < 1$;

  2. $\rho(L'_{rw}) = \rho(\bar L_{rw}) = \rho(L_{rw})$, if $\rho(L_{rw}) = 1$;

  3. $\rho(L'_{rw}) > \rho(\bar L_{rw}) > \rho(L_{rw})$, if $\rho(L_{rw}) > 1$.

As with Corollary 1, Corollary 2 also applies to the SOR and Jacobi iterative methods.

Remark 1

From these results, we conclude that the spectral radius of our preconditioned AOR (SOR, Jacobi) iteration matrix is the smallest. That is to say, among the three preconditioned methods above, our modified AOR (SOR, Jacobi) scheme converges fastest.

4 Example

We now present a numerical example to verify the theorems.

The coefficient matrix A of (1) is the following:

$$A = \begin{pmatrix}
1 & -\frac{1}{n\times 10} & 0 & -\frac{1}{(n-1)\times 10} & \cdots & -\frac{1}{3\times 10} & 0 & -\frac{1}{10}\\
-\frac{1}{n\times 10+1} & 1 & -\frac{1}{3\times 10+2} & \cdots & \cdots & \cdots & -\frac{1}{(n-1)\times 10+2} & -\frac{1}{n\times 10+2}\\
-\frac{1}{(n-1)\times 10+1} & -\frac{1}{2\times 10+3} & 1 & \cdots & \cdots & \cdots & -\frac{1}{(n-1)\times 10+3} & -\frac{1}{n\times 10+3}\\
\vdots & \vdots & & \ddots & & & \vdots & \vdots\\
-\frac{1}{3\times 10+1} & -\frac{1}{(n-2)\times 10+(n-1)} & -\frac{1}{(n-3)\times 10+(n-1)} & \cdots & \cdots & \cdots & 1 & -\frac{1}{n\times 10+(n-1)}\\
-\frac{1}{105} & -\frac{1}{(n-1)\times 10+n} & -\frac{1}{(n-2)\times 10+n} & \cdots & \cdots & \cdots & -\frac{1}{2\times 10+n} & 1
\end{pmatrix}.$$

By applying the three preconditioned methods to this linear system, we obtain Table 1, Table 2 and Table 3.
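In principle, the tables can be reproduced with the sketches given earlier; the following driver is ours (not the author's code) and reuses the hypothetical helpers defined above:

```python
# Illustrative driver: compare the spectral radii of the unpreconditioned
# and the three preconditioned AOR iteration matrices for one (A, r, w).
import numpy as np

def compare(A, r, w):
    builders = [
        ("no preconditioner    ", lambda M: np.zeros_like(M)),
        ("Evans (I + S~)       ", evans_S),
        ("Gunawardena (I + S-) ", gunawardena_S),
        ("combined (I + S')    ", combined_S),
    ]
    for name, build in builders:
        rho = spectral_radius(preconditioned_aor_matrix(A, r, w, build))
        print(f"{name} rho = {rho:.4f}")

# e.g. compare(A, 0.8, 0.9) for the n = 10 test matrix should reproduce the
# ordering rho(L'_rw) < rho(L~_rw), rho(L-bar_rw) < rho(L_rw) of Table 1.
```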

Table 1

Comparison of spectral radius of AOR scheme.

n     r      w      $\rho(L_{rw})$   $\rho(\tilde L_{rw})$   $\rho(\bar L_{rw})$   $\rho(L'_{rw})$
10    0.8    0.9    0.3696           0.1723                  0.3690                0.1614
20    0.8    0.95   0.3340           0.1525                  0.3337                0.1420
50    0.75   0.8    0.4507           0.3230                  0.4506                0.3166
100   0.85   0.9    0.3518           0.2369                  0.3517                0.2306

Table 2

Comparison of spectral radius of SOR scheme.

n     r      w      $\rho(L_{rw})$   $\rho(\tilde L_{rw})$   $\rho(\bar L_{rw})$   $\rho(L'_{rw})$
10    0.7    0.7    0.5302           0.3662                  0.5298                0.3579
30    0.8    0.8    0.4388           0.2988                  0.4387                0.2910
40    0.95   0.95   0.2739           0.1388                  0.2737                0.1290

Table 3

Comparison of spectral radius of Jacobi scheme.

n     r   w   $\rho(B)$   $\rho(\tilde B)$   $\rho(\bar B)$   $\rho(B')$
20    0   1   0.4513      0.2131             0.4511           0.2042
60    0   1   0.4501      0.2824             0.4500           0.2769

Remark 2

From the tables, we find that the numerical results are in accordance with the above theorems. These results imply that the improved preconditioner $(I + S')$ is the most effective in accelerating the convergence of the AOR (SOR, Jacobi) iterative scheme among the three preconditioners $(I + \tilde S)$, $(I + \bar S)$ and $(I + S')$.

5 Conclusions

In this work, we have studied an improved preconditioned AOR (SOR, Jacobi) iterative scheme. In order to identify the most effective method for improving the convergence rate, we proved comparison theorems and carried out a numerical experiment for the three preconditioned methods. Our main conclusions are summarized below.

  1. The comparison theorems show that, when the spectral radius of the original AOR iteration matrix is less than 1, the spectral radius of our new preconditioned AOR (SOR, Jacobi) iteration matrix is the smallest among the three preconditioned methods. This suggests that the convergence of the improved preconditioned AOR (SOR, Jacobi) scheme is the fastest.

  2. Our numerical results indicate that the new preconditioned method is the most effective in accelerating the convergence of the AOR (SOR, Jacobi) iterative scheme.

Acknowledgements

This work was supported by the National Natural Science Foundation (Grant No. 11603004), Beijing Natural Science Foundation (Grant No. 1173010), and Beijing Education Commission Project (Grant No. KM201710015004).

References

[1] Ke Y.F., Ma C.F., SOR-like iteration method for solving absolute value equations, Appl. Math. Comput., 2017, 311, 195–202. DOI: 10.1016/j.amc.2017.05.035.

[2] Zhang C.Y., Xu F., Xu Z., Li J., General H-matrices and their Schur complements, Front. Math. China, 2014, 9, 1141–1168. DOI: 10.1007/s11464-014-0395-1.

[3] Edalatpour V., Hezari D., Salkuyeh D., A generalization of the Gauss–Seidel iteration method for solving absolute value equations, Appl. Math. Comput., 2017, 293, 156–167. DOI: 10.1016/j.amc.2016.08.020.

[4] Hadjidimos A., Accelerated overrelaxation method, Math. Comp., 1978, 32, 149–157. DOI: 10.1090/S0025-5718-1978-0483340-6.

[5] Huang Z.G., Wang L.G., Xu Z., Cui J.J., Some new preconditioned generalized AOR methods for solving weighted linear least squares problems, Comp. Appl. Math., 2018, 37, 415–438. DOI: 10.1007/s40314-016-0350-8.

[6] Li C., A preconditioned AOR iterative method for the absolute value equations, Int. J. Comput. Methods, 2017, 14(2), 1750016. DOI: 10.1142/S0219876217500165.

[7] Saberi Najafi H., Edalatpanah S.A., Refahisheikhani A.H., An analytical method as a preconditioning modeling for systems of linear equations, Comp. Appl. Math., 2018, 37, 922–931. DOI: 10.1007/s40314-016-0376-y.

[8] Dai P., Li J., Li Y., Bai J., A general preconditioner for linear complementarity problem with an M-matrix, J. Comput. Appl. Math., 2017, 317, 100–112. DOI: 10.1016/j.cam.2016.11.034.

[9] Salkuyeh D.K., Hasani M., Beik F.P.A., On the preconditioned AOR iterative method for Z-matrices, Comp. Appl. Math., 2017, 36, 877–883. DOI: 10.1007/s40314-015-0266-8.

[10] Liu Q., Huang J., Zeng S., Convergence analysis of the two preconditioned iterative methods for M-matrix linear systems, J. Comput. Appl. Math., 2015, 281, 49–57. DOI: 10.1016/j.cam.2014.11.034.

[11] Saberi Najafi H., Edalatpanah S.A., A new family of (I+S)-type preconditioner with some applications, Comp. Appl. Math., 2015, 34, 917–931. DOI: 10.1007/s40314-014-0161-8.

[12] Yun J., Comparison results of the preconditioned AOR methods for L-matrices, Appl. Math. Comput., 2011, 218, 3399–3413. DOI: 10.1016/j.amc.2011.08.085.

[13] Yuan J., Zontini D., Comparison theorems of preconditioned Gauss-Seidel methods for M-matrices, Appl. Math. Comput., 2012, 219, 1947–1957. DOI: 10.1016/j.amc.2012.08.037.

[14] Wang H., Li Y., A new preconditioned AOR iterative method for M-matrices, J. Comput. Appl. Math., 2009, 229, 47–56. DOI: 10.1007/s12190-010-0423-6.

[15] Wu S., Li C., A note on parameterized block triangular preconditioners for generalized saddle point problems, Appl. Math. Comput., 2013, 219, 7907–7916. DOI: 10.1016/j.amc.2013.01.040.

[16] Wu S., Li C., Eigenvalue estimates of indefinite block triangular preconditioner for saddle point problems, J. Comput. Appl. Math., 2014, 260, 349–355. DOI: 10.1016/j.cam.2013.10.009.

[17] Evans D.J., Martins M.M., Trigo M.E., The AOR iterative method for new preconditioned linear systems, J. Comput. Appl. Math., 2001, 132, 461–466. DOI: 10.1016/S0377-0427(00)00447-7.

[18] Gunawardena A.D., Jain S.K., Snyder L., Modified iterative methods for consistent linear systems, Linear Algebra Appl., 1991, 154, 123–143. DOI: 10.1016/0024-3795(91)90376-8.

[19] Young D.M., Iterative Solution of Large Linear Systems, Academic Press, New York, London, 1971.

[20] Varga R.S., Matrix Iterative Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1962.

[21] Berman A., Plemmons R.J., Nonnegative Matrices in the Mathematical Sciences, SIAM, Philadelphia, PA, 1994. DOI: 10.1137/1.9781611971262.

[22] Yun J.H., A note on preconditioned AOR method for L-matrices, J. Comput. Appl. Math., 2008, 220, 13–16. DOI: 10.1016/j.cam.2007.07.009.

[23] Li Y.T., Li C., Wu S., Improving AOR method for consistent linear systems, Appl. Math. Comput., 2007, 186, 379–388. DOI: 10.1016/j.amc.2006.07.097.

Received: 2018-10-24
Accepted: 2019-10-21
Published Online: 2019-12-31

© 2019 Hongjuan Wang, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 Public License.
