Multipreconditioned GMRES for simulating stochastic automata networks

Chun Wen
  • School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China
/ Ting-Zhu Huang
  • School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China
/ Xian-Ming Gu
  • Corresponding author
  • School of Economic Mathematics/Institute of Mathematics, Southwestern University of Finance and Economics, Chengdu, Sichuan 611130, China
  • Johann Bernoulli Institute of Mathematics and Computer Science, University of Groningen, Nijenborgh 9, P.O. Box 407, 9700 AK Groningen, The Netherlands
/ Zhao-Li Shen
  • School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China
  • Johann Bernoulli Institute of Mathematics and Computer Science, University of Groningen, Nijenborgh 9, P.O. Box 407, 9700 AK Groningen, The Netherlands
/ Hong-Fan Zhang
  • School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China
/ Chen Liu
Published Online: 2018-08-24 | DOI: https://doi.org/10.1515/math-2018-0083

Abstract

Stochastic Automata Networks (SANs) have a wide range of applications in modelling queueing systems and communication systems. To find the steady state probability distribution of a SAN, one often needs to solve linear systems involving its generator matrix. However, classical iterative methods such as Jacobi and Gauss-Seidel are inefficient due to the huge size of the generator matrices. In this paper, the multipreconditioned GMRES (MPGMRES) is considered, in which two or more preconditioners are used simultaneously. Meanwhile, a selective version of the MPGMRES is presented to overcome the rapid growth of the storage requirements and make the method practical. Numerical results on two models of SANs are reported to illustrate the effectiveness of the proposed methods.

Keywords: Preconditioner; GMRES; Arnoldi process; Stochastic automata network

MSC 2010: 65F10; 90B15

1 Introduction

The use of Stochastic Automata Networks (SANs) is of interest in a wide range of applications, including the modeling of queueing systems [1, 2, 3], communication systems [4, 5], manufacturing systems and inventory control [6]. The generator matrices of these SANs are block tridiagonal matrices whose blocks are sums of tensor products of matrices. To analyze the performance of a SAN, one must find its steady state probability distribution, which can be obtained by solving a large linear system involving the generator matrix.

Iterative procedures are often employed to compute the steady state probability distribution of SANs, such as the classical Jacobi, Gauss-Seidel, SOR, SSOR and TSS iterative methods [3, 6, 7, 8]. These methods are simple and easy to implement, but they usually suffer from slow convergence, and they do not appear to be practical when the size of the generator matrix is huge [6, 9, 10]. Hence, Krylov subspace methods, whose main arithmetic operations are matrix-vector multiplications, have been developed; examples include the popular Arnoldi/FOM, GMRES, CGS, BiCGSTAB and QMR methods [7, 11, 12, 13, 14, 15, 16, 17, 18]. Built on a Krylov subspace of small dimension, such iterative methods are able to efficiently approximate the steady state probability distribution of SANs [7, 10, 19]. However, they may also suffer from slow convergence, even though many numerical experiments indicate that they generally outperform the classical iterative methods; see, e.g., [3, 7] for details.

It is well known that preconditioning techniques are often necessary to improve the convergence of iterative methods by modifying the eigenvalue distribution of the iteration matrix, while the solution remains unchanged. Preconditioners based on incomplete LU factorizations are possible choices and have been discussed in [3, 7, 20]. A SAN preconditioner (based on the Neumann series) was proposed by Stewart et al. [10]; numerical experiments show that it is expensive and requires considerable execution time. To exploit the tensor structure of the generator matrix of a SAN, an approximate inverse preconditioner, called the Nearest Kronecker Product (NKP) preconditioner, was proposed by Langville and Stewart [19]. Numerical tests on some SANs have shown the effectiveness of the NKP preconditioner; however, it is not always appropriate for general Markov chains [21]. Taking circulant approximations of the tensor blocks of the generator matrix, circulant preconditioners were constructed for a SAN [2, 4]. These are simple to construct, and circulant matrices can be diagonalized efficiently by fast Fourier transforms [22]. However, the successful application of these circulant preconditioners depends on whether the tensor blocks of the generator matrix are near-Toeplitz matrices. In fact, different preconditioners have different efficiencies for different model problems. Which one is better, and how to develop more efficient preconditioners, especially for very large systems, remain open questions.

In this paper, motivated by the idea of [23], our main contribution is to find the steady state probability distribution of a SAN by employing a variant of GMRES in which two or more preconditioners are applied simultaneously, denoted the multipreconditioned GMRES (MPGMRES). Compared to the standard GMRES, one advantage of the MPGMRES is that it finds an approximate solution over a richer search space, enlarged by collecting all possible search directions generated by the different preconditioners at each iteration. Its disadvantage is that more memory is required, since the search space grows exponentially at each iteration. Fortunately, a selective version of the MPGMRES (sMPGMRES) can be considered, which discards some search directions so that the search space grows only linearly at each iteration. In fact, the idea of using multiple preconditioners is not new; e.g., Bridson and Greif [24] proposed the multipreconditioned conjugate gradient method (MPCG) for symmetric positive definite linear systems. Even though the short-term recurrence of the standard CG does not hold in general, the MPCG is very useful for determining the most effective preconditioner by executing a few iterations. On the other hand, for linear systems with a nonsymmetric coefficient matrix, a combined preconditioning strategy has been analyzed in [25], showing that an optimal linear combination of two preconditioners can achieve good numerical performance.

The rest of this paper is organized as follows. In Section 2, a brief introduction of two models of SANs is given. The derivation of the MPGMRES and its selective version are discussed in Section 3. In Section 4, numerical experiments on two SANs are employed to illustrate the effectiveness of these methods. Finally, some conclusions are given in Section 5.

2 Two models of SANs

To introduce certain terminologies and notations of SANs, two models are described in this section: a two-queue overflow network and a telecommunication system.

2.1 A two-queue overflow network

The early research on the two-queue overflow network can be found in [2, 26]. This network consists of two independent queues, where queue i (i = 1, 2) has s_i servers and l_i − s_i − 1 waiting spaces, so that at most l_i − 1 customers can be present in queue i. For queue i, let λ_i be the arrival rate of customers and μ_i the service rate of each server. The state space of the queueing system can then be represented by the elements of the set

$$S = \{(i, j) \mid 0 \le i \le l_1 - 1,\ 0 \le j \le l_2 - 1\},$$

where i represents the number of customers in queue 1 and j the number of customers in queue 2. Thus it is a two-dimensional queueing system.

Generally speaking, the queueing discipline is first-come-first-served. When a customer arrives and finds all the servers busy, the customer has two choices: either wait in the queue, provided that a waiting space is still available, or leave the system. Here we additionally allow the overflow of customers from queue 2 to queue 1 whenever queue 2 is full and queue 1 is not; see Fig. 1. The generator matrix can be shown to be the following l_1 l_2 × l_1 l_2 matrix in tensor product form

Fig. 1: The two-queue overflow network.

$$A = Q_1 \otimes I_{l_2} + I_{l_1} \otimes Q_2 + R \otimes e_{l_2} e_{l_2}^T, \tag{1}$$

where

$$Q_i = \begin{pmatrix}
\lambda_i & -\mu_i & & & & & & 0\\
-\lambda_i & \lambda_i + \mu_i & -2\mu_i & & & & &\\
& \ddots & \ddots & \ddots & & & &\\
& & -\lambda_i & \lambda_i + (s_i - 1)\mu_i & -s_i\mu_i & & &\\
& & & -\lambda_i & \lambda_i + s_i\mu_i & -s_i\mu_i & &\\
& & & & \ddots & \ddots & \ddots &\\
& & & & & -\lambda_i & \lambda_i + s_i\mu_i & -s_i\mu_i\\
0 & & & & & & -\lambda_i & s_i\mu_i
\end{pmatrix}, \tag{2}$$

$$R = \begin{pmatrix}
\lambda_2 & & & & 0\\
-\lambda_2 & \lambda_2 & & &\\
& \ddots & \ddots & &\\
& & -\lambda_2 & \lambda_2 &\\
0 & & & -\lambda_2 & 0
\end{pmatrix}, \tag{3}$$

where $I_{l_i}$ is the identity matrix of size $l_i$, $i = 1, 2$, and $e_{l_2}$ is the unit vector $(0, \dots, 0, 1)^T$; see [2, 26] for instance. Let $n = l_1 l_2$; then the steady state probability distribution vector $p = (p_1, p_2, \dots, p_n)^T$ is the solution of the linear system $Ap = 0$ with the constraints

$$\sum_{i=1}^{n} p_i = 1, \quad \text{and} \quad p_i \ge 0, \quad 1 \le i \le n. \tag{4}$$

According to the properties of the tensor product [9, 10, 27], the matrix $R \otimes e_{l_2} e_{l_2}^T$ is a lower triangular matrix which describes the overflow of customers from queue 2 to queue 1, and the matrix A in (1) is a singular, block tridiagonal matrix. In particular, the matrix A is irreducible and has zero column sums, positive diagonal elements and nonpositive off-diagonal elements. According to [28, 29], such a matrix has a one-dimensional null space with a positive null vector. Hence, the existence of the steady state probability distribution vector p is guaranteed.
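As an illustration, the generator matrix (1) can be assembled directly from its tensor-product structure. The following is a minimal Python/SciPy sketch (not the authors' MATLAB code); the helper names and the small test sizes are our own choices.

```python
import numpy as np
import scipy.sparse as sp

def queue_gen(lam, mu, s, l):
    """Tridiagonal generator Q_i in (2), for states 0, ..., l-1."""
    rates = np.minimum(np.arange(1, l), s) * mu     # service rates mu, 2mu, ..., s*mu
    d = np.empty(l)
    d[0] = lam
    d[1:l-1] = lam + rates[:-1]
    d[l-1] = rates[-1]
    return sp.diags([-lam * np.ones(l - 1), d, -rates], [-1, 0, 1], format='csr')

def overflow_gen(lam2, l):
    """Lower bidiagonal overflow matrix R in (3)."""
    d = lam2 * np.ones(l)
    d[-1] = 0.0
    return sp.diags([-lam2 * np.ones(l - 1), d], [-1, 0], format='csr')

l1 = l2 = 8                                          # small sizes, for checking only
Q1 = queue_gen(1.0, 1.0, 1, l1)                      # lambda_1 = mu_1 = s_1 = 1
Q2 = queue_gen(2.0, 2.0, 2, l2)                      # lambda_2 = mu_2 = s_2 = 2
R  = overflow_gen(2.0, l1)
e  = np.zeros(l2); e[-1] = 1.0                       # unit vector e_{l2}
A  = (sp.kron(Q1, sp.identity(l2)) + sp.kron(sp.identity(l1), Q2)
      + sp.kron(R, sp.csr_matrix(np.outer(e, e)))).tocsr()

assert abs(A.sum(axis=0)).max() < 1e-12              # zero column sums, as stated above
```

The final assertion checks the structural property used later: every column of A sums to zero, so A is singular with null vector p.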

2.2 A telecommunication system

Here an MMPP/M/s/s+m network arising in telecommunication systems is considered. It is known that the Markov Modulated Poisson Process (MMPP) is a generalization of the Poisson process and is often used as the input model of telecommunication systems [4, 5]. To construct this network, first define the system parameters as follows:

  1. 1/λ, the mean arrival time of the exogenously originating calls of the main queue,

  2. 1/μ, the mean service time of each server of the main queue,

  3. s, the number of servers in the main queue,

  4. l − s − 1, the number of waiting spaces in the main queue,

  5. q, the number of overflow parcels,

  6. (Q_j, Λ_j), 1 ≤ j ≤ q, the parameters of the MMPPs modeling the overflow parcels, where

    $$Q_j = \begin{pmatrix} \sigma_{j1} & -\sigma_{j2}\\ -\sigma_{j1} & \sigma_{j2} \end{pmatrix}, \qquad
      \Lambda_j = \begin{pmatrix} \lambda_j & 0\\ 0 & 0 \end{pmatrix}.$$

    Here σ_{j1}, σ_{j2} and λ_j, 1 ≤ j ≤ q, are positive MMPP parameters.

The input of the main queue comes from the superposition of several independent MMPPs, which is still an MMPP and is parameterized by two $2^q \times 2^q$ matrices (Q, Ω). Here

$$Q = Q_1 \otimes I_2 \otimes \cdots \otimes I_2 + I_2 \otimes Q_2 \otimes I_2 \otimes \cdots \otimes I_2 + \cdots + I_2 \otimes \cdots \otimes I_2 \otimes Q_q,$$
$$\Lambda = \Lambda_1 \otimes I_2 \otimes \cdots \otimes I_2 + I_2 \otimes \Lambda_2 \otimes I_2 \otimes \cdots \otimes I_2 + \cdots + I_2 \otimes \cdots \otimes I_2 \otimes \Lambda_q,$$

and Ω = Λ + λI_{2^q}, where I_2 and I_{2^q} are the 2 × 2 and 2^q × 2^q identity matrices, respectively.
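The superposition is a sum of q tensor products, each placing Q_j (or Λ_j) in the j-th slot. A hedged Python sketch under the parameter choices of Section 4 follows; note that σ_{j1} is not fixed by the text, so the value below is an arbitrary placeholder.

```python
import scipy.sparse as sp

def superpose(blocks):
    """Sum over j of I_2 ⊗ ... ⊗ blocks[j] ⊗ ... ⊗ I_2 (identities elsewhere)."""
    q = len(blocks)
    I2 = sp.identity(2, format='csr')
    total = None
    for j, Bj in enumerate(blocks):
        term = sp.identity(1, format='csr')       # 1x1 seed for the Kronecker chain
        for i in range(q):
            term = sp.kron(term, Bj if i == j else I2, format='csr')
        total = term if total is None else total + term
    return total

q, lam = 4, 2.0
sigma1, sigma2 = 1.0, 1.0 / 3.0      # sigma_{j1} assumed; sigma_{j2} = 1/3 as in Sect. 4
Qj  = sp.csr_matrix([[sigma1, -sigma2], [-sigma1, sigma2]])
Lmj = sp.csr_matrix([[1.0 / q, 0.0], [0.0, 0.0]])  # lambda_j = 1/q as in Sect. 4
Q     = superpose([Qj] * q)                        # 2^q x 2^q
Lam   = superpose([Lmj] * q)
Omega = Lam + lam * sp.identity(2 ** q)            # Omega = Lambda + lambda * I
```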

Now the (MMPP/M/s/l) network can be regarded as a Markov process on the state space

$$\{(i, j) \mid 0 \le i \le l - 1,\ 1 \le j \le 2^q\},$$

where the number i corresponds to the number of calls at the destination, and the number j corresponds to the state of the Markov process with generator matrix Q. Hence, it is a two-dimensional queueing system, and the generator matrix can be shown to be the following $l2^q \times l2^q$ block tridiagonal matrix:

$$A = \begin{pmatrix}
Q + \Omega & -\mu I & & & & & 0\\
-\Omega & Q + \Omega + \mu I & -2\mu I & & & &\\
& \ddots & \ddots & \ddots & & &\\
& & -\Omega & Q + \Omega + s\mu I & -s\mu I & &\\
& & & \ddots & \ddots & \ddots &\\
& & & & -\Omega & Q + \Omega + s\mu I & -s\mu I\\
0 & & & & & -\Omega & Q + s\mu I
\end{pmatrix}.$$

It can be rewritten as in the tensor product form,

$$A = I_l \otimes Q + B \otimes I_{2^q} + R \otimes \Omega, \tag{5}$$

where the matrices B and R have the same forms as Q_i and R in (2) and (3), respectively. Let n = l2^q; then the steady state probability distribution vector p = (p_1, p_2, …, p_n)^T is the solution of the linear system Ap = 0 with the same constraints as in (4).

Similarly, the matrix A in (5) is irreducible and has zero column sums, positive diagonal elements and nonpositive off-diagonal elements. According to the Perron-Frobenius theory, the matrix A has a one-dimensional null space with a positive null vector [28, 29]. Hence, the steady state probability distribution vector p exists.

Remark 2.1

To solve the linear system Ap = 0 efficiently, it is necessary to make certain modifications to the matrix A since it is singular. One possible way to obtain the steady state probability distribution vector p is to normalize the solution x of the following nonsingular linear system:

$$\mathcal{A}x = (A + \epsilon I)x = b, \tag{6}$$

where b = (0, …, 0, 1)^T is an n × 1 vector, I is the n × n identity matrix, and ϵ > 0 can be chosen arbitrarily small. The matrix 𝓐 is nonsingular because it is irreducible and strictly diagonally dominant (by columns). The linear system (6) can be solved by GMRES. Solving (6) instead of Ap = 0 may induce a small error of O(ϵ), but Ching et al. have proved that the 2-norm of the error induced by this regularization tends to zero as ϵ → 0 [2, 4].
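For illustration, a hedged SciPy sketch of this regularization, reusing the matrix A from the two-queue sketch in Section 2.1 (the rtol keyword requires a recent SciPy release):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres

eps = 0.5                                    # regularization parameter, as in Section 4
n = A.shape[0]
A_reg = (A + eps * sp.identity(n)).tocsr()   # the nonsingular matrix in (6)
b = np.zeros(n); b[-1] = 1.0                 # b = (0, ..., 0, 1)^T
x, info = gmres(A_reg, b, rtol=1e-6, restart=10)
assert info == 0                             # 0 means converged
p = x / x.sum()                              # normalize to the steady state vector
```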

3 Multipreconditioned GMRES

This section gives a brief description of the preconditioned GMRES, then discusses the GMRES with multiple preconditioners (MPGMRES), and finally considers a selective version of the MPGMRES to decrease its computing cost.

3.1 Preconditioned GMRES

GMRES is often used to solve the nonsymmetric linear system (6). Given an initial guess x_0, the corresponding initial residual is r_0 = b − 𝓐x_0, and the Krylov subspace is:

$$\mathcal{K}_m(\mathcal{A}, r_0) = \mathrm{span}\{r_0, \mathcal{A}r_0, \mathcal{A}^2 r_0, \dots, \mathcal{A}^{m-1} r_0\}.$$

An orthonormal basis for the Krylov subspace $\mathcal{K}_m(\mathcal{A}, r_0)$ can be computed by the modified Gram-Schmidt orthogonalization procedure with the first vector being $r_0/\|r_0\|_2$. This generates the useful relation:

$$\mathcal{A} V_m = V_{m+1} \tilde{H}_m, \tag{7}$$

where $V_m \in \mathbb{R}^{n \times m}$ and $V_{m+1} \in \mathbb{R}^{n \times (m+1)}$ have orthonormal columns, and $\tilde{H}_m \in \mathbb{R}^{(m+1) \times m}$ is an upper Hessenberg matrix. Thus at the m-th step, an approximate solution of (6) is computed as:

$$x_m = x_0 + V_m y_m \in x_0 + \mathcal{K}_m(\mathcal{A}, r_0), \tag{8}$$

where $y_m = \arg\min_{y \in \mathbb{R}^m} \|\beta e_1 - \tilde{H}_m y\|_2$ with $\beta = \|r_0\|_2$ and $e_1 = (1, 0, \dots, 0)^T$.

The preconditioning technique is a key ingredient for the successful application of GMRES. Let M be a preconditioner; then, for the linear system (6), the right-preconditioned GMRES is given as Algorithm 1 (see [7]).
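Since Algorithm 1 itself is not reproduced in this version of the text, the following stand-in Python sketch of one right-preconditioned GMRES(m) cycle, implementing (7)-(8) with modified Gram-Schmidt, may help; the function name and the convention that M_solve applies M^{-1} to a vector are our own assumptions.

```python
import numpy as np

def right_pgmres(A, M_solve, b, x0, m):
    """One cycle of right-preconditioned GMRES(m), cf. (7)-(8)."""
    n = b.size
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    V = np.zeros((n, m + 1)); V[:, 0] = r0 / beta
    H = np.zeros((m + 1, m))
    Z = np.zeros((n, m))
    for j in range(m):
        Z[:, j] = M_solve(V[:, j])            # z_j = M^{-1} v_j
        w = A @ Z[:, j]
        for i in range(j + 1):                # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:               # lucky breakdown
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1); e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)  # least-squares step
    return x0 + Z[:, :m] @ y                  # preconditioned update, cf. line 11

# Usage, e.g. with a Jacobi preconditioner for the regularized matrix A_reg:
# x = right_pgmres(A_reg, lambda v: v / A_reg.diagonal(), b, np.zeros(n), 30)
```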

In line 11 of Algorithm 1, the approximate solution x_m is expressed as a linear combination of the preconditioned vectors z_i = M^{-1}v_i, i = 1, 2, …, m, where the preconditioner M is fixed, i.e., it does not change from step to step. Now suppose that the preconditioner is allowed to change at every step:

$$z_j = M_j^{-1} v_j, \quad j = 1, \dots, m.$$

Then the approximate solution is computed as x_m = x_0 + Z_m y_m, where Z_m = (z_1, …, z_m). This variant of GMRES is called the Flexible GMRES (FGMRES), since different preconditioners are allowed to be used at each iteration [30].

3.2 Multiple preconditioned GMRES

Based on the idea of using different preconditioners in the FGMRES, the multipreconditioned GMRES, in which two or more preconditioners are applied simultaneously, was proposed by Greif et al. [23]. Assume there are k preconditioners M_1, …, M_k, k ≥ 2. At the beginning, for the initial residual r_0, we have v_1 = r_0/‖r_0‖_2 and

$$Z_1 = (M_1^{-1} v_1, \dots, M_k^{-1} v_1) \in \mathbb{R}^{n \times k}, \tag{9}$$

such that the first iterate is computed as x_1 = x_0 + Z_1 y_1, where the vector y_1 ∈ ℝ^k is chosen to minimize ‖b − 𝓐x_1‖_2. From (9), it is easy to see that using all preconditioners simultaneously enlarges the space in which the solution is sought. Similar to Algorithm 1, the multipreconditioned GMRES is given as Algorithm 2.
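In code, the first multipreconditioned step looks as follows. This is a hedged sketch, where M_solves is assumed to be a list of k callables applying M_i^{-1} to a vector:

```python
import numpy as np

def first_mp_step(A, M_solves, b, x0):
    """First MPGMRES iterate: minimize ||b - A x||_2 over x0 + range(Z1), cf. (9)."""
    r0 = b - A @ x0
    v1 = r0 / np.linalg.norm(r0)
    Z1 = np.column_stack([solve(v1) for solve in M_solves])  # n x k block of (9)
    W = A @ Z1                                               # A Z1
    y1, *_ = np.linalg.lstsq(W, r0, rcond=None)              # min ||r0 - A Z1 y||_2
    return x0 + Z1 @ y1
```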

Compared with Algorithm 1, the search space grows at each iteration, and the relation (7) is replaced by:

$$\mathcal{A} \tilde{Z}_m = \tilde{V}_{m+1} \tilde{H}_m, \tag{10}$$

where

$$\tilde{Z}_m = (Z_1, \dots, Z_m), \qquad \tilde{V}_{m+1} = (V_1, \dots, V_{m+1}),$$

and

$$\tilde{H}_m = \begin{pmatrix}
H_{1,1} & H_{1,2} & \cdots & H_{1,m}\\
H_{2,1} & H_{2,2} & \cdots & H_{2,m}\\
& \ddots & \ddots & \vdots\\
& & H_{m,m-1} & H_{m,m}\\
& & & H_{m+1,m}
\end{pmatrix},$$

in which the matrices $V_{j+1}$ and $H_{j+1,j}$ ($1 \le j \le m$) are computed by the QR factorization in line 8 of Algorithm 2. Note that the matrices $Z_j$ and $V_{j+1}$ have $k^j$ columns, $j = 1, \dots, m$. Thus the matrix $\tilde{V}_{m+1}$ has

$$\theta_m = \sum_{j=0}^{m} k^j = \frac{k^{m+1} - 1}{k - 1} \tag{11}$$

columns, the matrix $\tilde{Z}_m$ has $\theta_m - 1 = (k^{m+1} - k)/(k - 1)$ columns, and the size of the upper Hessenberg matrix $\tilde{H}_m$ is $\theta_m \times (\theta_m - 1)$.
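For instance, with k = 2 preconditioners, after m = 5 iterations the matrix $\tilde{V}_{m+1}$ already has $\theta_5 = 2^6 - 1 = 63$ columns by (11), so the storage for the multipreconditioned basis roughly doubles at every step.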

Let $\mathcal{P}_{m-1} = \mathcal{P}_{m-1}(X_1, \dots, X_k)$ be the space of all matrix polynomials in k variables of degree at most m − 1. Then, at the j-th step of the MPGMRES, the approximate solution can be represented as:

$$x_j = x_0 + \sum_{i=1}^{k} \omega_j^i(M_1^{-1}\mathcal{A}, \dots, M_k^{-1}\mathcal{A})\, M_i^{-1} r_0, \tag{12}$$

where $\omega_j^i \in \mathcal{P}_{m-1}(X_1, \dots, X_k)$; see [23] for details. Furthermore, from (12), the corresponding residual can be computed as:

$$\begin{aligned}
r_j &= r_0 + \sum_{i=1}^{k} \omega_j^i(\mathcal{A}M_1^{-1}, \dots, \mathcal{A}M_k^{-1})\,\mathcal{A}M_i^{-1} r_0\\
&= \sum_{i=1}^{k} \left(\tau_i + \omega_j^i(\mathcal{A}M_1^{-1}, \dots, \mathcal{A}M_k^{-1})\,\mathcal{A}M_i^{-1}\right) r_0\\
&= \sum_{i=1}^{k} \beta_{j+1}^i(\mathcal{A}M_1^{-1}, \dots, \mathcal{A}M_k^{-1})\, r_0,
\end{aligned} \tag{13}$$

where βj+1i ∈ 𝓟m(X1, …, Xk), and τi satisfies

$$\sum_{i=1}^{k} \beta_{j+1}^i(0, \dots, 0) = \sum_{i=1}^{k} \tau_i = 1,$$

and

$$\frac{\partial^{j+1} \beta_{j+1}^i}{\partial X_s^{j+1}} = 0, \quad 1 \le i, s \le k,\ i \ne s,$$

which implies that, in the matrix polynomial $\beta_{j+1}^i$, only the i-th variable may attain the highest degree, while the degree of each other variable $X_s$ ($s \ne i$) is at most j. From (13), the following result is established.

Theorem 3.1

Let r_0 be the initial residual of the linear system (6), and let r_j be the residual at the j-th step of the MPGMRES with k preconditioners M_1, …, M_k. Then

$$\frac{\|r_j\|_2}{\|r_0\|_2} \le \min_{\substack{\beta_{j+1}^i \in \mathcal{P}_m(X_1,\dots,X_k),\ 1 \le i \le k\\ \sum_{i=1}^{k} \beta_{j+1}^i(0,\dots,0) = 1,\ \partial^{j+1}\beta_{j+1}^i/\partial X_s^{j+1} = 0,\ i \ne s}} \left\| \sum_{i=1}^{k} \beta_{j+1}^i(\mathcal{A}M_1^{-1}, \dots, \mathcal{A}M_k^{-1}) \right\|_2. \tag{14}$$

Therefore, it is important to find an optimal combination of all the different preconditioners as they are used simultaneously at each iteration [23, 24, 25]. This problem still requires further research.

3.3 Selective MPGMRES and computing cost

The idea of the MPGMRES is to enlarge the Krylov subspace over which the GMRES minimizes the residual norm by using two or more preconditioners simultaneously at each iteration. From (14), it can be seen that the search space is so rich because it contains not only higher-order polynomials in the variables X_i but also many mixed terms [23]. For instance, when k = 2, the space of matrix polynomials of degree at most two is

$$\mathcal{P}_2(X_1, X_2) = \mathrm{span}\{I, X_1, X_2, X_1X_2, X_2X_1, X_1^2, X_2^2\}. \tag{15}$$

It is natural to think of reducing the dimension of this space. In particular, for the case 𝓐 = M_1 + M_2, the following result has been proved [23].

Theorem 3.2

If the variables X1, X2 satisfy the condition that X1X2 = X2X1 = X1 + X2, then

$$\mathcal{P}_k(X_1, X_2) = \mathcal{P}_k(X_1) + \mathcal{P}_k(X_2), \quad k = 1, 2, \dots$$

The mixed terms have been eliminated, which indicates that the dimension of the entire search space has been reduced successfully. Hence, a possible choice is to construct an approximate search space by discarding some search directions at each iteration; the computing cost of the MPGMRES is then decreased.

Several different strategies can be applied to obtain the approximate search space at each iteration; see [23, 31]. In this paper, one simple strategy from [23] is adopted, where line 11 of Algorithm 2 is replaced by

$$Z_{j+1} = \left(M_1^{-1} V_{j+1}^{(1)},\ M_2^{-1} V_{j+1}^{(2)},\ \dots,\ M_k^{-1} V_{j+1}^{(k)}\right),$$

i.e., the preconditioner M_1 is applied to the first column of V_{j+1}, M_2 to the second column, and so on; here $V_{j+1}^{(i)}$ denotes the i-th column of $V_{j+1}$. This is called the selective MPGMRES (sMPGMRES). Correspondingly, the number of columns of Z_j, j = 1, …, m, remains k rather than k^j. For comparison, the computing costs of the MPGMRES and sMPGMRES at the j-th iteration are listed in Table 1 [23].
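A minimal sketch of this selective rule (function name and calling convention are our own; V_next stands for the n × k block V_{j+1} produced by the QR step):

```python
import numpy as np

def selective_directions(M_solves, V_next):
    """sMPGMRES rule: apply M_i^{-1} only to the i-th column of V_{j+1}."""
    cols = [solve(V_next[:, i]) for i, solve in enumerate(M_solves)]
    return np.column_stack(cols)    # k columns instead of k^(j+1): linear growth
```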

From Table 1, it can be seen that the computing cost of the MPGMRES grows exponentially, while that of the sMPGMRES grows only linearly.

Table 1

Comparisons of computing cost.

4 Numerical experiments

In this section, Algorithms 1 and 2, together with the selective variant (sMPGMRES) introduced above, are tested on two models of SANs: one from queueing systems and the other from telecommunication systems. In particular, we compare the numerical performance of the MPGMRES, sMPGMRES, standard right-preconditioned GMRES and unpreconditioned GMRES (with restart 10) in terms of total computing time and iteration counts. For convenience, GMRES(M) denotes the right-preconditioned GMRES with preconditioner M. Convergence histories are shown in figures with the number of iterations on the horizontal axis and Relres (log10 of the relative residual 2-norm, i.e., log10(‖r_j‖_2/‖r_0‖_2)) on the vertical axis. All experiments are carried out with a serial MATLAB implementation, parts of which are from https://github.com/tyronerees/krylov_solvers/tree/master/mpgmres.

Throughout our numerical experiments, the stopping criterion is ‖r_j‖_2/‖r_0‖_2 ≤ 10^{−6}, where r_j is the current residual and r_0 is the initial residual. The initial vector for all methods is x_0 = (0, …, 0, 1)^T. The parameter ϵ in (6) is set to ϵ = 0.5; other values can also be used provided the coefficient matrix 𝓐 remains nonsingular.

Example 4.1

The two-queue overflow network [2, 26]. This test problem was introduced in Section 2.1; the corresponding matrices can be found in (1), (2) and (3). Here we set λ_1 = μ_1 = s_1 = 1 and λ_2 = μ_2 = s_2 = 2. The size of the matrix 𝓐 is n = l_1 l_2. To find the steady state probability distribution, different preconditioners can be constructed for solving the linear system (6).

Case one: Consider the incomplete LU factorization with drop tolerance droptol = 0.001 for the coefficient matrix 𝓐, i.e., 𝓐 = LU + E, where L is a unit lower triangular matrix and U is an upper triangular matrix. The matrices L and U are then used as two preconditioners for the GMRES. Numerical results for this case are shown in Table 2.
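A hedged SciPy sketch of this construction (the paper's MATLAB setup is not reproduced here): an incomplete LU of the regularized matrix A_reg is split into the two preconditioners M_1 = L and M_2 = U. Note that SciPy's spilu permutes rows and columns internally; the permutations are ignored in this illustration.

```python
from scipy.sparse.linalg import spilu, spsolve_triangular

ilu = spilu(A_reg.tocsc(), drop_tol=1e-3)   # incomplete LU with droptol = 0.001
L = ilu.L.tocsr()                           # unit lower triangular factor
U = ilu.U.tocsr()                           # upper triangular factor
M_solves = [lambda v: spsolve_triangular(L, v, lower=True),   # applies L^{-1}
            lambda v: spsolve_triangular(U, v, lower=False)]  # applies U^{-1}
```

The list M_solves can then be passed to the first_mp_step and selective_directions sketches above.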

Table 2

The number of iterations and computing time in seconds (in brackets) for Example 4.1 when L and U are used as preconditioners. The last column Ratio is defined as (computing time of sMPGMRES)/(computing time of GMRES).

Table 2 shows that the iteration counts of the MPGMRES and sMPGMRES, with the two preconditioners L and U used simultaneously, are lower than those of the unpreconditioned GMRES. In particular, the MPGMRES requires the fewest iterations, even though its computing time is the worst. However, the computing time of the sMPGMRES is superior to that of the unpreconditioned GMRES. The last column of Table 2 further confirms the fast convergence of the sMPGMRES; for instance, when l1 = 512 and l2 = 512, the computing time of the sMPGMRES is almost half of that of the unpreconditioned GMRES. Moreover, Fig. 2 (left) plots the convergence histories when l1 = 128 and l2 = 128.

Fig. 2: Convergence histories of the MPGMRES, sMPGMRES and unpreconditioned GMRES for Example 4.1 when l1 = 128 and l2 = 128 (left), and of the sMPGMRES, GMRES(T) and GMRES(J) for Example 4.1 when l1 = 512 and l2 = 512 (right).

Case two: Let D and T be the diagonal and tridiagonal parts of the matrix 𝓐, respectively. Then the matrices J (equal to D) and T are considered as the Jacobi preconditioner and the tridiagonal preconditioner for the GMRES. Numerical results for this case are listed in Table 3.
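A minimal sketch of the two preconditioners, assuming the regularized matrix A_reg from above (names are our own):

```python
import scipy.sparse as sp
from scipy.sparse.linalg import splu

d = A_reg.diagonal()
J_solve = lambda v: v / d                        # Jacobi preconditioner J = D
T = sp.diags([A_reg.diagonal(-1), d, A_reg.diagonal(1)],
             offsets=[-1, 0, 1], format='csc')   # tridiagonal part of A_reg
T_lu = splu(T)                                   # direct factorization of T
T_solve = T_lu.solve                             # applies T^{-1}
M_solves = [J_solve, T_solve]
```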

Table 3

The number of iterations and computing time in seconds (in brackets) for Example 4.1 when J and T are used as preconditioners. Ratio1 and Ratio2 are defined as (computing time of sMPGMRES)/(computing time of GMRES(T)) and (computing time of sMPGMRES)/(computing time of GMRES(J)), respectively.

From Table 3, it can be seen that the sMPGMRES requires the fewest iterations. In terms of computing time, the sMPGMRES is superior to the GMRES(T) and GMRES(J) when the size of this test problem becomes larger, while it needs more computing time than the GMRES(T) and GMRES(J) when the size is small. The last two columns of Table 3 illustrate the performance of the sMPGMRES as the size of the test problem increases. Hence, by discarding some search directions at each iteration, using different preconditioners for the GMRES simultaneously can be more effective than using a single preconditioner at each step. Fig. 2 (right) shows the convergence curves when l1 = 512 and l2 = 512.

Example 4.2

The telecommunication system [4, 5]. This example was introduced in Section 2.2; the corresponding matrices can be found in (2), (3) and (5). Here we set λ = μ = s = 2, q = 4, σ_{j2} = 1/3 and λ_j = 1/q, j = 1, …, q. The size of the coefficient matrix 𝓐 is n = l2^q. To find the steady state probability distribution, preconditioners are constructed in ways similar to Example 4.1.

Case one: The same incomplete LU factorization as above is applied to the coefficient matrix 𝓐 in (6), and again the factors L and U are used as two preconditioners for the GMRES. The corresponding numerical results are presented in Table 4.

Table 4

The number of iterations and computing time in seconds (in brackets) for Example 4.2, when L and U are used as preconditioners.

As seen from Table 4, in terms of iteration counts, the MPGMRES and sMPGMRES with two preconditioners applied simultaneously are superior to the GMRES with a single preconditioner. In terms of computing time, the MPGMRES and sMPGMRES cost more than the GMRES(L) and GMRES(U) when the size of this test problem is small; the reason may be that the dimension of the Krylov subspace increases when two preconditioners are used simultaneously. However, when the size of the test problem increases, the sMPGMRES gives the best results; e.g., when l = 16384, it reduces the computing time of the GMRES(L) and GMRES(U) by 95% and 40%, respectively. Fig. 3 (left) shows the convergence histories when l = 8192. Note that the computing time of the sMPGMRES is always less than that of the MPGMRES, which indicates that appropriately discarding some search directions from the MPGMRES can lead to fast convergence, even though the iteration count increases slightly. Therefore, the GMRES with multiple preconditioners is a competitive and robust choice for large test problems.

Fig. 3: Convergence histories of the MPGMRES, sMPGMRES, GMRES(L) and GMRES(U) for Example 4.2 when l = 8192 (left), and of the MPGMRES, sMPGMRES, GMRES(G) and GMRES(J) for Example 4.2 when l = 8192 (right).

Case two: Let us split the coefficient matrix 𝓐 as 𝓐 = D − E − F, where D is the diagonal of 𝓐, and −E and −F are the strictly lower and strictly upper triangular parts of 𝓐, respectively. Consider the Gauss-Seidel (GS) preconditioner G = D − F and the Jacobi preconditioner J (equal to D). Numerical results for this case are given in Table 5.
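A short sketch of these two preconditioners: since −F is the strictly upper triangular part of 𝓐, the matrix G = D − F is exactly the upper triangle of A_reg (diagonal included).

```python
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve_triangular

G = sp.triu(A_reg, k=0, format='csr')             # Gauss-Seidel preconditioner G = D - F
G_solve = lambda v: spsolve_triangular(G, v, lower=False)
J_solve = lambda v: v / A_reg.diagonal()          # Jacobi preconditioner J = D
M_solves = [G_solve, J_solve]
```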

Table 5

The number of iterations and computing time in seconds (in brackets) for Example 4.2, when G and J are used as preconditioners.

Again, Table 5 shows that the iteration counts of the MPGMRES and sMPGMRES are lower than those of the GMRES(G) and GMRES(J). As the size of this test problem increases, the computing times of the MPGMRES and sMPGMRES are superior to that of the GMRES(J), although the computing time of the MPGMRES is inferior to that of the GMRES(G). In particular, the sMPGMRES shows the best results; e.g., when l = 16384, it reduces the computing time of the GMRES(G) and GMRES(J) by 55% and 81%, respectively. Furthermore, Fig. 3 (right) plots the convergence histories when l = 8192, with G and J as the two preconditioners.

Case three: Three preconditioners are applied to the GMRES. From (5), it can be seen that the matrix A consists of three parts. According to the properties of the tensor product [9, 10, 27], the last part of the matrix A, i.e., R ⊗ Ω, is a lower triangular matrix. Since the last diagonal element of the matrix R is zero, as shown in (3), the tensor product R ⊗ Ω is a singular matrix. To remove this singularity, let the last diagonal element of R be equal to λ, and denote the new matrix by R̃. Then let M3 = R̃ ⊗ Ω, which is nonsingular. On the other hand, using the tensor structure of the matrix A given in (5), we let M1 and M2 be the lower triangular parts of the tensor products I_l ⊗ Q and B ⊗ I_{2^q}, respectively. The three preconditioners M1, M2 and M3 for the linear system (6) are then used with the GMRES. Numerical results for this case are provided in Table 6.
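A hedged sketch of this construction, reusing Q and Omega from the Section 2.2 sketch and the queue_gen/overflow_gen helpers from Section 2.1; B and R are assumed to take the forms (2) and (3) with the Example 4.2 parameters.

```python
import scipy.sparse as sp

lam, mu, s, l, q = 2.0, 2.0, 2, 64, 4                 # small l for illustration
B = queue_gen(lam, mu, s, l)                          # same form as Q_i in (2)
R = overflow_gen(lam, l)                              # same form as R in (3)
Rt = R.tolil(); Rt[-1, -1] = lam; Rt = Rt.tocsr()     # replace the zero diagonal entry
M3 = sp.kron(Rt, Omega, format='csc')                 # nonsingular by construction
M1 = sp.tril(sp.kron(sp.identity(l), Q), format='csc')       # lower part of I_l ⊗ Q
M2 = sp.tril(sp.kron(B, sp.identity(2 ** q)), format='csc')  # lower part of B ⊗ I_{2^q}
```

Each M_i being triangular (or a product of small factors), its application M_i^{-1} v is cheap, which is what makes the combination attractive.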

Table 6

The number of iterations and computing time in seconds (in brackets) for Example 4.2, when M1, M2 and M3 are used as preconditioners.

As seen from Table 6, the MPGMRES requires the fewest iterations. In addition, the number of iterations of the sMPGMRES is also less than those of the GMRES(M1), GMRES(M2) and GMRES(M3). From the last column of Table 6, it can also be observed that the preconditioner M3 alone is not effective, since its iteration counts vary widely and its computing time increases dramatically as the size of this test problem grows. However, the preconditioner M3 does not affect the stability of the MPGMRES and sMPGMRES: their iteration counts remain unchanged and their computing times are acceptable. In particular, for large test problems, the sMPGMRES gives the best computing time; e.g., when l = 16384, it reduces the computing time of the GMRES(M1), GMRES(M2) and GMRES(M3) by 57%, 57% and 99%, respectively. Hence, the efficiency of the GMRES with multiple preconditioners is illustrated once again.

5 Conclusions

In the current study, the main contribution is to apply the MPGMRES to computing the steady state probability distribution of SANs. The idea of the MPGMRES is to enlarge the Krylov subspace over which the GMRES minimizes the residual norm by using two or more preconditioners simultaneously at each iteration. Since the dimension of the search space grows exponentially at each iteration, a practical sMPGMRES is obtained by discarding some search directions at each iteration, so that the dimension of the search space grows only linearly with the number of iterations. Numerical experiments on two models of SANs have illustrated that the MPGMRES is more effective than the GMRES with a single preconditioner in reducing the number of iterations. Moreover, for large test problems, the computing time of the sMPGMRES is the best.

It is worth noting that the MPGMRES has only been tested on two models of SANs here. In fact, there are several other models of SANs, e.g., manufacturing systems [6, 32], node mobility in wireless networks [33] and Follow-The-Sun (FTS) projects [34]. Hence, it would be interesting to extend the method to these models in the future.

Acknowledgement

The authors would like to thank the editor Dr. Agnieszka Bednarczyk-Drag and the anonymous referees for their constructive comments and helpful suggestions in revising the paper. This research is supported by the National Natural Science Foundation of China (Nos. 11501085 and 61772003) and the Fundamental Research Funds for the Central Universities (No. JBK1809003).

References

  • [1] Buchholz P., A class of hierarchical queueing networks and their analysis, Queueing Syst., 1994, 15, 59-80.
  • [2] Chan R.H., Ching W.K., Circulant preconditioners for stochastic automata networks, Numer. Math., 2000, 87, 35-57.
  • [3] Stewart W.J., Probability, Markov Chains, Queues, and Simulation, 2009, Princeton University Press, Princeton, NJ.
  • [4] Ching W.K., Chan R.H., Zhou X.Y., Circulant preconditioners for Markov-modulated Poisson processes and their applications to manufacturing systems, SIAM J. Matrix Anal. Appl., 1997, 18, 464-481.
  • [5] Meier-Hellstern K., The analysis of a queue arising in overflow models, IEEE Trans. Commun., 1989, 37, 367-372.
  • [6] Buzacott J., Shanthikumar J., Stochastic Models of Manufacturing Systems, 1993, Prentice-Hall International Editions, Upper Saddle River, NJ.
  • [7] Saad Y., Iterative Methods for Sparse Linear Systems, 2nd ed., 2003, SIAM, Philadelphia, PA.
  • [8] Wen C., Huang T.-Z., Wang C., Triangular and skew-symmetric splitting method for numerical solutions of Markov chains, Comput. Math. Appl., 2011, 62, 4039-4048.
  • [9] Langville A.N., Stewart W.J., The Kronecker product and stochastic automata networks, J. Comput. Appl. Math., 2004, 167, 429-447.
  • [10] Stewart W.J., Atif K., Plateau B., The numerical solution of stochastic automata networks, Eur. J. Oper. Res., 1995, 86, 503-525.
  • [11] Saad Y., Schultz M.H., GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Stat. Comput., 1986, 7, 856-869.
  • [12] Zhang H.-F., Huang T.-Z., Wen C., Shen Z.-L., FOM accelerated by an extrapolation method for solving PageRank problems, J. Comput. Appl. Math., 2016, 296, 397-409.
  • [13] Shen Z.-L., Huang T.-Z., Carpentieri B., Gu X.-M., Wen C., An efficient elimination strategy for solving PageRank problems, Appl. Math. Comput., 2017, 298, 111-122.
  • [14] Shen Z.-L., Huang T.-Z., Carpentieri B., Wen C., Gu X.-M., Block-accelerated aggregation multigrid for Markov chains with application to PageRank problems, Commun. Nonlinear Sci. Numer. Simul., 2018, 59, 472-487.
  • [15] Wu G., Wei Y., An Arnoldi-extrapolation algorithm for computing PageRank, J. Comput. Appl. Math., 2010, 234, 3196-3212.
  • [16] Wu G., Zhang Y., Wei Y., Accelerating the Arnoldi-type algorithm for the PageRank problem and the ProteinRank problem, J. Sci. Comput., 2013, 57, 74-104.
  • [17] Gu X.-M., Huang T.-Z., Yin G., Carpentieri B., Wen C., Du L., Restarted Hessenberg method for solving shifted nonsymmetric linear systems, J. Comput. Appl. Math., 2018, 331, 166-177.
  • [18] Wen C., Huang T.-Z., Shen Z.-L., A note on the two-step matrix splitting iteration for computing PageRank, J. Comput. Appl. Math., 2017, 315, 87-97.
  • [19] Langville A.N., Stewart W.J., A Kronecker product approximate preconditioner for SANs, Numer. Linear Algebra Appl., 2004, 11, 723-752.
  • [20] Saad Y., Preconditioned Krylov subspace methods for the numerical solution of Markov chains, in: Stewart W.J. (Ed.), Computations with Markov Chains, 1995, 49-64, Springer, Boston, MA.
  • [21] Langville A.N., Stewart W.J., Testing the nearest Kronecker product preconditioner on Markov chains and stochastic automata networks, INFORMS J. Comput., 2004, 16, 300-315.
  • [22] Davis P.J., Circulant Matrices, 1979, John Wiley & Sons, New York, NY.
  • [23] Greif C., Rees T., Szyld D.B., GMRES with multiple preconditioners, SeMA J., 2017, 74, 213-231.
  • [24] Bridson R., Greif C., A multipreconditioned conjugate gradient algorithm, SIAM J. Matrix Anal. Appl., 2006, 27, 1056-1068.
  • [25] Ayuso de Dios B., Barker A.T., Vassilevski P.S., A combined preconditioning strategy for nonsymmetric systems, SIAM J. Sci. Comput., 2014, 36, A2533-A2556.
  • [26] Kaufman L., Matrix methods for queueing problems, SIAM J. Sci. Stat. Comput., 1982, 4, 525-552.
  • [27] Van Loan C.F., The ubiquitous Kronecker product, J. Comput. Appl. Math., 2000, 123, 85-100.
  • [28] Seneta E., Non-Negative Matrices, 1973, John Wiley & Sons, New York, NY.
  • [29] Varga R.S., Matrix Iterative Analysis, 1963, Prentice-Hall, Englewood Cliffs, NJ.
  • [30] Saad Y., A flexible inner-outer preconditioned GMRES algorithm, SIAM J. Sci. Comput., 1993, 14, 461-469.
  • [31] de Sturler E., Nested Krylov methods based on GCR, J. Comput. Appl. Math., 1996, 67, 15-41.
  • [32] Fernandes P., O'Kelly M.E.J., Papadopoulos C.T., Sales A., Analysis of exponential reliable production lines using Kronecker descriptors, Int. J. Prod. Res., 2013, 51, 4240-4257.
  • [33] Dotti F.L., Fernandes P., Nunes C.M., Structured Markovian models for discrete spatial mobile node distribution, J. Braz. Comp. Soc., 2011, 17, 31-52.
  • [34] Santos A.R., Sales A., Fernandes P., Using SAN formalism to evaluate Follow-The-Sun project scenarios, J. Syst. Softw., 2015, 100, 182-194.

About the article

Corresponding author e-mail: x.m.gu@rug.nl


Received: 2017-10-29

Accepted: 2018-07-09

Published Online: 2018-08-24


Citation Information: Open Mathematics, Volume 16, Issue 1, Pages 986–998, ISSN (Online) 2391-5455, DOI: https://doi.org/10.1515/math-2018-0083.


© 2018 Wen et al., published by De Gruyter. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. BY-NC-ND 4.0
