Open Mathematics

formerly Central European Journal of Mathematics

Editor-in-Chief: Gianazza, Ugo / Vespri, Vincenzo



Computational uncertainty quantification for random non-autonomous second order linear differential equations via adapted gPC: a comparative case study with random Fröbenius method and Monte Carlo simulation

Julia Calatayud / Juan Carlos Cortés (corresponding author) / Marc Jornet

Instituto Universitario de Matemática Multidisciplinar, Universitat Politècnica de València, Camino de Vera s/n, 46022, Valencia, Spain
Published Online: 2018-12-31 | DOI: https://doi.org/10.1515/math-2018-0134

Abstract

This paper presents a methodology to quantify computationally the uncertainty in a class of differential equations often met in Mathematical Physics, namely random non-autonomous second-order linear differential equations, via adaptive generalized Polynomial Chaos (gPC) and the stochastic Galerkin projection technique. Unlike the random Fröbenius method, which can only deal with particular random linear differential equations and requires the random inputs (coefficients and forcing term) to be analytic, adaptive gPC allows approximating the expectation and covariance of the solution stochastic process for general random second-order linear differential equations. The random inputs are allowed to depend functionally on random variables that may be independent or dependent, and either absolutely continuous or discrete with infinitely many point masses. These hypotheses cover a wide variety of particular differential equations, some of which might not be solvable via the random Fröbenius method, in which the random input coefficients may be expressed via a Karhunen-Loève expansion.

Keywords: non-autonomous and random dynamical systems; computational uncertainty quantification; adaptive generalized Polynomial Chaos; stochastic Galerkin projection technique; random Fröbenius method

MSC 2010: 34F05; 60H35; 93E03

1 Introduction and Preliminaries

Many laws of Physics are formulated via differential equations. In practice, the input parameters (coefficients, forcing/source term and initial/boundary conditions) of these equations are set from experimental data, thus containing the uncertainty involved in measurement errors. Furthermore, input parameters are often not exactly known because of insufficient information, limited understanding of some underlying phenomena, inherent uncertainty, etc. All these facts motivate treating the input parameters of classical differential equations as random variables or stochastic processes rather than as deterministic constants or functions, respectively. This approach leads to random differential equations (RDEs) [1, 2]. The random behavior of the solution stochastic process can be understood by obtaining its main statistical features, such as the expectation, variance, and covariance.

A powerful tool to deal with RDEs is generalized Polynomial Chaos (gPC) [3, 4]. Let (Ω, 𝓕, ℙ) be a complete probability space. We will work in the Hilbert space (L²(Ω), 〈⋅, ⋅〉) of second-order random variables, i.e., random variables with finite variance, where the inner product is defined by 〈ζ₁, ζ₂〉 = 𝔼[ζ₁ζ₂], 𝔼[⋅] being the expectation operator. In its classical formulation, gPC consists in writing a random vector ζ : Ω → ℝⁿ as a limit of multivariate polynomials evaluated at a random vector Z : Ω → ℝⁿ: ζ = lim_{P→∞} Σ_{i=0}^{P} ζ̂_i ϕ_i(Z) in L²(Ω). Here {ϕ_i(Z)}_{i=0}^{∞} is a sequence of orthogonal polynomials in Z: 𝔼[ϕ_i(Z) ϕ_j(Z)] = ∫_{ℝⁿ} ϕ_i(z) ϕ_j(z) dℙ_Z(z) = γ_i δ_{ij}, where ℙ_Z = ℙ ∘ Z⁻¹ is the law of Z and δ_{ij} is the Kronecker delta symbol. A stochastic Galerkin method can be applied to approximate the solution to RDEs [3, Ch. 6]. For some applications of this theory, see for example [5, 6].

Given the random vector Z, the sequence {ϕ_i(Z)}_{i=0}^{∞} of orthogonal polynomials is taken from the Askey-Wiener scheme of hypergeometric orthogonal polynomials, taking into account the density function f_Z of Z (if Z is absolutely continuous) or the point masses of Z (if Z is discrete) [3, 4].

In the recent articles [7, 8, 9], an adaptive gPC method has been developed to approximate the solutions of RDEs. Instead of taking the orthogonal polynomials from the Askey-Wiener scheme, the authors construct them directly from the random inputs that are involved in the corresponding RDE’s formulation.

More explicitly, in [7] the authors consider the RDE F(t, y, ẏ) = 0, y(t₀) = y₀, where F : ℝ^{2q+1} → ℝ^q and y(t) = (y₁(t), …, y_q(t))^⊤, with ⊤ denoting transposition. The set {ζ₁, …, ζ_s} represents the independent and absolutely continuous random input parameters of the RDE.

For each 1 ≤ i ≤ s, one considers the canonical basis of polynomials in ζ_i of degree at most p: C_i^p = {1, ζ_i, (ζ_i)², …, (ζ_i)^p}. One defines the following inner product, whose weight function is the density of ζ_i: 〈g(ζ_i), h(ζ_i)〉_{ζ_i} = ∫_ℝ g(ζ_i) h(ζ_i) f_{ζ_i}(ζ_i) dζ_i. Using a Gram-Schmidt orthonormalization procedure, one obtains a sequence of orthonormal polynomials in ζ_i with respect to 〈⋅, ⋅〉_{ζ_i}: Ξ_i^p = {ϕ_0^i(ζ_i), …, ϕ_p^i(ζ_i)}. The authors then build a sequence of orthonormal multivariate polynomials in ζ = (ζ₁, …, ζ_s) of degree at most p with respect to the inner product 〈g(ζ), h(ζ)〉_ζ = ∫_{ℝ^s} g(ζ) h(ζ) f_ζ(ζ) dζ. To do so, they form the simple tensor products ϕ_j(ζ) = ϕ_{p₁}^1(ζ₁) ⋯ ϕ_{p_s}^s(ζ_s), 1 ≤ j ≤ P, where j is associated in a bijective manner with the multi-index (p₁, …, p_s), in such a way that j = 1 corresponds to (0, …, 0) (for example, via a graded lexicographic ordering [3, p. 66]), and P = (p + s)!/(p! s!). By the independence of ζ₁, …, ζ_s, the resulting sequence Ξ = {ϕ_j(ζ)}_{j=1}^{P} is orthonormal with respect to 〈⋅, ⋅〉_ζ.
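To make this construction concrete, the following Mathematica sketch (our own illustration, not code from [7]) builds Ξ_i^p for p = 2 under the illustrative assumption ζ_i ∼ Uniform(0, 1), and then forms the tensor-product basis for s = 2 independent copies; the names dist, ip, phi and basis are ours:

    p = 2;
    dist = UniformDistribution[{0, 1}];               (* illustrative law of the input *)
    ip = Expectation[#1 #2, Distributed[Z, dist]] &;  (* inner product induced by the law *)
    phi = Expand[Orthogonalize[Z^Range[0, p], ip]]
    (* -> {1, Sqrt[3](2 Z - 1), Sqrt[5](6 Z^2 - 6 Z + 1)}, shifted Legendre polynomials *)
    (* tensor-product basis for s = 2 independent inputs, total degree <= p, P = 6 elements *)
    basis = Flatten@Table[(phi[[i + 1]] /. Z -> Z1) (phi[[j + 1]] /. Z -> Z2),
        {i, 0, p}, {j, 0, p - i}]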

Once the basis is constructed, one looks for an approximate solution y(t) ≈ Σ_{j=1}^{P} y_j(t) ϕ_j(ζ). Then F(t, Σ_{j=1}^{P} y_j(t) ϕ_j(ζ), Σ_{j=1}^{P} ẏ_j(t) ϕ_j(ζ)) = 0. To obtain the deterministic coefficients y_j(t), one computes the inner products 〈F(t, Σ_{j=1}^{P} y_j(t) ϕ_j(ζ), Σ_{j=1}^{P} ẏ_j(t) ϕ_j(ζ)), ϕ_k(ζ)〉_ζ = 0, k = 1, …, P. In this manner, one arrives at a deterministic system of P differential equations, which may be solved by standard numerical techniques. Once y₁(t), …, y_P(t) have been computed, the expectation of the actual solution y(t) is approximated by y₁(t) and the covariance matrix by Σ_{i=2}^{P} y_i(t) y_i(t)^⊤.

In [8], the authors use the Random Variable Transformation technique [10, Th. 1] for the case in which some random input parameters appearing in the RDE are mappings of absolutely continuous random variables whose probability density functions are known.

In [9], the authors focus on the case in which the random inputs ζ₁, …, ζ_s are not independent. They consider the canonical bases C_i^p = {1, ζ_i, (ζ_i)², …, (ζ_i)^p}, for 1 ≤ i ≤ s, and construct a sequence of multivariate polynomials in ζ via a simple tensor product: ϕ_j(ζ) = ζ₁^{p₁} ⋯ ζ_s^{p_s}, where 1 ≤ j ≤ P corresponds to the multi-index (p₁, …, p_s) and P = (p + s)!/(p! s!). Notice that this new sequence {ϕ_j(ζ)}_{j=1}^{P} is not orthonormal with respect to 〈⋅, ⋅〉_ζ. However, one proceeds with the RDE as in [7] and, in practice, one obtains good approximations of the expectation and covariance of y(t).

Based on ample numerical evidence, the gPC-based methods described in [3, 4, 7, 8, 9] converge in the mean square sense at spectral rate. Some theoretical results that justify this assertion are presented in [3, pp. 33–35, p. 73], [11, Th. 2.2], [12, 13, 14, 15].

In this paper we deal with an important class of differential equations with uncertainty often met in Mathematical Physics, namely general random non-autonomous second-order linear differential equations:

Ẍ(t) + A(t) Ẋ(t) + B(t) X(t) = C(t),  t ∈ ℝ,   X(t₀) = Y₀,   Ẋ(t₀) = Y₁.   (1)

Our goal is to obtain approximations of the solution stochastic process X(t) as well as of its main statistical features, by taking advantage of the adaptive gPC techniques [7, 9]. Here, A(t), B(t) and C(t) are stochastic processes and Y0 and Y1 are random variables in an underlying complete probability space (Ω, 𝓕, ℙ). The term X(t) is the solution stochastic process to the random IVP (1) in some probabilistic sense. We will detail conditions for existence and uniqueness of solution in the following section.

Particular cases of (1) (with no random forcing term, C(t)) have been treated in the extant literature by using the random Fröbenius method. Specifically, Airy, Hermite, Legendre, Laguerre and Bessel differential equations have been randomized and rigorously studied in [16, 17, 18, 19, 20, 21], respectively. The study includes the computation of the expectation and the variance of the solution stochastic process.

In our recent contributions [22, 23], we have studied the general problem (1) when A(t), B(t) and C(t) are analytic stochastic processes in the mean square sense. As proved there, the random power series solution converges in the mean square sense when A(t) and B(t) are analytic processes in the L^∞(Ω) sense, C(t) is a mean square convergent random power series, and the initial conditions Y₀ and Y₁ belong to L²(Ω). Under those assumptions, the expectation and variance of the solution process X(t) can be rigorously approximated.

In [24] the authors study RDEs by taking advantage of homotopy analysis and they provide a complete set of illustrative examples dealing with random second-order linear differential equations.

In this paper, we want to go one step further: we will perform a computational analysis based upon adaptive gPC, showing its capability to deal with the general random IVP (1), which comprises Airy, Hermite, Legendre, Laguerre and Bessel differential equations, or any other formulation of (1) based on analytic data processes, as particular cases. We will thus resolve the future line of research brought up in [23, Section 5].

The paper is organized as follows. Section 2 describes the application of adaptive gPC to solve the random IVP (1) and the computation of the expectation and covariance of X(t). The study is split into two cases, depending on the probabilistic dependence of the random inputs. In Section 3, we present the algorithms corresponding to the theory developed in Section 2. Section 4 presents particular examples of (1) where adaptive gPC, the random Fröbenius method and Monte Carlo simulation are carried out to obtain approximations for the expectation, variance and covariance of the solution stochastic process. It is shown that adaptive gPC provides the same results as the Fröbenius method with small basis orders p and, moreover, that adaptive gPC may be successful in cases where the Fröbenius method is not applicable. Finally, in Section 5, conclusions are drawn.

2 Method

Consider the random IVP (1), where

A(t) = a₀(t) + Σ_{i=1}^{d_A} a_i(t) γ_i,   B(t) = b₀(t) + Σ_{i=1}^{d_B} b_i(t) η_i,   C(t) = c₀(t) + Σ_{i=1}^{d_C} c_i(t) ξ_i,   (2)

where γ₁, …, γ_{d_A}, η₁, …, η_{d_B} and ξ₁, …, ξ_{d_C} are random variables (not necessarily independent) and a₀(t), …, a_{d_A}(t), b₀(t), …, b_{d_B}(t) and c₀(t), …, c_{d_C}(t) are real functions. Representation (2) for the input stochastic processes includes truncated random power series [2, p. 99] and Karhunen-Loève expansions [3, Ch. 4], [25, Ch. 5]. This is an improvement with respect to the random Fröbenius method used in [16, 17, 18, 19, 20, 21, 22, 23], in which A(t), B(t) and C(t) can only be expressed as random power series.

As we are interested in constructive computational aspects of uncertainty quantification, we will assume that there exists a unique solution stochastic process X(t) to the IVP (1) in some probabilistic sense, for instance, the sample path sense [1, SP problem], [2, Appendix A], or the L^q(Ω) sense [2], in such a way that 𝔼[X(t)²] < ∞ for each t. We detail the conditions under which there exists a unique solution X(t) to (1) in the following propositions. The proofs are simple consequences of the references cited therein. Proposition 2.1, which is concerned with sample path solutions, is a direct consequence of the deterministic theory of ordinary differential equations (Carathéodory theory on the existence of absolutely continuous solutions [26, pp. 28–30]). Proposition 2.2 takes advantage of a natural generalization to L^q(Ω) random calculus of the classical Picard theorem for deterministic ordinary differential equations [2, Th. 5.1.2].

Proposition 2.1

(Sample path solution) [26, pp. 28–30]. If A(t), B(t) and C(t) have real integrable sample paths, then there exists a unique solution stochastic process X(t) to (1) with C¹ sample paths and derivative Ẋ(t) with absolutely continuous sample paths (i.e., X(t) is a solution whose sample paths belong to the Sobolev space W^{2,1}). Moreover, if A(t), B(t) and C(t) have continuous sample paths, then X(t) has C² sample paths.

Proposition 2.2

(L^q(Ω) solution) [2, Ch. 5], [23]. If A(t) and B(t) are continuous stochastic processes in the L^∞(Ω) sense, and the source term C(t) is continuous in the L^q(Ω) sense, then there exists a unique solution X(t) to (1) in the L^q(Ω) sense.

Our goal is to approximate the solution stochastic process X(t) to the random IVP (1) by using adaptive gPC, which is described in [7, 9] and has been reviewed in Section 1. In the case that the random inputs γ₁, …, γ_{d_A}, η₁, …, η_{d_B}, ξ₁, …, ξ_{d_C}, Y₀ and Y₁ are independent, we will use the method from [7], whereas in the case that they are not independent, we will use the method from [9]. In [7, 9], the random inputs are assumed to be absolutely continuous, so that the weights in the inner products are given by density functions. Notice, however, that the random inputs may also follow a discrete distribution with infinitely many point masses. Indeed, the corresponding inner product becomes an integral with respect to a discrete law, which is a series whose weights are the probabilities of the point masses. Moreover, since the support has infinite cardinality, the corresponding canonical basis of polynomials is infinite, so that its length p can grow up to infinity. A minimal sketch of this discrete case follows.
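The discrete case is handled by the very same Gram-Schmidt machinery; a minimal Mathematica sketch (ours), under the illustrative assumption ζ_i ∼ Poisson(3), where Expectation evaluates the series over the point masses:

    dist = PoissonDistribution[3];                    (* infinitely many point masses *)
    ip = Expectation[#1 #2, Distributed[Z, dist]] &;  (* the "integral" is now a series *)
    Expand[Orthogonalize[{1, Z, Z^2}, ip]]
    (* -> {1, (Z - 3)/Sqrt[3], ...}: Charlier-type orthonormal polynomials *)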

For ease of notation, and to match the notation of Section 1, we denote the random inputs γ₁, …, γ_{d_A}, η₁, …, η_{d_B}, ξ₁, …, ξ_{d_C}, Y₀ and Y₁ by ζ₁, …, ζ_s, where s = d_A + d_B + d_C + 2. The random variables ζ₁, …, ζ_s are not necessarily independent, and they are either absolutely continuous or discrete with infinitely many point masses. We will write ζ = (ζ₁, …, ζ_s). The space of polynomials evaluated at ζ_i of degree at most p will be denoted by 𝓟_p[ζ_i]. The space of multivariate polynomials evaluated at ζ of total degree at most p will be written as 𝓟_p^s[ζ].

In the next development, we distinguish two cases depending on whether the random inputs ζ1, …, ζs are independent or not.

2.1 The random inputs are independent

In the notation from [7] and Section 1, let C_i^p = {1, ζ_i, …, ζ_i^p} be the canonical basis of 𝓟_p[ζ_i], for i = 1, …, s. Let Ξ_i^p = {ϕ_0^i(ζ_i), …, ϕ_p^i(ζ_i)} be the orthonormalization of C_i^p with respect to the inner product defined by the law ℙ_{ζ_i}, via a Gram-Schmidt procedure. Let Ξ = {ϕ₁(ζ), …, ϕ_P(ζ)} be the orthonormal basis of 𝓟_p^s[ζ] with respect to the product law ℙ_ζ = ℙ_{ζ_1} × ⋯ × ℙ_{ζ_s}, where P = (p + s)!/(p! s!).

We approximate the solution stochastic process X(t) ≈ Σ_{i=1}^{P} x_i(t) ϕ_i(ζ) by imposing that the right-hand side be a solution to the random IVP (1):

Σ_{i=1}^{P} ẍ_i(t) ϕ_i(ζ) + (a₀(t) + Σ_{i=1}^{d_A} a_i(t) γ_i) Σ_{i=1}^{P} ẋ_i(t) ϕ_i(ζ) + (b₀(t) + Σ_{i=1}^{d_B} b_i(t) η_i) Σ_{i=1}^{P} x_i(t) ϕ_i(ζ) = c₀(t) + Σ_{i=1}^{d_C} c_i(t) ξ_i.   (3)

We apply the stochastic Galerkin projection technique. By multiplying by ϕk(ζ), k = 1, …, P, applying expectations, using the orthonormality of Ξ and the fact that ϕ1 = 1, we obtain:

ẍ_k(t) + a₀(t) ẋ_k(t) + Σ_{i=1}^{d_A} Σ_{j=1}^{P} a_i(t) ẋ_j(t) 𝔼[γ_i ϕ_j(ζ) ϕ_k(ζ)] + b₀(t) x_k(t) + Σ_{i=1}^{d_B} Σ_{j=1}^{P} b_i(t) x_j(t) 𝔼[η_i ϕ_j(ζ) ϕ_k(ζ)] = c₀(t) δ_{1k} + Σ_{i=1}^{d_C} c_i(t) 𝔼[ξ_i ϕ_k(ζ)].   (4)

Let us put this equation in matrix form. Consider the P × P matrices M and N defined by

M_{kj}(t) = Σ_{i=1}^{d_A} a_i(t) 𝔼[γ_i ϕ_j(ζ) ϕ_k(ζ)],   N_{kj}(t) = Σ_{i=1}^{d_B} b_i(t) 𝔼[η_i ϕ_j(ζ) ϕ_k(ζ)],   (5)

for k, j = 1, …, P. Consider the vector q of length P with

qk=i=1dCci(t)E[ξiϕk(ζ)],(6)

for k = 1 …, P. We rewrite (4) as a deterministic system of P differential equations:

ẍ(t) + (M(t) + a₀(t) I_P) ẋ(t) + (N(t) + b₀(t) I_P) x(t) = q(t) + c₀(t) e₁,   (7)

where x(t) = (x₁(t), …, x_P(t))^⊤, I_P is the P × P identity matrix and e₁ = (1, 0, …, 0)^⊤ is the first vector of the canonical basis. It remains to find the initial conditions for (7). From Σ_{i=1}^{P} x_i(t₀) ϕ_i(ζ) = Y₀ and Σ_{i=1}^{P} ẋ_i(t₀) ϕ_i(ζ) = Y₁, we obtain x_k(t₀) = 𝔼[Y₀ ϕ_k(ζ)] and ẋ_k(t₀) = 𝔼[Y₁ ϕ_k(ζ)], for k = 1, …, P. Thus, the initial conditions become x(t₀) = y and ẋ(t₀) = y′, where y = (y₁, …, y_P)^⊤ and y′ = (y′₁, …, y′_P)^⊤, with

y_k = 𝔼[Y₀ ϕ_k(ζ)],   y′_k = 𝔼[Y₁ ϕ_k(ζ)],   (8)

for k = 1, …, P.

This system of deterministic differential equations can be solved by standard numerical techniques. Once we have computed the solution (x₁(t), …, x_P(t)), we obtain the approximation Σ_{i=1}^{P} x_i(t) ϕ_i(ζ) to the solution stochastic process X(t). Moreover, one can approximate the expectation and covariance of X(t):

𝔼[X(t)] ≈ x₁(t),   ℂov[X(t₁), X(t₂)] ≈ Σ_{i=2}^{P} x_i(t₁) x_i(t₂).   (9)
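As an end-to-end illustration of (5)–(9) (our own sketch, not taken from the references), the following Mathematica code runs the independent-case pipeline for the toy Airy-type problem Ẍ(t) + η₁ t X(t) = 0, X(0) = 1, Ẋ(0) = 0, with a single input η₁ ∼ Uniform(0, 1); the choice of law, the horizon t ∈ [0, 2] and all names are illustrative assumptions:

    p = 3; P = p + 1;                                    (* s = 1 random input, so P = p + 1 *)
    dist = UniformDistribution[{0, 1}];                  (* illustrative law of eta1 *)
    ip = Expectation[#1 #2, Distributed[Z, dist]] &;
    phi = Expand[Orthogonalize[Z^Range[0, p], ip]];
    nmat = Table[ip[Z phi[[j]], phi[[k]]], {k, P}, {j, P}];  (* E[eta1 phi_j phi_k]; N(t) = t nmat, cf. (5) *)
    vars = Table[x[k][t], {k, P}];
    eqns = Thread[D[vars, {t, 2}] + t (nmat . vars) == 0];   (* system (7) with A(t) = 0, C(t) = 0 *)
    ics = Join[Thread[(vars /. t -> 0) == UnitVector[P, 1]], (* Y0 = 1, cf. (8) *)
        Thread[(D[vars, t] /. t -> 0) == 0]];                (* Y1 = 0 *)
    sol = First@NDSolve[Join[eqns, ics], Table[x[k], {k, P}], {t, 0, 2}];
    meanX[t1_] := x[1][t1] /. sol;                           (* E[X(t)] ~ x1(t), cf. (9) *)
    varX[t1_] := Sum[(x[k][t1] /. sol)^2, {k, 2, P}];        (* V[X(t)], cf. (9) *)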

2.2 The random inputs may not be independent

In the notation from [9] and Section 1, let C_i^p = {1, ζ_i, …, ζ_i^p} be the canonical basis of 𝓟_p[ζ_i], for i = 1, …, s. We construct the basis Ξ = {ϕ₁, …, ϕ_P} of 𝓟_p^s[ζ] as in [9]. This basis is not orthonormal with respect to the law ℙ_ζ.

We approximate the solution stochastic process X(t) ≈ Σ_{i=1}^{P} x_i(t) ϕ_i(ζ) by imposing that the right-hand side be a solution to the random IVP (1). One obtains (3). By multiplying by ϕ_k(ζ) and applying expectations, k = 1, …, P, we derive that

Σ_{i=1}^{P} ẍ_i(t) 𝔼[ϕ_i(ζ) ϕ_k(ζ)] + a₀(t) Σ_{i=1}^{P} ẋ_i(t) 𝔼[ϕ_i(ζ) ϕ_k(ζ)] + Σ_{i=1}^{d_A} Σ_{j=1}^{P} a_i(t) ẋ_j(t) 𝔼[γ_i ϕ_j(ζ) ϕ_k(ζ)] + b₀(t) Σ_{i=1}^{P} x_i(t) 𝔼[ϕ_i(ζ) ϕ_k(ζ)] + Σ_{i=1}^{d_B} Σ_{j=1}^{P} b_i(t) x_j(t) 𝔼[η_i ϕ_j(ζ) ϕ_k(ζ)] = c₀(t) 𝔼[ϕ_k(ζ)] + Σ_{i=1}^{d_C} c_i(t) 𝔼[ξ_i ϕ_k(ζ)].   (10)

Define the P × P matrix R and the vector h of length P as

R_{ik} = 𝔼[ϕ_i(ζ) ϕ_k(ζ)],   h_k = 𝔼[ϕ_k(ζ)],   (11)

for i, k = 1, …, P. Expression (10) can be written in matrix form as a deterministic system of P differential equations:

R ẍ(t) + (M(t) + a₀(t) R) ẋ(t) + (N(t) + b₀(t) R) x(t) = q(t) + c₀(t) h.   (12)

The initial conditions are given by R x(t₀) = y and R ẋ(t₀) = y′.

This system of deterministic differential equations is solvable by standard numerical techniques. Once we have computed the approximation Σ_{i=1}^{P} x_i(t) ϕ_i(ζ) of the solution stochastic process X(t), the expectation and covariance of X(t) can be approximated as follows:

𝔼[X(t)] ≈ Σ_{i=1}^{P} x_i(t) 𝔼[ϕ_i(ζ)],   ℂov[X(t₁), X(t₂)] ≈ Σ_{i=1}^{P} Σ_{j=1}^{P} x_i(t₁) x_j(t₂) ℂov[ϕ_i(ζ), ϕ_j(ζ)].   (13)
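A minimal sketch of the quantities (11) and of how (13) is then evaluated, under the illustrative assumption of two dependent jointly Gaussian inputs (the monomial tensor basis is the one of [9]; all names and parameter values are ours):

    p = 1;
    dist = MultinormalDistribution[{0, 0}, {{1, 1/2}, {1/2, 1}}]; (* illustrative dependent law *)
    ev = Expectation[#, Distributed[{Z1, Z2}, dist]] &;
    basis = Flatten@Table[Z1^p1 Z2^p2, {p1, 0, p}, {p2, 0, p - p1}]; (* {1, Z2, Z1}, P = 3 *)
    bigP = Length[basis];
    r = Table[ev[basis[[i]] basis[[k]]], {i, bigP}, {k, bigP}];  (* Gramian R of (11) *)
    h = ev /@ basis;                                             (* vector h of (11) *)
    (* with Galerkin coefficients x[i][t] from (12), formula (13) reads:
       meanX[t1] = Sum[x[i][t1] h[[i]], {i, bigP}]
       covX[t1, t2] = Sum[x[i][t1] x[j][t2] (r[[i, j]] - h[[i]] h[[j]]),
           {i, bigP}, {j, bigP}],  since Cov[phi_i, phi_j] = R_ij - h_i h_j *)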

3 Algorithm

In this section we present the algorithm corresponding to Section 2. From the random inputs A(t), B(t) and C(t) having expression (2) and the initial conditions Y0 and Y1, we will show the steps to be followed in order to approximate the expectation and covariance of the solution stochastic process X(t). As in Section 2, denote the random input parameters by ζ1, …, ζs.

Case ζ1, …, ζs are independent:

  • Step 1

    Define the canonical bases C_i^p = {1, ζ_i, …, ζ_i^p}, i = 1, …, s.

  • Step 2

    Via a Gram-Schmidt procedure, orthonormalize C_i^p into a new basis Ξ_i^p = {ϕ_0^i(ζ_i), …, ϕ_p^i(ζ_i)} with respect to the probability law ℙ_{ζ_i} of ζ_i. In the software Mathematica®, this can be readily done with the built-in function Orthogonalize. For example, if p = 3 and the probability distribution is dist, then the command could be:

    Expand[Orthogonalize[{1, Z, Z^2, Z^3},

    Integrate[#1 #2 PDF[dist, Z], {Z, -Infinity, Infinity}] &]]
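    For instance, under the illustrative choice dist = NormalDistribution[0, 1], the command returns the orthonormal (probabilists') Hermite polynomials:

    (* dist = NormalDistribution[0, 1] yields
       {1, Z, (Z^2 - 1)/Sqrt[2], (Z^3 - 3 Z)/Sqrt[6]} *)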

  • Step 3

    By using a simple tensor product, define the basis Ξ = {ϕ₁(ζ), …, ϕ_P(ζ)}, orthonormal with respect to the joint law ℙ_ζ = ℙ_{ζ_1} × ⋯ × ℙ_{ζ_s}.

  • Step 4

    Construct the matrices M(t) and N(t) given by (5), the vector q(t) defined by (6), and the initial conditions y and y′ given by (8). All the involved expectations can be calculated with the built-in function Expectation from Mathematica®.

  • Step 5

    Solve numerically the deterministic system of P differential equations given by (7) with initial conditions x(t₀) = y and ẋ(t₀) = y′. This system does not pose serious numerical challenges. We thus integrate the equations over time with the standard NDSolve routine from Mathematica®: write the instruction

    NDSolve[eqns,function,{t,t0,T}]

    with automatic method, step size, etc. (the built-in function will automatically try to estimate the best method for a particular computation).

  • Step 6

    Approximate the expectation and covariance of the unknown solution stochastic process by using (9).

Case ζ1, …, ζs are not independent:

  • Step 1

    Define the canonical bases C_i^p = {1, ζ_i, …, ζ_i^p}, i = 1, …, s.

  • Step 2

    By using a simple tensor product, define the basis Ξ = {ϕ1(ζ), …, ϕP(ζ)}.

  • Step 3

    Construct the matrices M(t) and N(t) given by (5), the vector q(t) defined by (6), the matrix R and the vector h given by (11), and the vectors y and y′ given by (8). All the involved expectations can be calculated with the built-in function Expectation from Mathematica®.

  • Step 4

    Solve numerically the deterministic system of P differential equations given by (12) with initial conditions R x(t₀) = y and R ẋ(t₀) = y′. This system does not pose serious numerical challenges. We thus integrate the equations over time with the standard NDSolve routine from Mathematica® with the option

    Method -> {"EquationSimplification" -> "Residual"}

    (to deal with the corresponding system of differential-algebraic equations): write the instruction

    NDSolve[eqns,function,{t,t0,T},

    Method -> {"EquationSimplification" -> "Residual"}]

    with automatic method, step size, etc. (the built-in function will automatically try to pick the best method for a particular computation).

  • Step 5

    Approximate the expectation and covariance of the unknown solution stochastic process by using (13).
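To close the section, here is a minimal toy illustration of Step 4 of the dependent case (our own values, not from the paper): a constant matrix r standing in for the Gramian R of (12) makes the system implicit in ẍ, which is what the "Residual" simplification handles:

    r = {{1, 1/2}, {1/2, 1}};                                    (* toy Gramian R *)
    eqns = Thread[r . {x1''[t], x2''[t]} + {x1[t], x2[t]} == 0]; (* R x'' + x = 0, cf. (12) *)
    ics = {x1[0] == 1, x2[0] == 0, x1'[0] == 0, x2'[0] == 0};
    sol = NDSolve[Join[eqns, ics], {x1, x2}, {t, 0, 1},
        Method -> {"EquationSimplification" -> "Residual"}];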

4 Examples

In this section we show particular examples of the random IVP (1) to which we apply adaptive gPC to approximate the expectation and covariance of the solution stochastic process X(t).

We will compare the results with Monte Carlo simulation. This method is based on sampling: one samples from the probability distributions of A(t), B(t), C(t), Y₀ and Y₁ to obtain, say, m realizations, with m large:

A^{(1)}(t), …, A^{(m)}(t),   B^{(1)}(t), …, B^{(m)}(t),   C^{(1)}(t), …, C^{(m)}(t),   Y₀^{(1)}, …, Y₀^{(m)},   Y₁^{(1)}, …, Y₁^{(m)}.

Then we solve the m deterministic initial value problems

Ẍ^{(i)}(t) + A^{(i)}(t) Ẋ^{(i)}(t) + B^{(i)}(t) X^{(i)}(t) = C^{(i)}(t),  t ∈ ℝ,   X^{(i)}(t₀) = Y₀^{(i)},   Ẋ^{(i)}(t₀) = Y₁^{(i)},

so that we obtain m realizations of X(t): X^{(1)}(t), …, X^{(m)}(t). The Law of Large Numbers permits approximating 𝔼[X(t)] and 𝕍[X(t)] by the sample mean and sample variance of X^{(1)}(t), …, X^{(m)}(t):

𝔼[X(t)] ≈ μ_m(t) = (1/m) Σ_{i=1}^{m} X^{(i)}(t),   𝕍[X(t)] ≈ (1/(m − 1)) Σ_{i=1}^{m} (X^{(i)}(t) − μ_m(t))².
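A minimal Monte Carlo sketch in Mathematica for (1), written for the Airy-type data of Example 4.1 below (A ∼ Beta(2, 3), Y₀ ∼ Normal(1, 1), Y₁ ∼ Normal(2, 1)); the sample size, the evaluation time t = 2 and all names are illustrative assumptions:

    m = 1000;                                          (* number of realizations *)
    samples = Table[Module[{a, y0, y1, xs},
        a = RandomVariate[BetaDistribution[2, 3]];     (* realization of A *)
        y0 = RandomVariate[NormalDistribution[1, 1]];  (* realization of Y0 *)
        y1 = RandomVariate[NormalDistribution[2, 1]];  (* realization of Y1 *)
        xs = NDSolveValue[{X''[t] + a t X[t] == 0, X[0] == y0, X'[0] == y1},
            X, {t, 0, 2}];                             (* deterministic IVP for this draw *)
        xs[2.]], {m}];
    {Mean[samples], Variance[samples]}                 (* estimates of E[X(2)] and V[X(2)] *)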

The results of adaptive gPC agree with Monte Carlo simulation, although the convergence rate of Monte Carlo is much slower (its error convergence rate is inversely proportional to the square root of the number of realizations [3, p. 53]).

The results for the expectation will also be compared with the dishonest method [27, p. 149]. It consists in estimating 𝔼[X(t)] by substituting A(t), B(t), C(t), Y₀ and Y₁ in (1) by their corresponding expected values. Denoting μ_X(t) = 𝔼[X(t)], the idea is that, since 𝔼[Ẍ(t)] = d²μ_X(t)/dt² and 𝔼[Ẋ(t)] = dμ_X(t)/dt, because the mean square limit commutes with the expectation operator (see [2, Ch. 4]), one solves:

d²μ_X(t)/dt² + 𝔼[A(t)] dμ_X(t)/dt + 𝔼[B(t)] μ_X(t) = 𝔼[C(t)],  t ∈ ℝ,   μ_X(t₀) = 𝔼[Y₀],   dμ_X(t₀)/dt = 𝔼[Y₁].

In our context, the dishonest method will work in cases where ℂov[A(t), Ẋ(t)] and ℂov[B(t), X(t)] are small, but in general there is no certainty that this holds. Thus, this method is a naive approximation to the true expectation, with no theoretical support, although it has seen a certain use in the literature [27].
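For the Airy data of Example 4.1 below, the dishonest estimate amounts to solving one deterministic IVP; a short sketch (our own illustration):

    (* E[A] = 2/5 for Beta(2, 3); E[Y0] = 1, E[Y1] = 2 *)
    muX = NDSolveValue[{mu''[t] + (2/5) t mu[t] == 0, mu[0] == 1, mu'[0] == 2},
        mu, {t, 0, 2}];
    muX[2.]                        (* dishonest approximation of E[X(2)] *)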

When possible, the results obtained via adaptive gPC for the expectation and variance will be compared with the random Fröbenius method. The convergence of the random Fröbenius method will be guaranteed by previous studies, see [22, 23].

Several conclusions are drawn from these examples. Adaptive gPC allows for random inputs (2) more general than the random Fröbenius method: A(t), B(t) and C(t) may not be analytic, they may be represented via a truncated Karhunen-Loève expansion, etc. Moreover, with a small basis degree p, accurate results are obtained (this is due to the well-known spectral convergence of gPC-based methods). In practical applications, a disadvantage of adaptive gPC is that the random parameter inputs cannot have finitely many point masses (otherwise the space of polynomials evaluated at them would be finite-dimensional). From a computational standpoint, a large number s of random input parameters may make the computations unaffordable, as the order P of the basis grows as P = (p + s)!/(p! s!).

Example 4.1

Airy-type differential equations appear in a variety of applications to Mathematical Physics, such as the description of the solution to the Schrödinger equation for a particle confined within a triangular potential, in the solution for the one-dimensional motion of a quantum particle affected by a constant force, or in the theory of diffraction of radio waves around the Earth’s surface [28]. Airy’s random differential equation is given by [16]:

Ẍ(t) + A t X(t) = 0,  t ∈ ℝ,   X(0) = Y₀,   Ẋ(0) = Y₁,   (14)

where A, Y₀ and Y₁ are random variables. It is well known that the solution to the deterministic Airy differential equation is highly oscillatory; hence it is expected that, in dealing with its stochastic counterpart, differences between distinct methods will be highlighted.

Existence and uniqueness of a sample path solution is guaranteed by Proposition 2.1. Concerning the existence and uniqueness of a mean square solution, we refer to [16, 22] or Proposition 2.2, under the assumption that A is a bounded random variable. This boundedness assumption is not a restriction in practice, as one may truncate the random variable A on a support as large as desired.

In [16], the following distributions for A, Y₀ and Y₁ are set: A ∼ Beta(2, 3), Y₀ ∼ Normal(1, 1) and Y₁ ∼ Normal(2, 1), assumed to be independent. Approximations for the expectation and variance via the random Fröbenius method and Monte Carlo simulation are obtained in [16]. We use adaptive gPC (independent case) with p = 3 and p = 4, taking ζ₁ = A, ζ₂ = Y₀ and ζ₃ = Y₁; in the notation of (2), A(t) = 0, C(t) = 0, η₁ = A and b₁(t) = t. The results obtained are shown in Table 1 (expectation), Table 2 (variance) and Table 3 (covariance). The order of truncation in the random Fröbenius method is denoted by N. Observe that the gPC expansions have already converged for t ∈ [0, 2] with order p = 3. This rapid convergence shows the potential of this approach.

Table 1

Approximation of 𝔼[X(t)]. Example 4.1, assuming independent random data.

Table 2

Approximation of 𝕍[X(t)]. Example 4.1, assuming independent random data.

Table 3

Approximation of ℂov[X(t), X(s)] via adapted gPC with p = 3 and p = 4. Example 4.1, assuming independent random data.

In Figure 1, we focus on the convergence of the gPC expansions. The solid lines represent the expectations, while the dashed lines represent confidence intervals constructed with the rule mean ± standard deviation (the standard deviation being the square root of the variance). Observe that, as we move away from t = 0, larger values of p are required to achieve good approximations of the statistics of X(t). Indeed, the Galerkin projections deviate from the exact solution after a certain time. Note also that larger values of p are needed to obtain accurate results for the standard deviation than for the expectation (statistical moments of order 2 are harder to approximate than moments of order 1). For p = 3 and p = 4, the approximate expectations agree up to time t = 7, whereas the standard deviations agree up to t = 4.5. For p = 2 and p = 3, similar means are obtained until t = 6, and similar standard deviations up to t = 4. Notice that the convergence deteriorates for p = 1: the results for p = 1 and p = 2 agree until t = 4 for the expectation, but only up to t = 1.5 for the standard deviation. As p grows, the approximation of the statistics improves for larger t.

Figure 1: Expectation and confidence interval for the solution stochastic process, for orders of basis p = 1, 2, 3, 4. Example 4.1, assuming independent random data.

Using the random Fröbenius method from [16] for comparison, an example of Airy's differential equation with dependent random inputs is also performed. The vector (A, Y₀, Y₁) is set to have a multivariate Gaussian distribution, with mean vector and covariance matrix given by

μ = (0.4, 1, 2)^⊤,   Σ = ( 0.04    0.0001   0.005
                           0.0001  1        0.5
                           0.005   0.5      1 ),

respectively. In Table 4, Table 5 and Table 6, the results obtained via adaptive gPC with p = 3 and p = 4 (dependent case) and via [16] are shown. Adaptive gPC converges for a small basis order p.

Table 4

Approximation of 𝔼[X(t)]. Example 4.1, assuming dependent random data.

Table 5

Approximation of 𝕍[X(t)]. Example 4.1, assuming dependent random data.

Table 6

Approximation of ℂov[X(t), X(s)] via adapted gPC with p = 3 and p = 4. Example 4.1, assuming dependent random data.

In Figure 2, we analyze the convergence of the gPC expansions by depicting the expectation (solid line) and confidence interval (dashed lines) for X(t), where the confidence interval is constructed as mean ± standard deviation. Comments analogous to those for Figure 1 apply here. For orders p = 3 and p = 4, the expectations agree up to time t = 6, while the standard deviations coincide until t = 4.6. For p = 2 and p = 3, the means are similar until t = 6, whereas the dispersion estimates start to separate from t = 3.8. Finally, for p = 1 and p = 2, the approximations of the mean coincide until t = 4.5, and those of the standard deviation until t = 2.5.

Figure 2: Expectation and confidence interval for the solution stochastic process, for orders of basis p = 1, 2, 3, 4. Example 4.1, assuming dependent random data.

Example 4.2

Consider the random differential equation

Ẍ(t) + (γ₁ + γ₂ t) Ẋ(t) + (η₁ + t) X(t) = ξ₁ cos(t) + g(t),  t ∈ ℝ,   X(0) = Y₀,   Ẋ(0) = Y₁,   (15)

where γ₁ ∼ Poisson(3), γ₂ ∼ Uniform(0, 1), η₁ ∼ Gamma(2, 2), Y₀ = −1, Y₁ ∼ Exponential(4), ξ₁ ∼ Uniform(−8, 2) and g(t) = e^{−1/t} 𝟙_{(0,∞)}(t).

Proposition 2.1 ensures the existence and uniqueness of a sample path solution. To apply Proposition 2.2, one would need to truncate the supports of γ1 and η1. These truncations can be constructed on intervals as large as desired, in order to maintain the results.

The input random variables ζ₁ = γ₁, ζ₂ = γ₂, ζ₃ = η₁, ζ₄ = ξ₁ and ζ₅ = Y₁ are assumed to be independent. In the notation of (2), the involved functions are a₁(t) = 1, a₂(t) = t, b₀(t) = t, b₁(t) = 1, c₀(t) = g(t) and c₁(t) = cos(t). Notice that C(t) is not an analytic stochastic process, because g(t) is not a real analytic function. The random Fröbenius method is therefore not applicable to the random IVP (15). However, we are going to see that adaptive gPC (independent case) with p = 6 and p = 7 provides reliable approximations of the expectation and covariance of X(t). We will compare the results with Monte Carlo simulation. Table 7, Table 8 and Table 9 show the estimates obtained; a sketch of the setup of the random data follows.
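A sketch of how the random data of (15) can be set up in Mathematica (our own illustration; the Gamma and Exponential parameter conventions of Mathematica are assumed to match those intended in (15)):

    g[t_] := If[t > 0, Exp[-1/t], 0]   (* C-infinity but non-analytic at t = 0 *)
    dists = {PoissonDistribution[3], UniformDistribution[{0, 1}],
        GammaDistribution[2, 2], UniformDistribution[{-8, 2}],
        ExponentialDistribution[4]};   (* laws of zeta1, ..., zeta5 *)
    ip[i_] := Expectation[#1 #2, Distributed[Z, dists[[i]]]] &;
    Expand[Orthogonalize[Z^Range[0, 2], ip[1]]]  (* Charlier-type basis for gamma1 *)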

Table 7

Approximation of 𝔼[X(t)]. Example 4.2, assuming independent random data.

Table 8

Approximation of 𝕍[X(t)]. Example 4.2, assuming independent random data.

Table 9

Approximation of ℂov[X(t), X(s)] via adapted gPC with p = 7. Example 4.2, assuming independent random data.

In Figure 3, we focus on the convergence of gPC expansions. We depict the estimates of the expectations (solid line) and confidence intervals (dashed lines), with the rule mean ± deviation, for orders p = 4, 5, 6, 7. Note that convergence is achieved for t ∈ [0, 10].

Figure 3: Expectation and confidence interval for the solution stochastic process, for orders of basis p = 4, 5, 6, 7. Example 4.2, assuming independent random data.

Example 4.3

Consider the random differential equation

Ẍ(t) + B(t) X(t) = C,  t ∈ [0, 1],   X(0) = Y₀,   Ẋ(0) = Y₁,   (16)

where B(t) is a standard Brownian motion on [0, 1], C ∼ Poisson(2), and the initial conditions are distributed as Y0 ∼ Beta(1/2, 1/2) and Y1 = 0. These random inputs are assumed to be independent.

This stochastic system has a unique solution in the sample path sense, by Proposition 2.1. In principle, one cannot ensure the existence of a mean square solution, since the sample paths of Brownian motion are not bounded.

Consider the Karhunen-Loève expansion of Brownian motion [25, p. 216]:

B(t) = Σ_{j=1}^{∞} (√2 / ((j − 1/2)π)) sin((j − 1/2)π t) ξ_j,

where ξ₁, ξ₂, … are independent Normal(0, 1) random variables. The series is understood in L²([0, 1] × Ω). We truncate the Karhunen-Loève expansion so that B(t) has the form in (2). If we take d_B = 7, we capture more than 97% of the total variance of B(t). Thus, we take

B(t) = Σ_{j=1}^{7} (√2 / ((j − 1/2)π)) sin((j − 1/2)π t) ξ_j.

The random inputs become ζ₁ = ξ₁, …, ζ₇ = ξ₇, ζ₈ = C and ζ₉ = Y₀, with functions b_j(t) = (√2 / ((j − 1/2)π)) sin((j − 1/2)π t), 1 ≤ j ≤ 7, and c₁(t) = 1.
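The 97% figure can be verified directly from the Karhunen-Loève eigenvalues λ_j = ((j − 1/2)π)^{−2} of the Brownian covariance operator, whose total sum is 1/2; a one-line Mathematica check (ours):

    lambda[j_] := ((j - 1/2) Pi)^-2;   (* KL eigenvalues of Brownian motion on [0, 1] *)
    N[Sum[lambda[j], {j, 1, 7}]/Sum[lambda[j], {j, 1, Infinity}]]   (* 0.9711 *)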

Notice that, if one truncates ξ1, …, ξ7 to a large but bounded support, Proposition 2.2 entails that there exists a solution stochastic process in the mean square sense.

In Table 10, Table 11 and Table 12, we show the results obtained by adaptive gPC with p = 2, p = 3 and Monte Carlo simulation. Similar estimates are obtained for p = 2 and p = 3, which agrees with the convergence of gPC-based representations.

Table 10

Approximation of 𝔼[X(t)]. Example 4.3, assuming independent random data.

Table 11

Approximation of 𝕍[X(t)]. Example 4.3, assuming independent random data.

Table 12

Approximation of ℂov[X(t), X(s)] via adapted gPC with p = 3. Example 4.3, assuming independent random data.

In Figure 4, we show graphically the convergence of gPC expansions on [0, 1]: we plot the approximate expectations (solid line) and confidence intervals (dashed lines) for X(t), where the confidence interval is constructed as mean ± deviation. For p = 1, 2, 3, no differences in the estimates are observed.

Figure 4: Expectation and confidence interval for the solution stochastic process, for orders of basis p = 1, 2, 3. Example 4.3, assuming independent random data.

5 Conclusions

In this paper, we have quantified computationally the uncertainty in random non-autonomous second-order linear differential equations via adaptive gPC. After reviewing adaptive gPC from the extant literature, we have provided a methodology and an algorithm to approximate computationally the expectation and covariance of the solution stochastic process. The hypotheses of our algorithm allow both independent and dependent random parameter inputs, either absolutely continuous or discrete with infinitely many point masses. The generality of our computational results allows the random input coefficients to be truncated random power series or truncated Karhunen-Loève expansions. The former case permits comparing our methodology with the random Fröbenius method, an approach already used in the literature for particular random second-order linear differential equations, and with Monte Carlo simulation. A wide variety of examples shows that adaptive gPC successfully quantifies the uncertainty in random non-autonomous second-order linear differential equations, even when the random Fröbenius method is not applicable.

Acknowledgement

This work has been supported by the Spanish Ministerio de Economía y Competitividad grant MTM2017–89664–P. Marc Jornet acknowledges the doctorate scholarship granted by Programa de Ayudas de Investigación y Desarrollo (PAID), Universitat Politècnica de València. The authors are grateful for the valuable comments raised by the reviewer, which have improved the final version of the paper.

References

[1] Strand J.L., Random Ordinary Differential Equations, J. Differ. Equ., 1970, 7, 538–553.
[2] Soong T.T., Random Differential Equations in Science and Engineering, 1973, New York: Academic Press.
[3] Xiu D., Numerical Methods for Stochastic Computations. A Spectral Method Approach, 2010, Princeton University Press.
[4] Xiu D., Karniadakis G.E., The Wiener-Askey polynomial chaos for stochastic differential equations, SIAM J. Sci. Comput., 2002, 24 (2), 619–644.
[5] Williams M.M.R., Polynomial chaos functions and stochastic differential equations, Ann. Nucl. Energy, 2006, 33 (9), 774–785.
[6] Chen-Charpentier B.M., Stanescu D., Epidemic models with random coefficients, Math. Comput. Model., 2010, 52 (7–8), 1004–1010.
[7] Chen-Charpentier B.M., Cortés J.C., Licea J.A., Romero J.V., Roselló M.D., Santonja F.J., Villanueva R.J., Constructing adaptive generalized polynomial chaos method to measure the uncertainty in continuous models: A computational approach, Math. Comput. Simulat., 2015, 109, 113–129.
[8] Cortés J.C., Romero J.V., Roselló M.D., Villanueva R.J., Improving adaptive generalized polynomial chaos method to solve nonlinear random differential equations by the random variable transformation technique, Commun. Nonlinear Sci. Numer. Simulat., 2017, 50, 1–15.
[9] Cortés J.C., Romero J.V., Roselló M.D., Santonja F.J., Villanueva R.J., Solving Continuous Models with Dependent Uncertainty: A Computational Approach, Abstr. Appl. Anal., 2013.
[10] Cortés J.C., Navarro-Quiles A., Romero J.V., Roselló M.D., Probabilistic solution of random autonomous first-order linear systems of ordinary differential equations, Rom. Rep. Phys., 2016, 68 (4), 1397–1406.
[11] Gottlieb D., Xiu D., Galerkin method for wave equations with uncertain coefficients, Commun. Comput. Phys., 2008, 3 (2), 505–518.
[12] Ernst O.G., Mugler A., Starkloff H.J., Ullmann E., On the convergence of generalized polynomial chaos expansions, ESAIM-Math. Model. Num., 2012, 46 (2), 317–339.
[13] Shi W., Zhang C., Error analysis of generalized polynomial chaos for nonlinear random ordinary differential equations, Appl. Numer. Math., 2012, 62 (12), 1954–1964.
[14] Shi W., Zhang C., Generalized polynomial chaos for nonlinear random delay differential equations, Appl. Numer. Math., 2017, 115, 16–31.
[15] Calatayud J., Cortés J.C., Jornet M., On the convergence of adaptive gPC for non-linear random difference equations: Theoretical analysis and some practical recommendations, J. Nonlinear Sci. App., 2018, 11 (9), 1077–1084.
[16] Cortés J.C., Jódar L., Camacho F., Villafuerte L., Random Airy type differential equations: Mean square exact and numerical solutions, Comput. Math. Appl., 2010, 60, 1237–1244.
[17] Calbo G., Cortés J.C., Jódar L., Random Hermite differential equations: Mean square power series solutions and statistical properties, Appl. Math. Comput., 2011, 218 (7), 3654–3666.
[18] Calbo G., Cortés J.C., Jódar L., Villafuerte L., Solving the random Legendre differential equation: Mean square power series solution and its statistical functions, Comput. Math. Appl., 2011, 61 (9), 2782–2792.
[19] Calatayud J., Cortés J.C., Jornet M., Improving the approximation of the first and second order statistics of the response process to the random Legendre differential equation, 2018, arXiv preprint, arXiv:1807.03141.
[20] Cortés J.C., Jódar L., Company R., Villafuerte L., Laguerre random polynomials: definition, differential and statistical properties, Utilitas Mathematica, 2015, 98, 283–295.
[21] Cortés J.C., Jódar L., Villafuerte L., Mean square solution of Bessel differential equation with uncertainties, J. Comput. Appl. Math., 2017, 309 (1), 383–395.
[22] Calatayud J., Cortés J.C., Jornet M., Villafuerte L., Random non-autonomous second order linear differential equations: mean square analytic solutions and their statistical properties, Adv. Differ. Equ., 2018, 392, 1–29.
[23] Calatayud J., Cortés J.C., Jornet M., Some notes to extend the study on random non-autonomous second order linear differential equations appearing in Mathematical Modeling, Math. Comput. Appl., 2018, 23 (4), 76–89.
[24] Golmankhaneh A.K., Porghoveh N.A., Baleanu D., Mean square solutions of second-order random differential equations by using homotopy analysis method, Rom. Rep. Phys., 2013, 65 (2).
[25] Lord G.J., Powell C.E., Shardlow T., An Introduction to Computational Stochastic PDEs, 2014, New York: Cambridge Texts in Applied Mathematics, Cambridge University Press.
[26] Hale J.K., Ordinary Differential Equations (2nd ed.), 1980, Malabar: Robert E. Krieger Publishing Company.
[27] Henderson D., Plaschko P., Stochastic Differential Equations in Science and Engineering, 2006, Singapore: World Scientific.
[28] Vallée O., Soares M., Airy Functions and Applications to Physics, 2004, London: Imperial College Press.

About the article

Received: 2018-04-26

Accepted: 2018-12-10

Published Online: 2018-12-31


Conflict of Interest Statement: The authors declare that there is no conflict of interest regarding the publication of this article.


Citation Information: Open Mathematics, Volume 16, Issue 1, Pages 1651–1666, ISSN (Online) 2391-5455, DOI: https://doi.org/10.1515/math-2018-0134.


© 2018 Calatayud et al., published by De Gruyter. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. BY-NC-ND 4.0
