Many laws of physics are formulated via differential equations. In practice, the input parameters of these equations (coefficients, forcing/source terms and initial/boundary conditions) are set from experimental data, and therefore carry the uncertainty inherent in measurement errors. Furthermore, input parameters are often not exactly known because of insufficient information, limited understanding of some underlying phenomena, inherent randomness, etc. These facts motivate treating the input parameters of classical differential equations as random variables or stochastic processes rather than as deterministic constants or functions, respectively. This approach leads to random differential equations (RDEs) [1, 2]. The random behavior of the solution stochastic process can be understood once its main statistical features, say the expectation, variance, covariance, etc., are obtained.

A powerful tool to deal with RDEs is generalized Polynomial Chaos (gPC) [3, 4]. Let (*Ω*, 𝓕, ℙ) be a complete probability space. We work in the Hilbert space (L^{2}(*Ω*), 〈⋅, ⋅〉) of second-order random variables, i.e., random variables with finite variance, where the inner product is defined by 〈*ζ*_{1}, *ζ*_{2}〉 = 𝔼[*ζ*_{1}*ζ*_{2}], 𝔼[⋅] being the expectation operator. In its classical formulation, gPC consists of expanding a random vector *ζ* : *Ω* → ℝ^{n} in multivariate polynomials evaluated at a random vector *Z* : *Ω* → ℝ^{n}, truncated at order *P*: *ζ* ≈ ∑_{i=0}^{P} *ζ̂*_{i}*ϕ*_{i}(*Z*). Here {*ϕ*_{i}(*Z*)}_{i=0}^{∞} is a sequence of polynomials orthogonal in *Z*: 𝔼[*ϕ*_{i}(*Z*)*ϕ*_{j}(*Z*)] = ∫_{ℝ^{n}} *ϕ*_{i}(*z*)*ϕ*_{j}(*z*) dℙ_{Z}(*z*) = *γ*_{i}*δ*_{ij}, where ℙ_{Z} = ℙ ∘ *Z*^{−1} is the law of *Z* and *δ*_{ij} is the Kronecker delta. A stochastic Galerkin method can then be applied to approximate the solution of an RDE [3, Ch. 6]. For applications of this theory, see for example [5, 6].
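As a concrete numerical illustration of the orthogonality relation above (a sketch in Python with NumPy, not part of the original discussion): for a standard Gaussian *Z*, the Askey-Wiener family is the probabilists' Hermite polynomials He_{i}, with normalization constants *γ*_{i} = *i*!.

```python
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' Hermite polynomials He_n

# Gauss-HermiteE quadrature: nodes/weights for the weight exp(-x^2/2).
# Dividing the weights by sqrt(2*pi) turns them into a discrete
# approximation of the standard normal law P_Z.
nodes, weights = He.hermegauss(30)
weights = weights / np.sqrt(2 * np.pi)

def inner(i, j):
    """E[He_i(Z) He_j(Z)] for Z ~ N(0, 1), computed by quadrature."""
    ci = np.zeros(i + 1); ci[i] = 1.0
    cj = np.zeros(j + 1); cj[j] = 1.0
    return np.sum(weights * He.hermeval(nodes, ci) * He.hermeval(nodes, cj))

# Orthogonality: E[He_i He_j] = gamma_i * delta_ij with gamma_i = i!.
print(inner(2, 3))   # ~ 0
print(inner(3, 3))   # ~ 3! = 6
```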

Given the random vector *Z*, the sequence {*ϕ*_{i}(*Z*)}_{i=0}^{∞} of orthogonal polynomials is taken from the Askey-Wiener scheme of hypergeometric orthogonal polynomials, according to the density function *f*_{Z} of *Z* (if *Z* is absolutely continuous) or the discrete masses of *Z* (if *Z* is discrete) [3, 4].

In the recent articles [7, 8, 9], an adaptive gPC method has been developed to approximate the solutions of RDEs. Instead of taking the orthogonal polynomials from the Askey-Wiener scheme, the authors construct them directly from the random inputs that are involved in the corresponding RDE’s formulation.

More explicitly, in [7] the RDE *F*(*t*, *y*, *ẏ*) = 0, *y*(*t*_{0}) = *y*_{0}, is considered, where *F* : ℝ^{2q+1} → ℝ^{q} and *y*(*t*) = (*y*^{1}(*t*), …, *y*^{q}(*t*))^{⊤}, ⊤ denoting transposition. The set {*ζ*_{1}, …, *ζ*_{s}} collects the independent and absolutely continuous random input parameters of the RDE.

For each 1 ≤ *i* ≤ *s*, consider the canonical basis of polynomials in *ζ*_{i} of degree at most *p*: 𝓒_{i}^{p} = {1, *ζ*_{i}, (*ζ*_{i})^{2}, …, (*ζ*_{i})^{p}}. One defines the inner product with weight function given by the density of *ζ*_{i}: 〈*g*(*ζ*_{i}), *h*(*ζ*_{i})〉_{ζi} = ∫_{ℝ} *g*(*ζ*_{i})*h*(*ζ*_{i})*f*_{ζi}(*ζ*_{i}) d*ζ*_{i}. A Gram-Schmidt orthonormalization procedure then yields a set of polynomials in *ζ*_{i} orthonormal with respect to 〈⋅, ⋅〉_{ζi}: *Ξ*_{i}^{p} = {*ϕ*_{0}^{i}(*ζ*_{i}), …, *ϕ*_{p}^{i}(*ζ*_{i})}. The authors build a set of multivariate polynomials in *ζ* = (*ζ*_{1}, …, *ζ*_{s})^{⊤} of degree at most *p*, orthonormal with respect to the inner product 〈*g*(*ζ*), *h*(*ζ*)〉_{ζ} = ∫_{ℝ^{s}} *g*(*ζ*)*h*(*ζ*)*f*_{ζ}(*ζ*) d*ζ*. To do so, they form the simple tensor products *ϕ*_{j}(*ζ*) = *ϕ*_{p1}^{1}(*ζ*_{1}) ⋯ *ϕ*_{ps}^{s}(*ζ*_{s}), 1 ≤ *j* ≤ *P*, where *j* is associated bijectively with the multi-index (*p*_{1}, …, *p*_{s}) in such a way that 1 corresponds to (0, …, 0) (for example, via a graded lexicographic ordering [3, p. 66]) and *P* = (*p* + *s*)!/(*p*! *s*!). By the independence of *ζ*_{1}, …, *ζ*_{s}, the resulting set *Ξ* = {*ϕ*_{j}(*ζ*)}_{j=1}^{P} is orthonormal with respect to 〈⋅, ⋅〉_{ζ}.
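The per-variable Gram-Schmidt construction can be sketched numerically as follows (a toy setting assumed for illustration: *ζ*_{i} uniform on (0, 1), in which case the procedure reproduces the normalized shifted Legendre polynomials).

```python
import numpy as np

# Assumed toy input: zeta uniform on (0, 1), so f_zeta = 1 on (0, 1).
# The inner product <g, h>_zeta = int_0^1 g h f_zeta dz is evaluated by
# Gauss-Legendre quadrature mapped from [-1, 1] to [0, 1].
x, w = np.polynomial.legendre.leggauss(40)
nodes = 0.5 * (x + 1.0)          # map [-1, 1] -> [0, 1]
weights = 0.5 * w                # Jacobian of the map; f_zeta = 1 absorbed

def inner(c1, c2):
    """Inner product of two polynomials given by ascending coefficient arrays."""
    return np.sum(weights * np.polyval(c1[::-1], nodes) * np.polyval(c2[::-1], nodes))

def gram_schmidt(p):
    """Orthonormalize the canonical basis {1, z, ..., z^p} w.r.t. <,>_zeta."""
    basis = []
    for k in range(p + 1):
        c = np.zeros(p + 1); c[k] = 1.0          # monomial z^k
        for phi in basis:                        # subtract projections
            c = c - inner(c, phi) * phi
        basis.append(c / np.sqrt(inner(c, c)))   # normalize
    return basis

Xi = gram_schmidt(3)
# Xi[1] should match the normalized shifted Legendre polynomial sqrt(3)*(2z - 1).
print(np.round(Xi[1], 6))
```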

Once the basis is constructed, one seeks an approximate solution *y*(*t*) ≈ ∑_{j=1}^{P} *y*_{j}(*t*)*ϕ*_{j}(*ζ*). Then *F*(*t*, ∑_{j=1}^{P} *y*_{j}(*t*)*ϕ*_{j}(*ζ*), ∑_{j=1}^{P} *ẏ*_{j}(*t*)*ϕ*_{j}(*ζ*)) = 0. To obtain the deterministic coefficients *y*_{j}(*t*), one imposes the Galerkin conditions 〈*F*(*t*, ∑_{j=1}^{P} *y*_{j}(*t*)*ϕ*_{j}(*ζ*), ∑_{j=1}^{P} *ẏ*_{j}(*t*)*ϕ*_{j}(*ζ*)), *ϕ*_{k}(*ζ*)〉_{ζ} = 0, *k* = 1, …, *P*. In this manner, one arrives at a deterministic system of *P* differential equations, which may be solved by standard numerical techniques. Once *y*_{1}(*t*), …, *y*_{P}(*t*) have been computed, the expectation of the actual solution *y*(*t*) is approximated by *y*_{1}(*t*) and, since *ϕ*_{1} ≡ 1, the covariance matrix is approximated by ∑_{i=2}^{P} *y*_{i}(*t*)*y*_{i}(*t*)^{⊤}.
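The whole pipeline (orthonormal basis, Galerkin projection, statistics read off from the coefficients) can be sketched on a hypothetical scalar example, not taken from [7]: *ẏ* = −*ζy*, *y*(0) = 1 with *ζ* uniform on (0, 1), whose exact expectation at time *t* is (1 − e^{−t})/*t*.

```python
import numpy as np

# Sketch of stochastic Galerkin for the assumed toy RDE
#   dy/dt = -zeta * y,  y(0) = 1,  zeta ~ Uniform(0, 1).
x, w = np.polynomial.legendre.leggauss(40)
nodes, weights = 0.5 * (x + 1.0), 0.5 * w       # quadrature on (0, 1)

p = 5                                           # maximum polynomial degree
# Orthonormal basis values at the nodes, via Gram-Schmidt on monomials.
V = np.vander(nodes, p + 1, increasing=True).T  # V[k] = nodes**k
Phi = []
for row in V:
    for phi in Phi:
        row = row - np.sum(weights * row * phi) * phi
    Phi.append(row / np.sqrt(np.sum(weights * row * row)))
Phi = np.array(Phi)                             # Phi[k] = phi_k(nodes)

# Galerkin projection of dy/dt = -zeta*y yields the linear system
#   dy_k/dt = -sum_j M[k, j] y_j,  M[k, j] = <zeta*phi_j, phi_k>_zeta.
M = Phi @ np.diag(weights * nodes) @ Phi.T

# M is symmetric, so the system is solved exactly by eigendecomposition.
lam, Q = np.linalg.eigh(M)
y0 = np.zeros(p + 1); y0[0] = 1.0               # y(0) = 1 = 1 * phi_0
t = 1.0
y = Q @ (np.exp(-lam * t) * (Q.T @ y0))

mean = y[0]                 # E[y(t)]  ~ coefficient of the constant polynomial
var = np.sum(y[1:] ** 2)    # Var[y(t)] ~ sum of the remaining squared coefficients
print(mean, var)            # mean should be close to (1 - e^{-1}) ~ 0.6321
```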

In [8], the authors use the Random Variable Transformation technique [10, Th. 1] for the case in which some random input parameters of the RDE arise as mappings of absolutely continuous random variables whose probability density function is known.

In [9], the authors focus on the case in which the random inputs *ζ*_{1}, …, *ζ*_{s} are not independent. They consider the canonical bases 𝓒_{i}^{p} = {1, *ζ*_{i}, (*ζ*_{i})^{2}, …, (*ζ*_{i})^{p}}, 1 ≤ *i* ≤ *s*, and construct a set of multivariate polynomials in *ζ* via a simple tensor product: *ϕ*_{j}(*ζ*) = (*ζ*_{1})^{p1} ⋯ (*ζ*_{s})^{ps}, where 1 ≤ *j* ≤ *P* corresponds to the multi-index (*p*_{1}, …, *p*_{s}) and *P* = (*p* + *s*)!/(*p*! *s*!). Notice that this new set {*ϕ*_{j}(*ζ*)}_{j=1}^{P} is not orthonormal with respect to 〈⋅, ⋅〉_{ζ}. However, one proceeds with the RDE as in [7] and, in practice, obtains good approximations of the expectation and covariance of *y*(*t*).
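The bijection *j* ↔ (*p*_{1}, …, *p*_{s}) and the count *P* = (*p* + *s*)!/(*p*! *s*!) can be checked in a few lines (a sketch; the ordering below is one admissible graded ordering, not necessarily the exact one used in [3, 9]):

```python
from itertools import product
from math import comb

def graded_multi_indices(s, p):
    """All multi-indices (p_1, ..., p_s) with p_1 + ... + p_s <= p,
    sorted by total degree first (graded), then lexicographically."""
    idx = [m for m in product(range(p + 1), repeat=s) if sum(m) <= p]
    idx.sort(key=lambda m: (sum(m), m))
    return idx

s, p = 3, 2
idx = graded_multi_indices(s, p)
print(len(idx), comb(p + s, s))   # both equal P = (p+s)!/(p! s!) = 10
print(idx[0])                     # (0, 0, 0), associated with j = 1
```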

Ample numerical evidence indicates that the gPC-based methods described in [3, 4, 7, 8, 9] converge in the mean square sense at a spectral rate. Theoretical results that justify this assertion are presented in [3, pp. 33–35, p. 73], [11, Th. 2.2], [12, 13, 14, 15].

In this paper we deal with an important class of differential equations with uncertainty frequently encountered in Mathematical Physics, namely general random non-autonomous second-order linear differential equations:

$$\left\{\begin{array}{l}\ddot{X}(t)+A(t)\dot{X}(t)+B(t)X(t)=C(t),\quad t\in\mathbb{R},\\ X(t_{0})=Y_{0},\\ \dot{X}(t_{0})=Y_{1}.\end{array}\right.\tag{1}$$

Our goal is to obtain approximations of the solution stochastic process *X*(*t*), as well as of its main statistical features, by taking advantage of the adaptive gPC techniques [7, 9]. Here, *A*(*t*), *B*(*t*) and *C*(*t*) are stochastic processes and *Y*_{0} and *Y*_{1} are random variables in an underlying complete probability space (*Ω*, 𝓕, ℙ). The term *X*(*t*) is the solution stochastic process of the random initial value problem (IVP) (1) in some probabilistic sense. Conditions for the existence and uniqueness of the solution are detailed in the following section.
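Independently of the gPC machinery, the statistics of *X*(*t*) can always be estimated by crude Monte Carlo sampling, which serves as a reference in the examples of Section 4. A minimal sketch for an assumed toy instance of (1) (constant coefficients *A* ≡ 0, *B* a uniform random variable, *C* ≡ 0 and *t*_{0} = 0, so each realization has the closed-form solution *X*(*t*) = *Y*_{0} cos(√*B* *t*) + *Y*_{1} sin(√*B* *t*)/√*B*):

```python
import numpy as np

# Assumed toy instance of the random IVP (1): A = C = 0, B ~ Uniform(1, 2),
# with random initial conditions Y0 and Y1; each realization is solved exactly.
rng = np.random.default_rng(0)
n = 100_000
B = rng.uniform(1.0, 2.0, n)       # random coefficient B(t) = B (constant)
Y0 = rng.normal(1.0, 0.1, n)       # random initial position
Y1 = rng.normal(0.0, 0.1, n)       # random initial velocity

t = 1.0
sqB = np.sqrt(B)
# Closed-form solution per sample: X(t) = Y0 cos(sqrt(B) t) + Y1 sin(sqrt(B) t)/sqrt(B)
X = Y0 * np.cos(sqB * t) + Y1 * np.sin(sqB * t) / sqB

print(X.mean(), X.var())           # Monte Carlo estimates of E[X(t)], Var[X(t)]
```

Monte Carlo converges only at rate O(n^{−1/2}), which is precisely the motivation for the spectrally convergent gPC approach developed below.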

Particular cases of (1) (with no random forcing term *C*(*t*)) have been treated in the extant literature by using the random Frobenius method. Specifically, the Airy, Hermite, Legendre, Laguerre and Bessel differential equations have been randomized and rigorously studied in [16, 17, 18, 19, 20, 21], respectively. These studies include the computation of the expectation and the variance of the solution stochastic process.

In our recent contributions [22, 23], we studied the general problem (1) when *A*(*t*), *B*(*t*) and *C*(*t*) are analytic stochastic processes in the mean square sense. As proved there, the random power series solution converges in the mean square sense when *A*(*t*) and *B*(*t*) are analytic processes in the L^{∞}(*Ω*) sense, *C*(*t*) is a mean square convergent random power series, and the initial conditions *Y*_{0} and *Y*_{1} belong to L^{2}(*Ω*). Under these assumptions, the expectation and variance of the solution process *X*(*t*) can be rigorously approximated.

In [24], the authors study RDEs via homotopy analysis and provide a complete set of illustrative examples dealing with random second-order linear differential equations.

In this paper we go one step further: we perform a computational analysis based upon adaptive gPC, showing its capability to deal with the general random IVP (1), which comprises the Airy, Hermite, Legendre, Laguerre and Bessel differential equations, as well as any other formulation of (1) based on analytic data processes, as particular cases. We thereby address the future line of research brought up in [23, Section 5].

The paper is organized as follows. Section 2 describes the application of adaptive gPC to solve the random IVP (1) and the computation of the expectation and covariance of *X*(*t*). The study is split into two cases depending on the probabilistic dependence of the random inputs. Section 3 presents the algorithms corresponding to the theory developed in Section 2. Section 4 is devoted to particular examples of (1), in which adaptive gPC, the Frobenius method and Monte Carlo simulation are carried out to approximate the expectation, variance and covariance of the solution stochastic process. It is shown that adaptive gPC yields the same results as the Frobenius method already for small basis orders *p* and, moreover, may succeed in cases where the Frobenius method is not applicable. Finally, conclusions are drawn in Section 5.
