# Fast generation of isotropic Gaussian random fields on the sphere

Peter E. Creasey and Annika Lang

# Abstract

The efficient simulation of isotropic Gaussian random fields on the unit sphere is a task encountered frequently in numerical applications. A fast algorithm based on Markov properties and fast Fourier transforms in 1d is presented that generates samples on an $n \times n$ grid in $O(n^2 \log n)$. Furthermore, an efficient method to set up the necessary conditional covariance matrices is derived, and simulations demonstrate the performance of the algorithm. An open source implementation of the code has been made available at https://github.com/pec27/smerfs.

## 1 Introduction

Methods for modelling spatially distributed data with isotropic random fields occur in many areas, including astrophysics [2, 7], geophysics [19], optics [11], image processing [4] and computer graphics [12]. The building block for almost all of these is the Gaussian random field in $\mathbb{R}^d$, although many applications require realisations in other geometries, the most important of which may be the unit sphere $\mathbb{S}^2$, especially for geophysics and astrophysics, which often use spherical data.

The computational complexity of realising an $n \times n$ lattice of points of a Gaussian random field in $\mathbb{R}^d$ depends considerably upon the structure of the covariance function. In the worst (non-isotropic) case, we must calculate the covariance between each pair of points and perform a Cholesky decomposition of the resulting matrix, at a cost of $O(n^6)$ operations [19]. Fortunately, the majority of Gaussian random fields we are interested in are isotropic (i.e., the covariance depends only on the geodesic distance between points), allowing us to perform a spectral decomposition into harmonic (Fourier) components. In the periodic case on $\mathbb{R}^2$ (i.e., a torus) with a regular lattice, this provides a dramatic computational improvement due to the existence of the fast Fourier transform (FFT) algorithm, which allows the realisation to be computed in $O(n^2 \log n)$ operations.
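The torus case referred to above can be sketched in a few lines: on a regular periodic lattice, filtering white noise in Fourier space realises a stationary covariance at FFT cost. A minimal NumPy illustration (the Matérn-like spectral density and all names are ours, not from the paper):

```python
import numpy as np

def sample_periodic_grf(n, spectral_density, rng=None):
    """Sample a real, stationary Gaussian random field on an n x n
    periodic grid by filtering white noise in Fourier space.

    spectral_density(kx, ky) must be non-negative on the integer
    wavenumbers (a modelling choice, not prescribed by the paper).
    """
    rng = np.random.default_rng(rng)
    # Integer wavenumbers of the n x n discrete Fourier transform.
    kx, ky = np.meshgrid(np.fft.fftfreq(n, d=1.0 / n),
                         np.fft.fftfreq(n, d=1.0 / n), indexing="ij")
    amplitude = np.sqrt(spectral_density(kx, ky))
    # White noise has a flat spectrum, so multiplying its transform by
    # the amplitude imposes the requested covariance structure.
    noise = rng.standard_normal((n, n))
    return np.fft.ifft2(amplitude * np.fft.fft2(noise)).real

# Example with an arbitrary smooth (Matern-like) spectrum.
field = sample_periodic_grf(64, lambda kx, ky: 1.0 / (1.0 + kx**2 + ky**2) ** 2,
                            rng=0)
```

The total cost is dominated by the two FFTs, i.e., $O(n^2 \log n)$.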

On $\mathbb{S}^2$, similar results for isotropic Gaussian random fields also apply, i.e., we can perform a spectral decomposition into the spherical harmonic functions. Unfortunately, the corresponding spherical transform algorithm has fewer symmetries to exploit, and for a general $n \times n$ grid on the sphere, $O(n^4)$ steps are required for an isotropic covariance function. If one chooses an iso-latitude system such as the equidistant cylindrical projection (i.e., a regular grid in $\theta$ and $\phi$, the inclination and azimuthal angles) or the HEALPix discretisation ([6], as used for the cosmic microwave background data of WMAP), one can exploit the Fourier structure in the azimuthal angle to reduce this to $O(n^3 \log n)$ (see, e.g., [5]).

As one moves to higher and higher resolutions, however, it becomes ever more attractive to find algorithms with superior scaling. One such possibility is to consider Gaussian Markov random fields (hereafter GMRFs) (e.g., [23, 10, 16, 21, 13, 22]), which extend the Markov property to higher dimensions and orders. The essential property of these fields is that the probability measure for a conditional realisation depends only upon the boundary of the constraining set (and its derivatives) rather than the full set. Such a property allows one to construct a realisation recursively, i.e., to iteratively expand the realisation around its boundary, a process sometimes described as a telescoping representation [22].

The requirement for an isotropic GMRF to obey this property is that the covariance function is the Green’s function of an operator that is a polynomial in the Laplacian (with appropriate coefficients to ensure strong ellipticity; see, e.g., [13]). This turns out to describe rather a large class of covariance functions [21].

The contribution of this paper is an algorithm that generates samples on an $n \times n$ grid of the sphere in $O(n^2 \log n)$ operations once the conditional covariance matrices are computed. This is achieved by decomposing the field into 1d Gaussian Markov random fields. These can be sampled together with their derivatives point by point and then transformed to an isotropic Gaussian random field on the unit sphere by FFT.

The paper is structured as follows: In Section 2, we derive the decomposition of an isotropic Gaussian random field into 1d GMRFs via Fourier transforms, and compute the conditional covariance matrices. A computationally efficient method to set up the conditional covariance matrices is presented in Section 3. We collect the results of the previous sections in Section 4, where we present the algorithms explicitly. Finally, the performance and convergence of the introduced algorithm are demonstrated in a simulation example in Section 5. Our implementation is available online at https://github.com/pec27/smerfs.

## 2 Decomposition of isotropic Gaussian random fields into 1d Gaussian Markov random fields

Let us assume that $T$ is a zero mean 2-weakly isotropic Gaussian random field (GRF for short) on the unit sphere $\mathbb{S}^2$ in $\mathbb{R}^3$, i.e., on

$$\mathbb{S}^2 := \{x \in \mathbb{R}^3 : \|x\| = 1\},$$

where $\|\cdot\|$ denotes the Euclidean norm. Then $T$ admits an expansion with respect to the surface spherical harmonic functions $\mathcal{Y} := (Y_{\ell m},\, \ell \ge 0,\, m = -\ell,\dots,\ell)$, regarded as mappings $Y_{\ell m} : [0,\pi] \times [0,2\pi) \to \mathbb{C}$, which are given by

$$Y_{\ell m}(\theta,\phi) := \sqrt{\frac{2\ell+1}{4\pi}\,\frac{(\ell-m)!}{(\ell+m)!}}\; P_{\ell m}(\cos\theta)\, e^{im\phi}$$

for $\ell \ge 0$, $m = 0,\dots,\ell$ and $(\theta,\phi) \in [0,\pi] \times [0,2\pi)$, and by

$$Y_{\ell m} := (-1)^m\, \overline{Y_{\ell,-m}}$$

for $\ell \ge 0$ and $m = -\ell,\dots,-1$. Here $(P_{\ell m},\, \ell \ge 0,\, m = 0,\dots,\ell)$ denote the associated Legendre polynomials, which are given by

$$P_{\ell m}(\mu) := (-1)^m (1-\mu^2)^{m/2}\, \frac{d^m}{d\mu^m} P_\ell(\mu)$$

for $\ell \ge 0$, $m = 0,\dots,\ell$ and $\mu \in [-1,1]$, where $(P_\ell,\, \ell \ge 0)$ are the Legendre polynomials given by Rodrigues' formula (see, e.g., [20])

$$P_\ell(\mu) := \frac{1}{2^\ell\, \ell!}\, \frac{d^\ell}{d\mu^\ell} (\mu^2-1)^\ell$$

for all $\ell \ge 0$ and $\mu \in [-1,1]$. Associated Legendre polynomials with negative $m = -\ell,\dots,-1$ satisfy

$$P_{\ell m} = (-1)^{|m|}\, \frac{(\ell-|m|)!}{(\ell+|m|)!}\, P_{\ell,-m}.$$

This expansion of $T$ is given by (see, e.g., [8, Corollary 2.5])

$$T = \sum_{\ell=0}^{\infty} \sum_{m=-\ell}^{\ell} a_{\ell m}\, Y_{\ell m},$$

where $\mathbb{A} := (a_{\ell m},\, \ell \ge 0,\, m = -\ell,\dots,\ell)$ is a sequence of complex-valued, centred, Gaussian random variables with the following properties:

1. (i) $\mathbb{A}^+ := (a_{\ell m},\, \ell \ge 0,\, m = 0,\dots,\ell)$ is a sequence of independent, complex-valued Gaussian random variables.

2. (ii) The elements of $\mathbb{A}^+$ with $m > 0$ have independent real and imaginary parts $\operatorname{Re} a_{\ell m}$ and $\operatorname{Im} a_{\ell m}$, each $\mathcal{N}(0, C_\ell/2)$ distributed.

3. (iii) The elements of $\mathbb{A}^+$ with $m = 0$ are real-valued and $\mathcal{N}(0, C_\ell)$ distributed.

4. (iv) The elements of $\mathbb{A}$ with $m < 0$ are deduced from those of $\mathbb{A}^+$ by the formula
$$a_{\ell m} = (-1)^m\, \overline{a_{\ell,-m}}.$$

Here $(C_\ell,\, \ell \ge 0)$ is called the angular power spectrum.
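Properties (i)–(iv) translate directly into a sampling routine for the coefficients. The following NumPy sketch truncates at a maximum degree $L$ (the function name and the truncation are ours, for illustration only):

```python
import numpy as np

def sample_alm(C, rng=None):
    """Draw coefficients a_{lm} for l = 0..L satisfying (i)-(iv).

    C is the angular power spectrum (C_0, ..., C_L).  Returns a complex
    array a[l, m] for 0 <= m <= l; coefficients with negative m follow
    from property (iv), a_{l,-m} = (-1)^m conj(a_{lm}), and need not be
    stored.
    """
    rng = np.random.default_rng(rng)
    L = len(C) - 1
    a = np.zeros((L + 1, L + 1), dtype=complex)
    for ell in range(L + 1):
        # m = 0: real-valued with variance C_l  (property (iii)).
        a[ell, 0] = rng.standard_normal() * np.sqrt(C[ell])
        # m > 0: independent real and imaginary parts, each N(0, C_l/2)
        # (property (ii)); independence across (l, m) is property (i).
        a[ell, 1:ell + 1] = (rng.standard_normal(ell)
                             + 1j * rng.standard_normal(ell)) * np.sqrt(C[ell] / 2)
    return a
```

For instance, $\mathbb{E}|a_{\ell m}|^2 = C_\ell$ for $m > 0$, which is easy to confirm by averaging over many draws.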

In what follows, let us consider the case that there exist $\kappa_i \in \mathbb{C}$ such that

$$C_\ell := \prod_{i=1}^{M} \bigl(\kappa_i + \ell(\ell+1)\bigr)^{-1} \tag{2.1}$$

for all $\ell \ge 0$, i.e., $C_\ell^{-1}$ is an eigenvalue of

$$\mathcal{L} := \prod_{i=1}^{M} (\kappa_i - \Delta_{\mathbb{S}^2})$$

with corresponding eigenfunctions $(Y_{\ell m},\, m = -\ell,\dots,\ell)$, where $\Delta_{\mathbb{S}^2}$ denotes the spherical Laplacian (also known as the Laplace–Beltrami operator). In the spirit of [9], $T$ is the solution of the stochastic partial differential equation

$$\mathcal{L}^{1/2} T = W,$$

where $W$ is white noise, which admits the formal Karhunen–Loève expansion

$$W = \sum_{\ell=0}^{\infty} \sum_{m=-\ell}^{\ell} \eta_{\ell m}\, Y_{\ell m}.$$

Here $(\eta_{\ell m},\, \ell \ge 0,\, m = 1,\dots,\ell)$ is a set of independent, complex-valued standard normally distributed random variables, independent of the real-valued standard normally distributed random variables $(\eta_{\ell 0},\, \ell \ge 0)$. For $m < 0$, the same relations as in condition (iv) hold. For fixed $m$, define the random field

$$g_m : \Omega \times [-1,1] \to \mathbb{C}$$

by

$$g_m(z) := \frac{1}{2\pi} \int_0^{2\pi} T(\theta,\phi)\, e^{-im\phi}\, d\phi$$

for $z := \cos\theta \in [-1,1]$. We observe that

$$T = \sum_{\ell=0}^{\infty} \sum_{m=-\ell}^{\ell} a_{\ell m}\, Y_{\ell m} = \sum_{m=-\infty}^{\infty} \sum_{\ell \ge |m|} a_{\ell m}\, Y_{\ell m},$$

and obtain for any $m_0$,

$$g_{m_0}(z) = \sum_{m=-\infty}^{\infty} \sum_{\ell \ge |m|} a_{\ell m}\, L_{\ell m}(z)\, \frac{1}{2\pi} \int_0^{2\pi} e^{im\phi} e^{-im_0\phi}\, d\phi = \sum_{\ell \ge |m_0|} a_{\ell m_0}\, L_{\ell m_0}(z),$$

where we set

$$L_{\ell m}(z) := \sqrt{\frac{2\ell+1}{4\pi}\,\frac{(\ell-m)!}{(\ell+m)!}}\; P_{\ell m}(z).$$

Moreover, we obtain for $m_0 < 0$ the relation

$$\begin{aligned} g_{m_0} &= \sum_{\ell \ge |m_0|} a_{\ell m_0}\, L_{\ell m_0} \\ &= \sum_{\ell \ge |m_0|} (-1)^{m_0}\, \overline{a_{\ell,-m_0}}\, \sqrt{\frac{2\ell+1}{4\pi}\,\frac{(\ell-m_0)!}{(\ell+m_0)!}}\, (-1)^{|m_0|}\, \frac{(\ell+m_0)!}{(\ell-m_0)!}\, P_{\ell,-m_0} \\ &= \sum_{\ell \ge |-m_0|} \overline{a_{\ell,-m_0}}\, L_{\ell,-m_0} = \overline{g_{-m_0}}. \end{aligned}$$

Since $g_{m_0}$ is generated by a sum of centred Gaussian random variables, it is clear that $g_{m_0}$ is centred Gaussian. In the following lemma, we show that the fields with non-negative index are pairwise independent and compute their covariance.

## Lemma 2.1.

The sequence $(g_m,\, m \in \mathbb{N}_0)$ consists of pairwise independent centred Gaussian random fields on $[-1,1]$ with covariance given by

$$C_{g_m}(z_1,z_2) := \mathbb{E}\bigl(\overline{g_m(z_1)}\, g_m(z_2)\bigr) = \sum_{\ell \ge |m|} C_\ell\, L_{\ell m}(z_1)\, L_{\ell m}(z_2),$$

while the fields $(g_m,\, m < 0)$ with negative index are determined by the relation

$$g_m = \overline{g_{-m}}.$$

## Proof.

We have already seen that $g_m$ is centred Gaussian and that $g_m = \overline{g_{-m}}$. It remains to show independence as well as to compute the covariance. Therefore, let us fix $m_0, m_1 \ge 0$ and $z_1, z_2 \in [-1,1]$. By the previous computations it holds that

$$\begin{aligned} \mathbb{E}\bigl(\overline{g_{m_0}(z_1)}\, g_{m_1}(z_2)\bigr) &= \sum_{\ell_0 \ge |m_0|} \sum_{\ell_1 \ge |m_1|} \mathbb{E}\bigl(\overline{a_{\ell_0 m_0}}\, a_{\ell_1 m_1}\bigr)\, L_{\ell_0 m_0}(z_1)\, L_{\ell_1 m_1}(z_2) \\ &= \sum_{\ell \ge \max\{|m_0|,|m_1|\}} \mathbb{E}\bigl(\overline{a_{\ell m_0}}\, a_{\ell m_1}\bigr)\, L_{\ell m_0}(z_1)\, L_{\ell m_1}(z_2) \\ &= \delta_{m_0 m_1} \sum_{\ell \ge |m_0|} C_\ell\, L_{\ell m_0}(z_1)\, L_{\ell m_1}(z_2), \end{aligned}$$

using the properties of the random variables in $\mathbb{A}$, where $\delta_{m_0 m_1} = 1$ for $m_0 = m_1$ and $0$ else. Similarly, we obtain

$$\mathbb{E}\bigl(g_{m_0}(z_1)\, g_{m_1}(z_2)\bigr) = \delta_{m_0 (-m_1)} \sum_{\ell \ge |m_0|} C_\ell\, L_{\ell m_0}(z_1)\, L_{\ell m_1}(z_2),$$

which finishes the proof since uncorrelated Gaussian random variables are independent. ∎
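For $m = 0$ the covariance of Lemma 2.1 involves only Legendre polynomials, so it can be evaluated by a direct truncated sum. An illustrative sketch (names and truncation are ours; Section 3 derives the efficient alternative used in practice):

```python
import numpy as np

def cov_g0(z1, z2, C):
    """Truncated covariance of the m = 0 field from Lemma 2.1,
    C_{g_0}(z1, z2) = sum_l C_l L_{l0}(z1) L_{l0}(z2), using
    L_{l0}(z) = sqrt((2l+1)/(4 pi)) P_l(z).

    C = (C_0, ..., C_L) truncates the series at degree L.
    """
    total = 0.0
    for ell, C_ell in enumerate(C):
        coeff = np.zeros(ell + 1)
        coeff[ell] = 1.0                     # selects P_ell in legval
        P1 = np.polynomial.legendre.legval(z1, coeff)
        P2 = np.polynomial.legendre.legval(z2, coeff)
        total += C_ell * (2 * ell + 1) / (4 * np.pi) * P1 * P2
    return total
```

With only $C_0 = 1$ the covariance is the constant $1/(4\pi)$, and the symmetry $C_{g_0}(z_1,z_2) = C_{g_0}(z_2,z_1)$ is immediate from the sum.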

We define the operators

$$\mathcal{L}_m := \frac{d}{dz}(1-z^2)\frac{d}{dz} - \frac{m^2}{1-z^2}$$

and

$$\mathcal{D}_m := \prod_{i=1}^{M} (\kappa_i - \mathcal{L}_m).$$

Let us recall the associated Legendre differential equation for $\ell \ge 0$ and $m = -\ell,\dots,\ell$ (see, e.g., [1]):

$$\frac{d}{dz}\Bigl((1-z^2)\frac{d}{dz}u(z)\Bigr) + \Bigl(\ell(\ell+1) - \frac{m^2}{1-z^2}\Bigr)u(z) = 0,$$

which is solved by the associated Legendre polynomials. Therefore, we obtain that

$$\mathcal{D}_m L_{\ell m} = \prod_{i=1}^{M} \bigl(\kappa_i + \ell(\ell+1)\bigr)\, L_{\ell m} = C_\ell^{-1} L_{\ell m},$$

i.e., $g_m$ solves the stochastic differential equation

$$\mathcal{D}_m^{1/2} g_m = W_m,$$

where $W_m$ is white noise on $[-1,1]$ and $\mathcal{D}_m^{1/2}$ is a well-defined operator of order $M$ since $\mathcal{D}_m$ is of order $2M$ with positive spectrum.

For $\tau \in (-1,1)$, let us first consider the open sets $A_\tau = (\tau,1)$ with complements $A_\tau^c = (-1,\tau]$. Since white noise satisfies the strong Markov property, we obtain for all $z \in A_\tau$ that

$$P\bigl(\mathcal{D}_m^{1/2} g_m(z) \mid \mathcal{D}_m^{1/2} g_m(s),\, s \in A_\tau^c\bigr) = P\bigl(\mathcal{D}_m^{1/2} g_m(z) \mid \mathcal{D}_m^{1/2} g_m(\tau)\bigr),$$

and $g_m$ is generated by a stochastic differential equation as in [18, (2.1.21)]. The field $g_m$ itself does not satisfy this Markov property, but it follows from [18] that the $M$-dimensional vector of derivatives up to order $M-1$,

$$g_m^{(0:M-1)} = \bigl(g_m^{(0)}, g_m^{(1)}, \dots, g_m^{(M-1)}\bigr)^T,$$

does. In other words, we obtain for all $z \in A_\tau$,

$$P\bigl(g_m^{(0:M-1)}(z) \mid g_m^{(0:M-1)}(s),\, s \in A_\tau^c\bigr) = P\bigl(g_m^{(0:M-1)}(z) \mid g_m^{(0:M-1)}(\tau)\bigr).$$

Therefore, it is sufficient to know $g_m^{(p)}(\tau)$ for $p = 0,\dots,M-1$ when generating a sample value for any $z > \tau$ where the random field is already constructed for all $z < \tau$. Since we are considering Gaussian fields, which have Gaussian derivatives, it is sufficient to know the mean and the covariance between any two points and derivatives up to order $M-1$.

Since all derivatives are well-defined, we obtain the covariance functions, which we will refer to as cross covariances,

$$C_{g_m,p,q}(z_1,z_2) := \mathbb{E}\bigl(\overline{g_m^{(p)}(z_1)}\, g_m^{(q)}(z_2)\bigr) = \frac{\partial^{p+q}}{\partial z_1^p\, \partial z_2^q}\, C_{g_m}(z_1,z_2).$$

These derivatives can be calculated analytically with the identities of the associated Legendre polynomials. We will discuss a computationally efficient method in Section 3. Let us set

$$J_{pq}^m(z_1,z_2) := C_{g_m,p,q}(z_1,z_2)$$

such that $J^m(z_1,z_2)$ is the matrix consisting of all covariances between the different derivatives at $z_1$ and $z_2$. The conditional distribution of $g_m^{(0:M-1)}(z_2)$ given $g_m^{(0:M-1)}(z_1)$ is known to be Gaussian with conditional mean

$$\mathbb{E}\bigl(g_m^{(0:M-1)}(z_2) \mid g_m^{(0:M-1)}(z_1)\bigr) = J^m(z_1,z_2)\, (J^m)^{-1}(z_1,z_1)\, g_m^{(0:M-1)}(z_1) =: A^m(z_1,z_2)\, g_m^{(0:M-1)}(z_1) \tag{2.2}$$

and conditional covariance matrix

$$\operatorname{Cov}\bigl(g_m^{(0:M-1)}(z_2) \mid g_m^{(0:M-1)}(z_1)\bigr) = J^m(z_2,z_2) - J^m(z_1,z_2)\, (J^m)^{-1}(z_1,z_1)\, (J^m)^T(z_1,z_2) = J^m(z_2,z_2) - J^m(z_1,z_2)\, (A^m)^T(z_1,z_2) =: B^m (B^m)^T(z_1,z_2). \tag{2.3}$$

By using the values $g_m^{(0:M-1)}(z_1)$ at $z_1$ for non-negative $m$, it is straightforward to sample the vector $g_m^{(0:M-1)}(z_2)$ from the conditional distribution by

$$g_m^{(p)}(z_2) = \sum_{q=0}^{M-1} A_{pq}^m(z_1,z_2)\, g_m^{(q)}(z_1) + \sum_{q=0}^{M-1} B_{pq}^m(z_1,z_2)\, w_q^m, \tag{2.4}$$

where the random variables $(w_q^m,\, q = 0,\dots,M-1)$ are independent complex-valued standard normally distributed for $m > 0$, and independent standard normally distributed for $m = 0$. This construction is sometimes referred to as a state-space model [3].

Note that we will also need an initial sampling, and for this we choose the equator ($z = 0$), i.e.,

$$g_m^{(p)}(0) = \sum_{q=0}^{M-1} (B_{\mathrm{eq}}^m)_{pq}\, w_q^m, \tag{2.5}$$

where

$$B_{\mathrm{eq}}^m (B_{\mathrm{eq}}^m)^T := J^m(0,0).$$

## 3 Efficient covariance computation

We have shown in Lemma 2.1 that

$$C_{g_m}(z_1,z_2) = \mathbb{E}\bigl(\overline{g_m(z_1)}\, g_m(z_2)\bigr) = \sum_{\ell \ge |m|} C_\ell\, L_{\ell m}(z_1)\, L_{\ell m}(z_2), \tag{3.1}$$

i.e., we obtained a representation in terms of the associated Legendre polynomials. While this allows us to compute the covariance directly, from an algorithmic perspective the computation is expensive for large $m$, and the recurrence relations are unsuitable for numerical computation with limited precision. We therefore derive an alternative formulation in terms of Legendre functions that is numerically superior even at moderate values of $m$.

Let us assume that the $\kappa_i$ in equation (2.1) are distinct and have the representation $\kappa_i = -\lambda_i(\lambda_i+1)$ with $\lambda_i \notin \mathbb{Z}$ to avoid singularities. More specifically, for $\lambda_i \in \mathbb{Z}$, either $\ell = \lambda_i$ or $\ell = -\lambda_i - 1$ is in $\mathbb{N}_0$, and therefore $C_\ell$ is not well-defined. A typical example is $M = 2$ and $C_\ell = (a^2 + \ell^2(\ell+1)^2)^{-1}$ with $a \in \mathbb{R}$, which satisfies the desired partial fraction decomposition with $\kappa_1 = ia$ and $\kappa_2 = -ia$. Then the coefficients $C_\ell$ in equation (2.1) can be decomposed (via partial fractions) into

$$C_\ell = \sum_{i=1}^{M} \frac{b_i}{\ell(\ell+1) - \lambda_i(\lambda_i+1)} \tag{3.2}$$

for some finite constants $b_i$, provided the $\kappa_i$ are distinct.
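Since the $\kappa_i$ are distinct, the weights $b_i$ follow from the standard residue formula $b_i = \prod_{j \neq i} (\mu_i - \mu_j)^{-1}$ with $\mu_i := -\kappa_i = \lambda_i(\lambda_i+1)$. A small sketch (function name ours):

```python
import numpy as np

def partial_fraction_weights(kappa):
    """Coefficients b_i of (3.2) for C_l = prod_i (kappa_i + l(l+1))^{-1},
    assuming the kappa_i are distinct.  With mu_i = -kappa_i the residue
    formula gives b_i = prod_{j != i} (mu_i - mu_j)^{-1}.
    """
    mu = -np.asarray(kappa, dtype=complex)
    b = np.empty_like(mu)
    for i in range(len(mu)):
        others = np.delete(mu, i)
        b[i] = 1.0 / np.prod(mu[i] - others)
    return b

# Example from the text: C_l = (a^2 + l^2 (l+1)^2)^{-1}, kappa = (ia, -ia).
a = np.sqrt(10.0)
b = partial_fraction_weights([1j * a, -1j * a])
```

Summing $b_i / (\ell(\ell+1) - \mu_i)$ reproduces the product form of (2.1) exactly, which makes for an easy consistency check.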

We note that the partial sums in equation (3.1) can be written in the form

$$C_{g_m}(z_1,z_2) = \frac{1}{2\pi} \sum_{i=1}^{M} b_i\, G_{\lambda_i}^m(z_1,z_2),$$

where

$$G_{\lambda_i}^m(z_1,z_2) = 2\pi \sum_{\ell \ge |m|} \frac{L_{\ell m}(z_1)\, L_{\ell m}(z_2)}{\ell(\ell+1) - \lambda_i(\lambda_i+1)},$$

which is nothing more than the bilinear expansion of the Green's function [17, (17.72)] for the operator $\kappa_i - \mathcal{L}_m$. The Green's function can alternatively be characterised as the solution of the differential equation

$$(\kappa_i - \mathcal{L}_m)\, G_{\lambda_i}^m(x,y) = \delta(x-y). \tag{3.3}$$

One eigenfunction of this operator with eigenvalue zero is the associated Legendre function $P_{\lambda_i}^m$, where $P_{\lambda_i}^m$ denotes the generalisation of $P_{\ell m}$ to non-integer degrees $\lambda_i$.

Associated Legendre polynomials can be generalised to complex degree $\lambda$ and order $\mu$ as

$$P_\lambda^\mu(z) = \frac{1}{\Gamma(1-\mu)} \Bigl(\frac{1+z}{1-z}\Bigr)^{\mu/2}\, {}_2F_1\Bigl(-\lambda,\, \lambda+1;\, 1-\mu;\, \frac{1-z}{2}\Bigr)$$

via the (Gauss) hypergeometric function

$${}_2F_1(a,b;c;x) = \sum_{n=0}^{\infty} \frac{(a)_n (b)_n}{(c)_n}\, \frac{x^n}{n!},$$

where $(x)_n$ denotes the rising Pochhammer symbol. The relevant case of $\mu$ being an integer (i.e., $\mu = m$) is only defined in the limit, which we find by (see [1, 15.3.3])

$${}_2F_1(a,b;c;x) = (1-x)^{c-a-b}\, {}_2F_1(c-a,\, c-b;\, c;\, x)$$

and (see [1, 15.1.2])

$$\lim_{c \to -m} \frac{1}{\Gamma(c)}\, {}_2F_1(a,b;c;x) = \frac{(a)_{m+1}(b)_{m+1}}{(m+1)!}\, x^{m+1}\, {}_2F_1(a+m+1,\, b+m+1;\, m+2;\, x)$$

to give

$$P_\lambda^m(z) = \lim_{\mu \to m} P_\lambda^\mu(z) = \frac{(-\lambda)_m (1+\lambda)_m}{m!} \Bigl(\frac{1-z}{1+z}\Bigr)^{m/2}\, {}_2F_1\Bigl(-\lambda,\, \lambda+1;\, 1+m;\, \frac{1-z}{2}\Bigr). \tag{3.4}$$

This function is well-defined at $z = 1$ with $P_\lambda^m(1) = \delta_{m0}$, and singular at $z = -1$ for all $m \ge 0$ if $\lambda \notin \mathbb{Z}$.
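Formula (3.4) can be evaluated directly from a truncated hypergeometric series, since the argument $(1-z)/2$ lies in $[0,1)$ for $z \in (-1,1]$. A sketch (names and truncation are ours, not the optimised evaluation used in smerfs):

```python
import math

def hyp2f1(a, b, c, x, terms=300):
    """Truncated Gauss hypergeometric series; adequate for |x| < 1,
    which covers x = (1 - z)/2 for z in (-1, 1]."""
    total = term = 1.0 + 0.0j
    for n in range(terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * x
        total += term
    return total

def pochhammer(x, n):
    """Rising factorial (x)_n."""
    out = 1.0 + 0.0j
    for k in range(n):
        out *= x + k
    return out

def legendre_p(lam, m, z):
    """Associated Legendre function P_lambda^m of (possibly complex)
    degree lam and integer order m >= 0, following (3.4)."""
    pref = pochhammer(-lam, m) * pochhammer(1 + lam, m) / math.factorial(m)
    return (pref * ((1 - z) / (1 + z)) ** (m / 2)
            * hyp2f1(-lam, lam + 1, 1 + m, (1 - z) / 2))
```

For integer degrees this reduces to the classical polynomials, e.g. $P_1(z) = z$ and $P_1^1(z) = -\sqrt{1-z^2}$ (with the Condon–Shortley sign of the definition above), which provides a convenient spot check.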

For a second, linearly independent solution to (3.3) we use $P_{\lambda_i}^m(-z)$, which is not linearly dependent since $\lambda_i + m$ is not an integer; see, e.g., [1, 8.2.3]. We can then construct the Green's function by using the Wronskian (e.g., [15, 14.2.3])

$$\mathcal{W}\bigl\{P_\lambda^m(z),\, P_\lambda^m(-z)\bigr\} = \frac{2}{\Gamma(\lambda+1-m)\, \Gamma(-\lambda-m)\, (1-z^2)}$$

as

$$G_\lambda^m(x,y) = \frac{\Gamma(1+\lambda-m)\, \Gamma(-\lambda-m)}{2} \times \begin{cases} P_\lambda^m(-x)\, P_\lambda^m(y), & x \le y, \\ P_\lambda^m(x)\, P_\lambda^m(-y), & x \ge y, \end{cases}$$

which satisfies continuity, regularity at $x = -1$ and $x = 1$, and, by substitution,

$$\bigl(-\lambda(\lambda+1) - \mathcal{L}_m\bigr)\, G_\lambda^m(x,y) = \delta(x-y).$$
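As a sanity check, the closed form can be compared numerically against the truncated bilinear expansion for $m = 0$ and a real non-integer $\lambda$ (so that `math.gamma` suffices); helper names and truncation lengths are ours:

```python
import math
import numpy as np

def legendre_plam(lam, z, terms=400):
    """P_lambda(z) of order m = 0 via the hypergeometric series of
    (3.4); truncated, adequate for z in (-1, 1]."""
    x = (1.0 - z) / 2.0
    total = term = 1.0
    for n in range(terms):
        term *= (-lam + n) * (lam + 1 + n) / ((1 + n) * (n + 1)) * x
        total += term
    return total

def green_m0(lam, x, y):
    """Closed-form Green's function G_lambda^0 for m = 0, combining the
    solutions P_lambda(z) and P_lambda(-z) as in the display above."""
    pref = 0.5 * math.gamma(1 + lam) * math.gamma(-lam)
    lo, hi = (x, y) if x <= y else (y, x)
    return pref * legendre_plam(lam, -lo) * legendre_plam(lam, hi)
```

Summing the bilinear expansion $\sum_\ell \tfrac{2\ell+1}{2}\, P_\ell(x) P_\ell(y) / (\ell(\ell+1) - \lambda(\lambda+1))$ over a few hundred terms reproduces `green_m0` to a few parts in a thousand.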

In conclusion, we obtain that

$$\begin{aligned} J_{pq}^m(z_1,z_2) &= C_{g_m,p,q}(z_1,z_2) = \frac{1}{2\pi} \sum_{i=1}^{M} b_i\, \partial_{z_1}^p \partial_{z_2}^q\, G_{\lambda_i}^m(z_1,z_2) \\ &= \frac{1}{4\pi} \sum_{i=1}^{M} b_i\, \Gamma(1+\lambda_i-m)\, \Gamma(-\lambda_i-m) \times \begin{cases} (-1)^p\, P_{\lambda_i}^{m+p}(-z_1)\, P_{\lambda_i}^{m+q}(z_2), & z_1 \le z_2, \\ (-1)^q\, P_{\lambda_i}^{m+p}(z_1)\, P_{\lambda_i}^{m+q}(-z_2), & z_1 \ge z_2. \end{cases} \end{aligned} \tag{3.5}$$

Substituting the associated Legendre functions by the hypergeometric functions as in (3.4), i.e., setting

$$P_{\lambda_i}^{m+p}(z_1) = \frac{(-\lambda_i)_{m+p}\, (1+\lambda_i)_{m+p}}{(m+p)!} \Bigl(\frac{1-z_1}{1+z_1}\Bigr)^{(m+p)/2}\, {}_2F_1\Bigl(-\lambda_i,\, \lambda_i+1;\, 1+(m+p);\, \frac{1-z_1}{2}\Bigr),$$

and similarly for $q$ and $z_2$, we derive our efficient computation method for the cross-covariance matrices $J^m$.

We observe that the above formula requires many re-computations of associated Legendre functions. To be more efficient, we transform the vector $g_m^{(0:M-1)}$ of derivatives up to order $M-1$ to a vector of linear combinations of derivatives of increasing order. This does not change anything since in the end we are only interested in the random field itself and not in its derivatives. Therefore, define for $m$ the family of operators $(\mathcal{U}_m^q,\, q = 0,\dots,M-1)$ by

$$\mathcal{U}_m^q f := (-1)^q (1-x^2)^{(m+q)/2}\, \frac{\partial^q}{\partial x^q} \bigl[(1-x^2)^{-m/2} f\bigr],$$

which satisfies $\mathcal{U}_m^q P_\lambda^m = P_\lambda^{m+q}$. Thus setting

$$\tilde{J}_{pq}^m(x,y) := \mathcal{U}_m^p\, \mathcal{U}_m^q\, C_{g_m}(x,y) \tag{3.6}$$

is much easier to compute, while the properties of the random field itself are not changed. Let us keep the notation $g_m^{(0:M-1)}$ even if we use the modified matrices $\tilde{J}^m$.

## 4 Algorithms

Below we outline an algorithm for the construction of the isotropic GRF on $\mathbb{S}^2$. Since sampling the GRF amounts to filtering white noise, in our tests the time is dominated by the computation of the (inhomogeneous) filter coefficients. We have therefore described the pre-computation, which only has to be done once, in Algorithm 1, and the generation of samples in Algorithm 2.

For $n \in \mathbb{N}$, let us fix a grid $z = (z_{-n},\dots,z_0,\dots,z_n)$ on $[-1,1]$ and $\phi = (\phi_0,\dots,\phi_{2m_{\max}-1})$ for some $m_{\max} \in \mathbb{N}$. One possible symmetric example is to set $z_j = \sin\bigl(\frac{2\pi j}{2(2n+1)}\bigr)$ such that

$$z_{-n} = \sin\Bigl(\frac{-2n\pi}{2(2n+1)}\Bigr), \quad z_0 = 0, \quad z_n = \sin\Bigl(\frac{2n\pi}{2(2n+1)}\Bigr),$$

and $\phi = \pi m_{\max}^{-1}\, (0,1,2,\dots,2m_{\max}-1)$ in $[0,2\pi)$. Furthermore, define the covariance operator by setting $\lambda = (\lambda_1,\dots,\lambda_M)$ and $b = (b_1,\dots,b_M)$ in (3.2).

## Algorithm 1 (Pre-computation of covariance matrices).

Algorithm 1 is devoted to the pre-computations that set up the covariance matrices, which only have to be computed once and can be reused for sampling on the same grid. We remark that the algorithm can be optimised if the discrete grid $z$ is symmetric around zero, i.e., if it satisfies $z_{-i} = -z_i$. Then $A^m$ and $B^m$ only have to be computed for all $z_i \ge 0$, and the values at negative $z_i$ follow by symmetry. This is the case for the example grid $z$ above.

Samples of the GRF on the sphere are generated with Algorithm 2, based on the pre-computed families of matrices $A$, $B$ and $B_{\mathrm{eq}}$. Since the computational cost of generating white noise is negligible, the complexity of the algorithm reduces to that of $(2n+1)$ FFTs in 1d, which makes it computationally fast. The algorithm is therefore attractive if many samples have to be computed. One example is ice crystals, which can be modelled by lognormal random fields [14]. For more information on the relation of lognormal and Gaussian random fields, the reader is referred to [8].
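The synthesis step of the sampling stage, turning the iso-latitude modes $g_m(z_j)$ into the field on the sphere, can be sketched as follows. This is a NumPy illustration of the FFT step only; the function name and array layout are ours, and the actual smerfs implementation may differ:

```python
import numpy as np

def assemble_sphere(g, nphi):
    """Synthesise the real field T(z_j, phi_k) on phi_k = 2*pi*k/nphi
    from the iso-latitude modes g[m, j] = g_m(z_j), m = 0..mmax, with
    one inverse real FFT per latitude ring.  The Hermitian symmetry
    g_{-m} = conj(g_m) of Section 2 makes T real.  Cost is
    O(n_z * nphi * log(nphi)).
    """
    mmax = g.shape[0] - 1
    assert nphi >= 2 * mmax, "phi grid must resolve the highest mode"
    # np.fft.irfft zero-pads the spectrum to length nphi//2 + 1 and
    # divides by nphi; multiplying back gives the plain mode sum
    # T_k = sum_{m=-mmax}^{mmax} g_m exp(i m phi_k).
    return np.fft.irfft(g, n=nphi, axis=0) * nphi
```

A single $m = 0$ mode gives a field that is constant in $\phi$, while a single $m = 1$ mode of unit amplitude gives $2\cos\phi$, matching the mode sum above.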

## 5 Simulation

To illustrate the presented algorithm and its performance, we show simulation results in this section, including the generation of a sample and the convergence of the covariance function with respect to the discretisation.

Let us first show random field samples that the algorithm generates. To this end, we fix the angular power spectrum given by

$$C_\ell = \bigl(10 + \ell^2(\ell+1)^2\bigr)^{-1}. \tag{5.1}$$

The algorithm in Section 4 first generates the one-dimensional random fields $(g_m,\, m = 0,\dots,m_{\max})$ on a discrete grid, which are then transformed to random fields on the sphere by FFT. To give the reader an idea of $g_m$, we show in Figure 1 samples of $g_0(\cos\theta)$ and the real part of $g_5(\cos\theta)$, along with the standard deviation (RMS expectation) $\mathbb{E}\bigl(\operatorname{Re}(g_m(\cos\theta))^2\bigr)^{1/2}$.

### Figure 1

Random walks $g_0$ and (the real part of) $g_5$ in the upper and lower panels, respectively, for the angular power spectrum given in equation (5.1). The black line indicates the walks, whilst the grey shaded region indicates the $\pm$ standard deviation $\mathbb{E}(\operatorname{Re}(g_m)^2)^{1/2}$.

The standard deviation depends on $\theta$ since the random field has zero mean but is not translation invariant.

Note that $g_5$ tends to zero at $\theta = 0^\circ, 180^\circ$. Since $T$ has a continuous first derivative (i.e., $M > 1$, as is the case here), for all $m \ne 0$ the standard deviation is zero at $\cos\theta = \pm 1$, i.e., the nonzero iso-latitude Fourier transforms must tend to zero near the poles to meet the smoothness criterion.

### Figure 2

Left to right: visualisation of the process of building the random field starting from the equator, i.e., at each iso-latitude line (constant $z$) we conditionally sample the $g_m(z)$ based on the previous line (including derivatives) and gradually work our way north and south.

The process of generating a sample is illustrated in Figure 2. We start by generating the random fields $g_m(0)$, which after FFT become the discrete sample of the random field at the equator. Conditionally on these, $g_m(z_{-1})$ and $g_m(z_1)$ are sampled and added to the picture after FFT. This process is continued until the whole sphere is covered with random numbers and the random field on the sphere is complete.

One way to validate that a method for sampling a GRF is correct, and that it converges, is to estimate the covariance function from many samples and compare it to the analytic covariance inferred from the angular power spectrum. We have used the GRF with angular power spectrum from equation (5.1) and corresponding covariance function

$$C_T(x,y) = \sum_{\ell=0}^{\infty} \frac{2\ell+1}{4\pi}\, C_\ell\, P_\ell\bigl(\langle x, y \rangle_{\mathbb{R}^3}\bigr),$$

which is plotted as a solid blue line in the left two panels of Figure 3.

In order to analyse the errors of our method, we have constructed filters at multiple resolutions $n_\theta \in \{4, 8, 16, 32\}$, keeping $m_{\max} = n_\theta$ and $n_\phi = 2n_\theta$ (i.e., so that the maximum angular separation between adjacent grid points is $\pi/n_\theta$). We generated $N := 320{,}000$ samples $(T_j^{n_\theta},\, j = 1,\dots,N)$ at each resolution to estimate the covariance by

$$\overline{\operatorname{Cov}}(x,y) := N^{-1} \sum_{j=1}^{N} T_j^{n_\theta}(x)\, T_j^{n_\theta}(y)$$

and to compare it to the theoretical one. The dotted and dashed lines in the left two panels of Figure 3 show the results for the different $n_\theta$. The left panel is based on equatorial evaluations, while the middle one estimates the covariance along meridional lines.
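The estimator $\overline{\operatorname{Cov}}$ is a plain Monte Carlo average over zero-mean samples. A self-contained sketch with a toy two-point Gaussian (the function name and the test covariance are ours):

```python
import numpy as np

def empirical_cov(samples):
    """Monte Carlo covariance estimate as in Section 5 for zero-mean
    samples.  samples has shape (N, npoints); entry (x, y) of the
    result is N^{-1} sum_j T_j(x) T_j(y)."""
    n = samples.shape[0]
    return samples.T @ samples / n

# Toy check: correlated bivariate Gaussian with a known covariance.
rng = np.random.default_rng(1)
true_cov = np.array([[1.0, 0.5], [0.5, 1.0]])
X = rng.standard_normal((200_000, 2)) @ np.linalg.cholesky(true_cov).T
est = empirical_cov(X)
```

The sampling error of each entry scales as $O(N^{-1/2})$, so with $N = 2 \times 10^5$ the estimate agrees with the true covariance to a few parts in a thousand.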

### Figure 3

Error analysis for an example GMRF. The left and middle panels estimate the covariance along equatorial and meridional lines, respectively, with the analytic covariance in blue. The right panel indicates the convergence of the error as a function of $n_\theta$.

The error was computed on each grid by taking the maximum difference between the theoretical and the estimated covariance over all grid points in a set $\mathcal{S}$, which was once the equator and once the meridian, i.e., the error $e_{n_\theta}$ was computed by

$$e_{n_\theta} := \max_{x,y \in \mathcal{S}} \bigl|\overline{\operatorname{Cov}}(x,y) - C_T(x,y)\bigr|.$$

The results are shown in the right panel of Figure 3. The maximum error of the covariance function falls as $m_{\max}^{-3/2}$ (where $n_\theta = m_{\max}$), as can be seen in the error plot.

Funding source: Knut och Alice Wallenbergs Stiftelse

Award Identifier / Grant number: 621-2014-3995

Funding source: Vetenskapsrådet

Award Identifier / Grant number: 621-2014-3995

Funding statement: The work of the second author was supported in part by the Knut and Alice Wallenberg foundation and the Swedish Research Council under Reg. No. 621-2014-3995.

# Acknowledgements

The first author would like to thank Ben Lowing, and the second author would like to thank the members of the KAW project “Stochastics for big data and big systems” for fruitful discussions.

### References

[1] M. Abramowitz and I. Stegun, Handbook of Mathematical Functions, Dover Publications, Mineola, 1965.

[2] J. M. Bardeen, J. R. Bond, N. Kaiser and A. S. Szalay, The statistics of peaks of Gaussian random fields, Astrophys. J. 304 (1986), 15–61. 10.1086/164143

[3] W. L. Brogan, Modern Control Theory, 3rd ed., Prentice-Hall, Upper Saddle River, 1991.

[4] F. Cohen, Z. Fan and M. Patel, Classification of rotated and scaled textured images using Gaussian Markov random field models, IEEE Trans. Pattern Anal. Mach. Intelligence 13 (1991), no. 2, 192–202. 10.1109/34.67648

[5] J. Driscoll and D. Healy, Computing Fourier transforms and convolutions on the 2-sphere, Adv. in Appl. Math. 15 (1994), no. 2, 202–250. 10.1006/aama.1994.1008

[6] K. M. Górski, E. Hivon, A. J. Banday, B. D. Wandelt, F. K. Hansen, M. Reinecke and M. Bartelmann, HEALPix: A framework for high-resolution discretization and fast analysis of data distributed on the sphere, Astrophys. J. 622 (2005), 759–771. 10.1086/427976

[7] Y. Hoffman and E. Ribak, Constrained realizations of Gaussian fields – A simple algorithm, Astrophys. J. Letters 380 (1991), L5–L8.

[8] A. Lang and C. Schwab, Isotropic Gaussian random fields on the sphere: Regularity, fast simulation and stochastic partial differential equations, Ann. Appl. Probab. 25 (2015), no. 6, 3047–3094. 10.1214/14-AAP1067

[9] F. Lindgren, H. Rue and J. Lindström, An explicit link between Gaussian fields and Gaussian Markov random fields: The stochastic partial differential equation approach, J. R. Stat. Soc. Ser. B. Stat. Methodol. 73 (2011), no. 4, 423–498. 10.1111/j.1467-9868.2011.00777.x

[10] H. McKean, Jr., Brownian motion with a several-dimensional time, Theory Probab. Appl. 8 (1963), no. 4, 335–354. 10.1137/1108042

[11] E. Mendez and K. O’Donnell, Observation of depolarization and backscattering enhancement in light scattering from Gaussian random surfaces, Optics Commun. 61 (1987), no. 2, 91–95. 10.1016/0030-4018(87)90225-2

[12] G. S. P. Miller, The definition and rendering of terrain maps, SIGGRAPH Comput. Graph. 20 (1986), no. 4, 39–48. 10.1145/15886.15890

[13] J. Moura and S. Goswami, Gauss–Markov random fields (CMrf) with continuous indices, IEEE Trans. Inform. Theory 43 (1997), no. 5, 1560–1573. 10.1109/18.623152

[14] T. Nousiainen and G. M. McFarquhar, Light scattering by quasi-spherical ice crystals, J. Atmospheric Sci. 61 (2004), no. 18, 2229–2248. 10.1175/1520-0469(2004)061<2229:LSBQIC>2.0.CO;2

[15] F. W. J. Olver, A. B. Olde Daalhuis, D. W. Lozier, B. I. Schneider, R. F. Boisvert, C. W. Clark, B. R. Miller and B. V. Saunders, NIST Digital Library of Mathematical Functions, http://dlmf.nist.gov/, Release 1.0.15 of 2017-06-01.

[16] L. D. Pitt, A Markov property for Gaussian processes with a multidimensional parameter, Arch. Ration. Mech. Anal. 43 (1971), 367–391. 10.1007/BF00252003

[17] K. F. Riley, M. P. Hobson and S. J. Bence, Mathematical Methods for Physics and Engineering: A Comprehensive Guide, 2nd ed., Cambridge University Press, Cambridge, 2002.

[18] Y. A. Rozanov, Markov Random Fields, Springer, New York, 1982.

[19] H. Rue and L. Held, Gaussian Markov Random Fields: Theory and Applications, Monogr. Statist. Appl. Probab. 104, Chapman & Hall, London, 2005.

[20] G. Szegő, Orthogonal Polynomials, 4th ed., Amer. Math. Soc. Colloq. Publ. 23, American Mathematical Society, Providence, 1975.

[21] A. Tewfik, B. Levy and A. Willsky, Internal models and recursive estimation for 2-d isotropic random fields, IEEE Trans. Inform. Theory 37 (1991), no. 4, 1055–1066. 10.1109/18.86997

[22] D. Vats and J. Moura, Telescoping recursive representations and estimation of Gauss–Markov random fields, IEEE Trans. Inform. Theory 57 (2011), no. 3, 1645–1663. 10.1109/TIT.2011.2104612

[23] P. Whittle, On stationary processes in the plane, Biometrika 41 (1954), no. 3–4, 434–449. 10.1093/biomet/41.3-4.434