Open Access (CC BY 4.0 license). Published by De Gruyter, November 17, 2020.

Discretisation and Product Distributions in Ring-LWE

  • Sean Murphy and Rachel Player

Abstract

A statistical framework applicable to Ring-LWE was outlined by Murphy and Player (IACR eprint 2019/452). Its applicability was demonstrated with an analysis of the decryption failure probability for degree-1 and degree-2 ciphertexts in the homomorphic encryption scheme of Lyubashevsky, Peikert and Regev (IACR eprint 2013/293). In this paper, we clarify and extend results presented by Murphy and Player. Firstly, we make precise the approximation of the discretisation of a Normal random variable as a Normal random variable, as used in the encryption process of Lyubashevsky, Peikert and Regev. Secondly, we show how to extend the analysis given by Murphy and Player to degree-k ciphertexts, by precisely characterising the distribution of the noise in these ciphertexts.

MSC 2010: 94A60; 11T71

1 Introduction

The Ring-LWE problem [6, 12] has become a standard hard problem underlying lattice-based cryptography. In [7], a detailed algebraic background for Ring-LWE was given, together with a statistical framework based on δ-subgaussian random variables [9, 10]. Another statistical framework applicable to Ring-LWE, based on a Central Limit approach, was outlined in [11]. It is argued in [11] that this is a more natural approach than one using δ-subgaussian arguments, when considering the important application setting of homomorphic encryption [5].

Ciphertexts in all homomorphic encryption schemes have an inherent noise which is small in fresh ciphertexts and grows during homomorphic evaluation operations. If the noise grows too large, decryption will fail. A thorough understanding of the statistical properties of the noise is therefore essential for choosing efficient parameters while ensuring correctness. Rather than analysing the noise directly, we consider the embedding of the noise via the canonical embedding (see e.g. [7]) in a complex space H.

In this paper, we present results on discretisation and product distributions applicable to Ring-LWE cryptography, which clarify and extend results presented in [11]. For concreteness, these results could be applied to the homomorphic encryption scheme of Section 8.3 of [7], termed SymHom by [11] and analysed there.

In a Ring-LWE discretisation, an element of the complex space H is rounded to some randomly determined nearby element of H in a lattice coset Λ + c. We require that all components of the vector expressing this discretisation in an appropriate basis for H are bounded by an appropriate threshold in order for a successful decryption to take place. The statistical properties of the discretisation process are therefore of fundamental importance in determining correctness. Our results demonstrate how we can obtain a good multivariate Normal approximation for (embedded) noise of a degree-1 (fresh) ciphertext vector expressed in a decryption basis after a change of basis transformation. This justifies the approach used in [11, Theorem 1] for bounding the decryption failure probability of such ciphertexts.

In homomorphic Ring-LWE cryptosystems such as SymHom, for $k = k_1 + k_2$, a degree-k ciphertext $c_{\mathrm{mult}}$ is formed as the result of the homomorphic multiplication of two ciphertexts $c_1$ and $c_2$ of degrees $k_1$ and $k_2$ respectively. The noise in $c_{\mathrm{mult}}$ is defined to be the product of the noises in the input ciphertexts $c_1$ and $c_2$. We show that using the Central Limit Framework of [11], the distribution of a vector expressing the (embedded) noise in a degree-k SymHom ciphertext in an appropriate decryption basis can be approximated by a multivariate Normal distribution. This extends the analysis for degree-2 ciphertexts given in [11, Theorem 2].

1.1 Contributions

In Section 3 we make precise the approximation of the CRR discretisation (Definition 2.5) of a Normal random variable as a Normal random variable, so potentially allowing a more direct and powerful approach to CRR discretisation than a δ-subgaussian approach. Moreover, our techniques are potentially generalisable to other randomised discretisation methods. Our first main result is Proposition 3.5, which describes the distribution of the Balanced Reduction (Definition 2.4) of a Normal random variable. To obtain Proposition 3.5, we first show in Lemma 3.1 that the Balanced Reduction of a Normal random variable gives a Triangular distribution, which is itself approximated by a Normal distribution (Lemma 3.2).

In Section 4 we extend the analysis of degree-2 ciphertexts given in [11] to degree-k ciphertexts. Our second main result is Lemma 4.4, which shows that a component $Z_j^{(k)}$ of the k-fold ⊗-product $Z^{(k)}$ has a 𝒦 distribution (Section 4.1).

2 Background

In this section, we give the relevant background for our discussion. In Section 2.1 we recall the necessary algebraic background to Ring-LWE, following [7]. In Section 2.2 we recall results on discretisation following [10]. In Section 2.3 we recall the definition and basic properties of the Meijer G-Function [2, 3, 4].

2.1 Algebraic Background

The mathematical structure underlying Ring-LWE is the polynomial quotient ring obtained from the mth cyclotomic polynomial of degree n. For simplicity, we consider the case where m is a large prime, so $n = \phi(m) = m - 1$, and we let $n' = \tfrac{1}{2}n$. Our focus is solely on the vector space aspects of Ring-LWE, and in particular our discussion is based on the complex space H (Definition 2.1).

Definition 2.1

The conjugate pair space H is $H = T(\mathbb{R}^n)$, where T is the n × n unitary conjugate pairs matrix given by
$$T = 2^{-\frac{1}{2}}\begin{pmatrix} I_{n'} & iJ_{n'} \\ J_{n'} & -iI_{n'} \end{pmatrix},$$
where $I_{n'}$ is the n′ × n′ identity matrix and $J_{n'}$ is the n′ × n′ reverse diagonal matrix of 1s.

We note that $T^{-1} = T^{\dagger}$, where $T^{\dagger}$ denotes the conjugate transpose of T. We can represent elements of H as vectors with respect to a basis for H, and two such bases of H of direct relevance are specified in Definition 2.2.

Definition 2.2

The I-basis for H is given by the columns of the n × n identity matrix $I_n$, that is to say by the standard basis vectors. The T-basis for H is given by the columns of the conjugate pairs matrix T.

We note that an element of H is expressed in the I-basis as a vector of n′ conjugate pairs and by construction in the T-basis as a real-valued vector. A vector expressing an element of H in the I-basis has the same norm as the vector expressing the same element in the T-basis, as T is a unitary matrix ($|Tv|_2 = |v|_2$). Furthermore, the complex space H has a natural well-defined multiplication operation, and Definition 2.3 specifies this multiplication operation for vectors expressing elements of H in the I-basis and in the T-basis.

Definition 2.3

If $a = (a_1, \ldots, a_n)^T$ and $b = (b_1, \ldots, b_n)^T$ are vectors expressing elements of H in the I-basis for H, then the ⊙-product $a \odot b = (a_1 b_1, \ldots, a_n b_n)^T$ is their componentwise product. If u and v are (real-valued) vectors expressing elements of H in the T-basis for H, then the ⊗-product is $u \otimes v = T^{\dagger}(Tu \odot Tv)$.

The ⊗-product of two real-valued vectors can be expressed by considering appropriate pairs of components. The space H can be regarded as $H_2 \times \cdots \times H_2$, where $H_2 = T(\mathbb{R}^2)$ (with T here the corresponding 2 × 2 conjugate pairs matrix). For two real-valued vectors $u, v \in \mathbb{R}^2$ expressing elements of $H_2$ in the T-basis for $H_2$, their ⊗-product is given by

$$u \otimes v = \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} \otimes \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = 2^{-\frac{1}{2}}\begin{pmatrix} u_1 v_1 - u_2 v_2 \\ u_1 v_2 + u_2 v_1 \end{pmatrix}.$$
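The following is a small numerical sketch (not taken from the paper) of Definitions 2.1–2.3: it builds the conjugate pairs matrix T for a toy even dimension, checks that T is unitary, and checks that the T-basis ⊗-product $T^{\dagger}(Tu \odot Tv)$ agrees, pair by pair, with the closed form above. All names, the toy dimension and the pairing of coordinates are illustrative assumptions.

```python
# Illustrative sketch of Definitions 2.1-2.3 (not from the paper); names are assumptions.
import numpy as np

def conjugate_pairs_matrix(n: int) -> np.ndarray:
    """n x n unitary conjugate pairs matrix T = 2^{-1/2} [[I, iJ], [J, -iI]] (n even)."""
    m = n // 2
    I, J = np.eye(m), np.fliplr(np.eye(m))
    return np.block([[I, 1j * J], [J, -1j * I]]) / np.sqrt(2)

def tensor_product(u: np.ndarray, v: np.ndarray, T: np.ndarray) -> np.ndarray:
    """The (x)-product of T-basis vectors: T^dagger (Tu . Tv), with . the componentwise product."""
    return T.conj().T @ ((T @ u) * (T @ v))

n = 6
T = conjugate_pairs_matrix(n)
assert np.allclose(T.conj().T @ T, np.eye(n))      # T is unitary
rng = np.random.default_rng(0)
u, v = rng.normal(size=n), rng.normal(size=n)
w = tensor_product(u, v, T)
assert np.allclose(w.imag, 0)                      # the result is real in the T-basis
# for this T, coordinates j and n-1-j of a T-basis vector play the role of the pair (u1, u2)
for j in range(n // 2):
    u1, u2, v1, v2 = u[j], u[n - 1 - j], v[j], v[n - 1 - j]
    expected = np.array([u1 * v1 - u2 * v2, u1 * v2 + u2 * v1]) / np.sqrt(2)
    assert np.allclose([w[j].real, w[n - 1 - j].real], expected)
print("conjugate pairs matrix and (x)-product checks passed")
```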

2.2 Discretisation Background

The discretisation process in (for example) a homomorphic Ring-LWE cryptosystem “rounds” an element of H to some randomly determined nearby element of H in a lattice coset Λ + c of some lattice Λ in H. As an illustration of a discretisation process, we use the coordinate-wise randomised rounding method of discretisation or CRR discretisation given in the first bullet point of Section 2.4.2 of [7]. We give a formal statistical description of CRR discretisation in terms of a random Balanced Reduction function following [10].

Definition 2.4

The univariate Balanced Reduction function ℛ on ℝ is the random function

$$\mathcal{R}(a) = \begin{cases} (a - \lfloor a \rfloor) - 1 & \text{with probability } a - \lfloor a \rfloor, \\ a - \lfloor a \rfloor & \text{with probability } 1 - (a - \lfloor a \rfloor). \end{cases}$$

The multivariate Balanced Reduction function ℛ on ℝ^l with support on [−1, 1]^l is the random function $\mathcal{R} = (\mathcal{R}_1, \ldots, \mathcal{R}_l)$ with component functions $\mathcal{R}_1, \ldots, \mathcal{R}_l$ that are independent univariate Balanced Reduction functions.

Definition 2.5

Suppose B is a (column) basis matrix for the n-dimensional lattice Λ in H. If ℛ is the Balanced Reduction function, then the coordinate-wise randomised rounding discretisation or CRR discretisation $\lfloor X \rceil^{B}_{\Lambda+c}$ of the random variable X on H to the lattice coset Λ + c with respect to the basis matrix B is the random variable

$$\lfloor X \rceil^{B}_{\Lambda+c} = X + B\,\mathcal{R}\!\left(B^{-1}(c - X)\right).$$

The CRR discretisation $\lfloor X \rceil^{B}_{\Lambda+c}$ of the random variable X with respect to the basis B of Λ is a random variable on the lattice coset Λ + c, and is a valid discretisation, that is to say it does not depend on the chosen coset representative c [7, 10].
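The following is a minimal simulation sketch (not from the paper) of Definitions 2.4 and 2.5 as reconstructed above: a coordinate-wise Balanced Reduction, the CRR discretisation of a point to a lattice coset, and a check that the output indeed lies in Λ + c. The toy real lattice basis, the standard deviation and all names are illustrative assumptions.

```python
# Illustrative sketch of Balanced Reduction / CRR discretisation; the lattice is a toy choice.
import numpy as np

rng = np.random.default_rng(1)

def balanced_reduction(a: np.ndarray) -> np.ndarray:
    """Componentwise R(a): (a - floor(a)) - 1 with prob a - floor(a), else a - floor(a)."""
    frac = a - np.floor(a)
    round_up = rng.random(a.shape) < frac          # occurs with probability a - floor(a)
    return np.where(round_up, frac - 1.0, frac)    # values in (-1, 1], mean 0 per coordinate

def crr_discretise(X: np.ndarray, B: np.ndarray, c: np.ndarray) -> np.ndarray:
    """CRR discretisation of X to the coset Lambda + c with respect to the basis matrix B."""
    return X + B @ balanced_reduction(np.linalg.solve(B, c - X))

n = 4
B = np.array([[2., 1., 0., 0.],
              [0., 2., 1., 0.],
              [0., 0., 2., 1.],
              [1., 0., 0., 2.]])                    # toy invertible real lattice basis
c = rng.normal(size=n)
X = rng.normal(scale=8.0, size=n)                   # "noise" with a largish standard deviation
Y = crr_discretise(X, B, c)
coords = np.linalg.solve(B, Y - c)
assert np.allclose(coords, np.round(coords))        # Y lies in the coset Lambda + c
print("discretised point:", Y, "coset coordinates:", np.round(coords))
```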

2.3 Meijer G-Functions

Our analysis in Section 4 will be most easily expressed in terms of Meijer G-functions [2–4], which are specified in general in Definition 2.6. Definition 2.7 gives three classes of Meijer G-functions that are of direct relevance to us.

Definition 2.6

The Meijer G-Function $G^{\xi\,v}_{p\,q}\!\left(\begin{matrix} a_1 & \cdots & a_p \\ b_1 & \cdots & b_q \end{matrix}\,\middle|\, x\right)$ is defined for x ≠ 0 and integers ξ, v, p, q with 0 ≤ ξ ≤ q and 0 ≤ v ≤ p by the line integral

$$G^{\xi\,v}_{p\,q}\!\left(\begin{matrix} a_1 & \cdots & a_p \\ b_1 & \cdots & b_q \end{matrix}\,\middle|\, x\right) = \frac{1}{2\pi i}\int_L \frac{\prod_{j=1}^{\xi}\Gamma(b_j - s)\,\prod_{j=1}^{v}\Gamma(1 - a_j + s)}{\prod_{j=\xi+1}^{q}\Gamma(1 - b_j + s)\,\prod_{j=v+1}^{p}\Gamma(a_j - s)}\, x^s\, ds$$

in the complex plane, where Γ denotes the gamma function and $a_k - b_j \ne 1, 2, 3, \ldots$ (for j = 1, …, ξ and k = 1, …, v). The integral path L runs from −i∞ to i∞ such that all poles of $\Gamma(b_j - s)$ are to the right of the path (for j = 1, …, ξ) and all the poles of $\Gamma(1 - a_k + s)$ are to the left of the path (for k = 1, …, v), though other paths are possible.

Definition 2.7

For a positive integer k and the integral path L of Definition 2.6, the functions $G_k$, $H_k$ and $J_k$ are the Meijer G-functions given by

$$G_k(x) = G^{k\,0}_{0\,k}\!\left(\begin{matrix} - \\ 0, \ldots, 0 \end{matrix}\,\middle|\, x\right) = \frac{1}{2\pi i}\int_L \Gamma(-s)^k\, x^s\, ds,$$
$$H_k(x) = G^{k-1\,1}_{1\,k-1}\!\left(\begin{matrix} 1 \\ 1, \ldots, 1 \end{matrix}\,\middle|\, x\right) = \frac{1}{2\pi i}\int_L \Gamma(1-s)^{k-1}\,\Gamma(s)\, x^s\, ds$$
and
$$J_k(x) = G^{k\,0}_{0\,k}\!\left(\begin{matrix} - \\ 0, \tfrac{1}{2}, \ldots, \tfrac{1}{2} \end{matrix}\,\middle|\, x\right) = \frac{1}{2\pi i}\int_L \Gamma(-s)\,\Gamma\!\left(\tfrac{1}{2}-s\right)^{k-1} x^s\, ds.$$

For small k, we note that $G_1(x) = \exp(-x)$ and $G_2(x) = 2K_0\!\left(2x^{\frac{1}{2}}\right)$, where $K_0(x) = \int_0^{\infty}\exp(-x\cosh t)\, dt$ is a modified Bessel function of the second kind [1]. Similarly, we also have $H_1(x) = \exp\!\left(-x^{-1}\right)$ and $H_2(x) = x(1+x)^{-1}$, as well as $J_1(x) = \exp(-x)$ and $J_2(x) = \pi^{\frac{1}{2}}\exp\!\left(-2x^{\frac{1}{2}}\right)$.
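As a quick numerical sanity check (not from the paper), the small-k closed forms quoted above can be spot-checked with mpmath, whose meijerg(a_s, b_s, x) uses the same parameter convention as Definition 2.6. The evaluation point is an arbitrary choice, and only the identities that are straightforward to call are checked.

```python
# Illustrative spot-check of the small-k Meijer G identities quoted above.
import mpmath as mp

x = mp.mpf("0.7")

G1 = mp.meijerg([[], []], [[0], []], x)                      # G^{1,0}_{0,1}(x | -; 0)
assert mp.almosteq(G1, mp.exp(-x))                           # G_1(x) = exp(-x)

G2 = mp.meijerg([[], []], [[0, 0], []], x)                   # G^{2,0}_{0,2}(x | -; 0, 0)
assert mp.almosteq(G2, 2 * mp.besselk(0, 2 * mp.sqrt(x)))    # G_2(x) = 2 K_0(2 x^{1/2})

H2 = mp.meijerg([[1], []], [[1], []], x)                     # G^{1,1}_{1,1}(x | 1; 1)
assert mp.almosteq(H2, x / (1 + x))                          # H_2(x) = x (1 + x)^{-1}

J2 = mp.meijerg([[], []], [[0, mp.mpf("0.5")], []], x)       # G^{2,0}_{0,2}(x | -; 0, 1/2)
assert mp.almosteq(J2, mp.sqrt(mp.pi) * mp.exp(-2 * mp.sqrt(x)))   # J_2(x) = pi^{1/2} exp(-2 x^{1/2})

print("Meijer G special cases agree:", G1, G2, H2, J2)
```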

3 Discretisation Distributions in Ring-LWE

In Section 3.1, we show that the Balanced Reduction of a Gaussian random variable underlying a degree-1 ciphertext in situations of interest is essentially a Triangular random variable, which can itself be approximated by a Normal random variable. In Section 3.2, we make precise the multivariate Normal approximation of the CRR discretisation of the embedded noise in a degree-1 SymHom ciphertext.

3.1 The Balanced Reduction of a Normal Random Variable

A Ring-LWE encryption process is based on the discretisation of Normal random variables in H. We therefore consider the discretisation $\lfloor X \rceil^{B}_{\Lambda+c}$ of a random variable X = TX′ (in the I-basis) which is the image of some real-valued multivariate Normal random variable X′ under T. However, $B^{-1}(c - X)$ is a real-valued multivariate Normal random variable. Thus we must consider the Balanced Reduction $\mathcal{R}\!\left(B^{-1}(c - X)\right)$ of the Normal random variable $B^{-1}(c - X)$, and Lemma 3.1 essentially shows that such a Balanced Reduction gives a Triangular distribution.

Lemma 3.1

If $Y \sim \mathrm{N}(\mu, \sigma^2)$, then its Balanced Reduction ℛ(Y) has the Triangular distribution △ (density function 1 − |z| for |z| < 1 and 0 otherwise) as its limiting distribution as the standard deviation σ → ∞.

Sketch Proof. We can express the density function $f_{\mathcal{R}(Y)}$ of ℛ(Y) in terms of the density function $f_{Y'}$ of $Y' = Y - \lfloor Y \rfloor$, the “modulo 1” reduction of Y. By considering the Fourier series for $f_{Y'}$ on [0, 1), we can obtain a Fourier series for $f_{\mathcal{R}(Y)}$ on (−1, 1) and hence show that $f_{\mathcal{R}(Y)}(y) \to 1 - |y|$ on (−1, 1) as σ → ∞. A full proof is given in Appendix A. □

The Fourier form shown in the proof of Lemma 3.1 (Appendix A) in fact shows that the Balanced Reduction of a Normal N(μ, σ²) random variable with any mean μ is very close to a Triangular distribution △ with mean E(△) = 0 and variance Var(△) = 1/6 even for a moderate standard deviation σ, as illustrated in Figure 1 for the small standard deviation σ = 0.50. Ring-LWE applications typically use a standard deviation larger than 0.5, giving an even closer approximation.

Figure 1. The density functions of a Triangular (△) random variable (solid line), a Balanced Reduction ℛ(N(0, 0.50²)) of a Normal random variable with standard deviation 0.50 (dashed line) and a Normal random variable $\mathrm{N}\!\left(0, \tfrac{1}{6}\right)$ with standard deviation $\left(\tfrac{1}{6}\right)^{\frac{1}{2}}$ (dotted line).

The Triangular distribution △ can itself be approximated by a Normal N(0, 1/6) distribution with the same mean E(△) = 0 and variance Var(△) = 1/6, in the manner outlined in Lemma 3.2. The closeness of this approximating N(0, 1/6) distribution to a Triangular distribution, and essentially also to a Balanced Reduction of an N(0, σ²) Normal random variable for σ > 0.50, is illustrated in Figure 1.
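The following is a small Monte Carlo sketch (not from the paper) illustrating Lemma 3.1 and Figure 1: the Balanced Reduction of N(0, 0.50²) samples is compared on a coarse grid with the Triangular density and with the approximating N(0, 1/6) density. The sample size and grid are arbitrary choices.

```python
# Illustrative comparison of R(N(0, 0.5^2)) with the Triangular and N(0, 1/6) densities.
import numpy as np

rng = np.random.default_rng(2)
sigma = 0.50
y = rng.normal(0.0, sigma, size=200_000)
frac = y - np.floor(y)
r = np.where(rng.random(y.shape) < frac, frac - 1.0, frac)   # Balanced Reduction R(Y)

edges = np.linspace(-1, 1, 21)
hist, _ = np.histogram(r, bins=edges, density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
triangular = 1 - np.abs(mid)                                  # Triangular density 1 - |y|
normal = np.exp(-3 * mid**2) / np.sqrt(2 * np.pi / 6)         # N(0, 1/6) density

print("empirical mean/var:", r.mean(), r.var(), "  (Triangular: 0 and 1/6 =", 1 / 6, ")")
print("max |empirical - Triangular| on grid:", np.abs(hist - triangular).max())
print("max |Triangular - N(0,1/6)| on grid:", np.abs(triangular - normal).max())
```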

Lemma 3.2

Suppose that W ∼ △ has a Triangular distribution with distribution function $F_W(w) = \mathbf{P}(W \le w) = \tfrac{1}{2}\left(1 + 2w - \operatorname{sign}(w)\,w^2\right)$ for |w| ≤ 1. If Φ is the distribution function of a standard Normal N(0, 1) random variable, then the random variable $W' = \left(\tfrac{1}{6}\right)^{\frac{1}{2}}\Phi^{-1}(F_W(W)) \sim \mathrm{N}\!\left(0, \tfrac{1}{6}\right)$ has a Normal distribution with mean 0 and variance $\tfrac{1}{6}$.

Proof. If $Z \sim \mathrm{N}\!\left(0, \tfrac{1}{6}\right)$, then $F_Z^{-1}(z) = \left(\tfrac{1}{6}\right)^{\frac{1}{2}}\Phi^{-1}(z)$ is the inverse distribution function of Z. Thus the distribution function $F_{W'}$ of W′ is

$$F_{W'}(w) = \mathbf{P}\!\left(W' = \left(\tfrac{1}{6}\right)^{\frac{1}{2}}\Phi^{-1}(F_W(W)) = F_Z^{-1}(F_W(W)) \le w\right) = \mathbf{P}\!\left(W \le F_W^{-1}(F_Z(w))\right) = F_W\!\left(F_W^{-1}(F_Z(w))\right) = F_Z(w).$$

Thus W′ and Z have the same distribution function and so $W' \sim \mathrm{N}\!\left(0, \tfrac{1}{6}\right)$. □

The discrepancy between the Triangular random variable W ∼ △ and the approximating Normal random variable $W' \sim \mathrm{N}\!\left(0, \tfrac{1}{6}\right)$, and hence between the Balanced Reduction of an appropriate Normal distribution and an N(0, 1/6) distribution, is a very small distribution. This small distribution is formally specified in Definition 3.3 and illustrated in Figure 2, and we term it the Ghost distribution because of its shape and elusive nature. Lemma 3.4 gives the statistical properties of the Ghost distribution. Proposition 3.5 summarises the distribution of the Balanced Reduction of a Normal random variable, using the notation $\dot\sim$ to denote “is approximately distributed as”.

Figure 2. The density function of a Ghost random variable.

Definition 3.3

Suppose that W ∼ △ has a Triangular distribution with distribution function $F_W(w) = \mathbf{P}(W \le w) = \tfrac{1}{2}\left(1 + 2w - \operatorname{sign}(w)\,w^2\right)$ for |w| ≤ 1. If Φ is the distribution function of a standard Normal N(0, 1) random variable, then the random variable $W'' = W - W' = W - \left(\tfrac{1}{6}\right)^{\frac{1}{2}}\Phi^{-1}(F_W(W))$ has a Ghost distribution.

Lemma 3.4

A Ghost random variable W′′ has mean E(W′′) = 0 and variance Var(W′′) = 0.0012, so has standard deviation StDev(W′′) ≈ 0.035. Furthermore, the tail probabilities of W′′ are given by the following table.

θ 0.03 0.15 0.37 0.62 0.84
P(|W′′| > θ) 10−1 10−2 10−3 10−4 10−5

Proof. The results can be obtained by numerical integration and so on. □
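As a hedged illustration of the kind of numerical integration referred to in this proof (not the paper's own computation), the Ghost variable $W'' = W - (1/6)^{1/2}\Phi^{-1}(F_W(W))$ for W ∼ △ can be integrated directly against the Triangular density 1 − |w|. The quadrature settings are arbitrary choices and scipy is assumed to be available.

```python
# Illustrative numerical integration for the Ghost moments and tails of Lemma 3.4.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def F_triangular(w):
    return 0.5 * (1 + 2 * w - np.sign(w) * w**2)              # CDF of the Triangular distribution

def ghost(w):
    return w - np.sqrt(1 / 6) * norm.ppf(F_triangular(w))     # W'' as a function of W = w

density = lambda w: 1 - abs(w)                                 # Triangular density on (-1, 1)

var, _ = quad(lambda w: ghost(w) ** 2 * density(w), -1, 1)
print("Var(W'') ~", var)                                       # the paper reports about 0.0012

for theta in (0.03, 0.15, 0.37, 0.62, 0.84):
    tail, _ = quad(lambda w: (abs(ghost(w)) > theta) * density(w), -1, 1, limit=200)
    print(f"P(|W''| > {theta}) ~ {tail:.1e}")
```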

Proposition 3.5

The distribution of the Balanced Reduction ℛ(N(μ, σ²)) of a univariate Normal distribution for standard deviations σ of interest in Ring-LWE can essentially be approximated (with a slight abuse of notation) as

$$\mathcal{R}\!\left(\mathrm{N}(\mu, \sigma^2)\right) \;\dot\sim\; \triangle \;=\; \mathrm{N}\!\left(0, \tfrac{1}{6}\right) + \text{Ghost}.$$

3.2 The Distribution of a CRR Discretisation

We consider the CRR discretisation $\lfloor X \rceil^{B}_{\Lambda+c}$ of a complex-valued random vector X = TX′ that is the image under T of a spherically symmetric real-valued Normal random variable $X' \sim \mathrm{N}(0; \rho^2 I_n)$ with component standard deviation ρ. This component standard deviation ρ is typically larger than the length of the basis vectors, that is to say the column lengths of B or equivalently of the real matrix $T^{\dagger}B$. We can express this CRR discretisation either as a complex-valued random vector $\lfloor X \rceil^{B}_{\Lambda+c}$ in the I-basis for H or as a real-valued random vector $T^{\dagger}\lfloor X \rceil^{B}_{\Lambda+c}$ in the T-basis for H. Following Proposition 3.5, the distributions of these vectors are essentially given by

$$\lfloor X \rceil^{B}_{\Lambda+c} \;\dot\sim\; T\,\mathrm{N}(0; \rho^2 I_n) + B\,\mathrm{N}\!\left(0; \tfrac{1}{6} I_n\right) + B\,\mathrm{Ghost}^n \qquad\text{and}\qquad T^{\dagger}\lfloor X \rceil^{B}_{\Lambda+c} \;\dot\sim\; \mathrm{N}(0; \rho^2 I_n) + T^{\dagger}B\,\mathrm{N}\!\left(0; \tfrac{1}{6} I_n\right) + T^{\dagger}B\,\mathrm{Ghost}^n,$$

where $\mathrm{Ghost}^n$ denotes a vector of n independent Ghost components.

We observe that the first of these three distributions is typically the dominating distribution. For example, the real-valued distribution of $T^{\dagger}\lfloor X \rceil^{B}_{\Lambda+c}$ differs from a Normal distribution only by the term $T^{\dagger}B\,\mathrm{Ghost}^n$, and the Ghost distribution is usually negligible for the lattice basis matrices B arising in Ring-LWE. Similarly, the variance matrix of $T^{\dagger}B\,\mathrm{N}\!\left(0; \tfrac{1}{6} I_n\right)$ is usually negligible in comparison with $\rho^2 I_n$. For practical purposes we can therefore consider that $T^{\dagger}\lfloor X \rceil^{B}_{\Lambda+c}$ has an $\mathrm{N}(0; \rho^2 I_n)$ distribution, or equivalently that $\lfloor X \rceil^{B}_{\Lambda+c}$ has a $T\,\mathrm{N}(0; \rho^2 I_n)$ distribution.

In the decryption of a degree-1 ciphertext, such a discretisation (that is, the noise in the ciphertext embedded in H) is considered as a real-valued vector in a “decryption basis”. An appropriate change of basis matrix C to such a decryption basis can be expressed as $C = C'T^{\dagger}$ for a real matrix C′. We therefore consider the real-valued vector $C\lfloor X \rceil^{B}_{\Lambda+c}$, which can be expressed as

$$C\lfloor X \rceil^{B}_{\Lambda+c} = C'\,T^{\dagger}\lfloor X \rceil^{B}_{\Lambda+c} \;\dot\sim\; C'\,\mathrm{N}(0; \rho^2 I_n) + CB\left(\mathrm{N}\!\left(0; \tfrac{1}{6} I_n\right) + \mathrm{Ghost}^n\right),$$

where C′ = CT and CB are real matrices. The decryption is successful if every component of $C\lfloor X \rceil^{B}_{\Lambda+c}$ is less than an appropriate threshold.

In summary, this discussion justifies the approach used in [11, Theorem 1] for obtaining a bound on the decryption failure probability for $C\lfloor X \rceil^{B}_{\Lambda+c}$ by using the distributional approximation

$$C\lfloor X \rceil^{B}_{\Lambda+c} \;\dot\sim\; \mathrm{N}\!\left(0;\, \rho^2\, C'C'^T\right).$$
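The following is an illustrative sketch (not the statement of [11, Theorem 1]) of how such a multivariate Normal approximation could be turned into a failure estimate: each component of the decoded noise must stay below a threshold, so a per-component Gaussian tail and a union bound give an upper estimate. The matrix C′, the value of ρ and the threshold are arbitrary illustrative assumptions.

```python
# Illustrative failure-probability estimate from the approximation N(0; rho^2 C' C'^T).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n, rho, threshold = 8, 3.0, 40.0
C_prime = rng.normal(size=(n, n))                   # illustrative real change-of-basis matrix C'
cov = rho**2 * C_prime @ C_prime.T                  # approximate covariance of the decoded noise

per_component = 2 * norm.sf(threshold / np.sqrt(np.diag(cov)))   # P(|component_i| > threshold)
print("per-component failure probabilities:", per_component)
print("union bound on the decryption failure probability:", per_component.sum())
```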

4 Product Distributions in Ring-LWE

The noise in a degree-k ciphertext in SymHom can be seen as the k-fold ⊙-product of the noises of k degree-1 ciphertexts in the I-basis for H. We are interested in the k-fold ⊙-product of the form $\lfloor X_1 \rceil^{B}_{\Lambda+c} \odot \cdots \odot \lfloor X_k \rceil^{B}_{\Lambda+c}$ of the discretisation vectors $\lfloor X_1 \rceil^{B}_{\Lambda+c}, \ldots, \lfloor X_k \rceil^{B}_{\Lambda+c}$ given by degree-1 ciphertexts. The discussion of Section 3.2 shows that this distribution can be approximated as

$$\lfloor X_1 \rceil^{B}_{\Lambda+c} \odot \cdots \odot \lfloor X_k \rceil^{B}_{\Lambda+c} \;\dot\sim\; T\,\mathrm{N}\!\left(0; \rho_1^2 I_n\right) \odot \cdots \odot T\,\mathrm{N}\!\left(0; \rho_k^2 I_n\right).$$

We consider the equivalent ⊗-product $T^{\dagger}\lfloor X_1 \rceil^{B}_{\Lambda+c} \otimes \cdots \otimes T^{\dagger}\lfloor X_k \rceil^{B}_{\Lambda+c}$ expressing the embedded noises as real vectors in the T-basis, with approximate distribution

$$T^{\dagger}\lfloor X_1 \rceil^{B}_{\Lambda+c} \otimes \cdots \otimes T^{\dagger}\lfloor X_k \rceil^{B}_{\Lambda+c} \;\dot\sim\; \mathrm{N}\!\left(0; \rho_1^2 I_n\right) \otimes \cdots \otimes \mathrm{N}\!\left(0; \rho_k^2 I_n\right).$$

The ⊗-product in ℝⁿ decomposes into $n' = \tfrac{1}{2}n$ independent ⊗-products in ℝ². Thus we consider the distribution on ℝ² given by the k-fold ⊗-product of spherical bivariate Normal random variables

$$\mathrm{N}\!\left(0; \rho_1^2 I_2\right) \otimes \cdots \otimes \mathrm{N}\!\left(0; \rho_k^2 I_2\right).$$

In particular, we consider the distribution of a 1-dimensional component of this 2-dimensional distribution. This approach allows us to construct an approximate multivariate distribution for the vector expressing the embedded noise in an appropriate decryption basis.
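The following is a Monte Carlo sketch (not from the paper) of this k-fold ⊗-product of spherical bivariate Normals on ℝ². It checks the variance prediction of Lemma 4.4 (each component has variance $\rho^2 = \rho_1^2\cdots\rho_k^2$) and illustrates the heavier tails that appear as k grows. The sample size and the values of the ρ_i are arbitrary choices.

```python
# Illustrative simulation of the k-fold (x)-product of spherical bivariate Normals.
import numpy as np

rng = np.random.default_rng(4)

def tensor2(u, v):
    """(x)-product on R^2: 2^{-1/2} (u1 v1 - u2 v2, u1 v2 + u2 v1), vectorised over rows."""
    return np.column_stack([u[:, 0] * v[:, 0] - u[:, 1] * v[:, 1],
                            u[:, 0] * v[:, 1] + u[:, 1] * v[:, 0]]) / np.sqrt(2)

def k_fold_product(rhos, size=500_000):
    z = rng.normal(scale=rhos[0], size=(size, 2))
    for rho in rhos[1:]:
        z = tensor2(z, rng.normal(scale=rho, size=(size, 2)))
    return z

for rhos in ([1.0], [1.0, 1.0], [1.0, 1.0, 1.0, 1.0]):
    z = k_fold_product(rhos)
    component = z[:, 0]
    print(f"k={len(rhos)}: Var ~ {component.var():.3f} "
          f"(predicted {np.prod(np.array(rhos)**2):.3f}), "
          f"P(|Z| > 3) ~ {(np.abs(component) > 3).mean():.4f}")
```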

4.1 The 𝒦 Distribution

We use the 𝒦 distribution, which we now introduce, to analyse the component distribution of a k-fold ⊗-product.

Definition 4.1

A symmetric continuous univariate random variable X has a 𝒦 distribution with shape k (a positive integer) and variance v² > 0 if it has density function $f_X(x) = \left(2\pi v^2\right)^{-\frac{1}{2}} J_k\!\left(\tfrac{x^2}{2v^2}\right)$, where $J_k$ is the Meijer G-function of Definition 2.7. We write X ∼ 𝒦(k, v²) to denote that X has such a distribution.

We note that a 𝒦(1, 1) distribution is a standard Normal N(0, 1) distribution and that a 𝒦(2, 1) distribution is a univariate Laplace distribution. The density functions of the 𝒦(1, 1), 𝒦(2, 1) and 𝒦(4, 1) distributions are shown in Figure 3, and tail probabilities are tabulated in Figure 4 for the 𝒦(k, 1) distributions with shape k = 1, …, 6. The tail probability functions for the 𝒦(1, 1), 𝒦(2, 1) and 𝒦(4, 1) distributions are illustrated in Figure B1 in Appendix B. It can be seen that for shape k > 1 a 𝒦(k, 1) distribution is far more heavily weighted around 0 and in the tails than the comparable standard Normal distribution N(0, 1) = 𝒦(1, 1) with the same mean 0 and variance 1.
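The following is a numerical sketch (not from the paper) of the 𝒦(k, 1) density of Definition 4.1, evaluated with mpmath's meijerg. It spot-checks the k = 1 (standard Normal) and k = 2 (Laplace with variance 1) special cases and computes a tail probability of the kind tabulated in Figure 4. The evaluation points and sample tail threshold are arbitrary assumptions.

```python
# Illustrative evaluation of the K(k, 1) density f(x) = (2 pi)^{-1/2} J_k(x^2 / 2).
import mpmath as mp

def k_density(x, k):
    """Density of K(k, 1): (2 pi)^{-1/2} G^{k,0}_{0,k}(x^2/2 | -; 0, 1/2, ..., 1/2)."""
    b = [0] + [mp.mpf("0.5")] * (k - 1)
    return mp.meijerg([[], []], [b, []], x**2 / 2) / mp.sqrt(2 * mp.pi)

x = mp.mpf("1.3")
assert mp.almosteq(k_density(x, 1), mp.npdf(x))                        # K(1,1) = N(0,1)
assert mp.almosteq(k_density(x, 2),
                   mp.exp(-mp.sqrt(2) * x) / mp.sqrt(2))               # K(2,1) = Laplace, variance 1

for k in (1, 2, 4):
    tail = 2 * mp.quad(lambda t: k_density(t, k), [3, mp.inf])         # P(|K(k,1)| > 3)
    print(f"P(|K({k},1)| > 3) ~ {mp.nstr(tail, 3)}")
```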

Figure 3. The density function of a 𝒦(1, 1) = N(0, 1) distribution (solid line), the density function of a 𝒦(2, 1) distribution (dashed line) and the density function of a 𝒦(4, 1) distribution (dotted line).

Figure 4. The tail probabilities for a 𝒦(k, 1) distribution with shape k = 1, …, 6.

Figure B1 (see Appendix B). The tail probability functions P(|𝒦(1, 1)| > x) of a 𝒦(1, 1) = N(0, 1) distribution (solid line), P(|𝒦(2, 1)| > x) of a 𝒦(2, 1) distribution (dashed line) and P(|𝒦(4, 1)| > x) of a 𝒦(4, 1) distribution (dotted line).

4.2 The ⊗-product of Spherical Bivariate Normal Distributions

We now establish the distribution of a component $Z_j^{(k)}$ of the k-fold ⊗-product $Z^{(k)}$ of spherical bivariate Normal distributions. Lemma 4.2 gives the density function $f_{Z^{(k)}}$ of the bivariate random variable $Z^{(k)}$. Lemma 4.3 then gives the associated characteristic function $\phi_{Z^{(k)}}$ of $Z^{(k)}$. Finally, Lemma 4.4 shows that a component $Z_j^{(k)}$ of the k-fold ⊗-product $Z^{(k)}$ has the 𝒦 distribution with shape k. Full proofs of these results are provided in Appendix C.

Lemma 4.2

Suppose that $Z_1 \sim \mathrm{N}\!\left(0; \rho_1^2 I_2\right), \ldots, Z_k \sim \mathrm{N}\!\left(0; \rho_k^2 I_2\right)$ are independent spherical bivariate Normal random variables and that $G_k$ is the Meijer G-function of Definition 2.7. Their k-fold ⊗-product $Z^{(k)} = Z_1 \otimes \cdots \otimes Z_k$ has density function $f_{Z^{(k)}}$ on ℝ² given by $f_{Z^{(k)}}(z) = \left(2\pi\rho^2\right)^{-1} G_k\!\left(\tfrac{|z|^2}{2\rho^2}\right)$, where $\rho^2 = \rho_1^2\cdots\rho_k^2$.

Sketch Proof. The proof establishes the density function f| Z(k) | of | Z(k) | by an inductive argument based on the multiplicative convolution of particular Meijer G-functions. The final form of the density function fZ(k) of Z(k) then follows from a polar transformation. □

Lemma 4.3

Suppose that $Z_1 \sim \mathrm{N}\!\left(0; \rho_1^2 I_2\right), \ldots, Z_k \sim \mathrm{N}\!\left(0; \rho_k^2 I_2\right)$ are independent spherical bivariate Normal random variables and that $H_k$ is the Meijer G-function of Definition 2.7. Their k-fold ⊗-product $Z^{(k)} = Z_1 \otimes \cdots \otimes Z_k$ has characteristic function $\phi_{Z^{(k)}}$ on ℝ² given by $\phi_{Z^{(k)}}(t) = H_k\!\left(\tfrac{2}{\rho^2 |t|^2}\right)$, where $\rho^2 = \rho_1^2\cdots\rho_k^2$.

Sketch Proof. The characteristic function ϕZ(k) is evaluated by means of polar co-ordinates to give a multiplicative convolution of Meijer G-functions.

Lemma 4.4

Suppose that $Z_1 \sim \mathrm{N}\!\left(0; \rho_1^2 I_2\right), \ldots, Z_k \sim \mathrm{N}\!\left(0; \rho_k^2 I_2\right)$ are independent spherical bivariate Normal random variables, and let $Z^{(k)} = Z_1 \otimes \cdots \otimes Z_k$ be their k-fold ⊗-product. A component $Z_j^{(k)}$ of $Z^{(k)}$ has a 𝒦(k, ρ²) distribution (Definition 4.1) with shape k and variance $\rho^2 = \rho_1^2\cdots\rho_k^2$.

Sketch Proof. The characteristic function corresponding to the 𝒦(k, ρ²) density function is shown to equal the appropriate marginal characteristic function derived from Lemma 4.3. □

4.3 Application to Homomorphic Multiplication Noise Growth

By considering repeated multiplication of degree-1 ciphertexts, we can see that the (embedded) noise in a degree-k ciphertext is an element of H that can be expressed as a real-valued random vector $W^{(k)} = \left(W_1^{(k)}, \ldots, W_n^{(k)}\right)$ in the T-basis formed by a k-fold ⊗-product. The discussion of Section 4.2 shows that the distribution of a component $W_j^{(k)}$ can be approximated by a 𝒦(k, ρ²) distribution with shape k and some variance ρ² obtained as the product of the individual variances. Furthermore, a component $W_j^{(k)}$ is independent of every other component, except its complex conjugate “twin” component, with which it is uncorrelated.

For decryption, we consider the embedded noise of a degree-k ciphertext expressed as the real random vector $C'W^{(k)}$ in an appropriate decryption basis. We can use the Central Limit framework of [11] to approximate the distribution of $C'W^{(k)}$, under mild conditions on C′ and with “product variance” ρ², as the multivariate Normal distribution

$$C'W^{(k)} \;\dot\sim\; \mathrm{N}\!\left(0;\, \rho^2\, C'C'^T\right).$$
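The following is a simulation sketch (not from the paper) of this multivariate Normal approximation: the n′ independent pairwise ⊗-products are sampled, mapped through an illustrative real matrix C′, and the empirical covariance is compared with ρ²C′C′^T. The dimension, the value of k, the coordinate pairing and the matrix C′ are arbitrary assumptions.

```python
# Illustrative covariance check of C' W^(k) against rho^2 C' C'^T.
import numpy as np

rng = np.random.default_rng(5)
n, k, trials = 8, 3, 200_000            # n even; k-fold product; Monte Carlo sample size
rhos = [1.0] * k

def pair_product(u, v):
    """(x)-product on R^2, vectorised over rows."""
    return np.column_stack([u[:, 0] * v[:, 0] - u[:, 1] * v[:, 1],
                            u[:, 0] * v[:, 1] + u[:, 1] * v[:, 0]]) / np.sqrt(2)

# build W^(k): each of the n//2 coordinate pairs is an independent k-fold (x)-product
W = np.zeros((trials, n))
for pair in range(n // 2):
    z = rng.normal(scale=rhos[0], size=(trials, 2))
    for rho in rhos[1:]:
        z = pair_product(z, rng.normal(scale=rho, size=(trials, 2)))
    W[:, [pair, n - 1 - pair]] = z       # the pair occupies coordinates j and n-1-j

C_prime = rng.normal(size=(n, n)) / np.sqrt(n)     # illustrative decryption-basis map C'
V = W @ C_prime.T
empirical = np.cov(V, rowvar=False)
predicted = np.prod(np.array(rhos) ** 2) * C_prime @ C_prime.T
print("max |empirical - predicted| covariance entry:",
      np.abs(empirical - predicted).max())
```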

This Normal approximation can then be used to obtain information about the probability of decryption failure, as was done for k = 2 in [11, Theorem 2].

The quality of the approximation will decrease as the degree k increases, due to the heavier tails of 𝒦(k, ρ²) as k increases. In the case of a somewhat homomorphic encryption scheme, which needs to support only a few multiplications, this may not be problematic. Moreover, the quality of this approximation can be checked empirically if required.


Article note

Rachel Player was supported by an ACE-CSR Ph.D. grant, by the French Programme d’Investissement d’Avenir under national project RISQ P141580, and by the European Union PROMETHEUS project (Horizon 2020 Research and Innovation Program, grant 780701).


Acknowledgement

We thank the anonymous referees for their comments on previous versions of this paper, and we thank Carlos Cid for his interesting discussions about this paper.

References

[1] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions, Dover Publications, 1965.

[2] R. Askey and A. B. Olde Daalhuis, Meijer G-function, in: NIST Handbook of Mathematical Functions (F. Olver et al., eds.), Cambridge University Press, 2010.

[3] H. Bateman and A. Erdélyi, Higher Transcendental Functions, Volume 1, McGraw-Hill, 1953.

[4] R. Beals and J. Szmigielski, Meijer G-Functions: A Gentle Introduction, Notices Amer. Math. Soc. 60 (2013), 866–872.

[5] C. Gentry, Fully Homomorphic Encryption using Ideal Lattices, in: 41st Annual ACM Symposium on Theory of Computing, STOC 2009 Proceedings, ACM, (2009), 169–178.

[6] V. Lyubashevsky, C. Peikert and O. Regev, On Ideal Lattices and Learning with Errors over Rings, in: Advances in Cryptology - EUROCRYPT 2010, Lecture Notes in Comput. Sci. 6110, Springer, (2010), 1–23.

[7] V. Lyubashevsky, C. Peikert and O. Regev, A Toolkit for Ring-LWE Cryptography, preprint (2013), https://eprint.iacr.org/2013/293

[8] V. Lyubashevsky, C. Peikert and O. Regev, A Toolkit for Ring-LWE Cryptography, in: Advances in Cryptology - EUROCRYPT 2013, Lecture Notes in Comput. Sci. 7881, Springer, (2013), 35–54.

[9] D. Micciancio and C. Peikert, Trapdoors for Lattices: Simpler, Tighter, Faster, Smaller, in: Advances in Cryptology - EUROCRYPT 2012, Lecture Notes in Comput. Sci. 7237, Springer, (2012), 700–718.

[10] S. Murphy and R. Player, δ-subgaussian Random Variables in Cryptography, in: Information Security and Privacy - 24th Australasian Conference, ACISP 2019, Lecture Notes in Comput. Sci. 11547, Springer, (2019), 251–268.

[11] S. Murphy and R. Player, A Central Limit Framework for Ring-LWE Decryption, preprint (2019), https://eprint.iacr.org/2019/452

[12] D. Stehlé, R. Steinfeld, K. Tanaka and K. Xagawa, Efficient Public Key Encryption Based on Ideal Lattices, in: Advances in Cryptology - ASIACRYPT 2009, Lecture Notes in Comput. Sci. 5912, Springer, (2009), 617–635.

A Proof of a Result of Section 3 about a Normal Balanced Reduction

Lemma 3.1

If $Y \sim \mathrm{N}(\mu, \sigma^2)$, then its Balanced Reduction ℛ(Y) has the Triangular distribution △ (density function 1 − |z| for |z| < 1 and 0 otherwise) as its limiting distribution as the standard deviation σ → ∞.

Proof. Let $f_Y$ denote the density function of $Y \sim \mathrm{N}(\mu, \sigma^2)$, and let $f_{Y'}(y) = \sum_{k=-\infty}^{\infty} f_Y(y + k)$ denote the density function of $Y' = Y - \lfloor Y \rfloor$, the “modulo 1” reduction of Y to [0, 1). By construction, $\mathcal{R}(Y) = \mathcal{R}(Y')$, so the distribution function $F_{\mathcal{R}(Y)}$ of ℛ(Y) is given by

$$F_{\mathcal{R}(Y)}(y) = \mathbf{P}\!\left(\mathcal{R}(Y) = \mathcal{R}(Y') \le y\right) = \int_0^1 \mathbf{P}\!\left(\mathcal{R}(Y') \le y \mid Y' = z\right) f_{Y'}(z)\, dz = \int_0^1 \mathbf{P}\!\left(\mathcal{R}(z) \le y\right) f_{Y'}(z)\, dz = \int_0^1 F_{\mathcal{R}(z)}(y)\, f_{Y'}(z)\, dz.$$

The distribution function $F_{\mathcal{R}(z)}(y) = \mathbf{P}(\mathcal{R}(z) \le y)$ of ℛ(z) takes the value 0 for $y < (z - \lfloor z \rfloor) - 1$, the value $z - \lfloor z \rfloor$ for $(z - \lfloor z \rfloor) - 1 \le y < z - \lfloor z \rfloor$, and the value 1 for $y \ge z - \lfloor z \rfloor$. Thus, for $0 \le z \le 1$, this distribution function $F_{\mathcal{R}(z)}(y)$ can be expressed for $-1 \le y < 1$ as

$$F_{\mathcal{R}(z)}(y) = \begin{cases} z & [\,0 \le z < y + 1\,] \\ 0 & [\,y + 1 \le z \le 1\,] \end{cases} \quad\text{for } -1 \le y < 0
\qquad\text{and}\qquad
F_{\mathcal{R}(z)}(y) = \begin{cases} 1 & [\,0 \le z < y\,] \\ z & [\,y \le z \le 1\,] \end{cases} \quad\text{for } 0 \le y < 1.$$

For $-1 \le y < 0$ this distribution function $F_{\mathcal{R}(Y)}$ of ℛ(Y) therefore evaluates as

$$F_{\mathcal{R}(Y)}(y) = \int_0^{y+1} z\, f_{Y'}(z)\, dz = \int_{-1}^{y} (1 + z)\, f_{Y'}(1 + z)\, dz,$$

whereas, for $0 \le y < 1$ and noting that $\mathbf{E}(Y') = \int_0^1 y\, f_{Y'}(y)\, dy$, we have

$$F_{\mathcal{R}(Y)}(y) = \int_0^{y} f_{Y'}(z)\, dz + \int_{y}^{1} z\, f_{Y'}(z)\, dz = \mathbf{E}(Y') + \int_0^{y} (1 - z)\, f_{Y'}(z)\, dz.$$

Thus the density function $f_{\mathcal{R}(Y)}$ of ℛ(Y) is given by

$$f_{\mathcal{R}(Y)}(y) = F'_{\mathcal{R}(Y)}(y) = \begin{cases} (1 + y)\, f_{Y'}(1 + y) & [-1 \le y < 0] \\ (1 - y)\, f_{Y'}(y) & [\phantom{-}0 \le y < 1]. \end{cases}$$

The density function $f_{Y'}(y) = \sum_{k=-\infty}^{\infty} c_k \exp\!\left(i2\pi k(y - \mu)\right)$ of Y′ on [0, 1) can be expressed as a Fourier series in (y − μ) (of period 1) with coefficients

$$c_k = \int_0^1 f_{Y'}(z + \mu)\exp(-i2\pi k z)\, dz
= \int_0^1 \sum_{l=-\infty}^{\infty} f_Y(z + l + \mu)\exp(-i2\pi k z)\, dz
= \sum_{l=-\infty}^{\infty}\int_l^{l+1} f_Y(z + \mu)\exp(-i2\pi k z)\, dz
= \int_{-\infty}^{\infty}\exp\!\left(-i2\pi k(z - \mu)\right) f_Y(z)\, dz
= \mathbf{E}\!\left(\exp\!\left(-i2\pi k(Y - \mu)\right)\right) = \phi_{Y-\mu}(-2\pi k) = \exp\!\left(-2\pi^2\sigma^2 k^2\right),$$

where $\phi_{Y-\mu}(t) = \exp\!\left(-\tfrac{1}{2}\sigma^2 t^2\right)$ is the characteristic function of $Y - \mu \sim \mathrm{N}(0, \sigma^2)$. The density function $f_{\mathcal{R}(Y)}$ of ℛ(Y) on (−1, 1) is therefore given by

$$f_{\mathcal{R}(Y)}(y) = (1 - |y|)\left(1 + 2\sum_{k=1}^{\infty}\exp\!\left(-2\pi^2\sigma^2 k^2\right)\cos\!\left(2k\pi(y - \mu)\right)\right).$$

Thus $f_{\mathcal{R}(Y)}(y) \to (1 - |y|)$ on (−1, 1) as σ → ∞, so $\mathcal{R}(Y) \to \triangle$ as σ → ∞. □
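The following is a quick numerical check (not from the paper) of the Fourier form just derived: the closed-form density is compared with a histogram of Balanced Reductions of N(μ, σ²) samples. The values of μ and σ, the sample size and the series truncation are arbitrary choices.

```python
# Illustrative check of the Fourier-series density of R(N(mu, sigma^2)).
import numpy as np

rng = np.random.default_rng(6)
mu, sigma = 0.3, 0.5
y = rng.normal(mu, sigma, size=500_000)
frac = y - np.floor(y)
r = np.where(rng.random(y.shape) < frac, frac - 1.0, frac)     # Balanced Reduction samples

edges = np.linspace(-1, 1, 41)
hist, _ = np.histogram(r, bins=edges, density=True)
mid = 0.5 * (edges[:-1] + edges[1:])

series = np.ones_like(mid)
for k in range(1, 6):                                          # the series converges very fast
    series += 2 * np.exp(-2 * np.pi**2 * sigma**2 * k**2) * np.cos(2 * k * np.pi * (mid - mu))
fourier_density = (1 - np.abs(mid)) * series

print("max |histogram - Fourier density|:", np.abs(hist - fourier_density).max())
```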

B Illustration of tail probability functions of 𝒦 distributions

The tail probability functions for the 𝒦(1, 1), 𝒦(2, 1) and 𝒦(4, 1) distributions are illustrated in Figure B1.

C Proofs of Results of Section 4 about the ⊗-product

Lemma 4.2

Suppose that $Z_1 \sim \mathrm{N}\!\left(0; \rho_1^2 I_2\right), \ldots, Z_k \sim \mathrm{N}\!\left(0; \rho_k^2 I_2\right)$ are independent spherical bivariate Normal random variables and that $G_k$ is the Meijer G-function of Definition 2.7. Their k-fold ⊗-product $Z^{(k)} = Z_1 \otimes \cdots \otimes Z_k$ has density function $f_{Z^{(k)}}$ on ℝ² given by $f_{Z^{(k)}}(z) = \left(2\pi\rho^2\right)^{-1} G_k\!\left(\tfrac{|z|^2}{2\rho^2}\right)$, where $\rho^2 = \rho_1^2\cdots\rho_k^2$.

Proof. For simplicity, we suppose $\rho_1^2 = \cdots = \rho_k^2 = 1$, as the general result then follows by a direct re-scaling. We first show that the density function $f_{|Z^{(k)}|}$ of the length $|Z^{(k)}|$ of this k-fold ⊗-product $Z^{(k)}$ is $f_{|Z^{(k)}|}(r) = r\, G_k\!\left(\tfrac{1}{2}r^2\right)$ for r ≥ 0, which we demonstrate by induction. When k = 1, the length $|Z^{(1)}| = |Z_1|$ has the distribution of the length $|\mathrm{N}(0; I_2)| = \chi_2$ of a χ-distribution with 2 degrees of freedom. Thus the density function $f_{|Z^{(1)}|}(r) = r\exp\!\left(-\tfrac{1}{2}r^2\right) = r\, G_1\!\left(\tfrac{1}{2}r^2\right)$ is given by the appropriate Meijer G-function.

We now assume inductively that the length $|Z^{(k-1)}|$ of the (k−1)-fold ⊗-product $Z^{(k-1)} = Z_1 \otimes \cdots \otimes Z_{k-1}$ has density function $f_{|Z^{(k-1)}|}(r) = r\, G_{k-1}\!\left(\tfrac{1}{2}r^2\right)$. Direct calculation shows that $|Z^{(k)}| = 2^{-\frac{1}{2}}\, |Z^{(k-1)}|\, |Z_k|$, so $|Z^{(k)}|$ has density function

$$f_{|Z^{(k)}|}(r) = f_{2^{-\frac{1}{2}}|Z^{(k-1)}||Z_k|}(r) = 2^{\frac{1}{2}} f_{|Z^{(k-1)}||Z_k|}\!\left(2^{\frac{1}{2}} r\right) = 2^{\frac{1}{2}}\int_0^{\infty} z^{-1} f_{|Z^{(k-1)}|}\!\left(2^{\frac{1}{2}} r z^{-1}\right) f_{|Z_k|}(z)\, dz = 2^{\frac{1}{2}}\int_0^{\infty} z^{-1}\cdot 2^{\frac{1}{2}} r z^{-1}\, G_{k-1}\!\left(r^2 z^{-2}\right)\cdot z\, G_1\!\left(\tfrac{1}{2} z^2\right) dz = 2r\int_0^{\infty} z^{-1}\, G_{k-1}\!\left(r^2 z^{-2}\right) G_1\!\left(\tfrac{1}{2} z^2\right) dz = r\int_0^{\infty} y^{-1}\, G_{k-1}\!\left(\tfrac{1}{2} r^2 y^{-1}\right) G_1(y)\, dy.$$

However, $y^{-1} G_1(y) = y^{-1} G^{1\,0}_{0\,1}\!\left(\begin{matrix} - \\ 0 \end{matrix}\,\middle|\, y\right) = G^{1\,0}_{0\,1}\!\left(\begin{matrix} - \\ -1 \end{matrix}\,\middle|\, y\right)$ in the Meijer G-function notation of Definition 2.6, so

$$f_{|Z^{(k)}|}(r) = r\int_0^{\infty} G^{k-1\,0}_{0\,k-1}\!\left(\begin{matrix} - \\ 0, \ldots, 0 \end{matrix}\,\middle|\, \tfrac{1}{2} r^2 y^{-1}\right) G^{1\,0}_{0\,1}\!\left(\begin{matrix} - \\ -1 \end{matrix}\,\middle|\, y\right) dy = r\, G^{k\,0}_{0\,k}\!\left(\begin{matrix} - \\ 0, \ldots, 0 \end{matrix}\,\middle|\, \tfrac{1}{2} r^2\right) = r\, G_k\!\left(\tfrac{1}{2} r^2\right),$$

as the final integral is a multiplicative convolution of Meijer G-functions. Thus f| Z(k) | has the appropriate form and the inductive demonstration is complete.

The result for the density function $f_{Z^{(k)}}$ of the spherically symmetric $Z^{(k)}$ then follows immediately from the polar transformation linking $f_{Z^{(k)}}$ and $f_{|Z^{(k)}|}$. □

Lemma 4.3

Suppose that $Z_1 \sim \mathrm{N}\!\left(0; \rho_1^2 I_2\right), \ldots, Z_k \sim \mathrm{N}\!\left(0; \rho_k^2 I_2\right)$ are independent spherical bivariate Normal random variables and that $H_k$ is the Meijer G-function of Definition 2.7. Their k-fold ⊗-product $Z^{(k)} = Z_1 \otimes \cdots \otimes Z_k$ has characteristic function $\phi_{Z^{(k)}}$ on ℝ² given by $\phi_{Z^{(k)}}(t) = H_k\!\left(\tfrac{2}{\rho^2 |t|^2}\right)$, where $\rho^2 = \rho_1^2\cdots\rho_k^2$.

Proof. For simplicity, we set $\rho_1^2 = \cdots = \rho_k^2 = 1$, so $\rho^2 = 1$. The density function $f_{Z^{(k)}}$ of $Z^{(k)}$ is $f_{Z^{(k)}}(z) = (2\pi)^{-1} G_k\!\left(\tfrac{1}{2}|z|^2\right)$, so the characteristic function $\phi_{Z^{(k)}}$ of $Z^{(k)}$ is given by

$$\phi_{Z^{(k)}}(t) = \mathbf{E}\!\left(\exp\!\left(i\, t^T Z^{(k)}\right)\right) = \frac{1}{2\pi}\int_{\mathbb{R}^2}\exp\!\left(i\, t^T z\right) G_k\!\left(\tfrac{1}{2}|z|^2\right) dz.$$

We can write $t = r(\cos\theta, \sin\theta)^T$ and $z = s(\cos\alpha, \sin\alpha)^T$ for t and z in polar co-ordinates, so $t^T z = rs\cos(\alpha - \theta)$. In terms of these polar co-ordinates, the characteristic function $\phi_{Z^{(k)}}(r, \theta) = \phi_{Z^{(k)}}(t)$ of $Z^{(k)}$ can be expressed as

$$\phi_{Z^{(k)}}(r, \theta) = \frac{1}{2\pi}\int_0^{\infty}\!\!\int_0^{2\pi}\exp\!\left(i\,rs\cos(\alpha - \theta)\right) s\, G_k\!\left(\tfrac{1}{2} s^2\right) d\alpha\, ds = \frac{1}{2\pi}\int_0^{\infty}\!\!\int_0^{2\pi}\cos\!\left(rs\cos(\alpha - \theta)\right) s\, G_k\!\left(\tfrac{1}{2} s^2\right) d\alpha\, ds = \int_0^{\infty} J_0(rs)\, s\, G_k\!\left(\tfrac{1}{2} s^2\right) ds,$$

where $J_0(x) = \frac{1}{\pi}\int_0^{\pi}\cos(x\cos\tau)\, d\tau$ is a Bessel function of the first kind [1]. However, both terms $J_0(rs) = J_0(|t|s) = G^{1\,0}_{0\,2}\!\left(\begin{matrix} - \\ 0, 0 \end{matrix}\,\middle|\, \tfrac{1}{4} r^2 s^2\right) = G^{1\,0}_{0\,2}\!\left(\begin{matrix} - \\ 0, 0 \end{matrix}\,\middle|\, \tfrac{1}{4} |t|^2 s^2\right)$ and $G_k\!\left(\tfrac{1}{2} s^2\right) = G^{k\,0}_{0\,k}\!\left(\begin{matrix} - \\ 0, \ldots, 0 \end{matrix}\,\middle|\, \tfrac{1}{2} s^2\right)$ making up the integrand are Meijer G-functions. Thus the characteristic function $\phi_{Z^{(k)}}(t) = \phi_{Z^{(k)}}(r, \theta)$ of $Z^{(k)}$ can be evaluated as a multiplicative convolution to give

$$\phi_{Z^{(k)}}(t) = \int_0^{\infty} G^{1\,0}_{0\,2}\!\left(\begin{matrix} - \\ 0, 0 \end{matrix}\,\middle|\, \tfrac{1}{4}|t|^2 s^2\right) s\, G^{k\,0}_{0\,k}\!\left(\begin{matrix} - \\ 0, \ldots, 0 \end{matrix}\,\middle|\, \tfrac{1}{2} s^2\right) ds = \int_0^{\infty} G^{1\,0}_{0\,2}\!\left(\begin{matrix} - \\ 0, 0 \end{matrix}\,\middle|\, \tfrac{1}{2}|t|^2 u\right) G^{k\,0}_{0\,k}\!\left(\begin{matrix} - \\ 0, \ldots, 0 \end{matrix}\,\middle|\, u\right) du = \int_0^{\infty} G^{0\,1}_{2\,0}\!\left(\begin{matrix} 1, 1 \\ - \end{matrix}\,\middle|\, \tfrac{2}{|t|^2 u}\right) G^{k\,0}_{0\,k}\!\left(\begin{matrix} - \\ 0, \ldots, 0 \end{matrix}\,\middle|\, u\right) du = G^{k-1\,1}_{1\,k-1}\!\left(\begin{matrix} 1 \\ 1, \ldots, 1 \end{matrix}\,\middle|\, \tfrac{2}{|t|^2}\right) = H_k\!\left(\tfrac{2}{|t|^2}\right). \qquad\square$$

Lemma 4.4

Suppose that $Z_1 \sim \mathrm{N}\!\left(0; \rho_1^2 I_2\right), \ldots, Z_k \sim \mathrm{N}\!\left(0; \rho_k^2 I_2\right)$ are independent spherical bivariate Normal random variables, and let $Z^{(k)} = Z_1 \otimes \cdots \otimes Z_k$ be their k-fold ⊗-product. A component $Z_j^{(k)}$ of $Z^{(k)}$ has a 𝒦(k, ρ²) distribution (Definition 4.1) with shape k and variance $\rho^2 = \rho_1^2\cdots\rho_k^2$.

Proof. For simplicity, we set $\rho_1^2 = \cdots = \rho_k^2 = 1$, so $\rho^2 = 1$. Suppose $Z^{(k)}$ has orthogonal components $Z_1^{(k)}$ and $Z_2^{(k)}$, so we can write $Z^{(k)} = \left(Z_1^{(k)}, Z_2^{(k)}\right)^T$. The joint characteristic function $\phi_{Z_1^{(k)}, Z_2^{(k)}}(t_1, t_2) = \phi_{Z^{(k)}}(t)$, where $t = (t_1, t_2)^T$, so Lemma 4.3 shows that

$$\phi_{Z_1^{(k)}, Z_2^{(k)}}(t_1, t_2) = \mathbf{E}\!\left(\exp\!\left(i\left(t_1 Z_1^{(k)} + t_2 Z_2^{(k)}\right)\right)\right) = H_k\!\left(\frac{2}{t_1^2 + t_2^2}\right).$$

The characteristic function $\phi_{Z_1^{(k)}}$ of a component $Z_1^{(k)}$ (say) of $Z^{(k)}$ is therefore given by $\phi_{Z_1^{(k)}}(t_1) = \mathbf{E}\!\left(\exp\!\left(i t_1 Z_1^{(k)}\right)\right) = \phi_{Z_1^{(k)}, Z_2^{(k)}}(t_1, 0) = H_k\!\left(\tfrac{2}{t_1^2}\right)$.

Suppose $X \sim \mathcal{K}(k, 1)$, so X has density function $f_X(x) = (2\pi)^{-\frac{1}{2}} J_k\!\left(\tfrac{1}{2} x^2\right)$. The characteristic function $\phi_X$ of X is given by

$$\phi_X(u) = \mathbf{E}(\exp(iuX)) = (2\pi)^{-\frac{1}{2}}\int_{-\infty}^{\infty}\cos(ux)\, J_k\!\left(\tfrac{1}{2} x^2\right) dx = (2\pi)^{-\frac{1}{2}}\int_{-\infty}^{\infty}\cos(ux)\, G^{k\,0}_{0\,k}\!\left(\begin{matrix} - \\ 0, \tfrac{1}{2}, \ldots, \tfrac{1}{2} \end{matrix}\,\middle|\, \tfrac{1}{2} x^2\right) dx = 2^{\frac{1}{2}} u^{-1}\, G^{1\,k-1}_{k-1\,1}\!\left(\begin{matrix} \tfrac{1}{2}, \ldots, \tfrac{1}{2} \\ \tfrac{1}{2} \end{matrix}\,\middle|\, \tfrac{1}{2} u^2\right) = G^{k-1\,1}_{1\,k-1}\!\left(\begin{matrix} 1 \\ 1, \ldots, 1 \end{matrix}\,\middle|\, \tfrac{2}{u^2}\right) = H_k\!\left(\tfrac{2}{u^2}\right).$$

Thus $\phi_{Z_j^{(k)}}(u) = \phi_X(u) = H_k\!\left(\tfrac{2}{u^2}\right)$ are the same characteristic function, and $Z_j^{(k)}$ therefore has the same distribution as $X \sim \mathcal{K}(k, 1)$. □

Received: 2019-06-05
Accepted: 2019-07-01
Published Online: 2020-11-17

© 2020 S. Murphy and R. Player, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
