Open Access (CC BY-NC-ND 3.0). Published by De Gruyter, August 8, 2018

Estimation of the hardness of the learning with errors problem with a restricted number of samples

  • Nina Bindel, Johannes Buchmann, Florian Göpfert and Markus Schmidt

Abstract

The Learning With Errors (LWE) problem is one of the most important hardness assumptions that lattice-based constructions base their security on. In 2015, Albrecht, Player and Scott presented the software tool LWE-Estimator to estimate the hardness of concrete LWE instances, making the choice of parameters for lattice-based primitives easier and better comparable. To give lower bounds on the hardness, it is assumed that each algorithm is given the corresponding optimal number of samples. However, this is not the case for many cryptographic applications. In this work we first analyze the hardness of LWE instances given a restricted number of samples. For this, we describe LWE solvers from the literature and estimate their runtime considering a limited number of samples. Based on our theoretical results we extend the LWE-Estimator. Furthermore, we evaluate LWE instances proposed for cryptographic schemes and show the impact of restricting the number of available samples.

MSC 2010: 94A60; 11T71

1 Introduction

The Learning With Errors (LWE) problem is used in the construction of many cryptographic lattice-based primitives [20, 30, 31]. It became popular due to its flexibility for instantiating very different cryptographic solutions and its (presumed) hardness against quantum algorithms. Moreover, LWE can be instantiated such that it is provably as hard as worst-case lattice problems [31].

In general, an instance of LWE is characterized by parameters n ∈ ℕ, α ∈ (0,1), and q ∈ ℕ. To solve an instance of LWE, an algorithm has to recover the secret vector 𝐬 ∈ ℤ_q^n, given access to m LWE samples (𝐚_i, c_i = ⟨𝐚_i, 𝐬⟩ + e_i mod q) ∈ ℤ_q^n × ℤ_q, where the coefficients of 𝐬 and the e_i are small and chosen according to a probability distribution characterized by α (see Definition 2).

To ease the hardness estimation of concrete instances of LWE, the LWE-Estimator [3, 4] was introduced. In particular, the LWE-Estimator is a very useful software tool to choose and compare concrete parameters for lattice-based primitives. To this end, the LWE-Estimator summarizes and combines existing attacks to solve LWE from the literature. The effectiveness of LWE solvers often depends on the number of given LWE samples. To give conservative bounds on the hardness of LWE, the LWE-Estimator assumes that the optimal number of samples is given for each algorithm, i.e., the number of samples for which the algorithm runs in minimal time. However, in cryptographic applications the optimal number of samples is often not available. In such cases the hardness of the used LWE instances estimated by the LWE-Estimator might be overly conservative. Hence, also the system parameters of cryptographic primitives based on those hardness assumptions are more conservative than necessary from the viewpoint of state-of-the-art cryptanalysis. A more precise hardness estimation takes the restricted number of samples given by cryptographic applications into account.

In this work we close this gap. We extend the theoretical analysis and the LWE-Estimator such that the hardness of an LWE instance is computed when only a restricted number of samples is given. As in [4], our analysis is based on the following algorithms: exhaustive search, the Blum–Kalai–Wassermann (BKW) algorithm, the distinguishing attack, the decoding attack, and the standard embedding approach. In contrast to the existing LWE-Estimator, we do not adapt the algorithm proposed by Arora and Ge [8] due to its high costs and consequently insignificant practical use. Additionally, we also analyze the dual embedding attack. This variant of the standard embedding approach is very suitable for instances with a small number of samples, since the embedding lattice is of dimension m+n+1 instead of m+1 as in the standard embedding approach, while requiring no more than the given m samples. Hence, it is very important for our case of a restricted number of samples. As in [4], we also analyze and implement small secret variants of all considered LWE solvers, where the coefficients of the secret vector are chosen from a pre-defined set of small numbers, given a restricted number of samples.

Moreover, we evaluate our implementation to show that the hardness estimates of most of the considered algorithms are influenced significantly by limiting the number of available samples. Furthermore, we show how the impact of reducing the number of samples differs depending on the model the hardness is estimated in.

Our implementation is already integrated into the existing LWE-Estimator at https://bitbucket.org/malb/lwe-estimator (from commit-id eb45a74 on). In our implementation, we always use the existing estimations with the optimal number of samples if the given restricted number of samples exceeds the optimal number. If not enough samples are given, we calculate the computational costs using the estimations presented in this work.

Figure 1

Overview of existing LWE solvers categorized by different solving strategies described in Section 2; algorithms using basis reduction are dashed-framed; algorithms considered in this work are written in bold.

Figure 1 shows the categorization by strategies used to solve LWE: One approach reduces LWE to finding a short vector in the dual lattice formed by the given samples, also known as Short Integer Solution (SIS) problem. Another strategy solves LWE by considering it as a Bounded Distance Decoding (BDD) problem. The direct strategy solves for the secret directly. In Figure 1, we dash-frame algorithms that make use of basis reduction methods. The algorithms considered in this work are written in bold.

In Section 2 we introduce notations and definitions required for the subsequent sections. In Section 3 we describe basis reduction and its runtime estimations. In Section 4 we give our analyses of the considered LWE solvers. In Section 5 we describe and evaluate our implementation. In Section 6 we explain how restricting the number of samples impacts the bit-hardness in different models. In Section 7 we summarize our work.

2 Preliminaries

2.1 Notation

We follow the notation used by Albrecht, Player and Scott [4]. Logarithms are base 2 if not indicated otherwise; we write ln(·) to indicate the use of the natural logarithm. Column vectors are denoted by lowercase bold letters and matrices by uppercase bold letters. Let 𝐚 be a vector; then we denote the i-th component of 𝐚 by 𝐚(i). We write 𝐚_i for the i-th vector of a list of vectors. Moreover, we denote the concatenation of two vectors 𝐚 and 𝐛 by 𝐚‖𝐛 = (𝐚(1), …, 𝐚(n), 𝐛(1), …, 𝐛(n)), and 𝐚·𝐛 = Σ_{i=1}^{n} 𝐚(i)·𝐛(i) is the usual dot product. We denote the Euclidean norm of a vector 𝐯 by ‖𝐯‖.

With D_{ℤ,αq} we denote the discrete Gaussian distribution over ℤ with mean zero and standard deviation σ = αq/√(2π). For a finite set S, we denote sampling the element s uniformly from S by s ←$ S. Let χ be a distribution over ℤ; then we write x ← χ if x is sampled according to χ. Moreover, we denote sampling each coordinate of a matrix 𝐀 ∈ ℤ^{m×n} with distribution χ by 𝐀 ← χ^{m×n} with m, n > 0.

2.2 Lattices

For definitions of a lattice L, its rank, its bases, and its determinant det(L) we refer to [4]. For a matrix 𝐀 ∈ ℤ_q^{m×n}, we define the lattices

L(𝐀) = {𝐲 ∈ ℤ^m : there exists 𝐬 ∈ ℤ^n such that 𝐲 = 𝐀𝐬 mod q},
L⊥(𝐀) = {𝐲 ∈ ℤ^m : 𝐲^T·𝐀 = 𝟎 mod q}.

The distance between a lattice L and a vector 𝐯 ∈ ℤ^m is defined as

dist(𝐯, L) = min{ ‖𝐯 − 𝐱‖ : 𝐱 ∈ L }.

Furthermore, the i-th successive minimum λ_i(L) of the lattice L is defined as the smallest radius r such that there are i linearly independent vectors of norm at most r in the lattice. Let L be an m-dimensional lattice. Then the Gaussian heuristic is given as

λ_1(L) ≈ √(m/(2πe)) · det(L)^{1/m}

and the Hermite factor of a basis is given as

δ_0^m = ‖𝐯‖ / det(L)^{1/m},

where 𝐯 is the shortest non-zero vector in the basis. The Hermite factor describes the quality of a basis, which, for example, may be the output of a basis reduction algorithm. We call δ0 the root-Hermite factor and logδ0 the log-root-Hermite factor.

At last we define the fundamental parallelepiped as follows. Let 𝐗 be a set of n vectors 𝐱_i. The fundamental parallelepiped of 𝐗 is defined as

P_{1/2}(𝐗) = { Σ_{i=0}^{n−1} α_i·𝐱_i : α_i ∈ [−1/2, 1/2) }.

2.3 The LWE problem and solving strategies

In the following we recall the definition of LWE.

Definition 1 (Learning with Errors distribution).

Let n and q > 0 be integers, and α > 0. We define by χ_{𝐬,α} the LWE distribution which outputs (𝐚, ⟨𝐚, 𝐬⟩ + e) ∈ ℤ_q^n × ℤ_q, where 𝐚 ←$ ℤ_q^n and e ← D_{ℤ,αq}.

Definition 2 (Learning with Errors problem).

Let n, m, q > 0 be integers and α > 0. Let the coefficients of 𝐬 be sampled according to D_{ℤ,αq}. Given m samples (𝐚_i, ⟨𝐚_i, 𝐬⟩ + e_i) ∈ ℤ_q^n × ℤ_q from χ_{𝐬,α} for i = 1, …, m, the learning with errors problem is to find 𝐬. Given m samples (𝐚_i, c_i) ∈ ℤ_q^n × ℤ_q for i = 1, …, m, the decisional learning with errors problem is to decide whether they are sampled by an oracle χ_{𝐬,α} or whether the c_i are sampled uniformly at random from ℤ_q.

In Regev’s original definition of LWE, the attacker has access to arbitrarily many LWE samples, which means that χ_{𝐬,α} is seen as an oracle that outputs samples at will. If the maximum number of samples available is fixed, we can write them as a fixed set of m > 0 samples

{(𝐚_1, c_1 = ⟨𝐚_1, 𝐬⟩ + e_1 mod q), …, (𝐚_m, c_m = ⟨𝐚_m, 𝐬⟩ + e_m mod q)},

often written in matrix form as (𝐀, 𝐜) ∈ ℤ_q^{m×n} × ℤ_q^m with 𝐜 = 𝐀𝐬 + 𝐞 mod q. We call 𝐀 the sample matrix.

In the original definition, 𝐬 is sampled uniformly at random from ℤ_q^n. At the loss of n samples, an LWE instance can be constructed where the secret 𝐬 is distributed like the error 𝐞 (see [7]).

Two characterizations of LWE are considered in this work: (1) the generic characterization by n, α, q, where the coefficients of secret and error are chosen according to the distribution D_{ℤ,αq}, and (2) LWE with small secret, i.e., the coefficients of the secret vector are chosen according to a distribution over a small set, e.g., I = {0, 1}; the error is again chosen with distribution D_{ℤ,αq}.

2.3.1 Learning with Errors problem with small secret

In the following, let {a, …, b} ⊂ ℤ be the set the coefficients of 𝐬 are sampled from for LWE instances with small secret. To solve LWE instances with small secret, some algorithms use modulus switching as explained next. Let (𝐚, c = ⟨𝐚, 𝐬⟩ + e mod q) be a sample of an (n, α, q)-LWE instance. If the entries of 𝐬 are small enough, this sample can be transformed into a sample (𝐚̃, c̃) of an (n, α′, p)-LWE instance, with p < q and

⟨⌊(p/q)·𝐚⌉ − (p/q)·𝐚, 𝐬⟩ ≈ (p/q)·e.

The transformed samples can be constructed such that (𝐚̃, c̃) = (⌊(p/q)·𝐚⌉, ⌊(p/q)·c⌉) ∈ ℤ_p^n × ℤ_p with

(2.1) p ≈ √(2πn/12) · σ_𝐬/α

and σ_𝐬 being the standard deviation of the elements of the secret vector 𝐬 [4, Lemma 2]. With the components of 𝐬 being uniformly distributed, the variance of the elements of the secret vector 𝐬 is determined by

σ_𝐬² = ((b − a + 1)² − 1) / 12.

The result is an LWE instance with errors having standard deviation √2·α·p/√(2π) + 𝒪(1) and α′ = √2·α. For some algorithms, such as the decoding attack or the embedding approaches (cf. Sections 4.2, 4.3, and 4.4), modulus switching should be combined with exhaustive search, guessing g components of the secret first. Then the algorithm runs in dimension n − g. Therefore, all of these algorithms can be adapted to have at most the cost of exhaustive search, with the optimal g potentially lying anywhere between zero and n.
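To make the transformation concrete, the following is a minimal sketch of modulus switching for a single sample, assuming the reconstruction of equation (2.1) above; the function name and parameters are ours, not the LWE-Estimator's.

```python
from math import pi, sqrt

def modulus_switch(a, c, q, alpha, sigma_s):
    """Sketch: switch one LWE sample (a, c) from modulus q to modulus p."""
    n = len(a)
    # Target modulus per equation (2.1): p ~ sqrt(2*pi*n/12) * sigma_s / alpha.
    p = max(2, round(sqrt(2 * pi * n / 12) * sigma_s / alpha))
    # Scale and round every coefficient from Z_q to Z_p.
    a_p = [round(p * ai / q) % p for ai in a]
    c_p = round(p * c / q) % p
    return a_p, c_p, p
```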

The two main hardness assumptions leading to the basic strategies of solving LWE are the Short Integer Solutions (SIS) problem and the Bounded Distance Decoding (BDD) problem. We describe both of them in the following.

2.3.2 Short Integer Solutions problem

The Short Integer Solutions (SIS) problem is defined as follows: Given a matrix 𝐀 ∈ ℤ_q^{m×n} consisting of n vectors 𝐚_i ←$ ℤ_q^m, find a vector 𝐯 ∈ ℤ^m \ {𝟎} such that ‖𝐯‖ ≤ β with β < q and 𝐯^T·𝐀 = 𝟎 mod q.

Solving the SIS problem with appropriate parameters solves Decision-LWE. Given m samples written as (𝐀, 𝐜), where either 𝐜 = 𝐀𝐬 + 𝐞 mod q holds or 𝐜 is chosen uniformly at random from ℤ_q^m, the two cases can be distinguished by finding a short vector 𝐯 in the dual lattice L⊥(𝐀). Then ⟨𝐯, 𝐜⟩ either equals ⟨𝐯, 𝐞⟩, if 𝐜 = 𝐀𝐬 + 𝐞 mod q, or is uniformly random over ℤ_q. In the first case, ⟨𝐯, 𝐜⟩ = ⟨𝐯, 𝐞⟩ follows a Gaussian distribution over ℤ, inherited from the distribution of 𝐞, and is usually small. Therefore, Decision-LWE can be solved as long as 𝐯 is short enough that the Gaussian distribution can be distinguished from uniformly random.

2.3.3 Bounded Distance Decoding problem

The BDD problem is defined as follows. Given a lattice L, a target vector 𝐜 ∈ ℤ^m, and dist(𝐜, L) < μ·λ_1(L) with μ ≤ 1/2, find the lattice vector 𝐱 ∈ L closest to 𝐜.

An LWE instance (𝐀, 𝐜 = 𝐀𝐬 + 𝐞 mod q) can be seen as an instance of BDD. Let 𝐀 define the lattice L(𝐀). Then the point 𝐰 = 𝐀𝐬 is contained in the lattice L(𝐀). Since 𝐞 follows the Gaussian distribution, over 99.7% of all encountered errors are within three standard deviations of the mean. For LWE parameters typically used in cryptographic applications, this is significantly smaller than λ_1(L(𝐀)). Therefore, 𝐰 is the closest lattice point to 𝐜 with very high probability. Hence, finding 𝐰 eliminates 𝐞. If 𝐀 is invertible, the secret 𝐬 can be calculated.

3 Description of basis reduction algorithms

Basis reduction is a very important building block of most of the algorithms to solve LWE considered in this paper. It is applied to a lattice L to find a basis {𝐛_0, …, 𝐛_{n−1}} of L such that the basis vectors 𝐛_i are short and nearly orthogonal to each other. Essentially, two different approaches to reduce a lattice basis are important in practice: the Lenstra–Lenstra–Lovász (LLL) basis reduction algorithm [24, 29, 14] and the Blockwise Korkine–Zolotarev (BKZ) algorithm with its improvements, called BKZ 2.0 [19, 14]. The runtime estimations of basis reduction used to solve LWE are independent of the number of given LWE samples. Hence, we do not describe the mentioned basis reduction algorithms but only summarize the runtime estimations used in the LWE-Estimator [4]. For a more detailed treatment, we refer to [25, 26, 32, 4].

Following the convention of Albrecht, Player and Scott [4], we assume that the first non-zero vector 𝐛0 of the basis of the reduced lattice is the shortest vector in the basis.

The Lenstra–Lenstra–Lovász algorithm.

Let L be a lattice with basis 𝐁 = {𝐛_0, …, 𝐛_{n−1}}. Furthermore, let 𝐁* = {𝐛_0*, …, 𝐛_{n−1}*} be the Gram–Schmidt basis with Gram–Schmidt coefficients

μ_{i,j} = ⟨𝐛_i, 𝐛_j*⟩ / ⟨𝐛_j*, 𝐛_j*⟩,   1 ≤ j < i < n.

Let ϵ > 0. Then the runtime of the LLL algorithm is 𝒪(n^{5+ϵ}·log^{2+ϵ} B) with B > ‖𝐛_i‖ for all i with 0 ≤ i ≤ n−1. Additionally, an improved variant called L² exists, whose runtime is estimated to be 𝒪(n^{5+ϵ}·log B + n^{4+ϵ}·log² B), see [29]. Furthermore, a runtime of 𝒪(n³·log² B) is estimated heuristically, see [14]. The first vector of the output basis is guaranteed to satisfy ‖𝐛_0‖ ≤ (4/3 + ϵ)^{(n−1)/2}·λ_1(L) with ϵ > 0.

The Blockwise Korkine–Zolotarev algorithm.

The BKZ algorithm employs an algorithm to solve several SVP instances of smaller dimension, which can be seen as an SVP oracle. The SVP oracle can be implemented by computing the Voronoi cells of the lattice, by sieving, or by enumeration. BKZ proceeds in rounds: in each BKZ round the SVP oracle is called several times, yielding a better basis after each round. The algorithm terminates when the quality of the basis remains unchanged after another BKZ round. The differences between BKZ and BKZ 2.0 are the usage of extreme pruning [19], early termination, limiting the enumeration radius to the Gaussian heuristic, and local block pre-processing [14].

There exist several practical estimations of the runtime t_BKZ of BKZ in the literature. Some of these results are listed in the following. Our first estimation is based on Lindner and Peikert’s [26] work. Originally, Lindner and Peikert’s estimates were extrapolated from experimental data computed on a machine running at 2.3 GHz. Following [4], we give the corresponding estimate

log t_BKZ(δ_0) = 1.8 / log δ_0 − 78.9

in clock cycles, called the LP model. This result should be used carefully, since applying this estimation implies the existence of a subexponential algorithm for solving LWE [4]. The estimation shown by Albrecht, Cid, Faugère, Fitzpatrick and Perret [1],

log t_BKZ(δ_0) = 0.009 / log² δ_0 + 4.1,

called the delta-squared model, is non-linear in log δ_0, and it is claimed that this is more suitable for current implementations. As before, the estimates of the delta-squared model are given in clock cycles, converted from Albrecht–Cid–Faugère–Fitzpatrick–Perret’s extrapolation of the runtime of experiments derived on a 2.3 GHz machine. Additionally, in the LWE-Estimator a third approach is used. Given an n-dimensional lattice, the running time in clock cycles is estimated to be

(3.1) t_BKZ = ρ · n · t_k,

where ρ is the number of BKZ rounds and t_k is the time needed to find short enough vectors in lattices of dimension k. Even though ρ is only (exponentially) upper bounded by (n/k)^n in theory, in practice ρ = (n²/k²)·log n rounds already yield a basis with

‖𝐛_0‖ ≤ 2·ν_k^{(n−1)/(2(k−1)) + 3/2} · det(L)^{1/n},

where ν_k ≤ k is the maximum of the root-Hermite factors in dimensions up to k, see [22]. However, recent results like progressive BKZ (running BKZ several times consecutively with increasing block sizes) show that even smaller values for ρ can be achieved. Consequently, the more conservative choice ρ = 8 is used in the LWE-Estimator. In the latest version of the LWE-Estimator the following runtime estimations to solve SVP in dimension k are used and compared:[1]

log t_{k,enum} = 0.27·k·ln(k) − 1.02·k + 16.10,
log t_{k,sieve} = 0.29·k + 16.40,
log t_{k,q-sieve} = 0.27·k + 16.40.

The estimation t_{k,enum} is an extrapolation of the runtime estimates presented by Chen and Nguyen [14]. The estimations t_{k,sieve} and t_{k,q-sieve} are presented in [11] and [23], respectively; the latter is a quantumly enhanced sieving algorithm.
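To make equation (3.1) concrete, the following minimal sketch combines it with the SVP-oracle estimates above, read as base-2 logarithms of clock cycles; the function name and interface are ours, and ρ = 8 follows the conservative choice just mentioned.

```python
from math import log, log2

def log2_bkz_cycles(n, k, model="sieve", rho=8):
    # log2 cost of one BKZ call via equation (3.1): t_BKZ = rho * n * t_k.
    if model == "enum":
        log2_tk = 0.27 * k * log(k) - 1.02 * k + 16.10
    elif model == "sieve":
        log2_tk = 0.29 * k + 16.40
    elif model == "q-sieve":
        log2_tk = 0.27 * k + 16.40
    else:
        raise ValueError("unknown model: " + model)
    return log2(rho * n) + log2_tk
```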

Under the Gaussian heuristic and the geometric series assumption, the following correspondence between the block size k and δ_0 can be given:

lim_{n→∞} δ_0 = (v_k^{−1/k})^{1/(k−1)} ≈ (k/(2πe) · (πk)^{1/k})^{1/(2(k−1))},

where v_k is the volume of the unit ball in dimension k. As examples show, this estimation may also be applied when n is finite [4]. As a function of k, the lattice rule of thumb approximates δ_0 = k^{1/(2k)}, sometimes simplified to δ_0 = 2^{1/k}. Albrecht, Player and Scott [4] show that the simplified lattice rule of thumb is a lower bound on the expected behavior on the interval [40, 250] of usual values for k. The simplified lattice rule of thumb is indeed closer to the expected behavior than the lattice rule of thumb, but it implies a subexponential algorithm for solving LWE. For later reference we write

δ_0^(1) = (k/(2πe) · (πk)^{1/k})^{1/(2(k−1))},   δ_0^(2) = k^{1/(2k)},   δ_0^(3) = 2^{1/k}.
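Where useful in later sections, these three relations can be evaluated directly; the following is a minimal sketch (the helper names delta_0_1, delta_0_2, delta_0_3 are ours):

```python
from math import e, pi

# The three k -> delta_0 relations from above.
def delta_0_1(k):   # limit formula under the Gaussian heuristic and GSA
    return (k / (2 * pi * e) * (pi * k) ** (1 / k)) ** (1 / (2 * (k - 1)))

def delta_0_2(k):   # lattice rule of thumb
    return k ** (1 / (2 * k))

def delta_0_3(k):   # simplified lattice rule of thumb
    return 2 ** (1 / k)
```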

4 Description of algorithms to solve the LWE problem

In this section we describe the algorithms used to estimate the hardness of LWE and analyze them regarding their computational cost. If there exists a small secret variant of an algorithm, the corresponding section is divided into general and small secret variant.

Since the goal of this paper is to investigate how the number of samples m influences the hardness of LWE, we restrict our attention to attacks that are practical for restricted m. This excludes Arora and Ge’s algorithm and BKW, which require at least sub-exponentially many samples. Furthermore, we do not include purely combinatorial attacks like exhaustive search or meet-in-the-middle, since their runtime is not influenced by m.

4.1 Distinguishing attack

The distinguishing attack solves decisional LWE via the SIS strategy using basis reduction. For this, the dual lattice L⊥(𝐀) = {𝐰 ∈ ℤ^m : 𝐰^T·𝐀 = 𝟎 mod q} is considered. The dimension of the dual lattice L⊥(𝐀) is m, its rank is m, and det(L⊥(𝐀)) = q^n (with high probability) [28]. Basis reduction is applied to find a short vector in L⊥(𝐀). The result is used as the short vector 𝐯 in the SIS problem to distinguish the Gaussian from the uniform distribution. By doing so, the decisional LWE problem is solved. Since this attack works in a dual lattice, it is sometimes also called the dual attack.

4.1.1 General variant of the distinguishing attack

The success probability ϵ is the advantage of distinguishing ⟨𝐯, 𝐞⟩ from uniformly random and can be approximated by standard estimates [4]:

ϵ = e^{−π·(‖𝐯‖·α)²}.

In order to achieve a fixed success probability ϵ, a vector 𝐯 of length

‖𝐯‖ = (1/α) · √(ln(1/ϵ)/π)

is needed. Let

f(ϵ) = √(ln(1/ϵ)/π).

The logarithm of δ_0 required to achieve a success probability of ϵ to distinguish ⟨𝐯, 𝐞⟩ from uniformly random is given as

log δ_0 = (1/m)·log((1/α)·f(ϵ)) − (n/m²)·log q,

where m is the given number of LWE samples. To estimate the runtime of the distinguishing attack, it is sufficient to determine δ0, since the attack solely depends on basis reduction. Table 1 gives the runtime estimations of the distinguishing attack in the LP and the delta-squared model described in Section 3. Table 2 gives the block size k of BKZ derived in Section 3 following the second approach to estimate the runtime of the distinguishing attack.

Table 1

Logarithmic runtime of the distinguishing attack for the LP and the delta-squared model (cf. Section 3).

Model | Logarithmic runtime
LP | 1.8·m² / (m·log((1/α)·f(ϵ)) − n·log q) − 78.9
delta-squared | 0.009·m⁴ / (m·log((1/α)·f(ϵ)) − n·log q)² + 4.1

Table 2

Block size k depending on δ_0 required to achieve a success probability of ϵ for the distinguishing attack for different models for the relation of k and δ_0 (cf. Section 3).

Relation δ_0 | Block size k in t_BKZ = ρ·n·t_k, cf. equation (3.1)
δ_0^(1) | log(k/(2πe)·(πk)^{1/k}) / (2(k−1)) = (m·log((1/α)·f(ϵ)) − n·log q) / m²
δ_0^(2) | k/log k = m² / (2·(m·log((1/α)·f(ϵ)) − n·log q))
δ_0^(3) | k = m² / (m·log((1/α)·f(ϵ)) − n·log q)

On the one hand, the runtime of BKZ decreases exponentially with the length of 𝐯. On the other hand, using a longer vector reduces the success probability. To achieve an overall success probability close to 1, the algorithm has to be run multiple times. The number of repetitions is determined to be 1/ϵ² via the Chernoff bound [15]. Let T(ϵ, m) be the runtime of a single execution of the algorithm. Then, the best overall runtime is the minimum of (1/ϵ²)·T(ϵ, m) over different choices of ϵ. This requires randomization of the attack to achieve independent runs. We assume that an attacker can achieve this without using additional samples, which is conservative from a cryptographic point of view.
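A minimal sketch of this optimization, scanning single-run advantages ϵ = 2^−i; the helper names and the scan range are ours, and log2_T stands for any caller-supplied cost model from Section 3 mapping the required log2 δ_0 to a single-run cost:

```python
from math import log, log2, pi, sqrt

def log2_delta_0_dual(n, q, alpha, m, eps):
    # Required log2(delta_0) for single-run advantage eps (formula above).
    f_eps = sqrt(log(1 / eps) / pi)
    return log2(f_eps / alpha) / m - n * log2(q) / (m * m)

def best_overall_cost(n, q, alpha, m, log2_T):
    # Minimize log2((1/eps^2) * T(eps, m)) over eps = 2^-1, ..., 2^-40.
    return min(2 * i + log2_T(log2_delta_0_dual(n, q, alpha, m, 2.0 ** -i))
               for i in range(1, 41))
```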

4.1.2 Small secret variant of the distinguishing attack

The distinguishing attack for small secrets works similarly to the general case, but it exploits the smallness of the secret 𝐬 by applying modulus switching first. To solve a small secret LWE instance with the distinguishing attack, the strategy described in Section 2.3.1 can be applied: first, modulus switching is used, and afterwards the algorithm is combined with exhaustive search.

Using the same reasoning as in the standard case, the required log δ_0 for an (n, √2·α, p)-LWE instance is given by

log δ_0 = (1/m)·log(f(ϵ)/(√2·α)) − (n/m²)·log p,

where p can be estimated by equation (2.1). The rest of the algorithm remains the same as in the standard case. Table 3 gives the runtime estimations in the LP and the delta-squared model described in Section 3. Table 4 gives the block size k of BKZ derived in Section 3 following the second approach to estimate the runtime of the distinguishing attack with small secret. Combining this algorithm with exhaustive search as described in Section 2.3.1 may improve the runtime.

Table 3

Logarithmic runtime of the distinguishing attack with small secret in the LP and the delta-squared model (cf. Section 3).

Model | Logarithmic runtime
LP | 1.8·m² / (m·log(f(ϵ)/(√2·α)) − n·log p) − 78.9
delta-squared | 0.009·m⁴ / (m·log(f(ϵ)/(√2·α)) − n·log p)² + 4.1

Table 4

Block size k depending on δ_0 required to achieve a success probability of ϵ for the distinguishing attack with small secret for different models for the relation of k and δ_0 (cf. Section 3).

Relation δ_0 | Block size k in t_BKZ = ρ·n·t_k, cf. equation (3.1)
δ_0^(1) | log(k/(2πe)·(πk)^{1/k}) / (2(k−1)) = (m·log(f(ϵ)/(√2·α)) − n·log p) / m²
δ_0^(2) | k/log k = m² / (2·(m·log(f(ϵ)/(√2·α)) − n·log p))
δ_0^(3) | k = m² / (m·log(f(ϵ)/(√2·α)) − n·log p)

4.2 Decoding approach

The decoding approach solves LWE via the BDD strategy described in Section 2. The procedure considers the lattice L=L(𝐀) defined by the sample matrix 𝐀 and consists of two steps: the reduction step and the decoding step. In the reduction step, basis reduction is employed on L. In the decoding phase the resulting basis is used to find a close lattice vector 𝐰=𝐀𝐬 and thereby eliminate the error vector 𝐞.

In the following let the target success probability be the overall success probability of the attack, chosen by the attacker (usually close to 1). In contrast, the success probability refers to the success probability of a single run of the algorithm. The target success probability is achieved by running the algorithm potentially multiple times with a certain success probability for each single run.

4.2.1 General variant of decoding approach

To solve BDD, and therefore LWE, the most basic algorithm is Babai’s Nearest Plane algorithm [9]. Given a BDD instance (𝐀, 𝐜 = 𝐀𝐬 + 𝐞 mod q) from m samples, the solving algorithm consists of two steps. First, basis reduction on the lattice L = L(𝐀) is used, which results in a new basis 𝐁 = (𝐛_0, …, 𝐛_{m−1}) for L with root-Hermite factor δ_0. The decoding step is a recursive algorithm that gets as input a partial basis (𝐛_0, …, 𝐛_{m−i}) (the complete basis in the first call) and a target vector 𝐯 (𝐜 in the first call). In every step, it searches for the coefficient α_{m−i} such that 𝐯′ = 𝐯 − α_{m−i}·𝐛_{m−i} is as close as possible to the subspace spanned by (𝐛_0, …, 𝐛_{m−i−1}). The recursive call is then made with the new sub-basis (𝐛_0, …, 𝐛_{m−i−1}) and 𝐯′ as target vector.

The result of the algorithm is the lattice point 𝐰 ∈ L such that 𝐜 ∈ 𝐰 + P_{1/2}(𝐁*). Therefore, the algorithm is able to recover 𝐬 correctly from 𝐜 = 𝐀𝐬 + 𝐞 mod q if and only if 𝐞 lies in the fundamental parallelepiped P_{1/2}(𝐁*). The success probability of the Nearest Plane algorithm is the probability of 𝐞 falling into P_{1/2}(𝐁*):

Pr[𝐞 ∈ P_{1/2}(𝐁*)] = ∏_{i=0}^{m−1} Pr[ |⟨𝐞, 𝐛_i*⟩| < ⟨𝐛_i*, 𝐛_i*⟩/2 ]
                   = ∏_{i=0}^{m−1} erf( ‖𝐛_i*‖·√π / (2αq) ).

Hence, an attacker can adjust his overall runtime according to the trade-off between the quality of the basis reduction and the success probability.

Lindner and Peikert [26] present a modification of the Nearest Plane algorithm named Nearest Planes. They introduce additional parameters d_i ≥ 1 in the decoding step, which describe how many nearest planes the algorithm takes into account on the i-th level of recursion.

The success probability of the Nearest Planes algorithm is the probability of 𝐞 falling into the stretched parallelepiped P_{1/2}(𝐁*·diag(𝐝)), given as follows:

(4.1) Pr[𝐞 ∈ P_{1/2}(𝐁*·diag(𝐝))] = ∏_{i=0}^{m−1} erf( d_i·‖𝐛_i*‖·√π / (2αq) ).

To choose the values d_i, Lindner and Peikert suggest to maximize min_i(d_i·‖𝐛_i*‖) while minimizing the overall runtime. As long as the values d_i are powers of 2, this can be shown to be optimal [4]. For a fixed success probability, the optimal values d_i can be found iteratively: in each iteration, the value d_i for which d_i·‖𝐛_i*‖ is currently minimal is increased by one, and the success probability given by equation (4.1) is calculated again. If the result is at least as large as the chosen success probability, the iteration stops [26]. An attacker can choose the parameters δ_0 and d_i, which determine the success probability ϵ of the algorithm. Presumably, an attacker tries to minimize the overall runtime

T = (T_BKZ + T_NP) / ϵ,

where T_BKZ is the runtime of the basis reduction with chosen target quality δ_0, T_NP is the runtime of the decoding step with chosen d_i, and ϵ is the success probability achieved by δ_0 and the d_i. To estimate the overall runtime, it is reasonable to assume that the times of the basis reduction and the decoding step are balanced. To give a more precise estimation, one bit has to be subtracted from the number of operations, since this balancing estimation is up to a factor of 2 worse than the optimal runtime.

The runtime of the basis reduction is determined by δ_0 as described in Section 3. The values d_i cannot be expressed by a formula and therefore there is also no closed formula for δ_0. As a consequence, the runtime of the basis reduction step cannot be given explicitly here. Suitable values are found by iteratively varying δ_0 until the running times of the two steps are balanced as described above.

The runtime of the decoding step for Lindner and Peikert’s Nearest Planes algorithm is determined by the number of points ∏_{i=0}^{m−1} d_i that have to be exhausted and the time t_node it takes to process one point:

T_NP = t_node · ∏_{i=0}^{m−1} d_i.

Since no closed formula is known to calculate the values d_i, they are computed by step-wise increase as described above until the success probability calculated by equation (4.1) reaches the fixed success probability. In the LWE-Estimator, t_node ≈ 2^{15.1} clock cycles is used. Hence, both runtimes T_BKZ and T_NP depend on δ_0 and the fixed success probability.

Since this analysis only considers a fixed success probability, the best trade-off between the success probability and the running time of a single execution described above must be found by repeating the process above with varying values of the fixed success probability.
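The iterative choice of the d_i can be made concrete as follows; this is a hedged sketch of the procedure described above (function names are ours), not the LWE-Estimator's enum_cost routine:

```python
from math import erf, pi, sqrt

def success_probability(d, gs_norms, alpha, q):
    # Equation (4.1): product of erf(d_i * ||b_i*|| * sqrt(pi) / (2*alpha*q)).
    p = 1.0
    for di, bi in zip(d, gs_norms):
        p *= erf(di * bi * sqrt(pi) / (2 * alpha * q))
    return p

def choose_d(gs_norms, alpha, q, target_eps, max_steps=10**6):
    # Start with d_i = 1 and repeatedly increase the d_i for which
    # d_i * ||b_i*|| is currently minimal, until (4.1) reaches target_eps.
    d = [1] * len(gs_norms)
    for _ in range(max_steps):
        if success_probability(d, gs_norms, alpha, q) >= target_eps:
            return d
        i = min(range(len(d)), key=lambda j: d[j] * gs_norms[j])
        d[i] += 1
    # As noted in Section 5.1, the product can underflow to 0 for very few
    # samples; in that case we give up rather than loop forever.
    raise RuntimeError("success probability did not reach target")
```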

4.2.2 Small secret variant of decoding approach

The decoding approach for small secrets works the same as in the general case, but it exploits the smallness of the secret 𝐬 by applying modulus switching first and combining the algorithm with exhaustive search afterwards, as described in Section 2.3.1.

4.3 Standard embedding

The standard embedding attack solves LWE via reduction to uSVP. The reduction is done by creating an (m+1)-dimensional lattice that contains the error vector 𝐞. Since 𝐞 is very short for typical instantiations of LWE, this results in a uSVP instance. The typical way to solve uSVP is to apply basis reduction.

Let

L(𝐀) = {𝐲 ∈ ℤ^m : there exists 𝐬 ∈ ℤ^n such that 𝐲 = 𝐀𝐬 mod q}

be the q-ary lattice defined by the matrix 𝐀 as in Section 2.2. Moreover, let (𝐀, 𝐜 = 𝐀𝐬 + 𝐞 mod q) be given and t = dist(𝐜, L(𝐀)) = ‖𝐜 − 𝐱‖, where 𝐱 ∈ L(𝐀) is such that ‖𝐜 − 𝐱‖ is minimized. Then the lattice L(𝐀) can be embedded into the lattice L(𝐀′) with

𝐀′ = ( 𝐀  𝐜 )
     ( 𝟎  t ).

If t < λ_1(L(𝐀)) / (2γ), the higher-dimensional lattice L(𝐀′) has a unique shortest vector 𝐜′ = (−𝐞, t) ∈ ℤ_q^{m+1} of length

‖𝐜′‖ ≈ √( m·α²q²/(2π) + |t|² ),

see [27, 16]. Therefore, 𝐞 can be extracted from 𝐜′, 𝐀𝐬 is known, and 𝐬 can be solved for.

To determine the success probability and the runtime, we distinguish between two cases: t = ‖𝐞‖ and t < ‖𝐞‖. The case t = ‖𝐞‖ is mainly of theoretical interest. Practical attacks and the LWE-Estimator use t = 1 instead, so we focus on this case in the following.

Based on Albrecht, Fitzpatrick and Göpfert [2], Göpfert shows [21, Section 3.1.3] that the standard embedding attack succeeds with non-negligible probability if

(4.2) δ_0 ≤ ( q^{1−n/m} · 1/(√e·τ·α·q) )^{1/m},

where m is the number of LWE samples. The value τ is experimentally determined to be τ ≈ 0.4 for a success probability of ϵ = 0.1 [2].

In Table 5, we put this all together and state the runtime of the standard embedding attack for the cases from Section 3 in the LP and the delta-squared model. Table 6 gives the block size k of BKZ derived in Section 3 following the second approach to estimate the runtime of the standard embedding attack.

Table 5

Logarithmic runtime of the standard embedding attack in the LP and the delta-squared model (cf. Section 3).

Model | Logarithmic runtime
LP | 1.8·m / (log(q^{1−n/m}) − log(√e·τ·α·q)) − 78.9
delta-squared | 0.009·m² / (log(q^{1−n/m}) − log(√e·τ·α·q))² + 4.1

Table 6

Block size k depending on δ_0 required such that the standard embedding attack succeeds for different models for the relation of k and δ_0 (cf. Section 3).

Relation δ_0 | Block size k in t_BKZ = ρ·n·t_k, cf. equation (3.1)
δ_0^(1) | (k/(2πe)·(πk)^{1/k})^{1/(2(k−1))} = (q^{1−n/m} / (√e·τ·α·q))^{1/m}
δ_0^(2) | k/log k = m / (2·(log(q^{1−n/m}) − log(√e·τ·α·q)))
δ_0^(3) | k = m / (log(q^{1−n/m}) − log(√e·τ·α·q))

As discussed above, the success probability ϵ of a single run depends on τ and thus does not necessarily reach the desired target success probability ϵ_target. If the success probability is lower than the target success probability, the algorithm has to be repeated ρ times such that

(4.3) ϵ_target ≤ 1 − (1 − ϵ)^ρ.

Consequently, ρ executions of this algorithm have to be performed, i.e., the runtime has to be multiplied by ρ. As before, we assume that the samples may be reused in each run.
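A one-line helper makes the repetition count explicit; a minimal sketch of equation (4.3) (the function name is ours):

```python
from math import ceil, log

def repetitions(eps, eps_target=0.99):
    # Smallest integer rho with eps_target <= 1 - (1 - eps)**rho, cf. (4.3).
    return ceil(log(1 - eps_target) / log(1 - eps))
```

For example, with the single-run success probability ϵ = 0.1 from above and ϵ_target = 0.99, this yields ρ = 44 repetitions.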

4.3.1 Small secret variant of standard embedding

To solve a small secret LWE instance based on embedding, the strategy described in Section 2.3.1 can be applied: first, modulus switching is used, and afterwards the algorithm is combined with exhaustive search. The standard embedding attack on LWE with small secret using modulus switching works the same as standard embedding in the non-small secret case, except that it operates on instances characterized by n, √2·α, and p instead of n, α, and q with p < q. It is combined with guessing parts of the secret, which allows for a larger δ_0 and therefore for an easier basis reduction. To be more precise, the requirement for δ_0 from equation (4.2) changes as follows:

δ_0 ≤ ( p^{1−n/m} · 1/(√(2e)·τ·α·p) )^{1/m},

where p can be estimated by equation (2.1). As stated in the description of the standard case, the overall runtime of the algorithm is determined by δ_0.

In Table 7, we state the runtime of the standard embedding attack with small secret in the LP and the delta-squared model. Table 8 gives the block size k of BKZ derived in Section 3 following the second approach to estimate the runtime of the standard embedding attack with small secret. The success probability remains the same.

Table 7

Logarithmic runtime of the standard embedding attack with small secret in the LP and the delta-squared model (cf. Section 3).

Model | Logarithmic runtime
LP | 1.8·m / (log(p^{1−n/m}) − log(√(2e)·τ·α·p)) − 78.9
delta-squared | 0.009·m² / (log(p^{1−n/m}) − log(√(2e)·τ·α·p))² + 4.1

Table 8

Block size k depending on δ_0 required such that the standard embedding attack with small secret succeeds for different models for the relation of k and δ_0 (cf. Section 3).

Relation δ_0 | Block size k in t_BKZ = ρ·n·t_k, cf. equation (3.1)
δ_0^(1) | (k/(2πe)·(πk)^{1/k})^{1/(2(k−1))} = (p^{1−n/m} / (√(2e)·τ·α·p))^{1/m}
δ_0^(2) | k/log k = m / (2·(log(p^{1−n/m}) − log(√(2e)·τ·α·p)))
δ_0^(3) | k = m / (log(p^{1−n/m}) − log(√(2e)·τ·α·p))

4.4 Dual embedding

Dual embedding is very similar to standard embedding shown in Section 4.3. However, since the embedding is into a different lattice, the dual embedding algorithm runs in dimension n+m+1 instead of m+1, while the number of required samples remains m. Therefore, it is more suitable for instances with a restricted number of LWE samples [16]. In case the optimal number of samples is given (as assumed in the LWE-Estimator by Albrecht, Player and Scott [4]) this attack is as efficient as the standard embedding attack. Hence, it was not included in the LWE-Estimator so far.

4.4.1 General variant of dual embedding

For an LWE instance 𝐜 = 𝐀𝐬 + 𝐞 mod q, let the matrix 𝐀_o ∈ ℤ_q^{m×(n+m+1)} be defined as

𝐀_o = ( 𝐀 | 𝐈_m | 𝐜 ),

with 𝐈_m ∈ ℤ^{m×m} being the identity matrix. Define

L⊥(𝐀_o) = {𝐯 ∈ ℤ^{n+m+1} : 𝐀_o·𝐯 = 𝟎 mod q}

to be the lattice in which uSVP is solved. Considering 𝐯 = (𝐬, 𝐞, −1)^T leads to

𝐀_o·𝐯 = 𝐀𝐬 + 𝐞 − 𝐜 = 𝟎 mod q

and therefore 𝐯 ∈ L⊥(𝐀_o). According to [16], the length of 𝐯 is small and can be estimated as

‖𝐯‖ ≈ √( (n+m)·α²q²/(2π) ).

Since this attack is similar to standard embedding, the estimations of the success probability and the running time are the same, except for adjustments with respect to the dimension and the determinant. Hence, the dual embedding attack is successful if the root-Hermite factor fulfills

(4.4) δ_0 = ( q^{m/(m+n)} · 1/(√e·τ·α·q) )^{1/(n+m)},

while the number of LWE samples is m.

In Table 9, we state the runtime of the dual embedding attack in the LP and the delta-squared model. Table 10 gives the block size k of BKZ derived in Section 3 following the second approach to estimate the runtime of the dual embedding attack.

Table 9

Logarithmic runtime of the dual embedding attack in the LP and the delta-squared model (cf. Section 3).

Model | Logarithmic runtime
LP | 1.8·(n+m) / (log(q^{m/(m+n)}) − log(√e·τ·α·q)) − 78.9
delta-squared | 0.009·(n+m)² / (log(q^{m/(m+n)}) − log(√e·τ·α·q))² + 4.1

Table 10

Block size k depending on δ_0 required such that the dual embedding attack succeeds for different models for the relation of k and δ_0 (cf. Section 3).

Relation δ_0 | Block size k in t_BKZ = ρ·n·t_k, cf. equation (3.1)
δ_0^(1) | (k/(2πe)·(πk)^{1/k})^{1/(2(k−1))} = (q^{m/(m+n)} / (√e·τ·α·q))^{1/(n+m)}
δ_0^(2) | k/log k = (n+m) / (2·(log(q^{m/(m+n)}) − log(√e·τ·α·q)))
δ_0^(3) | k = (n+m) / (log(q^{m/(m+n)}) − log(√e·τ·α·q))

Since this algorithm is not mentioned in [4], we explain the analysis for an unlimited number of samples in the following. The case where the number of samples is not limited, and thus the optimal number of samples can be used, is a special case of the discussion above. To be more precise, to find the optimal number of samples m_optimal, the m with maximal δ_0 (according to equation (4.4)) has to be found; this yields the lowest runtime using dual embedding. The success probability is determined similarly to the standard embedding, see Section 4.3.
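A minimal sketch of this search, assuming the reconstruction of equation (4.4) above; the helper names and the search bound m_max are ours:

```python
from math import e, sqrt

def delta_0_dual_embedding(n, q, alpha, m, tau=0.4):
    # Equation (4.4): root-Hermite factor the reduction has to reach
    # for the dual embedding attack given m samples.
    return (q ** (m / (m + n)) / (sqrt(e) * tau * alpha * q)) ** (1 / (n + m))

def optimal_m(n, q, alpha, m_max=10**5):
    # The optimal number of samples maximizes delta_0, i.e., it minimizes
    # the required basis reduction effort.
    return max(range(1, m_max), key=lambda m: delta_0_dual_embedding(n, q, alpha, m))
```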

4.4.2 Small secret variant of dual embedding

There are two small secret variants of the dual embedding attack: one is similar to the small secret variant of the standard embedding, the other is better known as the embedding attack by Bai and Galbraith. Both are described in the following.

Small secret variant of dual embedding with modulus switching.

As before, the strategy described in Section 2.3.1 can be applied: first, modulus switching is used, and afterwards the algorithm is combined with exhaustive search. This variant works the same as dual embedding in the non-small secret case, except that it operates on instances characterized by n, √2·α, and p instead of n, α, and q with p < q. This allows for a larger δ_0 and therefore for an easier basis reduction. Hence, the following equation has to be fulfilled by δ_0:

δ_0 = ( p^{m/(m+n)} · 1/(√(2e)·τ·α·p) )^{1/(n+m)},

where p can be estimated by equation (2.1).

In Table 11, we state the runtime of the dual embedding attack with small secret in the LP and the delta-squared model. Table 12 gives the block size k of BKZ derived in Section 3 following the second approach to estimate the runtime of the dual embedding attack with small secret. The success probability remains the same.

Table 11

Logarithmic runtime of the dual embedding attack with small secret using modulus switching in the LP and the delta-squared model (cf. Section 3).

Model | Logarithmic runtime
LP | 1.8·(n+m) / (log(p^{m/(m+n)}) − log(√(2e)·τ·α·p)) − 78.9
delta-squared | 0.009·(n+m)² / (log(p^{m/(m+n)}) − log(√(2e)·τ·α·p))² + 4.1

Table 12

Block size k depending on δ_0 required such that the dual embedding attack with small secret using modulus switching succeeds for different models for the relation of k and δ_0 (cf. Section 3).

Relation δ_0 | Block size k in t_BKZ = ρ·n·t_k, cf. equation (3.1)
δ_0^(1) | (k/(2πe)·(πk)^{1/k})^{1/(2(k−1))} = (p^{m/(m+n)} / (√(2e)·τ·α·p))^{1/(n+m)}
δ_0^(2) | k/log k = (n+m) / (2·(log(p^{m/(m+n)}) − log(√(2e)·τ·α·p)))
δ_0^(3) | k = (n+m) / (log(p^{m/(m+n)}) − log(√(2e)·τ·α·p))
Bai and Galbraith’s embedding.

The embedding attack by Bai and Galbraith [10] solves LWE with a small secret vector 𝐬, with each entry in [a, b], by embedding. Similar to the dual embedding, Bai and Galbraith’s attack solves uSVP in the lattice

L⊥(𝐀_o) = {𝐯 ∈ ℤ^{n+m+1} : 𝐀_o·𝐯 = 𝟎 mod q}

for the matrix 𝐀_o = ( 𝐀 | 𝐈_m | 𝐜 ) ∈ ℤ_q^{m×(n+m+1)} in order to recover the short vector 𝐯 = (𝐬, 𝐞, −1)^T. Since ‖𝐬‖ ≪ ‖𝐞‖, the uSVP algorithm has to find an unbalanced solution.

To tackle this, the lattice should be scaled such that it is more balanced, i.e., the first n rows of the lattice basis are multiplied by a factor depending on σ (see [10]). Hence, the determinant of the lattice is increased by a factor of (2σ/(b−a))^n without significantly increasing the norm of the error vector. This increases the δ_0 needed to successfully execute the attack. The required δ_0 can be determined similarly as done for the standard embedding in Section 4.3:

log δ_0 = ( m′·log(q/(2στ√(πe))) + n·log(ξσ/q) ) / m′²,

where m′ = n + m, ξ = 2/(b−a), and m LWE samples are used.

In Table 13, we state the runtime of the Bai–Galbraith embedding attack in the LP and the delta-squared model. Table 14 gives the block size k of BKZ derived in Section 3 following the second approach to estimate the runtime of the Bai–Galbraith embedding attack with small secret.

Table 13

Logarithmic runtime of Bai and Galbraith’s embedding attack in the LP and the delta-squared model (cf. Section 3).

Model | Logarithmic runtime
LP | 1.8·m′² / (m′·log(q/(2στ√(πe))) + n·log(ξσ/q)) − 78.9
delta-squared | 0.009·m′⁴ / (m′·log(q/(2στ√(πe))) + n·log(ξσ/q))² + 4.1

Table 14

Block size k depending on δ_0 determined in the embedding attack by Bai and Galbraith for different models for the relation of k and δ_0 (cf. Section 3).

Relation δ_0 | Block size k in t_BKZ = ρ·n·t_k, cf. equation (3.1)
δ_0^(1) | log(k/(2πe)·(πk)^{1/k}) / (2(k−1)) = (m′·log(q/(2στ√(πe))) + n·log(ξσ/q)) / m′²
δ_0^(2) | k/log k = m′² / (2·(m′·log(q/(2στ√(πe))) + n·log(ξσ/q)))
δ_0^(3) | k = m′² / (m′·log(q/(2στ√(πe))) + n·log(ξσ/q))

The success probability is determined similarly to the standard embedding, see equation (4.3) in Section 4.3.

Similar to the other algorithms for LWE with small secret, Bai and Galbraith’s attack can be combined with exhaustive search guessing parts of the secret. However, in contrast to the other algorithms using basis reduction, Bai and Galbraith state that applying modulus switching to their algorithm does not improve the result. The reason is that modulus switching reduces q by a larger factor than it reduces the size of the error.

5 Implementation

In this section, we describe our implementation of the results presented in Section 4 as an extension of the LWE-Estimator introduced in [3, 4]. Furthermore, we compare results of our implementation focusing on the behavior when limiting the number of available LWE samples.

5.1 Description of our implementation

Our extension is also written in Sage, and it was merged into the original LWE-Estimator (from commit-id eb45a74 on) in March 2017. In the following we use the version of the LWE-Estimator from June 2017 (commit-id e0638ac) for our experiments.

Except for Arora and Ge’s algorithm based on Gröbner bases, we adapt each algorithm the LWE-Estimator implements to take a fixed number of samples into account if a number of samples is given by the user. If not, each of the implemented algorithms assumes an unlimited number of samples (and hence assumes the optimal number of samples is available). Our implementation also extends the algorithms coded-BKW, decisional-BKW, search-BKW, and the meet-in-the-middle attack (for a description of these algorithms see [4]), although we omitted the theoretical description of these algorithms in Section 4.

Following the notation in [4], we assign an abbreviation to each algorithm to refer to it:

dual: distinguishing attack, Section 4.1,
dec: decoding attack, Section 4.2,
usvp-primal: standard embedding, Section 4.3,
usvp-dual: dual embedding, Section 4.4,
usvp-baigal: Bai–Galbraith embedding, Section 4.4.2,
usvp: minimum of usvp-primal, usvp-dual, and usvp-baigal,
mitm: exhaustive search,
bkw: coded-BKW,
arora-gb: Arora and Ge’s algorithm based on Gröbner bases.

The shorthand symbol bkw solely refers to coded-BKW and its small secret variant. Decision-BKW and Search-BKW are not assigned an abbreviation and are not used by the main method estimate_lwe, because coded-BKW is the latest and most efficient BKW algorithm. Nevertheless, the other two BKW algorithms can be called separately via the function bkw, which is a convenience method for the functions bkw_search and bkw_decision, and its corresponding small secret variant bkw_small_secret.

In the LWE-Estimator the three different embedding approaches usvp-primal, usvp-dual, and usvp-baigal (the latter called in case of LWE with small secret) are summarized as the attack usvp, and the minimum of the three embedding algorithms is returned. In our experiments we show the different impacts of those algorithms and hence display the results of the three embedding approaches separately.

Let the LWE instance be defined by n, α = 1/(√(2πn)·log² n), and q ≈ n² as proposed by Regev [31]. In the following, let n = 128 and let the number of samples be given by m = 256. If instead of α only the Gaussian width parameter (sigma_is_stddev=False) or the standard deviation (sigma_is_stddev=True) is known, α can be calculated via the function alphaf(sigma, q, sigma_is_stddev).

The main function of the LWE-Estimator is called estimate_lwe. Listing 1 shows how to call the LWE-Estimator on the given LWE instance (with Gaussian distributed error and secret) including the following attacks: distinguishing attack, decoding, and embedding attacks. The first two lines of Listing 1 define the parameters n, α, q, and the number of samples m. In the third line the program is called via estimate_lwe. For each algorithm a value rop is returned that gives the hardness of the LWE instance with respect to the corresponding attack.

Listing 1.

Basic example of calling the LWE-Estimator for the LWE instance n = 128, α = 1/(√(2πn)·log² n), q ≈ n², and m = 256.
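A hypothetical reconstruction of the listing body, assuming estimator.py has been loaded; the exact call signature of estimate_lwe in the June 2017 version may differ, and the m keyword is an assumption based on the description above:

```python
sage: n, alpha, q = 128, 1/(sqrt(2*pi*128)*log(128, 2)^2), next_prime(128^2)
sage: m = 256
sage: costs = estimate_lwe(n, alpha, q, m=m)
```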

Listing 2 shows the estimations of the LWE instance with n = 128, α = 1/(√(2πn)·log² n), q ≈ n², and m = 256 with the secret coefficients chosen uniformly at random from {−1, 0, 1}.

Listing 2.

Example of calling the hardness estimations of the small secret LWE instance n = 128, q ≈ n², α = 1/(√(2πn)·log² n), m = 256, with secret coefficients chosen uniformly at random from {−1, 0, 1}.
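A hypothetical reconstruction of the listing body; the secret_bounds keyword used here to select the small secret variant is an assumption, not a confirmed parameter name of that version:

```python
sage: n, alpha, q = 128, 1/(sqrt(2*pi*128)*log(128, 2)^2), next_prime(128^2)
sage: m = 256
sage: costs = estimate_lwe(n, alpha, q, m=m, secret_bounds=(-1, 1))
```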

In the following, we share some noteworthy insights gained during the implementation.

One problem arises in the decoding attack dec when the number of samples is limited very strictly. The attack uses enum_cost to calculate the computational cost of the decoding step. For this, amongst other things, the stretching factors d_i of the parallelepiped are computed iteratively by step-wise increase as described in Section 4.2. In this process, the success probability is used, which is calculated as a product of terms erf(d_i·‖𝐛_i*‖·√π/(2αq)), see equation (4.1). Since the precision is limited, this may falsely yield a success probability of 0, in which case the loop never terminates. This problem can be avoided, but doing so leads to an unacceptably long runtime. Since this case only arises when very few samples are given, our software throws an error saying that there are too few samples.

The original LWE-Estimator routine to find the block size k for BKZ, called k_chen, iterates through possible values of k, starting at 40, until the resulting δ_0 is lower than the targeted δ_0. As shown in Listing 3, this iteration uses steps that multiply k by at most 2. When given a target δ_0 close to 1, only a high value of k can satisfy the used equation, so it takes a long time to find the suitable k. Therefore, the previous way of finding k for BKZ is not suitable in case a limited number of samples is given. Thus, in our implementation we replace this function and find k using the secant method, as presented in Listing 4.

Listing 3.

Iteration to find k in method k_chen of the previous implementation used in the LWE-Estimator.
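A sketch reconstructing the iteration, not the verbatim estimator code; delta_0f follows δ_0^(1) from Section 3:

```python
from math import e, pi

def delta_0f(k):
    # delta_0 achieved by BKZ with block size k, cf. delta_0^(1) in Section 3.
    return (k / (2 * pi * e) * (pi * k) ** (1 / k)) ** (1 / (2 * (k - 1)))

def k_chen_old(delta_0):
    k = 40
    # Doubling phase: each step multiplies k by at most 2.
    while delta_0f(2 * k) > delta_0:
        k *= 2
    # Linear refinement; this is the slow part when delta_0 is close to 1.
    while delta_0f(k) > delta_0:
        k += 1
    return k
```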

Listing 4.

Implementation of method k_chen to find k using the secant-method.
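A sketch of the secant-method replacement, not the verbatim implementation, reusing delta_0f from the Listing 3 sketch:

```python
def k_chen_secant(delta_0, k0=40.0, k1=100.0, n_iter=100):
    # Secant iteration on g(k) = delta_0f(k) - delta_0; converges quickly
    # even for targets delta_0 close to 1, unlike the step-wise search.
    g = lambda k: delta_0f(k) - delta_0
    for _ in range(n_iter):
        if g(k1) == g(k0):
            break
        k0, k1 = k1, max(2.0, k1 - g(k1) * (k1 - k0) / (g(k1) - g(k0)))
    return max(40, round(k1))
```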

5.2 Comparison of implementations and algorithms

In the following, we present hardness estimations of LWE with and without taking a restricted number of samples into account. The presented experiments are done for the following LWE instance: n = 128, α = 1/(√(2πn)·log² n), and q ≈ n².

We show the base-2 logarithm of the estimated hardness of the LWE instance under all implemented attacks (except for Arora and Ge’s algorithm) in Table 15. According to the experiments, the hardness decreases with an increasing number of samples and remains the same after reaching the optimal number of samples. If our software could not find a solution, the entry is filled with NaN; this is mostly due to too few samples being provided to apply the respective algorithm.

Table 15

Logarithmic hardness of the algorithms exhaustive search (mitm), coded-BKW (bkw), distinguishing attack (dual), decoding (dec), standard embedding (usvp-primal), and dual embedding (usvp-dual) depending on the given number of samples for the LWE instance n = 128, α = 1/(√(2πn)·log² n), and q ≈ n².

Samples | mitm | dual | dec | usvp-primal | usvp-dual
100 | 326.5 | 127.3 | 92.3 | NaN | 95.7
150 | 326.5 | 87.1 | 65.0 | NaN | 55.7
200 | 326.5 | 77.2 | 57.7 | 263.4 | 49.2
250 | 326.5 | 74.7 | 56.8 | 68.8 | 48.9
300 | 326.5 | 74.7 | 56.8 | 51.4 | 48.9
350 | 326.5 | 74.7 | 56.8 | 48.9 | 48.9
400 | 326.5 | 74.7 | 56.8 | 48.9 | 48.9
450 | 326.5 | 74.7 | 56.8 | 48.9 | 48.9

Samples | bkw
1·10²¹ | NaN
2·10²¹ | NaN
4·10²¹ | NaN
6·10²¹ | NaN
8·10²¹ | NaN
1·10²² | NaN
3·10²² | 85.1
4·10²² | 85.1
6·10²² | 85.1

In Table 16, we show the logarithmic hardness and the corresponding optimal number of samples estimated for an unlimited number of samples. It should be noted that some algorithms rely on multiple executions, e.g., to amplify a low success probability of a single run to a target success probability. In such a case, the previous implementation of the LWE-Estimator assumed new samples for each run of the algorithm. In our implementation, we assume that samples may be reused in repeated runs of the same algorithm, giving a lower bound on the hardness estimations. Hence, the optimal number of samples computed by the original LWE-Estimator and the one computed by our method sometimes differ a lot in Table 16, e.g., for the decoding attack. To compensate for this and to provide better comparability, we recalculate the optimal number of samples.

Table 16

Logarithmic hardness with the optimal number of samples computed by the previous LWE-Estimator and the optimal number of samples recalculated according to the model used in this work for the LWE instance n = 128, α = 1/(√(2πn)·log² n), and q ≈ n².

Algorithm | Optimal number of samples (original calculation) | Optimal number of samples (recalculation) | Hardness [bit]
mitm | 181 | 181 | 395.9
sis | 192795128 | 376 | 74.7
dec | 53436 | 366 | 58.1
usvp | 16412 | 373 | 48.9
bkw | 1.012·10²² | 1.012·10²² | 85.1

Comparing Table 15 and Table 16 shows that for a number of samples lower than the optimal number, the estimated hardness is either (much) larger than the estimation using the optimal number of samples or does not exist. In contrast, for a number of samples greater than or equal to the optimal number, the hardness is exactly the same as in the optimal case, since the implementation falls back on the optimal number of samples when enough samples are given. Without this fallback the hardness would increase again, as can be seen for the dual embedding attack in Figure 2. For the results presented in Figure 2 we manually disabled the function that falls back to the optimal number of samples.

Figure 2

Logarithmic hardness of dual embedding (usvp-dual) without falling back to the optimal case for a number of samples larger than the optimal number of samples for the LWE instance n = 128, α = 1/(√(2πn)·log² n), and q ≈ n².

Figure 3

Comparison of the logarithmic hardness of the LWE instance n = 128, α = 1/(√(2πn)·log² n), and q ≈ n² of the algorithms meet-in-the-middle (mitm), distinguishing (dual), decoding (dec), standard embedding (usvp-primal), and dual embedding (usvp-dual), when limiting the number of samples.

In Figure 3 we show the effect of limiting the available number of samples on the considered algorithms. We do not include coded-BKW in this plot, since the number of samples required to apply the attack is very large (about 10²²). The first thing that stands out is that limiting the number of samples leads to a clearly notable increase of the logarithmic hardness for all shown algorithms except exhaustive search and BKW; the latter are basically not applicable for a limited number of samples. Furthermore, while the algorithms labeled mitm, dual, dec, and usvp-primal are applicable for roughly the same interval of samples, the dual embedding algorithm (usvp-dual) stands out: the logarithmic hardness of dual embedding is lower than for the other algorithms for m > 150. The reason is that during the dual embedding SVP is solved in a lattice of dimension n + m when only m samples are given. Moreover, the dual embedding attack is the most efficient attack up to roughly 350 samples; afterwards, it is as efficient as the standard embedding (usvp-primal).

6 Impact on concrete instances

Table 17

Comparison of hardness estimations with and without accounting for a restricted number of samples for the Lindner–Peikert encryption scheme with n = 256, q = 4093, α = 8.35·√(2π)/q, and m = 384.

           | t_k,q-sieve    | t_k,sieve      | t_k,enum       | LP
LWE solver | m=∞   | m=384  | m=∞   | m=384  | m=∞   | m=384  | m=∞   | m=384
mitm       | 407.0 | 407.0  | 407.0 | 407.0  | 407.0 | 407.0  | 407.0 | 407.0
usvp       | 97.7  | 102.0  | 104.2 | 108.9  | 144.6 | 157.0  | 149.9 | 159.9
dec        | 106.1 | 111.5  | 111.5 | 117.2  | 138.0 | 143.4  | 144.3 | 148.7
dual       | 106.2 | 132.5  | 112.3 | 133.1  | 166.0 | 189.1  | 158.0 | 165.2
bkw        | 212.8 | –      | 212.8 | –      | 212.8 | –      | 212.8 | –

We tested and compared various proposed parameters of different primitives such as signature schemes [10, 5], encryption schemes [26, 18], and key exchange protocols [6, 13, 12]. In this section we explain our findings using an instantiation of the encryption scheme by Lindner and Peikert [26] as an example. It aims at “medium security” (about 128 bits) and provides n + ℓ samples, where ℓ is the message length. For our experiments, we use ℓ = 1. The secret follows the error distribution, which means that it is not small. However, we expect a similar behavior for small secret instances.

Except for bkw and mitm, all considered attacks use basis reduction as a subroutine. As explained in Section 3, several ways to predict the performance of basis reduction exist. Assuming that sieving scales as predicted to higher dimensions leads to the smallest runtime estimates for BKZ on quantum (called t_k,q-sieve) and classical (t_k,sieve) computers. However, due to the subexponential memory requirement of sieving, it might be unrealistic that sieving is the most efficient attack (with respect to runtime and memory consumption), and hence enumeration might remain the best SVP solver even for high dimensions. Consequently, we include the runtime estimation of BKZ with enumeration (t_k,enum) in our experiments. Finally, we also performed experiments using the prediction by Lindner and Peikert (LP).

Our results are summarized in Table 17. We write “–” if the corresponding algorithm was not applicable for the tested instance of the Lindner–Peikert scheme. Since bkw and mitm do not use basis reduction as a subroutine, their runtimes are independent of the BKZ prediction used.

For the LWE instance considered, the best attack with arbitrarily many samples always remains the best attack after restricting the number of samples. Restricting the samples always leads to an increased runtime for every attack, up to a factor of 2²⁶. Considering only the best attack shows that the hardness increases by about 5 bits. Unsurprisingly, usvp (which consists nearly solely of basis reduction) performs best when we assume that BKZ is fast, but gets outperformed by the decoding attack when we assume larger runtimes for BKZ.

7 Summary

In this work, we present an analysis of the hardness of LWE for the case of a restricted number of samples. For this, we briefly describe the distinguishing attack, the decoding attack, standard embedding, and dual embedding, and analyze them with regard to a restricted number of samples. We also analyze the small secret variants of the mentioned algorithms under the same restriction of samples.

We adapt the existing software tool LWE-Estimator to take the results of our analysis into account. Moreover, we also adapt the algorithms BKW and meet-in-the-middle, which are omitted in the theoretical description. Finally, we present examples, compare hardness estimations with optimal and restricted numbers of samples, and discuss our results.

The usage of a restricted set of samples has its limitations: e.g., if too few samples are given, attacks such as BKW are not applicable. On the other hand, it is possible to construct LWE samples from a given set of samples. For example, in [17] ideas how to generate additional samples (at the cost of higher noise) are presented. An integration into the LWE-Estimator and a comparison of those methods would give an interesting insight, since it may lead to improvements of the estimation, especially for the algorithms exhaustive search and BKW.


Communicated by Spyros Magliveras


Award Identifier / Grant number: CRC 1119 CROSSING

Funding statement: This work has been supported by the German Research Foundation (DFG) as part of project P1 within the CRC 1119 CROSSING.

References

[1] M. R. Albrecht, C. Cid, J.-C. Faugère, R. Fitzpatrick and L. Perret, On the complexity of the BKW algorithm on LWE, Des. Codes Cryptogr. 74 (2015), no. 2, 325–354. 10.1007/s10623-013-9864-x

[2] M. R. Albrecht, R. Fitzpatrick and F. Göpfert, On the efficacy of solving LWE by reduction to unique-SVP, Information Security and Cryptology – ICISC 2013, Lecture Notes in Comput. Sci. 8565, Springer, Berlin (2014), 293–310. 10.1007/978-3-319-12160-4_18

[3] M. R. Albrecht, F. Göpfert, C. Lefebvre, R. Player and S. Scott, Estimator for the bit security of LWE instances, 2016, https://bitbucket.org/malb/lwe-estimator [Online; accessed 01-June-2017].

[4] M. R. Albrecht, R. Player and S. Scott, On the concrete hardness of learning with errors, J. Math. Cryptol. 9 (2015), no. 3, 169–203. 10.1515/jmc-2015-0016

[5] E. Alkim, N. Bindel, J. Buchmann, O. Dagdelen, E. Eaton, G. Gutoski, J. Krämer and F. Pawlega, Revisiting TESLA in the quantum random oracle model, Post-Quantum Cryptography, Lecture Notes in Comput. Sci. 10346, Springer, Berlin (2017), 143–162. 10.1007/978-3-319-59879-6_9

[6] E. Alkim, L. Ducas, T. Pöppelmann and P. Schwabe, Post-quantum key exchange – A new hope, Proceedings of the 25th USENIX Security Symposium (Austin 2016), USENIX, Berkeley (2016), 327–343.

[7] B. Applebaum, D. Cash, C. Peikert and A. Sahai, Fast cryptographic primitives and circular-secure encryption based on hard learning problems, Advances in Cryptology – CRYPTO 2009, Lecture Notes in Comput. Sci. 5677, Springer, Berlin (2009), 595–618. 10.1007/978-3-642-03356-8_35

[8] S. Arora and R. Ge, New algorithms for learning in presence of errors, Automata, Languages and Programming. Part I, Lecture Notes in Comput. Sci. 6755, Springer, Berlin (2011), 403–415. 10.1007/978-3-642-22006-7_34

[9] L. Babai, On Lovász’ lattice reduction and the nearest lattice point problem, STACS 85 (Saarbrücken 1985), Lecture Notes in Comput. Sci. 182, Springer, Berlin (1985), 13–20. 10.1007/BFb0023990

[10] S. Bai and S. D. Galbraith, An improved compression technique for signatures based on learning with errors, Topics in Cryptology – CT-RSA 2014, Lecture Notes in Comput. Sci. 8366, Springer, Berlin (2014), 28–47. 10.1007/978-3-319-04852-9_2

[11] A. Becker, L. Ducas, N. Gama and T. Laarhoven, New directions in nearest neighbor searching with applications to lattice sieving, Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, ACM, New York (2016), 10–24. 10.1137/1.9781611974331.ch2

[12] J. Bos, C. Costello, L. Ducas, I. Mironov, M. Naehrig, V. Nikolaenko, A. Raghunathan and D. Stebila, Frodo: Take off the ring! Practical, quantum-secure key exchange from LWE, Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, ACM, New York (2016), 1006–1018. 10.1145/2976749.2978425

[13] J. Bos, C. Costello, M. Naehrig and D. Stebila, Post-quantum key exchange for the TLS protocol from the ring learning with errors problem, IEEE Symposium on Security and Privacy, IEEE Press, Piscataway (2015), 553–570. 10.1109/SP.2015.40

[14] Y. Chen and P. Q. Nguyen, BKZ 2.0: Better lattice security estimates, Advances in Cryptology – ASIACRYPT 2011, Lecture Notes in Comput. Sci. 7073, Springer, Berlin (2011), 1–20. 10.1007/978-3-642-25385-0_1

[15] H. Chernoff, A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations, Ann. Math. Statistics 23 (1952), 493–507. 10.1214/aoms/1177729330

[16] Ö. Dagdelen, R. El Bansarkhani, F. Göpfert, T. Güneysu, T. Oder, T. Pöppelmann, A. H. Sánchez and P. Schwabe, High-speed signatures from standard lattices, Progress in Cryptology – LATINCRYPT 2014, Lecture Notes in Comput. Sci. 8895, Springer, Berlin (2015), 84–103. 10.1007/978-3-319-16295-9_5

[17] A. Duc, F. Tramèr and S. Vaudenay, Better algorithms for LWE and LWR, Advances in Cryptology – EUROCRYPT 2015. Part I, Lecture Notes in Comput. Sci. 9056, Springer, Berlin (2015), 173–202. 10.1007/978-3-662-46800-5_8

[18] R. El Bansarkhani, Lara – A design concept for lattice-based encryption, preprint (2017), https://eprint.iacr.org/2017/049.pdf. 10.1007/978-3-030-32101-7_23

[19] N. Gama, P. Q. Nguyen and O. Regev, Lattice enumeration using extreme pruning, Advances in Cryptology – EUROCRYPT 2010, Lecture Notes in Comput. Sci. 6110, Springer, Berlin (2010), 257–278. 10.1007/978-3-642-13190-5_13

[20] C. Gentry, C. Peikert and V. Vaikuntanathan, Trapdoors for hard lattices and new cryptographic constructions, Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing – STOC’08, ACM, New York (2008), 197–206. 10.1145/1374376.1374407

[21] F. Göpfert, Securely instantiating cryptographic schemes based on the learning with errors assumption, PhD thesis, Darmstadt University of Technology, Darmstadt, 2016.

[22] G. Hanrot, X. Pujol and D. Stehlé, Algorithms for the shortest and closest lattice vector problems, Coding and Cryptology, Lecture Notes in Comput. Sci. 6639, Springer, Berlin (2011), 159–190. 10.1007/978-3-642-20901-7_10

[23] T. Laarhoven, M. Mosca and J. van de Pol, Finding shortest lattice vectors faster using quantum search, Des. Codes Cryptogr. 77 (2015), no. 2–3, 375–400. 10.1007/s10623-015-0067-5

[24] A. K. Lenstra, H. W. Lenstra, Jr. and L. Lovász, Factoring polynomials with rational coefficients, Math. Ann. 261 (1982), no. 4, 515–534. 10.1007/BF01457454

[25] H. W. Lenstra, Jr., Integer programming with a fixed number of variables, Math. Oper. Res. 8 (1983), no. 4, 538–548. 10.1287/moor.8.4.538

[26] R. Lindner and C. Peikert, Better key sizes (and attacks) for LWE-based encryption, Topics in Cryptology – CT-RSA 2011, Lecture Notes in Comput. Sci. 6558, Springer, Berlin (2011), 319–339. 10.1007/978-3-642-19074-2_21

[27] V. Lyubashevsky and D. Micciancio, On bounded distance decoding, unique shortest vectors, and the minimum distance problem, Advances in Cryptology – CRYPTO 2009, Lecture Notes in Comput. Sci. 5677, Springer, Berlin (2009), 577–594. 10.1007/978-3-642-03356-8_34

[28] D. Micciancio and O. Regev, Lattice-based cryptography, Post-Quantum Cryptography, Springer, Berlin (2009), 147–191. 10.1007/978-3-540-88702-7_5

[29] P. Q. Nguyên and D. Stehlé, Floating-point LLL revisited, Advances in Cryptology – EUROCRYPT 2005, Lecture Notes in Comput. Sci. 3494, Springer, Berlin (2005), 215–233. 10.1007/11426639_13

[30] C. Peikert, Public-key cryptosystems from the worst-case shortest vector problem: extended abstract, Proceedings of the 2009 ACM International Symposium on Theory of Computing – STOC’09, ACM, New York (2009), 333–342. 10.1145/1536414.1536461

[31] O. Regev, On lattices, learning with errors, random linear codes, and cryptography, Proceedings of the 37th Annual ACM Symposium on Theory of Computing – STOC’05, ACM, New York (2005), 84–93. 10.1145/1060590.1060603

[32] C.-P. Schnorr and M. Euchner, Lattice basis reduction: Improved practical algorithms and solving subset sum problems, Math. Program. 66 (1994), no. 2, 181–199. 10.1007/BF01581144

Received: 2017-07-25
Revised: 2018-05-27
Accepted: 2018-06-21
Published Online: 2018-08-08
Published in Print: 2019-03-01

© 2019 Walter de Gruyter GmbH, Berlin/Boston

This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
