## Abstract

The Learning With Errors (LWE) problem is one of the most important hardness
assumptions on which lattice-based constructions base their security. In 2015,
Albrecht, Player and Scott presented the
software tool *LWE-Estimator* to estimate the hardness of concrete LWE
instances, making it easier to choose parameters for lattice-based primitives
and to compare them.
To give lower bounds on the hardness, it is assumed that
each algorithm is given the corresponding optimal number of samples. However,
this is not the case in many cryptographic applications.
In this work
we first analyze the hardness of LWE instances given a restricted number of
samples. For this, we describe LWE solvers from the literature and estimate
their runtime considering a limited number of samples.
Based on our theoretical results we extend the LWE-Estimator.
Furthermore, we evaluate LWE instances proposed for cryptographic schemes
and show the impact of restricting the number of available samples.

## 1 Introduction

The Learning With Errors (LWE) problem is used in the construction of many cryptographic lattice-based primitives [20, 30, 31]. It became popular due to its flexibility for instantiating very different cryptographic solutions and its (presumed) hardness against quantum algorithms. Moreover, LWE can be instantiated such that it is provably as hard as worst-case lattice problems [31].

In general, an instance of LWE is characterized by the dimension *n*, the
modulus *q*, and the relative error rate α, and an attacker is given access to
*m* LWE samples.

To ease the hardness estimation of concrete instances of LWE,
the *LWE-Estimator* [3, 4]
was introduced. In
particular, the LWE-Estimator is a very useful software tool to choose and
compare concrete parameters for lattice-based primitives. To this
end, the LWE-Estimator summarizes and combines existing attacks to
solve LWE from the literature. The effectiveness of LWE solvers often
depends on the
number of given LWE samples. To give conservative bounds on the hardness of
LWE, the
LWE-Estimator assumes that the *optimal* number of samples is given for
each algorithm, i.e., the number of samples for which the algorithm runs in
minimal time. However, in cryptographic applications the optimal number of
samples is often not available. In such cases, the hardness of the used
LWE instances as estimated by the LWE-Estimator might be overly conservative.
Hence, the system parameters of cryptographic primitives based on those
hardness assumptions are also more conservative than necessary from the
viewpoint of state-of-the-art cryptanalysis. A more precise hardness estimation
takes the restricted number of samples available in cryptographic applications
into account.

In this work we close this gap. We extend the theoretical analysis
and the LWE-Estimator such that the hardness of an LWE instance is computed
when only a restricted number of samples is given.
As in [4], our analysis is based
on the
following algorithms: exhaustive search, the Blum–Kalai–Wassermann (BKW)
algorithm, the distinguishing attack, the decoding attack, and the standard
embedding approach. In contrast to the existing LWE-Estimator we
do not adapt the algorithm proposed by Arora and
Ge [8] due to
its high cost and consequently insignificant practical use.
Additionally, we also analyze the dual embedding attack. This variant of
the standard embedding approach is very suitable for instances with a small
number of samples since the embedding lattice is of dimension m + n + 1.

Moreover, we evaluate our implementation to show that the hardness estimates for most of the considered algorithms are influenced significantly by limiting the number of available samples. Furthermore, we show how the impact of reducing the number of samples differs depending on the model in which the hardness is estimated.

Our implementation is already integrated into the existing LWE-Estimator at https://bitbucket.org/malb/lwe-estimator (from commit-id eb45a74 on). In our implementation, we always use the existing estimations with optimal number of samples, if the given restricted number of samples exceeds the optimal number. If not enough samples are given, we calculate the computational costs using the estimations presented in this work.

Figure 1 shows the categorization by strategies used to solve
LWE: One approach reduces LWE to finding a short vector in
the dual lattice formed by the given samples, also known as Short Integer
Solution (SIS) problem. Another strategy solves LWE by considering it as a
Bounded Distance Decoding (BDD) problem. The *direct* strategy solves for
the secret directly.
In Figure 1, we dash-frame algorithms that make use of basis
reduction methods. The algorithms considered in this work are written in bold.

In Section 2 we introduce notations and definitions required for the subsequent sections. In Section 3 we describe basis reduction and its runtime estimations. In Section 4 we give our analyses of the considered LWE solvers. In Section 5 we describe and evaluate our implementation. In Section 6 we explain how restricting the number of samples impacts the bit-hardness in different models. In Section 7 we summarize our work.

## 2 Preliminaries

### 2.1 Notation

We follow the notation used by
Albrecht, Player and Scott [4].
Logarithms are base 2 if not indicated otherwise. We write *i*-th component of
*i*-th vector of a list of vectors. Moreover, we denote the
concatenation of two vectors

With *S*, we denote sampling the element *s* uniformly from
*S* with *x*
is sampled according to χ.
Moreover, we denote sampling each coordinate of a matrix

### 2.2 Lattices

For definitions of a lattice *L*, its rank, its bases, and its
determinant

The distance between a lattice *L* and a vector

Furthermore, the
*i*-th successive minimum of *L* is
defined as the smallest radius *r* such that there are *i* linearly independent
vectors of norm at most *r* in the lattice. Let *L* be an *m*-dimensional lattice.
Then the Gaussian heuristic is given as

and the Hermite factor of a basis is given as

where
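Both quantities are easy to evaluate numerically; a minimal sketch, assuming the standard forms λ₁(L) ≈ √(m/(2πe))·det(L)^{1/m} for the Gaussian heuristic and ‖b₁‖ = δ₀^m·det(L)^{1/m} for the root-Hermite factor (function names are ours):

```python
import math

def gaussian_heuristic(m, det):
    """Predicted length of a shortest vector in an m-dimensional
    lattice of determinant det (standard Gaussian heuristic)."""
    return math.sqrt(m / (2 * math.pi * math.e)) * det ** (1.0 / m)

def root_hermite_factor(b1_norm, m, det):
    """Root-Hermite factor delta_0 with ||b1|| = delta_0^m * det^(1/m)."""
    return (b1_norm / det ** (1.0 / m)) ** (1.0 / m)
```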

Finally, we define the fundamental parallelepiped as follows.
Let
*n* vectors

### 2.3 The LWE problem and solving strategies

In the following we recall the definition of LWE.

### Definition 1 (Learning with Errors distribution).

Let *n* and

### Definition 2 (Learning with Errors problem).

Let *m* samples *m* samples

In Regev’s original definition of LWE, the attacker has access to arbitrarily many
LWE samples, which means that

often written as matrix *sample matrix*.

In the original definition, *n* samples, an LWE instance can be constructed where the secret

Two characterizations of LWE are considered in this work: (1)
the generic characterization by

#### 2.3.1 Learning with Errors problem with small secret

In the following, let *modulus
switching* as explained next. Let

The transformed
samples can be constructed such that

and

The result is an LWE instance with errors
having standard deviation *g* components of the secret at
first. Then, the algorithm runs in dimension *g* somewhere in between zero and *n*.

The two main hardness assumptions leading to the basic strategies of solving LWE are the Short Integer Solutions (SIS) problem and the Bounded Distance Decoding (BDD) problem. We describe both of them in the following.

#### 2.3.2 Short Integer Solutions problem

The Short Integer Solutions (SIS) problem is defined as follows:
Given a matrix *n* vectors

Solving the SIS problem with appropriate parameters solves Decision-LWE. Given *m* samples written as

#### 2.3.3 Bounded Distance Decoding problem

The BDD problem is defined as follows.
Given a lattice *L*, a target vector

An LWE instance

## 3 Description of basis reduction algorithms

Basis reduction is a very important building block of most of the algorithms
to solve LWE considered in this paper. It is applied to a lattice *L* to find a
basis *L* such that the basis
vectors

Following the convention of
Albrecht, Player and Scott [4], we
assume that the first non-zero
vector

### The Lenstra–Lenstra–Lovász algorithm.

Let *L* be a
lattice with basis

Let *i* with

### The Blockwise Korkine–Zolotarev algorithm.

The BKZ algorithm employs an algorithm to solve several SVP instances of smaller dimension, which can be seen as an SVP oracle. The SVP oracle can be implemented by computing the Voronoi cells of the lattice, by sieving, or by enumeration. During BKZ several BKZ rounds are done. In each BKZ round an SVP oracle is called several times to receive a better basis after each round. The algorithm terminates when the quality of the basis remains unchanged after another BKZ round. The differences between BKZ and BKZ 2.0 are the usage of extreme pruning [19], early termination, limiting the enumeration radius to the Gaussian heuristic, and local block pre-processing [14].

There exist several practical estimations of the runtime

in clock cycles, called LP model. This result should be used carefully, since applying this estimation implies the existence of a subexponential algorithm for solving LWE [4]. The estimation – shown by Albrecht, Cid, Faugère, Fitzpatrick and Perret [1] –

called the
delta-squared
model, is non-linear in the logarithm of the root-Hermite factor. For an *n*-dimensional lattice, the running time
in clock cycles is estimated to be

where ρ is the number of BKZ rounds and *k*. Even though ρ is
exponentially upper bounded by

where *k* are used and compared:^{[1]}

The estimation

Under the Gaussian heuristic and geometric series assumption, the following
correspondence between the block size *k* and

where *k*. As examples show, this estimation may also be
applied when *n* is finite [4]. As a function
of *k*,
the *lattice rule of thumb* approximates *k*. The simplified lattice rule of thumb is indeed closer to the expected
behavior than the lattice rule of thumb, but it implies a subexponential
algorithm for solving LWE. For later reference we write
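In concrete numbers, the LP and delta-squared estimates can be evaluated as follows. The constants (1.8/log₂δ₀ − 110 and 0.009/log₂²δ₀ − 27 for log₂ of the running time in seconds, converted to clock cycles at 2.3 GHz) are the commonly cited forms and should be treated as assumptions of this sketch:

```python
import math

def log2_runtime_lp(delta0, clock_hz=2.3e9):
    """LP model: log2 of the BKZ running time in clock cycles.
    1.8/log2(delta0) - 110 gives log2(seconds); conversion at clock_hz
    (2.3 GHz is the commonly used figure) yields clock cycles."""
    return 1.8 / math.log2(delta0) - 110 + math.log2(clock_hz)

def log2_runtime_delta_squared(delta0, clock_hz=2.3e9):
    """delta-squared model: non-linear in log2(delta0),
    0.009/log2(delta0)**2 - 27 in log2(seconds)."""
    return 0.009 / math.log2(delta0) ** 2 - 27 + math.log2(clock_hz)
```

Both estimates grow rapidly as δ₀ approaches 1, i.e., as the required basis quality increases.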

## 4 Description of algorithms to solve the LWE problem

In this section we describe the algorithms used to estimate the hardness of LWE and analyze them regarding their computational cost. If there exists a small secret variant of an algorithm, the corresponding section is divided into general and small secret variant.

Since the goal of this paper is to investigate how the number of samples *m*
influences
the hardness of LWE, we restrict our attention to attacks that are practical for
restricted *m*. This excludes Arora and Ge’s algorithm and BKW, which require
at least sub-exponential *m*.
Furthermore, we do not include purely combinatorial attacks like exhaustive
search or meet-in-the-middle, since their runtime is not influenced by *m*.

### 4.1 Distinguishing attack

The distinguishing attack solves decisional LWE via the SIS strategy
using basis reduction. For this, the dual lattice
*m*, the rank is *m*, and

#### 4.1.1 General variant of the distinguishing attack

The success probability ϵ is the advantage of distinguishing

In order to achieve a fixed
success probability ϵ, a vector

is needed. Let

The logarithm of

where *m* is the given number of LWE samples. To estimate the runtime of
the distinguishing attack, it is sufficient to determine the block size *k*
of BKZ derived in Section 3 following the second approach.
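Concretely, with the standard estimate ε = exp(−π(‖v‖α)²) for the distinguishing advantage of a dual vector v, the required vector length and the root-Hermite factor needed to reach it (via ‖v‖ = δ₀^m·q^{n/m}) can be sketched as follows; the formulas are the commonly used ones and the function names are ours:

```python
import math

def required_vector_length(alpha, eps):
    """Length ||v|| needed for distinguishing advantage eps,
    assuming eps = exp(-pi * (||v|| * alpha)**2)."""
    return math.sqrt(math.log(1.0 / eps) / math.pi) / alpha

def required_delta0(n, q, m, v_norm):
    """Root-Hermite factor needed so that basis reduction finds a dual
    vector of length v_norm = delta0**m * q**(n/m) in dimension m."""
    return (v_norm / q ** (n / m)) ** (1.0 / m)
```

The smaller the advantage ε an attacker is content with, the longer the admissible vector and hence the larger (easier) the required δ₀.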

| Model | Logarithmic runtime |
| --- | --- |
| LP | |
| delta-squared | |

| Relation | Block size *k* |
| --- | --- |

On the one hand, the runtime of BKZ decreases exponentially with the length
of

#### 4.1.2 Small secret variant of the distinguishing attack

The distinguishing attack for small secrets works
similar to the general case, but it exploits the smallness of the
secret

Using the same reasoning as in the standard case, the required *n*,*p*-LWE instance is given by

where *p* can be estimated by equation (2.1). The rest of
the algorithm remains the same as in the standard case.
Table 3 gives the runtime estimations in
the LP and the delta-squared model described
in Section 3.
Table 4
gives the block size *k* of BKZ derived in Section 3
following the second approach to estimate the runtime of the distinguishing
attack
with small secret.
Combining this algorithm with exhaustive search as described in
Section 2.3.1 may improve the runtime.

| Model | Logarithmic runtime |
| --- | --- |
| LP | |
| delta-squared | |

| Relation | Block size *k* |
| --- | --- |

### 4.2 Decoding approach

The decoding approach solves LWE via the BDD strategy described in
Section 2. The procedure considers the lattice
*L*.
In the decoding phase the resulting basis is used to find a close lattice
vector

In the following let the *target success
probability* be the overall success probability of the attack, chosen by the
attacker (usually close to 1). In contrast, the success probability refers to
the success probability of a single run of the algorithm. The target success
probability is achieved by running the algorithm potentially multiple times with
a certain success probability for each single run.

#### 4.2.1 General variant of decoding approach

To solve BDD, and therefore LWE, the most basic algorithm is Babai’s Nearest
Plane algorithm [9]. Given a BDD instance
*m*
samples, the solving algorithm consists of two steps. First, basis
reduction on the lattice
*L* with root-Hermite
factor

The result of the algorithm is the lattice point

Hence, an attacker can adjust his overall runtime according to the trade-off between the quality of the basis reduction and the success probability.

Lindner and Peikert [26] present a modification of the Nearest
Plane algorithm named Nearest Planes.
They introduce additional parameters *i*-th
level of recursion.

The success probability of the Nearest Planes algorithm is the
probability of

To choose values

where

The runtime of the basis reduction is determined by

The runtime of the decoding step for Lindner and Peikert’s Nearest Planes
algorithm is determined by the number of points

Since no closed formula is known to calculate the values

Since this analysis only considers a fixed success probability, the best trade-off between success probability and the running time of a single execution must be found by repeating the process above with varying values of the fixed success probability.
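Lindner and Peikert estimate the per-run success probability as a product of error functions over the recursion levels. The exact form below, ∏ᵢ erf(dᵢ‖bᵢ*‖√π/(2s)) with Gaussian parameter s = αq, is the commonly cited version and should be read as an assumption of this sketch:

```python
import math

def nearest_planes_success(d, gso_norms, s):
    """Estimated success probability of one Nearest Planes run:
    product over levels i of erf(d_i * ||b_i*|| * sqrt(pi) / (2*s)),
    where d_i is the number of planes searched at level i,
    gso_norms[i] the Gram-Schmidt norm, and s the Gaussian parameter."""
    p = 1.0
    for d_i, b_i in zip(d, gso_norms):
        p *= math.erf(d_i * b_i * math.sqrt(math.pi) / (2.0 * s))
    return p
```

Increasing the stretching factors dᵢ raises the success probability of a single run at the cost of a more expensive decoding step.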

#### 4.2.2 Small secret variant of decoding approach

The decoding approach for small secrets works the same as in the general
case, but it exploits the smallness of the secret

### 4.3 Standard embedding

The standard embedding attack solves LWE via reduction to uSVP.
The reduction is done by creating an

Let

be the q-ary lattice defined by the matrix

If

see [27, 16].
Therefore,

To determine the success probability and the runtime, we distinguish between
two cases:

Based on Albrecht, Fitzpatrick and Göpfert [2], Göpfert shows [21, Section 3.1.3] that the standard embedding attack succeeds with non-negligible probability if

where *m* is the number of LWE samples. The value τ is
experimentally determined to be

In Table 5, we put everything together and state the
runtime of the standard embedding attack for the cases from Section 3
in the LP and the delta-squared model.
Table 6
gives the block size *k* of BKZ derived in Section 3
following the second approach to estimate the runtime of the standard embedding
attack.

| Model | Logarithmic runtime |
| --- | --- |
| LP | |
| delta-squared | |

| Relation | Block size *k* |
| --- | --- |

As discussed above, the success probability ϵ of a single run
depends on τ and thus does not necessarily yield the desired target
success probability

Consequently, it has to be considered that ρ executions of this algorithm have to be done, i.e., the runtime has to be multiplied by ρ. As before, we assume that the samples may be reused in each run.

#### 4.3.1 Small secret variant of standard embedding

To solve a small secret LWE instance based on embedding, the strategy
described in Section 2.3.1 can be applied: First, modulus
switching is used and afterwards the algorithm is combined with
exhaustive search.
The standard embedding attack on LWE with small
secret using modulus switching works the same as standard embedding in the
non-small secret case, except that it operates on instances characterized by
*n*, *p* instead of *n*, α, and *q* with

where *p* can be estimated by equation (2.1). As stated in
the description of the standard case, the overall runtime of the algorithm is
determined depending on

In Table 7, we state the
runtime of the standard embedding attack in the LP and the delta-squared
model. Table 8
gives the block size *k* of BKZ derived in Section 3
following the second approach to estimate the runtime of the standard embedding
attack with small secret.
The success probability remains the same.

| Model | Logarithmic runtime |
| --- | --- |
| LP | |
| delta-squared | |

| Relation | Block size *k* |
| --- | --- |

### 4.4 Dual embedding

Dual embedding is very similar to standard embedding shown in
Section 4.3. However, since the embedding is into a
different lattice,
the dual embedding algorithm runs in dimension m + n + 1. Therefore, it is more
suitable for instances with a restricted number of
LWE samples [16]. In case the optimal number of samples is given
(as assumed in the LWE-Estimator by Albrecht, Player and Scott [4]) this attack is as
efficient as the standard embedding attack. Hence, it was not included in the
LWE-Estimator so far.

#### 4.4.1 General variant of dual embedding

For an LWE instance

with

to be the lattice in which
uSVP is solved. Considering

and therefore

Since this attack is similar to standard embedding, the estimations of the success probability and the running time are the same except for adjustments with respect to the dimension and determinant. Hence, the dual embedding attack is successful if the root-Hermite factor fulfills

while the number of LWE samples is *m*.
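The structural fact behind this attack can be checked directly: since b ≡ As + e (mod q), the vector (s, e, −1) is a short vector of the q-ary lattice {v ∈ Z^{m+n+1} : (A | I_m | b)·v ≡ 0 (mod q)}, which has dimension n + m + 1. A self-contained toy check (parameters of our own choosing):

```python
import random

random.seed(1)
n, m, q = 4, 6, 97
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
s = [random.randrange(-1, 2) for _ in range(n)]   # small secret in {-1,0,1}
e = [random.randrange(-2, 3) for _ in range(m)]   # small error
b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]

# v = (s, e, -1) has dimension n + m + 1 and satisfies (A | I_m | b) v = 0 mod q
v = s + e + [-1]
check = [(sum(A[i][j] * v[j] for j in range(n))   # A * s part
          + v[n + i]                               # I_m * e part
          + b[i] * v[n + m]) % q                   # b * (-1) part
         for i in range(m)]
assert all(c == 0 for c in check)
```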

In Table 9, we state the
runtime of the dual embedding attack in the LP and the delta-squared
model. Table 10
gives the block size *k* of BKZ derived in Section 3
following the second approach to estimate the runtime of the dual embedding
attack.

| Model | Logarithmic runtime |
| --- | --- |
| LP | |
| delta-squared | |

| Relation | Block size *k* |
| --- | --- |

Since this algorithm is not mentioned in [4],
we explain the analysis for an unlimited number of samples in the
following.
The case where the number of samples
is not limited, and thus the optimal number of samples can be used, is a
special case of the discussion above. To be more precise,
to find the optimal number of samples *m*
with maximal

#### 4.4.2 Small secret variant of dual embedding

There are two small secret variants of the dual embedding attack: one is similar to the small secret variant of the standard embedding; the other is better known as the embedding attack by Bai and Galbraith. Both are described in the following.

##### Small secret variant of dual embedding with modulus switching.

As before, the strategy described in Section 2.3.1 can be
applied: First, modulus switching is used and afterwards the algorithm is
combined with exhaustive search.
This variant works the same as dual embedding in the non-small secret case,
except that it operates on instances characterized by *n*, *p* instead of *n*, α, and *q* with

where *p* can be estimated by equation (2.1).

In Table 11, we state the
runtime of the dual embedding attack with small secret in the LP and the
delta-squared
model. Table 12
gives the block size *k* of BKZ derived in Section 3
following the second approach to estimate the runtime of the dual embedding
attack with small secret. The success probability remains the same.

| Model | Logarithmic runtime |
| --- | --- |
| LP | |
| delta-squared | |

| Relation | Block size *k* |
| --- | --- |

##### Bai and Galbraith’s embedding.

The embedding attack by Bai and Galbraith [10] solves
LWE with a small secret vector

for the matrix

in order to recover the short vector

To tackle this, the lattice should be scaled such that it is
more balanced, i.e., the first *n* rows of
the lattice basis are multiplied by a factor depending on σ
(see [10]). Hence, the determinant of the
lattice is increased by a factor of

where *m* LWE samples are used.

In Table 13, we state the
runtime of the Bai-Galbraith embedding attack in the LP and the
delta-squared
model. Table 14
gives the block size *k* of BKZ derived in Section 3
following the second approach to estimate the runtime of the Bai-Galbraith
embedding attack with small secret.

| Model | Logarithmic runtime |
| --- | --- |
| LP | |
| delta-squared | |

| Relation | Block size *k* |
| --- | --- |

The success probability is determined similar to the standard embedding, see equation (4.3) in Section 4.3.

Similar to the other algorithms for LWE with small secret, the runtime of
Bai and Galbraith’s attack can be combined with exhaustive search
guessing parts of the secret. However,
in contrast to the other algorithms using basis reduction, Bai and Galbraith
state that applying modulus switching to their algorithm does not
improve the result. The reason is that modulus
switching reduces *q* by a larger factor than it reduces the size of the error.

## 5 Implementation

In this section, we describe our implementation of the results presented in Section 4 as an extension of the LWE-Estimator introduced in [3, 4]. Furthermore, we compare results of our implementation focusing on the behavior when limiting the number of available LWE samples.

### 5.1 Description of our implementation

Our extension is also written in Sage and was merged into the original LWE-Estimator in March 2017 (from commit-id eb45a74 on). For our experiments, we used the version of the LWE-Estimator from June 2017 (commit-id: e0638ac).

Except for Arora and Ge’s algorithm based on Gröbner bases, we adapt each algorithm the LWE-Estimator implements to take a fixed number of samples into account if a number of samples is given by the user. If not, each of the implemented algorithms assumes an unlimited number of samples (and hence assumes the optimal number of samples is available). Our implementation also extends the algorithms coded-BKW, decisional-BKW, search-BKW, and the meet-in-the-middle attack (for a description of these algorithms see [4]), although we omitted the theoretical description of these algorithms in Section 4.

Following the notation in [4], we assign an abbreviation to each algorithm:

| Abbreviation | Algorithm |
| --- | --- |
| dual | distinguishing attack, Section 4.1 |
| dec | decoding attack, Section 4.2 |
| usvp-primal | standard embedding, Section 4.3 |
| usvp-dual | dual embedding, Section 4.4 |
| usvp-baigal | Bai-Galbraith embedding, Section 4.4.2 |
| usvp | minimum of usvp-primal, usvp-dual, and usvp-baigal |
| mitm | exhaustive search |
| bkw | coded-BKW |
| arora-gb | Arora and Ge’s algorithm based on Gröbner bases |

The shorthand symbol bkw solely refers to coded-BKW and its small secret variant. Decision-BKW and Search-BKW are not assigned an abbreviation and are not used by the main method estimate_lwe, because coded-BKW is the latest and most efficient BKW algorithm. Nevertheless, the other two BKW algorithms can be called separately via the function bkw, which is a convenience method for the functions bkw_search and bkw_decision, and its corresponding small secret variant bkw_small_secret.

In the LWE-Estimator, the three different embedding approaches usvp-primal, usvp-dual, and usvp-baigal (the latter in the case of LWE with small secret) are summarized as the attack usvp, and the minimum of the three embedding algorithms is returned. In our experiments we show the different impacts of these algorithms and hence display the results of the three embedding approaches separately.

Let the LWE instance be defined by *n*,

The main function to call the LWE-Estimator is called estimate_lwe.
Listing 1 shows how to call the LWE-Estimator on the
given LWE instance (with Gaussian distributed error and secret) including the
following attacks: distinguishing attack, decoding, and embedding attacks.
The first two lines of
Listing 1 define the parameters *m*. In the third line the program is called via
estimate_lwe.
For each algorithm a value rop is returned that gives the hardness
of the LWE instance with respect to the corresponding attack.

### Listing 1.

Basic example of calling the
LWE-Estimator of the LWE instance

Listing 2 shows the estimations of the LWE
instance with

### Listing 2.

Example of calling the
hardness estimations of the small secret LWE instance

In the following, we give interesting insights gained during the implementation.

One problem arises in the decoding attack dec when the number of samples is
very strictly limited. It uses enum_cost to calculate the computational
cost of the decoding step. For this, amongst other things, the stretching
factors
The original LWE-Estimator routine to find the block
size *k* for BKZ, called k_chen, iterates through possible values
of *k*, starting at 40 and increasing *k* by at most 2 in each step, until
the resulting *k* satisfies the used equation. When the given target is very
demanding, only a very large *k* can satisfy the used equation. Hence, it
takes a long time to find the suitable *k*. Therefore,
the previous implementation of finding *k* for BKZ is not
suitable in case a limited number of samples is given. Thus, we replace
this function in our implementation by finding *k* using the secant method, as
presented in Listing 4.

### Listing 3.

Iteration to find *k* in method k_chen of the previous implementation used in the LWE-Estimator.

### Listing 4.

Implementation of method k_chen to
find *k* using the secant method.
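A minimal self-contained sketch of this approach (not the estimator's exact code): Chen's asymptotic δ₀(k) ≈ ((k/(2πe))·(πk)^{1/k})^{1/(2(k−1))} relates the block size to the achieved root-Hermite factor, and the secant method solves δ₀(k) = δ_target directly instead of stepping *k* upward:

```python
import math

def delta0_of_k(k):
    """Chen's asymptotic root-Hermite factor for BKZ with block size k."""
    return (k / (2 * math.pi * math.e)
            * (math.pi * k) ** (1.0 / k)) ** (1.0 / (2 * (k - 1)))

def find_k_secant(delta_target, k0=40.0, k1=100.0, tol=1e-9, max_iter=100):
    """Solve delta0_of_k(k) = delta_target with the secant method."""
    f0 = delta0_of_k(k0) - delta_target
    f1 = delta0_of_k(k1) - delta_target
    for _ in range(max_iter):
        if abs(f1 - f0) < 1e-18:
            break  # flat secant, stop to avoid division by zero
        k2 = k1 - f1 * (k1 - k0) / (f1 - f0)
        k0, f0 = k1, f1
        k1 = k2
        f1 = delta0_of_k(k1) - delta_target
        if abs(f1) < tol:
            break
    return k1
```

Since δ₀(k) is very flat for large *k*, root finding converges in a handful of iterations where a linear scan would need thousands of steps.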

### 5.2 Comparison of implementations and algorithms

In the following, we present hardness estimations of LWE with and without
taking a restricted number of samples into account.
The presented experiments are done for the following LWE instance:

We show the base-2 logarithm of the estimated hardness of the LWE instance under all implemented attacks (except for Arora and Ge’s algorithm) in Table 15. According to the experiments, the hardness decreases with an increasing number of samples and remains the same after the optimal number of samples is reached. If our software could not find a solution, the entry is filled with NaN. This is mostly due to too few samples being provided to apply the respective algorithm.

| Samples | mitm | dual | dec | usvp-primal | usvp-dual |
| --- | --- | --- | --- | --- | --- |
| 100 | 326.5 | 127.3 | 92.3 | NaN | 95.7 |
| 150 | 326.5 | 87.1 | 65.0 | NaN | 55.7 |
| 200 | 326.5 | 77.2 | 57.7 | 263.4 | 49.2 |
| 250 | 326.5 | 74.7 | 56.8 | 68.8 | 48.9 |
| 300 | 326.5 | 74.7 | 56.8 | 51.4 | 48.9 |
| 350 | 326.5 | 74.7 | 56.8 | 48.9 | 48.9 |
| 400 | 326.5 | 74.7 | 56.8 | 48.9 | 48.9 |
| 450 | 326.5 | 74.7 | 56.8 | 48.9 | 48.9 |

The bkw column of Table 15 contains NaN for the smaller numbers of samples and 85.1 once the (very large) required number of samples is available.

In Table 16, we show the logarithmic hardness and the
corresponding optimal number of samples estimated for unlimited number of
samples. It should be noted that some algorithms rely
on multiple executions, e.g., to amplify a low success probability of a single
run to a target success probability. In such a case, the previous
implementation
of the LWE-Estimator assumed new samples for each run of the algorithm. In our
implementation, we assume that samples may be reused
in repeated runs of the same algorithm, giving a lower bound on the hardness
estimations. Hence, the optimal number of
samples computed by the original LWE-Estimator and the optimal number of
samples computed by our method sometimes differ considerably in
Table 16, e.g., for the decoding attack. To compensate for this
and to provide better comparability, we *recalculate* the optimal number
of samples.

| Algorithm | Optimal number of samples (original calculation) | Optimal number of samples (recalculation) | Hardness [bit] |
| --- | --- | --- | --- |
| mitm | 181 | 181 | 395.9 |
| sis | 192795128 | 376 | 74.7 |
| dec | 53436 | 366 | 58.1 |
| usvp | 16412 | 373 | 48.9 |
| bkw | | | 85.1 |

Comparing Table 15 and Table 16 shows that for a number of samples lower than the optimal number of samples, the estimated hardness is either (much) larger than the estimation using optimal number of samples or does not exist. In contrast, for a number of samples greater than or equal to the optimal number of samples, the hardness is exactly the same as in the optimal case, since the implementation falls back on the optimal number of samples when enough samples are given. Without this the hardness would increase again as can be seen for the dual embedding attack in Figure 2. For the results presented in Figure 2 we manually disabled the function to fall back to the optimal number of samples.

In Figure 3 we show the effect of limiting the available
number of samples on the considered algorithms. We do not include
coded-BKW in this plot, since the number of samples required to apply the
attack is very large.
Moreover, the dual embedding attack is the most efficient attack up to roughly
350 samples. Afterwards, it is as efficient as the standard embedding
(usvp-primal).

## 6 Impact on concrete instances

| LWE solver | LP | | | | | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| mitm | 407.0 | 407.0 | 407.0 | 407.0 | 407.0 | 407.0 | 407.0 | 407.0 |
| usvp | 97.7 | 102.0 | 104.2 | 108.9 | 144.6 | 157.0 | 149.9 | 159.9 |
| dec | 106.1 | 111.5 | 111.5 | 117.2 | 138.0 | 143.4 | 144.3 | 148.7 |
| dual | 106.2 | 132.5 | 112.3 | 133.1 | 166.0 | 189.1 | 158.0 | 165.2 |
| bkw | 212.8 | – | 212.8 | – | 212.8 | – | 212.8 | – |

We tested and compared various
proposed parameters of different primitives such as signature
schemes [10, 5], encryption
schemes [26, 18], and key exchange
protocols [6, 13, 12].
In this section we explain our findings using an instantiation of the
encryption scheme by Lindner and Peikert [26] as an example.
It aims at “medium security” (about 128 bits) and provides ^{[5]}
For our experiments, we use

Except for bkw and mitm, all attacks considered
use basis reduction as a subroutine. As explained in Section 3,
several ways to predict the performance of basis reduction exist.
Assuming that sieving scales as predicted to higher dimension leads to
the smallest runtime estimates for BKZ on quantum (called

Our results are summarized in Table 17. We write “–” if the corresponding algorithm was not applicable for the tested instance of the Lindner–Peikert scheme. Since bkw and mitm do not use basis reduction as a subroutine, their runtimes are independent of the used BKZ prediction.

For the LWE instance considered, the best attack with arbitrarily many samples
always remains the best attack after restricting the number of samples.
Restricting the samples always leads to an increased runtime for every
attack, up to a factor of

## 7 Summary

In this work, we present an analysis of the hardness of LWE for the case of a restricted number of samples. For this, we briefly describe the distinguishing attack, the decoding attack, standard embedding, and dual embedding, and analyze them with regard to a restricted number of samples. We also analyze the small secret variants of these algorithms under the same restriction of samples.

We adapt the existing software tool *LWE-Estimator* to take the results of
our analysis into account. Moreover, we also adapt the algorithms BKW
and meet-in-the-middle, which are omitted from the theoretical description.
Finally, we present examples, compare hardness estimations
with optimal and restricted numbers of samples, and discuss our results.

The usage of a restricted set of samples has its limitations: e.g., given too few samples, some attacks, such as BKW, are not applicable at all. On the other hand, it is possible to construct additional LWE samples from a given set. For example, [17] presents ideas on how to generate additional samples at the cost of higher noise. Integrating those methods into the LWE-Estimator and comparing them would give interesting insights, since it may lead to improvements of the estimation, especially for the algorithms exhaustive search and BKW.

**Funding source: **Deutsche Forschungsgemeinschaft

**Award Identifier / Grant number: **CRC 1119 CROSSING

**Funding statement: **This work has been supported by the German Research Foundation (DFG)
as part of project P1 within the CRC 1119 CROSSING.

## References

[1] M. R. Albrecht, C. Cid, J.-C. Faugère, R. Fitzpatrick and L. Perret, On the complexity of the BKW algorithm on LWE, Des. Codes Cryptogr. 74 (2015), no. 2, 325–354. 10.1007/s10623-013-9864-x

[2] M. R. Albrecht, R. Fitzpatrick and F. Göpfert, On the efficacy of solving LWE by reduction to unique-SVP, Information Security and Cryptology – ICISC 2013, Lecture Notes in Comput. Sci. 8565, Springer, Berlin (2014), 293–310. 10.1007/978-3-319-12160-4_18

[3] M. R. Albrecht, F. Göpfert, C. Lefebvre, R. Player and S. Scott, Estimator for the bit security of LWE instances, 2016, https://bitbucket.org/malb/lwe-estimator [Online; accessed 01-June-2017].

[4] M. R. Albrecht, R. Player and S. Scott, On the concrete hardness of learning with errors, J. Math. Cryptol. 9 (2015), no. 3, 169–203. 10.1515/jmc-2015-0016

[5] E. Alkim, N. Bindel, J. Buchmann, O. Dagdelen, E. Eaton, G. Gutoski, J. Krämer and F. Pawlega, Revisiting TESLA in the quantum random oracle model, Post-Quantum Cryptography, Lecture Notes in Comput. Sci. 10346, Springer, Berlin (2017), 143–162. 10.1007/978-3-319-59879-6_9

[6] E. Alkim, L. Ducas, T. Pöppelmann and P. Schwabe, Post-quantum key exchange – A new hope, Proceedings of the 25th USENIX Security Symposium (Austin 2016), USENIX, Berkeley (2016), 327–343.

[7] B. Applebaum, D. Cash, C. Peikert and A. Sahai, Fast cryptographic primitives and circular-secure encryption based on hard learning problems, Advances in Cryptology – CRYPTO 2009, Lecture Notes in Comput. Sci. 5677, Springer, Berlin (2009), 595–618. 10.1007/978-3-642-03356-8_35

[8] S. Arora and R. Ge, New algorithms for learning in presence of errors, Automata, Languages and Programming. Part I, Lecture Notes in Comput. Sci. 6755, Springer, Berlin (2011), 403–415. 10.1007/978-3-642-22006-7_34

[9] L. Babai, On Lovász’ lattice reduction and the nearest lattice point problem, STACS 85 (Saarbrücken 1985), Lecture Notes in Comput. Sci. 182, Springer, Berlin (1985), 13–20. 10.1007/BFb0023990

[10] S. Bai and S. D. Galbraith, An improved compression technique for signatures based on learning with errors, Topics in Cryptology – CT-RSA 2014, Lecture Notes in Comput. Sci. 8366, Springer, Berlin (2014), 28–47. 10.1007/978-3-319-04852-9_2

[11] A. Becker, L. Ducas, N. Gama and T. Laarhoven, New directions in nearest neighbor searching with applications to lattice sieving, Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, ACM, New York (2016), 10–24. 10.1137/1.9781611974331.ch2

[12] J. Bos, C. Costello, L. Ducas, I. Mironov, M. Naehrig, V. Nikolaenko, A. Raghunathan and D. Stebila, Frodo: Take off the ring! Practical, quantum-secure key exchange from LWE, Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, ACM, New York (2016), 1006–1018. 10.1145/2976749.2978425

[13] J. Bos, C. Costello, M. Naehrig and D. Stebila, Post-quantum key exchange for the TLS protocol from the ring learning with errors problem, IEEE Symposium on Security and Privacy, IEEE Press, Piscataway (2015), 553–570. 10.1109/SP.2015.40

[14] Y. Chen and P. Q. Nguyen, BKZ 2.0: Better lattice security estimates, Advances in Cryptology – ASIACRYPT 2011, Lecture Notes in Comput. Sci. 7073, Springer, Berlin (2011), 1–20. 10.1007/978-3-642-25385-0_1

[15] H. Chernoff, A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations, Ann. Math. Statistics 23 (1952), 493–507. 10.1214/aoms/1177729330

[16] Ö. Dagdelen, R. El Bansarkhani, F. Göpfert, T. Güneysu, T. Oder, T. Pöppelmann, A. H. Sánchez and P. Schwabe, High-speed signatures from standard lattices, Progress in Cryptology – LATINCRYPT 2014, Lecture Notes in Comput. Sci. 8895, Springer, Berlin (2015), 84–103. 10.1007/978-3-319-16295-9_5

[17] A. Duc, F. Tramèr and S. Vaudenay, Better algorithms for LWE and LWR, Advances in Cryptology – EUROCRYPT 2015. Part I, Lecture Notes in Comput. Sci. 9056, Springer, Berlin (2015), 173–202. 10.1007/978-3-662-46800-5_8

[18] R. El Bansarkhani, Lara – A design concept for lattice-based encryption, preprint (2017), https://eprint.iacr.org/2017/049.pdf. 10.1007/978-3-030-32101-7_23

[19] N. Gama, P. Q. Nguyen and O. Regev, Lattice enumeration using extreme pruning, Advances in Cryptology – EUROCRYPT 2010, Lecture Notes in Comput. Sci. 6110, Springer, Berlin (2010), 257–278. 10.1007/978-3-642-13190-5_13

[20] C. Gentry, C. Peikert and V. Vaikuntanathan, Trapdoors for hard lattices and new cryptographic constructions, Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing – STOC’08, ACM, New York (2008), 197–206. 10.1145/1374376.1374407

[21] F. Göpfert, Securely instantiating cryptographic schemes based on the learning with errors assumption, PhD thesis, Darmstadt University of Technology, Darmstadt, 2016.

[22] G. Hanrot, X. Pujol and D. Stehlé, Algorithms for the shortest and closest lattice vector problems, Coding and Cryptology, Lecture Notes in Comput. Sci. 6639, Springer, Berlin (2011), 159–190. 10.1007/978-3-642-20901-7_10

[23] T. Laarhoven, M. Mosca and J. van de Pol, Finding shortest lattice vectors faster using quantum search, Des. Codes Cryptogr. 77 (2015), no. 2–3, 375–400. 10.1007/s10623-015-0067-5

[24] A. K. Lenstra, H. W. Lenstra, Jr. and L. Lovász, Factoring polynomials with rational coefficients, Math. Ann. 261 (1982), no. 4, 515–534. 10.1007/BF01457454

[25] H. W. Lenstra, Jr., Integer programming with a fixed number of variables, Math. Oper. Res. 8 (1983), no. 4, 538–548. 10.1287/moor.8.4.538

[26] R. Lindner and C. Peikert, Better key sizes (and attacks) for LWE-based encryption, Topics in Cryptology – CT-RSA 2011, Lecture Notes in Comput. Sci. 6558, Springer, Berlin (2011), 319–339. 10.1007/978-3-642-19074-2_21

[27] V. Lyubashevsky and D. Micciancio, On bounded distance decoding, unique shortest vectors, and the minimum distance problem, Advances in Cryptology – CRYPTO 2009, Lecture Notes in Comput. Sci. 5677, Springer, Berlin (2009), 577–594. 10.1007/978-3-642-03356-8_34

[28] D. Micciancio and O. Regev, Lattice-based cryptography, Post-Quantum Cryptography, Springer, Berlin (2009), 147–191. 10.1007/978-3-540-88702-7_5

[29] P. Q. Nguyên and D. Stehlé, Floating-point LLL revisited, Advances in Cryptology – EUROCRYPT 2005, Lecture Notes in Comput. Sci. 3494, Springer, Berlin (2005), 215–233. 10.1007/11426639_13

[30] C. Peikert, Public-key cryptosystems from the worst-case shortest vector problem: extended abstract, Proceedings of the 2009 ACM International Symposium on Theory of Computing – STOC’09, ACM, New York (2009), 333–342. 10.1145/1536414.1536461

[31] O. Regev, On lattices, learning with errors, random linear codes, and cryptography, Proceedings of the 37th Annual ACM Symposium on Theory of Computing – STOC’05, ACM, New York (2005), 84–93. 10.1145/1060590.1060603

[32] C.-P. Schnorr and M. Euchner, Lattice basis reduction: Improved practical algorithms and solving subset sum problems, Math. Program. 66 (1994), no. 2, 181–199. 10.1007/BF01581144

**Received:** 2017-07-25

**Revised:** 2018-05-27

**Accepted:** 2018-06-21

**Published Online:** 2018-08-08

**Published in Print:** 2019-03-01

© 2019 Walter de Gruyter GmbH, Berlin/Boston

This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.