Thomas Wunderer

A detailed analysis of the hybrid lattice-reduction and meet-in-the-middle attack

Published online: October 5, 2018

Abstract

Over the past decade, the hybrid lattice-reduction and meet-in-the-middle attack (called hybrid attack) has been used to evaluate the security of many lattice-based cryptographic schemes such as NTRU, NTRU Prime, BLISS and more. However, unfortunately, none of the previous analyses of the hybrid attack is entirely satisfactory: They are based on simplifying assumptions that may distort the security estimates. Such simplifying assumptions include setting probabilities equal to 1, which, for the parameter sets we analyze in this work, are in fact as small as $2^{-80}$. Many of these assumptions lead to underestimating the scheme’s security. However, some lead to security overestimates, and without further analysis, it is not clear which is the case. Therefore, the current security estimates against the hybrid attack are not reliable, and the actual security levels of many lattice-based schemes are unclear. In this work, we present an improved runtime analysis of the hybrid attack that is based on more reasonable assumptions. In addition, we reevaluate the security against the hybrid attack for the NTRU, NTRU Prime and R-BinLWEEnc encryption schemes as well as for the BLISS and GLP signature schemes. Our results show that there exist both security over- and underestimates in the literature.

MSC 2010: 94A60; 11T71

1 Introduction

In 2007, Howgrave-Graham proposed the hybrid lattice-reduction and meet-in-the-middle attack [20] (referred to as the hybrid attack in the following) against the NTRU encryption scheme [19]. Several works [20, 21, 17, 18, 30] claim that the hybrid attack is by far the best known attack on NTRUEncrypt. In the following years, numerous cryptographers have applied the hybrid attack to their cryptographic schemes in order to estimate their security. These considerations include more variants of the NTRU encryption scheme [17, 18, 30], the recently proposed encryption scheme NTRU Prime [8], a lightweight encryption scheme based on Ring-LWE with binary error [10, 9], and the signature schemes BLISS [14] and GLP [16, 14]. However, the above analyses of the hybrid attack make use of over-simplifying assumptions, yielding unreliable security estimates of the schemes. Many of these assumptions lead to more conservative security estimates (lower than necessary), as they give more power to the attacker. While this is not a problem from a security perspective, in those cases, the schemes might be instantiated more efficiently while preserving the desired security level. On the other hand, there also exist the more dangerous cases in which the security of a scheme is overestimated, i.e., the scheme is less secure than it is claimed to be, as we show in this work. In [30], Schanck summarizes the current state of the analyses of the hybrid attack as follows:

“[…] it should be noted that in the author’s opinion, no analysis of the hybrid attack presented thus far is entirely satisfactory. […] it is hoped that future work will answer some of the outstanding questions related to the attack’s probability of success as a function of the effort spent in lattice reduction and enumeration.”

One of the most common [21, 14, 18, 30, 8] over-simplifications is to assume that collisions in the meet-in-the-middle phase of the attack will always be detected. However, in reality, collisions can only be detected with some (possibly very low) probability. For instance, for the cryptographic schemes we analyze in this work, this probability is sometimes as low as $2^{-80}$, highlighting the unrealistic nature of some simplifying assumptions made in previous analyses of the hybrid attack.

Our contribution

In this work, we provide a detailed and more satisfying analysis of the hybrid attack. This is achieved in the following way. We present a generalized version of the hybrid attack applied to shortest vector problems (SVP) and show how it can also be used to solve bounded distance decoding (BDD) problems. This general framework for the hybrid attack can naturally be applied to many lattice-based cryptographic constructions, as we also show in this work. We further provide a detailed and improved analysis of the generalized version of the hybrid attack, which can be used to derive updated security estimates. We offer two types of formulas for our security estimates – one that reflects the current state of the art and a more conservative one that reflects potential advances in cryptanalysis – giving a possible range for the security level. In our analysis of the attack, we reduce the amount of underlying assumptions, eliminate the ones that are over-simplifying and clearly state the remaining ones in order to offer as much transparency as possible. We further provide some experimental results and comparisons to the literature to support the validity of the remaining assumptions on the involved success probabilities.

Our second main contribution is the following: Since previous analyses of the hybrid attack are unreliable, the security estimates of many lattice-based cryptographic schemes against the hybrid attack might be inaccurate, and their actual security level is unclear. We therefore apply our improved analysis to reevaluate the security of various cryptographic schemes against the hybrid attack in order to derive updated security estimates.[1] We first revisit the security against the hybrid attack of the NTRU [18], NTRU Prime [8] and R-BinLWEEnc [9] encryption schemes, and end with the BLISS [14] and GLP [16] signature schemes. Our results show that there exist both security over- and underestimates against the hybrid attack across the literature.

Outline

This work is structured as follows. First, we fix notation and provide the necessary background in Section 2. In Section 3, we describe a generalized version of the hybrid attack on shortest vector problems (SVP) and further explain how it can also be used to solve bounded distance decoding (BDD) problems. Our runtime analysis of the hybrid attack is presented in Section 4. In Section 5, we apply our analysis of the hybrid attack to various cryptographic schemes in order to derive updated security estimates against the hybrid attack. We end this work by giving a conclusion and outlook for possible future work.

2 Preliminaries

Notation.

In this work, we write vectors in bold lowercase letters, e.g., $\mathbf{a}$, and matrices in bold uppercase letters, e.g., $\mathbf{A}$. Polynomials are written in normal lower case letters, e.g., $a$. We frequently identify polynomials $a = \sum_{i=0}^{n} a_i x^i$ with their coefficient vectors $\mathbf{a} = (a_0, \ldots, a_n)$, indicated by using the corresponding bold letter. Let $n, q \in \mathbb{N}$, let $f \in \mathbb{Z}[x]$ be a polynomial of degree $n$, and let $R_q = \mathbb{Z}_q[x]/(f)$. We define the rotation matrix of a polynomial $a \in R_q$ as $\operatorname{rot}(a) = (\mathbf{a}, \mathbf{ax}, \mathbf{ax^2}, \ldots, \mathbf{ax^{n-1}}) \in \mathbb{Z}_q^{n \times n}$. Then for $a, b \in R_q$, the matrix-vector product $\operatorname{rot}(a)\,\mathbf{b} \bmod q$ corresponds to the product of the polynomials $ab \in R_q$.
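To make the rotation-matrix notation concrete, the following minimal Python sketch (not taken from the paper) builds $\operatorname{rot}(a)$ for a user-supplied monic modulus polynomial $f$ and checks that multiplying $\operatorname{rot}(a)$ by a coefficient vector agrees with polynomial multiplication in $R_q$; the ring $\mathbb{Z}_{17}[x]/(x^4-1)$ at the end is an arbitrary toy example.

# Minimal sketch (not from the paper): build rot(a) over R_q = Z_q[x]/(f) and check
# that the matrix-vector product rot(a)*b mod q matches the product a*b in R_q.

def poly_mul_mod(a, b, f, q):
    """Multiply polynomials a, b (coefficient lists, lowest degree first) modulo (f, q); f is monic of degree n."""
    n = len(f) - 1
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % q
    for i in range(len(prod) - 1, n - 1, -1):      # reduce degrees >= n modulo f
        c = prod[i]
        if c:
            for j in range(n + 1):
                prod[i - n + j] = (prod[i - n + j] - c * f[j]) % q
    return [x % q for x in prod[:n]] + [0] * max(0, n - len(prod))

def rot(a, f, q):
    """Matrix whose columns are the coefficient vectors of a, a*x, ..., a*x^(n-1) in R_q."""
    n = len(f) - 1
    cols, col = [], list(a) + [0] * (n - len(a))
    for _ in range(n):
        cols.append(col)
        col = poly_mul_mod(col, [0, 1], f, q)      # multiply by x
    return [[cols[j][i] for j in range(n)] for i in range(n)]

q, f = 17, [-1, 0, 0, 0, 1]                        # toy ring Z_17[x]/(x^4 - 1)
a, b = [1, 2, 0, 3], [4, 0, 1, 0]
A = rot(a, f, q)
Ab = [sum(A[i][j] * b[j] for j in range(4)) % q for i in range(4)]
assert Ab == poly_mul_mod(a, b, f, q)              # rot(a)*b mod q equals the coefficients of a*b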

We use the abbreviation $\log(\cdot)$ for $\log_2(\cdot)$. We further write $\|\cdot\|$ instead of $\|\cdot\|_2$ for the Euclidean norm. For $N \in \mathbb{N}_0$ and $m_1, \ldots, m_k \in \mathbb{N}_0$ with $m_1 + \cdots + m_k = N$, the multinomial coefficient is defined as

$\binom{N}{m_1, \ldots, m_k} = \frac{N!}{m_1! \cdots m_k!}.$

Lattices and bases.

In this work, we use the following definition of lattices. A discrete additive subgroup of $\mathbb{R}^m$, for some $m \in \mathbb{N}$, is called a lattice. Let $m \in \mathbb{N}$ be a positive integer. For a set of vectors $\mathbf{B} = \{\mathbf{b}_1, \ldots, \mathbf{b}_n\} \subset \mathbb{R}^m$, the lattice spanned by $\mathbf{B}$ is defined as

$\Lambda(\mathbf{B}) = \Bigl\{\mathbf{x} \in \mathbb{R}^m \ \Big|\ \mathbf{x} = \sum_{i=1}^{n} \alpha_i \mathbf{b}_i \text{ for } \alpha_i \in \mathbb{Z}\Bigr\}.$

Let $\Lambda \subset \mathbb{R}^m$ be a lattice. A set of vectors $\mathbf{B} = \{\mathbf{b}_1, \ldots, \mathbf{b}_n\} \subset \mathbb{R}^m$ is called a basis of $\Lambda$ if $\mathbf{B}$ is $\mathbb{R}$-linearly independent and $\Lambda = \Lambda(\mathbf{B})$. Abusing notation, we identify lattice bases with matrices and vice versa by taking the basis vectors as the columns of the matrix. The number of vectors in a basis of a lattice is called the rank (or dimension) of the lattice. If the rank is maximal, i.e., $m = n$, the lattice is called full-rank. In the following, we only consider full-rank lattices. Let $q$ be a positive integer. The length of the shortest non-zero vector of a lattice $\Lambda$ is denoted by $\lambda_1(\Lambda)$. An integer lattice $\Lambda \subseteq \mathbb{Z}^m$ that contains $q\mathbb{Z}^m$ is called a q-ary lattice. For a matrix $\mathbf{A} \in \mathbb{Z}_q^{m \times n}$, we define the q-ary lattice

$\Lambda_q(\mathbf{A}) := \{\mathbf{v} \in \mathbb{Z}^m \mid \text{there exists } \mathbf{w} \in \mathbb{Z}^n \text{ such that } \mathbf{A}\mathbf{w} = \mathbf{v} \bmod q\}.$

For a lattice basis $\mathbf{B} = \{\mathbf{b}_1, \ldots, \mathbf{b}_m\} \subset \mathbb{R}^m$, its fundamental parallelepiped is defined as

$\mathcal{P}(\mathbf{B}) = \Bigl\{\mathbf{x} = \sum_{i=1}^{m} \alpha_i \mathbf{b}_i \in \mathbb{R}^m \ \Big|\ -\tfrac{1}{2} \le \alpha_i < \tfrac{1}{2} \text{ for all } i \in \{1, \ldots, m\}\Bigr\}.$

The determinant $\det(\Lambda)$ of a lattice $\Lambda \subset \mathbb{R}^m$ is defined as the m-dimensional volume of the fundamental parallelepiped of a basis of $\Lambda$. Note that the determinant of the lattice is well-defined, i.e., it is independent of the basis. The Hermite delta (or Hermite factor) $\delta$ of a lattice basis $\mathbf{B} = \{\mathbf{b}_1, \ldots, \mathbf{b}_m\} \subset \mathbb{R}^m$ is defined via the equation $\|\mathbf{b}_1\| = \delta^m \det(\Lambda)^{\frac{1}{m}}$. It provides a measure for the quality of the basis.

Lattice-based cryptography is based on the presumed hardness of computational problems in lattices. Two of the most important lattice problems are the following:

Problem (Shortest vector problem, SVP).

Given a lattice basis B , the task is to find a shortest non-zero vector in the lattice Λ ( B ) .

Problem (Bounded distance decoding, BDD).

Given $\alpha \in \mathbb{R}_{>0}$, a lattice basis $\mathbf{B} \subset \mathbb{R}^m$ and a target vector $\mathbf{t} \in \mathbb{R}^m$ with $\operatorname{dist}(\mathbf{t}, \Lambda(\mathbf{B})) < \alpha \lambda_1(\Lambda(\mathbf{B}))$, the task is to find a vector $\mathbf{e} \in \mathbb{R}^m$ with $\|\mathbf{e}\| < \alpha \lambda_1(\Lambda(\mathbf{B}))$ such that $\mathbf{t} - \mathbf{e} \in \Lambda(\mathbf{B})$.

Babai’s nearest plane.

Babai’s nearest plane algorithm [5] (denoted by NP in the following) is an important building block of the hybrid attack. For more details on the algorithm, we refer to Babai’s original work [5] or Lindner and Peikert’s work [24]. We use the nearest plane algorithm in a black box manner. For the reader, it is sufficient to know the following: The input for the nearest plane algorithm is a lattice basis $\mathbf{B} \subset \mathbb{Z}^m$ and a target vector $\mathbf{t} \in \mathbb{R}^m$, and the corresponding output is a vector $\mathbf{e} \in \mathbb{R}^m$ such that $\mathbf{t} - \mathbf{e} \in \Lambda(\mathbf{B})$. We denote the output by $\operatorname{NP}_{\mathbf{B}}(\mathbf{t}) = \mathbf{e}$. If there is no risk of confusion, we might omit the basis in the notation, writing $\operatorname{NP}(\mathbf{t})$ instead of $\operatorname{NP}_{\mathbf{B}}(\mathbf{t})$. The output of the nearest plane algorithm satisfies the following condition, as shown in [5].

Lemma 2.1.

Let $\mathbf{B} \subset \mathbb{Z}^m$ be a lattice basis, and let $\mathbf{t} \in \mathbb{R}^m$ be a target vector. Then $\operatorname{NP}_{\mathbf{B}}(\mathbf{t})$ is the unique vector $\mathbf{e} \in \mathcal{P}(\overline{\mathbf{B}})$ that satisfies $\mathbf{t} - \mathbf{e} \in \Lambda(\mathbf{B})$, where $\overline{\mathbf{B}}$ is the Gram–Schmidt basis of $\mathbf{B}$.
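The following self-contained Python sketch (an illustration, not the implementation used in this work) realizes Babai's nearest plane algorithm via a textbook Gram–Schmidt computation and checks the defining property of Lemma 2.1, namely that the returned error $\mathbf{e}$ satisfies $\mathbf{t} - \mathbf{e} \in \Lambda(\mathbf{B})$; the toy basis is arbitrary.

import numpy as np

def gram_schmidt(B):
    """Unnormalized Gram-Schmidt vectors of the columns of B, returned as columns."""
    B = B.astype(float)
    G = np.zeros_like(B)
    for i in range(B.shape[1]):
        G[:, i] = B[:, i]
        for j in range(i):
            G[:, i] -= (B[:, i] @ G[:, j]) / (G[:, j] @ G[:, j]) * G[:, j]
    return G

def nearest_plane(B, t):
    """Return e = NP_B(t); then t - e lies in the lattice spanned by the columns of B."""
    G = gram_schmidt(B)
    e = np.array(t, dtype=float)
    for i in reversed(range(B.shape[1])):
        c = round((e @ G[:, i]) / (G[:, i] @ G[:, i]))
        e -= c * B[:, i].astype(float)
    return e

B = np.array([[7, 1], [2, 5]])                      # toy basis (columns are basis vectors)
t = np.array([10.3, 3.2])
e = nearest_plane(B, t)
coeffs = np.linalg.solve(B.astype(float), t - e)
assert np.allclose(coeffs, np.round(coeffs))        # t - e is indeed a lattice vector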

The lengths of the Gram–Schmidt vectors of a reduced basis can be estimated by the following heuristic (for more details, we refer to [24]).

Heuristic (Geometric series assumption (GSA)).

Let $\mathbf{B} \subset \mathbb{Z}^m$ be a reduced basis of some full-rank lattice with Hermite delta $\delta$, and let $D$ denote the determinant of $\Lambda(\mathbf{B})$. Further, let $\overline{\mathbf{b}}_1, \ldots, \overline{\mathbf{b}}_m$ denote the corresponding Gram–Schmidt vectors of $\mathbf{B}$. Then the length of $\overline{\mathbf{b}}_i$ is approximately

$\|\overline{\mathbf{b}}_i\| \approx \delta^{-2(i-1)+m} D^{\frac{1}{m}}.$

While the GSA is widely relied upon in lattice-based cryptography (see, e.g., [2, 3, 4, 10, 13, 27, 20]), we emphasize that it does not offer precise estimates, in particular for the last indices of highly reduced bases, see, e.g., [12].
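As a small illustration, the GSA prediction can be evaluated directly; the following sketch (assuming the GSA formula above, with purely illustrative parameters) returns the predicted Gram–Schmidt lengths of a reduced basis.

# Hedged sketch: evaluate the GSA prediction ||b*_i|| ~ delta^(-2(i-1)+m) * det^(1/m).
def gsa_lengths(m, delta, det):
    scale = det ** (1.0 / m)
    return [delta ** (-2 * (i - 1) + m) * scale for i in range(1, m + 1)]

# e.g. a 200-dimensional lattice with determinant 128^100 and Hermite delta 1.008
# (illustrative values, the same setting is used in Table 2 below)
R = gsa_lengths(200, 1.008, 128.0 ** 100)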

3 The hybrid attack

In this section, we present a generalized version of the hybrid attack to solve shortest vector problems. Our framework for the hybrid attack is the following: The task is to find the shortest vector 𝐯 in a lattice Λ, given a basis of Λ of the form

$\mathbf{B} = \begin{pmatrix} \mathbf{B}' & \mathbf{C} \\ \mathbf{0} & \mathbf{I}_r \end{pmatrix} \in \mathbb{Z}^{m \times m},$

where $0 < r < m$ is the meet-in-the-middle dimension, $\mathbf{B}' \in \mathbb{Z}^{(m-r) \times (m-r)}$ and $\mathbf{C} \in \mathbb{Z}^{(m-r) \times r}$. In Appendix A.1, we show that for q-ary lattices, where q is prime, one can always construct a basis of this form, provided that the determinant of the lattice is at most $q^{m-r}$. Additionally, in Section 5, we show that our framework can be applied to many lattice-based cryptographic schemes.

The main idea of the attack is the following: Let $\mathbf{v}$ be a short vector contained in the lattice $\Lambda$. We split the short vector $\mathbf{v}$ into two parts $\mathbf{v} = (\mathbf{v}_l, \mathbf{v}_g)^t$ with $\mathbf{v}_l \in \mathbb{Z}^{m-r}$ and $\mathbf{v}_g \in \mathbb{Z}^r$. The second part $\mathbf{v}_g$ represents the part of $\mathbf{v}$ that is recovered by guessing (meet-in-the-middle) during the attack, while the first part $\mathbf{v}_l$ is recovered with lattice techniques (solving BDD problems). Because of the special form of the basis $\mathbf{B}$, we have that

$\mathbf{v} = \begin{pmatrix}\mathbf{v}_l \\ \mathbf{v}_g\end{pmatrix} = \mathbf{B}\begin{pmatrix}\mathbf{x} \\ \mathbf{v}_g\end{pmatrix} = \begin{pmatrix}\mathbf{B}'\mathbf{x} + \mathbf{C}\mathbf{v}_g \\ \mathbf{v}_g\end{pmatrix}$

for some vector $\mathbf{x} \in \mathbb{Z}^{m-r}$, hence $\mathbf{C}\mathbf{v}_g = -\mathbf{B}'\mathbf{x} + \mathbf{v}_l$. This means that $\mathbf{C}\mathbf{v}_g$ is close to the lattice $\Lambda(\mathbf{B}')$, since it only differs from the lattice by the short vector $\mathbf{v}_l$, and therefore $\mathbf{v}_l$ can be recovered by solving a BDD problem if $\mathbf{v}_g$ is known. The idea now is that if we can correctly guess the vector $\mathbf{v}_g$, we can hope to find $\mathbf{v}_l$ using the nearest plane algorithm (see Section 2) via $\operatorname{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{v}_g) = \mathbf{v}_l$, which is the case if the basis $\mathbf{B}'$ is sufficiently reduced. Solving the BDD problem using the nearest plane algorithm is the lattice part of the attack. The lattice $\Lambda(\mathbf{B}')$ in which we need to solve BDD has the same determinant as the lattice $\Lambda(\mathbf{B})$ in which we want to solve SVP, but it has smaller dimension, i.e., $m-r$ instead of $m$. Therefore, the newly obtained BDD problem is potentially easier to solve than the original SVP instance.

In the following, we explain how one can speed up the guessing part of the attack by Odlyzko’s meet-in-the-middle approach. Using this technique, one is able to reduce the number of necessary guesses to the square root of the number of guesses needed in a naive brute-force approach. Odlyzko’s meet-in-the-middle attack on NTRU was first described in [22] and applied in the hybrid lattice-reduction and meet-in-the-middle attack against NTRU in [20]. The idea is that instead of guessing $\mathbf{v}_g$ directly in a large set $M$ of possible vectors, we guess sparser vectors $\mathbf{v}'_g$ and $\mathbf{v}''_g$ in a smaller set $N$ of vectors such that $\mathbf{v}'_g + \mathbf{v}''_g = \mathbf{v}_g$. In our attack, the larger set $M$ will be the set of all vectors with a fixed number $2c_i$ of the non-zero entries equal to $i$ for all $i \in \{\pm 1, \ldots, \pm k\}$, where $k = \|\mathbf{v}_g\|_\infty$. The smaller set $N$ will be the set of all vectors with only half as many, i.e., only $c_i$, of the non-zero entries equal to $i$ for all $i \in \{\pm 1, \ldots, \pm k\}$. Assume that $\operatorname{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{v}_g) = \mathbf{v}_l$. First, we guess vectors $\mathbf{v}'_g$ and $\mathbf{v}''_g$ in the smaller set $N$. We then compute $\mathbf{v}'_l = \operatorname{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{v}'_g)$ and $\mathbf{v}''_l = \operatorname{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{v}''_g)$. We hope that if $\mathbf{v}'_g + \mathbf{v}''_g = \mathbf{v}_g$, then also $\mathbf{v}'_l + \mathbf{v}''_l = \mathbf{v}_l$, i.e., that the nearest plane algorithm is additively homomorphic on those inputs. The probability that this additive property holds is one crucial element in the runtime analysis of the attack. We further need to detect when this property holds during the attack, i.e., we need to be able to recognize matching vectors $\mathbf{v}'_g$ and $\mathbf{v}''_g$ with $\mathbf{v}'_g + \mathbf{v}''_g = \mathbf{v}_g$ and $\mathbf{v}'_l + \mathbf{v}''_l = \mathbf{v}_l$, which we call a collision. In order to do so, we store $\mathbf{v}'_g$ and $\mathbf{v}''_g$ in (hash) boxes whose addresses depend on $\mathbf{v}'_l$ and $\mathbf{v}''_l$, respectively, such that they collide in at least one box. To define those addresses properly, note that in case of a collision, we have $\mathbf{v}'_l = -\mathbf{v}''_l + \mathbf{v}_l$. Thus $\mathbf{v}'_l$ and $-\mathbf{v}''_l$ differ only by a vector of infinity norm $y = \|\mathbf{v}_l\|_\infty$. Therefore, the addresses must be crafted such that for any $\mathbf{x} \in \mathbb{Z}^m$ and $\mathbf{z} \in \mathbb{Z}^m$ with $\|\mathbf{z}\|_\infty \le y$, it holds that the intersection of the addresses of $\mathbf{x}$ and $\mathbf{x} + \mathbf{z}$ is non-empty, i.e., $\mathcal{A}_{\mathbf{x}}(m, y) \cap \mathcal{A}_{\mathbf{x}+\mathbf{z}}(m, y) \neq \emptyset$. Furthermore, the set of addresses should not be unnecessarily large so that the hash tables do not grow too big and unwanted collisions are unlikely to happen. The following definition satisfies these properties, as can easily be verified.

Definition 3.1.

Let $m, y \in \mathbb{N}$. For a vector $\mathbf{x} \in \mathbb{Z}^m$, the set $\mathcal{A}_{\mathbf{x}}(m, y) \subseteq \{0,1\}^m$ is defined as

$\mathcal{A}_{\mathbf{x}}(m, y) = \bigl\{\mathbf{a} \in \{0,1\}^m \bigm| (\mathbf{a})_i = 1 \text{ if } (\mathbf{x})_i > \lceil\tfrac{y}{2}\rceil - 1 \text{ and } (\mathbf{a})_i = 0 \text{ if } (\mathbf{x})_i < -\lfloor\tfrac{y}{2}\rfloor, \text{ for all } i \in \{1, \ldots, m\}\bigr\}.$

We illustrate Definition 3.1 with some examples.

Example.

Let $m = 5$ be fixed. For varying bounds $y$ and input vectors $\mathbf{x}$, we have

$\mathcal{A}_{(7,0,-1,1,-5)}(5,1) = \{(1,0,0,1,0), (1,1,0,1,0)\},$
$\mathcal{A}_{(8,0,-1,1,-2)}(5,2) = \{(1,0,0,1,0), (1,1,0,1,0), (1,0,1,1,0), (1,1,1,1,0)\},$
$\mathcal{A}_{(2,-1,9,1,-2)}(5,3) = \{(1,0,1,0,0), (1,0,1,1,0), (1,1,1,0,0), (1,1,1,1,0)\},$
$\mathcal{A}_{(2,-5,0,7,-2)}(5,4) = \{(1,0,0,1,0), (1,0,0,1,1), (1,0,1,1,0), (1,0,1,1,1)\}.$
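These examples can be reproduced with a short enumeration. The sketch below follows the ceiling/floor reading of the thresholds in Definition 3.1 (an interpretation consistent with the examples above) and is not code from the paper.

from itertools import product
from math import ceil, floor

def address_set(x, y):
    """All a in {0,1}^m with a_i = 1 if x_i > ceil(y/2) - 1 and a_i = 0 if x_i < -floor(y/2)."""
    hi, lo = ceil(y / 2) - 1, -floor(y / 2)
    choices = []
    for xi in x:
        if xi > hi:
            choices.append((1,))
        elif xi < lo:
            choices.append((0,))
        else:
            choices.append((0, 1))                  # unconstrained coordinate
    return set(product(*choices))

assert address_set((7, 0, -1, 1, -5), 1) == {(1, 0, 0, 1, 0), (1, 1, 0, 1, 0)}
assert address_set((2, -5, 0, 7, -2), 4) == {(1, 0, 0, 1, 0), (1, 0, 0, 1, 1),
                                             (1, 0, 1, 1, 0), (1, 0, 1, 1, 1)}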

The hybrid attack on SVP without precomputation is presented in Algorithm 1. A list of the attack parameters and the parameters used in the runtime analysis of the attack and their meaning is given in Table 1. In order to increase the chance of Algorithm 1 being successful, one performs a basis reduction step as precomputation. Therefore, the complete hybrid attack, presented in Algorithm 2, is in fact a combination of a basis reduction step and Algorithm 1.

Algorithm 1 (The hybrid attack on SVP without basis reduction.).

Algorithm 2 (The hybrid attack on SVP including basis reduction.).

Table 1

Attack parameters and parameters in the runtime analysis.

Parameter Meaning
m lattice dimension
r meet-in-the-middle dimension
$\mathbf{B}$ lattice basis of the whole lattice
$\mathbf{B}'$ partially reduced lattice basis of the sublattice
$c_i$ number of i-entries guessed during the attack
y infinity norm bound on $\mathbf{v}_l$
k infinity norm bound on $\mathbf{v}_g$
Y expected Euclidean norm of $\mathbf{v}_l$
$R_i$ Gram–Schmidt lengths corresponding to $\mathbf{B}'$
$r_i$ scaled Gram–Schmidt lengths corresponding to $\mathbf{B}'$

The hybrid attack on BDD

The hybrid attack can also be applied to BDD instead of SVP by rewriting a BDD instance into an SVP instance. This can be done in the following way (see for example [1]): Let 𝐁 be a lattice basis of the form

$\mathbf{B} = \begin{pmatrix} \mathbf{B}' & \mathbf{C} \\ \mathbf{0} & \mathbf{I}_r \end{pmatrix} \in \mathbb{Z}^{m \times m}$

with $\mathbf{B}' \in \mathbb{Z}^{(m-r) \times (m-r)}$, $\mathbf{C} \in \mathbb{Z}^{(m-r) \times r}$, and let $\mathbf{t}$ be a target vector for BDD. Suppose $\mathbf{t} - \mathbf{v} \in \Lambda(\mathbf{B})$, where $\mathbf{v}$ is the short (bounded) vector we are looking for. Then the short vector $(\mathbf{v}, 1)^t$ is contained in the lattice $\Lambda(\mathbf{B}'')$ spanned by

$\mathbf{B}'' = \begin{pmatrix} \mathbf{B} & \mathbf{t} \\ \mathbf{0} & 1 \end{pmatrix} \in \mathbb{Z}^{(m+1) \times (m+1)},$

which is of the required form for the hybrid attack on SVP. Therefore, we can apply the hybrid attack on SVP to find $(\mathbf{v}, 1)^t$, solving the BDD problem. The SVP lattice $\Lambda(\mathbf{B}'')$ has the same determinant as the BDD lattice $\Lambda(\mathbf{B})$ and dimension $m+1$ instead of $m$. However, the additional dimension can be ignored, since we know the last entry of $(\mathbf{v}, 1)^t$ and therefore do not have to guess it during the meet-in-the-middle phase. Note that by definition of BDD, it is very likely that $\pm(\mathbf{v}, 1)^t$ are the only short vectors in the lattice $\Lambda(\mathbf{B}'')$. By fixing the last coordinate to be plus one, only $\mathbf{v}$, not also $-\mathbf{v}$, can be found by the attack.
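The embedding step can be written down in a few lines. The following sketch (illustrative only, not from the paper) builds the basis $\mathbf{B}''$ from a BDD basis $\mathbf{B}$ and target $\mathbf{t}$; the final comment recalls why $(\mathbf{v}, 1)^t$ is a short vector of the embedded lattice.

import numpy as np

def embed_bdd(B, t):
    """Return B'' = [[B, t], [0, 1]] (basis vectors are columns)."""
    m = B.shape[0]
    top = np.hstack([B, t.reshape(m, 1)])
    bottom = np.hstack([np.zeros((1, m), dtype=B.dtype), np.ones((1, 1), dtype=B.dtype)])
    return np.vstack([top, bottom])

# If t = u + v with u = B*c in Lambda(B), then B'' * (-c, 1)^t = (t - B*c, 1)^t = (v, 1)^t,
# so the BDD error v reappears as a short vector of the embedded lattice.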

4 Analysis

In this section, we analyze the runtime of the hybrid attack. First, in Heuristic 1 in Section 4.1, we estimate the runtime of the attack in case sufficient success conditions are satisfied. In Section 4.2, we then show how to determine the probability that those sufficient conditions are satisfied, i.e., how to determine (a lower bound on) the success probability. We conclude the runtime analysis of the attack by showing how to optimize the attack parameters to minimize its runtime in Section 4.3. We end the section by highlighting our improvements over previous analyses of the hybrid attack, see Section 4.4.

4.1 Runtime analysis

We now present our main result about the runtime of the generalized hybrid attack. It shows that under sufficient conditions, the attack is successful and estimates the expected runtime.

Heuristic 1.

Let all inputs be denoted as in Algorithm 1, let $Y \in \mathbb{R}_{>0}$, and let $R_1, \ldots, R_{m-r}$ denote the lengths of the Gram–Schmidt basis vectors of the basis $\mathbf{B}'$. Further, let $S \subset \Lambda(\mathbf{B})$ denote the set of all non-zero lattice vectors $\mathbf{v} = (\mathbf{v}_l, \mathbf{v}_g)^t \in \Lambda(\mathbf{B})$, where $\mathbf{v}_l \in \mathbb{Z}^{m-r}$ and $\mathbf{v}_g \in \mathbb{Z}^r$ with $\|\mathbf{v}_l\|_\infty \le y$, $\|\mathbf{v}_l\| \le Y$, $\|\mathbf{v}_g\|_\infty \le k$, exactly $2c_i$ entries of $\mathbf{v}_g$ are equal to $i$ for all $i \in \{\pm 1, \ldots, \pm k\}$, and $\operatorname{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{v}_g) = \mathbf{v}_l$. Assume that the set $S$ is non-empty.

Then Algorithm 1 is successful, and the expected number of loops can be estimated by

$L = \binom{r}{c_{-k}, \ldots, c_k} \Bigl(p\,|S| \prod_{i \in \{\pm 1, \ldots, \pm k\}} \binom{2c_i}{c_i}\Bigr)^{-\frac{1}{2}},$

where

$p = \prod_{i=1}^{m-r}\Biggl(1 - \frac{1}{r_i\,B\bigl(\frac{(m-r)-1}{2}, \frac{1}{2}\bigr)} \int_{-r_i-1}^{-r_i}\int_{\max(-1,\,z-r_i)}^{z+r_i} (1-t^2)^{\frac{(m-r)-3}{2}}\,dt\,dz\Biggr),$

$B(\cdot,\cdot)$ denotes the Euler beta function [28], and

$r_i = \frac{R_i}{2Y} \quad \text{for all } i \in \{1, \ldots, m-r\}.$

Furthermore, the expected number of operations of Algorithm 1 can be estimated by $T_{\mathrm{NP}} \cdot L$, where $T_{\mathrm{NP}}$ denotes the number of operations of one nearest plane call in the lattice $\Lambda(\mathbf{B}')$ of dimension $m-r$.

In the following remark, we explain the meaning of the (attack) parameters that appear in Heuristic 1 in more detail.

Remark 4.1.

  1. (1)

    The parameters $r, y, k, c_{-k}, \ldots, c_k$ are the attack parameters that can be chosen by the attacker. The meet-in-the-middle dimension and the remaining lattice dimension are determined by the parameter $r$. The remaining parameters must be chosen in such a way that the requirements of Heuristic 1 are likely to be fulfilled in order to obtain a high success probability of the attack. Choosing those parameters depends heavily on the distribution of the short vectors $\mathbf{v} \in S$. In order to obtain more flexibility, this distribution is not specified in Heuristic 1. However, in Section 5, we show how one can choose the attack parameters and calculate the success probability for several distributions arising in various cryptographic schemes. At this point, we only want to remark that $y$ should be (an upper bound on) $\|\mathbf{v}_l\|_\infty$, $k$ (an upper bound on) $\|\mathbf{v}_g\|_\infty$, and $2c_i$ the (expected) number of entries of $\mathbf{v}_g$ that are equal to $i$ for $i \in \{\pm 1, \ldots, \pm k\}$.

  2. (2)

    The attacker can further influence the lengths R 1 , , R m - r of the Gram–Schmidt vectors by providing a different basis than 𝐁 with Gram–Schmidt lengths that lead to a more efficient attack. This is typically done by performing a basis reduction on 𝐁 or parts of 𝐁 as precomputation, see Algorithm 2. The lengths of the Gram–Schmidt vectors achieved by the basis reduction with Hermite delta δ are typically estimated by the GSA (see Section 2). However, for bases of q-ary lattices of a special form, the GSA may be modified. For further details, see Appendix A.2. Notice that spending more time on basis reduction increases the probability p in Heuristic 1, and the probability that the condition NP 𝐁 ( 𝐂𝐯 g ) = 𝐯 l holds, as can be seen later in this section and Section 4.2.

  3. (3)

    Because of the previous remark, the complete attack – presented in Algorithm 2 – is actually a combination of precomputation (basis reduction) and Algorithm 1. Therefore, the runtime of both steps must be considered, and they have to be balanced in order to estimate the total runtime. In particular, the amount of precomputation must be chosen such that the precomputed basis offers the best trade-off between its quality with respect to the hybrid attack (i.e., amplifying the success probability and decreasing the number of operations) and the cost to compute this basis. We show how to optimize the total runtime in Section 4.3.

In the following, we show how Heuristic 1 can be derived. For the rest of this section, let all notations be as in Heuristic 1. We further assume in the following that the assumption of Heuristic 1, i.e., S , is satisfied. We first provide the following useful definition already given in [20, 10]. We use the notation of [10].

Definition 4.2.

Let $n \in \mathbb{N}$. A vector $\mathbf{x} \in \mathbb{R}^n$ is called $\mathbf{y}$-admissible (with respect to the basis $\mathbf{B}$) for some vector $\mathbf{y} \in \mathbb{R}^n$ if $\operatorname{NP}_{\mathbf{B}}(\mathbf{x}) = \operatorname{NP}_{\mathbf{B}}(\mathbf{x} - \mathbf{y}) + \mathbf{y}$.

This means that if $\mathbf{x}$ is $\mathbf{y}$-admissible, then $\operatorname{NP}_{\mathbf{B}}(\mathbf{x})$ and $\operatorname{NP}_{\mathbf{B}}(\mathbf{x} - \mathbf{y})$ yield the same lattice vector. We recall the following lemma from [10] about Definition 4.2. It showcases the relevance of the definition by relating it to the equation $\operatorname{NP}_{\mathbf{B}}(\mathbf{t}_1) + \operatorname{NP}_{\mathbf{B}}(\mathbf{t}_2) = \operatorname{NP}(\mathbf{t}_1 + \mathbf{t}_2)$, which must hold for our attack to work.

Lemma 4.3 ([10, Lemma 2]).

Let $\mathbf{t}_1, \mathbf{t}_2 \in \mathbb{R}^n$ be two arbitrary target vectors. Then the following are equivalent.

  1. (i)

    NP 𝐁 ( 𝐭 1 ) + NP 𝐁 ( 𝐭 2 ) = NP 𝐁 ( 𝐭 1 + 𝐭 2 ) .

  2. (ii)

    𝐭 1 is NP ( 𝐭 1 + 𝐭 2 ) -admissible.

  3. (iii)

    𝐭 2 is NP ( 𝐭 1 + 𝐭 2 ) -admissible.

Success of the attack and number of loops

We now estimate the expected number of loops in case Algorithm 1 terminates. In the following, we use the subscript $\mathbf{B}'$ for probabilities to indicate that the probability is taken over the randomness of the basis (with Gram–Schmidt lengths $R_1, \ldots, R_{m-r}$). In each loop of the algorithm, we sample a vector $\mathbf{v}'_g$ in the set

$W = \{\mathbf{w} \in \mathbb{Z}^r \mid \text{exactly } c_i \text{ entries of } \mathbf{w} \text{ are equal to } i \text{ for all } i \in \{-k, \ldots, k\}\}.$

The attack succeeds if $\mathbf{v}'_g \in W$ and $\mathbf{v}''_g \in W$ such that $\mathbf{v}'_g + \mathbf{v}''_g = \mathbf{v}_g$ and

$\operatorname{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{v}'_g) + \operatorname{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{v}''_g) = \operatorname{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{v}'_g + \mathbf{C}\mathbf{v}''_g) = \mathbf{v}_l$

for some vector $\mathbf{v} = (\mathbf{v}_l, \mathbf{v}_g)^t \in S$ are sampled in different loops of the algorithm. By Lemma 4.3, the second condition is equivalent to the fact that $\mathbf{C}\mathbf{v}'_g$ is $\mathbf{v}_l$-admissible. We assume that the algorithm only succeeds in this case. We are therefore interested in the following subset of $W$:

$V = \{\mathbf{w} \in W \mid \mathbf{v}_g - \mathbf{w} \in W \text{ and } \mathbf{C}\mathbf{w} \text{ is } \mathbf{v}_l\text{-admissible for some } \mathbf{v} = (\mathbf{v}_l, \mathbf{v}_g)^t \in S\}.$

For all $\mathbf{v} = (\mathbf{v}_l, \mathbf{v}_g)^t \in S$ with $\mathbf{v}_l \in \mathbb{Z}^{m-r}$ and $\mathbf{v}_g \in \mathbb{Z}^r$, let $p(\mathbf{v})$ denote the probability

$p(\mathbf{v}) = \Pr_{\mathbf{B}',\,\mathbf{w} \in W}[\mathbf{C}\mathbf{w} \text{ is } \mathbf{v}_l\text{-admissible}],$

and let $p_1(\mathbf{v})$ denote the probability

$p_1(\mathbf{v}) = \Pr_{\mathbf{w} \in W}[\mathbf{v}_g - \mathbf{w} \in W].$

By construction, we have that p 1 ( 𝐯 ) is constant for all 𝐯 S , so we can simply write p 1 instead of p 1 ( 𝐯 ) . We make the following reasonable assumption on p ( 𝐯 ) and p 1 .

Assumption 1.

For all $\mathbf{v} = (\mathbf{v}_l, \mathbf{v}_g)^t \in S$ with $\mathbf{v}_l \in \mathbb{Z}^{m-r}$ and $\mathbf{v}_g \in \mathbb{Z}^r$, we assume that the independence condition

$p(\mathbf{v}) = \Pr_{\mathbf{B}',\,\mathbf{w} \in W}[\mathbf{C}\mathbf{w} \text{ is } \mathbf{v}_l\text{-admissible} \mid \mathbf{v}_g - \mathbf{w} \in W]$

holds. We further assume that $p(\mathbf{v})$ is equal to some constant probability $p$ (as in Heuristic 1) for all $\mathbf{v} \in S$.

Assuming independence of p and p 1 and disjoint events for the elements of S, we can make the following reasonable assumption (analogously to [20, Lemma 6 and Theorem 3]).

Assumption 2.

We assume that

$\frac{|V|}{|W|} = \Pr_{\mathbf{w} \in W}[\mathbf{w} \in V] = p_1\,p\,|S|.$

The probability $p_1$ is calculated by

$p_1 = \frac{\prod_{i \in \{\pm 1, \ldots, \pm k\}} \binom{2c_i}{c_i}}{|W|}, \quad \text{where} \quad |W| = \binom{r}{c_{-k}, \ldots, c_k}.$

From Assumption 2, it follows that $|V| = p_1\,p\,|W|\,|S|$. As long as the product $p_1\,p$ is not too small, we can therefore assume that $V \neq \emptyset$.

Assumption 3.

We assume that $V \neq \emptyset$.

Assumption 3 implies that the attack is successful, since by Lemma 4.3, if $\mathbf{v}'_g \in V$, then also $\mathbf{v}''_g = \mathbf{v}_g - \mathbf{v}'_g \in V$ for all $\mathbf{v} = (\mathbf{v}_l, \mathbf{v}_g)^t \in S$. Two such vectors $\mathbf{v}'_g$ and $\mathbf{v}''_g$ in $V$ will eventually be guessed in two separate loops of the algorithm, and they are recognized as a collision, since by the assumption $\|\mathbf{v}_l\|_\infty \le y$ of Heuristic 1, they share at least one common address. By Assumption 2, we expect that during the algorithm, we sample in $V$ every $\frac{1}{p_1 p |S|}$ loops, and by the birthday paradox, we expect to find a collision $\mathbf{v}'_g \in V$ and $\mathbf{v}''_g \in V$ with $\mathbf{v}''_g + \mathbf{v}'_g = \mathbf{v}_g$ after $L \approx \frac{\sqrt{|V|}}{p_1 p |S|}$ loops. In conclusion, we can estimate the expected number of loops by

$L \approx \frac{\sqrt{|V|}}{p_1\,p\,|S|} = \sqrt{\frac{|W|}{p_1\,p\,|S|}} = \binom{r}{c_{-k}, \ldots, c_k}\Bigl(p\,|S|\prod_{i \in \{\pm 1, \ldots, \pm k\}}\binom{2c_i}{c_i}\Bigr)^{-\frac{1}{2}}.$

It remains to calculate the probability p. This can be done analogously to [10, Heuristic 3] and the calculations following it. For a detailed and convincing justification of the heuristic and the intuition behind it, including a geometric intuition behind the 𝐲 -admissibility and its mathematical modeling, we refer to [10]. Following the calculations of [10], we obtain the following assumption.

Assumption 4.

We assume that the probability p is approximately

$p \approx \prod_{i=1}^{m-r}\Biggl(1 - \frac{1}{r_i\,B\bigl(\frac{(m-r)-1}{2}, \frac{1}{2}\bigr)} \int_{-r_i-1}^{-r_i}\int_{\max(-1,\,z-r_i)}^{z+r_i} (1-t^2)^{\frac{(m-r)-3}{2}}\,dt\,dz\Biggr),$

where B ( , ) and r 1 , , r m - r are defined as in Heuristic 1.

The integrals in the above formula for p can, for instance, be calculated using SageMath [31]. In order to calculate p, one needs to estimate the lengths $r_i$, as discussed in Remark 4.1. In Appendix A.3, we provide the results of some preliminary experiments supporting the validity of Assumption 4.
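As an alternative to SageMath, the double integrals can also be evaluated numerically with SciPy. The sketch below (an illustration of the formula in Assumption 4, not the script used for this work) returns the base-two logarithm of p for given scaled Gram–Schmidt lengths $r_i$; the commented usage example combines it with the gsa_lengths helper from the sketch in Section 2 and uses purely illustrative parameters.

import math
from scipy.integrate import dblquad
from scipy.special import beta

def log2_admissibility_probability(r_lengths, d):
    """log2 of p for scaled Gram-Schmidt lengths r_i = R_i/(2Y); d = m - r."""
    B = beta((d - 1) / 2.0, 0.5)
    log_p = 0.0
    for ri in r_lengths:
        # outer variable z in [-r_i - 1, -r_i], inner variable t in [max(-1, z - r_i), z + r_i]
        integral, _ = dblquad(
            lambda t, z: (1.0 - t * t) ** ((d - 3) / 2.0),
            -ri - 1.0, -ri,
            lambda z: max(-1.0, z - ri),
            lambda z: z + ri,
        )
        log_p += math.log2(max(1.0 - integral / (ri * B), 1e-300))
    return log_p

# Illustrative usage (all values below are arbitrary examples, not scheme parameters):
# d, Y = 200, 50.0
# R = gsa_lengths(d, 1.008, 128.0 ** 100)
# print(log2_admissibility_probability([Ri / (2.0 * Y) for Ri in R], d))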

Number of operations

We now estimate the expected total number of operations of the hybrid attack under the conditions of Heuristic 1. In order to do so, we need to estimate the runtime of one inner loop and multiply it by the expected number of loops. As in [20] and [10], we make the following assumption, which is plausible as long as the sets of addresses are not extremely large.

Assumption 5.

We assume that the number of operations of one inner loop of Algorithm 1 is dominated by the number of operations of one nearest plane call.

We remark that we see Assumption 5 as one of the more critical ones. Obviously, it does not hold for all parameter choices[2], but it is reasonable to believe that it holds for many relevant parameter sets, as claimed in [20] and [10]. However, the claim in [20] is based on the observation that for random vectors in q m , it is highly unlikely that adding a binary vector will flip the sign of many coordinates (i.e., that a random vector in q m has many minus one coordinates). While this is true, the vectors in question are in fact not random vectors in q m but outputs of a nearest plane call, and thus potentially shorter than typical vectors in q m . Therefore, it can be expected that adding a binary vector will flip more signs. Additionally, in general, it is not only a binary vector that is added, but a vector of infinity norm y, which makes flipping signs even more likely. However, we believe that Assumption 5 is still plausible for most relevant parameter sets and small y, and even in the worst case the assumption leads to more conservative security estimates.

In [17], Hirschhorn et al. give an experimentally verified number of bit operations (defined as in [23]) of one nearest plane call and state a conservative assumption on the runtime of the nearest plane algorithm using precomputation. Based on their results, we use the following assumption for our security estimates. We provide two different kinds of security estimates, one which we call “standard” (std) and one which we call “conservative” (cons). The latter accounts for possible cryptanalytic improvements which are plausible but not yet known to be applicable.

Assumption 6.

Let $d \in \mathbb{N}$ be the lattice dimension. For our standard security estimates, we assume that the number of bit operations of one nearest plane call is approximately $d^2/2^{1.06}$. For our conservative security estimates, we assume that the number of bit operations of one nearest plane call is approximately $d/2^{1.06}$.

4.2 Determining the success probability

In Heuristic 1, it is guaranteed that Algorithm 1 is successful if the lattice $\Lambda$ contains a non-empty set $S$ of short vectors of the form $\mathbf{v} = (\mathbf{v}_l, \mathbf{v}_g)^t$, where $\mathbf{v}_l \in \mathbb{Z}^{m-r}$ and $\mathbf{v}_g \in \mathbb{Z}^r$ with $\|\mathbf{v}_l\| \le Y$, $\|\mathbf{v}_l\|_\infty \le y$, $\|\mathbf{v}_g\|_\infty \le k$, exactly $2c_i$ entries of $\mathbf{v}_g$ are equal to $i$ for all $i \in \{\pm 1, \ldots, \pm k\}$, and $\operatorname{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{v}_g) = \mathbf{v}_l$. In order to determine a lower bound on the success probability, one must calculate the probability that the set $S$ of such vectors is non-empty, since

$p_{\mathrm{succ}} \ge \Pr[S \neq \emptyset].$

However, this probability depends heavily on the distribution of the short vectors contained in $\Lambda$, and its calculation is therefore not part of Heuristic 1, allowing for more flexibility. In consequence, this analysis must be performed for the specific distribution at hand, originating from the cryptographic scheme that is to be analyzed. The most involved part in calculating the success probability is typically calculating the probability $p_{\mathrm{NP}}$ that $\operatorname{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{v}_g) = \mathbf{v}_l$. As shown in [10], the probability $p_{\mathrm{NP}}$ is approximately

(4.1) $p_{\mathrm{NP}} \approx \prod_{i=1}^{m-r}\Biggl(1 - \frac{2}{B\bigl(\frac{(m-r)-1}{2}, \frac{1}{2}\bigr)} \int_{-1}^{\max(-r_i,\,-1)} (1-t^2)^{\frac{(m-r)-3}{2}}\,dt\Biggr),$

where r i are defined as in Heuristic 1.

In [24], Lindner and Peikert calculated the success probability of the nearest plane(s) algorithm for the case that the difference vector is drawn from a discrete Gaussian distribution with standard deviation $\sigma$. In our case, this would result in the formula

(4.2) $p_{\mathrm{NP}} = \Pr[\operatorname{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{v}_g) = \mathbf{v}_l] = \prod_{i=1}^{m-r} \operatorname{erf}\Bigl(\frac{R_i}{2\sqrt{2}\,\sigma}\Bigr).$

In the following, we compare the formulas (4.1) and (4.2) in the case of discrete Gaussian distributions with standard deviation $\sigma$. To this end, we evaluated both formulas for a lattice of dimension $d = m - r = 200$ of determinant $128^{100}$ for different standard deviations. For formula (4.1), we assumed that the norm of $\mathbf{v}_l$ is $\sigma\sqrt{200}$ as expected, and that the basis follows the GSA with Hermite delta 1.008. The results, presented in Table 2, show that both formulas give virtually the same results for the analyzed instances. This indicates that formula (4.1) is a good generalization of the one provided in [24].

Table 2

Comparison of (4.1) and (4.2) for standard deviation σ = s / 2 π and varying Gaussian parameter s.

Gaussian parameter s = 1 s = 2 s = 4 s = 8 s = 16
$p_{\mathrm{NP}}$ according to (4.1): $2^{-0.033}$ $2^{-3.658}$ $2^{-27.775}$ $2^{-87.506}$ $2^{-188.445}$
$p_{\mathrm{NP}}$ according to (4.2): $2^{-0.036}$ $2^{-3.669}$ $2^{-27.680}$ $2^{-87.217}$ $2^{-187.932}$
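The comparison can be reproduced approximately with the following sketch, which evaluates both formulas as reconstructed above for the setting of Table 2 (dimension 200, determinant $128^{100}$, GSA with delta 1.008). It is an illustration rather than the paper's original script, so small numerical deviations from Table 2 are to be expected.

import math
from scipy.integrate import quad
from scipy.special import beta

d, delta, det = 200, 1.008, 128.0 ** 100
R = [delta ** (-2 * (i - 1) + d) * det ** (1.0 / d) for i in range(1, d + 1)]

def log2_pnp_41(sigma):
    Y = sigma * math.sqrt(d)                       # expected norm of v_l
    B = beta((d - 1) / 2.0, 0.5)
    total = 0.0
    for Ri in R:
        ri = Ri / (2.0 * Y)
        upper = max(-ri, -1.0)
        integral, _ = quad(lambda t: (1.0 - t * t) ** ((d - 3) / 2.0), -1.0, upper)
        total += math.log2(max(1.0 - 2.0 * integral / B, 1e-300))
    return total

def log2_pnp_42(sigma):
    return sum(math.log2(math.erf(Ri / (2.0 * math.sqrt(2.0) * sigma))) for Ri in R)

for s in (1, 2, 4, 8, 16):
    sigma = s / math.sqrt(2.0 * math.pi)
    print(s, round(log2_pnp_41(sigma), 2), round(log2_pnp_42(sigma), 2))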

4.3 Optimizing the runtime

The final step in our analysis is to determine the runtime of the complete hybrid attack (Algorithm 2) including precomputation, which involves the runtime of the basis reduction T red , the runtime of the actual attack T hyb and the success probability p succ . All these quantities depend on the attack parameter r and the quality of the basis 𝐁 given by the lengths of the Gram–Schmidt vectors achieved by the basis reduction performed in the precomputation step of the attack. The quality of the basis can be measured by its Hermite delta δ (see Section 2). In order to unfold the full potential of the attack, one must minimize the runtime over all possible attack parameters r and δ. For our standard security estimates, we assume that the total runtime (which is to be minimized) is given by

$T_{\mathrm{total,std}}(\delta, r) = \frac{T_{\mathrm{red,std}}(\delta, r) + T_{\mathrm{hyb,std}}(\delta, r)}{p_{\mathrm{succ}}(\delta, r)}.$

For our conservative security estimates, we assume that given a reduced basis with quality δ, it is significantly easier to find another reduced basis with same quality δ than it is to find one given an arbitrary non-reduced basis. We therefore assume that even if the attack is not successful and needs to be repeated, the large precomputation cost for the basis reduction only needs to be paid once, and hence

$T_{\mathrm{total,cons}}(\delta, r) = T_{\mathrm{red,cons}}(\delta, r) + \frac{T_{\mathrm{hyb,cons}}(\delta, r)}{p_{\mathrm{succ}}(\delta, r)}.$

In order to calculate T total , cons ( δ , r ) and T total , std ( δ , r ) , one must calculate T hyb , cons ( δ , r ) , T hyb , std ( δ , r ) , T red , cons ( δ , r ) , T red , std ( δ , r ) and p succ ( δ , r ) . How to calculate T total , cons ( δ , r ) and T total , std ( δ , r ) is shown in Heuristic 1. The success probability p succ ( δ , r ) is calculated in Section 4.2.

Basis reduction.

Estimating the necessary runtime T red ( δ , r ) for a basis reduction of quality δ is highly non-trivial and still an active research area, and precise estimates are hard to derive. For this reason, our framework is designed such that the cost model for basis reduction can be replaced by a different one while the rest of the analysis remains intact. Thus, if future research shows significant improvements in estimating the cost of basis reduction, these cost models can be applied in our framework. To illustrate our method, we fix two common approaches to estimate the cost of basis reduction. For our standard security estimates, we apply the following approach. We first determine the (minimal) block size β necessary to achieve the targeted Hermite delta δ via

$\delta \approx \Bigl(\frac{\beta}{2\pi e}\,(\pi\beta)^{\frac{1}{\beta}}\Bigr)^{\frac{1}{2(\beta-1)}}$

according to Chen’s thesis [12] (see also, e.g., [27]). We then use the BKZ 2.0 simulator[3] of the full version of [13] to determine the corresponding necessary number of rounds k. Finally, we use the estimate

$\mathrm{Estimate}_{\mathrm{std}}(\beta, n, k) = 0.187281\,\beta\log(\beta) - 1.0192\,\beta + \log(nk) + 16.1$

provided in [2] to determine the (base-two) logarithm of the runtime, where n is the lattice dimension.

For the conservative security estimates, we assume that only one round of BKZ 2.0 with the determined block size β is needed. The reason for this assumption is that one can use progressive BKZ strategies to reduce the number of rounds needed with blocksize β by running BKZ with block sizes smaller than β in advance, see [12, 4]. Since BKZ with smaller block sizes is considerably cheaper, we do not consider the BKZ costs with smaller block sizes in our conservative security estimates. Furthermore, for our conservative security estimates, we assume that the number of rounds can be decreased to one, giving

$\mathrm{Estimate}_{\mathrm{cons}}(\beta, n) = 0.187281\,\beta\log(\beta) - 1.0192\,\beta + \log(n + 1 - \beta) + 16.1.$
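The following sketch illustrates this cost model: it inverts the delta-beta relation by a simple search and evaluates the two runtime estimates. The number of rounds k, which the paper obtains from the BKZ 2.0 simulator of [13], is treated here as a plain input, and the example values at the end are illustrative only.

import math

def delta_of_beta(beta):
    return (beta / (2 * math.pi * math.e) * (math.pi * beta) ** (1.0 / beta)) ** (1.0 / (2 * (beta - 1)))

def beta_for_delta(delta, max_beta=1500):
    """Smallest block size beta whose predicted Hermite delta is at most the target delta."""
    for beta in range(60, max_beta):
        if delta_of_beta(beta) <= delta:
            return beta
    raise ValueError("target delta too small for max_beta")

def log2_bkz_cost_std(beta, n, k):
    return 0.187281 * beta * math.log2(beta) - 1.0192 * beta + math.log2(n * k) + 16.1

def log2_bkz_cost_cons(beta, n):
    return 0.187281 * beta * math.log2(beta) - 1.0192 * beta + math.log2(n + 1 - beta) + 16.1

# illustrative call: block size for a target delta of 1.00466 and the corresponding
# one-round ("conservative") cost in a lattice of dimension 1214
beta = beta_for_delta(1.00466)
print(beta, round(log2_bkz_cost_cons(beta, 1214), 1))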

Runtime optimization.

The optimization of the total runtime T total ( δ , r ) is performed in the following way. For each possible r, we find the optimal δ r that minimizes the runtime T total ( δ , r ) . Consequently, the optimal runtime is given by min { T total ( δ r , r ) } , the smallest of those minimized runtimes. Note that for fixed r, the optimal δ r for our conservative security estimates can easily be found in the following way. For fixed r, the function T red , cons ( δ , r ) is monotonically decreasing in δ, and the function T hyb , cons ( δ , r ) p succ ( δ , r ) is monotonically increasing in δ. Therefore, T total , cons ( δ , r ) is (close to) optimal when both those functions are balanced, i.e., take the same value. Thus the optimal δ r can, for example, be found by a simple binary search.

For our standard security estimates, we assume that the function $\frac{T_{\mathrm{red,std}}(\delta, r)}{p_{\mathrm{succ}}(\delta, r)}$ is monotonically decreasing in $\delta$ in the relevant range, hence the optimal $T_{\mathrm{total,std}}(\delta, r)$ can be found by balancing the functions $\frac{T_{\mathrm{red,std}}(\delta, r)}{p_{\mathrm{succ}}(\delta, r)}$ and $\frac{T_{\mathrm{hyb,std}}(\delta, r)}{p_{\mathrm{succ}}(\delta, r)}$ as above. Note that this assumption might not be true, but it surely leads to upper bounds on the optimal runtime of the attack.
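A minimal sketch of the balancing step is given below. The two cost functions are placeholders that a user would supply (for example, built from the reduction estimates above and the success probability from Section 4.2); the assumed monotonicity is exactly the one stated in the text.

def balance_delta(log_T_red, log_T_hyb_over_p, lo=1.003, hi=1.012, iters=60):
    """Binary search for the delta where the two log2-cost functions meet."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if log_T_red(mid) > log_T_hyb_over_p(mid):
            lo = mid        # reduction still dominates, so a weaker (larger) delta is allowed
        else:
            hi = mid
    return (lo + hi) / 2.0

# Usage (conservative variant, with hypothetical cost functions): with both terms balanced,
# the total cost is one bit above either term.
# delta_r = balance_delta(log_T_red_cons, lambda d: log_T_hyb_cons(d) - log2_p_succ(d))
# log2_T_total_cons = log_T_red_cons(delta_r) + 1.0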

4.4 Typical flaws in previous analyses of the hybrid attack

We end this section by listing some typical over-simplifications which can be found in previous analyses of the hybrid attack. We remark that some simplifying assumptions lead to overestimating the security of the schemes and others to underestimating it. In some analyses, both types occurred at the same time and somewhat magically almost canceled out each other's effect on the security estimates for some parameter sets.

Ignoring the probability p

One of the most frequently encountered simplifications that appeared in several works is the lack of a (correct) calculation of the probability p defined in Assumption 1. As can be seen in Heuristic 1, this probability plays a crucial role in the runtime analysis of the attack. Nevertheless, in several works [21, 14, 18, 30, 8], the authors ignore the presence of this probability by setting $p = 1$ for the sake of simplicity. However, even though we took the probability into account when optimizing the attack parameters[4], for the parameter sets we analyze in Section 5, the probability p was sometimes as low as $2^{-80}$, see Table 4. Note that the wrong assumption $p = 1$ gives more power to the attacker, since it assumes that collisions can always be detected by the attacker, although this is not the case, resulting in security underestimates. We also remark that in some works, the probability p is not completely ignored but determined in a purely experimental way [20] or calculated using additional assumptions [17].

Unrealistic demands on the basis reduction

In most works [20, 21, 17, 14, 18, 30, 8], the authors demand a sufficiently good basis reduction such that the nearest plane algorithm must unveil the searched short vector (or at least with very high probability). To be more precise, [20, Lemma 1] is used to determine what sufficiently good exactly means. In our opinion, this demand is unrealistic, and instead we account for the probability of this event in the success probability, which reflects the attacker's power in a more accurate way. In addition, we note that in most cases, [20, Lemma 1] is not applicable the way it is claimed in several works. We briefly sketch why this is the case. Often, [20, Lemma 1] is applied to determine the necessary quality of a reduced basis such that the nearest plane (on correct input) unveils a vector $\mathbf{v}$ of infinity norm at most y. However, this lemma is only applicable if the basis matrix is in triangular form, which is not the case in general. Therefore, one needs to transform the basis with an orthonormal matrix $\mathbf{Y}$ in order to obtain a triangular basis. This basis however does not span the same lattice but an isomorphic one, which contains the transformed vector $\mathbf{v}\mathbf{Y}$, but (in general) not the vector $\mathbf{v}$. While the transformation $\mathbf{Y}$ preserves the Euclidean norm of the vector $\mathbf{v}$, it does not preserve its infinity norm. Therefore, the lemma cannot be applied with the same infinity norm bound y, which is done in most works. In fact, in the worst case, the new infinity norm bound can be up to $\sqrt{m}\,y$, where m is the lattice dimension. In consequence, one would have to apply [20, Lemma 1] with infinity norm bound $\sqrt{m}\,y$ instead of y in order to get a rigorous statement, which demands a much better basis reduction. This problem is already mentioned – but not solved – in [30]. Note that the worst case, where (i) the vector $\mathbf{v}$ has Euclidean norm $\sqrt{m}\,y$, and (ii) all the weight of the transformed vector is on one coordinate such that $\sqrt{m}\,y$ is a tight bound on the infinity norm after transformation, is highly unlikely. In the following, we give an example to illustrate the different success conditions for the nearest plane algorithm.

Example.

Let $d = 512$ and $q = 1024$. We consider the nearest plane algorithm on a BDD instance $\mathbf{t} \in \Lambda + \mathbf{e}$ in a d-dimensional lattice $\Lambda$ of determinant $q^{d/2}$, where $\mathbf{e}$ is a random binary vector. Naively applying [20, Lemma 1] with infinity norm bound 1 would suggest that a basis reduction of quality $\delta_1 \approx 1.0068$ is sufficient to recover $\mathbf{e}$. Applying the cost model used for our conservative security estimates described in Section 4.3, this would take roughly $T_1 \approx 2^{91}$ operations. However, as described above, the lemma cannot be applied with that naive bound. Instead, using the worst case bound $\sqrt{m}\,y$ on the infinity norm and applying [20, Lemma 1] would lead to a basis reduction of quality $\delta_2 \approx 1.0007$, taking roughly $T_2 \approx 2^{357}$ operations to guarantee the success of the nearest plane algorithm. This shows the impracticality of this approach. Instead, taking the success probability of the nearest plane algorithm into account, as done in this work, one can achieve the following results. Assuming that the Euclidean norm of a random binary vector is roughly $\|\mathbf{e}\| \approx \sqrt{m/2}$, one can balance the quality of the basis reduction and the success probability of the nearest plane algorithm to obtain the optimal trade-off $\delta_3 \approx 1.0067$, taking roughly $T_3 \approx 2^{94}$ operations, with a success probability of roughly $2^{-31}$.

Missing or incorrect optimization

In some works such as [20, 14], the optimization of the attack parameters is either completely missing, ignoring the fact that there is a trade-off between the time spent on basis reduction and the actual attack, or incorrect. As a result, one only obtains upper bounds on the estimated security level but not precise estimates.

Other inaccuracies

Further inaccuracies we encountered include the following:

  1. (1)

    Implicitly assuming that the meet-in-the-middle part 𝐯 g of the short vector has the right number of i-entries for each i [21, 14, 18, 30, 8]. This is not the case in general and therefore needs to be accounted for in the success probability.

  2. (2)

    Simplifying the structure of the secret key when convenient in order to ease the analysis [18, 30]. This can drastically change the norm of the secret vector and in consequence manipulate the runtime estimates.

  3. (3)

    Assuming that an attacker could maybe utilize some algebraic structure without any evidence that this is the case [17, 18, 30]. This assumption results in security underestimates if the assumption is in fact wrong.

5 Updating security estimates against the hybrid attack

In this section, we apply our improved analysis of the hybrid attack to various cryptographic schemes in order to reevaluate their security and derive updated security estimates. The section is structured as follows: Each scheme is analyzed in a separate subsection. We begin with subsections on the encryption schemes NTRU, NTRU Prime and R-BinLWEEnc, and end with subsections on the signature schemes BLISS and GLP. In each subsection, we first give a brief introduction to the scheme. We then apply the hybrid attack to the scheme and analyze its complexity according to Section 4. This analysis is performed with the following four steps:

  1. (1)

    Constructing the lattice. We first construct a lattice of the required form which contains the secret key as a short vector.

  2. (2)

    Determining the attack parameters. We find suitable attack parameters $c_i$ (depending on the meet-in-the-middle dimension r), infinity norm bounds y and k, and estimate the Euclidean norm bound Y.

  3. (3)

    Determining the success probability. We determine the success probability of the attack according to Section 4.2.

  4. (4)

    Optimizing the runtime. We optimize the runtime of the attack for our standard and conservative security estimates according to Section 4.3.

We end each subsection by providing a table of updated security estimates against the hybrid attack obtained by our analysis. In the tables, we also provide the optimal attack parameters ( δ r , r ) derived by our optimization process and the corresponding probability p with which collisions can be detected. For comparison, we further provide the security estimates of the previous works. In our runtime optimization of the attack, we optimized with a precision of up to one bit. As a result, there may not be one unique optimal attack parameter pair ( δ r , r ) , and for the table, we simply pick one that minimizes the runtime (up to one bit precision).

5.1 NTRU

The NTRU encryption system was officially introduced in [19] and is one of the most important lattice-based encryption schemes today due to its high efficiency. The hybrid attack was first developed to attack NTRU [20] and has been applied to various proposed parameter sets since [20, 21, 17, 18, 30]. In this work, we restrict our studies to the NTRU EESS # 1 parameter sets given in [18, Table 3].

Constructing the lattice

The NTRU cryptosystem is defined over the ring $R_q = \mathbb{Z}_q[X]/(X^N - 1)$, where $N, q \in \mathbb{N}$ and N is prime. The parameters N and q are public. Furthermore, there exist public parameters $d_1, d_2, d_3, d_g \in \mathbb{N}$. For the parameter sets considered in [18], the private key is a pair of polynomials $(f, g) \in R_q^2$, where g is a trinary polynomial with exactly $d_g + 1$ ones and $d_g$ minus ones and $f = 1 + 3F$ is invertible in $R_q$ with $F = A_1 A_2 + A_3$ for some trinary polynomials $A_i$ with exactly $d_i$ one and $d_i$ minus one entries. The corresponding public key is $(1, h)$, where $h = f^{-1}g$. In the following, we assume that h and 3 are invertible in $R_q$. We further identify polynomials with their coefficient vectors. We can recover the private key by finding the secret vector $\mathbf{v} = (\mathbf{F}, \mathbf{g})^t$.[5] Since $h = (1 + 3F)^{-1}g$, we have $3^{-1}h^{-1}g = F + 3^{-1}$, and therefore it holds that

$\mathbf{v} + \begin{pmatrix}\mathbf{3^{-1}} \\ \mathbf{0}\end{pmatrix} = \begin{pmatrix}3^{-1}\,\overline{\mathbf{H}}\,\mathbf{g} + q\mathbf{w} \\ \mathbf{g}\end{pmatrix} = \begin{pmatrix}q\mathbf{I}_n & 3^{-1}\overline{\mathbf{H}} \\ \mathbf{0} & \mathbf{I}_n\end{pmatrix}\begin{pmatrix}\mathbf{w} \\ \mathbf{g}\end{pmatrix}$

for some $\mathbf{w} \in \mathbb{Z}^n$, where $\overline{\mathbf{H}}$ is the rotation matrix of $h^{-1}$. Hence, $\mathbf{v}$ can be recovered by solving BDD on input $(-\mathbf{3^{-1}}, \mathbf{0})^t$ in the q-ary lattice

$\Lambda = \Lambda\biggl(\begin{pmatrix}q\mathbf{I}_n & 3^{-1}\overline{\mathbf{H}} \\ \mathbf{0} & \mathbf{I}_n\end{pmatrix}\biggr),$

since $(-\mathbf{3^{-1}}, \mathbf{0})^t - \mathbf{v} \in \Lambda$.[6] A similar way to recover the private key was already mentioned in [30]. The lattice $\Lambda$ has dimension 2n and determinant $q^n$. Since we take the BDD approach for the hybrid attack, we assume that only $\mathbf{v}$, not its rotations or additive inverse, can be found by the attack, see Section 3. Hence, we assume that the set S, as defined in Heuristic 1, consists of at most one element.

Determining the attack parameters

Let $\mathbf{v} = (\mathbf{F}, \mathbf{g})^t = (\mathbf{v}_l, \mathbf{v}_g)^t$ with $\mathbf{v}_l \in \mathbb{Z}^{2n-r}$ and $\mathbf{v}_g \in \mathbb{Z}^r$. Since $\mathbf{g}$ is a trinary vector, we can set the infinity norm bound k on $\mathbf{v}_g$ equal to one. In contrast, determining an infinity norm bound on the vector $\mathbf{v}_l$ is not that trivial, since $\mathbf{F}$ is not trinary but of product form. For a specific parameter set, this can either be done theoretically or experimentally. The same holds for estimating the Euclidean norm of $\mathbf{v}_l$. For our runtime estimates, we determined the expected Euclidean norm of $\mathbf{F}$ experimentally and set the expected Euclidean norm of $\mathbf{v}_l$ to

$\|\mathbf{v}_l\| \approx \sqrt{\|\mathbf{F}\|^2 + \frac{n-r}{n}(2d_g + 1)}.$

We set $2c_{-1} = \frac{r}{n}d_g$ and $2c_1 = \frac{r}{n}(d_g + 1)$ to be equal to the expected number of minus one entries and one entries, respectively, in $\mathbf{v}_g$.[7] For simplicity, we assume that $c_{-1}$ and $c_1$ are integers in the following in order to avoid writing down the rounding operators.

Determining the success probability

The next step is to determine the success probability $p_{\mathrm{succ}}$, i.e., the probability that $\mathbf{v}_g$ has exactly $2c_{-1}$ entries equal to minus one and $2c_1$ entries equal to one, and that $\operatorname{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{v}_g) = \mathbf{v}_l$ holds, where $\mathbf{B}'$ is as given in Heuristic 1. Assuming independence, the success probability is approximately

$p_{\mathrm{succ}} \approx p_c\,p_{\mathrm{NP}},$

where $p_c$ is the probability that $\mathbf{v}_g$ has exactly $2c_{-1}$ entries equal to minus one and $2c_1$ entries equal to one, and $p_{\mathrm{NP}}$ is defined and calculated as in Section 4.2. Obviously, $p_c$ is given by

$p_c = \frac{\binom{r}{2\tilde{c}_0,\,2c_{-1},\,2c_1}\binom{n-r}{d_0 - 2\tilde{c}_0,\;d_g - 2c_{-1},\;d_g + 1 - 2c_1}}{\binom{n}{d_0,\,d_g,\,d_g + 1}},$

where $2\tilde{c}_0 = r - 2c_{-1} - 2c_1$ and $d_0 = n - (d_g + 1) - d_g$. As explained earlier, since we use the BDD approach of the hybrid attack, we assume that $|S| = 1$ in case the attack is successful.
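The combinatorial factor $p_c$ can be evaluated in log-space to avoid huge integers. The sketch below is illustrative; in particular, the value $d_g = 133$ in the example call is a placeholder and not a parameter taken from [18].

from math import lgamma, log

def log2_multinomial(n, parts):
    """log2 of n! / (m_1! * ... * m_k!)."""
    assert sum(parts) == n and all(p >= 0 for p in parts)
    return (lgamma(n + 1) - sum(lgamma(p + 1) for p in parts)) / log(2)

def log2_p_c(n, r, d_g, c_m1, c_1):
    d_0 = n - (d_g + 1) - d_g                      # zero coefficients of g
    two_c0 = r - 2 * c_m1 - 2 * c_1                # zeros among the guessed r coordinates
    return (log2_multinomial(r, [two_c0, 2 * c_m1, 2 * c_1])
            + log2_multinomial(n - r, [d_0 - two_c0, d_g - 2 * c_m1, d_g + 1 - 2 * c_1])
            - log2_multinomial(n, [d_0, d_g, d_g + 1]))

# Illustrative call: n = 401 and r = 122 appear in Table 3, but d_g = 133 is hypothetical.
n, r, d_g = 401, 122, 133
c_m1, c_1 = round(r * d_g / (2 * n)), round(r * (d_g + 1) / (2 * n))
print(round(log2_p_c(n, r, d_g, c_m1, c_1), 1))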

Optimizing the runtime

We determined the optimal attack parameters to estimate the minimal runtime of the hybrid attack for the NTRU EESS # 1 parameter sets given in [18, Table 3]. The results, including the optimal r, corresponding δ r and resulting probability p that collisions can be found, are presented in Table 3. Our analysis shows that the security levels against the hybrid attack claimed in [18] are lower than the actual security levels for all parameter sets. In addition, our results show that for all of the analyzed parameter sets, the hybrid attack does not perform better than a purely combinatorial meet-in-the-middle search, see [18, Table 3]. Our results therefore disprove the common claim that the hybrid attack is necessarily the best attack on NTRU.

Table 3

Optimal attack parameters and security levels against the hybrid attack for NTRU.

Parameter set 𝐧 = 401 𝐧 = 439 𝐧 = 593 𝐧 = 743
Optimal r cons / r std 104/122 122/140 206/219 290/308
Optimal δ r , cons 1.00544 1.00509 1.00412 1.00352
Optimal δ r , std 1.00552 1.00518 1.00420 1.00357
Corresp. p succ cons/std $2^{-70}/2^{-43}$ $2^{-56}/2^{-47}$ $2^{-67}/2^{-62}$ $2^{-78}/2^{-69}$
Security cons/std in bits 145/162 165/182 249/267 335/354
In [18] cons/std 116/127 133/145 204/236 280/330
r cons / r std used in [18] 154/166 175/192 264/303 360/423

5.2 NTRU Prime

The NTRU Prime encryption scheme was recently introduced [8] in order to eliminate worrisome algebraic structures that exist within NTRU [19] or Ring-LWE based encryption schemes such as [25, 3].

Constructing the lattice

The Streamlined NTRU Prime family of cryptosystems is parameterized by three integers $(n, q, t) \in \mathbb{N}^3$, where n and q are odd primes. The base ring for Streamlined NTRU Prime is $R_q = \mathbb{Z}_q[X]/(X^n - X - 1)$. The private key is (essentially) a pair of polynomials $(g, f) \in R_q^2$, where g is drawn uniformly at random from the set of all trinary polynomials, and f is drawn uniformly at random from the set of all trinary polynomials with exactly 2t non-zero coefficients. The corresponding public key is $h = g(3f)^{-1} \in R_q$. In the following, we identify polynomials with their coefficient vectors. As described in [8], the secret vector $\mathbf{v} = (\mathbf{g}, \mathbf{f})$ is contained in the q-ary lattice

$\Lambda = \Lambda\biggl(\begin{pmatrix}q\mathbf{I}_n & 3\mathbf{H} \\ \mathbf{0} & \mathbf{I}_n\end{pmatrix}\biggr),$

where $\mathbf{H}$ is the rotation matrix of h, since

$\begin{pmatrix}q\mathbf{I}_n & 3\mathbf{H} \\ \mathbf{0} & \mathbf{I}_n\end{pmatrix}\begin{pmatrix}\mathbf{w} \\ \mathbf{f}\end{pmatrix} = \begin{pmatrix}q\mathbf{w} + 3\mathbf{H}\mathbf{f} \\ \mathbf{f}\end{pmatrix} = \begin{pmatrix}\mathbf{g} \\ \mathbf{f}\end{pmatrix} = \mathbf{v}$

for some $\mathbf{w} \in \mathbb{Z}^n$. The determinant of the lattice $\Lambda$ is given by $q^n$, and its dimension is equal to 2n. Note that in the case of Streamlined NTRU Prime, the rotations of a trinary polynomial are not necessarily trinary, but it is likely that some are. The authors of [8] conservatively assume that the maximum number of good rotations of $\mathbf{v}$ that can be utilized by the attack is $n - t$, which we also assume in the following. Counting their additive inverses leaves us with $2(n - t)$ short vectors that can be found by the attack.

Determining the attack parameters

Let $\mathbf{v} = (\mathbf{g}, \mathbf{f})^t = (\mathbf{v}_l, \mathbf{v}_g)^t$ with $\mathbf{v}_l \in \mathbb{Z}^{2n-r}$ and $\mathbf{v}_g \in \mathbb{Z}^r$. Since $\mathbf{v}$ is trinary, we can set the infinity norm bounds y and k equal to one. The expected Euclidean norm of $\mathbf{v}_l$ is given by

$\|\mathbf{v}_l\| \approx \sqrt{\tfrac{2}{3}n + \tfrac{n-r}{n}\,2t}.$

We set $2c_1 = 2c_{-1} = \frac{r}{n}t$ equal to the expected number of one entries (or minus one entries, respectively) in $\mathbf{v}_g$. For simplicity, we assume that $c_1$ is an integer in the following.

Determining the success probability

Next, we determine the success probability $p_{\mathrm{succ}} = \Pr[S \neq \emptyset]$, where S denotes the following subset of the lattice $\Lambda$:

$S = \bigl\{\mathbf{w} \in \Lambda \bigm| \mathbf{w} = (\mathbf{w}_l, \mathbf{w}_g)^t \text{ with } \mathbf{w}_l \in \{0, \pm 1\}^{2n-r},\ \mathbf{w}_g \in \{0, \pm 1\}^r,\ \text{exactly } 2c_i \text{ entries of } \mathbf{w}_g \text{ equal to } i \text{ for all } i \in \{-1, 1\},\ \operatorname{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{w}_g) = \mathbf{w}_l\bigr\},$

where $\mathbf{B}'$ is as defined in Heuristic 1. We assume that S is a subset of all the rotations of $\mathbf{v}$ that can be utilized by the attack and their additive inverses. In particular, we assume that S has at most $2(n - t)$ elements. Note that if some vector $\mathbf{w}$ is contained in S, then we also have $-\mathbf{w} \in S$. Assuming independence, the probability $p_S$ that $\mathbf{v} \in S$ is approximately given by

$p_S \approx \frac{\binom{r}{2\tilde{c}_0,\,2c_{-1},\,2c_1}\binom{n-r}{2t - 4c_1}\,2^{2t - 4c_1}}{\binom{n}{2t}\,2^{2t}}\;p_{\mathrm{NP}},$

where $d_0 = n - 2t$ and $2\tilde{c}_0 = r - 4c_1$, and $p_{\mathrm{NP}}$ is defined and calculated as in Section 4.2. Assuming independence, all of the $n - t$ good rotations of $\mathbf{v}$ are contained in S with probability $p_S$ as well. Therefore, the probability $p_{\mathrm{succ}}$ that we have at least one good rotation is approximately

$p_{\mathrm{succ}} = \Pr[S \neq \emptyset] \approx 1 - (1 - p_S)^{n-t}.$

Next, we estimate the size of the set S in the case $S \neq \emptyset$, i.e., when Algorithm 1 is successful. In that case, at least one rotation is contained in S. Then also its additive inverse is contained in S, hence $|S| \ge 2$. We can estimate the size of S in case of success to be

$|S| \approx 2 + 2(n - t - 1)\,p_S,$

where p S is defined as above.

Optimizing the runtime

We applied our new techniques to estimate the minimal runtimes for several NTRU Prime parameter sets proposed in [8, Appendix D]. Besides the “case study parameter set”, for our analysis, we picked one parameter set that offers the lowest bit security and one that offers the highest according to the analysis of [8]. Our resulting security estimates are presented in Table 4. Our analysis shows that the authors of [8] underestimate the security of their scheme for all parameter sets we evaluated.

Table 4

Optimal attack parameters and security levels against the hybrid attack for NTRU Prime.

Parameter set 𝐧 = 607 𝐧 = 739 𝐧 = 929
𝐪 = 18749 𝐪 = 9829 𝐪 = 12953
Optimal r cons / r std 148/162 235/257 328/353
Optimal δ r , cons 1.00466 1.00405 1.00346
Optimal δ r , std 1.00466 1.00407 1.00346
Corresponding p succ cons/std $2^{-63}/2^{-54}$ $2^{-73}/2^{-60}$ $2^{-80}/2^{-65}$
Security cons/std in bits 197/211 258/273 346/363
In [8] 128 228 310

5.3 R-BinLWEEnc

In [9], Buchmann et al. presented R-BinLWEEnc, a lightweight public key encryption scheme based on binary Ring-LWE[8]. To determine the security of their scheme, the authors evaluate the hardness of binary LWE against the hybrid attack using the methodology of [10].

Constructing the lattice

Let m , n , q with m > n and ( 𝐀 , 𝐛 = 𝐀𝐬 + 𝐞 mod q ) be a binary LWE instance with 𝐀 q m × n , 𝐬 q n and binary error 𝐞 { 0 , 1 } .[9] To obtain a more efficient attack, we first subtract the vector ( 0.5 , , 0.5 , 0 , , 0 ) with m - r non-zero and r zero entries from both sides of the equation 𝐛 = 𝐀𝐬 + 𝐞 mod q to obtain a new LWE instance ( 𝐀 , 𝐛 = 𝐀𝐬 + 𝐞 mod q ) , where 𝐞 { ± 0.5 } m - r × { 0 , 1 } r . This way, the expected norm of the first m - r entries is reduced while the last r entries, which are guessed during the attack, remain unchanged. In the following, we only consider this transformed LWE instance with smaller error. Obviously, the vector ( 𝐞 , 1 ) is contained in the q-ary lattice

$$\Lambda = \Lambda_q(\mathbf{A}') = \{\mathbf{v} \in \mathbb{Z}^{m+1} \mid \text{there exists } \mathbf{w} \in \mathbb{Z}^{n+1} \text{ such that } \mathbf{v} = \mathbf{A}'\mathbf{w} \bmod q\},$$

where

$$\mathbf{A}' = \begin{pmatrix} \mathbf{A} & \mathbf{b}' \\ \mathbf{0} & 1 \end{pmatrix} \in \mathbb{Z}_q^{(m+1) \times (n+1)}.$$

Note that by constructing the lattice this way, we only need the error vector $\mathbf{e}$ to be binary, and not also the secret $\mathbf{s}$ as in [9, 10]. The dimension of the lattice Λ is equal to $m+1$, and with high probability, its determinant is $q^{(m+1)-(n+1)} = q^{m-n}$, see, for example, [6]. However, as we know the last component of $(\mathbf{e}', 1)$, it does not need to be guessed, and we may hence ignore it for the hybrid attack and consider the lattice to be of dimension $m$.
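
As a concrete illustration, the following NumPy sketch builds the shifted instance and the matrix $\mathbf{A}'$; it is meant only to make the transformation explicit. The function name is ours, and in an actual implementation one would typically scale everything by 2 to stay over the integers.

    import numpy as np

    def shift_and_embed(A, b, r, q):
        """Build the transformed R-BinLWEEnc instance of Section 5.3.

        A : (m x n) matrix over Z_q, b : length-m vector with b = A s + e mod q, e binary.
        The first m - r error entries are shifted to +-0.5, the last r (guessed) entries
        stay binary.  Returns A' = [[A, b'], [0, 1]] together with b'.
        """
        m, n = A.shape
        shift = np.concatenate([np.full(m - r, 0.5), np.zeros(r)])
        b_shifted = (np.asarray(b, dtype=float) - shift) % q   # b' = b - (0.5,...,0.5,0,...,0)

        A_prime = np.zeros((m + 1, n + 1))
        A_prime[:m, :n] = A
        A_prime[:m, n] = b_shifted
        A_prime[m, n] = 1.0                                    # (e', 1) lies in Lambda_q(A')
        return A_prime, b_shifted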

Determining the attack parameters

Let $\mathbf{v} = \mathbf{e}' = (\mathbf{v}_l, \mathbf{v}_g)^t$ with $\mathbf{v}_l \in \{\pm\frac{1}{2}\}^{m-r}$ and $\mathbf{v}_g \in \{\pm\frac{1}{2}\}^r$. Then obviously, we have $\|\mathbf{v}\|_\infty \leq \frac{1}{2}$, so we set the infinity norm bounds $y = k = \frac{1}{2}$. Since $\mathbf{v}_l$ is a uniformly random vector in $\{\pm\frac{1}{2}\}^{m-r}$, the expected Euclidean norm of $\mathbf{v}_l$ is

$$\|\mathbf{v}_l\| = \sqrt{\frac{m-r}{4}}.$$

We set $2c_{-1/2} = 2c_{1/2} = \frac{r}{2}$ to be the expected number of $-\frac{1}{2}$ and $\frac{1}{2}$ entries of $\mathbf{v}_g$. In the following, we assume that $c_{-1/2} = c_{1/2}$ is an integer in order to not have to deal with rounding operators.

Determining the success probability

We can approximate the success probability $p_{succ}$ by $p_{succ} \approx p_c\, p_{\mathrm{NP}}$, where $p_c$ is the probability that $\mathbf{v}_g$ has exactly $2c_{-1/2}$ entries equal to $-\frac{1}{2}$ and $2c_{1/2}$ entries equal to $\frac{1}{2}$, and $p_{\mathrm{NP}}$ is defined as in Section 4.2. Using the fact that $2c_{-1/2} + 2c_{1/2} = r$, we therefore obtain

$$p_{succ} \approx p_c\, p_{\mathrm{NP}} = 2^{-r}\binom{r}{2c_{1/2}}\, p_{\mathrm{NP}}.$$

We assume that if the attack is successful, then $|S| = 2$, where $S$ is defined as in Heuristic 1, since $\mathbf{e}'$ and $-\mathbf{e}'$ are expected to be the only vectors that can be found by the attack.

Optimizing the runtime

We reevaluated the security of the R-BinLWEEnc parameter sets proposed in [9]. Our security estimates, the optimal attack parameters r and δ r , and the corresponding probability p are presented in Table 5. The original security estimates given in [9] are within the security range we determined.

Table 5

Optimal attack parameters and security levels against the hybrid attack for R-BinLWEEnc.

Parameter set Set-I Set-II Set-III
Optimal r cons / r std 104/116 88/96 276/268
Optimal δ r , cons 1.00691 1.00731 1.00478
Optimal δ r , std 1.00698 1.00741 1.00487
Corresponding p_succ cons/std 2^{-33}/2^{-27} 2^{-31}/2^{-28} 2^{-38}/2^{-43}
Security cons/std in bits 88/99 79/90 187/197
In [9] 94 84 190

5.4 BLISS

The signature scheme BLISS, introduced in [14], is one of the most important lattice-based signature schemes. In the original paper, the authors considered the hybrid attack on their signature scheme for their security estimates; however, their analysis is rather vague.

Constructing the lattice

In the BLISS signature scheme, the setup is the following: Let $n$ be a power of two, $d_1, d_2 \in \mathbb{N}$ such that $d_1 + d_2 \leq n$ holds, $q$ a prime modulus with $q \equiv 1 \bmod 2n$, and $\mathcal{R}_q = \mathbb{Z}_q[x]/(x^n+1)$. The signing key is of the form $(s_1, s_2) = (f, 2g+1)$, where $f \in \mathcal{R}_q^\times$, $g \in \mathcal{R}_q$, each with $d_1$ coefficients in $\{\pm 1\}$ and $d_2$ coefficients in $\{\pm 2\}$, and the remaining coefficients equal to 0. The public key is essentially $a = s_2/s_1 \in \mathcal{R}_q$. We assume that $a$ is invertible in $\mathcal{R}_q$, which is the case with very high probability. Hence, we obtain the equation $s_1 = s_2 a^{-1} \in \mathcal{R}_q$, or equivalently $f = 2ga^{-1} + a^{-1} \bmod q$. In the following, we identify polynomials with their coefficient vectors.

In order to recover the signing key, it is sufficient to find the vector 𝐯 = ( 𝐟 , 𝐠 ) t . Similar to our previous analysis of NTRU in Section 5.1, we have that

$$\mathbf{v} + \begin{pmatrix} -\mathbf{a}^{-1} \\ \mathbf{0} \end{pmatrix} = \begin{pmatrix} 2\mathbf{g}\mathbf{a}^{-1} + q\mathbf{w} \\ \mathbf{g} \end{pmatrix} = \begin{pmatrix} q\mathbf{I}_n & 2\mathbf{A} \\ \mathbf{0} & \mathbf{I}_n \end{pmatrix} \begin{pmatrix} \mathbf{w} \\ \mathbf{g} \end{pmatrix}$$

for some $\mathbf{w} \in \mathbb{Z}^n$, where $\mathbf{A}$ is the rotation matrix of $\mathbf{a}^{-1}$. Hence, $\mathbf{v}$ can be recovered by solving the BDD on input $(\mathbf{a}^{-1}, \mathbf{0})^t$ in the q-ary lattice

$$\Lambda = \Lambda\!\left(\begin{pmatrix} q\mathbf{I}_n & 2\mathbf{A} \\ \mathbf{0} & \mathbf{I}_n \end{pmatrix}\right),$$

since $(\mathbf{a}^{-1}, \mathbf{0})^t - \mathbf{v} \in \Lambda$. The determinant of the lattice Λ is $q^n$, and its dimension is equal to $2n$.
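
To make the construction concrete, the following Python/NumPy sketch builds the negacyclic rotation matrix of $\mathbf{a}^{-1}$ and assembles the basis and the BDD target above. It is a sketch under the stated setup; the coefficient vector of $a^{-1}$ is assumed to be computed elsewhere (e.g., with SageMath), and the helper names are ours.

    import numpy as np

    def neg_rot(poly, q):
        """Rotation matrix of poly in Z_q[x]/(x^n + 1): column j holds the
        coefficients of x^j * poly(x), so neg_rot(a, q) @ s = a*s in the ring (mod q)."""
        n = len(poly)
        R = np.zeros((n, n), dtype=np.int64)
        col = np.array(poly, dtype=np.int64) % q
        for j in range(n):
            R[:, j] = col
            col = np.roll(col, 1)
            col[0] = (-col[0]) % q       # x^n = -1, so the wrap-around flips the sign
        return R

    def bliss_bdd_instance(a_inv, q):
        """Basis of Lambda((q I_n, 2A; 0, I_n)) and the BDD target (a^{-1}, 0)^t."""
        n = len(a_inv)
        A = neg_rot(a_inv, q)
        basis = np.block([[q * np.eye(n, dtype=np.int64), (2 * A) % q],
                          [np.zeros((n, n), dtype=np.int64), np.eye(n, dtype=np.int64)]])
        target = np.concatenate([np.asarray(a_inv, dtype=np.int64) % q,
                                 np.zeros(n, dtype=np.int64)])
        return basis, target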

Determining the attack parameters

In the following, let $\mathbf{v} = (\mathbf{f}, \mathbf{g})^t = (\mathbf{v}_l, \mathbf{v}_g)^t$ with $\mathbf{v}_l \in \mathbb{Z}^{m-r}$ and $\mathbf{v}_g \in \mathbb{Z}^r$. Since we are using the hybrid attack to solve a BDD problem, the rotations of $\mathbf{v}$ cannot be utilized in the attack (or at least it is not known how), see Section 3. We therefore assume that $\mathbf{v}$ is the only rotation useful in the attack, i.e., that the set of good rotations $S$ contains at most the vector $\mathbf{v}$. The first step is to determine proper bounds $y$ on $\|\mathbf{v}_l\|_\infty$ and $k$ on $\|\mathbf{v}_g\|_\infty$ and to find suitable guessing parameters $c_i$. By construction, we obviously have $\|\mathbf{v}\|_\infty \leq 2$, thus we can set the infinity norm bounds $y = k = 2$. The expected Euclidean norm of $\mathbf{v}_l$ is given by

$$\|\mathbf{v}_l\| \approx \sqrt{d_1 + 4d_2 + \frac{n-r}{n}\,(d_1 + 4d_2)}.$$

We set $2c_i$ equal to the expected number of $i$-entries in $\mathbf{v}_g$, i.e., $c_{-2} = c_2 = \frac{r}{n}\cdot\frac{d_2}{4}$ and $c_{-1} = c_1 = \frac{r}{n}\cdot\frac{d_1}{4}$. For simplicity, we assume that $c_1$ and $c_2$ are integers in the following.

Determining the success probability

Next, we determine the success probability $p_{succ}$, which is the probability that $\mathrm{NP}_{\mathbf{B}}(\mathbf{C}\mathbf{v}_g) = \mathbf{v}_l$ and exactly $2c_i$ entries of $\mathbf{v}_g$ are equal to $i$ for all $i \in \{\pm 1, \ldots, \pm k\}$. The probability $p_c$ that exactly $2c_i$ entries of the vector $\mathbf{v}_g$ are equal to $i$ for all $i \in \{\pm 1, \ldots, \pm k\}$ is given by

$$p_c = \frac{\binom{r}{2\tilde{c}_0,\, 2c_{-1},\, 2c_1,\, 2c_{-2},\, 2c_2}\, \binom{n-r}{d_0 - 2\tilde{c}_0,\, d_1 - 4c_1,\, d_2 - 4c_2}\, 2^{d_1 + d_2 - 4(c_1 + c_2)}}{\binom{n}{d_0, d_1, d_2}\, 2^{d_1 + d_2}},$$

where $d_0 = n - d_1 - d_2$ and $2\tilde{c}_0 = r - 2(c_{-1} + c_1 + c_{-2} + c_2)$. Assuming independence, the success probability is approximately given by

$$p_{succ} \approx p_c\, p_{\mathrm{NP}},$$

where $p_{\mathrm{NP}}$ is defined as in Section 4.2. As explained earlier, we assume that $S \subseteq \{\mathbf{v}\}$; so if Algorithm 1 is successful, we have $|S| = 1$.
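
For reference, the following Python sketch evaluates the displayed expression for $p_c$ (and hence $p_{succ} \approx p_c\, p_{\mathrm{NP}}$, with $p_{\mathrm{NP}}$ supplied externally). The helper and its name are ours; an analogous computation with three symbol classes covers the trinary GLP case in Section 5.5.

    from math import factorial, prod

    def multinomial(n, parts):
        """n! / (parts[0]! * ... * parts[-1]! * (n - sum(parts))!); the remainder are zeros."""
        rest = n - sum(parts)
        return factorial(n) // (prod(factorial(p) for p in parts) * factorial(rest))

    def bliss_p_c(n, d1, d2, r, c1, c2):
        """Probability that v_g has exactly 2*c1 entries equal to +1 (and to -1)
        and 2*c2 entries equal to +2 (and to -2)."""
        numerator = (multinomial(r, [2 * c1, 2 * c1, 2 * c2, 2 * c2])
                     * multinomial(n - r, [d1 - 4 * c1, d2 - 4 * c2])
                     * 2 ** (d1 + d2 - 4 * (c1 + c2)))
        denominator = multinomial(n, [d1, d2]) * 2 ** (d1 + d2)
        return numerator / denominator

    # p_succ is then approximately bliss_p_c(n, d1, d2, r, c1, c2) * p_np.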

Optimizing the runtime

We performed the optimization process for the BLISS parameter sets proposed in [14]. The results are presented in Table 6. Besides the security levels against the hybrid attack, we provide the optimal attack parameters $r$ and $\delta_r$ leading to a minimal runtime of the attack, as well as the probability $p$. Our results show that the security estimates for the BLISS-I, BLISS-II, and BLISS-III parameter sets given in [14] are within the range of security we determined, whereas the BLISS-IV parameter set is less secure than originally claimed. In addition, the authors of [14] claim that there are at least 17 bits of security margin built into their security estimates, which is incorrect for all parameter sets according to our analysis.

Table 6

Optimal attack parameters and security levels against the hybrid attack for BLISS.

Parameter set BLISS-I BLISS-II BLISS-III BLISS-IV
Optimal r cons / r std 152/152 152/152 109/144 99/137
Optimal δ r , cons 1.00588 1.00588 1.00532 1.00518
Optimal δ r , std 1.00600 1.00600 1.00541 1.00524
Corresp. p_succ cons/std 2^{-35}/2^{-38} 2^{-35}/2^{-38} 2^{-58}/2^{-40} 2^{-67}/2^{-44}
Security cons/std in bits 124/139 124/139 152/170 160/182
In [14] 128 128 160 192
r used in [14] 194 194 183 201

5.5 GLP

The GLP signature scheme was introduced in [16]. In the original work, the authors did not consider the hybrid attack when deriving their security estimates. Later, in [14], the hybrid attack was also applied to the GLP-I parameter set. The GLP-II parameter set has not been analyzed regarding the hybrid attack so far.

Constructing the lattice

For the GLP signature scheme, the setup is the following: Let $n$ be a power of two, $q$ a prime modulus with $q \equiv 1 \bmod 2n$, and $\mathcal{R}_q = \mathbb{Z}_q[x]/(x^n+1)$. The signing key is of the form $(s_1, s_2)$, where $s_1$ and $s_2$ are sampled uniformly at random among all polynomials of $\mathcal{R}_q$ with coefficients in $\{-1, 0, 1\}$. The corresponding public key is then of the form $(a, b = as_1 + s_2) \in \mathcal{R}_q^2$, where $a$ is drawn uniformly at random from $\mathcal{R}_q$. So we know that $0 = -b + as_1 + s_2$ in $\mathcal{R}_q$. Identifying polynomials with their coefficient vectors, we therefore have that

$$\mathbf{v} := \begin{pmatrix} -1 \\ \mathbf{s}_1 \\ \mathbf{s}_2 \end{pmatrix} \in \Lambda := \Lambda_q(\mathbf{A}) = \{\mathbf{w} \in \mathbb{Z}^{2n+1} \mid \mathbf{A}\mathbf{w} \equiv \mathbf{0} \bmod q\} \subset \mathbb{Z}^{2n+1},$$

where $\mathbf{A} = (\mathbf{b} \mid \mathrm{rot}(\mathbf{a}) \mid \mathbf{I}_n)$, and $\mathrm{rot}(\mathbf{a})$ is the rotation matrix of $\mathbf{a}$. Because of how the lattice is constructed, we do not assume that rotations of $\mathbf{v}$ can be utilized by the attack.[10] Therefore, with very high probability, $\mathbf{v}$ and $-\mathbf{v}$ are the only non-zero trinary vectors contained in Λ, which we assume in the following. Since $q$ is prime and $\mathbf{A}$ has full rank, we have that $\det \Lambda = q^n$, see for example [7]. In Appendix A.1, we show how to construct a basis of the form

$$\mathbf{B} = \begin{pmatrix} q\mathbf{I}_n & \ast \\ \mathbf{0} & \mathbf{I}_{n+1} \end{pmatrix} \in \mathbb{Z}^{(2n+1) \times (2n+1)}$$

for the q-ary lattice Λ.

Determining the attack parameters

Up to its first coordinate, which is fixed to $-1$, the short vector $\mathbf{v}$ is drawn uniformly at random from $\{-1, 0, 1\}^{2n+1}$. Let $\mathbf{v} = (\mathbf{v}_l, \mathbf{v}_g)^t$ with $\mathbf{v}_l \in \mathbb{Z}^{m-r}$ and $\mathbf{v}_g \in \mathbb{Z}^r$. Then obviously, $\|\mathbf{v}_l\|_\infty \leq 1$ and $\|\mathbf{v}_g\|_\infty \leq 1$ hold; so we can set the infinity norm bounds $y$ and $k$ equal to one. The expected Euclidean norm of $\mathbf{v}_l$ is approximately

$$\|\mathbf{v}_l\| \approx \sqrt{\frac{2(m-r)}{3}}.$$

We set $2c_{-1} = 2c_1 = \frac{r}{3}$ to be the expected number of ones and minus ones. For simplicity, we assume that $c_{-1} = c_1$ is an integer in the following.

Determining the success probability

The success probability $p_{succ}$ of the attack is approximately $p_{succ} \approx p_c\, p_{\mathrm{NP}}$, where $p_c$ is the probability that $\mathbf{v}_g$ has exactly $2c_{-1}$ minus one entries and $2c_1$ one entries, and $p_{\mathrm{NP}}$ is defined as in Section 4.2. Calculating $p_c$ yields

$$p_{succ} \approx p_c\, p_{\mathrm{NP}} = 3^{-r}\binom{r}{\frac{r}{3}, \frac{r}{3}, \frac{r}{3}}\, p_{\mathrm{NP}}.$$

As previously mentioned, we assume that if the attack is successful, then | S | = 2 .

Optimizing the runtime

We performed the optimization for the GLP parameter sets proposed in [16]. The results, including the optimal attack parameters r and δ r and the probability p, are shown in Table 7. The security level of the GLP-I parameter set claimed in [14] is within the range of security we determined. In [14], the authors did not analyze the hybrid attack for the GLP-II parameter set. Güneysu et al. [16] claimed a security level of at least 256 bits (not considering the hybrid attack) for the GLP-II parameter set, whereas we show that it offers at most 233 bits of security against the hybrid attack.

Table 7

Optimal attack parameters and security levels against the hybrid attack for GLP.

Parameter set GLP-I GLP-II
Optimal r cons / r std 30/54 168/192
Optimal δ r , cons 1.00776 1.00450
Optimal δ r , std 1.00769 1.00451
Corresponding p_succ cons/std 2^{-41}/2^{-25} 2^{-61}/2^{-49}
Security cons/std in bits 71/88 212/233
In [14, 16] 75 to 80 256
r used in [14] 85

6 Conclusion and future work

In this work, we described a general version of the hybrid attack and presented improved techniques to analyze its runtime. We further reevaluated various cryptographic schemes regarding their security against the hybrid attack. Our analysis shows that several security estimates from previous works are in fact unreliable. By updating these unreliable estimates, we contribute to more trustworthy security estimates for lattice-based cryptography.

For future work, we hope that more provable statements about the practicality of the hybrid attack can be derived. For instance, our results show that the hybrid attack is not the best known attack on all NTRU instances, as previously thought. It would be interesting to prove that under certain conditions on the key structure, the hybrid attack is always outperformed by some other attack. Another possible line of future work is applying the hybrid attack to a broader range of cryptographic schemes than considered in this work. Furthermore, analyzing the memory requirements of the hybrid attack was beyond the scope of this work. In future research, the memory requirements could be analyzed and reduced (as done, for example, in [32] for binary NTRU keys). If required, the memory consumption could then be taken into account when optimizing the attack parameters for the hybrid attack.

Funding source: Deutsche Forschungsgemeinschaft

Award Identifier / Grant number: CRC 1119

A Appendix: On q-ary lattices

A.1 Constructing a basis of the required form

In the following lemma, we show that for q-ary lattices, where q is prime, there always exists a basis of the form required for the attack. The size of the identity in the bottom right corner of the basis depends on the determinant of the lattice. In the proof, we also show how to construct such a basis.

Lemma A.1.

Let $q$ be prime, $m \in \mathbb{N}$, and let $\Lambda \subseteq \mathbb{Z}^m$ be a q-ary lattice.

  (i) There exists some $k \in \mathbb{N}$ with $0 \leq k \leq m$ such that $\det(\Lambda) = q^k$.

  (ii) Let $\det(\Lambda) = q^k$. Then there is a matrix $\mathbf{A} \in \mathbb{Z}_q^{m \times (m-k)}$ of rank $m-k$ (over $\mathbb{Z}_q$) such that $\Lambda = \Lambda_q(\mathbf{A})$.

  (iii) Let $\det(\Lambda) = q^k$ and let $\mathbf{A} = \begin{pmatrix} \mathbf{A}_1 \\ \mathbf{A}_2 \end{pmatrix}$ with $\mathbf{A}_1 \in \mathbb{Z}_q^{k \times (m-k)}$, $\mathbf{A}_2 \in \mathbb{Z}_q^{(m-k) \times (m-k)}$ be a matrix of rank $m-k$ (over $\mathbb{Z}_q$) such that $\Lambda = \Lambda_q(\mathbf{A})$. If $\mathbf{A}_2$ is invertible over $\mathbb{Z}_q$, then the columns of the matrix

  $$\mathbf{B} = \begin{pmatrix} q\mathbf{I}_k & \mathbf{A}_1\mathbf{A}_2^{-1} \\ \mathbf{0} & \mathbf{I}_{m-k} \end{pmatrix} \in \mathbb{Z}^{m \times m}$$

  form a basis of the lattice Λ.

Proof.

(i) Obviously, $\det(\Lambda)$ divides $\det(q\mathbb{Z}^m) = q^m$, since $q\mathbb{Z}^m \subseteq \Lambda$, and therefore $\det(\Lambda)$ is some non-negative power of $q$, because $q$ is prime.

(ii) We have $(\mathbb{Z}^m : q\mathbb{Z}^m) = (\mathbb{Z}^m : \Lambda)(\Lambda : q\mathbb{Z}^m)$, and therefore

$$(\Lambda : q\mathbb{Z}^m) = \frac{(\mathbb{Z}^m : q\mathbb{Z}^m)}{(\mathbb{Z}^m : \Lambda)} = \frac{\det(q\mathbb{Z}^m)}{\det(\Lambda)} = q^{m-k}.$$

Let $\mathbf{A}' \in \mathbb{Z}^{m \times m}$ be some lattice basis of Λ. Since $\Lambda / q\mathbb{Z}^m$ is in one-to-one correspondence with the $\mathbb{Z}_q$-vector space spanned by $\mathbf{A}'$ over $\mathbb{Z}_q$, this vector space has to be of dimension $m-k$, and therefore $\mathbf{A}'$ has rank $m-k$ over $\mathbb{Z}_q$. This implies that there is some matrix $\mathbf{A}$ consisting of $m-k$ columns of $\mathbf{A}'$ such that $\Lambda = \Lambda((q\mathbf{I}_m \mid \mathbf{A})) = \Lambda_q(\mathbf{A})$.

(iii) By assumption, $\mathbf{A}_2$ is invertible, and thus we have

$$\begin{aligned} \Lambda &= \{\mathbf{v} \in \mathbb{Z}^m \mid \text{there exists } \mathbf{w} \in \mathbb{Z}^{m-k} \text{ such that } \mathbf{v} = \mathbf{A}\mathbf{w} \bmod q\} \\ &= \left\{\mathbf{v} \in \mathbb{Z}^m \;\middle|\; \text{there exists } \mathbf{w} \in \mathbb{Z}^{m-k} \text{ such that } \mathbf{v} = \begin{pmatrix} \mathbf{A}_1 \\ \mathbf{A}_2 \end{pmatrix} \mathbf{A}_2^{-1}\mathbf{w} \bmod q\right\} \\ &= \left\{\begin{pmatrix} \mathbf{A}_1\mathbf{A}_2^{-1} \\ \mathbf{I}_{m-k} \end{pmatrix} \mathbf{w} \;\middle|\; \mathbf{w} \in \mathbb{Z}^{m-k}\right\} + q\mathbb{Z}^m. \end{aligned}$$

Therefore, the columns of the matrix

$$\left(q\mathbf{I}_m \;\middle|\; \begin{pmatrix} \mathbf{A}_1\mathbf{A}_2^{-1} \\ \mathbf{I}_{m-k} \end{pmatrix}\right) \in \mathbb{Z}^{m \times (m + (m-k))}$$

form a generating set of the lattice Λ, which can be reduced to the basis $\mathbf{B}$. ∎
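
The construction in part (iii) is easy to carry out in practice. The following Python sketch does so for moderate prime $q$; the helper names are ours, and the matrix inverse over $\mathbb{Z}_q$ is computed by a simple Gauss-Jordan elimination, which suffices for the parameter sizes considered here.

    import numpy as np

    def inv_mod_q(M, q):
        """Inverse of a square matrix over Z_q (q prime) via Gauss-Jordan elimination."""
        M = np.array(M, dtype=np.int64) % q
        k = M.shape[0]
        aug = np.concatenate([M, np.eye(k, dtype=np.int64)], axis=1)
        for col in range(k):
            pivot = next(row for row in range(col, k) if aug[row, col] % q != 0)
            aug[[col, pivot]] = aug[[pivot, col]]                      # move a non-zero pivot up
            aug[col] = (aug[col] * pow(int(aug[col, col]), -1, q)) % q
            for row in range(k):
                if row != col:
                    aug[row] = (aug[row] - aug[row, col] * aug[col]) % q
        return aug[:, k:]

    def qary_basis(A, q, k):
        """Basis B = [[q I_k, A1 A2^{-1}], [0, I_{m-k}]] of Lambda_q(A) as in Lemma A.1 (iii).

        A is an m x (m-k) integer matrix of rank m-k over Z_q whose bottom
        (m-k) x (m-k) block A2 is invertible modulo q."""
        A = np.array(A, dtype=np.int64) % q
        m = A.shape[0]
        A1, A2 = A[:k, :], A[k:, :]
        top_right = (A1 @ inv_mod_q(A2, q)) % q
        return np.block([[q * np.eye(k, dtype=np.int64), top_right],
                         [np.zeros((m - k, k), dtype=np.int64), np.eye(m - k, dtype=np.int64)]])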

A.2 Modifying the GSA for q-ary lattices

Typically, the Gram–Schmidt lengths obtained after performing a basis reduction with quality δ can be approximated by the geometric series assumption (GSA), see Section 2. However, for q-ary lattices with a basis of the above form, this assumption needs to be modified. This has already been considered and confirmed with experimental results in previous works, see for example [20, 17, 18, 30]. Since in this work we derive simple formulas predicting the quality of the reduction, we explain how to obtain these formulas in more detail. We begin by sketching the reason for modifying the GSA for q-ary lattices, given a lattice basis $\mathbf{B}$ of the form

$$\mathbf{B} = \begin{pmatrix} q\mathbf{I}_a & \ast \\ \mathbf{0} & \mathbf{I}_b \end{pmatrix} \in \mathbb{Z}^{d \times d},$$

where d = a + b . If the basis reduction is not strong enough, i.e., the Hermite delta is too large, the GSA predicts that the first Gram–Schmidt vectors of the reduced basis have norm bigger than q. However, in practice, this will not happen, since in this case, the first vectors will simply not be reduced. This means that instead of reducing the whole basis 𝐁 , one can just reduce the last vectors that will actually be reduced. Let k denote the (so far unknown) number of the last vectors that are actually reduced (i.e., their corresponding Gram–Schmidt vectors according to the GSA have norm smaller than q). We assume that the basis reduction is sufficiently weak such that k < d and sufficiently strong such that k > b . We write 𝐁 in the form

$$\mathbf{B} = \begin{pmatrix} q\mathbf{I}_{d-k} & \mathbf{D} \\ \mathbf{0} & \mathbf{B}_1 \end{pmatrix}$$

for some $\mathbf{B}_1 \in \mathbb{Z}^{k \times k}$ and $\mathbf{D} \in \mathbb{Z}^{(d-k) \times k}$. Now, instead of $\mathbf{B}$, we only reduce $\mathbf{B}_1$ to $\mathbf{B}_1' = \mathbf{B}_1\mathbf{U}$ for some unimodular $\mathbf{U} \in \mathbb{Z}^{k \times k}$. This yields a reduced basis

$$\mathbf{B}' = \begin{pmatrix} q\mathbf{I}_{d-k} & \mathbf{D}\mathbf{U} \\ \mathbf{0} & \mathbf{B}_1' \end{pmatrix}$$

of the lattice spanned by $\mathbf{B}$. The Gram–Schmidt basis of this new basis $\mathbf{B}'$ is given by

$$\bar{\mathbf{B}}' = \begin{pmatrix} q\mathbf{I}_{d-k} & \mathbf{0} \\ \mathbf{0} & \bar{\mathbf{B}}_1' \end{pmatrix}.$$

Therefore, the lengths of the Gram–Schmidt basis vectors of $\mathbf{B}'$ are equal to $q$ for the first $d-k$ vectors and then equal to the lengths of the Gram–Schmidt basis vectors of $\mathbf{B}_1'$, which are smaller than $q$. In order to predict the lengths of $\bar{\mathbf{B}}'$, we can apply the GSA to the lengths of the Gram–Schmidt basis vectors of $\mathbf{B}_1'$, since these vectors are actually reduced. What remains is to determine $k$. Assume we apply a basis reduction to $\mathbf{B}_1$ that results in a reduced basis $\mathbf{B}_1'$ of Hermite delta δ. By our construction, we can assume that the first Gram–Schmidt basis vector of $\mathbf{B}_1'$ has norm roughly equal to $q$, so the GSA implies

$$\delta^k \det(\Lambda(\mathbf{B}_1))^{\frac{1}{k}} = q.$$

Using the fact that $\det(\Lambda(\mathbf{B}_1)) = q^{k-b}$ and $k \leq d$, we can solve for $k$ and obtain

$$\text{(A.1)} \qquad k = \min\left(\sqrt{\frac{b}{\log_q(\delta)}},\ d\right).$$

Summarizing, we expect that after the basis reduction, the Gram–Schmidt basis $\bar{\mathbf{B}}'$ has lengths $R_1, \ldots, R_d$, where

$$R_i = \begin{cases} q & \text{if } i \leq d-k, \\ \delta^{-2(i-(d-k)-1)+k}\, q^{\frac{k-b}{k}} & \text{else}, \end{cases}$$

and k is given as in equation (A.1).

Note that it might also happen that the last Gram–Schmidt lengths are predicted to be smaller than 1. In this case, these last vectors will not be reduced in practice either, since the basis matrix has the identity in the bottom right corner, and the GSA would have to be modified further. However, for realistic attack parameters, this phenomenon never occurred during our runtime optimizations; we therefore do not include it in our formulas and leave the straightforward adaptation to the reader if needed.
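
A direct way to use this prediction is sketched below in Python. The function evaluates equation (A.1) and the piecewise formula for the $R_i$; it is a plain transcription of the formulas above (the function name and the rounding of $k$ to an integer are our choices), assuming $b < k \leq d$ as in the derivation.

    from math import log, sqrt

    def predicted_gs_lengths(a, b, q, delta):
        """Predicted Gram-Schmidt lengths R_1,...,R_d after reducing a q-ary basis
        of the form (q*I_a, * ; 0, I_b) with Hermite delta `delta` (d = a + b)."""
        d = a + b
        k = min(int(round(sqrt(b / log(delta, q)))), d)   # equation (A.1)
        det_root = q ** ((k - b) / k)                     # det(Lambda(B_1))^(1/k) = q^((k-b)/k)
        lengths = []
        for i in range(1, d + 1):
            if i <= d - k:
                lengths.append(float(q))                  # first d-k vectors stay untouched
            else:
                j = i - (d - k)                           # position inside the reduced block
                lengths.append(delta ** (-2 * (j - 1) + k) * det_root)
        return lengths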

A.3 Experimental results for the probability p

In the following, we provide the results of some preliminary experiments supporting the validity of the formula for the collision-finding probability $p$ provided in Assumption 4. To this end, for $n = 40$, $n = 60$ and $n = 80$, we created q-ary lattices $\Lambda_n$ that contain a short binary vector by embedding binary LWE instances into an SVP problem according to Section 5.3 (however, we did not shift the first components of the error vector by 0.5). For each $n$, we chose a random binary LWE instance with LWE parameters $n$, $m = 2n$, $q = 521$ and created a basis $\mathbf{B}_n'$ of the corresponding SVP lattice of the form

$$\mathbf{B}_n' = \begin{pmatrix} \mathbf{B}_n & \mathbf{C}_n \\ \mathbf{0} & \mathbf{I}_r \end{pmatrix} \in \mathbb{Z}^{(m+1) \times (m+1)}$$

with $r = 28$. We then BKZ-reduced the upper-left part $\mathbf{B}_n$ of the basis with block size 20. Let $(\mathbf{v}_l^{(n)}, \mathbf{v}_g^{(n)}) \in \Lambda_n$ with $\mathbf{v}_g^{(n)} \in \{0,1\}^r$ be the short binary vector in $\Lambda_n$. For each $n$, we repeated the following experiment 10,000 times and recorded the number of success cases: Guess a vector $\mathbf{w} \in \{0,1\}^r$ with $\frac{r}{4} = 7$ non-zero entries as in the hybrid attack and check whether the vector $\mathbf{C}_n\mathbf{w}$ is $\mathbf{v}_l^{(n)}$-admissible with respect to the basis $\mathbf{B}_n$ by checking whether $\mathrm{NP}_{\mathbf{B}_n}(\mathbf{C}_n\mathbf{w}) = \mathrm{NP}_{\mathbf{B}_n}(\mathbf{C}_n\mathbf{w} - \mathbf{v}_l^{(n)}) + \mathbf{v}_l^{(n)}$. The experiments were performed using SageMath [31]. For comparison, we calculated the probability $p$ according to the formula provided in Assumption 4, using the actual Gram–Schmidt norms of the basis $\mathbf{B}_n$ and the norm of $\mathbf{v}_l^{(n)}$ to calculate the $r_i$. The results are presented in Table 8. They suggest that for the analyzed instances, Assumption 4 is a good approximation of the actual probability.
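
The admissibility check used in these experiments can be sketched as follows (Python/NumPy). Here $\mathrm{NP}_{\mathbf{B}}(\mathbf{t})$ is taken to be the remaining short vector produced by Babai's nearest plane algorithm, i.e., $\mathbf{t}$ minus the lattice vector it returns; the Gram–Schmidt information is recomputed via a QR decomposition rather than reused from the BKZ reduction, and the function names are ours. Our actual experiments were run in SageMath, so this sketch is only meant to make the check precise.

    import numpy as np

    def nearest_plane(B, t):
        """Babai's nearest plane with column basis B: return e = NP_B(t), i.e. the
        short vector with t - e in the lattice spanned by the columns of B."""
        B = np.asarray(B, dtype=float)
        Q, _ = np.linalg.qr(B)                      # Q holds the Gram-Schmidt directions
        e = np.array(t, dtype=float)
        for i in reversed(range(B.shape[1])):
            # coefficient of b_i chosen so that e is reduced along the i-th GS direction
            c = round(np.dot(e, Q[:, i]) / np.dot(B[:, i], Q[:, i]))
            e = e - c * B[:, i]
        return e

    def is_admissible(B, C, w, v_l):
        """Check whether C @ w is v_l-admissible: NP_B(C w) == NP_B(C w - v_l) + v_l."""
        lhs = nearest_plane(B, C @ w)
        rhs = nearest_plane(B, C @ w - v_l) + v_l
        return np.allclose(lhs, rhs)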

Table 8

Comparison of Assumption 4 and our experimental results for binary LWE instances with LWE parameters n , m = 2 n , q = 521 for n = 40 , n = 60 , and n = 80 .

n = 40 n = 60 n = 80
p according to Assumption 4 2^{-0.399} 2^{-1.709} 2^{-3.857}
p according to experiments 2^{-0.400} 2^{-1.563} 2^{-4.088}

Acknowledgements

We thank Florian Göpfert and John Schanck for helpful discussions and comments.

References

[1] M. R. Albrecht, R. Fitzpatrick and F. Göpfert, On the efficacy of solving LWE by reduction to unique-SVP, Information Security and Cryptology—ICISC 2013, Lecture Notes in Comput. Sci. 8565, Springer, Cham (2014), 293–310.

[2] M. R. Albrecht, R. Player and S. Scott, On the concrete hardness of learning with errors, J. Math. Cryptol. 9 (2015), no. 3, 169–203.

[3] E. Alkim, L. Ducas, T. Pöppelmann and P. Schwabe, Post-quantum key exchange - A new hope, Proceedings of the 25th USENIX Security Symposium, USENIX, Berkeley (2016), 327–343.

[4] Y. Aono, Y. Wang, T. Hayashi and T. Takagi, Improved progressive BKZ algorithms and their precise cost estimation by sharp simulator, Advances in Cryptology—EUROCRYPT 2016. Part I, Lecture Notes in Comput. Sci. 9665, Springer, Berlin (2016), 789–819.

[5] L. Babai, On Lovász’ lattice reduction and the nearest lattice point problem, Annual Symposium on Theoretical Aspects of Computer Science—STACS 85, Lecture Notes in Comput. Sci. 182, Springer, Berlin (1985), 13–20.

[6] S. Bai and S. D. Galbraith, Lattice decoding attacks on binary LWE, Information Security and Privacy—ACISP 2014, Lecture Notes in Comput. Sci. 8544, Springer, Berlin (2014), 322–337.

[7] D. J. Bernstein, J. Buchmann and E. Dahmen, Post-Quantum Cryptography, Springer, Berlin, 2009.

[8] D. J. Bernstein, C. Chuengsatiansup, T. Lange and C. van Vredendaal, NTRU Prime: Reducing attack surface at low cost, Selected Areas in Cryptography—SAC 2017, Lecture Notes in Comput. Sci. 10719, Springer, Cham (2018), 235–260.

[9] J. Buchmann, F. Göpfert, T. Güneysu, T. Oder and T. Pöppelmann, High-performance and lightweight lattice-based public-key encryption, Proceedings of the 2nd ACM International Workshop on IoT Privacy, Trust, and Security, ACM, New York (2016), 2–9.

[10] J. Buchmann, F. Göpfert, R. Player and T. Wunderer, On the hardness of LWE with binary error: Revisiting the hybrid lattice-reduction and meet-in-the-middle attack, Progress in Cryptology—AFRICACRYPT 2016, Lecture Notes in Comput. Sci. 9646, Springer, Cham (2016), 24–43.

[11] R. Canetti and J. A. Garay, Advances in Cryptology—CRYPTO 2013. Part I, Lecture Notes in Comput. Sci. 8042, Springer, Heidelberg, 2013.

[12] Y. Chen, Réduction de réseau et sécurité concrète du chiffrement complètement homomorphe, PhD thesis, Paris 7, 2013.

[13] Y. Chen and P. Q. Nguyen, BKZ 2.0: Better lattice security estimates, Advances in Cryptology—ASIACRYPT 2011, Lecture Notes in Comput. Sci. 7073, Springer, Heidelberg (2011), 1–20.

[14] L. Ducas, A. Durmus, T. Lepoint and V. Lyubashevsky, Lattice signatures and bimodal Gaussians, Advances in Cryptology—CRYPTO 2013. Part I, Lecture Notes in Comput. Sci. 8042, Springer, Heidelberg (2013), 40–56.

[15] M. Fischlin and J.-S. Coron, Advances in Cryptology—EUROCRYPT 2016. Part I, Lecture Notes in Comput. Sci. 9665, Springer, Berlin, 2016.

[16] T. Güneysu, V. Lyubashevsky and T. Pöppelmann, Practical lattice-based cryptography: A signature scheme for embedded systems, Cryptographic Hardware and Embedded Systems—CHES 2012, Lecture Notes in Comput. Sci. 7428, Springer, Berlin (2012), 530–547.

[17] P. S. Hirschhorn, J. Hoffstein, N. Howgrave-Graham and W. Whyte, Choosing NTRUEncrypt parameters in light of combined lattice reduction and MITM approaches, Applied Cryptography and Network Security—ACNS 2009, Lecture Notes in Comput. Sci. 5536, Springer, Berlin (2009), 437–455.

[18] J. Hoffstein, J. Pipher, J. M. Schanck, J. H. Silverman, W. Whyte and Z. Zhang, Choosing parameters for NTRUEncrypt, Topics in Cryptology—CT-RSA 2017, Lecture Notes in Comput. Sci. 10159, Springer, Cham (2017), 3–18.

[19] J. Hoffstein, J. Pipher and J. H. Silverman, NTRU: A ring-based public key cryptosystem, Algorithmic Number Theory (Portland 1998), Lecture Notes in Comput. Sci. 1423, Springer, Berlin (1998), 267–288.

[20] N. Howgrave-Graham, A hybrid lattice-reduction and meet-in-the-middle attack against NTRU, Advances in Cryptology—CRYPTO 2007, Lecture Notes in Comput. Sci. 4622, Springer, Berlin (2007), 150–169.

[21] N. Howgrave-Graham, A hybrid lattice-reduction and meet-in-the-middle attack against NTRU, Advances in Cryptology—CRYPTO 2007, Lecture Notes in Comput. Sci. 4622, Springer, Berlin (2007), 150–169.

[22] N. Howgrave-Graham, J. H. Silverman and W. Whyte, A meet-in-the-middle attack on an NTRU private key, https://www.securityinnovation.com/uploads/Crypto/NTRUTech004v2.pdf.

[23] A. K. Lenstra and E. R. Verheul, Selecting cryptographic key sizes, J. Cryptology 14 (2001), no. 4, 255–293. 10.1007/s00145-001-0009-4

[24] R. Lindner and C. Peikert, Better key sizes (and attacks) for LWE-based encryption, Topics in Cryptology—CT-RSA 2011, Lecture Notes in Comput. Sci. 6558, Springer, Heidelberg (2011), 319–339.

[25] V. Lyubashevsky, C. Peikert and O. Regev, On ideal lattices and learning with errors over rings, J. ACM 60 (2013), no. 6, Article ID 43.

[26] D. Micciancio and C. Peikert, Hardness of SIS and LWE with small parameters, Advances in Cryptology—CRYPTO 2013. Part I, Lecture Notes in Comput. Sci. 8042, Springer, Heidelberg (2013), 21–39.

[27] D. Micciancio and M. Walter, Practical, predictable lattice basis reduction, Advances in Cryptology—EUROCRYPT 2016. Part I, Lecture Notes in Comput. Sci. 9665, Springer, Berlin (2016), 820–849.

[28] F. W. J. Olver, NIST Handbook of Mathematical Functions, Cambridge University Press, Cambridge, 2010.

[29] O. Regev, On lattices, learning with errors, random linear codes, and cryptography, Proceedings of the 37th Annual ACM Symposium on Theory of Computing—STOC’05, ACM, New York (2005), 84–93.

[30] J. Schanck, Practical lattice cryptosystems: NTRUEncrypt and NTRUMLS, Master’s thesis, University of Waterloo, 2015.

[31] W. Stein, Sage Mathematics Software Version 7.5.1, The Sage Development Team, 2017, http://www.sagemath.org.

[32] C. van Vredendaal, Reduced memory meet-in-the-middle attack against the NTRU private key, LMS J. Comput. Math. 19 (2016), no. Suppl. A, 43–57. 10.1112/S1461157016000206

Received: 2016-07-28
Revised: 2018-08-28
Accepted: 2018-09-03
Published Online: 2018-10-05
Published in Print: 2019-03-01

© 2019 Walter de Gruyter GmbH, Berlin/Boston

This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.