Over the past decade, the hybrid lattice-reduction and meet-in-the-middle attack (called the hybrid attack) has been used to evaluate the security of many lattice-based cryptographic schemes such as NTRU, NTRU Prime, BLISS, and more. However, none of the previous analyses of the hybrid attack is entirely satisfactory: they are based on simplifying assumptions that may distort the security estimates. Such simplifying assumptions include setting probabilities equal to 1 which, for the parameter sets we analyze in this work, can in fact be very small. Many of these assumptions lead to underestimating the scheme’s security. However, some lead to security overestimates, and without further analysis, it is not clear which is the case. Therefore, the current security estimates against the hybrid attack are not reliable, and the actual security levels of many lattice-based schemes are unclear. In this work, we present an improved runtime analysis of the hybrid attack that is based on more reasonable assumptions. In addition, we reevaluate the security against the hybrid attack for the NTRU, NTRU Prime and R-BinLWEEnc encryption schemes as well as for the BLISS and GLP signature schemes. Our results show that there exist both security over- and underestimates in the literature.
In 2007, Howgrave-Graham proposed the hybrid lattice-reduction and meet-in-the-middle attack  (referred to as the hybrid attack in the following) against the NTRU encryption scheme . Several works [20, 21, 17, 18, 30] claim that the hybrid attack is by far the best known attack on NTRUEncrypt. In the following years, numerous cryptographers have applied the hybrid attack to their cryptographic schemes in order to estimate their security. These schemes include further variants of the NTRU encryption scheme [17, 18, 30], the recently proposed encryption scheme NTRU Prime , a lightweight encryption scheme based on Ring-LWE with binary error [10, 9], and the signature schemes BLISS  and GLP [16, 14]. However, the above analyses of the hybrid attack make use of over-simplifying assumptions, yielding unreliable security estimates of the schemes. Many of these assumptions lead to more conservative security estimates (lower than necessary), as they give more power to the attacker. While this is not a problem from a security perspective, in those cases, the schemes might be instantiated more efficiently while preserving the desired security level. On the other hand, there also exist more dangerous cases in which the security of a scheme is overestimated, i.e., the scheme is less secure than claimed, as we show in this work. In , Schanck summarizes the current state of the analyses of the hybrid attack as follows:
“[…] it should be noted that in the author’s opinion, no analysis of the hybrid attack presented thus far is entirely satisfactory. […] it is hoped that future work will answer some of the outstanding questions related to the attack’s probability of success as a function of the effort spent in lattice reduction and enumeration.”
One of the most common [21, 14, 18, 30, 8] over-simplifications is to assume that collisions in the meet-in-the-middle phase of the attack will always be detected. However, in reality, collisions can only be detected with some (possibly very low) probability. For instance, for the cryptographic schemes we analyze in this work, this probability is sometimes extremely low, highlighting the unrealistic nature of some simplifying assumptions made in previous analyses of the hybrid attack.
In this work, we provide a detailed and more satisfying analysis of the hybrid attack. This is achieved in the following way. We present a generalized version of the hybrid attack applied to shortest vector problems (SVP) and show how it can also be used to solve bounded distance decoding (BDD) problems. This general framework for the hybrid attack can naturally be applied to many lattice-based cryptographic constructions, as we also show in this work. We further provide a detailed and improved analysis of the generalized version of the hybrid attack, which can be used to derive updated security estimates. We offer two types of formulas for our security estimates – one that reflects the current state of the art and a more conservative one that reflects potential advances in cryptanalysis – giving a possible range for the security level. In our analysis of the attack, we reduce the number of underlying assumptions, eliminate the ones that are over-simplifying and clearly state the remaining ones in order to offer as much transparency as possible. We further provide some experimental results and comparisons to the literature to support the validity of the remaining assumptions on the involved success probabilities.
Our second main contribution is the following: Since previous analyses of the hybrid attack are unreliable, the security estimates of many lattice-based cryptographic schemes against the hybrid attack might be inaccurate, and their actual security level is unclear. We therefore apply our improved analysis to reevaluate the security of various cryptographic schemes against the hybrid attack in order to derive updated security estimates. We first revisit the security against the hybrid attack of the NTRU , NTRU Prime  and R-BinLWEEnc  encryption schemes, and end with the BLISS  and GLP  signature schemes. Our results show that there exist both security over- and underestimates against the hybrid attack across the literature.
This work is structured as follows. First, we fix notation and provide the necessary background in Section 2. In Section 3, we describe a generalized version of the hybrid attack on shortest vector problems (SVP) and further explain how it can also be used to solve bounded distance decoding (BDD) problems. Our runtime analysis of the hybrid attack is presented in Section 4. In Section 5, we apply our analysis of the hybrid attack to various cryptographic schemes in order to derive updated security estimates against the hybrid attack. We end this work by giving a conclusion and outlook for possible future work.
In this work, we write vectors in bold lowercase letters, e.g., , and matrices in bold uppercase letters, e.g., . Polynomials are written in normal lower case letters, e.g., a. We frequently identify polynomials with their coefficient vectors , indicated by using the corresponding bold letter. Let , be a polynomial of degree n and . We define the rotation matrix of a polynomial as . Then for , the matrix-vector product corresponds to the product of polynomials .
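The correspondence between the rotation matrix and polynomial multiplication can be sketched in Python. Purely for illustration, we assume here the NTRU-style ring Z[x]/(x^n − 1), in which multiplying by x cyclically shifts the coefficient vector; the exact ring used in a given scheme may differ.

```python
import numpy as np

def rot(a):
    """Rotation matrix of a polynomial a (given as a coefficient vector) in
    Z[x]/(x^n - 1): column i is the coefficient vector of a * x^i."""
    n = len(a)
    R = np.zeros((n, n), dtype=int)
    for i in range(n):
        R[:, i] = np.roll(a, i)  # multiplication by x^i is a cyclic shift
    return R

def polymul_mod(a, b):
    """Product of a and b in Z[x]/(x^n - 1), via full multiplication
    followed by folding x^n back to 1."""
    n = len(a)
    full = np.convolve(a, b)          # degree <= 2n - 2
    res = full[:n].copy()
    res[: len(full) - n] += full[n:]  # fold x^n -> 1
    return res
```

With these helpers, `rot(a) @ b` agrees with `polymul_mod(a, b)`, which is exactly the matrix-vector correspondence described above.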
We use the abbreviation for . We further write instead of for the Euclidean norm. For and with , the multinomial coefficient is defined as
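A small sketch of the multinomial coefficient, assuming the usual convention m!/(c_1!···c_k!(m − Σc_i)!), i.e., the number of ways to place the prescribed counts into m slots (the paper's exact definition may differ in notation):

```python
from math import factorial, prod

def multinomial(m, counts):
    """Multinomial coefficient m! / (c_1! * ... * c_k! * (m - sum c_i)!).
    counts is the list (c_1, ..., c_k) with sum(counts) <= m."""
    rest = m - sum(counts)
    assert rest >= 0, "counts must sum to at most m"
    return factorial(m) // prod(factorial(c) for c in counts) // factorial(rest)
```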
In this work, we use the following definition of lattices. A discrete additive subgroup of , for some , is called a lattice. Let m be a positive integer. For a set of vectors , the lattice spanned by is defined as
Let be a lattice. A set of vectors is called a basis of Λ if is -linearly independent and . Abusing notation, we identify lattice bases with matrices and vice versa by taking the basis vectors as the columns of the matrix. The number of vectors in a basis of a lattice is called the rank (or dimension) of the lattice. If the rank is maximal, i.e., , the lattice is called full-rank. In the following, we only consider full-rank lattices. Let q be a positive integer. The length of the shortest non-zero vector of a lattice Λ is denoted by . An integer lattice that contains is called a q-ary lattice. For a matrix , we define the q-ary lattice
For a lattice basis , its fundamental parallelepiped is defined as
The determinant of a lattice is defined as the m-dimensional volume of the fundamental parallelepiped of a basis of Λ. Note that the determinant of the lattice is well-defined, i.e., it is independent of the basis. The Hermite delta (or Hermite factor) δ of a lattice basis is defined via the equation . It provides a measure for the quality of the basis.
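The Hermite delta can be computed directly from a basis. The sketch below assumes the common convention that the first basis vector satisfies ||b_1|| = δ^m · det(Λ)^{1/m}, with basis vectors as matrix columns as in the text:

```python
import numpy as np

def hermite_delta(B):
    """Hermite delta of a full-rank basis B (columns are basis vectors),
    defined via ||b_1|| = delta^m * det(Lambda)^(1/m)."""
    m = B.shape[1]
    b1 = np.linalg.norm(B[:, 0])
    det = abs(np.linalg.det(B))
    return (b1 / det ** (1.0 / m)) ** (1.0 / m)
```

For the identity basis, the first vector already has the minimal possible length relative to the determinant, so the Hermite delta is 1; reduced bases of random lattices typically have delta slightly above 1.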
Lattice-based cryptography is based on the presumed hardness of computational problems in lattices. Two of the most important lattice problems are the following:
Given a lattice basis , the task is to find a shortest non-zero vector in the lattice .
Given , a lattice basis and a target vector with , the task is to find a vector with such that .
Babai’s nearest plane algorithm  (denoted by in the following) is an important building block of the hybrid attack. For more details on the algorithm, we refer to Babai’s original work  or Lindner and Peikert’s work . We use the nearest plane algorithm in a black box manner. For the reader, it is sufficient to know the following: The input for the nearest plane algorithm is a lattice basis and a target vector , and the corresponding output is a vector such that . We denote the output by . If there is no risk of confusion, we might omit the basis in the notation, writing instead of . The output of the nearest plane algorithm satisfies the following condition, as shown in .
Let be a lattice basis, and let be a target vector. Then is the unique vector that satisfies , where is the Gram–Schmidt basis of .
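For concreteness, a minimal (unoptimized) sketch of Babai's nearest plane algorithm matching the black-box description above: the output is a lattice vector whose difference from the target has all Gram–Schmidt coefficients in [−1/2, 1/2).

```python
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization of the columns of B (no normalization)."""
    B = B.astype(float)
    Bs = np.zeros_like(B)
    for i in range(B.shape[1]):
        Bs[:, i] = B[:, i]
        for j in range(i):
            mu = B[:, i] @ Bs[:, j] / (Bs[:, j] @ Bs[:, j])
            Bs[:, i] -= mu * Bs[:, j]
    return Bs

def nearest_plane(B, t):
    """Babai's nearest plane NP_B(t): returns a lattice vector v such that
    t - v lies in the fundamental parallelepiped of the Gram-Schmidt basis."""
    Bs = gram_schmidt(B)
    b = np.array(t, dtype=float)
    v = np.zeros(B.shape[0])
    for i in reversed(range(B.shape[1])):
        # round the coefficient along the i-th Gram-Schmidt direction
        c = int(np.rint(b @ Bs[:, i] / (Bs[:, i] @ Bs[:, i])))
        v = v + c * B[:, i]
        b = b - c * B[:, i]
    return v
```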
The lengths of the Gram–Schmidt vectors of a reduced basis can be estimated by the following heuristic (for more details, we refer to ).
Let be a reduced basis of some full-rank lattice with Hermite delta δ, and let D denote the determinant of . Further, let denote the corresponding Gram–Schmidt vectors of . Then the length of is approximately
While the GSA is widely relied upon in lattice-based cryptography (see, e.g., [2, 3, 4, 10, 13, 27, 20]), we emphasize that it does not offer precise estimates, in particular for the last indices of highly reduced bases, see, e.g., .
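Under the GSA, the Gram–Schmidt lengths of a reduced basis can be estimated as follows. We use one common formulation, ||b_i*|| ≈ δ^{m−2(i−1)} · D^{1/m} for i = 1, …, m; the exact constants vary slightly across the literature, and, as noted above, the estimate is least accurate for the last indices of highly reduced bases.

```python
def gsa_lengths(m, D, delta):
    """Estimated Gram-Schmidt lengths under the Geometric Series Assumption,
    in the common formulation ||b_i*|| ~ delta^(m - 2(i-1)) * D^(1/m).
    Returns the lengths for i = 1, ..., m (0-indexed internally)."""
    return [delta ** (m - 2 * i) * D ** (1.0 / m) for i in range(m)]
```

The first length equals δ^m · D^{1/m}, consistent with the definition of the Hermite delta, and the lengths decay geometrically.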
In this section, we present a generalized version of the hybrid attack to solve shortest vector problems. Our framework for the hybrid attack is the following: The task is to find the shortest vector in a lattice Λ, given a basis of Λ of the form
where is the meet-in-the-middle dimension, and . In Appendix A.1, we show that for q-ary lattices, where q is prime, one can always construct a basis of this form, provided that the determinant of the lattice is at most . Additionally, in Section 5, we show that our framework can be applied to many lattice-based cryptographic schemes.
The main idea of the attack is the following: Let be a short vector contained in the lattice Λ. We split the short vector into two parts with and . The second part represents the part of that is recovered by guessing (meet-in-the-middle) during the attack, while the first part is recovered with lattice techniques (solving BDD problems). Because of the special form of the basis , we have that
for some vector , hence . This means is close to the lattice , since it only differs from the lattice by the short vector , and therefore can be recovered by solving a BDD problem if is known. The idea now is that if we can correctly guess the vector , we can hope to find using the nearest plane algorithm (see Section 2) via , which succeeds if the basis is sufficiently reduced. Solving the BDD problem using the nearest plane algorithm is the lattice part of the attack. The lattice in which we need to solve BDD has the same determinant as the lattice in which we want to solve SVP, but it has smaller dimension, i.e., instead of m. Therefore, the newly obtained BDD problem is potentially easier to solve than the original SVP instance.
In the following, we explain how one can speed up the guessing part of the attack by Odlyzko’s meet-in-the-middle approach. Using this technique, one is able to reduce the number of necessary guesses to the square root of the number of guesses needed in a naive brute-force approach. Odlyzko’s meet-in-the-middle attack on NTRU was first described in  and applied in the hybrid lattice-reduction and meet-in-the-middle attack against NTRU in . The idea is that instead of guessing directly in a large set M of possible vectors, we guess sparser vectors and in a smaller set N of vectors such that . In our attack, the larger set M will be the set of all vectors with a fixed number of the non-zero entries equal to i for all , where . The smaller set N will be the set of all vectors with only half as many, i.e., only , of the non-zero entries equal to i for all . Assume that . First, we guess vectors and in the smaller set N. We then compute and . We hope that if , then also , i.e., that the nearest plane algorithm is additively homomorphic on those inputs. The probability that this additive property holds is one crucial element in the runtime analysis of the attack. We further need to detect when this property holds during the attack, i.e., we need to be able to recognize matching vectors and with and , which we call a collision. In order to do so, we store and in (hash) boxes whose addresses depend on and , respectively, such that they collide in at least one box. To define those addresses properly, note that in case of a collision, we have . Thus and differ only by a vector of infinity norm . Therefore, the addresses must be crafted such that for any and with , it holds that the intersection of the addresses of and is non-empty, i.e., . Furthermore, the set of addresses should not be unnecessarily large so the hash tables do not grow too big and unwanted collisions are unlikely to happen. The following definition satisfies these properties, as can easily be verified.
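The address sets just described can be sketched as follows. This is a hedged illustration of one common instantiation (per-coordinate sign bits, with both bits allowed for coordinates of absolute value at most y); Definition 3.1 may differ in detail, but this variant satisfies the required intersection property: whenever two vectors differ by a vector of infinity norm at most y, their address sets overlap, so storing each guess in all of its addressed boxes guarantees that collisions are detected.

```python
from itertools import product

def addresses(w, y):
    """Candidate address set of a vector w: bit i is 1 if w_i > y, 0 if
    w_i < -y, and both bits are allowed if |w_i| <= y.  If two vectors
    differ by at most y in every coordinate, no coordinate can be above y
    in one and below -y in the other, so their address sets intersect."""
    choices = []
    for x in w:
        if x > y:
            choices.append((1,))
        elif x < -y:
            choices.append((0,))
        else:
            choices.append((0, 1))   # ambiguous coordinate: both bits
    return set(product(*choices))
```

In the attack, each candidate vector is stored in a box for every address in its set, so a matching pair is found by a simple table lookup.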
Let . For a vector , the set is defined as
We illustrate Definition 3.1 with some examples.
Let be fixed. For varying bounds y and input vectors , we have
The hybrid attack on SVP without precomputation is presented in Algorithm 1. A list of the attack parameters and the parameters used in the runtime analysis of the attack and their meaning is given in Table 1. In order to increase the chance of Algorithm 1 being successful, one performs a basis reduction step as precomputation. Therefore, the complete hybrid attack, presented in Algorithm 2, is in fact a combination of a basis reduction step and Algorithm 1.
| Symbol | Meaning |
| --- | --- |
|  | lattice basis of the whole lattice |
|  | partially reduced lattice basis of the sublattice |
|  | number of i-entries guessed during attack |
| y | infinity norm bound on |
| k | infinity norm bound on |
| Y | expected Euclidean norm of |
|  | Gram–Schmidt lengths corresponding to |
|  | scaled Gram–Schmidt lengths corresponding to |
The hybrid attack can also be applied to BDD instead of SVP by rewriting a BDD instance into an SVP instance. This can be done in the following way (see for example ): Let be a lattice basis of the form
with , and let be a target vector for BDD. Suppose , where is the short (bounded) vector we are looking for. Then the short vector is contained in the lattice spanned by
which is of the required form for the hybrid attack on SVP. Therefore, we can apply the hybrid attack on SVP to find , solving the BDD problem. The SVP lattice has the same determinant as the BDD lattice and dimension instead of m. However, the additional dimension can be ignored, since we know the last entry of and therefore do not have to guess it during the meet-in-the-middle phase. Note that by definition of BDD, it is very likely that are the only short vectors in the lattice . By fixing the last coordinate to be plus one, only , not also , can be found by the attack.
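The rewriting step can be illustrated with a small embedding routine (a sketch; `embed_bdd` is our own helper name, and the embedding factor in the last coordinate is set to one, as in the text):

```python
import numpy as np

def embed_bdd(B, t):
    """Embed a BDD instance (column basis B, target t) into an SVP
    instance: the returned (m+1)-dimensional basis spans a lattice
    containing the short vector (e, 1), where t = v + e with v in L(B)."""
    m = B.shape[1]
    top = np.hstack([B, t.reshape(-1, 1)])          # columns (b_i, t)
    bottom = np.hstack([np.zeros((1, m)), np.ones((1, 1))])
    return np.vstack([top, bottom]).astype(B.dtype)
```

For example, with the identity basis and target t = v + e, combining the last column with integer multiples of the basis columns yields exactly (e, 1).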
In this section, we analyze the runtime of the hybrid attack. First, in Heuristic 1 in Section 4.1, we estimate the runtime of the attack in case sufficient success conditions are satisfied. In Section 4.2, we then show how to determine the probability that those sufficient conditions are satisfied, i.e., how to determine (a lower bound on) the success probability. We conclude the runtime analysis of the attack by showing how to optimize the attack parameters to minimize its runtime in Section 4.3. We end the section by highlighting our improvements over previous analyses of the hybrid attack, see Section 4.4.
We now present our main result about the runtime of the generalized hybrid attack. It shows that under sufficient conditions, the attack is successful and estimates the expected runtime.
Let all inputs be denoted as in Algorithm 1, , and let denote the lengths of the Gram–Schmidt basis vectors of the basis . Further, let denote the set of all non-zero lattice vectors , where and with , , , exactly entries of are equal to i for all , and . Assume that the set S is non-empty.
Then Algorithm 1 is successful, and the expected number of loops can be estimated by
denotes the Euler beta function , and
Furthermore, the expected number of operations of Algorithm 1 can be estimated by , where denotes the number of operations of one nearest plane call in the lattice of dimension .
In the following remark, we explain the meaning of the (attack) parameters that appear in Heuristic 1 in more detail.
The parameters r, y, k, are the attack parameters that can be chosen by the attacker. The meet-in-the-middle dimension and the remaining lattice dimension are determined by the parameter r. The remaining parameters must be chosen in such a way that the requirements of Heuristic 1 are likely to be fulfilled in order to obtain a high success probability of the attack. Choosing those parameters depends heavily on the distribution of the short vectors . In order to obtain more flexibility, this distribution is not specified in Heuristic 1. However, in Section 5, we show how one can choose the attack parameters and calculate the success probability for several distributions arising in various cryptographic schemes. At this point, we only want to remark that y should be (an upper bound on) , k (an upper bound on) and the (expected) number of entries of that is equal to i for .
The attacker can further influence the lengths of the Gram–Schmidt vectors by providing a different basis than with Gram–Schmidt lengths that lead to a more efficient attack. This is typically done by performing a basis reduction on or parts of as precomputation, see Algorithm 2. The lengths of the Gram–Schmidt vectors achieved by the basis reduction with Hermite delta δ are typically estimated by the GSA (see Section 2). However, for bases of q-ary lattices of a special form, the GSA may be modified. For further details, see Appendix A.2. Notice that spending more time on basis reduction increases the probability p in Heuristic 1, and the probability that the condition holds, as can be seen later in this section and Section 4.2.
Because of the previous remark, the complete attack – presented in Algorithm 2 – is actually a combination of precomputation (basis reduction) and Algorithm 1. Therefore, the runtime of both steps must be considered, and they have to be balanced in order to estimate the total runtime. In particular, the amount of precomputation must be chosen such that the precomputed basis offers the best trade-off between its quality with respect to the hybrid attack (i.e., amplifying the success probability and decreasing the number of operations) and the cost to compute this basis. We show how to optimize the total runtime in Section 4.3.
In the following, we show how Heuristic 1 can be derived. For the rest of this section, let all notations be as in Heuristic 1. We further assume in the following that the assumption of Heuristic 1, i.e., , is satisfied. We first provide the following useful definition already given in [20, 10]. We use the notation of .
Let . A vector is called -admissible (with respect to the basis ) for some vector if .
This means that if is -admissible, then and yield the same lattice vector. We recall the following lemma from  about Definition 4.2. It showcases the relevance of the definition by relating it to the equation , which must hold for our attack to work.
Let be two arbitrary target vectors. Then the following are equivalent.
We now estimate the expected number of loops in case Algorithm 1 terminates. In the following, we use the subscript for probabilities to indicate that the probability is taken over the randomness of the basis (with Gram–Schmidt length ). In each loop of the algorithm, we sample a vector in the set
The attack succeeds if and such that and
for some vector are sampled in different loops of the algorithm. By Lemma 4.3, the second condition is equivalent to the fact that is -admissible. We assume that the algorithm only succeeds in this case. We are therefore interested in the following subset of W:
For all with and , let denote the probability
and let denote the probability
By construction, we have that is constant for all , so we can simply write instead of . We make the following reasonable assumption on and .
For all with and , we assume that the independence condition
holds. We further assume that is equal to some constant probability p (as in Heuristic 1) for all .
Assuming independence of p and and disjoint events for the elements of S, we can make the following reasonable assumption (analogously to [20, Lemma 6 and Theorem 3]).
We assume that
The probability is calculated by
From Assumption 2, it follows that . As long as the product is not too small, we can therefore assume that .
We assume that .
Assumption 3 implies that the attack is successful, since by Lemma 4.3, if , then also for all . Two such vectors and in V will eventually be guessed in two separate loops of the algorithm, and they are recognized as a collision, since by the assumption of Heuristic 1, they share at least one common address. By Assumption 2, we expect that during the algorithm, we sample in V every loops, and by the birthday paradox, we expect to find a collision and with after loops. In conclusion, we can estimate the expected number of loops by
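The birthday-paradox step above can be checked numerically. The following Monte-Carlo sketch confirms the classical estimate that uniform sampling from a set of size N produces a first repetition after roughly sqrt(πN/2) draws (plus a small constant), which is the source of the square-root saving in the loop count.

```python
import math
import random

def expected_draws_to_collision(N, trials=2000, seed=1):
    """Monte-Carlo estimate of the expected number of uniform draws from a
    set of size N until the first repeated element (birthday paradox).
    The classical approximation is sqrt(pi * N / 2) plus a small constant."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        seen = set()
        while True:
            x = rng.randrange(N)
            total += 1           # count the colliding draw as well
            if x in seen:
                break
            seen.add(x)
    return total / trials
```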
It remains to calculate the probability p. This can be done analogously to [10, Heuristic 3] and the calculations following it. For a detailed and convincing justification of the heuristic and the intuition behind it, including a geometric intuition behind the -admissibility and its mathematical modeling, we refer to . Following the calculations of , we obtain the following assumption.
We assume that the probability p is approximately
where and are defined as in Heuristic 1.
The integrals in the above formula for p can, for instance, be calculated using SageMath . In order to calculate p, one needs to estimate the lengths , as discussed in Remark 4.1. In Appendix A.3, we provide the results of some preliminary experiments supporting the validity of Assumption 4.
We now estimate the expected total number of operations of the hybrid attack under the conditions of Heuristic 1. In order to do so, we need to estimate the runtime of one inner loop and multiply it by the expected number of loops. As in  and , we make the following assumption, which is plausible as long as the sets of addresses are not extremely large.
We assume that the number of operations of one inner loop of Algorithm 1 is dominated by the number of operations of one nearest plane call.
We remark that we see Assumption 5 as one of the more critical ones. Obviously, it does not hold for all parameter choices, but it is reasonable to believe that it holds for many relevant parameter sets, as claimed in  and . However, the claim in  is based on the observation that for random vectors in , it is highly unlikely that adding a binary vector will flip the sign of many coordinates (i.e., that a random vector in has many minus one coordinates). While this is true, the vectors in question are in fact not random vectors in but outputs of a nearest plane call, and thus potentially shorter than typical vectors in . Therefore, it can be expected that adding a binary vector will flip more signs. Additionally, in general, it is not only a binary vector that is added, but a vector of infinity norm y, which makes flipping signs even more likely. However, we believe that Assumption 5 is still plausible for most relevant parameter sets and small y, and even in the worst case the assumption leads to more conservative security estimates.
In , Hirschhorn et al. give an experimentally verified number of bit operations (defined as in ) of one nearest plane call and state a conservative assumption on the runtime of the nearest plane algorithm using precomputation. Based on their results, we use the following assumption for our security estimates. We provide two different kinds of security estimates, one which we call “standard” (std) and one which we call “conservative” (cons). The latter accounts for possible cryptanalytic improvements which are plausible but not yet known to be applicable.
Let be the lattice dimension. For our standard security estimates, we assume that the number of bit operations of one nearest plane call is approximately . For our conservative security estimates, we assume that the number of bit operations of one nearest plane call is approximately .
In Heuristic 1, it is guaranteed that Algorithm 1 is successful if the lattice Λ contains a non-empty set S of short vectors of the form , where and with , , , exactly entries of are equal to i for all , and . In order to determine a lower bound on the success probability, one must calculate the probability that the set S of such vectors is non-empty, since
However, this probability depends heavily on the distribution of the short vectors contained in Λ and is therefore not done in Heuristic 1, allowing for more flexibility. In consequence, this analysis must be performed for the specific distribution at hand, originating from the cryptographic scheme that is to be analyzed. The most involved part in calculating the success probability is typically calculating the probability that . As shown in , the probability is approximately
where are defined as in Heuristic 1.
In , Lindner and Peikert calculated the success probability of the nearest plane(s) algorithm for the case that the difference vector is drawn from a discrete Gaussian distribution with standard deviation σ. In our case, this would result in the formula
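Under a continuous Gaussian approximation, a Lindner–Peikert style success probability can be evaluated as follows (a sketch with our own helper name): the nearest plane algorithm succeeds if each Gram–Schmidt component of the error lands in the centered interval of length ||b_i*||, which for a Gaussian component of standard deviation σ happens with probability erf(||b_i*|| / (2√2 σ)).

```python
import math

def lp_success_probability(gs_lengths, sigma):
    """Lindner-Peikert style estimate of the nearest-plane success
    probability for a Gaussian error of standard deviation sigma:
    product over i of P(|e_i*| < ||b_i*|| / 2) for Gaussian components,
    i.e. product of erf(||b_i*|| / (2 * sqrt(2) * sigma))."""
    p = 1.0
    for r in gs_lengths:
        p *= math.erf(r / (2.0 * math.sqrt(2.0) * sigma))
    return p
```

As expected, the probability approaches 1 when the Gram–Schmidt lengths are large relative to σ, and it shrinks as σ grows.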
In the following, we compare the formulas (4.1) and (4.2) in the case of discrete Gaussian distributions with standard deviation σ. To this end, we evaluated both formulas for a lattice of dimension of determinant for different standard deviations. For formula (4.1), we assumed that the norm of is as expected, and that the basis follows the GSA with Hermite delta 1.008. The results, presented in Table 2, show that both formulas give virtually the same results for the analyzed instances. This indicates that formula (4.1) is a good generalization of the one provided in .
The final step in our analysis is to determine the runtime of the complete hybrid attack (Algorithm 2) including precomputation, which involves the runtime of the basis reduction , the runtime of the actual attack and the success probability . All these quantities depend on the attack parameter r and the quality of the basis given by the lengths of the Gram–Schmidt vectors achieved by the basis reduction performed in the precomputation step of the attack. The quality of the basis can be measured by its Hermite delta δ (see Section 2). In order to unfold the full potential of the attack, one must minimize the runtime over all possible attack parameters r and δ. For our standard security estimates, we assume that the total runtime (which is to be minimized) is given by
For our conservative security estimates, we assume that given a reduced basis with quality δ, it is significantly easier to find another reduced basis with same quality δ than it is to find one given an arbitrary non-reduced basis. We therefore assume that even if the attack is not successful and needs to be repeated, the large precomputation cost for the basis reduction only needs to be paid once, and hence
Estimating the necessary runtime for a basis reduction of quality δ is highly non-trivial and still an active research area, and precise estimates are hard to derive. For this reason, our framework is designed such that the cost model for basis reduction can be replaced by a different one while the rest of the analysis remains intact. Thus, if future research shows significant improvements in estimating the cost of basis reduction, these cost models can be applied in our framework. To illustrate our method, we fix two common approaches to estimate the cost of basis reduction. For our standard security estimates, we apply the following approach. We first determine the (minimal) block size β necessary to achieve the targeted Hermite delta δ via
according to Chen’s thesis  (see also, e.g., ). We then use the BKZ 2.0 simulator of the full version of  to determine the corresponding necessary number of rounds k. Finally, we use the estimate
provided in  to determine the (base-two) logarithm of the runtime, where n is the lattice dimension.
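The block-size determination can be sketched as follows, using the commonly cited approximation δ(β) ≈ (β/(2πe) · (πβ)^{1/β})^{1/(2(β−1))} from Chen's thesis; the helper names are ours, and the relation is only meaningful for moderately large β (it is decreasing in β in that range).

```python
import math

def delta_of_beta(beta):
    """Commonly used asymptotic relation between the BKZ block size beta
    and the achievable Hermite delta (Chen's approximation)."""
    base = beta / (2 * math.pi * math.e) * (math.pi * beta) ** (1.0 / beta)
    return base ** (1.0 / (2 * (beta - 1)))

def min_blocksize(target_delta, beta_max=1000):
    """Smallest block size beta >= 50 whose predicted Hermite delta
    reaches the target (delta_of_beta is decreasing in this range)."""
    for beta in range(50, beta_max):
        if delta_of_beta(beta) <= target_delta:
            return beta
    raise ValueError("target delta not reachable with beta < beta_max")
```

For instance, the predicted delta for β = 100 is close to the frequently quoted value of about 1.009.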
For the conservative security estimates, we assume that only one round of BKZ 2.0 with the determined block size β is needed. The reason for this assumption is that one can use progressive BKZ strategies to reduce the number of rounds needed with block size β by running BKZ with block sizes smaller than β in advance, see [12, 4]. Since BKZ with smaller block sizes is considerably cheaper, we do not include their cost in our conservative security estimates and assume that a single round with block size β suffices, giving
The optimization of the total runtime is performed in the following way. For each possible r, we find the optimal that minimizes the runtime . Consequently, the optimal runtime is given by , the smallest of those minimized runtimes. Note that for fixed r, the optimal for our conservative security estimates can easily be found in the following way. For fixed r, the function is monotonically decreasing in δ, and the function is monotonically increasing in δ. Therefore, is (close to) optimal when both those functions are balanced, i.e., take the same value. Thus the optimal can, for example, be found by a simple binary search.
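The balancing step can be sketched as a generic binary search over δ. This is a hedged illustration with placeholder cost functions: f stands for the (monotonically decreasing) attack cost including repetitions, g for the (monotonically increasing) basis reduction cost, and the crossing point is where their maximum is (close to) minimal.

```python
def balance(f, g, lo, hi, iters=60):
    """Binary search for the crossing point of a decreasing cost f and an
    increasing cost g on [lo, hi]; at the crossing, max(f, g) is (close to)
    minimal, which is the balancing condition described in the text."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) > g(mid):
            lo = mid   # crossing lies to the right
        else:
            hi = mid   # crossing lies to the left
    return (lo + hi) / 2
```

In the actual optimization, f and g would be the estimated attack and reduction runtimes as functions of δ for a fixed r, and the search is repeated for each candidate r.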
For our standard security estimates, we assume the function is monotonically decreasing in δ in the relevant range, hence the optimal can be found by balancing the functions and as above. Note that this assumption might not be true, but it surely leads to upper bounds on the optimal runtime of the attack.
We end this section by listing some typical over-simplifications which can be found in previous analyses of the hybrid attack. We remark that some simplifying assumptions lead to overestimating the security of the schemes and others to underestimating it. In some analyses, both types occurred at the same time and somewhat magically almost canceled out each other's effects on the security estimates for some parameter sets.
One of the most frequently encountered simplifications that appeared in several works is the lack of a (correct) calculation of the probability p defined in Assumption 1. As can be seen in Heuristic 1, this probability plays a crucial role in the runtime analysis of the attack. Nevertheless, in several works [21, 14, 18, 30, 8], the authors ignore the presence of this probability by setting p = 1 for the sake of simplicity. However, even though we took the probability into account when optimizing the attack parameters, for the parameter sets we analyze in Section 5, the probability p was sometimes very low, see Table 4. Note that this incorrect assumption gives more power to the attacker, since it assumes that collisions can always be detected by the attacker, although this is not the case, resulting in security underestimates. We also remark that in some works, the probability p is not completely ignored but determined in a purely experimental way  or calculated using additional assumptions .
In most works [20, 21, 17, 14, 18, 30, 8], the authors demand a sufficiently good basis reduction such that the nearest plane algorithm must unveil the searched-for short vector (or at least with very high probability). To be more precise, [20, Lemma 1] is used to determine what sufficiently good exactly means. In our opinion, this demand is unrealistic, and instead we account for the probability of this event in the success probability, which reflects the attacker’s power in a more accurate way. In addition, we note that in most cases, [20, Lemma 1] is not applicable the way it is claimed in several works. We briefly sketch why this is the case. Often, [20, Lemma 1] is applied to determine the necessary quality of a reduced basis such that the nearest plane algorithm (on correct input) unveils a vector of infinity norm at most y. However, this lemma is only applicable if the basis matrix is in triangular form, which is not the case in general. Therefore, one needs to transform the basis with an orthogonal matrix in order to obtain a triangular basis. This basis however does not span the same lattice but an isomorphic one, which contains the transformed vector , but (in general) not the vector . While the transformation preserves the Euclidean norm of the vector , it does not preserve its infinity norm. Therefore, the lemma cannot be applied with the same infinity norm bound y, which is done in most works. In fact, in the worst case, the new infinity norm bound can be up to , where m is the lattice dimension. In consequence, one would have to apply [20, Lemma 1] with infinity norm bound instead of y in order to get a rigorous statement, which demands a much better basis reduction. This problem is already mentioned – but not solved – in . Note that the worst case, where (i) the vector has Euclidean norm , and (ii) all the weight of the transformed vector is on one coordinate such that is a tight bound on the infinity norm after transformation, is highly unlikely.
In the following, we give an example to illustrate the different success conditions for the nearest plane algorithm.
Let and . We consider the nearest plane algorithm on a BDD instance in a d-dimensional lattice Λ of determinant , where is a random binary vector. Naively applying [20, Lemma 1] with infinity norm bound 1 would suggest that a basis reduction of quality is sufficient to recover . Applying the cost model used for our conservative security estimates described in Section 4.3, this would take roughly operations. However, as described above, the lemma cannot be applied with that naive bound. Instead, using the worst case bound on the infinity norm and applying [20, Lemma 1] would lead to a basis reduction of quality , taking roughly operations to guarantee the success of the nearest plane algorithm. This shows the impracticality of this approach. Instead, taking the success probability of the nearest plane algorithm into account, as done in this work, one can achieve the following results. Assuming that the Euclidean norm of a random binary vector is roughly , one can balance the quality of the basis reduction and the success probability of the nearest plane algorithm to obtain the optimal trade-off , taking roughly operations, with a success probability of roughly .
In some works such as [20, 14], the optimization of the attack parameters is either incorrect or missing entirely, ignoring the fact that there is a trade-off between the time spent on the basis reduction and on the actual attack. As a result, one only obtains upper bounds on the estimated security level, but not precise estimates.
Further inaccuracies we encountered include the following:
In this section, we apply our improved analysis of the hybrid attack to various cryptographic schemes in order to reevaluate their security and derive updated security estimates. The section is structured as follows: Each scheme is analyzed in a separate subsection. We begin with subsections on the encryption schemes NTRU, NTRU Prime and R-BinLWEEnc, and end with subsections on the signature schemes BLISS and GLP. In each subsection, we first give a brief introduction to the scheme. We then apply the hybrid attack to the scheme and analyze its complexity according to Section 4. This analysis is performed with the following four steps:
Constructing the lattice. We first construct a lattice of the required form which contains the secret key as a short vector.
Determining the attack parameters. We find suitable attack parameters (depending on the meet-in-the-middle dimension r): the infinity norm bounds y and k, and an estimate of the Euclidean norm Y.
Determining the success probability. We determine the success probability of the attack according to Section 4.2.
Optimizing the runtime. We optimize the runtime of the attack for our standard and conservative security estimates according to Section 4.3.
We end each subsection by providing a table of updated security estimates against the hybrid attack obtained by our analysis. In the tables, we also provide the optimal attack parameters derived by our optimization process and the corresponding probability p with which collisions can be detected. For comparison, we further provide the security estimates of the previous works. In our runtime optimization of the attack, we optimized with a precision of up to one bit. As a result, there may not be one unique optimal attack parameter pair , and for the table, we simply pick one that minimizes the runtime (up to one bit precision).
The NTRU encryption system was officially introduced in  and is one of the most important lattice-based encryption schemes today due to its high efficiency. The hybrid attack was first developed to attack NTRU  and has been applied to various proposed parameter sets since [20, 21, 17, 18, 30]. In this work, we restrict our studies to the NTRU EESS # 1 parameter sets given in [18, Table 3].
The NTRU cryptosystem is defined over the ring , where , and N is prime. The parameters N and q are public. Furthermore, there exist public parameters . For the parameter sets considered in , the private key is a pair of polynomials , where g is a trinary polynomial with exactly ones and minus ones and invertible in with for some trinary polynomials with exactly one and minus one entries. The corresponding public key is , where . In the following, we assume that h and 3 are invertible in . We further identify polynomials with their coefficient vectors. We can recover the private key by finding the secret vector . Since , we have , and therefore it holds that
for some , where is the rotation matrix of . Hence, can be recovered by solving the BDD on input in the q-ary lattice
since . A similar way to recover the private key was already mentioned in . The lattice Λ has dimension and determinant . Since we take the BDD approach for the hybrid attack, we assume that only , not its rotations or additive inverse, can be found by the attack, see Section 3. Hence, we assume that the set S, as defined in Heuristic 1, consists of at most one element.
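The rotation-matrix identity underlying this construction can be checked with a small script. This is a sketch with illustrative parameters (not a real NTRU parameter set), using the convention that column i of the rotation matrix of h holds the coefficients of x^i · h in Z[x]/(x^N − 1):

```python
import numpy as np

def rot(h):
    """Rotation matrix of h in Z[x]/(x^N - 1): column i is x^i * h (circulant)."""
    return np.column_stack([np.roll(h, i) for i in range(len(h))])

def polymul_mod(f, g, N, q):
    """Coefficient vector of f * g mod (x^N - 1, q), computed naively."""
    c = [0] * N
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] = (c[(i + j) % N] + int(f[i]) * int(g[j])) % q
    return np.array(c)

q, N = 2048, 11                      # illustrative toy sizes
rng = np.random.default_rng(0)
h = rng.integers(0, q, size=N)       # public key polynomial
f = rng.integers(-1, 2, size=N)      # trinary secret

# multiplying by the rotation matrix agrees with multiplication in the ring,
# which is what makes the secret pair a short vector of the q-ary lattice
assert np.array_equal(rot(h) @ f % q, polymul_mod(h, f, N, q))
```

The same circulant structure is what allows the rotations of a key to be short lattice vectors as well, even though the BDD approach above uses only one of them.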
Let with and . Since is a trinary vector, we can set the infinity norm bound k on equal to one. In contrast, determining an infinity norm bound on the vector is not that trivial, since is not trinary but of product form. For a specific parameter set, this can either be done theoretically or experimentally. The same holds for estimating the Euclidean norm of . For our runtime estimates, we determined the expected Euclidean norm of experimentally and set the expected Euclidean norm of to
We set and to be equal to the expected number of minus one entries and one entries, respectively, in . For simplicity, we assume that and are integers in the following in order to avoid writing down the rounding operators.
The next step is to determine the success probability , i.e., the probability that has exactly entries equal to minus one, entries equal to one, and holds, where is as given in Heuristic 1. Assuming independence, the success probability is approximately
where is the probability that has exactly entries equal to minus one and entries equal to one, and is defined and calculated as in Section 4.2. Obviously, is given by
where and . As explained earlier, since we use the BDD approach of the hybrid attack, we assume that in case the attack is successful.
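The probability that the guessed coordinates carry a prescribed number of ones and minus ones follows a multivariate hypergeometric distribution. The following is a sketch in our own notation (not necessarily the paper's exact formula), with toy numbers:

```python
from math import comb

def p_split(m, r, d1, d2, c1, c2):
    """Probability that the r guessed coordinates of a random vector of
    dimension m with d1 ones and d2 minus ones (positions uniform) contain
    exactly c1 ones and c2 minus ones."""
    zeros, c0 = m - d1 - d2, r - c1 - c2
    if c0 < 0 or c0 > zeros or c1 > d1 or c2 > d2:
        return 0.0
    return comb(d1, c1) * comb(d2, c2) * comb(zeros, c0) / comb(m, r)

# sanity check: the probabilities sum to 1 over all admissible splits
m, r, d1, d2 = 40, 10, 8, 8
total = sum(p_split(m, r, d1, d2, c1, c2)
            for c1 in range(r + 1) for c2 in range(r + 1))
assert abs(total - 1.0) < 1e-12
```

In the analysis above, the counts c1 and c2 are then fixed to their (rounded) expected values r·d1/m and r·d2/m, which maximizes this probability.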
We determined the optimal attack parameters to estimate the minimal runtime of the hybrid attack for the NTRU EESS # 1 parameter sets given in [18, Table 3]. The results, including the optimal r, corresponding and resulting probability p that collisions can be found, are presented in Table 3. Our analysis shows that the security levels against the hybrid attack claimed in  are lower than the actual security levels for all parameter sets. In addition, our results show that for all of the analyzed parameter sets, the hybrid attack does not perform better than a purely combinatorial meet-in-the-middle search, see [18, Table 3]. Our results therefore disprove the common claim that the hybrid attack is necessarily the best attack on NTRU.
|Security cons/std in bits||145/162||165/182||249/267||335/354|
|In  cons/std||116/127||133/145||204/236||280/330|
|/ used in ||154/166||175/192||264/303||360/423|
The Streamlined NTRU Prime family of cryptosystems is parameterized by three integers , where n and q are odd primes. The base ring for Streamlined NTRU Prime is . The private key is (essentially) a pair of polynomials , where g is drawn uniformly at random from the set of all trinary polynomials, and f is drawn uniformly at random from the set of all trinary polynomials with exactly non-zero coefficients. The corresponding public key is . In the following, we identify polynomials with their coefficient vectors. As described in , the secret vector is contained in the q-ary lattice
where is the rotation matrix of h, since
for some . The determinant of the lattice Λ is given by , and its dimension is equal to . Note that in the case of Streamlined NTRU Prime, the rotations of a trinary polynomial are not necessarily trinary, but it is likely that some are. The authors of  conservatively assume that the maximum number of good rotations of that can be utilized by the attack is , which we also assume in the following. Counting their additive inverses leaves us with short vectors that can be found by the attack.
Let with and . Since is trinary, we can set the infinity norm bounds y and k equal to one. The expected Euclidean norm of is given by
We set equal to the expected number of one entries (or minus one entries, respectively) in . For simplicity, we assume that is an integer in the following.
Next, we determine the success probability , where S denotes the following subset of the lattice Λ:
where is as defined in Heuristic 1. We assume that S is a subset of all the rotations of that can be utilized by the attack and their additive inverses. In particular, we assume that S has at most elements. Note that if some vector is contained in S, then we also have . Assuming independence, the probability that is approximately given by
where and , and is defined and calculated as in Section 4.2. Assuming independence, all of the good rotations of are contained in S with probability as well. Therefore, the probability that we have at least one good rotation is approximately
Next, we estimate the size of the set S in the case , i.e., Algorithm 1 is successful. In that case, at least one rotation is contained in S. Then also its additive inverse is contained in S, hence . We can estimate the size of S in case of success to be
where is defined as above.
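Under the independence heuristic above, the per-vector probability combines into the overall success probability as follows. This is a sketch with placeholder values (p0 and the rotation count R are hypothetical):

```python
def success_with_rotations(p0, R):
    """Probability that at least one of R good rotations (additive inverses
    included) can be found, assuming each succeeds independently with
    probability p0."""
    return 1.0 - (1.0 - p0) ** R

p0 = 2.0 ** -20                      # hypothetical per-vector probability
for R in (1, 2, 8):
    print(R, success_with_rotations(p0, R))
```

For small p0, the success probability grows roughly linearly in R, which is why counting the good rotations and their inverses matters for NTRU Prime but not for the BDD-style attacks, where R = 1.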
We applied our new techniques to estimate the minimal runtimes for several NTRU Prime parameter sets proposed in [8, Appendix D]. Besides the “case study parameter set”, for our analysis, we picked one parameter set that offers the lowest bit security and one that offers the highest according to the analysis of . Our resulting security estimates are presented in Table 4. Our analysis shows that the authors of  underestimate the security of their scheme for all parameter sets we evaluated.
|Security cons/std in bits||197/211||258/273||346/363|
In , Buchmann et al. presented R-BinLWEEnc, a lightweight public key encryption scheme based on binary Ring-LWE. To determine the security of their scheme, the authors evaluate the hardness of binary LWE against the hybrid attack using the methodology of .
Let with and be a binary LWE instance with , and binary error . To obtain a more efficient attack, we first subtract the vector with non-zero and r zero entries from both sides of the equation to obtain a new LWE instance , where . This way, the expected norm of the first entries is reduced while the last r entries, which are guessed during the attack, remain unchanged. In the following, we only consider this transformed LWE instance with smaller error. Obviously, the vector is contained in the q-ary lattice
Note that constructing the lattice this way, we only need the error vector to be binary and not also the secret as in [9, 10]. The dimension of the lattice Λ is equal to , and with high probability, its determinant is , see, for example, . However, as we know the last component of , it does not need to be guessed, and we may hence ignore it for the hybrid attack and consider the lattice to be of dimension m.
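The error-centering transformation above can be sketched as follows (dimensions are illustrative choices of ours): subtracting 1/2 from the first m − r coordinates turns binary entries into ±1/2, halving the expected squared norm of the part handled by lattice reduction while leaving the guessed part untouched.

```python
import numpy as np

rng = np.random.default_rng(7)
m, r = 512, 100                                # illustrative dimensions
e = rng.integers(0, 2, size=m).astype(float)   # binary error vector

shift = np.concatenate([np.full(m - r, 0.5), np.zeros(r)])
e_new = e - shift                              # first m - r entries become +-1/2

assert np.all(np.abs(e_new[:m - r]) == 0.5)          # centered head
assert np.array_equal(e_new[m - r:], e[m - r:])      # guessed tail unchanged
# squared norm of the head is now exactly (m - r)/4, versus about (m - r)/2 before
print(np.sum(e_new[:m - r] ** 2), np.sum(e[:m - r] ** 2))
```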
Let with and . Then obviously, we have , so we set the infinity norm bounds . Since is a uniformly random vector in , the expected Euclidean norm of is
We set to be the expected number of and entries of . In the following, we assume that is an integer in order to not have to deal with rounding operators.
We can approximate the success probability by , where is the probability that has exactly entries equal to and entries equal to , and is defined as in Section 4.2. Using the fact that , we therefore obtain
We assume that if the attack is successful, then , where S is defined as in Heuristic 1, since and are assumed to be the only vectors that can be found by the attack.
We reevaluated the security of the R-BinLWEEnc parameter sets proposed in . Our security estimates, the optimal attack parameters r and , and the corresponding probability p are presented in Table 5. The original security estimates given in  are within the security range we determined.
|Security cons/std in bits||88/99||79/90||187/197|
The signature scheme BLISS, introduced in , is one of the most important lattice-based signature schemes. In the original paper, the authors considered the hybrid attack on their signature scheme for their security estimates; however, their analysis is rather vague.
In the BLISS signature scheme, the setup is the following: Let n be a power of two, such that holds, q a prime modulus with , and . The signing key is of the form , where , each with coefficients in and coefficients in , and the remaining coefficients equal to 0. The public key is essentially . We assume that a is invertible in , which is the case with very high probability. Hence, we obtain the equation , or equivalently . In the following, we identify polynomials with their coefficient vectors.
In order to recover the signing key, it is sufficient to find the vector . Similar to our previous analysis of NTRU in Section 5.1, we have that
for some , where is the rotation matrix of . Hence, can be recovered by solving the BDD on input in the q-ary lattice
since . The determinant of the lattice Λ is , and its dimension is equal to .
In the following, let with and . Since we are using the hybrid attack to solve a BDD problem, the rotations of cannot be utilized in the attack (or at least it is not known how), see Section 3. We therefore assume that is the only rotation useful in the attack, i.e., that the set of good rotations S contains at most . The first step is to determine proper bounds y on and k on and find suitable guessing parameters . By construction, we obviously have , thus we can set the infinity norm bounds . The expected Euclidean norm of is given by
We set equal to the expected number of i-entries in , i.e., and . For simplicity, we assume that and are integers in the following.
Next, we determine the success probability , which is the probability that and exactly entries of are equal to i for . The probability that exactly entries of the vector are equal to i for all is given by
where and . Assuming independence, the success probability is approximately given by
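The count probability here generalizes the trinary case to the five coefficient values of a BLISS key. The following is a hedged sketch with made-up class sizes (the real sizes are fixed by the parameter set), evaluating the multivariate hypergeometric probability at the expected counts, as done in the analysis:

```python
from math import comb

def p_counts(class_sizes, counts, r):
    """Multivariate hypergeometric: probability that r uniformly chosen
    coordinates contain exactly counts[i] entries of class i, where
    class_sizes[i] is the total number of entries of that class (here the
    classes would be the coefficient values -2, -1, 0, 1, 2)."""
    if sum(counts) != r or any(c > s for c, s in zip(counts, class_sizes)):
        return 0.0
    num = 1
    for s, c in zip(class_sizes, counts):
        num *= comb(s, c)
    return num / comb(sum(class_sizes), r)

# illustrative BLISS-like split (hypothetical sizes, not a real parameter set)
n, r = 64, 16
sizes = [8, 8, n - 24, 4, 4]              # counts of -2, -1, 0, 1, 2
expected = [round(s * r / n) for s in sizes]
print(p_counts(sizes, expected, r))
```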
We performed the optimization process for the BLISS parameter sets proposed in . The results are presented in Table 6. Besides the security levels against the hybrid attack, we provide the optimal attack parameters r and leading to a minimal runtime of the attack, as well as the probability p. Our results show that the security estimates for the BLISS-I, BLISS-II, and BLISS-III parameter sets given in  are within the range of security we determined, whereas the BLISS-IV parameter set is less secure than originally claimed. In addition, the authors of  claim that there are at least 17 bits of security margin built into their security estimates, which is incorrect for all parameter sets according to our analysis.
|Security cons/std in bits||124/139||124/139||152/170||160/182|
|r used in ||194||194||183||201|
The GLP signature scheme was introduced in . In the original work, the authors did not consider the hybrid attack when deriving their security estimates. Later, in , the hybrid attack was also applied to the GLP-I parameter set. The GLP-II parameter set has not been analyzed regarding the hybrid attack so far.
For the GLP signature scheme the setup is the following: Let n be a power of two, q a prime modulus with , and . The signing key is of the form , where and are sampled uniformly at random among all polynomials of with coefficients in . The corresponding public key is then of the form , where a is drawn uniformly at random in . So we know that . Identifying polynomials with their coefficient vectors, we therefore have that
where , and is the rotation matrix of . Because of how the lattice is constructed, we do not assume that rotations of can be utilized by the attack. Therefore, with very high probability, and are the only non-zero trinary vectors contained in Λ, which we assume in the following. Since q is prime and has full rank, we have that , see for example . In Appendix A.1, we show how to construct a basis of the form
for the q-ary lattice Λ.
Ignoring the first -1 coordinate, the short vector is drawn uniformly from . Let with and . Then obviously, and hold; so we can set the infinity norm bounds y and k equal to one. The expected Euclidean norm of is approximately
We set to be the expected number of ones and minus ones. For simplicity, we assume that is an integer in the following.
The success probability of the attack is approximately , where is the probability that has exactly minus one entries and one entries, and is defined as in Section 4.2. Calculating yields
As previously mentioned, we assume that if the attack is successful, then .
We performed the optimization for the GLP parameter sets proposed in . The results, including the optimal attack parameters r and and the probability p, are shown in Table 7. The security level of the GLP-I parameter set claimed in  is within the range of security we determined. In , the authors did not analyze the hybrid attack for the GLP-II parameter set. Güneysu et al.  claimed a security level of at least 256 bits (not considering the hybrid attack) for the GLP-II parameter set, whereas we show that it offers at most 233 bits of security against the hybrid attack.
In this work, we described a general version of the hybrid attack and presented improved techniques to analyze its runtime. We further reevaluated various cryptographic schemes regarding their security against the hybrid attack. Our analysis shows that several of the old security estimates of previous works were in fact unreliable. By updating these unreliable estimates, we contributed to the trustworthiness of security estimates of lattice-based cryptography.
For future work, we hope that more provable statements about the practicality of the hybrid attack can be derived. For instance, our results show that the hybrid attack is not the best known attack on all NTRU instances as previously thought. It would be interesting to prove that under certain conditions on the key structure, the hybrid attack is always outperformed by some other attack. Another possible line of future work is applying the hybrid attack to a broader range of cryptographic schemes than already done in this work. Furthermore, analyzing the memory requirements of the hybrid attack was out of the scope of this work. In future research, the memory requirements can be analyzed and reduced (as for example done in  for binary NTRU keys). If required, the memory consumption can then be taken into account when optimizing the attack parameters for the hybrid attack.
Funding source: Deutsche Forschungsgemeinschaft
Award Identifier / Grant number: CRC 1119
In the following lemma, we show that for q-ary lattices, where q is prime, there always exists a basis of the form required for the attack. The size of the identity in the bottom right corner of the basis depends on the determinant of the lattice. In the proof, we also show how to construct such a basis.
Let q be prime, , and let be a q-ary lattice.
There exists some such that .
Let . Then there is a matrix of rank (over ) such that .
Let and with be a matrix of rank (over ) such that . If is invertible over , then the columns of the matrix
form a basis of the lattice Λ.
(i) Obviously, , since , and therefore is some non-negative power of q, because q is prime.
(ii) We have , and therefore
Let be some lattice basis of Λ. Since is in one-to-one correspondence to the -vector space spanned by , this vector space has to be of dimension , and therefore has rank over . This implies that there is some matrix consisting of columns of such that .
(iii) By assumption, is invertible, and thus we have
Therefore, the columns of the matrix
form a generating set of the lattice Λ, which can be reduced to the basis . ∎
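The construction from the proof can be sketched numerically. This is our own small example (toy sizes, with sympy used for the modular matrix inverse), assuming the bottom block of the generator matrix is the invertible one; it checks that the resulting columns lie in the lattice and that the determinant matches.

```python
import numpy as np
from sympy import Matrix

q, m, n = 97, 6, 3                    # toy sizes; det of the lattice is q^(m-n)
rng = np.random.default_rng(3)
while True:                           # resample until the bottom block is invertible mod q
    A = rng.integers(0, q, size=(m, n))
    if Matrix(A[m - n:, :].tolist()).det() % q != 0:
        break

A2inv = np.array(Matrix(A[m - n:, :].tolist()).inv_mod(q).tolist(), dtype=int)
H = (A[:m - n, :] @ A2inv) % q        # top block after normalizing A to [[H], [I]]

# candidate basis (columns) of the claimed shape with identity bottom right
B = np.block([[q * np.eye(m - n, dtype=int), H],
              [np.zeros((n, m - n), dtype=int), np.eye(n, dtype=int)]])

# every basis column lies in L = { v : v = A z mod q }: the bottom n rows of
# a column determine z, and the whole column must be congruent to A z mod q
for j in range(m):
    z = (A2inv @ B[m - n:, j]) % q
    assert np.array_equal(B[:, j] % q, (A @ z) % q)

# the determinant of the basis matches det(L) = q^(m - n)
assert round(abs(np.linalg.det(B.astype(float)))) == q ** (m - n)
```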
Typically, the Gram–Schmidt lengths obtained after performing a basis reduction with quality δ can be approximated by the geometric series assumption (GSA), see Section 2. However, for q-ary lattices of the above form, this assumption has to be modified. This has already been considered and confirmed experimentally in previous works, see for example [20, 17, 18, 30]. In this work, however, we derive simple formulas predicting the quality of the reduction, and we therefore explain in more detail how these formulas are obtained. We begin by sketching the reason for modifying the GSA for q-ary lattices, given a lattice basis of the form
where . If the basis reduction is not strong enough, i.e., the Hermite delta is too large, the GSA predicts that the first Gram–Schmidt vectors of the reduced basis have norm bigger than q. However, in practice, this will not happen, since in this case, the first vectors will simply not be reduced. This means that instead of reducing the whole basis , one can just reduce the last vectors that will actually be reduced. Let k denote the (so far unknown) number of the last vectors that are actually reduced (i.e., their corresponding Gram–Schmidt vectors according to the GSA have norm smaller than q). We assume that the basis reduction is sufficiently weak such that and sufficiently strong such that . We write in the form
for some and . Now, instead of , we only reduce to for some unimodular . This yields a reduced basis
of . The Gram–Schmidt basis of this new basis is given by
Therefore, the lengths of the Gram–Schmidt basis vectors are q for the first vectors and then equal to the lengths of the Gram–Schmidt basis vectors , which are smaller than q. In order to predict the lengths of , we can apply the GSA to the lengths of the Gram–Schmidt basis vectors , since they are actually reduced. What remains is to determine k. Assume we apply a basis reduction on that results in a reduced basis of Hermite delta δ. By our construction, we can assume that the first Gram–Schmidt basis vector of has norm roughly equal to q, so the GSA implies
Using the fact that and , we can solve for k and obtain
Summarizing, we expect that after the basis reduction, our Gram–Schmidt basis has lengths , where
and k is given as in equation (A.1).
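The resulting prediction can be written as a short function. This is a sketch of the formulas above in our own notation, using the determinant-consistent form of the GSA; it does not implement the additional modification needed when predicted lengths drop below 1, and the parameters in the example are illustrative.

```python
import math

def qary_gs_lengths(m, d, q, delta):
    """Predicted ln-lengths of the Gram-Schmidt vectors after reducing a
    q-ary basis of dimension m and determinant q^d to Hermite delta:
    the first m - k lengths stay at ln(q), the last k follow the GSA on the
    k-dimensional tail of determinant q^(d - (m - k))."""
    # delta^k * (q^(d - m + k))^(1/k) = q  <=>  k^2 ln(delta) = (m - d) ln(q)
    k = min(m, round(math.sqrt((m - d) * math.log(q) / math.log(delta))))
    det_tail = (d - (m - k)) * math.log(q)
    head = [math.log(q)] * (m - k)
    tail = [det_tail / k + (k - 1 - 2 * i) * math.log(delta) for i in range(k)]
    return head + tail

# toy profile: here k = 277, the first tail length sits just below ln(q),
# and the tail lengths multiply out to the tail determinant
prof = qary_gs_lengths(m=300, d=200, q=2048, delta=1.01)
assert len(prof) == 300
assert max(prof[23:]) < math.log(2048)
assert abs(sum(prof[23:]) - 177 * math.log(2048)) < 1e-6
```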
Note that it might also happen that the last Gram–Schmidt lengths are predicted to be smaller than 1. In this case, these last vectors will also not be reduced in reality, since the basis matrix has the identity in the bottom right corner. Therefore, in this case, the GSA may be further modified. However, for realistic attack parameters, this phenomenon never occurred during our runtime optimizations, and therefore we do not include it in our formulas and leave it to the reader to do the easy calculations if needed.
In the following, we provide the results of some preliminary experiments supporting the validity of the formula for the collision-finding probability p provided in Assumption 4. To this end, for , and , we created q-ary lattices that contain a short binary vector by embedding binary LWE instances into an SVP problem according to Section 5.3 (however, we did not shift the first components of the error vector by 0.5). For each n, we chose a random binary LWE instance with LWE parameters and created a basis of the corresponding SVP lattice of the form
with . We then BKZ-reduced the upper-left part of the basis with block size 20. Let with be the short binary vector in . For each n, we repeated the following experiment 10,000 times and recorded the number of success cases: Guess a vector with non-zero entries as in the hybrid attack and check if the vector is -admissible with respect to the basis by checking if . The experiments were performed using SageMath . For comparison, we calculated the probability p according to the formula provided in Assumption 4 using the actual Gram–Schmidt norms of the basis and the norm of to calculate the . The results are presented in Table 8. They suggest that for the analyzed instances, Assumption 4 is a good approximation of the actual probability.
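A minimal version of the success check used in such experiments can be sketched as follows. This is our own toy nearest-plane implementation over a two-dimensional basis, not the code used for the experiments: a guess counts as a success iff the nearest plane algorithm, run on the target shifted by the guess, recovers the lattice point hiding the remaining error.

```python
import numpy as np

def nearest_plane(B, t):
    """Babai's nearest plane algorithm for the basis given by the columns
    of B; returns the lattice vector it finds for target t."""
    Q, R = np.linalg.qr(B)            # Q holds the normalized Gram-Schmidt vectors
    t = np.array(t, dtype=float)
    v = np.zeros_like(t)
    for i in range(B.shape[1] - 1, -1, -1):
        # <t, b_i*> / ||b_i*||^2 equals <t, q_i> / R[i, i] (sign-safe)
        c = round(float(t @ Q[:, i]) / R[i, i])
        t -= c * B[:, i]
        v += c * B[:, i]
    return v

B = np.array([[3.0, 1.0], [0.0, 2.0]])
x = B @ np.array([2.0, -1.0])         # lattice point
e = np.array([0.3, -0.2])             # small error inside the fundamental domain
v = nearest_plane(B, x + e)
assert np.allclose(v, x)              # BDD solved: x + e - v equals the error e
```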
|p according to Assumption 4|
|p according to experiments|
We thank Florian Göpfert and John Schanck for helpful discussions and comments.
 M. R. Albrecht, R. Fitzpatrick and F. Göpfert, On the efficacy of solving LWE by reduction to unique-SVP, Information Security and Cryptology—ICISC 2013, Lecture Notes in Comput. Sci. 8565, Springer, Cham (2014), 293–310. Search in Google Scholar
 M. R. Albrecht, R. Player and S. Scott, On the concrete hardness of learning with errors, J. Math. Cryptol. 9 (2015), no. 3, 169–203. Search in Google Scholar
 E. Alkim, L. Ducas, T. Pöppelmann and P. Schwabe, Post-quantum key exchange - A new hope, Proceedings of the 25th USENIX Security Symposium, USENIX, Berkeley (2016), 327–343. Search in Google Scholar
 Y. Aono, Y. Wang, T. Hayashi and T. Takagi, Improved progressive BKZ algorithms and their precise cost estimation by sharp simulator, Advances in Cryptology—EUROCRYPT 2016. Part I, Lecture Notes in Comput. Sci. 9665, Springer, Berlin (2016), 789–819. Search in Google Scholar
 L. Babai, On Lovász’ lattice reduction and the nearest lattice point problem, Annual Symposium on Theoretical Aspects of Computer Science—STACS 85, Lecture Notes in Comput. Sci. 182, Springer, Berlin, (1985) 13–20. Search in Google Scholar
 S. Bai and S. D. Galbraith, Lattice decoding attacks on binary LWE, Information Security and Privacy—ACISP 2014, Lecture Notes in Comput. Sci. 8544, Springer, Berlin (2014), 322–337. Search in Google Scholar
 D. J. Bernstein, J. Buchmann and E. Dahmen, Post-Quantum Cryptography, Springer, Berlin, 2009. Search in Google Scholar
 D. J. Bernstein, C. Chuengsatiansup, T. Lange and C. van Vredendaal, NTRU prime: Reducing attack surface at low cost, Selected Areas in Cryptography—SAC 2017, Lecture Notes in Comput. Sci. 10719, Springer, Cham (2018), 235–260. Search in Google Scholar
 J. Buchmann, F. Göpfert, T. Güneysu, T. Oder and T. Pöppelmann, High-performance and lightweight lattice-based public-key encryption, Proceedings of the 2nd ACM International Workshop on IoT Privacy, Trust, and Security, ACM, New York (2016), 2–9. Search in Google Scholar
 J. Buchmann, F. Göpfert, R. Player and T. Wunderer, On the hardness of LWE with binary error: Revisiting the hybrid lattice-reduction and meet-in-the-middle attack, Progress in Cryptology—AFRICACRYPT 2016, Lecture Notes in Comput. Sci. 9646, Springer, Cham (2016), 24–43. Search in Google Scholar
 R. Canetti and J. A. Garay, Advances in Cryptology—CRYPTO 2013. Part I, Lecture Notes in Comput. Sci. 8042, Springer, Heidelberg, 2013. Search in Google Scholar
 Y. Chen, Réduction de réseau et sécurité concrete du chiffrement completement homomorphe, PhD thesis, Paris 7, 2013. Search in Google Scholar
 Y. Chen and P. Q. Nguyen, BKZ 2.0: Better lattice security estimates, Advances in Cryptology—ASIACRYPT 2011, Lecture Notes in Comput. Sci. 7073, Springer, Heidelberg (2011), 1–20. Search in Google Scholar
 L. Ducas, A. Durmus, T. Lepoint and V. Lyubashevsky, Lattice signatures and bimodal Gaussians, Advances in Cryptology—CRYPTO 2013. Part I, Lecture Notes in Comput. Sci. 8042, Springer, Heidelberg (2013), 40–56. Search in Google Scholar
 M. Fischlin and J.-S. Coron, Advances in Cryptology—EUROCRYPT 2016. Part I, Lecture Notes in Comput. Sci. 9665, Springer, Berlin, 2016. Search in Google Scholar
 T. Güneysu, V. Lyubashevsky and T. Pöppelmann, Practical lattice-based cryptography: A signature scheme for embedded systems, Cryptographic Hardware and Embedded Systems—CHES 2012, Lecture Notes in Comput. Sci. 7428, Springer, Berlin (2012), 530–547. Search in Google Scholar
 P. S. Hirschhorn, J. Hoffstein, N. Howgrave-Graham and W. Whyte, Choosing NTRUencrypt parameters in light of combined lattice reduction and MITM approaches, Applied Cryptography and Network Security—ACNS 2009, Lecture Notes in Comput. Sci. 5536, Springer, Berlin (2009), 437–455. Search in Google Scholar
 J. Hoffstein, J. Pipher, J. M. Schanck, J. H. Silverman, W. Whyte and Z. Zhang, Choosing parameters for NTRUEncrypt, Topics in Cryptology—CT-RSA 2017, Lecture Notes in Comput. Sci. 10159, Springer, Cham (2017), 3–18. Search in Google Scholar
 J. Hoffstein, J. Pipher and J. H. Silverman, NTRU: A ring-based public key cryptosystem, Algorithmic Number Theory (Portland 1998), Lecture Notes in Comput. Sci. 1423, Springer, Berlin (1998), 267–288. Search in Google Scholar
 N. Howgrave-Graham, A hybrid lattice-reduction and meet-in-the-middle attack against NTRU, Advances in Cryptology—CRYPTO 2007, Lecture Notes in Comput. Sci. 4622, Springer, Berlin (2007), 150–169. Search in Google Scholar
 N. Howgrave-Graham, J. H. Silverman and W. Whyte, A meet-in-the-middle attack on an NTRU private key, https://www.securityinnovation.com/uploads/Crypto/NTRUTech004v2.pdf. Search in Google Scholar
 R. Lindner and C. Peikert, Better key sizes (and attacks) for LWE-based encryption, Topics in Cryptology—CT-RSA 2011, Lecture Notes in Comput. Sci. 6558, Springer, Heidelberg (2011), 319–339. Search in Google Scholar
 V. Lyubashevsky, C. Peikert and O. Regev, On ideal lattices and learning with errors over rings, J. ACM 60 (2013), no. 6, Article ID 43. Search in Google Scholar
 D. Micciancio and C. Peikert, Hardness of SIS and LWE with small parameters, Advances in Cryptology—CRYPTO 2013. Part I, Lecture Notes in Comput. Sci. 8042, Springer, Heidelberg (2013), 21–39. Search in Google Scholar
 D. Micciancio and M. Walter, Practical, predictable lattice basis reduction, Advances in Cryptology—EUROCRYPT 2016. Part I, Lecture Notes in Comput. Sci. 9665, Springer, Berlin (2016), 820–849. Search in Google Scholar
 F. W. J. Olver, NIST Handbook Of Mathematical Functions, Cambridge University Press, Cambridge, 2010. Search in Google Scholar
 O. Regev, On lattices, learning with errors, random linear codes, and cryptography, Proceedings of the 37th Annual ACM Symposium on Theory of Computing—STOC’05, ACM, New York (2005), 84–93. Search in Google Scholar
 J. Schanck, Practical lattice cryptosystems: NTRUEncrypt and NTRUMLS, Master’s thesis, University of Waterloo, 2015. Search in Google Scholar
© 2019 Walter de Gruyter GmbH, Berlin/Boston
This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.