Open Access. Published by De Gruyter, November 17, 2020. Licensed under CC BY 4.0.

Approximate Voronoi cells for lattices, revisited

  • Thijs Laarhoven

Abstract

We revisit the approximate Voronoi cells approach for solving the closest vector problem with preprocessing (CVPP) on high-dimensional lattices, and settle the open problem of Doulgerakis–Laarhoven–De Weger [PQCrypto, 2019] of determining exact asymptotics on the volume of these Voronoi cells under the Gaussian heuristic. As a result, we obtain improved upper bounds on the time complexity of the randomized iterative slicer when using less than 2^{0.076d+o(d)} memory, and we show how to obtain time–memory trade-offs even when using less than 2^{0.048d+o(d)} memory. We also settle the open problem of obtaining a continuous trade-off between the size of the advice and the query time complexity, as the time complexity with subexponential advice in our approach scales as d^{d/2+o(d)}, matching worst-case enumeration bounds, and achieving the same asymptotic scaling as average-case enumeration algorithms for the closest vector problem.

MSC 2010: 11H06; 52B11; 52C07; 94A60

1 Introduction

Ever since the discovery of polynomial-time quantum attacks on widely deployed public-key cryptosystems [36], researchers have been looking for ways to construct cryptographic schemes whose security relies on problems which remain hard even when large-scale quantum computers become a reality [8, 14, 29]. A prominent class of potentially “post-quantum” cryptosystems [2, 33, 38] relies on the hardness of lattice problems, such as the shortest (SVP) and closest vector problems (CVP). Understanding their hardness is essential for an efficient and reliable deployment of lattice-based cryptographic schemes in practice.

Over time, the practical hardness of SVP and CVP has been quite well studied, with two classes of algorithms emerging as the most competitive: enumeration [5, 6, 15, 16, 19, 27], running in superexponential time 2^{Θ(d log d)} in the lattice dimension d (the main security parameter), using a negligible amount of space; and sieving [3, 4, 13, 18, 20, 26, 28], running in only exponential time 2^{Θ(d)} but also requiring an amount of memory scaling as 2^{Θ(d)}. The best asymptotic time complexities for enumeration (d^{d/(2e)+o(d)} for SVP, d^{d/2+o(d)} for CVP [17]) and sieving ((3/2)^{d/2+o(d)} for both SVP and CVP [7, 21]) have remained unchanged since 2007 and 2016 respectively, and recent work has mainly focused on decreasing second-order terms in the time and space complexities [4, 5, 13, 16, 22].

A close relative of CVP, the closest vector problem with preprocessing (CVPP), has received far less attention [1, 10, 24, 39] – from a practical point of view, only a few recent works have studied how preprocessing can be used to speed up CVP [12, 21]. Since a fast CVPP algorithm would imply faster lattice enumeration algorithms for SVP/CVP [12, 16, 21], faster approximate-SVP algorithms for ideal lattices [30, 39], and even faster isogeny-based cryptography [9], a better understanding of the hardness of CVPP is needed.

1.1 Approximate Voronoi cells

A natural approach for solving nearest-point queries on large data sets is to use Voronoi cells: partition the space into regions, where each cell contains all points closer to the point in this cell than to any other point in the data set. Micciancio–Voulgaris [25] proposed an algorithm for constructing the Voronoi cell 𝒱 of a lattice in time 2^{2d+o(d)} and space 2^{d+o(d)}, which can then be used to solve CVPP in time 2^{2d+o(d)}. Bonifas–Dadush [10] later improved the query time complexity to only 2^{d+o(d)}, but with the best heuristic algorithms for CVP running in time and space less than 2^{0.3d+o(d)}, using exact Voronoi cells seems impractical.

To make the Voronoi cells approach practical, Laarhoven [21] and Doulgerakis–Laarhoven–De Weger (DLW) [12] proposed constructing approximate Voronoi cells of the lattice, and using a randomized version of the iterative slicer algorithm of Sommer–Feder–Shalvi [37] for solving CVP queries. These cells 𝒱_L, defined by a list of lattice vectors L ⊂ 𝓛, can be seen as rough, low-memory approximations to the exact Voronoi cell 𝒱 – low-quality representations of the same object, which attempt to model the object as well as possible within the limited space available. These approximate representations are lossy, but are also smaller and easier to store (less memory) and faster to process (less time).
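To make the approach concrete, the following Python sketch illustrates the (randomized) iterative slicer on a toy lattice. This is our own illustrative code, not the implementation of [12] or [37]: function names and parameters (e.g. the rerandomization count `iters`) are ours, and the rerandomization over the coset t + 𝓛 is simplified to integer combinations of the basis rather than a discrete Gaussian.

```python
import numpy as np

def iterative_slice(t, L, eps=1e-12):
    """Iterative slicer (in the spirit of Sommer-Feder-Shalvi): greedily
    subtract list vectors from the target while this shortens it."""
    t = np.array(t, dtype=float)
    improved = True
    while improved:
        improved = False
        for v in L:
            v = np.asarray(v, dtype=float)
            if np.linalg.norm(t - v) < np.linalg.norm(t) - eps:
                t = t - v
                improved = True
    return t  # lies in the polytope V_L; equals the CVP error iff it lies in V

def randomized_slicer(B, t, L, iters=50, s=3, seed=42):
    """Rerandomize the target over the coset t + L(B), keep the shortest slice."""
    rng = np.random.default_rng(seed)
    best = iterative_slice(t, L)
    for _ in range(iters):
        shift = B.T @ rng.integers(-s, s + 1, size=B.shape[0])
        cand = iterative_slice(np.asarray(t, float) + shift, L)
        if np.linalg.norm(cand) < np.linalg.norm(best):
            best = cand
    return best  # t - best is then a (hopefully closest) lattice vector

# Toy example: the integer lattice Z^2, with its four relevant vectors as L.
L = [(1, 0), (-1, 0), (0, 1), (0, -1)]
t = np.array([2.4, -1.3])
err = randomized_slicer(np.eye(2), t, L)
print(t - err)  # the closest lattice vector to t, here [2, -1]
```

On ℤ² with the relevant vectors as the list L, the slicer recovers the closest lattice vector exactly, since there 𝒱_L equals the true Voronoi cell.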

For analyzing the performance of this approach, DLW conjectured a relation between the performance of the algorithm and how well 𝒱_L approximates 𝒱:

(1)  p = Pr(the iterative slicer, with input L, solves CVP) ≈? vol(𝒱)/vol(𝒱_L).

They then obtained upper bounds on the volume of 𝒱_L relative to 𝒱 by studying the success probability of the randomized slicer. An open problem from DLW was to better study the volumes of these approximate Voronoi cells, as this may lead to tighter bounds on their CVPP algorithm. Furthermore, the time–space trade-offs from DLW seemed somewhat unnatural – the query time complexity diverges when the memory is less than 2^{0.05d+o(d)} – and a second open problem was to obtain time complexities scaling as 2^{Θ(d)} for arbitrary memory complexities 2^{Ω(d)}.

1.2 Volumes of approximate Voronoi cells

In this paper we take a fundamental approach to studying the shape of approximate Voronoi cells. We model this problem as estimating the volume of the intersection of a large number of random half-spaces, and we solve the latter problem exactly for the main regimes of interest. In particular, without any heuristic assumptions, we prove the following result regarding the volume of a random polytope obtained by intersecting a large number of random half-spaces. Assuming that the distribution of lattice points inside a large ball can be approximated well by a uniform distribution over the ball, this then leads to a tight asymptotic estimate of the volume of approximate Voronoi cells.

Theorem 1.1

(Volume of approximate Voronoi cells). Let α > 1, and let L ⊂ 𝓛 ∖ {0} consist of the α^d shortest non-zero vectors of a lattice 𝓛. Then, assuming the Gaussian heuristic holds, with probability 1 − o(1) we have:

(2) (α ≤ √2)   vol(𝒱_L) = (α⁴ / (4α² − 4))^{d/2+o(d)} · vol(𝒱);
(3) (α ≥ √2)   vol(𝒱_L) = (1 + o(1))^{d+o(d)} · vol(𝒱).

Assuming [12, Heuristic assumption 1] holds (restated here as Heuristic 4.2), this result would then imply the exact asymptotic time and space complexities of the randomized slicer. However, under the same assumption, DLW derived the following asymptotic upper bound on the relative volume of approximate Voronoi cells, for α ∈ (1, √2):

(4)  vol(𝒱_L)/vol(𝒱) ≲? (16α⁴(α² − 1) / (−9α⁸ + 64α⁶ − 104α⁴ + 64α² − 16))^{d/2+o(d)}.

Looking closely, (2) in fact contradicts the above upper bound for α > √10/3 ≈ 1.054. The source of this contradiction can be found in [12, Heuristic assumption 1]: while this assumption states that the success probability p of the randomized slicer is exactly p = vol(𝒱)/vol(𝒱_L), the randomized slicer is in fact more likely to converge to short solutions than to long solutions: we may well have p ≫ vol(𝒱)/vol(𝒱_L), and the gap between both quantities may be exponentially large in d. A lower bound on p therefore does not necessarily translate to a lower bound on vol(𝒱)/vol(𝒱_L), or to an upper bound on its reciprocal.

1.3 Application to CVPP

Although (4) is incorrect as an upper bound on the volume of approximate Voronoi cells, on closer inspection we see that to bound the complexity of their algorithm, DLW in fact proved that p is at least the reciprocal of the RHS of (4): the bound on the volume of the approximate Voronoi cell was then only obtained through transitivity, by applying [12, Heuristic assumption 1]. Thus, letting p_α denote the success probability of the randomized slicer when using a list of the n = α^d shortest non-zero vectors in the lattice, we now have two heuristic lower bounds on p_α:

(5) (DLW)   p_α ≥ ((−9α⁸ + 64α⁶ − 104α⁴ + 64α² − 16) / (16α⁴(α² − 1)))^{d/2+o(d)};
(6) (ours)  p_α ≥ ((4α² − 4) / α⁴)^{d/2+o(d)}.

These bounds are both conditional on the Gaussian heuristic, and the second result further holds conditional on p_α ≳ vol(𝒱)/vol(𝒱_L). By applying similar techniques from [12], we obtain the following CVPP complexities, where δ = √(α² − 1)/α.
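The two bounds are easy to compare numerically. The sketch below (our own check; function names are ours) evaluates the bases of (5) and (6) and confirms that they coincide exactly at α = √10/3 ≈ 1.054, i.e. at list size n = α^d = (10/9)^{d/2} ≈ 2^{0.076d}, the crossover point visible in Figure 1:

```python
import math

def dlw_base(a):
    """Base b of DLW's heuristic bound (5): p_alpha >= b^{d/2 + o(d)}."""
    num = -9 * a**8 + 64 * a**6 - 104 * a**4 + 64 * a**2 - 16
    return num / (16 * a**4 * (a**2 - 1))

def new_base(a):
    """Base of the new bound (6): p_alpha >= ((4a^2 - 4) / a^4)^{d/2 + o(d)}."""
    return (4 * a**2 - 4) / a**4

crossover = math.sqrt(10) / 3  # alpha = sqrt(10)/3 ~ 1.054
print(dlw_base(crossover), new_base(crossover))  # both equal 9/25 = 0.36
print(math.log2(crossover))  # ~0.076: the memory exponent from Figure 1
```

Below the crossover the new bound (6) is the stronger one; above it, DLW's bound (5) takes over, consistent with the improvement regime stated in the abstract.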

Theorem 1.2

(CVPP complexities). Let α ∈ (1, √2) and u ∈ (δ, 1/δ). Then we can heuristically solve CVPP with query space and time S and T, where:

(7)  S = (α / (α − (α² − 1)(αu² − 2u√(α² − 1) + α)))^{d/2+o(d)},
(8)  T = ((α⁴ / (4α² − 4)) · (α + u√(α² − 1)) / (−α³ + α²u√(α² − 1) + 2α))^{d/2+o(d)}.

The best query complexities (S, T) together form the blue curve in Figure 1.
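As a sanity check on these expressions (our own script, evaluating (7)–(8) exactly as transcribed above), one can verify the endpoint behavior of the trade-off: at u = δ the space collapses to S = (α²)^{d/2} = α^d = n, i.e. no memory beyond the list itself, while S diverges as u → 1/δ:

```python
import math

def space_base(a, u):
    """Base b with S = b^{d/2 + o(d)} in Eq. (7), as transcribed above."""
    s = math.sqrt(a * a - 1)
    return a / (a - (a * a - 1) * (a * u * u - 2 * u * s + a))

def time_base(a, u):
    """Base b with T = b^{d/2 + o(d)} in Eq. (8), as transcribed above."""
    s = math.sqrt(a * a - 1)
    return (a**4 / (4 * a * a - 4)) * (a + u * s) / (-a**3 + a * a * u * s + 2 * a)

a = 1.3
delta = math.sqrt(a * a - 1) / a
print(space_base(a, delta))             # a^2 = 1.69: S = (a^2)^{d/2} = a^d = n
print(space_base(a, 1 / delta - 1e-6))  # huge: S diverges as u -> 1/delta
print(time_base(a, delta))              # > 1: finite time at minimal memory
```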

Figure 1: Query complexities for solving CVPP. The labeled curves and points correspond to the papers [7, 12, 18, 21]. Our new upper bound on the query time complexity improves upon DLW when using less than (10/9)^{d/2+o(d)} ≈ 2^{0.076d+o(d)} memory. Note that, whereas the red DLW curve diverges as the memory approaches the dashed asymptote 2^{0.048d+o(d)} from above, our trade-offs heuristically continue all the way to the regime of subexponential memory.

As we can see in Figure 1, for the low-memory regime of less than 2^{0.076d+o(d)} memory, we obtain strictly better query time complexities than [12]. The trade-offs from [12] were further limited to the regime of at least 2^{0.048d+o(d)} memory, whereas Theorem 1.2 describes a continuous trade-off between the query time and space complexities: for arbitrary memory complexities 2^{εd+o(d)} with ε > 0, we obtain a query time complexity 2^{Θ(d)}. Extending Theorem 1.2 to the regime of α = 1 + o(1), we obtain the following result.

Corollary 1.3

(Polynomial advice for CVPP). Using d^{Θ(1)} memory, we can heuristically solve CVPP in time d^{d/2+o(d)}.

This matches the asymptotic worst-case time complexity for solving CVP with enumeration of Hanrot–Stehlé [17], and with an average-case scaling for enumeration of d^{d/(2e)+o(d)}, this is only off by a factor 1/e in the exponent compared to practical enumeration methods. We further see that if we use a preprocessed list of size e.g. 2^{Θ(d^γ)} for constant γ ∈ (0, 1), we heuristically obtain a CVPP time complexity scaling as 2^{(1/2)(1−γ)d log₂ d + o(d log d)}.

Outline.

Section 2 first defines notation and preliminary results. Section 3 studies the volume of intersections of random halfspaces. Section 4 describes the application of these results to solving CVPP and the resulting trade-offs. The appendices describe further details on prior work, to make the paper self-contained.

2 Preliminaries

Given a set B = {b₁, …, b_d} ⊂ ℝ^d of linearly independent vectors, we define 𝓛 = 𝓛(B) := {∑_{i=1}^d λᵢbᵢ : λ ∈ ℤ^d} as the lattice generated by B. We write ‖·‖ for the Euclidean norm. Given a basis of a lattice and a target vector t ∈ ℝ^d, the closest vector problem (CVP) is to find the vector v ∈ 𝓛 closest to t. In the preprocessing version (CVPP), the problem is split into two parts: the preprocessing phase (without knowing t) and the query phase (with knowledge of t). For CVPP, the task is to do preprocessing such that CVP queries can then be answered more efficiently than when solving CVP directly.

Let us define some basic high-dimensional objects below, where v ∈ ℝ^d.

(9)  (unit sphere)       𝒮 := {x ∈ ℝ^d : ‖x‖ = 1},
(10) (unit ball)         𝒝 := {x ∈ ℝ^d : ‖x‖ ≤ 1},
(11) (half-space)        ℋ_v := {x ∈ ℝ^d : ‖x‖ ≤ ‖x − v‖},
(12) (convex polytope)   𝒱_L := ∩_{v∈L} ℋ_v,   (0 ∉ L)
(13) (spherical cap)     𝒞_v := ℋ̄_v ∩ 𝒝,
(14) (Voronoi cell)      𝒱 := 𝒱_{𝓛∖{0}}.

We further define the complements ℋ̄_v := ℝ^d ∖ ℋ_v and 𝒱̄_L := ℝ^d ∖ 𝒱_L in ℝ^d, and 𝒞̄_v := 𝒝 ∖ 𝒞_v on the ball. Note that the definition of the polytope 𝒱_L is generic, and the list L need not come from a lattice. 𝒱_L may further be unbounded (and its volume may be infinite), although for sufficiently large, randomly chosen lists L it will usually be finite. For L ⊂ 𝓛 ∖ {0} the polytope 𝒱_L defines an approximate Voronoi cell of the lattice 𝓛 [12], satisfying 𝒱 ⊆ 𝒱_L, with equality iff ℛ ⊆ L, where ℛ is the set of relevant vectors of the lattice [25].
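In code, membership in 𝒱_L is just a conjunction of half-space tests from (11)–(12); note that ‖x‖ ≤ ‖x − v‖ is equivalent to the linear constraint 2⟨x, v⟩ ≤ ‖v‖². A minimal sketch (our own, for illustration):

```python
import numpy as np

def in_polytope(x, L, eps=1e-12):
    """Test x in V_L via (11)-(12): ||x|| <= ||x - v|| for every v in L,
    equivalently 2<x, v> <= ||v||^2 for every half-space H_v."""
    x = np.asarray(x, dtype=float)
    return all(2 * np.dot(x, v) <= np.dot(v, v) + eps
               for v in (np.asarray(w, float) for w in L))

# For L the four relevant vectors of Z^2, V_L is the exact Voronoi cell [-1/2, 1/2]^2.
R = [(1, 0), (-1, 0), (0, 1), (0, -1)]
print(in_polytope([0.4, 0.4], R), in_polytope([0.6, 0.0], R))  # True False
```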

To analyze volumes of intersections on the ball, we will use the following asymptotic formula [34, Equation (28)], where α = ½‖v‖ ∈ (0, 1):

(15)  C(α) := vol(𝒞_v)/vol(𝒝) ∼ √((1 − α²)/(2πα²d)) · (1 − α²)^{d/2}.   (d → ∞)

For constant α ∈ (0, 1) and large d, Equation (15) can alternatively be written as C(α) = O((1 − α²)^{d/2}/√d) = (1 − α²)^{d/2+o(d)}.
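Formula (15) is easy to test empirically. The sketch below (our own check, not from the paper) samples uniform points from the ball in dimension d = 30 and compares the measured cap fraction against the asymptotic formula; as (15) is only asymptotic in d, agreement up to a moderate constant factor is the most one should expect here:

```python
import math
import numpy as np

def cap_ratio_asymptotic(alpha, d):
    """The asymptotic cap-to-ball volume ratio C(alpha) from Eq. (15)."""
    return math.sqrt((1 - alpha**2) / (2 * math.pi * alpha**2 * d)) \
        * (1 - alpha**2)**(d / 2)

def cap_ratio_mc(alpha, d, n=200_000, seed=0):
    """Monte Carlo estimate: fraction of uniform ball points x with x_1 >= alpha."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)  # uniform on the sphere
    x *= rng.uniform(0, 1, (n, 1)) ** (1 / d)      # radial push: uniform in the ball
    return float((x[:, 0] >= alpha).mean())

est, ref = cap_ratio_mc(0.3, 30), cap_ratio_asymptotic(0.3, 30)
print(est, ref)  # both around 0.05; their ratio tends to 1 as d grows
```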

Finally, the Gaussian heuristic states that for sufficiently smooth and random regions 𝒦 ⊂ ℝ^d, the number of lattice points inside 𝒦 scales as vol(𝒦)/vol(𝒱).

3 Volumes of random polytopes

To study the asymptotic behavior of volumes of approximate Voronoi cells, we will first study the more fundamental problem of estimating the volume of polytopes 𝒱L defined as the intersection of a large number of random half-spaces. We will study two specific cases for the list L below:

  1. Uniformly random points from the (unit) sphere;

  2. Uniformly random points from the (unit) ball.

The volume of such random polytopes has been previously studied in e.g. [31, 32, 35, 40, 41], and in particular the case of points from the sphere was analyzed in [31]. For the application to approximate Voronoi cells we need bounds for the case when points are drawn uniformly at random from a ball, which to the best of our knowledge has not been explicitly studied before. For completeness, and to illustrate how the analysis changes between the case of the sphere and the ball, we treat the case of random points from the unit sphere here as well.

3.1 Uniformly random points from the (unit) sphere

First, let us study the case where L is sampled uniformly at random from the unit sphere 𝒮. This setting was previously studied in [31, Section 3.2], but to extend the analysis to the case of the unit ball we explicitly analyze this problem here as well. Note that for L ⊂ 𝒮 we have the trivial lower bound vol(𝒱_L) ≥ 2^{−d}·vol(𝒝), as ½𝒝 ⊆ 𝒱_L. For a slightly less trivial upper bound, note that the polytope 𝒱_L is unbounded iff all points in L lie in a certain hemisphere. The probability of this event was computed by Wendel [42], which gives:

(16)  Pr_{L⊂𝒮}(vol(𝒱_L) < ∞) = 1 − 2^{−n+1} · ∑_{k=0}^{d−1} binom(n−1, k).

In particular, it is extremely unlikely that for lists of size n = ω(d), the corresponding polytopes are unbounded. For lists of exponential size, we obtain the following result, similar to [31, Theorem 3.9].
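Wendel's formula can be verified directly in dimension d = 2, where 𝒱_L is unbounded iff all points fall in a common half-plane, i.e. iff the largest angular gap between the sampled directions exceeds π. The following sketch (our own check) compares (16) with a Monte Carlo estimate:

```python
import math
import numpy as np

def wendel_bounded_prob(n, d):
    """Eq. (16): probability that V_L is bounded for n uniform points on the sphere."""
    return 1 - 2.0**(1 - n) * sum(math.comb(n - 1, k) for k in range(d))

def mc_bounded_prob_2d(n, trials=100_000, seed=1):
    """Monte Carlo for d = 2: V_L is unbounded iff all points fit in a half-plane,
    i.e. iff the largest angular gap between the points exceeds pi."""
    rng = np.random.default_rng(seed)
    ang = np.sort(rng.uniform(0, 2 * math.pi, size=(trials, n)), axis=1)
    gaps = np.hstack([np.diff(ang, axis=1),
                      ang[:, :1] + 2 * math.pi - ang[:, -1:]])
    return float((gaps.max(axis=1) <= math.pi).mean())

print(wendel_bounded_prob(4, 2))  # 0.5, for n = 4 points on a circle
print(mc_bounded_prob_2d(4))      # ~0.5
```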

Theorem 3.1

(Random points from the sphere). Let α > 1, and let L ⊂ 𝒮 consist of n = α^d uniformly random vectors from 𝒮. Then, with probability 1 − o(1) over the randomness of L, we have:

(17)  vol(𝒱_L) = (α² / (4α² − 4))^{d/2+o(d)} · vol(𝒝).

Proof. To prove Theorem 3.1, we will prove the following, equivalent statement:

(18)  vol(𝒱_L) = vol(r₀𝒝)^{1+o(1)},   r₀ = √(α² / (4α² − 4)).

Note that vol(r𝒝) = r^d·vol(𝒝) for arbitrary r > 0, hence the equivalence. Below we will further use the quantity 𝒱_L(r) := 𝒱_L ∩ r𝒝 as the intersection of the polytope with the ball of radius r > 0. Observe that for sufficiently small r ≪ r₀ we have 𝒱_L(r) = r𝒝, while for large r ≫ r₀ we have 𝒱_L(r) = 𝒱_L. The quantity r₀ is intuitively the radius r for which vol(𝒱_L(r)) ≈ vol(𝒱_L) ≈ vol(r𝒝).

First, some simple manipulations give:

(19)  𝒱_L(r) = ∩_{v∈L} ℋ_v ∩ (r𝒝) = ∩_{v∈L} (r𝒝 ∖ r𝒞_{v/r}) = r·(𝒝 ∖ ∪_{v∈L} 𝒞_{v/r}) =: r𝒦.

Note that the vectors v/r all have norm 1/r, and the spherical caps 𝒞_{v/r} thus all have the fixed cap parameter α = 1/(2r) in (15). To prove the lower bound on vol(𝒱_L), we will use elementary volume arguments to argue that vol(𝒦) ≈ vol(𝒝). For the upper bound, we clearly have vol(𝒦) ≤ vol(𝒝), and we will argue that with high probability over the randomness of L, vol(𝒱_L) ≈ vol(𝒱_L(r)).

Lower bound (≥): Ignoring spherical cap intersections, we have:

(20)  vol(𝒦) = vol(𝒝 ∖ ∪_{v∈L} 𝒞_{v/r}) ≥ vol(𝒝) − n·vol(𝒞_{v/r}) = vol(𝒝)·(1 − α^d·(1 − 1/(4r²))^{d/2+o(d)}).

For 1/α² = 1 − 1/(4r²) + o(1), or equivalently r = r₀ − o(1), we thus get vol(𝒦) ≥ (1 − o(1))·vol(𝒝).
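The algebra behind this choice of r₀ can be checked mechanically (our own snippet; the function name is ours):

```python
import math

def r0(a):
    """Critical radius r0 = sqrt(a^2 / (4 a^2 - 4)) from Eq. (18)."""
    return math.sqrt(a * a / (4 * a * a - 4))

# At r = r0 the cap parameter 1/(2 r0) satisfies 1 - 1/(4 r0^2) = 1/a^2, so that
# the subtracted term in (20), a^d * (1 - 1/(4 r^2))^{d/2}, is exactly borderline.
for a in (1.1, 1.5, 2.0, 5.0):
    assert abs((1 - 1 / (4 * r0(a)**2)) - 1 / a**2) < 1e-12
print(round(r0(1.5), 4), round(r0(100.0), 4))  # 0.6708 0.5 : r0 -> 1/2 as a grows
```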

Upper bound (≤): Clearly vol(𝒱_L(r)) = r^d·vol(𝒦) ≤ r^d·vol(𝒝); the difficulty lies in showing that vol(𝒱_L) ≈ vol(𝒱_L(r)). Note that when n is large, the spherical caps in (19) will cover (almost) the entire surface of 𝒝 – if e.g. only a fraction 2^{−Θ(d²)} of the sphere remains uncovered, then the parts of 𝒱_L extending beyond r𝒝 will contribute a negligible amount to the volume of 𝒱_L.

Given a point on 𝒮, the probability that it is not covered by any of the n spherical caps 𝒞_{v/r} is given by [1 − vol(𝒞_{v/r})/vol(𝒝)]^n. For n = vol(𝒝)/vol(𝒞_{v/r}) this can be upper bounded by 1/e, hence for n = 2d²·vol(𝒝)/vol(𝒞_{v/r}) the expected fraction of the sphere left uncovered is at most e^{−2d²}. By Markov’s inequality, the probability that more than a fraction e^{−d²} of the sphere remains uncovered is at most e^{−2d²+d²} = e^{−d²}, and so the upper bound follows. □

3.2 Uniformly random points from the (unit) ball

As sampling from 𝒝 and 𝒮 is similar in high-dimensional spaces (almost all the volume of the ball is concentrated near the surface of the sphere), in most cases the asymptotics for the unit sphere and the unit ball are the same. However, when n is very large, a significant number of vectors will have norm significantly less than 1, and these will then determine the shape of the resulting polytope.

The following main result shows that if n ≥ 2^{d/2}, then the volume of the Voronoi cell of 0 scales as vol(𝒝)/n. Note that 𝒱_L can be seen as the Voronoi cell of 0 in the data set L ∪ {0}, and for n ≥ 2^{d/2} the Voronoi cell of the 0-vector is therefore no larger than the Voronoi cells of the other n points in the ball – each of the points covers an equal fraction vol(𝒝)/n of the ball. For small n, the portion of the ball covered by 0 is an exponential factor larger than the average.

Theorem 3.2

(Random points from the unit ball). Let α > 1, and let L ⊂ 𝒝 consist of n = α^d uniformly random vectors from 𝒝. Then, with probability 1 − o(1) over the randomness of L, we have:

(21) (α ≤ √2)   vol(𝒱_L) = (α² / (4α² − 4))^{d/2+o(d)} · vol(𝒝);
(22) (α ≥ √2)   vol(𝒱_L) = (1/α²)^{d/2+o(d)} · vol(𝒝).

Proof. For γ < 1 close to 1, let us divide the set L into sets L_i = {v ∈ L : γ^i ≤ ‖v‖ ≤ γ^{i−1}} for i = 0, 1, …, i.e. we partition L according to a sequence of thin spherical shells. With high probability over the randomness of L, each of these lists L_i will contain (γ^i·α)^{d+o(d)} vectors. The original polytope can now equivalently be described as 𝒱_L = ∩_{i=0}^∞ 𝒱_{L_i}. To estimate the volume of 𝒱_L, note that by Theorem 3.1, each of these cells 𝒱_{L_i} is roughly shaped like a ball of a certain radius r_i. As a result, the volume of 𝒱_L is determined by the smallest radius min_{i∈ℕ} r_i of these balls, corresponding to one of the lists L_i.

To find the list defining the smallest polytope, recall that by applying Theorem 3.1 with n_i = (γ^i·α)^{d+o(d)} vectors on a sphere of radius γ^i, we have the following relation, where β = γ^{2i}:

(23)  vol(𝒱_{L_i}) = (α²γ^{2i} / (4α²γ^{2i} − 4))^{d/2+o(d)} · vol(γ^i𝒝) = (β²α² / (4βα² − 4))^{d/2+o(d)} · vol(𝒝) =: f(β)^{d/2+o(d)} · vol(𝒝).

To find the value β resulting in the smallest radius, note that the derivative of f satisfies f′(β) = −βα²(2 − βα²) / (4(βα² − 1)²), which is negative for β < 2/α², i.e. f(β) is decreasing in β there, and the volume of 𝒱_{L_i} increases with i. Now f′(β) = 0 has one solution at β = 2/α², which is attained by one of the lists L_i iff α ≥ √2. In the regime α < √2 the smallest radius is obtained for the first list L₀, resulting in the same bound as in Theorem 3.1, while for α ≥ √2 the non-trivial minimum lies at β = γ^{2i} ≈ 2/α², resulting in f(β) = 1/α² and vol(𝒱_L) = α^{−d+o(d)}·vol(𝒝). □
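The case analysis over β in this proof can be verified numerically (our own check of the function f from (23)):

```python
def f(beta, a):
    """f(beta) = beta^2 a^2 / (4 beta a^2 - 4), the base appearing in Eq. (23)."""
    return beta**2 * a**2 / (4 * beta * a**2 - 4)

# For a > sqrt(2), the stationary point beta* = 2/a^2 lies in (0, 1] and is a minimum:
a = 1.6
bstar = 2 / a**2
assert abs(f(bstar, a) - 1 / a**2) < 1e-12  # f(beta*) = 1/a^2, as in the proof
assert f(bstar - 0.05, a) > f(bstar, a) < f(bstar + 0.05, a)
# For a < sqrt(2), f is decreasing in beta on the relevant range, so beta = 1 (list L_0) wins:
a = 1.2
assert min(f(1 - 0.01 * k, a) for k in range(30)) == f(1, a)
print("minimum checks passed")
```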

Let us finally state separately what happens when we draw points uniformly at random from a ball of a different radius. This directly follows from Theorem 3.2.

Corollary 3.3

(Random points from the β-ball). Let α > 1 and β > 0, and let L ⊂ β𝒝 consist of n = α^d uniformly random vectors from β𝒝. Then, with probability 1 − o(1) over the randomness of L, we have:

(24) (α ≤ √2)   vol(𝒱_L) = (α²β² / (4α² − 4))^{d/2+o(d)} · vol(𝒝);
(25) (α ≥ √2)   vol(𝒱_L) = (β²/α²)^{d/2+o(d)} · vol(𝒝).

Proof. Relative to the β-ball, we have vol(𝒱_L) = r^{d+o(d)}·vol(β𝒝), with r as in Theorem 3.2. Noting that vol(β𝒝) = β^d·vol(𝒝), the result follows. □

4 Approximate Voronoi cells, revisited

With the results from Section 3, we can immediately deduce asymptotics for the volume of approximate Voronoi cells, where these results can now be derived using only the Gaussian heuristic, which has been used and verified on far more occasions than [12, Heuristic assumption 1].

Corollary 4.1

(Points from a lattice). Let α > 1, and let L ⊂ 𝓛 ∖ {0} consist of the α^d shortest non-zero vectors of a lattice 𝓛. Then, assuming the Gaussian heuristic holds, with probability 1 − o(1) we have:

(26) (α ≤ √2)   vol(𝒱_L) = (α⁴ / (4α² − 4))^{d/2+o(d)} · vol(𝒱);
(27) (α ≥ √2)   vol(𝒱_L) = (1 + o(1))^{d+o(d)} · vol(𝒱).

Proof. Without loss of generality, suppose that vol(𝒱) = vol(𝒝). Under the Gaussian heuristic, the points of L are then essentially uniformly distributed in a ball of radius α. Applying Corollary 3.3 with β = α, the result then follows. □

4.1 Heuristic assumptions

Assuming that [12, Heuristic assumption 1] holds, as discussed in the introduction this would give us tight bounds on the success probability of the randomized iterative slicer from [12]. However, these results would then contradict the claimed lower bound on the success probability from [12, Equation (37)]. The source of this contradiction is [12, Heuristic assumption 1], which reads as follows.

Heuristic assumption 4.2 (Randomized slicing, DLW). For L ⊂ 𝓛 and large s,

(28)  Pr_{t′ ∼ D_{t+𝓛,s}}[Slice_L(t′) ∈ 𝒱] ≈ vol(𝒱)/vol(𝒱_L).

In fact, the randomized slicer is biased towards finding solutions that are as short as possible, and the probability of returning the unique representative from 𝒱 may be much larger than vol(𝒱)/vol(𝒱_L). We therefore propose using the following heuristic assumption instead:

Heuristic assumption 4.3 (Randomized slicing, new). For L ⊂ 𝓛 and large s,

(29)  Pr_{t′ ∼ D_{t+𝓛,s}}[Slice_L(t′) ∈ 𝒱] ≳ vol(𝒱)/vol(𝒱_L).

To motivate this new assumption, consider the reverse process of starting at the sliced solution vector t′′ = Slice_L(t′) and adding lattice vectors of length at most α·λ₁(𝓛) to obtain longer and longer vectors in the coset t + 𝓛. Now, given an initial sampled vector t′ ∼ D_{t+𝓛,s}, the probability of reaching t′′ out of all possible solution vectors in t + 𝓛 is essentially proportional to the number of paths from t′′ to t′ through the above process of adding lattice vectors of length at most α·λ₁(𝓛) to t′′. Starting from a shorter vector, the tree of potential paths from t′′ is likely to be wider, and there are likely more such paths reaching t′.

Assuming that indeed the success probability is at least proportional to the ratio of these volumes, we obtain the CVPP complexities described in Theorem 1.2 in the introduction. Here we simply replaced the lower bound on p_α from [12] by the lower bound obtained via the volume of approximate Voronoi cells, and otherwise applied the same techniques of nearest neighbor speed-ups.

4.2 The low-memory regime

As Theorem 1.2 describes complexities even for the regime of 2^{εd+o(d)} memory with small ε, let us study the asymptotic behavior as the memory is actually subexponential or even polynomial in d.

First, note that for the lower bound on the volume, we essentially only needed Equation (15), which holds even when α = o(1) scales with d. (See also [31, Lemmas 4.1 and 4.2] for absolute bounds.) For the upper bounds, we needed that the list L properly covers the sphere, and we argued that n = 2d²·vol(𝒝)/vol(𝒞_v) suffices to cover enough of the sphere with high probability. We can therefore extend these results all the way up to the regime of polynomial space. Note that for small α = 1 + ε, Theorem 3.2 gives:

(30)  vol(𝒱_L) = (1 / (8ε + O(ε²)))^{d/2+o(d)} · vol(𝒝).

Substituting suitable values of α, we get the following results.

Proposition 4.4

(Polynomially many points from the unit ball). Let L ⊂ 𝒝 consist of n = d^{Θ(1)} uniformly random vectors from 𝒝. Then, with probability 1 − o(1) over the randomness of L, we have vol(𝒱_L) = 2^{(1/2)d log₂ d + o(d log d)} · vol(𝒝).

Proof. This follows from substituting α = d^{Θ(1/d)} = 1 + Θ((log d)/d). □

In the application to CVPP algorithms, Proposition 4.4 shows that heuristically, we obtain a smooth trade-off between enumeration and using exact Voronoi cells – Hanrot–Stehlé [17, Theorem 4] previously showed that enumeration has a cost of d^{d/2+o(d)} time for solving CVP in the worst case, with polynomial memory.

Proposition 4.5

(Subexponentially many points from the unit ball). Let L ⊂ 𝒝 consist of n = 2^{Θ(d^γ)} uniformly random vectors from 𝒝, for constant γ ∈ (0, 1). Then, with probability 1 − o(1) over the randomness of L, we have vol(𝒱_L) = 2^{(1/2)(1−γ)d log₂ d + o(d log d)} · vol(𝒝).

Proof. This follows from substituting α = exp(Θ(d^{γ−1})) = 1 + Θ(d^{γ−1}). □

This matches results from e.g. [11]. To illustrate Proposition 4.5 with an example, we expect to be able to solve CVPP with query time d^{d/4+o(d)} when using 2^{Θ(√d)} memory, or we can match the average-case complexity of enumeration with a query time complexity of d^{d/(2e)+o(d)} using 2^{Θ(d^{1−1/e})} ≈ 2^{Θ(d^{0.63})} memory.
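The examples in this paragraph follow from one simple rule, sketched below (our own helper, encoding the exponent from Proposition 4.5):

```python
import math

def time_log2_coeff(gamma):
    """Coefficient c in the heuristic CVPP query time 2^{c * d * log2(d) + o(d log d)}
    when using 2^{Theta(d^gamma)} memory (Proposition 4.5); gamma -> 0 recovers
    the polynomial-memory regime of Proposition 4.4."""
    return 0.5 * (1 - gamma)

assert time_log2_coeff(0.0) == 0.5   # polynomial memory: time d^{d/2}
assert time_log2_coeff(0.5) == 0.25  # 2^{Theta(sqrt(d))} memory: time d^{d/4}
assert abs(time_log2_coeff(1 - 1 / math.e) - 1 / (2 * math.e)) < 1e-12  # d^{d/(2e)}
print("all exponent checks passed")
```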


The author is supported by a Veni grant from NWO under project number 016.Veni.192.005.


Acknowledgement

The author thanks Léo Ducas for insightful discussions on the topic of approximate Voronoi cells.

References

[1] Dorit Aharonov and Oded Regev, Lattice problems in NP ∩ coNP, in: FOCS, pp. 362–371, 2004.

[2] Miklós Ajtai and Cynthia Dwork, A public-key cryptosystem with worst-case/average-case equivalence, in: STOC, pp. 284–293, 1997. doi:10.1145/258533.258604.

[3] Miklós Ajtai, Ravi Kumar and Dandapani Sivakumar, A sieve algorithm for the shortest lattice vector problem, in: STOC, pp. 601–610, 2001. doi:10.1145/380752.380857.

[4] Martin R. Albrecht, Léo Ducas, Gottfried Herold, Elena Kirshanova, Eamonn Postlethwaite and Marc Stevens, The general sieve kernel and new records in lattice reduction, in: EUROCRYPT, pp. 717–746, 2019. doi:10.1007/978-3-030-17656-3_25.

[5] Yoshinori Aono and Phong Q. Nguyên, Random sampling revisited: lattice enumeration with discrete pruning, in: EUROCRYPT, pp. 65–102, 2017. doi:10.1007/978-3-319-56614-6_3.

[6] Yoshinori Aono, Phong Q. Nguyen and Yixin Shen, Quantum lattice enumeration and tweaking discrete pruning, in: ASIACRYPT, pp. 405–434, 2018. doi:10.1007/978-3-030-03326-2_14.

[7] Anja Becker, Léo Ducas, Nicolas Gama and Thijs Laarhoven, New directions in nearest neighbor searching with applications to lattice sieving, in: SODA, pp. 10–24, 2016. doi:10.1137/1.9781611974331.ch2.

[8] Daniel J. Bernstein, Johannes Buchmann and Erik Dahmen (eds.), Post-Quantum Cryptography, Springer, 2009. doi:10.1007/978-3-540-88702-7.

[9] Ward Beullens, Thorsten Kleinjung and Frederik Vercauteren, CSI-FiSh: Efficient isogeny based signatures through class group computations, Cryptology ePrint Archive, Report 2019/498, 2019.

[10] Nicolas Bonifas and Daniel Dadush, Short paths on the Voronoi graph and the closest vector problem with preprocessing, in: SODA, pp. 295–314, 2015.

[11] Daniel Dadush, Oded Regev and Noah Stephens-Davidowitz, On the closest vector problem with a distance guarantee, in: CCC, pp. 98–109, 2014. doi:10.1109/CCC.2014.18.

[12] Emmanouil Doulgerakis, Thijs Laarhoven and Benne de Weger, Finding closest lattice vectors using approximate Voronoi cells, in: PQCrypto, 2019. doi:10.1007/978-3-030-25510-7_1.

[13] Léo Ducas, Shortest vector from lattice sieving: a few dimensions for free, in: EUROCRYPT, pp. 125–145, 2018. doi:10.1007/978-3-319-78381-9_5.

[14] The European Telecommunications Standards Institute (ETSI), Quantum-Safe Cryptography, 2019.

[15] Ulrich Fincke and Michael Pohst, Improved methods for calculating vectors of short length in a lattice, Mathematics of Computation 44 (1985), 463–471. doi:10.1090/S0025-5718-1985-0777278-8.

[16] Nicolas Gama, Phong Q. Nguyên and Oded Regev, Lattice enumeration using extreme pruning, in: EUROCRYPT, pp. 257–278, 2010. doi:10.1007/978-3-642-13190-5_13.

[17] Guillaume Hanrot and Damien Stehlé, Improved analysis of Kannan’s shortest lattice vector algorithm, in: CRYPTO, pp. 170–186, 2007. doi:10.1007/978-3-540-74143-5_10.

[18] Gottfried Herold, Elena Kirshanova and Thijs Laarhoven, Speed-ups and time–memory trade-offs for tuple lattice sieving, in: PKC, pp. 407–436, 2018. doi:10.1007/978-3-319-76578-5_14.

[19] Ravi Kannan, Improved algorithms for integer programming and related lattice problems, in: STOC, pp. 193–206, 1983. doi:10.1145/800061.808749.

[20] Thijs Laarhoven, Sieving for shortest vectors in lattices using angular locality-sensitive hashing, in: CRYPTO, pp. 3–22, 2015. doi:10.1007/978-3-662-47989-6_1.

[21] Thijs Laarhoven, Sieving for closest lattice vectors (with preprocessing), in: SAC, pp. 523–542, 2016. doi:10.1007/978-3-319-69453-5_28.

[22] Thijs Laarhoven and Artur Mariano, Progressive lattice sieving, in: PQCrypto, pp. 292–311, 2018. doi:10.1007/978-3-319-79063-3_14.

[23] Thijs Laarhoven, Michele Mosca and Joop van de Pol, Finding shortest lattice vectors faster using quantum search, Designs, Codes and Cryptography 77 (2015), 375–400. doi:10.1007/s10623-015-0067-5.

[24] Daniele Micciancio, The hardness of the closest vector problem with preprocessing, IEEE Transactions on Information Theory 47 (2001), 1212–1215. doi:10.1109/18.915688.

[25] Daniele Micciancio and Panagiotis Voulgaris, A deterministic single exponential time algorithm for most lattice problems based on Voronoi cell computations, in: STOC, pp. 351–358, 2010. doi:10.1145/1806689.1806739.

[26] Daniele Micciancio and Panagiotis Voulgaris, Faster exponential time algorithms for the shortest vector problem, in: SODA, pp. 1468–1480, 2010.

[27] Daniele Micciancio and Michael Walter, Fast lattice point enumeration with minimal overhead, in: SODA, pp. 276–294, 2015. doi:10.1137/1.9781611973730.21.

[28] Phong Q. Nguyên and Thomas Vidick, Sieve algorithms for the shortest vector problem are practical, Journal of Mathematical Cryptology 2 (2008), 181–207.

[29] The National Institute of Standards and Technology (NIST), Post-Quantum Cryptography, 2017.

[30] Alice Pellet-Mary, Guillaume Hanrot and Damien Stehlé, Approx-SVP in ideal lattices with pre-processing, in: EUROCRYPT, pp. 685–716, 2019. doi:10.1007/978-3-030-17656-3_24.

[31] Peter Pivovarov, Volume thresholds for Gaussian and spherical random polytopes and their duals, Studia Mathematica 183 (2007), 15–34. doi:10.4064/sm183-1-2.

[32] Peter Pivovarov, Volume distribution and the geometry of high-dimensional random polytopes, Ph.D. thesis, 2010.

[33] Oded Regev, On lattices, learning with errors, random linear codes, and cryptography, in: STOC, pp. 84–93, 2005. doi:10.1145/1060590.1060603.

[34] Claude E. Shannon, Probability of error for optimal codes in a Gaussian channel, Bell System Technical Journal 38 (1959), 611–656. doi:10.1002/j.1538-7305.1959.tb03905.x.

[35] Maria Shcherbina and Brunello Tirozzi, On the volume of the intersection of a sphere with random half spaces, Comptes Rendus Mathematique 334 (2002), 803–806. doi:10.1016/S1631-073X(02)02345-2.

[36] Peter W. Shor, Algorithms for quantum computation: discrete logarithms and factoring, in: FOCS, pp. 124–134, 1994.

[37] Naftali Sommer, Meir Feder and Ofir Shalvi, Finding the closest lattice point by iterative slicing, SIAM Journal on Discrete Mathematics 23 (2009), 715–731. doi:10.1137/060676362.

[38] Damien Stehlé, Ron Steinfeld, Keisuke Tanaka and Keita Xagawa, Efficient public key encryption based on ideal lattices, in: ASIACRYPT, pp. 617–635, 2009. doi:10.1007/978-3-642-10366-7_36.

[39] Noah Stephens-Davidowitz, A time-distance trade-off for GDD with preprocessing - Instantiating the DLW heuristic, in: CCC 2019.Search in Google Scholar

[40] Michel Talagrand, Intersecting random half-spaces: toward the Gardner–Derrida formula, The Annals of Probability 28 (2000), 725–758.10.1214/aop/1019160259Search in Google Scholar

[41] Nicola Turchi, High-dimensional asymptotics for random polytopes Ph.D. thesis, 2019.Search in Google Scholar

[42] J. G. Wendel, A problem in geometric probability, Mathematica Scandinavica 11 (1962), 109–112.10.7146/math.scand.a-10655Search in Google Scholar

A The Sommer–Feder–Shalvi iterative slicer

We briefly describe some more details on previous, related work in these appendices, starting with the iterative slicer of Sommer–Feder–Shalvi [37]. This algorithm provides an elementary, greedy strategy for attempting to find a closest lattice vector to a given target vector t, given a list of lattice points L ⊂ 𝓛; it always finds a solution when L = ℛ is the set of relevant vectors of the lattice. To see why, note that the shortest representative t′ of the coset t + 𝓛 is necessarily contained in the Voronoi cell of the lattice, and therefore 0 is the closest lattice vector to t′. This implies that t − t′ is the closest lattice vector to t, and so finding the shortest representative t′ ∈ t + 𝓛 is equivalent to solving CVP for t.

To find this shortest representative, given t and a list of lattice vectors L ⊂ 𝓛, the algorithm follows the same approach as e.g. lattice sieving algorithms [21, 26, 28]: we start with t′ = t, and we repeatedly try to find vectors v ∈ L such that t′ − v is a shorter vector in the coset t + 𝓛. When no more such reductions can be performed, we terminate and hope that the algorithm has found the shortest representative.

Summarizing, the iterative slicer can be succinctly described through the pseudocode of Algorithm 1.

Algorithm 1 The Sommer–Feder–Shalvi iterative slicer [37]
Require: The relevant vectors ℛ ⊂ 𝓛 and a target t ∈ ℝ^d
Ensure: The algorithm outputs a closest lattice vector s ∈ 𝓛 to t
1: Initialize t′ ← t
2: for each r ∈ ℛ do
3: if ‖t′ − r‖ < ‖t′‖ then
4: Replace t′ ← t′ − r and restart the for-loop
5: end if
6: end for
7: return s = t − t′
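As a concrete illustration, the greedy reduction of Algorithm 1 can be sketched in a few lines of Python. This is only a minimal sketch, not the implementation from [37]: the array `vectors` plays the role of ℛ (or a smaller list L), and the restart of the for-loop is realized by the outer while-loop.

```python
import numpy as np

def iterative_slicer(target, vectors):
    """Greedily reduce `target` by the lattice vectors in `vectors`
    (rows of an array): whenever subtracting a list vector shortens
    the current coset representative t', do so and restart the scan.
    Returns the candidate closest lattice vector t - t'."""
    t_prime = np.array(target, dtype=float)
    reduced = True
    while reduced:  # "restart the for-loop" from step 4
        reduced = False
        for r in vectors:
            if np.linalg.norm(t_prime - r) < np.linalg.norm(t_prime):
                t_prime = t_prime - r
                reduced = True
                break
    return np.asarray(target, dtype=float) - t_prime
```

For the integer lattice ℤ² the relevant vectors are ±e₁ and ±e₂, and running the slicer with that full list, e.g. on the target (0.6, −1.3), returns the closest lattice vector (1, −1).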

B The Doulgerakis–Laarhoven–De Weger randomized slicer

As the iterative slicer of Sommer–Feder–Shalvi often does not succeed when using as input only a subset of the relevant vectors of the lattice, Doulgerakis–Laarhoven–De Weger proposed the following heuristic variant of the slicer. Instead of reducing with the full list of relevant vectors, we only use a subset of them. Since there is then no guarantee that the slicer returns a vector from the exact Voronoi cell, and the output may not be a solution, we repeat the algorithm many times on rerandomized versions of the same target vector. That is, instead of reducing t′ = t with the iterative slicer, we sample t′ ∈ t + 𝓛 at random (e.g. from a discrete Gaussian distribution over the coset t + 𝓛) and run the reduction on many such samples. This algorithm is given in pseudocode in Algorithm 2.

In the worst case, each of these reductions ends up on the same path and reduces to the same, wrong solutions, thus making no progress. In practice, however, it was observed that if the iterative slicer finds a solution in a single run with probability p ≪ 1, then repeating the algorithm K times with such randomized target vectors leads to an overall success probability of roughly 1 − (1 − p)^K ≈ K · p. This is purely an experimental, heuristic tweak: there are no theoretical guarantees that reducing such a shifted target vector gives “fresh” results.

Algorithm 2 The Doulgerakis–Laarhoven–De Weger randomized slicer [12]
Require: A list L ⊂ 𝓛 and a target t ∈ ℝ^d
Ensure: The algorithm outputs a closest lattice vector s ∈ 𝓛 to t
1: s ← 0
2: repeat
3: Sample t′ ∼ D_{t+𝓛,σ}
4: for each r ∈ L do
5: if ‖t′ − r‖ < ‖t′‖ then
6: Replace t′ ← t′ − r and restart the for-loop
7: end if
8: end for
9: if ‖t′‖ < ‖t − s‖ then
10: s ← t − t′
11: end if
12: until s is a closest lattice vector to t
13: return s
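The rerandomization loop can be sketched as follows, reusing a single-pass reduction as in Algorithm 1. This is a simplified stand-in rather than the DLW implementation: for illustration the discrete Gaussian sampler over the coset t + 𝓛 is replaced by adding a uniformly random small integer combination of the basis vectors, and the loop runs for a fixed number of iterations rather than until a closest vector has provably been found.

```python
import numpy as np

def reduce_once(t_prime, vectors):
    """One full pass of the iterative slicer (steps 4-8 of Algorithm 2)."""
    t_prime = t_prime.copy()
    reduced = True
    while reduced:
        reduced = False
        for r in vectors:
            if np.linalg.norm(t_prime - r) < np.linalg.norm(t_prime):
                t_prime -= r
                reduced = True
                break
    return t_prime

def randomized_slicer(target, vectors, basis, iterations=100, seed=1):
    """Repeat the slicer on rerandomized representatives of the coset
    t + L, keeping the shortest reduced vector found.  The random shift
    by an integer combination of `basis` is a crude substitute for the
    discrete Gaussian sampler used in Algorithm 2."""
    rng = np.random.default_rng(seed)
    target = np.asarray(target, dtype=float)
    best = reduce_once(target, vectors)
    for _ in range(iterations):
        shift = basis.T @ rng.integers(-3, 4, size=basis.shape[0])
        t_prime = reduce_once(target + shift, vectors)
        if np.linalg.norm(t_prime) < np.linalg.norm(best):
            best = t_prime
    return target - best  # candidate closest lattice vector
```

When the list L misses some relevant vectors, a single pass can get stuck at a non-shortest coset representative; distinct random shifts then send the greedy reduction down different paths, some of which reach a shorter representative, which is exactly the effect the rerandomization exploits.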

C The Doulgerakis–Laarhoven–De Weger complexity analysis

To analyze the heuristic time and space complexities of the randomized slicer, Doulgerakis–Laarhoven–De Weger made the following assumptions. First, the vectors from the list L ⊂ 𝓛 are assumed to follow a spherically symmetric distribution, with lengths following the prediction obtained via the Gaussian heuristic. Similarly, the exact Voronoi cell of the lattice is modeled as a ball whose radius is chosen such that the volume of the ball matches the volume of the lattice. Containment of the reduced vector t′ ∈ t + 𝓛 in 𝒱 was then estimated to be equivalent to the condition ‖t′‖ ≤ λ₁(𝓛).
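To make this ball model concrete: a ball in ℝ^d whose volume equals the covolume of the lattice has radius (covolume/V_d)^{1/d}, where V_d = π^{d/2}/Γ(d/2 + 1) is the volume of the unit d-ball; this radius is the usual Gaussian-heuristic estimate for λ₁(𝓛). A small sketch (not taken from the paper) computing this radius:

```python
from math import gamma, pi

def gaussian_heuristic_radius(d, covolume=1.0):
    """Radius r such that a d-dimensional ball of radius r has volume
    `covolume` -- the ball model of the Voronoi cell described above."""
    unit_ball_volume = pi ** (d / 2) / gamma(d / 2 + 1)
    return (covolume / unit_ball_volume) ** (1.0 / d)
```

For large d this radius grows as sqrt(d/(2πe)) · covolume^{1/d}, the familiar asymptotic form of the Gaussian heuristic.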

Then, to analyze the success probability of the slicing routine, it was first observed that if t′ has a rather large norm, then it is likely that L contains a vector v such that t′ − v is shorter than t′; progress can then still be made with ease. There is a phase transition at a certain value β such that:

  1. If ‖t′‖ > β, then with probability at least d^{−Θ(1)} there exists a vector v ∈ L such that ‖t′ − v‖ ≤ ‖t′‖;

  2. If ‖t′‖ < β, then with probability at most 2^{−Θ(d)} there exists a vector v ∈ L such that ‖t′ − v‖ ≤ ‖t′‖.

After reaching norm β, the algorithm may still find a solution, but each additional reduction step only occurs with exponentially small probability. To obtain a bound on the overall success probability of the algorithm, the authors studied the probability that after exactly one more reduction with the list L, we reach the desired norm λ₁(𝓛), so that t′ is expected to be contained in 𝒱. This is of course only one way for the algorithm to “reach” the Voronoi cell: it may also happen that after two, three, or any larger number of additional reductions we still reach the solution, albeit with exponentially small probability. The analysis based on finding the solution in exactly one step, jumping from norm β to λ₁(𝓛), therefore only yields a lower bound on the overall success probability of the algorithm. This directly leads to the bound on the success probability stated in Equation (5).

Then, given this analysis of the algorithm, the authors obtained a lower bound on the success probability p of a single run of the (randomized) iterative slicer. If one makes the additional assumption that the success probability of the algorithm equals the ratio of the volume of the exact Voronoi cell to the volume of the approximate Voronoi cell, then this would immediately yield a lower bound on the ratio of these volumes as well. This then leads to the conjectured lower bound on the ratio of the volumes given in Equation (4).

As shown in this paper, the latter step is incorrect, as we give tight bounds on the ratio of these volumes, and show that the inverse of the expression from (4) is not a lower bound on the ratio of the volumes.

Received: 2019-06-05
Accepted: 2019-07-01
Published Online: 2020-11-17

© 2020 T. Laarhoven, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
