Lattice Sieving in Three Dimensions for Discrete Log in Medium Characteristic

Abstract: Lattice sieving in two dimensions has proven to be an indispensable practical aid in integer factorization and discrete log computations involving the number field sieve. The main contribution of this article is to show that a different method of lattice sieving in three dimensions will provide a significant speedup in medium characteristic. Our method is to use the successive minima and shortest vectors of the lattice instead of transition vectors to iterate through lattice points. We showcase the new method by a record computation in a 133-bit subgroup of F_{p^6}, with p^6 having 423 bits. Our overall timing is nearly 3 times faster than the previous record of a 132-bit subgroup in a 422-bit field. The approach generalizes to dimension 4 or more, overcoming one key obstruction to the implementation of the tower number field sieve.


Introduction
The most widely adopted public-key cryptography algorithms in current use are critically dependent on the (assumed) intractability of either the integer factorization problem (IFP), the finite field discrete logarithm problem (DLP) or the elliptic curve discrete logarithm problem (ECDLP). The most effective known attacks against IFP and DLP use the same basic algorithm, namely the Number Field Sieve (NFS). This algorithm has subexponential complexity in the input size. On the other hand, all known methods to attack the ECDLP in the general case have exponential complexity. However there are special instances of the ECDLP which can be attacked by effectively transferring the problem to a finite field, allowing the NFS to be used. For example such instances arise in the context of pairing-based cryptography, where certain elliptic curves can be used to realize 'Identity-Based Encryption' (IBE). There is a trade-off between the reduced security due to the size of the finite field on which the security is dependent, and increased efficiency of the pairing arithmetic. The optimal parameters have been the subject of intense scrutiny over the last few years, which have seen a succession of improvements in the NFS for the DLP in the medium characteristic case. This is directly relevant in the case of pairings, where the finite field on which the security of the protocol depends is typically a small degree extension of a prime field.
A key part of the NFS is lattice sieving. The main contribution of this article is to demonstrate that different methods of lattice enumeration can make a significant difference to the speed of lattice sieving. This paper is organized as follows. In section 2, we give a very brief overview of the Number Field Sieve algorithm in the medium-characteristic case. A more detailed explanation can be found in [11]. One of the main bottlenecks of this algorithm is lattice sieving, which involves enumerating points in a (low-dimensional) lattice. We propose in section 3 a straightforward idea to significantly increase enumeration speed in dimension three and above. The idea is to change the angle of planes that are sieved through in order to reduce the number of planes. This idea has been used before for lattice enumeration in a sphere [17], however it has not been applied successfully to lattice sieving for the NFS. We show that the idea can work well by using integer linear programming to find an initial point for iteration in a plane within the sieve cuboid. In section 4 we propose a novel method to amortize memory communication overhead which applies regardless of dimension. In section 5 we give details of a new record discrete log computation in F_{p^6}. The previous record due to Grémy et al [11] had p^6 with 422 bits, and this paper has p^6 with 423 bits. We deliberately chose a field size just one bit larger in order to attempt a direct comparison of methods and timings. In section 6 we present a record pairing break with the same prime p. Finally we conclude in section 7 and mention some possible future research ideas.

Number Field Sieve
We start with a very brief sketch of the NFS in the most naive form suitable for computing discrete logs in F_{p^n}. Consider the following commutative diagram: the polynomials f_0(x) and f_1(x) are irreducible in Z[x] of degree n, and they define the number fields Q(α) and Q(β) respectively. We require that f_0(x) and f_1(x), when reduced modulo p, share a factor ψ(x) of degree n which is irreducible over F_p. This defines the finite field F_{p^n} as (Z/pZ)[x]/⟨ψ(x)⟩. Usually ψ(x) is simply the reduction of f_0(x) modulo p.
In two-dimensional sieving, for a bound B, we inspect many pairs of integers (a, b) with 0 < a ≤ B and −B ≤ b ≤ B in the hope of finding many pairs such that Res(f_0, a − bx) and Res(f_1, a − bx) are both divisible only by primes up to a bound B_2. This B_2 is called the smoothness bound, or the large prime bound. The set of all polynomials a − bx that we inspect is called the search space. There is also a bound B_1, called the factor base bound, which is the largest prime used in sieving. The factor base consists of all primes up to this bound B_1. In brief, we seek two different representations of (the image of) a − bx over the factor base. There are excellent detailed explanations of the NFS, see [3,10,11,16]. We refer the reader to those papers for further details. Recently, new variations of NFS have been described where the norms (i.e. resultants) are even smaller in certain fields, see [18,21].

Lattice Sieving
The original lattice sieve (due to J.M. Pollard [20]) has developed into the 'special-q' lattice sieve, which we now outline. Let q be a rational prime, let r be an integer with f_0(r) ≡ 0 mod q, and let q = ⟨q, θ − r⟩ be an ideal of K = Q(θ) ≅ Q[x]/⟨f_0⟩ lying over q. We choose q to be smaller than our smoothness bound. We fix a factor base, which consists of all prime ideals in the ring of integers of K whose norm is smaller than our pre-determined factor base bound. We look for (integral) ideals of K that are divisible by q, and we do this by looking for ideals whose norm is divisible by q. We would also like the norm to be divisible by many other small primes p. We fix q and iterate over all p in the factor base using a sieve.
In 3-dimensional sieving, the pairs (a, b) corresponding to a − bx (in section 2) become triples (a, b, c) corresponding to a + bx + cx². A sieving ideal will have the form ⟨a + bx + cx²⟩. Relation collection examines a subset S of the whole set of polynomials A(x) of degree 2. The subset S is called the search space and consists of the polynomials A(x) with bounded coefficients.
We perform sieving in three dimensions as follows. We use a fixed-size sieve region H of coordinate triples (i, j, k), where each lattice point will correspond to a norm which is always divisible by q and hopefully divisible by many p. Define lattices Λ_q and Λ_pq by

    Λ_q: columns (q, 0, 0), (−r, 1, 0), (−r², 0, 1),    Λ_pq: columns (pq, 0, 0), (−t, 1, 0), (−t², 0, 1),

where f_0(r) ≡ 0 mod q and f_0(t) ≡ 0 mod pq, and the columns are a basis. Compute an LLL-reduced basis for both Λ_q and Λ_pq to get matrices L′_q and L′_pq. Then let

    L′ = (L′_q)^{−1} · L′_pq,

which is an integer matrix by construction. Let Λ′ be the lattice with basis L′. We mark all (i, j, k) in H ∩ Λ′. As a result, for a sieve location (i, j, k) that has been marked, if we let

    (a, b, c)^T = L′_q · (i, j, k)^T,

then we know that the norm of ⟨a + bθ + cθ²⟩ is divisible by both q and p. We compute and reduce L_q once per special-q, and compute L_pq etc. for each p. We compute (a, b, c) only for (i, j, k) that have been marked for many p (above a pre-determined threshold).
Our new results have two aspects. First, in section 3 we improve the speed of enumeration of points in dimensions higher than two. Second, in section 4 we give a new way of avoiding cache locality issues by the use of a histogram of lattice point hits. This applies regardless of dimension.

Faster enumeration
In a lattice Λ of rank n, recall that the i-th successive minimum is defined by

    λ_i(Λ) = min{ r > 0 : dim(span(Λ ∩ B_r)) ≥ i },

where B_r = {x ∈ R^n : ||x|| ≤ r}. In particular, λ_1(Λ) is the length of a shortest nonzero vector in Λ. A basis v_1, . . . , v_n for Λ is said to be a Minkowski-reduced basis if, for k = 1, 2, . . . , n, v_k is the shortest lattice element that can be extended to a basis together with v_1, . . . , v_{k−1}.
We assume we are sieving in three dimensions. We fix a bound B and let H ⊂ R³ be the cuboid sieving region determined by B. Let Λ′ be the lattice defined in section 2. The problem is to list the elements of Λ′ ∩ H in an efficient way. In previous work this is done by iterating through the planes parallel to the xy-plane and enumerating the lattice points in each of these planes. We propose a different method which uses fewer planes.
Let v_1, v_2, v_3 be vectors having lengths λ_1(Λ′), λ_2(Λ′), λ_3(Λ′), the first three successive minima of Λ′. These three vectors are guaranteed to exist and we can either find all three, or an acceptably close approximation (see Remark 1). The origin together with v_1 and v_2 defines a plane which we call P. Let c_max be the largest integer c such that the translate P − c · v_3 still intersects H. We refer to the plane G = P − c_max · v_3 as the 'ground plane'.
Our approach is very simple: to enumerate all lattice points in H, we enumerate all points in the ground plane G that lie in H, and then all points in the translates G + k·v_3 for k = 1, 2, 3, . . . that lie in H, until we reach the last translate intersecting H.

Remark 1. Finding v_1, v_2, v_3 is done with the LLL algorithm. In practice, in very small dimension such as three, this is sufficient to find a Minkowski-reduced basis, or a close approximation which is good enough for our purposes.
Remark 2. To easily enumerate points in a plane G + k·v_3, we first locate one point p_0 that is contained within both the plane and the sieving region H. For this, we use integer linear programming (described in this context below). Once we have located p_0, we proceed to enumerate points in this plane by adding and subtracting multiples of v_1 and v_2 from p_0, until by doing so we are no longer within H. This is done inductively, by first enumerating all p_0 + c_1·v_1 where c_1 runs over all integers such that p_0 + c_1·v_1 is in H. Then we add v_2 and enumerate all p_0 + v_2 + c_1·v_1 where c_1 runs over all integers such that p_0 + v_2 + c_1·v_1 is in H. Then we add v_2 again, and repeat. This may not be the optimal method of enumerating points in G + k·v_3, but it worked well in our computations and is sufficient for our purposes. Pseudo-code for this can be seen in Algorithm 1.
This inductive procedure will extend to higher dimensions, as long as the integer linear programming problem required to find the corresponding feasible points is tractable. We expect this to be the case certainly up to dimension six, and perhaps further.
Remark 3. If the lattice is very skewed, it is possible that the last valid sieving point in the plane is p_k = p_{k−1} + v_1 + c·v_2, where c ≥ 2 (and p_{k−1} is the previous point). It would be preferable to reach all points by unit additions of v_2, so for practical purposes we do this and ignore the cases where such 'outlier' points are missed.
We have quantified the percentage of missed points, based on test data from the record computation and comparison with an exhaustive search. We consistently found that about 1.8% of points were missed per special-q. Since there are usually up to hundreds of thousands of primes on a sieving side, and multiple planes per prime, it would only take a small number of missed points per prime to account for this percentage. This most likely happens at planes intersecting only corners of the sieve region.
Remark 4. In two dimensions, the sieving method of Franke and Kleinjung [7] is very efficient. Our approach works in the 2d case also, using the first two successive minima of the 2d lattice, but it will not quite compete with the method in [7] in terms of speed of enumeration because we must do a little extra work when dealing with boundaries. This shows that dealing with the boundaries of the sieving region is not trivial.
We summarize the above description in Algorithm 1.
Remark 5. The reader may notice that in Algorithm 1 as presented, there is no guarantee, upon adding (v_1, v_2, v_3) to (x, y, z) to move to the next row, that (x, y, z) is still within the sieving plane and boundary. We have written the algorithm like this to convey the main idea, but in the code, every time we switch to a new row we solve a mini integer linear programming task to find a multiple of (u_1, u_2, u_3) to subtract from (x, y, z), pinning it to the start of the row while keeping it within the boundary. Solving a one-dimensional integer linear programming problem can be done in-line, and does not add much code complexity.

Algorithm 1 3D Lattice Sieve
Input: basis vectors u = (u_1, u_2, u_3), v = (v_1, v_2, v_3), w = (w_1, w_2, w_3); sieving region; sieve array M
Output: list of prime divisibility events contained in M
procedure
    for each plane intersecting the sieving region do
        find a starting point (x, y, z) in the plane within the region (integer linear programming)
        (ws_1, ws_2, ws_3) ← (x, y, z)    ▷ remember the plane's starting point
        while current row intersects the region do
            subtract a multiple of (u_1, u_2, u_3) from (x, y, z) until we are at the 'start of the row'
            for every vector in current row do
                record divisibility event at (x, y, z); (x, y, z) ← (x, y, z) + (u_1, u_2, u_3)
            end for
            (x, y, z) ← (x, y, z) + (v_1, v_2, v_3)    ▷ move to the next row
        end while
        Repeat the above while loop in the opposite direction in the plane, that is, by subtracting (v_1, v_2, v_3) from (x, y, z) to move to the next row. We can still progress within each row by pinning to the start, then adding (u_1, u_2, u_3).
        When finished in both directions, move to the next plane: (x, y, z) ← (ws_1, ws_2, ws_3) + (w_1, w_2, w_3)
    end for
end procedure

Previous Lattice Enumeration Methods
Lattice enumeration is widely used in algorithms to solve certain lattice problems, such as the Closest Vector Problem. However, sieving in a cuboid introduces many complications that do not occur when sieving in a ball.
Our enumeration here is similar to the Fincke-Pohst-Kannan algorithm (FPK) for lattice enumeration [6,17]. However, it differs significantly in that we sieve in a box, as opposed to a sphere. Further, we do not compute a norm for every point to test whether it is within the boundary: the use of a box allows us to separate many points which can be treated in fast loops with no individual boundary checking. In practice this makes a huge difference. Note that L. Grémy's space sieve is 120 times faster than the FPK algorithm (see [10]). We outperform the space sieve by over 2.5×. Note also that sieving in a rectangular region is fundamental to Franke and Kleinjung's 2d lattice sieve algorithm, and its success depends on the shape of this region.

Example
The lattice used in Figure 2 is the following:

Integer Linear Programming
This can be formulated as an integer linear programming problem, where the aim is either to minimize x subject to constraints depending on u, v and B, or to find any feasible point, if one exists. This problem is well studied, and though it is NP-hard in general, it can be solved easily in small dimensions. It is computationally trivial in dimension 3, for example, which is the case used in this article.
We shall give more details now. Our approach is based on 'Fourier-Motzkin' elimination. Given R = (x, y, z), u = (u_1, u_2, u_3) and v = (v_1, v_2, v_3), we seek integers a, b such that p_0 = R + a·u + b·v lies within the boundary; the six boundary inequalities form a system A·(a, b)^T ≤ C with six rows. Our approach is to eliminate a, deduce b, and then compute a. We first transform the system so that the entries of the first column of A are normalized (i.e. we multiply across by the least common multiple of u_1, u_2, u_3 and divide across respectively by |u_1|, |u_2|, |u_3|; thus each entry in column 1 of A is ±k for some k ∈ Z). We look for all pairs of rows of A such that a cancels. We achieve this with a nested loop in (i, j), where we disregard a pair of rows if both a and b cancel (this happens e.g. if |i − j| = 3). For a given pair of rows where we have eliminated the term in a, it is then easy to compute the resulting bound (C_i + C_j)/(V_i + V_j) on b, where the C_i are the constants in the RHS vector and the V_i are the coefficients of v in A, as long as V_i + V_j ≠ 0 (otherwise we skip this (i, j)). When we have considered all pairs, we keep the smallest value of b found. We then compute the smallest corresponding value of a by back-substitution in all 6 rows. This is summarized in Algorithm 2.
Remark 6. It is possible that no feasible point exists or is found, but we can detect this by checking if p 0 is within bounds.

Algorithm 2 Integer Linear Programming
Input: R = (x, y, z), u, v and the boundary, such that R ∈ P, but R not necessarily within the boundary. Output: (a, b) such that p_0 = R + a·u + b·v is contained in both P and the boundary. procedure

Dealing with Cache Locality Issues
Representing a 3d lattice in memory can be done in different ways. One problem with arranging points first by adjacent lines, then adjacent planes etc is that the storage/retrieval of points tends to result in random memory access patterns, which severely impacts performance. This is a fundamental concern in large-scale computation. Computer manufacturers address this by providing various levels of 'cache', i.e. a limited quantity of high-speed memory, too costly for main memory, which is used as a temporary store of frequently-accessed or burst-access data. Traditionally, lattice-sieving has always had to make use of cache to minimize the cost of the random memory access patterns that occur in practice. The usual way this is handled is via 'bucket sorting' [1].
Our idea to improve cache locality without using bucket sorting is simple: list and sort. We encode the x, y, z coordinates of each lattice point as a 32-bit integer. A sieve 'hit' is an ordered pair (integer, log p) where p is a rational prime that divides the norm of the ideal corresponding to the lattice point/integer. Here 2 ≤ p ≤ B 1 where B 1 is the factor base bound. We store all sieve hits in a list of increasing size. Because the list increases strictly linearly, this is ideally suited to fast memory access and is compatible with all levels of cache. By itself, this is not an advantage because we have merely collected a long list of randomly ordered pairs. However, we can sort this list by sorting the integers in increasing order, i.e., we sort these pairs by the integers only. We can then recover lattice points with a large smooth part via a linear scan of the sorted list, because if an integer appears K times as the first coordinate of a pair then it will appear with K different values of p. The more times it appears, the smoother the lattice point is.
Remark 7. Our list/sort approach must be considered carefully in terms of memory because we must store log p and full co-ordinates for every sieve event. While in practice sorting the list of all sieve events is still fast, the memory required to store the full list is considerable (it certainly does not fit in cache all at once, but cache is still essential and is used implicitly in the sorting routine, which is cache-friendly). This meant we had to ensure our sieving parameters could be set to fully utilize the compute capacity of our sieving nodes. See section 5 for further details.
Remark 8. For our record computation (see next section) we used a sieving region H of size 2^9 × 2^10 × 2^10. A traditional lattice sieve would thus take 2^29 bytes, or exactly 512 MB, to store the sieve (assuming one byte to store log p per sieve location). We need to store 5 bytes per sieve hit, one byte for log p plus 4 bytes for the (x, y, z) coordinates, but note that we do not have to represent every possible sieve location, only the hits. In practice, for the parameters we used to set the record, each instance of the sieve program records roughly 500 million sieve events, and so requires about 5 × 500 million bytes = 2500 MB of RAM in total per special-q. This is approximately a fivefold increase over the 512 MB for a traditional lattice sieve, but it was not a problem for us as our production nodes had the memory to cope.
Remark 9. We avoid sieving the smallest primes, as the corresponding lattices are very dense and greatly increase the memory required for the list + sort approach. This would not really be an issue if we used bucket sorting, but either way these smallest primes can probably be left out, because they tend to appear in almost every relation. The less dense lattices (those of larger primes) should be sieved, and our lattice enumeration method is very fast there.
Remark 10. Grémy et al's [11] sieve does not include bucket sort, and neither do we. It could be a possible improvement to replace our list/sort with enumerate/bucket sieve (and we could retain our fast enumeration idea as bucket sorting relates only to memory access).
We compare sieving statistics in Table 1 between our implementation and that of [11]. Note that all of these times give total special-q time excluding the cofactorization time. In [11], sieving and memory access are intertwined, so we compare this to our combined sieving/listing/sorting time. Our siever is over 4.5× faster. Sieving speed in our case is not the bottleneck. The extra sieving speed is gained at the expense of increased memory requirements.


Record Computation in F_{p^6}
We implemented the 3d case of our lattice sieving idea in C and used it to set a new record in solving the discrete log in the multiplicative subgroup (F_{p^6})^×. Previous records were set by Zajac [24], Hayasaka et al (HAKT) [15], and Grémy et al (GGMT) [11]. All computations were done on the main compute nodes of the Kay cluster at ICHEC, the Irish Centre for High-End Computing. Each node consists of dual 20-core Intel Xeon Gold 6148 (Skylake) processors @ 2.4 GHz, with 192 GB RAM per node. All timings have been normalized to a nominal 2.0 GHz clock speed. With ϕ = (1 + √5)/2, we chose the prime p = ⌊10^21 · ϕ⌋ + 29. Our target field is F_{p^6}, where p^6 has 423 bits. This is comparable to the field size of the previous record at 422 bits [11]. One consequence of our choice is to allow a fair comparison of the total effort required to solve discrete logs in a field of this order of magnitude.

Polynomial selection
We implemented the Joux-Lercier-Smart-Vercauteren (JLSV_1) algorithm, ranking candidate polynomials by their 3d Murphy E-score, and after about 100 core hours found a polynomial pair from the cyclic family of degree six described in [9]. We computed the 3d alpha score for these polynomials and found α(f_0) = −3.6 and α(f_1) = −12.6.

Relation collection
Our implementation was written as a standalone executable, independent of CADO-NFS, producing relations in the format that CADO-NFS can use. We carry out cofactorization using Pollard's p − 1 algorithm and two rounds of Edwards elliptic curve factorization. The cofactorization implementation uses a standard approach and is not an improvement on CADO-NFS's cofactorization rig. Cofactorization typically takes 120-130 seconds per special-q; this is the bottleneck.
Our program is extremely fast at sieving. This allowed us not only to use a larger factor base, but also to search for relations that are 'twice as difficult' to find, i.e. to use a smoothness bound of 2^28 as opposed to the 2^29 used in the 422-bit record. We were able to use a factor base bound of 2^24 with no major loss of speed. Typically sieving takes less than 40 seconds per special-q, including sorting.
The time per special-q was roughly constant across the entire range, at between 150 and 170 seconds. The total time per special-q is made up of sieving plus cofactorization. There were about 500 million sieve hits per special-q. As we explained in Remark 8, each sieve hit takes exactly 5 bytes of memory, so we need about 5 × 500 million bytes = 2500 MB of RAM per special-q. We were able to utilize all 40 cores on our sieving nodes, where each node has 192 GB of memory. We needed about 2500 MB × 40 = 100 GB in total.
We were able to use a larger sieve region due to the speed of lattice enumeration. We used a sieving region of size 2^9 × 2^10 × 2^10, compared to the region 2^10 × 2^10 × 2^8 used in the previous record.
We sieved most special-qs on the f_0 side with norm between 16M and 263M. Our program sieves only one ideal in each Galois orbit; we apply the Galois automorphism as a post-processing step. We found 7,152,855 unique relations; applying the Galois automorphism (which costs essentially nothing in core hours) and removing duplicates left us with 34,115,391 unique relations. The total sieving effort was 69,120 core hours.

Table 2: Key statistics of record computations in F_{p^6}. All timings in core hours. Recall that S is the search space, q is the norm of q, H_0, H_1, H_2 are the bit lengths of the sieving dimensions, and q_max is the largest special-q norm we encounter (which will be just below the smoothness bound).

Construction of matrix
We modified CADO-NFS [23] to produce a matrix arising from degree-2 sieving ideals for the linear algebra step.

Linear algebra
We used the Block Wiedemann implementation in CADO-NFS (we compiled commit d6962f667d3c... with MPI enabled), with parameters n = 10 and m = 20. Due to time constraints we needed to minimize wall-clock time, so we chose to run the computation on 4 nodes to reduce the iteration time for the Krylov sequences. Also, to avoid complications, we did not run the 10 Krylov sequences in parallel. The net result was that we spent 11,760 core hours on the Krylov step, which is suboptimal (but got us the result in time). It took 24 core hours (on one core) to compute the linear generator and 672 core hours for the solution step. This gave the virtual logarithms of 2,754,009 factor base ideals. We ran log reconstruction to reach a final total of 25,215,976 known virtual logarithms out of a possible 29,246,136 factor base ideals. We note that a similar-sized matrix was solved in [11], which used 1,920 core hours for the Krylov step. However, due to our choice of smoothness bound, set to 2^28, our linear algebra effort to set the new record was considerably less than that of the previous record of 422 bits, which involved a Krylov step taking 23,390 core hours for a smoothness bound of 2^29.

Individual logarithm
Take the element g = x + 2 ∈ F_{p^6} = F_p[x]/⟨f_0(x)⟩. Let ℓ = 9589868090658955488259764600093934829209, a large prime factor of p² − p + 1. Let h = (p^6 − 1)/ℓ. Note that g is not a generator of the entire multiplicative group of F_{p^6}, but we do have that g^h is a generator of the subgroup of size ℓ. It is easy to compute vlog(g) since N_0(g) = −3³.

Conclusion
We have presented a new approach to lattice sieving in higher dimensions for the number field sieve, together with a novel approach to avoiding inefficient memory access patterns which applies regardless of dimension.
In addition, we implemented the 3d case of our idea and used it to set a record in solving discrete log in F_{p^6}, a typical target in the cryptanalysis of pairing-based cryptography, in time a factor of more than 2.5× better (in core hours) than the previous record, which was of directly comparable size. It should be possible to improve the code further with more effort put into optimization. We have indicated that the sieving enumeration generalizes to higher dimensions as long as a certain integer linear programming problem remains tractable. This has immediate implications for the possibility of implementing the Tower Number Field Sieve and e.g. the Extended Tower Number Field Sieve, the latter of which depends on sieving in dimension at least four. The recent preprint [14] addresses one major prerequisite to the realization of TNFS and ExTNFS, concerning polynomial selection, while in the present work we give a strong indication that another obstruction, that of sieving efficiently in small dimensions of four and above, may be easier to overcome than first thought.