## 1 Introduction and related works

Traditional cryptographic primitives are analyzed in a black-box model, where the adversary accesses the primitive only via restrictive and well-defined interfaces (oracles). However, this does not truly reflect the real world, where the adversary may obtain a wealth of unintended *side-channel* information about a cryptosystem from its implementation. This extra leakage of information is not accounted for in the aforementioned black-box model of security. Leakage-resilient cryptography emerged as a theoretical foundation to address side-channel attacks. Here it is assumed that the adversary has access to side-channel information, which is modeled by allowing the adversary to specify arbitrary leakage functions (subject to some restrictions) and obtain leakage from the secret key of the system as dictated by these functions. Note that some restrictions must be imposed on the class of allowable leakage functions, as otherwise the adversary could simply read off the entire secret key from memory. Depending upon these restrictions, many theoretical models of leakage have emerged in the recent literature [2, 6, 9, 14, 16, 30].

In this work, we focus mainly on the *bounded-memory leakage* model [2, 6]. In this model, the adversary chooses arbitrary polynomial-time computable leakage functions *f* and receives *f*(*sk*), where *sk* is the secret key. The only restriction is that the sum of the output lengths of all the leakage functions the adversary ever obtains is bounded by some parameter *λ*, which is smaller than the size of *sk*. Other notable models of leakage include the “only computation leaks information” (OCLI) model, the continual memory leakage model and the auxiliary input leakage model. We briefly survey these models in Appendix B.
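As a concrete illustration of this restriction, the following minimal Python sketch (the class name and interface are hypothetical, purely illustrative) enforces the bound *λ* on the total leakage an adversary can ever obtain:

```python
class BoundedLeakageOracle:
    """Answers adversarial leakage queries f(sk), refusing once the
    total number of leaked bits would exceed the bound lam_bits."""

    def __init__(self, sk: bytes, lam_bits: int):
        self.sk = sk
        self.remaining = lam_bits

    def leak(self, f, out_bits: int):
        # The model's only restriction: total leakage <= lambda bits.
        if out_bits > self.remaining:
            raise PermissionError("leakage budget exhausted")
        self.remaining -= out_bits
        out = f(self.sk)
        # Truncate to the declared output length (in bits).
        return out & ((1 << out_bits) - 1)

# Example: leak the 8 least-significant bits of the key.
oracle = BoundedLeakageOracle(b"\x13\x37" * 16, lam_bits=64)
lsb = oracle.leak(lambda sk: int.from_bytes(sk, "big"), out_bits=8)
```

The oracle itself places no restriction on *f* other than the output bound, matching the bounded-memory leakage model.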

Ever since the ground-breaking work of Diffie and Hellman (DH) [13], authenticated key exchange (AKE) protocols have been an important cryptographic primitive. The DH key exchange protocol can actually be viewed as a non-interactive key exchange (NIKE) protocol, where the parties can establish a shared key among themselves without any interaction, provided the public keys of all the parties are pre-distributed and they agree on some (common) global public parameters. NIKE is very useful in bandwidth-, power- and resource-critical systems such as embedded devices and wireless sensor networks, where communication must be kept to a minimum. Despite its real-world applications, NIKE was mostly overlooked until recently [19]. Freire et al. [19] proposed formal security models for NIKE and efficient constructions in these models. Although the NIKE constructions of [19] are secure in the traditional (non-leakage) setting, their security may completely break down in the presence of leakage. In fact, we demonstrate that the pairing-based NIKE construction of [19] is insecure even if the adversary obtains only a single bit of leakage from the secret key of a party. It is therefore important to thoroughly study the leakage resiliency of NIKE. Much research has been carried out on the leakage resiliency of interactive key exchange protocols [3, 4, 5, 11], but the leakage resiliency of NIKE remains largely unstudied. Morita et al. [31] studied the security of NIKE protocols in the face of related-key attacks (RKA) and showed various implication and separation results (in the RKA setting) among the security notions of NIKE put forward by Freire et al. [19]. In contrast, the equivalences between the different NIKE models of Freire et al. [19] easily carry forward to the leakage setting, so such separations do not arise there. We refer the interested reader to Morita et al. [31] and Freire et al. [19] for a detailed exposition.

As one of the central applications of leakage-resilient NIKE (LR-NIKE), we show how to construct a leakage-resilient IND-CCA-2-secure PKE scheme generically from LR-NIKE (in the bounded-memory leakage setting). All the previous constructions of leakage-resilient IND-CCA-2 (LR-IND-CCA-2) secure PKE schemes rely solely on hash proof techniques to achieve leakage resiliency. However, the generic approach of constructing a leakage-resilient CCA-secure PKE scheme *solely* using hash proof systems (HPS) is inherently limited in its achievable leakage rate. Qin and Liu achieved a leakage rate of *o*(1) by combining HPS with one-time lossy filters (OTLF)^{1} [35], and later the optimal rate of 1 – *o*(1) by cleverly instantiating the underlying primitives, namely HPS and OTLF [36]. However, the complexity assumption they make for the latter construction is rather non-standard, namely a refined subgroup indistinguishability (RSI) assumption over composite order groups. The parameters of their construction are also large due to the use of composite-order groups.

We deviate from this HPS-based approach of constructing LR-IND-CCA-2-secure PKE schemes and show that this connection is not inherent. To this end, we develop a new primitive called *leakage-resilient non-interactive key exchange* (LR-NIKE). Our construction of LR-NIKE relies solely on a leakage-resilient chameleon hash function (which in turn relies only on a strong collision-resistant hash function) and a constant number (to be precise, only 3) of pairing operations. We then show a very simple and generic construction of LR-IND-CCA-2-secure PKE schemes achieving the optimal leakage rate of 1 – *o*(1), based solely on the assumption that leakage-resilient NIKE exists.

We also show the applicability of leakage-resilient NIKE to constructing leakage-resilient authenticated key exchange (AKE) protocols and leakage-resilient low-latency key exchange (LLKE) protocols (in the bounded-memory leakage setting). All the previous constructions of leakage-resilient AKE protocols [4, 11] rely on HPS either implicitly (by using leakage-resilient PKE as a building block) or explicitly (by using the properties of HPS directly). Our generic construction of leakage-resilient AKE gives an alternative way to construct AKE protocols, different from the previous constructions, achieving the optimal leakage rate of 1 – *o*(1). Low-latency key exchange (LLKE) is a highly practical form of key exchange that permits the transmission of cryptographically protected data, without prior key exchange, while providing perfect forward secrecy (PFS). This concept was discussed in Google’s QUIC^{2} protocol, and a low-latency mode is currently under discussion for inclusion in TLS version 1.3. Although the first formal model of LLKE was studied by Hale et al. [23], the leakage resiliency of LLKE has remained unstudied until now. Being a candidate for TLS 1.3, it is important to explore the leakage resiliency of LLKE protocols, as side-channel attacks are widespread.

### Our contributions

Our main contributions are summarized as follows.

### Leakage-resilient NIKE

As our *first* major contribution, we study the leakage resiliency of NIKE protocols. We present a leakage security model for NIKE protocols, defining the notion of leakage-resilient non-interactive key exchange. We point out a subtlety in defining leakage-resilient NIKE protocols, discussed below, and show that care must be taken in the definition. Finally, we show how to construct a NIKE protocol secure in this model.

### Our model

Our model of leakage-resilient NIKE adapts and generalizes the CKS-heavy model of NIKE proposed by Freire et al. [19] to the setting of leakage. Firstly, we notice that defining the security of NIKE protocols in the setting of leakage requires more care than in the non-leakage setting. Note that the shared key of a party in a two-party NIKE protocol is simply a deterministic function of its own secret key and the public key of the other party. Hence, in the setting of leakage, there is a simple attack on any LR-NIKE protocol: the adversary can encode the (description of the) shared-key derivation function as the leakage function, with the public key of the other party hard-coded into it, and thereby directly leak from the shared key. To prevent this trivial attack, we must impose some meaningful restrictions. One plausible way to circumvent the attack is to restrict the class of allowable leakage functions. In particular, one may assume that the leakage functions are not allowed to access the public parameters of the system (and hence the public keys of the parties) while leaking from the secret key of a party. However, this seems an unnatural restriction, since the public parameters (and in particular the public keys of all parties) are known to all parties, and hence also to the adversary.

To this end, we propose an alternative model for LR-NIKE protocols that avoids the above attack. In particular, we assume that all parties participating in a NIKE protocol are equipped with a *leak-free* hardware component which can be used to shield a small part of their public keys. The leakage functions can access the public keys of all the parties, except the components stored in the leak-free hardware. Note that the leak-free hardware should be used in a minimal way in any LR-NIKE protocol^{3}.
Now we informally define our security model for LR-NIKE protocols.

Our model for LR-NIKE assumes that all parties participating in the protocol are equipped with a leak-free hardware component. This hardware is used to store a small part of their public keys, hence effectively shielding these parts from the view of the adversary. The specification of which components are stored in the hardware is protocol-specific. However, we stress that the usage of the leak-free hardware should be *minimal*. Our security model for LR-NIKE protocols is a very strong model allowing the adversary to register arbitrary public keys into the system, corrupt honest parties to obtain their secret keys, issue extract queries to obtain shared keys between two honest parties and also between one honest party and another corrupt party. Besides this, we also allow the adversary to obtain additional (bounded) leakage from both the parties involved in the Test/challenge query. We also introduce the notion of *validity* of a test query reminiscent of the notion of freshness of a test session for (interactive) key exchange protocols. Finally, in the (valid) test query between two honest parties, the adversary has to distinguish the shared key from a random key.

### Our construction

The starting point of our construction is the NIKE scheme of Freire et al. [19]. However, we show that this scheme is insecure in the setting of leakage, even if a *single* bit of the secret key is allowed to leak. The construction breaks down at the exponentiation operation: if exponentiation is performed plainly, as in the original protocol, it may be completely insecure in the presence of leakage. A common countermeasure is the *masking* technique, where the secret key of the system is secret-shared using a multiplicative secret sharing scheme and the exponentiation is performed step-wise using each of these shares. However, such masking schemes do not achieve the optimal 1 – *o*(1) leakage rate and also require additional restrictions (assumptions) on the class of allowable leakage functions for arguing security. In particular, all these masking schemes are proven secure under the *only computation leaks information* (OCLI) axiom of Micali and Reyzin [30] or under the *split-state* assumption [24, 27, 40]. The OCLI axiom postulates that leakage only happens from the memory parts that are touched during the actual computation(s); the portions of memory not touched by the computation are not prone to leakage. In the split-state leakage model, it is assumed that the secret key is split into several disjoint parts and the adversary is allowed to obtain leakage independently from each of these parts. Neither model addresses leakage from the entire memory, which is the case for the bounded-memory leakage model.

So, a major challenge in our construction is to come up with a leakage-resilient exponentiation operation achieving a leakage rate of 1 – *o*(1) in the global memory leakage model. Our first idea is to use the techniques of [1, 6, 33] to perform leakage-resilient exponentiation. In particular, let 𝔾 be a group of prime order *p*, and let *g*_{1}, …, *g _{n}* be random elements of the group. Then the vector

**x** = (*x*_{1}, …, *x*_{n}) ∈ ℤ_{p}^{n} is called a *discrete log representation* of *y* ∈ 𝔾 with respect to *g*_{1}, …, *g*_{n} if *y* = *g*_{1}^{x_{1}} ⋯ *g*_{n}^{x_{n}}. Now *g*_{1}, …, *g*_{n} can be included in the public parameters *params*, the public key of a party can be set to *y*, and the secret key can be the discrete log representation **x** of *y*. In [1, 6, 33], it is also shown that, as long as the leakage on the representation is appropriately bounded, it is hard for an adversary to compute another representation **x**^{′} of *y*. So this achieves the leakage rate of 1 – *o*(1). However, it turns out that it becomes surprisingly difficult to incorporate this change in the construction of Freire et al. [19]. The main difficulty stems from the use of multiple generators and also the special structure of the public key.
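As a toy illustration of discrete log representations, the following Python sketch computes a key pair (**x**, *y*) over a small multiplicative group; the prime and parameters are illustrative only and far too small to be secure:

```python
import secrets

# Toy multiplicative group Z_p^* with a small prime; a real scheme
# would use a cryptographic group of prime order.
p = 1000003
n = 4
g = [secrets.randbelow(p - 2) + 2 for _ in range(n)]  # random generators g_1..g_n

# Secret key: a discrete log representation x = (x_1, ..., x_n).
x = [secrets.randbelow(p - 1) for _ in range(n)]

# Public key: y = prod_i g_i^{x_i} mod p.
y = 1
for gi, xi in zip(g, x):
    y = (y * pow(gi, xi, p)) % p
```

The point of [1, 6, 33] is that many distinct vectors **x** map to the same *y*, so bounded leakage on **x** leaves the adversary unable to pin down which representation the party holds.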

To this end, we use the idea of the twisted-pair PRF trick [20], carefully adapted to deal with leakage. The main idea behind the twisted-pair PRF trick is that it involves two PRFs *F* and *F*^{′} with reversing keys. The output of the twisting function is simply the output of the two PRFs combined in a special way. The guarantee is that the output of the twisting function is computationally indistinguishable from a uniform value over the same range. For our construction of a leakage-resilient twisted-pair PRF trick, we add strong randomness extractors as pre-processors to the original twisting technique [20]. The guarantee is that the output of our leakage-resilient twisted-pair PRF function is computationally indistinguishable from a uniform value over the same range, even if the adversary knows the key of one PRF in full and obtains bounded leakage from the key of the other PRF. For our construction of NIKE in the bounded leakage model, the strong randomness extractor (when appropriately parameterized) takes care of the (bounded) leakage: we extract randomness from the secret key of the NIKE and use the extracted value as the key of one of the PRFs. The output of the leakage-resilient twisting function is then used to perform secure exponentiation in the presence of leakage. By appropriately parameterizing the extractor, we obtain the optimal leakage rate of 1 – *o*(1). Combined with a bounded leakage-resilient CHF tolerating a leakage rate of 1 – *o*(1), we achieve a leakage-resilient NIKE in the bounded-memory leakage model with an overall leakage rate of 1 – *o*(1).
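The extract-then-PRF idea can be sketched as follows. This is an illustrative sketch, not the paper's exact construction: HMAC-SHA256 stands in (heuristically) for both the seeded extractor and the PRFs, and XOR is one illustrative way of combining the two PRF outputs; all function names are hypothetical.

```python
import hmac
import hashlib

def ext(seed: bytes, source: bytes) -> bytes:
    """Seeded extractor, heuristically instantiated with HMAC-SHA256.
    The seed must stay outside the adversary's view (leak-free storage)."""
    return hmac.new(seed, source, hashlib.sha256).digest()

def prf(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def twisted_prf(sk: bytes, seed: bytes, aux_key: bytes, label: bytes) -> bytes:
    """Leakage-resilient twisted-PRF sketch: extract a PRF key from the
    (possibly leaky) secret key, then run two PRFs with the roles of key
    and input reversed, and combine their outputs."""
    k = ext(seed, sk)                  # extracted key absorbs bounded leakage
    out1 = prf(k, aux_key)             # F keyed with the extracted value
    out2 = prf(aux_key, label + k)     # F' keyed with the other value (reversed)
    return bytes(a ^ b for a, b in zip(out1, out2))
```

Intuitively, even if the adversary fully knows `aux_key` and obtains bounded leakage on `sk`, the extractor output `k` retains enough entropy for `out1` to mask the combined output.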

The main drawback of our leakage-resilient NIKE construction is that it requires a *leak-free* hardware assumption. However, as argued before, such an assumption is not merely an artifact of our protocol, but is likely to be required for constructing any LR-NIKE protocol. Since we allow the leakage function to access all the public parameters of the system (and hence the public keys of parties), the adversary can directly leak from the shared key of the parties by encoding the shared key derivation function as the leakage function. Our protocol requires a *leak-free* component to store the seed of the extractor. This is required because the view of the adversary in our construction should be independent of the seed: otherwise, the adversary may leak from the extracted value, and the uniformity guarantee of the extractor no longer holds. Hence we cannot include the seed in the public key. On the other hand, to compute the shared key, each party needs to have access to the random seed. Hence we require that the seed is stored in a leak-free hardware component. We also note that, since extractors are information-theoretic gadgets, reusability of the seed is not permitted. Hence each party needs to store a short random seed corresponding to every other party in the leak-free hardware for establishing session keys with them. Minimizing this leak-free hardware assumption is an interesting and challenging problem we leave open. A leak-free hardware component assumption was also used in many prior works in leakage-resilient cryptography [18, 22, 25], although in the context of continual memory leakage. In particular, in [25], it is assumed that the leak-free component can produce random encryptions of fixed messages. In [22], it is assumed that there is a linear number of such leak-free components and each component is capable of sampling from a fixed polynomial-time computable distribution.
In [18], it is assumed that the leak-free component can sample two vectors from the underlying field such that their inner product is zero. In contrast, in our construction, we do not require the leak-free hardware to perform any expensive computation. We only require it to store several seeds of the extractor, which are typically short random strings.

### Comparison with Chen et al. [11, 12]

It is instructive to compare our construction of BLR-NIKE with the leakage-resilient AKE protocols of Chen et al. [11, 12], which are secure against “after-the-fact” bounded-memory [11] (resp. auxiliary input [12]) leakage attacks. The main idea of our BLR-NIKE protocol shares some technical similarities with [11, 12]: both works rely on an “extract-then-PRF” technique, where a randomness extractor is applied to the long-term secret key (in [11, 12], also to the ephemeral secret key) and then two PRFs with reversing keys are applied to the extracted values (the twisted PRF trick). Our NIKE protocol uses the framework of Freire et al. [19] together with the “extract-then-PRF” technique to tackle key leakage attacks. However, in this work, we consider before-the-fact leakage attacks, in contrast to [11, 12], which considered after-the-fact leakage at the cost of further restricting the leakage model to avoid the impossibility result related to after-the-fact leakage.

### Leakage-resilient CCA-2-secure PKE

As one of the central applications of leakage-resilient NIKE (LR-NIKE), we show how to construct leakage-resilient IND-CCA-2 (LR-IND-CCA-2) secure PKE generically from LR-NIKE in the bounded-memory leakage model. This yields a new approach to construct LR-IND-CCA-2-secure PKE schemes, departing completely from the hash-proof frameworks used in prior works. Our construction is practical and also achieves the optimal leakage rate of 1 – *o*(1).

### Our construction

Our generic transformation from an LR-NIKE to an LR-IND-CCA-2-secure PKE scheme essentially follows and adapts the ideas of Freire et al. [19] in the setting of leakage. The main idea behind the transformation is as follows: the public-secret key pair of the LR-IND-CCA-2-secure PKE scheme is the same as the public-secret key pair of the underlying LR-NIKE protocol. While encrypting, another public-secret key pair of the NIKE is sampled independently, and the shared key generation algorithm of the NIKE is run among the two key pairs yielding a shared key. This key is used as the encapsulation key of the underlying IND-CCA-2-secure key encapsulation mechanism (KEM), and the ciphertext is set to be the new sampled public key. Decryption is straightforward, and the decryptor can recover the same shared key by running the shared key generation algorithm with the original secret key and the new sampled public key. Now, from IND-CCA-2-KEM, one can easily get full-fledged IND-CCA-2-secure PKE using standard hybrid encryption techniques. Our transformation preserves the leakage rate in the sense that if the starting point of our construction is LR-NIKE with a leakage rate of 1 – *o*(1), then the LR-CCA-2-secure PKE constructed from it also enjoys the *same* leakage rate.
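Abstracting the NIKE behind an interface, the transformation above can be sketched as follows. For concreteness, this sketch instantiates the NIKE with plain (non-leakage-resilient) Diffie–Hellman over a toy group; the actual construction would plug in the LR-NIKE of this paper, and all names and parameters here are illustrative.

```python
import secrets
import hashlib

# Toy NIKE: classic Diffie-Hellman in Z_P^* (illustrative only; the
# paper's construction uses a pairing-based, leakage-resilient NIKE).
P = 2**127 - 1          # toy Mersenne-prime modulus (do not deploy)
G = 5

def nike_keygen():
    sk = secrets.randbelow(P - 2) + 1
    return pow(G, sk, P), sk

def nike_shared_key(sk: int, other_pk: int) -> bytes:
    shk = pow(other_pk, sk, P)
    return hashlib.sha256(shk.to_bytes(16, "big")).digest()

# Generic NIKE -> KEM transformation from the text:
def kem_encap(receiver_pk: int):
    eph_pk, eph_sk = nike_keygen()               # fresh NIKE key pair
    key = nike_shared_key(eph_sk, receiver_pk)   # encapsulation key
    return eph_pk, key                           # ciphertext = fresh public key

def kem_decap(receiver_sk: int, ciphertext_pk: int) -> bytes:
    return nike_shared_key(receiver_sk, ciphertext_pk)
```

The decryptor recovers the same key because the NIKE shared key is symmetric in the two key pairs; hybrid encryption on top of this KEM then yields full-fledged PKE.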

In Table 1, we show the comparison of our scheme with the state-of-the-art constructions of LR-IND-CCA secure PKE schemes in terms of both *computational* and *communication* complexity. We obtain these complexity figures by instantiating all of the compared schemes with the state-of-the-art constructions of the required underlying primitives. As we can see, the number of group elements in our ciphertext is *much smaller* than in the ciphertexts of the other schemes. Our scheme is also more efficient in the number of exponentiation and multiplication operations, hence improving the computational complexity of the state-of-the-art LR-IND-CCA secure PKE schemes by a significant margin. Note that we do require a constant number of pairing operations (to be precise, only 3) on the encryption side and also on the decryption side. According to Benhamouda et al. [7], a pairing is roughly three times slower than computing an exponentiation; therefore, each encryption and decryption costs roughly the same as nine exponentiations. Since we can achieve the optimal leakage rate (albeit using a leak-free hardware assumption), this additional computation cost is reasonable. In the table, *n* ∈ ℕ is the number of generators required for the construction. Also, [36] works over composite order groups of the form 𝔾 = 𝔾_{τ1} × 𝔾_{τ2}. Here 𝓣_{c} denotes the tag space in the encryption scheme of [36], and {0, 1}^{s} denotes the seed space of a strong randomness extractor. Lastly, we stress that, although our scheme is more efficient than those of Qin et al. [35, 36] in terms of computational cost and communication complexity and also achieves an optimal leakage rate, our scheme is *not* strictly superior to them because of the use of a leak-free hardware component, which is clearly a *strong assumption*.

Comparison among the LR-IND-CCA PKE schemes. Here DDH stands for the decisional Diffie–Hellman problem. RSI denotes the refined subgroup indistinguishability problem, and DBDH refers to the decisional bilinear Diffie–Hellman problem.

| Scheme | # exp. (KeyGen / Enc. / Dec.) | # mult. (KeyGen / Enc. / Dec.) | # pairings (KeyGen / Enc. / Dec.) | Size of ciphertext C | Leakage rate | Complexity assumptions | Additional assumptions |
|---|---|---|---|---|---|---|---|
| Qin and Liu [35] | n^{2} / (n^{2} + n + 2) / (n^{2} + 2n) | (n^{2} + n) / 2n^{2} / (2n^{2} + n) | NA / NA / NA | | o(1) | DDH | No |
| Qin and Liu [36] | (n^{2} + n + 1) / (3n + 2) / (3n + 1) | n / 2n / 2n | NA / NA / NA | 𝔾_{τ1} × {0, 1}^{m} × 𝔾^{n+1} × 𝓣_{c} | 1 – o(1) | RSI over composite order groups | No |
| This work | (n + 5) / (2n + 8) / (n + 3) | (n + 3) / (2n + 6) / (n + 3) | NA / 3 / 3 | | 1 – o(1) | DBDH | Leak-free component |

### Leakage-resilient AKE

We show how to obtain a generic construction of a leakage-resilient authenticated key exchange (LR-AKE) protocol starting from a leakage-resilient NIKE protocol. We formulate a new security model for LR-AKE protocols, which we call *bounded-memory before-the-fact leakage eCK* (BBFL-eCK) model. We then show a generic construction of BBFL-eCK-secure AKE protocol using LR-NIKE in the bounded-memory leakage setting.

### Our model

Our security model for LR-AKE is a strong security model which addresses (bounded) leakage from the entire memory, which is stronger than the “only computation leaks information” axiom [30]. We present an eCK-style [28] security model, suitably adjusted to the leakage setting.

### Our construction

We give the generic construction of leakage-resilient AKE from leakage-resilient NIKE in the bounded-memory leakage model. We adapt the construction of Bergsma et al. [8] to the setting of leakage. In particular, Bergsma et al. [8] showed a construction of AKE protocols from a standard NIKE protocol and an existentially unforgeable signature scheme. We replace the standard NIKE with our leakage-resilient NIKE and the existentially unforgeable signature scheme with a signature scheme that is existentially unforgeable under chosen message and leakage attacks [26]. We then show that the constructed AKE protocol is secure in our BBFL-eCK security model. The leakage rate of our construction is 1 – *o*(1) under an appropriate choice of parameters. We refer the reader to Table 2 for a more detailed comparison of leakage-resilient AKE protocols.

Comparison among the LR-AKE schemes. Here B/C stands for either bounded or continuous memory leakage; BFL/AFL denotes the resilience of the AKE protocols to before-the-fact/after-the-fact leakage attacks. The shorthands DDH, HPS, CR, *π*-PRF and Ext stand for the decisional Diffie–Hellman problem, hash proof systems, collision-resistant hash function, pairwise-independent PRF families and strong extractors, respectively. CPLA-2 (resp. CCLA-2) secure PKE denotes an adaptively chosen plaintext (resp. ciphertext) after-the-fact leakage secure public-key cryptosystem. *ε*-PG-IND refers to the pair-generation indistinguishable PKE scheme; ODH and KDF refer to the oracle Diffie–Hellman and secure key derivation functions, respectively. BLR-CKS-heavy stands for bounded leakage-resilient “Cash–Kiltz–Shoup”-heavy model (BLR analogue of the CKS-heavy model).

| Scheme | Leakage model (B/C) | BFL/AFL | Complexity assumptions | Leakage rate |
|---|---|---|---|---|
| Moriyama and Okamoto [32] | B | BFL | DDH, HPS, CR, π-PRF, Ext | 1 – o(1) |
| Alawatugoda, Boyd and Stebila [3] | C | AFL | CCLA-2 secure PKE, PRF | sub-optimal* |
| Alawatugoda, Stebila and Boyd [4] | B/C | AFL | DDH, ODH, ε-PG-IND and CPLA-2 secure PKE, UFCMLA-secure signature, secure KDF, PRF | min{CPLA-2 PKE, UFCMLA-sig} |
| Chen, Mu, Yang, Susilo and Guo [11] | B | AFL | DDH, HPS, CR, π-PRF, Ext | min{HPS, Ext} |
| This work | B | BFL | BLR-CKS-heavy NIKE, UFCMLA-secure signature | 1 – o(1) |

### Leakage-resilient LLKE

We show an extremely important practical application of leakage-resilient NIKE protocols by studying the leakage resiliency of low-latency key exchange (LLKE) protocols. In this paper, we give a suitable leakage security model for LLKE protocols, which we call the bounded-memory leakage LLKE-ma (BL-LLKE-ma) model, where “ma” stands for mutual authentication. We then present a generic construction of leakage-resilient LLKE (LR-LLKE) based on our LR-NIKE protocol in the bounded-memory leakage setting.

### Our model

The security of (standard) LLKE protocols has been recently analyzed by Hale et al. [23] under mutual authentication of the client as well as the server. We give a leakage analogue of their security model. Our model allows the adversary to activate arbitrary protocol sessions between the clients and servers. Besides, the adversary can obtain the temporary or the main keys of a session of both the clients and the server, obtain the long-term secret keys of clients and servers and also obtain bounded leakage from both the client and the server involved in the Test query. Finally, in the test query (satisfying some freshness/validity conditions), the adversary has to distinguish the requested key from a random key.

### Our construction

Adapting the construction of Hale et al. [23], we give a generic construction of a leakage-resilient LLKE protocol from a leakage-resilient NIKE protocol. In particular, we require an LR-NIKE scheme and a UF-CMLA-secure signature scheme; plugging them in appropriately, we obtain a leakage-resilient LLKE protocol. Moreover, the leakage rate enjoyed by our LLKE protocol is also optimal, i.e., 1 – *o*(1).

### Alternative thought

It is possible to compare our LR-NIKE protocol (in the bounded-memory leakage model) with an alternative simpler construction, which is essentially an adaptation of the idea which we made explicit only for the LR-CCA-secure PKE scheme.

Essentially, the idea is as follows: sample a random string *r*, and use a (seeded) randomness extractor to extract another string *r*^{′}, which is then used as the randomness of the key generation function. The seed *s* of the extractor is stored in the leak-free hardware component. The intuition behind the security is that, since the leakage from *r* is bounded and the seed *s* is kept outside the view of the adversary, the extracted value *r*^{′} should look random to the adversary (the parameters need to be appropriately set to argue this). While we have made this explicit only for PKE, the above simple idea also works for achieving leakage resilience of NIKE protocols starting from any NIKE protocol secure in the CKS-heavy model. In fact, our construction of NIKE is exactly along this line: we start with the NIKE protocol of Freire et al. [19] and show that it can be made (bounded) leakage-resilient using the above trick. The reason we choose to give a concrete NIKE protocol is that the protocol of Freire et al. [19] is the only existing NIKE protocol secure in the CKS-heavy model (the base model we consider in our paper as well) in the standard model. Hence we start with the protocol of Freire et al. [19] and explicitly show how the above idea bootstraps its security in the bounded leakage setting.

For other primitives like AKE and LLKE, a similar idea works, and we can construct leakage-resilient versions of these primitives in a stand-alone manner using the above idea (given a leak-free hardware assumption). However, the focus of the paper is to present LR-NIKE as a unified paradigm for constructing other leakage-resilient primitives like PKE, AKE and LLKE. In particular, a construction of continuous leakage-resilient NIKE will directly translate into a construction of continuous leakage-resilient PKE, AKE, LLKE via our transformations (which would otherwise not be possible using the above simpler transformations). Also, any improvement in the construction of NIKE will directly impact the efficiency of the corresponding PKE, AKE and LLKE schemes.

## 2 Preliminaries

### 2.1 Notations

For *a*, *b* ∈ ℝ, we let [*a*, *b*] = {*x* ∈ ℝ : *a* ≤ *x* ≤ *b*}; for *a* ∈ ℕ, we let [*a*] = {1, 2, …, *a*}. If *x* is a string, we denote by |*x*| the length of *x*. When *x* is chosen randomly from a set 𝓧, we write *x* ←_{$} 𝓧. If *A* is an algorithm, we write *y* ← *A*(*x*) to denote a run of *A* on input *x* with output *y*; if *A* is randomized, then *y* is a random variable, and *A*(*x*; *r*) denotes a run of *A* on input *x* and randomness *r*. We denote the security parameter throughout by *κ*. An algorithm *A* is probabilistic polynomial-time (PPT) if *A* is randomized and, for any input *x*, *r* ∈ {0, 1}^{∗}, the computation of *A*(*x*; *r*) terminates in at most poly(|*x*|) steps. Let 𝔾 be a group of prime order *p* such that log_{2}(*p*) ≥ *κ*, and let *g* be a generator of 𝔾. For a (column/row) vector *C* = (*C*_{1}, …, *C*_{n}) ∈ ℤ_{p}^{n}, we denote by *g*^{C} the vector (*g*^{C_{1}}, …, *g*^{C_{n}}) ∈ 𝔾^{n}; furthermore, for a vector *D* = (*D*_{1}, …, *D*_{n}) ∈ ℤ_{p}^{n}, we denote by (*g*^{C})^{D} the group element ∏_{i=1}^{n} (*g*^{C_{i}})^{D_{i}} = *g*^{⟨C,D⟩}. Two random variables *X* and *Y* are *ε*-close statistically if the statistical distance between them is at most *ε*, denoted *X* ≈_{ε} *Y*; if *X* and *Y* are computationally indistinguishable, we write *X* ≈_{c} *Y*. We refer to Appendix A.1 for the definitions of min-entropy, average conditional min-entropy, randomness extractors and basic results related to them.
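For finite distributions, the statistical distance underlying the ≈_{ε} notation can be computed directly; the following small sketch (function name illustrative) shows the definition in code:

```python
def statistical_distance(px: dict, py: dict) -> float:
    """SD(X, Y) = (1/2) * sum_z |Pr[X = z] - Pr[Y = z]|,
    taken over the union of the two supports."""
    support = set(px) | set(py)
    return 0.5 * sum(abs(px.get(z, 0.0) - py.get(z, 0.0)) for z in support)

# A uniform bit and a slightly biased bit are 0.1-close:
uniform = {0: 0.5, 1: 0.5}
biased = {0: 0.6, 1: 0.4}
```

Here `statistical_distance(uniform, biased)` evaluates the defining sum; identical distributions have distance 0.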

### 2.2 Underlying primitives for our constructions

For our construction of NIKE in the bounded-memory leakage setting, we require a *bounded leakage-resilient chameleon hash function* (BLR-CHF). Leakage-resilient chameleon hash functions (LR-CHF) postulate that it is hard to find collisions even when the adversary learns bounded leakage from the secret key/trapdoor. We refer the reader to Appendix A.2 for the formal definition of LR-CHF. We also need a standard pseudo-random function (PRF) for this construction (see Appendix A.3). A function *F* is an (*ε*_{prf}, *s*_{prf}, *q*_{prf})-secure PRF if no adversary of size *s*_{prf} making at most *q*_{prf} queries can distinguish *F* (instantiated with a random key) from a uniformly random function with advantage better than *ε*_{prf}.

For our construction of leakage-resilient AKE and leakage-resilient LLKE protocols, we also need an *existentially unforgeable signature secure against chosen message and leakage attacks* (UF-CMLA). We refer the reader to Appendix A.4 for the formal definition. In all these definitions, the leakage functions can be arbitrarily and adaptively chosen by the adversary, with the only restriction that the output size of those functions are bounded by some leakage parameter.

## 3 Assumptions in a bilinear group

In this paper, we consider *type-2* pairings over appropriate elliptic curve groups. Let 𝓖_{2} be a *type-2 pairing parameter generation* algorithm. It takes as input the security parameter 1^{κ} and outputs *gk* = (𝔾_{1}, 𝔾_{2}, 𝔾_{T}, *g*_{1}, *g*_{2}, *p*, *e*, *ψ*) such that *p* is a prime, (𝔾_{1}, 𝔾_{2}, 𝔾_{T}) are a description of multiplicative cyclic groups of the same order *p*, *g*_{1}, *g*_{2} are generators of 𝔾_{1} and 𝔾_{2}, respectively, *e*: 𝔾_{1} × 𝔾_{2} → 𝔾_{T} is a non-degenerate efficiently computable bilinear map, *ψ* is an efficiently computable isomorphism *ψ*: 𝔾_{2} → 𝔾_{1} and *g*_{1} = *ψ*(*g*_{2}).
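For intuition about the bilinearity and non-degeneracy conditions, the following toy Python model represents each group element purely by its exponent modulo a small prime and defines the pairing on exponents; it is a symbolic stand-in for checking the algebraic laws, not an elliptic-curve pairing.

```python
# Toy model (illustration only): g1^x, g2^y, gT^z are represented by the
# exponents x, y, z mod a small prime order P, and the bilinear map is
# e(g1^x, g2^y) = e(g1, g2)^{x*y}.
P = 101  # toy prime group order

def pairing(x: int, y: int) -> int:
    # e(g1^x, g2^y) = gT^{x*y}
    return (x * y) % P

def psi(y: int) -> int:
    # the isomorphism psi: G2 -> G1 maps g2^y to g1^y
    return y

# bilinearity: e(g1^{a*x}, g2^y) = e(g1^x, g2^y)^a
a, x, y = 5, 7, 11
assert pairing(a * x % P, y) == (a * pairing(x, y)) % P
# non-degeneracy: e(g1, g2) is not the identity of GT
assert pairing(1, 1) != 0
# g1 = psi(g2)
assert psi(1) == 1
```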

### Decisional bilinear assumption over type-2 pairings (DBDH-2)

Let *gk* = (𝔾_{1}, 𝔾_{2}, 𝔾_{T}, *g*_{1}, *g*_{2}, *p*, *e*, *ψ*) be the output of the parameter generation algorithm 𝓖_{2} as above. We consider the following version of the DBDH-2 problem introduced by Galindo [21] and also used in [19]. Formally, we say that the DBDH-2 assumption holds for type-2 pairings if, for every PPT adversary 𝓐, the advantage

Adv^{DBDH-2}_{𝓖_2}(𝓐) = ∣Pr[𝓐(*gk*, *g*_{2}^{a}, *g*_{2}^{b}, *g*_{2}^{c}, *e*(*g*_{1}, *g*_{2})^{abc}) = 1] − Pr[𝓐(*gk*, *g*_{2}^{a}, *g*_{2}^{b}, *g*_{2}^{c}, *T*) = 1]∣

is negligible in *κ*, where *a*, *b*, *c* ← ℤ_{p} and *T* ← 𝔾_{T}, and where the probability is taken over the random choices of the algorithm 𝓖_{2} and the internal coin tosses of the algorithm 𝓐.

## 4 Leakage-resilient non-interactive key exchange

In this section, we present the syntax of leakage-resilient non-interactive key exchange (LR-NIKE) protocols. We denote by 𝓟𝓚, 𝓢𝓚 and 𝓢𝓗𝓚 the space of public keys, secret keys and shared keys of LR-NIKE, respectively. When we write *pk*_{i} for the *i*-th public key, we mean that *pk*_{i} is associated with the user with identifier ID_{i} ∈ 𝓘𝓓𝓢, where 𝓘𝓓𝓢 denotes the identity space^{4}. Formally, an LR-NIKE scheme, LR-NIKE, consists of a tuple of algorithms (NIKEcommon_setup, NIKEgen, NIKEkey) with the functionalities specified below.

- NIKEcommon_setup(1^{κ}, *λ*(*κ*)): The setup algorithm takes as input the security parameter *κ* and the leakage bound *λ*(*κ*) that can be tolerated by the NIKE scheme and outputs a set of global parameters *params* of the system. We sometimes drop *κ* and write *λ* instead of *λ*(*κ*) when *κ* is clear from context.
- NIKEgen(1^{κ}, *params*): The key generation algorithm is probabilistic and can be executed independently by all the users. It takes as input the security parameter *κ* and *params* and outputs a public/secret key pair (*pk*, *sk*) ∈ 𝓟𝓚 × 𝓢𝓚.
- NIKEkey(*pk*_{i}, *sk*_{j}): The shared key generation algorithm takes the public key of user ID_{i}, namely *pk*_{i}, and the secret key of user ID_{j}, namely *sk*_{j}, and outputs a shared key *shk*_{ij} ∈ 𝓢𝓗𝓚 for the two keys, or a failure symbol ⊥ if *i* = *j*, i.e., if *sk*_{j} is the secret key corresponding to *pk*_{i}.

The *correctness* requirement states that, for any two key pairs (*pk*_{i}, *sk*_{i}) and (*pk*_{j}, *sk*_{j}), the shared keys computed by the two parties should be identical, i.e., NIKEkey(*pk*_{i}, *sk*_{j}) = NIKEkey(*pk*_{j}, *sk*_{i}).

### 4.1 Bounded leakage-resilient non-interactive key exchange (BLR-CKS-heavy) security model

In this section, we present the formal security model for leakage-resilient non-interactive key exchange (LR-NIKE). Before defining our security model for LR-NIKE protocols, we present an impossibility result related to LR-NIKE protocols in Section 4.1.1. In particular, we show that if the leakage function is an arbitrary polynomial-time computable function having access to the public parameters, we cannot hope to construct a secure LR-NIKE protocol, even in the bounded memory leakage model. In Section 4.1.2, we show how to circumvent this impossibility result by enforcing some restrictions on the class of allowable leakage functions in our BLR-CKS-heavy security model for NIKE.

### 4.1.1 Impossibility of LR-NIKE protocols

In this section, we present an impossibility result for constructing LR-NIKE protocols, even in the bounded leakage model. Then we suitably adapt our security model to circumvent this impossibility result. Let us assume that the NIKE protocol is run between two users, say, Alice and Bob. The key pairs of Alice and Bob are (*pk*_{A}, *sk*_{A}) and (*pk*_{B}, *sk*_{B}), respectively. In the leakage setting, the adversary may ask for bounded leakage from the secret keys of both Alice and Bob. Note that the shared key between Alice and Bob is a deterministic function of their own secret keys and the public key of the other party, namely, for Alice (for Bob), the shared key *shk*_{AB} is derived as NIKEkey(*pk*_{B}, *sk*_{A}) (as NIKEkey(*pk*_{A}, *sk*_{B})). Now the adversary can set the leakage function for Alice as *L*( ⋅ ) = NIKEkey(*pk*_{B}, ⋅ ), i.e., the adversary can specify the leakage function as the shared key derivation function NIKEkey with the public key of the other party hard-coded in it. This allows the adversary to directly leak from the shared key *shk*_{AB} established between Alice and Bob. If the adversary can leak sufficiently many bits of *shk*_{AB}, then, with very high probability, the adversary can distinguish *shk*_{AB} from a random key.
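The attack can be sketched concretely. The toy NIKE below is a plain Diffie-Hellman scheme with hypothetical (insecure) parameters; the point is only that a leakage function with *pk*_{B} hard-coded leaks bits of *shk*_{AB} directly:

```python
# Sketch of the impossibility argument over a toy Diffie-Hellman NIKE
# (hypothetical parameters, illustration only): the adversary's leakage
# function computes NIKEkey(pk_B, .) on Alice's secret key, so every
# leaked bit is a bit of the shared key shk_AB itself.
P, G = 2**61 - 1, 3  # toy prime-order setting; NOT secure parameters

def nike_key(pk_other: int, sk_own: int) -> int:
    return pow(pk_other, sk_own, P)   # shk = g^{ab} mod p

sk_A, sk_B = 123456789, 987654321
pk_A, pk_B = pow(G, sk_A, P), pow(G, sk_B, P)

# Adversary's leakage function for Alice: NIKEkey with pk_B hard-coded.
def leakage(sk: int, i: int) -> int:
    return (nike_key(pk_B, sk) >> i) & 1   # leaks bit i of shk_AB

shk = nike_key(pk_A, sk_B)                              # the real shared key
leaked = [leakage(sk_A, i) for i in range(16)]          # 16 one-bit queries
assert leaked == [(shk >> i) & 1 for i in range(16)]    # bits of the real key
```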

The above attack demonstrates that if we allow the leakage functions to be arbitrary polynomial-time computable functions with access to all the public parameters (and hence to the public keys of all the parties) of the system, it is *impossible* to have a secure instantiation of an LR-NIKE protocol. Hence we need to enforce some meaningful restrictions on the leakage functions. In particular, one may assume that the leakage functions are not allowed to access the public parameters of the system. This can be enforced by having the adversary specify the set of leakage functions before receiving the public parameters, the so-called “*non-adaptive*” leakage model. However, this is necessarily a much more restrictive model than the *adaptive* leakage model, where the leakage may depend on the parameters of the system. To circumvent the impossibility result in the adaptive leakage model, we incorporate some assumptions in our security model for LR-NIKE. In particular, we assume that every party participating in a NIKE protocol is equipped with a *leak-free* hardware component which can be used to store a very small part of their public key. The leakage function can access the public keys of all the parties, except the components stored in the leak-free hardware. This essentially provides a way to shield some part of the public key from the view of the adversary. The information that is stored in the leak-free hardware is implementation-specific. However, we stress that the leak-free hardware should be used in a minimal way in any LR-NIKE protocol^{5}. Also, note that if we store the entire public key in the leak-free hardware, then our model becomes essentially equivalent to the non-adaptive leakage model. Hence our security model for LR-NIKE can be seen as interpolating between the non-adaptive and the adaptive leakage models, depending on the information that is stored in the leak-free hardware.

Lastly, we note that the above impossibility results do not carry forward to the setting of (interactive) AKE protocols. This is because the shared key between two parties in an AKE protocol does not only depend on the long-term keys of the parties, but also depends on the ephemeral secret and public keys. Hence, in this case, the leakage functions may be allowed to access the public keys of all the parties.

### 4.1.2 BLR-CKS-heavy security model for LR-NIKE

We model every party (including the adversary) as a probabilistic polynomial-time Turing machine (PPTM). In addition, we assume that all the legitimate parties involved in the protocol have access to an oracle tape, one per party. When required, a party can enter a query phase and look up the response on its corresponding oracle tape. After obtaining the response, the party continues with the execution of the protocol.^{6} Moreover, we assume that the adversary cannot read the contents of the oracle tapes. In reality, the oracle tape models the leak-free hardware device available to each party. Equipped with this, we next present our BLR-CKS-heavy model for NIKE.

Our security model for LR-NIKE can be seen as a generalization of the CKS-heavy security model introduced by Freire et al. [19] that appropriately models key leakage attacks. We assume that the adversary is *not* allowed to register the same public key more than once. In practice, this can easily be ensured by requiring the certification authority (CA) to check for consistency whenever an individual attempts to register a public key in the system. In the leakage-free scenario, this setting was also considered in the work of Freire et al. [19], who called it the Simplified (S)-NIKE model. Our model allows the adversary to register *arbitrary* public keys into the system, provided they are distinct from each other and from the public keys of the honestly registered parties. The adversary can also issue Extract queries to learn the private keys corresponding to the honestly generated public keys. The adversary can also learn the shared key between two honest parties (via the HonestReveal query), as long as both of them are not involved in the challenge/Test query. We also allow the adversary to learn the shared key between an honest party and a corrupt party (via the CorruptReveal query). Apart from the above queries, the BLR-CKS-heavy model allows the adversary to obtain a *bounded* amount of leakage from the secret/private keys of the parties. Finally, in the Test query, the adversary has to distinguish the real shared key between two honest parties from a random shared key. To prevent trivial wins, we enforce some natural restrictions on the Test query, which we call the *validity* conditions. We also note that, once the Test query is asked by the adversary, he is not allowed to make further leakage queries on the corresponding parties involved in the Test query (modeling *before-the-fact* leakage).

(Extract query vs. Leakage queries). By issuing the Extract query, the adversary can learn the secret key of a party entirely. Separately, by issuing leakage queries the adversary gets a bounded amount of leakage from the secret key. It may seem paradoxical to consider both Extract as well as Leakage queries at the same time. However, there are good reasons to consider both.

A non-leakage version of the BLR-CKS-heavy model allows the adversary to corrupt honest parties to obtain the corresponding secret keys. However, it disallows the adversary from corrupting any of the parties involved in the Test query. This is a natural restriction, since corrupting either of the parties involved in the test session would allow the adversary to reconstruct the shared key of the test session and hence win the security game with certainty. But note that, in our BLR-CKS-heavy model, the adversary can also obtain bounded leakage from the secret keys of the parties involved in the test session, in *addition* to corrupting other (non-test) honest parties in the system. Hence the BLR-CKS-heavy model allows the adversary to obtain more information than the non-leakage version of the BLR-CKS-heavy model, namely the CKS-heavy model [19], and is therefore necessarily *stronger* than the CKS-heavy model.

### 4.1.3 Adversarial powers

Our BLR-CKS-heavy security model is stated in terms of a security game between a challenger 𝓒 and an adversary 𝓐. The adversary 𝓐 is modeled as a PPTM. We denote by Π_{U,V} the protocol run between principal *U* and intended principal *V*. Initially, the challenger 𝓒 runs the NIKEcommon_setup algorithm to output the set of public parameters *params*, and gives *params* to 𝓐. The challenger 𝓒 also chooses a random bit *b* at the beginning of the security game and answers all the legitimate queries of 𝓐 until 𝓐 outputs a bit *b*′. The adversary 𝓐 is allowed to ask the following queries.

- RegisterHonest(1^{κ}, *params*): This query allows the adversary to register honest parties in the system. The challenger runs the NIKEgen algorithm to generate a key pair (*pk*_{U}, *sk*_{U}) and records the tuple (*honest*, *pk*_{U}, *sk*_{U}). It then returns the public key *pk*_{U} to 𝓐. We refer to the parties registered via this query as *honest* parties.
- RegisterCorrupt(*pk*_{U}): This query allows the adversary to register arbitrary corrupt parties in the system. Here 𝓐 supplies a public key *pk*_{U}. The challenger records the tuple (*corrupt*, *pk*_{U}, ⊥). We demand that all the public keys involved in this query are distinct from one another and from the honestly generated public keys from above. The parties registered via this query are referred to as *corrupt*.
- Extract(*pk*_{U}): In this query, the adversary 𝓐 supplies the public key *pk*_{U} of an honest party. The challenger looks up the corresponding tuple (*honest*, *pk*_{U}, *sk*_{U}) and returns the secret key *sk*_{U} to 𝓐.
- Reveal(*pk*_{U}, *pk*_{V}): This query can be categorized into two types – HonestReveal and CorruptReveal queries. Here the adversary supplies a pair of public keys *pk*_{U} and *pk*_{V}. In the HonestReveal query, both *pk*_{U} and *pk*_{V} are honestly registered, i.e., both of them correspond to honest parties; whereas in the CorruptReveal query, one of the public keys is registered as honest while the other is registered as corrupt. The challenger runs the NIKEkey algorithm using the secret key of the honest party (in case of the HonestReveal query, using the secret key of any one of the parties) and the public key of the other party, and returns the result to 𝓐.
- Leakage: In the BLR-CKS-heavy security model, the total amount of leakage from the secret key of the underlying cryptographic primitives is bounded by the leakage parameter *λ* = *λ*(*κ*). Here the adversary 𝓐 supplies the description of an arbitrary polynomial-time computable function *f*_{i} ∈ 𝓕 and a public key *pk*. The challenger computes *f*_{i}(*sk*), where *sk* is the secret key corresponding to *pk*, and returns the output to 𝓐. The class 𝓕 = {*f*_{i}}_{i} of leakage functions is defined as *f*_{i} : {0, 1}^{*} → {0, 1}^{λ_i(κ)}, where *λ*_{i}(*κ*) < *λ*(*κ*). Secondly, the functions *f*_{i} cannot take as input the value *f*(*pk*) that is stored in a leak-free hardware component, where *f* is a function of the public key *pk*.^{7} The adversary 𝓐 can specify multiple such leakage functions as long as the leakage bound is not violated, i.e., ∑_{i} ∣*f*_{i}(*sk*)∣ ≤ *λ*(*κ*) and *f*_{i} ∈ 𝓕. Note that 𝓐 can obtain *λ* bits of information/leakage from the secret key of each of the honest parties, *including* those involved in the Test queries.
- Test(*pk*_{U}, *pk*_{V}): Here 𝓐 supplies two distinct public keys *pk*_{U} and *pk*_{V} that were both registered as *honest*. If *pk*_{U} = *pk*_{V}, the challenger aborts and returns ⊥. Otherwise, it uses the bit *b* to answer the query. If *b* = 0, the challenger runs the NIKEkey algorithm using the public key of one party, say *pk*_{U}, and the private key of the other party, *sk*_{V}, and returns the result to 𝓐. If *b* = 1, the challenger samples a random shared key from 𝓢𝓗𝓚 and returns that to 𝓐.
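The challenger's side of the Leakage query amounts to simple budget bookkeeping. Below is a minimal sketch, with a hypothetical byte-string secret key and an illustrative bit-truncation of the leakage output; class and method names are our own, not from the model.

```python
# Sketch of the challenger's bookkeeping for Leakage queries: the total
# output length charged against a party's secret key must stay within λ(κ).
LAMBDA = 128  # leakage bound λ(κ) in bits (hypothetical choice)

class LeakageOracle:
    def __init__(self, sk: bytes, bound: int = LAMBDA):
        self.sk, self.remaining = sk, bound

    def leak(self, f, out_bits: int):
        """Answer f(sk), but only if out_bits more bits fit in the λ-budget."""
        if out_bits > self.remaining:
            raise PermissionError("leakage bound λ exceeded")
        self.remaining -= out_bits
        # truncate the adversary's function output to out_bits bits
        return f(self.sk) & ((1 << out_bits) - 1)

oracle = LeakageOracle(sk=b"\xab" * 32)
bit = oracle.leak(lambda sk: sk[0], 1)   # a one-bit leakage query
assert oracle.remaining == LAMBDA - 1    # budget decreases accordingly
```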

𝓐’s queries may be made *adaptively* and are *arbitrary* in number. However, to prevent trivial wins, the adversary should not be allowed to make certain queries to the parties involved in the Test query. We model this by requiring the Test query to be *valid*. We next give the definition of validity in our BLR-CKS-heavy model (see Definition 4.2).

(*λ*-BLR-CKS-heavy validity). We say that the Test query Π_{U,V} between two parties *U* and *V* with public/secret key pairs (*pk*_{U}, *sk*_{U}) and (*pk*_{V}, *sk*_{V}), respectively, is *valid* in the *BLR*-*CKS*-*heavy* model if the following conditions hold.

- The adversary 𝓐 is not allowed to ask the Extract(*pk*_{U}) or Extract(*pk*_{V}) queries.
- The adversary 𝓐 is not allowed to ask the HonestReveal(*pk*_{U}, *pk*_{V}) or HonestReveal(*pk*_{V}, *pk*_{U}) queries.
- The total output length of all the leakage queries by 𝓐 to each party involved in the Test query, i.e., *U* and *V*, is at most *λ*(*κ*), i.e., ∑_{i} ∣*f*_{i}(*sk*_{U})∣ ≤ *λ*(*κ*) and ∑_{i} ∣*f*_{i}(*sk*_{V})∣ ≤ *λ*(*κ*), and we require that *f*_{i} ∈ 𝓕, where 𝓕 is as defined above.

### 4.1.4 Security game and security definition

(BLR-CKS-heavy security game). The security of a NIKE protocol in the *BLR*-*CKS*-*heavy* model is defined using the following security game, which is played by a PPT adversary 𝓐 against the protocol challenger 𝓒.

- *Stage* 1: The challenger 𝓒 runs the NIKEcommon_setup algorithm to output the global parameters *params* and returns them to 𝓐.
- *Stage* 2: 𝓐 may ask any number of RegisterHonest, RegisterCorrupt, Extract, HonestReveal, CorruptReveal, and Leakage queries adaptively.
- *Stage* 3: At any point of the game, 𝓐 may ask a Test query that is *λ*-*BLR*-*CKS*-*heavy valid*. The challenger uses its random bit *b* to respond to this query. If *b* = 0, the actual shared key between the pair of parties involved in the Test query is returned to 𝓐. If *b* = 1, the challenger samples a random shared key from 𝓢𝓗𝓚, records it for later and returns that to 𝓐.
- *Stage* 4: 𝓐 may continue asking RegisterHonest, RegisterCorrupt, Extract, HonestReveal, CorruptReveal and Leakage queries adaptively, provided the Test query remains *valid*.
- *Stage* 5: At some point, 𝓐 outputs a bit *b*′ ∈ {0, 1}, which is its guess of the value *b*. Then 𝓐 wins if *b*′ = *b*.
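The five stages reduce, at the end, to comparing *b*′ with *b*. The following minimal Monte Carlo harness (illustrative only; the "adversary" here just guesses at random) shows how the advantage ∣Pr[*b*′ = *b*] − 1/2∣ would be estimated empirically:

```python
# Toy harness for the bit-guessing game (illustration only): the challenger
# draws a bit b, the adversary outputs a guess b', and we estimate the
# advantage |Pr[b' = b] - 1/2|. A guessing adversary should hover near 0.
import random

def play_game(adversary, trials=100_000, seed=1):
    rng = random.Random(seed)
    wins = sum(adversary(rng) == rng.randrange(2) for _ in range(trials))
    return abs(wins / trials - 0.5)

adv = play_game(lambda rng: rng.randrange(2))  # random-guessing adversary
assert adv < 0.01  # (near-)zero advantage, as expected
```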

Let Succ_{𝓐} denote the event that 𝓐 wins the above security game (Definition 4.3).

(BLR-CKS-heavy security). Let *q*_{H}, *q*_{C}, *q*_{E}, *q*_{HR} and *q*_{CR} denote the number of RegisterHonest, RegisterCorrupt, Extract, HonestReveal and CorruptReveal queries, respectively. A NIKE protocol Π is said to be *BLR*-*CKS*-*heavy* secure if no PPT algorithm 𝓐 can win the above *BLR*-*CKS*-*heavy* security game with non-negligible advantage, where the advantage of an adversary 𝓐 is defined as Adv^{BLR-CKS-heavy}_{Π}(𝓐) = ∣Pr[Succ_{𝓐}] − 1/2∣.
We remark that our BLR-CKS-heavy security model for NIKE can be generalized in a straightforward manner to incorporate continual memory leakage (CML) attacks. However, we do not give our security model for NIKE in the CML setting, since we mainly focus on the construction of BLR-NIKE in this work. We note that a recent work [10] already solved the open problem of constructing LR-NIKE in the CML setting. However, it assumes additional restrictions on the leakage model, namely a *split*-*state* model (where the secret key is split into multiple parts and it is assumed that the adversary can leak from these parts, but only in an independent manner), and it also does not achieve the optimal leakage rate (i.e. 1 − *o*(1)). We leave the construction of LR-NIKE in the CML setting in the non-split-state model achieving the optimal leakage rate as an exciting open problem.

### 4.2 Constructions of leakage-resilient non-interactive key exchange

In this section, we show our construction of leakage-resilient NIKE in the bounded-memory leakage model. We show that the pairing-based NIKE protocol of Freire et al. [19] (in the standard model), which is secure in the non-leakage setting, is in fact insecure in the bounded memory leakage model, even if the adversary obtains a single bit of leakage on the secret key of the parties. This is illustrated in Appendix C.

### 4.2.1 Protocol BLR-NIKE: Construction of NIKE in the bounded-memory leakage model

Table 3 shows our construction of NIKE in the bounded-memory leakage model. The starting point of our construction is the NIKE protocol of [19]. Let 𝓖_{2} be a type-2 pairing parameter generation algorithm, i.e., it outputs *gk* = (𝔾_{1}, 𝔾_{2}, 𝔾_{T}, *g*_{1}, *g*_{2}, *p*, *e*, *ψ*). Let ChamH_{hk} : {0, 1}^{*} × 𝓡_{cham} → ℤ_{p} be a (bounded) leakage-resilient chameleon hash function tolerating leakage up to the bound *λ*(*κ*) (denoted as *λ*-LR-CHF), indexed with the evaluation/hashing key *hk*, where 𝓡_{cham} denotes the randomness space of the hash function. Also, let *F* : {0, 1}^{ℓ(κ)} × ℤ_{p} → ℤ_{p} and *F*′ : ℤ_{p} × {0, 1}^{κ} → ℤ_{p} be (*ε*_{prf}, *s*_{prf}, *q*_{prf})-secure and (*ε*′_{prf}, *s*′_{prf}, *q*′_{prf})-secure PRFs, respectively, and let Ext : ℤ_{p} × {0, 1}^{s} → {0, 1}^{ℓ(κ)} be an average-case (*v*, *ε*)-extractor, with *v* ≪ log *p* and seed length *s* = *ω*(log *κ*). Namely, it has log *p* bits of input, an *ω*(log *κ*)-bit seed and *ℓ*(*κ*)-bit outputs, and, for a random seed and an input with *v* bits of min-entropy, the output is *ε*-close to a uniform *ℓ*(*κ*)-bit string. We can set the parameters appropriately to achieve this.

LR-NIKE protocol in the bounded leakage model (BLR-NIKE).

Party ID_{A} | Party ID_{B} |
---|---|
**NIKEcommon_setup(1^{κ}, *λ*)** | |
*gk* ← 𝓖_{2}(1^{κ}), where *gk* = (𝔾_{1}, 𝔾_{2}, 𝔾_{T}, *g*_{1}, *g*_{2}, *p*, *e*, *ψ*); | |
*α*, *β*, *γ*, *δ* ← 𝔾_{1}; | |
(*hk*, *ck*) ← Cham.KeyGen(1^{κ}, *λ*); | |
*params* := (*gk*, *α*, *β*, *γ*, *δ*, *hk*); | |
return *params* | |
**NIKEgen(1^{κ}, *params*)** | |
*x*_{A} ← ℤ_{p}; *r*_{A} ← 𝓡_{cham}; | *x*_{B} ← ℤ_{p}; *r*_{B} ← 𝓡_{cham}; |
sample *s*_{A} ← {0, 1}^{s} and store *s*_{A} in the leak-free component; | sample *s*_{B} ← {0, 1}^{s} and store *s*_{B} in the leak-free component; |
*k*_{A} ← Ext(*x*_{A}, *s*_{A}); | *k*_{B} ← Ext(*x*_{B}, *s*_{B}); |
*a*_{A} ← *F*(*k*_{A}, *r*_{A}) + *F*′(*x*_{A}, 0^{κ}); | *a*_{B} ← *F*(*k*_{B}, *r*_{B}) + *F*′(*x*_{B}, 0^{κ}); |
*Z*_{A} ← *g*_{2}^{a_A}; | *Z*_{B} ← *g*_{2}^{a_B}; |
*t*_{A} ← ChamH_{hk}(*Z*_{A} ∥ ID_{A}; *r*_{A}); | *t*_{B} ← ChamH_{hk}(*Z*_{B} ∥ ID_{B}; *r*_{B}); |
*Y*_{A} ← *α* *β*^{t_A} *γ*^{t_A²}; *X*_{A} ← *Y*_{A}^{a_A}; | *Y*_{B} ← *α* *β*^{t_B} *γ*^{t_B²}; *X*_{B} ← *Y*_{B}^{a_B}; |
*pk*_{A} ← (*X*_{A}, *Z*_{A}, *r*_{A}, *s*_{A}); *sk*_{A} ← *x*_{A} | *pk*_{B} ← (*X*_{B}, *Z*_{B}, *r*_{B}, *s*_{B}); *sk*_{B} ← *x*_{B} |
**NIKEkey(*pk*_{B}, *sk*_{A})** | **NIKEkey(*pk*_{A}, *sk*_{B})** |
if *pk*_{A} = *pk*_{B}, return ⊥; | if *pk*_{B} = *pk*_{A}, return ⊥; |
parse *pk*_{B} as (*X*_{B}, *Z*_{B}, *r*_{B}); | parse *pk*_{A} as (*X*_{A}, *Z*_{A}, *r*_{A}); |
*t*_{B} ← ChamH_{hk}(*Z*_{B} ∥ ID_{B}; *r*_{B}); | *t*_{A} ← ChamH_{hk}(*Z*_{A} ∥ ID_{A}; *r*_{A}); |
if *e*(*X*_{B}, *g*_{2}) ≠ *e*(*α* *β*^{t_B} *γ*^{t_B²}, *Z*_{B}), then *shk*_{A,B} ← ⊥; | if *e*(*X*_{A}, *g*_{2}) ≠ *e*(*α* *β*^{t_A} *γ*^{t_A²}, *Z*_{A}), then *shk*_{A,B} ← ⊥; |
*a*_{A} ← *F*(Ext(*x*_{A}, *s*_{A}), *r*_{A}) + *F*′(*x*_{A}, 0^{κ}); | *a*_{B} ← *F*(Ext(*x*_{B}, *s*_{B}), *r*_{B}) + *F*′(*x*_{B}, 0^{κ}); |
*shk*_{AB} ← *e*(*δ*^{a_A}, *Z*_{B}) | *shk*_{AB} ← *e*(*δ*^{a_B}, *Z*_{A}) |

### Setting the parameters of the extractor

For our construction, the seed *s* of the extractor Ext is stored in the leak-free hardware component. If the length of the seed were only *O*(log *κ*), the adversary could enumerate the entire seed space and simply ask for leakage functions on the private key evaluated under every possible seed. This necessitates a seed length of at least *ω*(log *κ*). The classical result of [34] shows that, for every *n*, *k*, *ε*, there exist (*k*, *ε*)-extractors that use a seed of length *d* = log(*n* − *k*) + 2 log(1/*ε*) + *O*(1) and output *m* = *k* + *d* − 2 log(1/*ε*) − *O*(1) bits. In our setting, *n* = log *p* (since the input of Ext is from ℤ_{p}) and *k* = *n* − *λ* = log *p* − *λ* (since a leakage of *λ* bits can reduce the entropy of the source by at most *λ* bits). Hence the seed is of length *s* = log(*λ*) + 2 log(1/*ε*) + *O*(1). By appropriately setting the values of *p*, *λ* and *ε*, one can ensure that the seed length is *ω*(log *κ*).
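Plugging illustrative numbers into the seed-length formula above (all parameter choices here are hypothetical, and the additive *O*(1) term is ignored):

```python
# Worked numeric example for the seed length
#   d = log2(n - k) + 2*log2(1/eps) + O(1),  with n = log2(p), k = n - lambda.
import math

log_p = 256          # group order ~ 2^256 (example)
lam   = 128          # leakage bound lambda in bits (example)
eps   = 2.0 ** -80   # extraction error epsilon (example)

n = log_p
k = n - lam                                      # min-entropy after lam bits leak
d = math.log2(n - k) + 2 * math.log2(1 / eps)    # seed length, up to O(1)
print(d)  # log2(128) + 160 = 167.0
```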

### On the leak-free hardware assumption

As already stated in Section 4.1.2, we consider an additional assumption that each party involved in our LR-NIKE protocol has access to a *leak*-*free* secure hardware component. In our LR-NIKE construction, each party needs to store a short random seed (which is part of the public key of that party) corresponding to every other party with whom a shared key will be established. The above assumption seems to be necessary for our protocol since, otherwise, the adversary could leak from the extracted value itself by knowing the seed. However, as discussed in Section 4.1.1, the leak-free hardware assumption seems to be necessary for the construction for any secure LR-NIKE protocol, unless one sacrifices the leakage model further (non-adaptive leakage).

*Let* ChamH_{hk} *be a family of bounded leakage*-*resilient chameleon hash functions* (BLR-CHF). *Let* *F* *and* *F*′ *be* (*ε*_{prf}, *s*_{prf}, *q*_{prf})-*secure and* (*ε*′_{prf}, *s*′_{prf}, *q*′_{prf})-*secure PRFs*, *respectively*. *Let* Ext *be a* (*v*, *ε*)-*strong average-case randomness extractor with seed length at least* *ω*(log *κ*), *and let* *p* *be the order of the underlying groups* 𝔾_{1}, 𝔾_{2} *and* 𝔾_{T}. *Then the above NIKE protocol* BLR-NIKE *is* BLR-CKS-heavy-*secure assuming the intractability of the* DBDH-2 *problem with respect to the parameter generator* 𝓖_{2}. *In particular*, *let* 𝓐_{NIKE} *be an adversary against the NIKE protocol* BLR-NIKE *in the* BLR-CKS-heavy *security model making* *q*_{H} RegisterHonest *user queries*. *Then*, *using it*, *we can construct an adversary* 𝓐_{DBDH-2} *against the* DBDH-2 *problem such that*

The proof of this theorem proceeds via the game hopping technique [38]: we define a sequence of games and relate the adversary's advantage of distinguishing each game from the previous one to the advantage of breaking one of the underlying cryptographic primitives. Let Adv_{G_δ}(𝓐) denote the advantage of the adversary 𝓐 in Game *δ*.

Game 0. This is the original BLR-CKS-heavy security game, played with the adversary 𝓐_{NIKE}. When the Test query is asked, the Game 0 challenger chooses a random bit *b*. If *b* = 0, the real shared key is given to 𝓐_{NIKE}; otherwise, a random value chosen from the shared key space is given.

Game 1. Initially, 𝓐_{DBDH-2} chooses two indices *A*, *B* ∈ [*q*_{H}], where *q*_{H} denotes the number of RegisterHonest queries made by 𝓐_{NIKE}. Effectively, 𝓐_{DBDH-2} is guessing that the corresponding identities ID_{A} and ID_{B}, to be honestly registered by 𝓐_{NIKE}, will be the ones involved in the Test query later. When 𝓐_{NIKE} makes its Test query on a pair of identities {ID_{I}, ID_{J}}, 𝓐_{DBDH-2} checks if {ID_{I}, ID_{J}} = {ID_{A}, ID_{B}}. If so, it continues with the simulation and gives the result to 𝓐_{NIKE}; else it aborts the simulation.

Game 2. This game is identical to the previous game, except that the challenger changes the way in which the output of the extractor is computed. In particular, instead of computing *k*_{A} ← Ext(*x*_{A}, *s*_{A}) and *k*_{B} ← Ext(*x*_{B}, *s*_{B}), the challenger chooses uniformly random *k*_{A}, *k*_{B} ← {0, 1}^{ℓ(κ)}. Game 1 and Game 2 are indistinguishable by the property of the strong average-case randomness extractor. Suppose that the adversary obtains at most *λ* = *λ*(*κ*) bits of leakage from the secret keys *x*_{A} and *x*_{B} of parties *A* and *B*, respectively. Since Ext can work with inputs that have min-entropy *v* ≪ log *p*, even given the bounded leakage of *λ* bits, we have (*x*_{A}, *s*_{A}, Ext(*x*_{A}, *s*_{A})) ≈_{ε} (*x*_{A}, *s*_{A}, *U*_{ℓ(κ)}) and (*x*_{B}, *s*_{B}, Ext(*x*_{B}, *s*_{B})) ≈_{ε} (*x*_{B}, *s*_{B}, *U*_{ℓ(κ)}), where *U*_{ℓ(κ)} denotes the uniform distribution over {0, 1}^{ℓ(κ)}. Recall that, in our construction, the seeds *s*_{A} and *s*_{B} are stored in the *leak-free* hardware component (i.e., *s*_{A} and *s*_{B} are stored on the oracle tapes of parties ID_{A} and ID_{B}, respectively), and hence are outside the view of the adversary. Thus it is possible to replace the output of the extractor with a uniformly random value in this game. This is the *only* place where we require the leak-free assumption in our proof.

Game 3. This game is identical to the previous game, except that the challenger changes the way in which the PRF outputs are computed. In particular, instead of computing *a*_{A} ← *F*(*k*_{A}, *r*_{A}) + *F*′(*x*_{A}, 0^{κ}) and *a*_{B} ← *F*(*k*_{B}, *r*_{B}) + *F*′(*x*_{B}, 0^{κ}), the challenger computes *a*_{A} ← RF(*r*_{A}) + *F*′(*x*_{A}, 0^{κ}) and *a*_{B} ← RF(*r*_{B}) + *F*′(*x*_{B}, 0^{κ}), where RF is a random function with the same range as *F*. If 𝓐 can distinguish Game 2 from Game 3, then 𝓐 can be used as a subroutine to construct a distinguisher 𝓓 between the PRF *F* : {0, 1}^{ℓ(κ)} × ℤ_{p} → ℤ_{p} and the random function RF.

Game 4. This game is identical to the previous game, except that the challenger now samples the exponents directly: instead of computing *a*_{A} ← RF(*r*_{A}) + *F*′(*x*_{A}, 0^{κ}) and *a*_{B} ← RF(*r*_{B}) + *F*′(*x*_{B}, 0^{κ}), the challenger samples *a*_{A}, *a*_{B} ← ℤ_{p}. Note that Game 3 and Game 4 are identically distributed, since RF(*r*_{A}) and RF(*r*_{B}) are uniformly random and independent elements of ℤ_{p}, and hence so are the resulting sums.

Game 5. In this game, the challenger changes the way in which it answers RegisterCorrupt queries. In particular, let ID_{A} and ID_{B} be the identities of the two honest parties involved in the Test query, with public keys (*X*_{A}, *Z*_{A}, *r*_{A}, *s*_{A}) and (*X*_{B}, *Z*_{B}, *r*_{B}, *s*_{B}), respectively. Let ID_{D} be the identity of a party with public key (*X*_{D}, *Z*_{D}, *r*_{D}) that is subject to a RegisterCorrupt query. If *t*_{D} = ChamH_{hk}(*Z*_{D} ∥ ID_{D}; *r*_{D}) ∈ {*t*_{A}, *t*_{B}}, the challenger aborts. Note that if the above happens, then the challenger has successfully found a collision of the chameleon hash function. Hence, by the difference lemma [37], the difference between the advantages of 𝓐 in Game 4 and Game 5 is bounded by the probability of finding a collision of the (leakage-resilient) chameleon hash function.

Game 6. In this game, the DBDH-2 adversary 𝓐_{DBDH-2} receives as input a DBDH-2 instance (*g*_{2}^{a}, *g*_{2}^{b}, *g*_{2}^{c}, *T*), where *T* = *e*(*g*_{1}, *g*_{2})^{abc} or a random element from 𝔾_{T}, *g*_{1} and *g*_{2} are generators of the groups 𝔾_{1} and 𝔾_{2}, respectively, and *a*, *b*, *c* are random elements from ℤ_{p}. We now describe how 𝓐_{DBDH-2} sets up the environment for 𝓐_{NIKE} and simulates all its queries properly.

The adversary 𝓐_{DBDH-2} runs Cham.KeyGen(1^{κ}, *λ*) to obtain a key pair for a chameleon hash function, (*hk*, *ck*). It then chooses two messages *m*_{1}, *m*_{2} ← {0, 1}^{*} and *r*_{1}, *r*_{2} ← 𝓡_{cham}, where 𝓡_{cham} is the randomness space of the chameleon hash function. 𝓐_{DBDH-2} then computes the values *t*_{A} = ChamH_{hk}(*m*_{1}; *r*_{1}) and *t*_{B} = ChamH_{hk}(*m*_{2}; *r*_{2}).
Let us define a polynomial *p*(*t*) = *p*_{0} + *p*_{1}*t* + *p*_{2}*t*^{2} of degree 2 over ℤ_{p} such that *t*_{A} and *t*_{B} are the roots of *p*(*t*), i.e., *p*(*t*_{A}) = 0 and *p*(*t*_{B}) = 0. Also, let *q*(*t*) = *q*_{0} + *q*_{1}*t* + *q*_{2}*t*^{2} be a random polynomial of degree 2 over ℤ_{p}. Then 𝓐_{DBDH-2} sets *α* = (*g*_{1}^{c})^{p_0} *g*_{1}^{q_0}, *β* = (*g*_{1}^{c})^{p_1} *g*_{1}^{q_1} and *γ* = (*g*_{1}^{c})^{p_2} *g*_{1}^{q_2}, so that *α* *β*^{t} *γ*^{t²} = (*g*_{1}^{c})^{p(t)} *g*_{1}^{q(t)} for every *t* ∈ ℤ_{p}. Since the coefficients *p*_{i}, *q*_{i} ← ℤ_{p} are randomly chosen, the values of *α*, *β* and *γ* are also random. Also, note that *α* *β*^{t_A} *γ*^{t_A²} = *g*_{1}^{q(t_A)} and *α* *β*^{t_B} *γ*^{t_B²} = *g*_{1}^{q(t_B)} (since *p*(*t*_{A}) = *p*(*t*_{B}) = 0). Then 𝓐_{DBDH-2} simulates all the queries of 𝓐_{NIKE} as follows.

- RegisterHonest: When 𝓐_{DBDH-2} receives as input a RegisterHonest user query from 𝓐_{NIKE} for a party with identity ID, it first checks whether ID ∈ {ID_{A}, ID_{B}}. Depending upon the result, it does the following:
  - If ID ∉ {ID_{A}, ID_{B}}, 𝓐_{DBDH-2} runs NIKEgen to generate a pair of keys (*pk*, *sk*) and returns *pk* to 𝓐_{NIKE}.
  - If ID ∈ {ID_{A}, ID_{B}}, 𝓐_{DBDH-2} does the following. Without loss of generality, let ID = ID_{A}. Now 𝓐_{DBDH-2} uses the trapdoor *ck* of the chameleon hash function to produce *r*′_{A} ∈ 𝓡_{cham} such that Cham.Eval(*g*_{2}^{a} ∥ ID_{A}; *r*′_{A}) = Cham.Eval(*m*_{1}; *r*_{1}). Note that, by the random trapdoor collision property of the chameleon hash function, *r*′_{A} is uniformly distributed over 𝓡_{cham} and also independent of *r*_{1}. Similarly, when ID = ID_{B}, 𝓐_{DBDH-2} uses the trapdoor *ck* to produce *r*′_{B} ∈ 𝓡_{cham} such that Cham.Eval(*g*_{2}^{b} ∥ ID_{B}; *r*′_{B}) = Cham.Eval(*m*_{2}; *r*_{2}). The value *r*′_{B} is also uniformly distributed over 𝓡_{cham} and independent of *r*_{2}. 𝓐_{DBDH-2} then sets *pk*_{A} = (ψ(*g*_{2}^{a})^{q(t_A)}, *g*_{2}^{a}, *r*′_{A}, *r*_{A}) and *pk*_{B} = (ψ(*g*_{2}^{b})^{q(t_B)}, *g*_{2}^{b}, *r*′_{B}, *r*_{B}), where *r*_{A}, *r*_{B} ← ℤ_{p}. Note that these are correctly formed public keys since *p*(*t*_{A}) = *p*(*t*_{B}) = 0.
- RegisterCorrupt: Here 𝓐_{DBDH-2} receives as input a public key *pk* and an identity string ID from 𝓐_{NIKE}. If ID ∈ {ID_{A}, ID_{B}}, 𝓐_{DBDH-2} aborts, as in the original attack game.
- HonestReveal: When 𝓐_{NIKE} supplies the identities of two honest parties, say ID and ID′, 𝓐_{DBDH-2} checks if {ID, ID′} = {ID_{A}, ID_{B}}. If this happens, 𝓐_{DBDH-2} aborts. Else, if ∣{ID, ID′} ∩ {ID_{A}, ID_{B}}∣ ≤ 1, there are three cases:
  - ID ∈ {ID_{A}, ID_{B}} and ID′ ∉ {ID_{A}, ID_{B}}. Here the challenger 𝓐_{DBDH-2} runs NIKEkey(*pk*_{ID}, *sk*_{ID′}) to produce the shared key *shk*_{ID, ID′}. Note that 𝓐_{DBDH-2} can do this since it knows the secret key *sk*_{ID′} of the party ID′. Then 𝓐_{DBDH-2} gives *shk*_{ID, ID′} to 𝓐_{NIKE}.
  - ID ∉ {ID_{A}, ID_{B}} and ID′ ∈ {ID_{A}, ID_{B}}. Here the challenger 𝓐_{DBDH-2} runs NIKEkey(*pk*_{ID′}, *sk*_{ID}) to produce the shared key *shk*_{ID, ID′}. Note that 𝓐_{DBDH-2} can do this since it knows the secret key *sk*_{ID} of the party ID. Then 𝓐_{DBDH-2} gives *shk*_{ID, ID′} to 𝓐_{NIKE}.
  - {ID, ID′} ∩ {ID_{A}, ID_{B}} = ∅. In this case, the challenger 𝓐_{DBDH-2} runs NIKEkey(*pk*_{ID′}, *sk*_{ID}) (it can use *sk*_{ID′} as well) to produce the shared key *shk*_{ID, ID′}. Then 𝓐_{DBDH-2} gives *shk*_{ID, ID′} to 𝓐_{NIKE}.

- CorruptReveal: When 𝓐_{NIKE} supplies two identities ID and ID′, where ID was registered as corrupt and ID′ was registered as honest, 𝓐_{DBDH-2} checks whether ID′ ∈ {ID_{A}, ID_{B}}. If ID′ ∉ {ID_{A}, ID_{B}}, 𝓐_{DBDH-2} runs NIKE.key(*pk*_{ID}, *sk*_{ID′}) to obtain *shk*_{ID, ID′} and returns it to 𝓐_{NIKE}. However, if ID′ ∈ {ID_{A}, ID_{B}}, 𝓐_{DBDH-2} checks, via the pairing, whether the public key *pk*_{ID} = $(X_{ID}, Z_{ID}, r_{ID}', r_{ID})$ is well formed. This ensures that *pk*_{ID} is of the form $(Y_{ID}^{d}, g_2^{d}, r_{ID}', r_{ID})$ for some $d \in \mathbb{Z}_p$, where $Y_{ID} = (g_1^{c})^{p(t_{ID})} g_1^{q(t_{ID})}$, $r_{ID} \leftarrow \mathbb{Z}_p$ and $r_{ID}' \leftarrow \mathcal{R}_{\mathrm{cham}}$. This means that
$$X_{ID} = (g_1^{cd})^{p(t_{ID})}\, g_1^{d\,q(t_{ID})}.$$
From this, the value $g_1^{cd}$ can be computed as
$$g_1^{cd} = \bigl(X_{ID}/\psi(Z_{ID})^{q(t_{ID})}\bigr)^{1/p(t_{ID}) \bmod p}.$$
Note that $1/p(t_{ID})$ is well defined since $p(t_{ID}) \neq 0 \bmod p$. Also note that $t_{ID} \neq t_A, t_B$, since we have already eliminated hash collisions. Assume w.l.o.g. that ID′ = ID_{A}. Writing the public key of ID_{A} as $(Y_A, Z_A, r_A', r_A)$, the shared key between ID_{A} and ID is given by
$$shk_{\mathrm{ID}_A,\mathrm{ID}} = e(g_1^{cd}, Z_A).$$
_{NIKE}may specify arbitrary polynomial-time computable functions*f*to leak from the secret keys_{i}*x*and_{A}*x*. The challenger 𝓐_{B}_{DBDH-2}forwards the functions*f*to its leakage oracle and forwards the answers to 𝓐_{i}_{NIKE}. - Test query: Here 𝓐
_{DBDH-2}returns*T*.
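The computation of $g_1^{cd}$ in the CorruptReveal simulation above is a short exercise in the group. Since $Z_{ID} = g_2^{d}$ and $\psi(g_2) = g_1$, it can be spelled out as:

```latex
X_{ID} = Y_{ID}^{\,d}
       = \bigl((g_1^{c})^{p(t_{ID})}\, g_1^{q(t_{ID})}\bigr)^{d}
       = (g_1^{cd})^{p(t_{ID})}\, g_1^{d\,q(t_{ID})},
\qquad
\psi(Z_{ID})^{q(t_{ID})} = g_1^{d\,q(t_{ID})},
\\[4pt]
\text{so}\quad
\frac{X_{ID}}{\psi(Z_{ID})^{q(t_{ID})}} = (g_1^{cd})^{p(t_{ID})}
\quad\Longrightarrow\quad
g_1^{cd} = \left(\frac{X_{ID}}{\psi(Z_{ID})^{q(t_{ID})}}\right)^{1/p(t_{ID}) \bmod p}.
```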

This completes the description of the simulation by 𝓐_{DBDH-2}. If 𝓐_{NIKE} can distinguish between the real and the random key in Game 4, then this is equivalent to solving the DBDH-2 problem. To see this, note that, for user ID_{A}, we have $X_A = \psi(Z_A)^{q(t_A)}$, and for user ID_{B}, we have $X_B = \psi(Z_B)^{q(t_B)}$. Hence the real shared key between ID_{A} and ID_{B} is exactly the DBDH-2 challenge value $e(g_1, g_2)^{abc}$.

Since the simulation done by 𝓐_{DBDH-2} is perfect, we have

Game 7. In this game, the challenger 𝓐_{DBDH-2} chooses *T* randomly from the target group 𝔾_{T}. Since *T* is now completely independent of the challenge bit, the advantage of 𝓐_{NIKE} in this game is zero. Moreover, if 𝓐_{NIKE} can distinguish this game from the previous one, then 𝓐_{DBDH-2} can distinguish $e(g_1, g_2)^{abc}$ from a random element of 𝔾_{T}. So we have

By combining all of the above expressions from Game 0 to Game 7, we obtain the following, where we use *G*_{i} to denote Game *i*.

Thus we have

### 4.2.2 Leakage tolerance of protocol BLR-NIKE

The order of the groups 𝔾_{1}, 𝔾_{2} and 𝔾_{T} is *p*. Note that, although the secret key in our protocol BLR-NIKE may appear to be a single field element, in the actual instantiation of the protocol the secret key is a tuple of *n* + 1 field elements. This is because the secret key of the concrete instantiation of BLR-CHF of Wang and Tanaka [39] consists of *n* field elements, which also corresponds to the number of generators in the construction of [39]. The leakage tolerance of BLR-CHF is $\lambda' = (n-1)\log p - \omega(\log \kappa)$, as shown in [39]. The size of the secret key of the BLR-CHF is $L = n \log p$. So, for sufficiently large *n*, the leakage rate $\lambda'/L$ is $1 - o(1)$. We also consider a very good randomness extractor that can work with inputs that have min-entropy $v \ll \log p$ and produces outputs whose distance from uniform $\ell(\kappa)$-bit strings is $\varepsilon < 2^{-\ell(\kappa)}$. The size of the secret key $x_A$ (respectively $x_B$) of our NIKE construction is $\log p$ (apart from the size of the secret key of the $\lambda'$-LR-CHF, i.e., $n \log p$). So the leakage tolerated from $x_A$ (respectively $x_B$) is at least $\log p - v \approx \log p$. Hence the overall leakage tolerated by our construction is
$$\lambda \approx \bigl((n-1)\log p - \omega(\log \kappa)\bigr) + \log p \approx n \log p - \omega(\log \kappa).$$
The overall size of the secret key of our construction is $L' = (n+1)\log p$. So the overall leakage rate of our construction is $\lambda/L' \approx \bigl(n \log p - \omega(\log \kappa)\bigr)/\bigl((n+1)\log p\bigr)$, which is $1 - o(1)$ for sufficiently large *n*.
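The asymptotics above can be sanity-checked numerically. The following sketch uses illustrative parameter values only (the values of log *p* and the ω(log *κ*) term are placeholders, not those of the concrete instantiation) and shows the rate λ/*L*′ approaching 1 as *n* grows:

```python
def leakage_rate(n: int, log_p: int = 256, omega_log_k: int = 128) -> float:
    """Overall leakage rate lambda / L' of the BLR-NIKE instantiation.

    lam   ~ n*log(p) - omega(log kappa)   (tolerated leakage, in bits)
    total = (n + 1) * log(p)              (total secret-key size, in bits)
    """
    lam = n * log_p - omega_log_k
    total = (n + 1) * log_p
    return lam / total

# The rate tends to 1 - o(1) as n grows.
rates = [leakage_rate(n) for n in (4, 64, 1024)]
```

For the placeholder values above, the rate climbs from 0.7 at *n* = 4 to above 0.99 at *n* = 1024.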

## 5 Constructions of various cryptographic primitives from leakage-resilient NIKE

We now present several applications of leakage-resilient NIKE.

### 5.1 Leakage-resilient adaptive chosen ciphertext secure PKE

We now present our construction of an LR-IND-CCA-2-secure PKE scheme from a BLR-CKS-heavy-secure LR-NIKE scheme. Actually, we show how to construct an LR-IND-CCA-2-secure key encapsulation mechanism (KEM) given such a NIKE. Before proceeding with the construction, we give the LR-IND-CCA-2 security model for KEMs.

### 5.1.1 Leakage-resilient chosen-ciphertext security for KEM

We say a KEM Γ = (KEM.Setup, KEM.Gen, KEM.Encap, KEM.Decap) satisfies *correctness* if, for all pub ← KEM.Setup(1^{κ}), (*pk*_{KEM}, *sk*_{KEM}) ← KEM.Gen(1^{κ}, pub) and (*C*, *K*) ← Encap(*pk*_{KEM}), it holds that Pr[Decap(*sk*_{KEM}, *C*) = *K*] = 1 (where the probability is taken over the internal coin tosses of the algorithms KEM.Gen and KEM.Encap).

### LR-IND-CCA-2 security

We now turn to defining indistinguishability under adaptive chosen-ciphertext and leakage attacks in the bounded-memory leakage setting (BLR-IND-CCA-2).

(BLR-IND-CCA-2 security). Let *κ* ∈ ℕ and *λ* = *λ*(*κ*) be parameters. We say a KEM Γ is *λ*-BLR-IND-CCA-2-secure if, for all PPT adversaries 𝓐, there exists a negligible function *ν*(*κ*) such that

where the experiment

Experiment defining LR-CCA-2 security of KEM.

Experiment | Oracle Decap^{*}(*C*) | Oracle $\mathcal{O}^{\lambda}_{sk}$(*f*)
---|---|---
pub ← KEM.Setup(1^{κ}); | if *C* = *C*^{*}, abort; | if *L* + ∣*f*(*sk*)∣ ≤ *λ*(*κ*),
(*pk*, *sk*) ← KEM.Gen(1^{κ}, pub); *L* ← 0; *b* $\stackrel{\$}{\leftarrow}$ {0, 1}; | else return Decap(*sk*, *C*) | return *f*(*sk*) and set *L* ← *L* + ∣*f*(*sk*)∣;
(*C*^{*}, *K*_{0}) ← Encap(*pk*); *K*_{1} ← 𝓚; | | else return ⊥
*b*′ ← 𝓐^{Decap*, 𝓞}(pub, *pk*, *C*^{*}, *K*_{b}); | |
if *b*′ = *b*, then return 1, else return 0 | |
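The bookkeeping of the leakage oracle in this experiment can be captured in a few lines. The sketch below is our own illustration (not from the paper), with leakage functions modeled as returning bit strings; it enforces the cumulative bound ∑∣*f*(*sk*)∣ ≤ *λ* across queries:

```python
class BoundedLeakageOracle:
    """Models the leakage oracle of the experiment: answers arbitrary
    leakage functions f on sk as long as the total leaked bits stay <= lam."""

    def __init__(self, sk: bytes, lam: int):
        self.sk, self.lam, self.used = sk, lam, 0

    def leak(self, f):
        out = f(self.sk)                    # f returns a bit string, e.g. "0110"
        if self.used + len(out) > self.lam:
            return None                     # bound would be exceeded: refuse
        self.used += len(out)               # charge the leaked bits to the budget
        return out
```

A leakage function that reveals 4 bits per call can thus be answered twice against a budget of λ = 8 bits, after which the oracle refuses.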

### 5.1.2 Generic construction of leakage-resilient KEM

We now show the construction of a leakage-resilient CCA-2-secure KEM Γ from leakage-resilient NIKE (Figure 1).
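Since Figure 1 is not reproduced here, the shape of the generic KEM-from-NIKE construction can be sketched as follows. The sketch uses a toy discrete-log NIKE as a stand-in for LR-NIKE (the toy group and all function names are ours, not from the paper); the point is only the syntactic transformation: a ciphertext is a fresh ephemeral NIKE public key, and the encapsulated key is the NIKE shared key between the ephemeral key pair and the recipient.

```python
import hashlib
import secrets

# Toy discrete-log NIKE over Z_P^*, standing in for a real (leakage-resilient) NIKE.
P = 2**127 - 1          # toy prime modulus
G = 3                   # toy generator

def nike_gen():
    sk = secrets.randbelow(P - 2) + 1
    return pow(G, sk, P), sk            # (pk, sk)

def nike_key(pk_other, sk_own):
    # Shared key = H(g^{xy}); symmetric in the two parties.
    return hashlib.sha256(str(pow(pk_other, sk_own, P)).encode()).digest()

# Generic KEM from NIKE, as in the construction of Section 5.1.2.
def kem_gen():
    return nike_gen()                   # (pk_KEM, sk_KEM)

def kem_encap(pk_kem):
    eph_pk, eph_sk = nike_gen()         # fresh ephemeral NIKE key pair
    K = nike_key(pk_kem, eph_sk)        # NIKE shared key with the recipient
    return eph_pk, K                    # ciphertext C = eph_pk

def kem_decap(sk_kem, C):
    return nike_key(C, sk_kem)          # recompute the NIKE shared key
```

Correctness is immediate from the symmetry of the NIKE shared key: Decap(*sk*, *C*) recomputes exactly the key that Encap derived.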

*Suppose the leakage-resilient NIKE scheme* LR-NIKE *is* BLR-CKS-heavy-*secure with leakage rate* 1 − *o*(1). *Then the KEM scheme* Γ *is a* BLR-IND-CCA-2-*secure KEM. More formally, let* 𝓐_{KEM} *be an adversary against* Γ *making* *q*_{D} *decapsulation queries and* *q*_{L} *leakage queries. Then, using* 𝓐_{KEM}, *we can construct another adversary* 𝓐_{NIKE} *in the* BLR-CKS-heavy *security model who makes two* RegisterHonest *queries*, *q*_{D} RegisterCorrupt *queries*, *q*_{D} CorruptReveal *queries and* *q*_{L} Leakage *queries, and the running time of* 𝓐_{NIKE} *is roughly the same as that of* 𝓐_{KEM}. *Moreover, the leakage rate of* Γ *is* 1 − *o*(1).

Let 𝓐_{KEM} be an adversary against the BLR-IND-CCA-2-secure KEM Γ. We now show how to use 𝓐_{KEM} to construct another adversary 𝓐_{NIKE} against LR-NIKE, thereby contradicting its BLR-CKS-heavy security. 𝓐_{NIKE} simulates the environment to 𝓐_{KEM} in the following way.

- KEM.Setup: On input of the public parameters *params*, 𝓐_{NIKE} sets the public parameters pub of the KEM scheme to *params*.
- KEM.Gen: 𝓐_{NIKE} makes *two* RegisterHonest queries, receiving in return two honestly registered public keys *pk*_{1} and *pk*_{2}. It then sets *pk*_{KEM} = *pk*_{1} and *sk*_{KEM} = ⊥.
- KEM.Encap(*pk*): To simulate the challenge phase, 𝓐_{NIKE} makes a Test(*pk*_{1}, *pk*_{2}) query. It receives as reply a shared key *K* which is either the real key, i.e., *K* = NIKE.key(*pk*_{1}, *sk*_{2}), or a random key *K* ← 𝓢𝓗𝓚. It then sets the encapsulated key *K*^{*} = *K* and the ciphertext *C*^{*} = *pk*_{2}.
- Leakage queries: When 𝓐_{KEM} queries with a leakage function *f*, the challenger 𝓐_{NIKE} forwards *f* to the leakage oracle $\mathcal{O}^{\lambda}_{sk_1}(\cdot)$ and receives as response *f*(*sk*_{1}). It then returns *f*(*sk*_{1}) to 𝓐_{KEM}.
- Decapsulation queries: 𝓐_{KEM} makes decapsulation queries to 𝓐_{NIKE} with ciphertexts *C*. 𝓐_{NIKE} parses *C* as *pk*′, and since *C* ≠ *C*^{*}, we have *pk*′ ≠ *pk*_{2}. If *pk*′ = *pk*_{1}, 𝓐_{NIKE} outputs ⊥; this is consistent with the rejection rule of the KEM Γ and also of LR-NIKE. Otherwise, 𝓐_{NIKE} makes a RegisterCorrupt query on *pk*′. Here we assume, without loss of generality, that all of 𝓐_{KEM}'s decapsulation queries are distinct, and hence all of the RegisterCorrupt queries are distinct. 𝓐_{NIKE} then makes a CorruptReveal(*pk*_{1}, *pk*′) query to get a shared key *K* ∈ 𝓢𝓗𝓚 or the symbol ⊥, and returns the result to 𝓐_{KEM}.

This completes the description of 𝓐_{NIKE}'s simulation. From the description, it is clear that the above simulation is perfect. Note that if *K*^{*} is the real shared key, i.e., it is the output of the NIKE.key algorithm, then 𝓐_{NIKE} properly simulates the Encap algorithm in the experiment; if *K*^{*} is chosen randomly, it likewise properly simulates the experiment in which the encapsulated key is replaced by a random one. Finally, when 𝓐_{KEM} outputs a bit *b*′ as its guess for *b* in the experiment, 𝓐_{NIKE} also outputs the same bit *b*′. So the advantage of 𝓐_{NIKE} in breaking the BLR-CKS-heavy security of LR-NIKE is exactly the *same* as the advantage of 𝓐_{KEM} in breaking the BLR-IND-CCA-2 security of the KEM scheme Γ. Also note that the number of RegisterCorrupt and CorruptReveal queries made by 𝓐_{NIKE} is the same as the number of decapsulation queries asked by 𝓐_{KEM}. This completes the proof of the above theorem.□

Here we show an alternative simple construction of a BLR-IND-CCA-2-secure KEM from a standard IND-CCA-2-secure KEM, given access to a leak-free hardware component that can store the seed of a randomness extractor (as in our case). Namely, let *π* = (Gen, Encap, Decap) be a (standard) IND-CCA-2-secure KEM. We construct a BLR-IND-CCA-2-secure KEM Γ = (KEM.Setup, KEM.Gen, KEM.Encap, KEM.Decap) from *π* as follows. KEM.Setup(1^{κ}): Sample a random string *r*′, and compute *r* = Ext(*r*′, *s*), where *s* is the random seed of the extractor Ext. Store the seed *s* in the leak-free hardware component. Then run *pk* = Gen(1^{κ}; *r*) to obtain the public key *pk*, and set the secret key *sk* = *r*′. The encapsulation and decapsulation algorithms remain unchanged from the underlying KEM scheme *π*. The above scheme Γ achieves BLR-IND-CCA-2 security as long as the string *r*′ is sufficiently long and the seed *s* is kept out of the view of the adversary. Although this simple solution works, we stress that the objective of our work is to show the applications of LR-NIKE as a central unifying primitive from which to construct many leakage-resilient primitives. We thank an anonymous reviewer of the journal *Designs, Codes and Cryptography* for suggesting the above construction.
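A minimal sketch of this reviewer-suggested setup follows. HMAC-SHA256 stands in for the seeded extractor Ext, and a toy deterministic key generation stands in for Gen; all names are hypothetical illustrations, not an API from the paper:

```python
import hashlib
import hmac
import secrets

LEAK_FREE_STORE = {}                     # models the leak-free hardware component

def toy_gen(coins: bytes) -> int:
    # Toy key generation made deterministic in its random coins r.
    sk_int = int.from_bytes(coins, "big")
    return pow(3, sk_int, 2**127 - 1)    # toy public key

def kem_setup():
    r_prime = secrets.token_bytes(128)   # long random string r' (the stored secret key)
    seed = secrets.token_bytes(32)       # extractor seed s, kept in leak-free hardware
    LEAK_FREE_STORE["s"] = seed
    r = hmac.new(seed, r_prime, hashlib.sha256).digest()   # r = Ext(r', s)
    pk = toy_gen(r)                      # pk = Gen(1^kappa; r)
    return pk, r_prime                   # secret key sk = r'
```

The adversary may leak from *sk* = *r*′, but as long as *s* never leaks, the extracted coins *r* remain close to uniform.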

### 5.2 Leakage-resilient authenticated key exchange

The work of Bergsma et al. [8] shows a generic construction of an eCK-secure AKE protocol using an UF-CMA-secure signature scheme, CKS-light-secure NIKE scheme and a pseudo-random function as underlying primitives.

In this paper, we present a construction of a leakage-resilient NIKE protocol, which is secure in the CKS-heavy model, under bounded-memory leakage, i.e. BLR-CKS-heavy-secure NIKE protocol (Table 3). Since the CKS-heavy security implies the CKS-light security, our leakage-resilient NIKE protocol can work as a bounded-memory leakage-resilient CKS-light-secure NIKE protocol. Further, in the literature, we can find UF-CMLA-secure signature schemes [26], which are UF-CMA-secure signature schemes under the bounded-memory leakage model. Thus we have the necessary primitives to transform our leakage-resilient NIKE to a leakage-resilient AKE following the NIKE to AKE transformation of Bergsma et al. [8] in the bounded-memory leakage model.

There are numerous leakage versions of the eCK model, under the OCLI axiom [3, 4, 5] and the memory leakage model [11]. Further, they address after-the-fact leakage. With our new BLR-CKS-heavy-secure NIKE protocol, following the Bergsma et al. [8] transformation, we can achieve leakage-resilient AKE in an eCK-style model

- under memory leakage (stronger than the OCLI axiom),
- addressing before-the-fact leakage (weaker than after-the-fact leakage).

### 5.2.1 Bounded-memory before-the-fact leakage eCK model

We present a suitable security model to analyze the leakage resiliency of AKE protocols, considering the aforementioned points, i.e., in an eCK-style security model [28], addressing before-the-fact, bounded-memory leakage.

Let *κ* be the security parameter. Let 𝓤 = {*U*_{1}, …, *U*_{n}} be a set of *n* parties. We use the term *principal* to identify a party involved in a protocol instance. Each party *U*_{i}, where *i* ∈ [1, *n*], has a pair of long-term public and secret keys, (*pk*_{U_i}, *sk*_{U_i}). The term *session* is used to identify a protocol instance at a principal. Each principal may have multiple sessions, and they may run concurrently. The oracle $\Pi_{U,V}^{s}$ represents the *s*-th session at the owner principal *U*, with intended partner principal *V*. The principal which sends the first protocol message of a session is the *initiator* of the session, and the principal which responds to the first protocol message is the *responder* of the session.

### Partner sessions in the BBFL-eCK model

Two oracles $\Pi_{U,V}^{s}$ and $\Pi_{U',V'}^{s'}$ are said to be *partners* if all of the following hold:

- both $\Pi_{U,V}^{s}$ and $\Pi_{U',V'}^{s'}$ have computed session keys;
- messages sent from $\Pi_{U,V}^{s}$ and messages received by $\Pi_{U',V'}^{s'}$ are identical;
- messages sent from $\Pi_{U',V'}^{s'}$ and messages received by $\Pi_{U,V}^{s}$ are identical;
- *U*′ = *V* and *V*′ = *U*;
- exactly one of *U* and *V* is the initiator, and the other is the responder.

The protocol is said to be *correct* if two partner oracles compute identical session keys.

### Modeling leakage

We consider the bounded-memory leakage setting for modeling the leakage. As before, the adversary is allowed to issue arbitrary efficiently computable leakage functions *f*_{i} and obtain the leakage *f*_{i}(*sk*) of the secret key *sk* before the session key is established. As mentioned above, the constraint is $\sum_{i} |f_i(sk)| \leq \lambda$, where *λ* is the leakage parameter.

### Adversarial powers

- Send(*U*, *V*, *s*, *m*) query: The oracle $\Pi_{U,V}^{s}$ computes the next protocol message according to the protocol specification and sends it to the adversary. 𝓐 can also use this query to activate a new protocol instance, with *m* blank.
- SessionKeyReveal(*U*, *V*, *s*) query: 𝓐 is given the session key of the oracle $\Pi_{U,V}^{s}$.
- EphemeralKeyReveal(*U*, *V*, *s*) query: 𝓐 is given the ephemeral keys (per-session randomness) of $\Pi_{U,V}^{s}$.
- Corrupt(*U*) query: 𝓐 is given the long-term secrets of the principal *U*.
- Test(*U*, *s*) query: When 𝓐 asks the Test query, the challenger first chooses a random bit *b* $\stackrel{\$}{\leftarrow}$ {0, 1}; if *b* = 1, the actual session key is returned to 𝓐, otherwise a random string chosen from the same session-key space is returned.
- Leakage(*U*, *f*_{i}) query: The leakage *f*_{i}(*sk*_{U}) is computed and returned to the adversary if and only if $\sum_{i} |f_i(sk_U)| \leq \lambda$.

*λ*-BBFL-eCK-freshness

Let *λ* be the leakage parameter. An oracle $\Pi_{U,V}^{s}$ is said to be *λ*-*BBFL-eCK*-fresh if and only if conditions (1)–(3) of Alawatugoda et al. [4, Definition 4] hold, and

- (iv) before $\Pi_{U,V}^{s}$ is activated, for all Leakage(*U*, *f*_{i}) queries, $\sum_{i} |f_i(sk_U)| \leq \lambda$, and for all Leakage(*V*, *f*_{i}) queries, $\sum_{i} |f_i(sk_V)| \leq \lambda$;
- (v) after $\Pi_{U,V}^{s}$ is activated, no leakage is allowed from *U* and *V*.

### BBFL-eCK security game

The adversary 𝓐 interacts with the challenger by issuing any combination of Send(), SessionKeyReveal(), EphemeralKeyReveal(), Leakage() and Corrupt() queries at will. At some point, the adversary chooses a *λ*-BBFL-eCK-fresh oracle and issues a Test() query. Then the adversary may continue asking the Send(), SessionKeyReveal(), EphemeralKeyReveal(), Leakage() and Corrupt() queries while preserving the freshness of the test session, and finally outputs answer bit *b*^{′} for the challenge. 𝓐 wins if *b*^{′} = *b*. Let Succ_{𝓐} denote the event that the adversary 𝓐 wins the above security game.

(BBFL-eCK security). A protocol *π* is said to be *BBFL-eCK*-secure if there is no PPT adversary 𝓐 that can win the *BBFL-eCK* security game with non-negligible advantage. The advantage of an adversary 𝓐 is defined as $\mathrm{Adv}^{\text{BBFL-eCK}}_{\pi}(\mathcal{A}) = \bigl|2\Pr[\mathrm{Succ}_{\mathcal{A}}] - 1\bigr|$.

### 5.2.2 Constructing BBFL-eCK-secure key exchange protocols

In Table 5, we show the generic leakage-resilient variant of the Bergsma et al. [8] AKE protocol. We replace the CKS-light-secure NIKE with a BLR-CKS-heavy-secure NIKE, and the UF-CMA-secure signature scheme with a UF-CMLA-secure signature scheme in the bounded-memory leakage model, to come up with the generic BBFL-eCK-secure AKE protocol. In this protocol, the final shared key is obtained by xor-ing the intermediate keys. Since the adversary learns the leakage only from the long-term secret parameters, it is not necessary to use leakage-resilient PRFs for the construction of LR-AKE, following NIKE to AKE transformation of Bergsma et al.

Leakage-resilient AKE protocol LR-AKE.

A (initiator) | | B (responder)
---|---|---
$r_A \stackrel{\$}{\leftarrow} \{0,1\}^{\kappa}$ | | $r_B \stackrel{\$}{\leftarrow} \{0,1\}^{\kappa}$
| ⋯ (protocol messages as in [8]) ⋯ |
$k_{A,B} := k_{\mathrm{nike,nike}} \oplus k_{\mathrm{nike,tmp}} \oplus k_{\mathrm{tmp,nike}} \oplus k_{\mathrm{tmp,tmp}}$ | | $k_{B,A} := k_{\mathrm{nike,nike}} \oplus k_{\mathrm{nike,tmp}} \oplus k_{\mathrm{tmp,nike}} \oplus k_{\mathrm{tmp,tmp}}$

Let LR-NIKE = (NIKEcommon_setup, NIKEgen, NIKEkey) be the underlying BLR-CKS-heavy-secure NIKE protocol, let LR-SIG = (SIGkg, SIGsign, SIGvfy) be the underlying UF-CMLA-secure signature scheme, and let PRF be a secure pseudo-random function. The generic construction of the AKE protocol is unchanged with respect to Bergsma et al. [8], except that the underlying primitives are replaced by their leakage-resilient counterparts in the bounded-memory leakage setting; hence the resulting AKE protocol retains eCK-style security, augmented with leakage resilience in the bounded-memory leakage setting. Therefore, the security theorem and the flow of the security proof are similar to [8, Appendix A, Theorem 1] and its proof.
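The XOR combination of the four intermediate NIKE keys is symmetric between the two parties, which is what makes the protocol correct. The following toy sketch (our own stand-in NIKE; the signatures and the PRF step of the full construction are omitted) illustrates only this key-combination step:

```python
import hashlib
import secrets

# Toy discrete-log NIKE standing in for the BLR-CKS-heavy-secure LR-NIKE.
P, G = 2**127 - 1, 3

def nike_gen():
    sk = secrets.randbelow(P - 2) + 1
    return pow(G, sk, P), sk                 # (pk, sk)

def nike_key(pk_other, sk_own):
    return hashlib.sha256(str(pow(pk_other, sk_own, P)).encode()).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def session_key(my_lt_sk, my_eph_sk, peer_lt_pk, peer_eph_pk):
    # k_{nike,nike}: long-term/long-term, k_{nike,tmp}: long-term/ephemeral, etc.
    k_nn = nike_key(peer_lt_pk, my_lt_sk)
    k_nt = nike_key(peer_eph_pk, my_lt_sk)
    k_tn = nike_key(peer_lt_pk, my_eph_sk)
    k_tt = nike_key(peer_eph_pk, my_eph_sk)
    return xor(xor(k_nn, k_nt), xor(k_tn, k_tt))
```

Party A's {k_nn, k_nt, k_tn, k_tt} is the same four-element set as party B's (A's k_nt is B's k_tn), so the XOR of all four is identical on both sides.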

*If the underlying NIKE protocol* LR-NIKE *is* BLR-CKS-heavy-*secure*, *the signature scheme* LR-SIG *is* UF-CMLA-*secure in the bounded-memory leakage model and the pseudo-random property holds for the* PRF, *then the* LR-AKE *protocol is* BBFL-eCK-*secure*.

*Let* *d* *be the number of parties, where each party* *U*_{i} *owns at most* *η* *protocol sessions. Let* 𝓐 *be an adversary against the above protocol* LR-AKE. *We construct attackers* 𝓑_{sig}, 𝓑_{nike} *and* 𝓑_{prf} *against the underlying leakage-resilient signature scheme, the leakage-resilient NIKE protocol (in the cases where a matching session exists and where no matching session exists, respectively) and the pseudo-random function, such that*

*Proof sketch*. To prove Theorem 5.5, we need to consider four types of attackers.

- An A1-type attacker never asks the EphemeralKeyReveal query for the test session. If there exists a partner to the test session, it will also never ask the EphemeralKeyReveal query for the partner session.
- An A2-type attacker never asks the EphemeralKeyReveal query for the test session. If there exists a partner to the test session, it also never asks the Corrupt query for the owner of the partner session.
- An A3-type attacker never asks the Corrupt query to the owner of the test session. If there exists a partner to the test session, it also never asks the EphemeralKeyReveal query to the partner session.
- An A4-type attacker never asks the Corrupt query to the owner of the test session. If there exists a partner to the test session, it also never asks the Corrupt query to the owner of the partner session.

Each legitimate attacker according to the freshness definition falls into at least one of these categories.

In the LR-AKE protocol, the session key is computed as

The main intuition behind this construction is that we need to reduce the indistinguishability of the shared key of LR-AKE to the indistinguishability of LR-NIKE. In this simulation, we can easily simulate the leakage by giving the adversary 𝓐 the leakage obtained from the underlying leakage-resilient NIKE challenger and the signature scheme challenger. In the security experiment against the leakage-resilient NIKE, the NIKE adversary gets two challenge public keys from the leakage-resilient NIKE challenger. In the reduction, we need to embed them into the view of the adversary 𝓐 in such a way that we can embed the leakage-resilient NIKE challenge key into one of the intermediate keys *k*_{⋅,⋅}, while successfully answering all the legitimate Corrupt and EphemeralKeyReveal queries.

An A1-type attacker never asks EphemeralKeyReveal queries to the test session and the partner to the test session. Thus it is possible to embed the public keys from the leakage-resilient NIKE challenger as the ephemeral public keys of the test session. Then use the challenge key from the leakage-resilient NIKE challenger as *k*_{tmp, tmp}.

For the case of an A2-type attacker, embed the public keys from the leakage-resilient NIKE challenger, one as the ephemeral public key and the other as the long-term public key of the test session, and use the challenge key as *k*_{tmp, nike}. Since this embedding involves a long-term secret of one party of the test session, and this long-term secret is used in many protocol executions involving the corresponding party, we need to use an additional PRF. Similarly, A3- and A4-type attackers can be handled by embedding the leakage-resilient NIKE challenger's public and challenge keys accordingly.

Thus the four attackers correspond to all possible combinations of Corrupt and EphemeralKeyReveal queries that are allowed in our BBFL-eCK security model. □

Our construction of a BBFL-eCK-secure AKE protocol is obtained by replacing the building blocks (i.e., the NIKE and one-time signature schemes) in Bergsma et al.'s framework [8] with a (bounded) leakage-resilient NIKE and a (bounded) leakage-resilient signature scheme. However, such a straightforward replacement of the underlying primitives with their corresponding leakage-resilient counterparts may encounter a subtle technical problem: the adversary can use the leakage oracle to break the authentication mechanism in the "test" session (which corresponds to breaking the UF-CMLA security of the signature scheme), or it can encode the (description of the) session-key derivation function in the leakage function to leak from the session key. However, in our BBFL-eCK security model the adversary is not allowed to query the leakage oracle during or after the test session, because we consider the setting of "before-the-fact" leakage in the current work. Hence the above impossibility result is avoided in our setting.

### 5.2.3 Leakage tolerance of the generic LR-AKE protocol

This generic protocol can tolerate the leakage according to the leakage tolerance of the underlying leakage-resilient NIKE and the leakage-resilient signature scheme. Our LR-NIKE can tolerate 1 − *o*(1) leakage, and the UF-CMLA signature scheme of Katz et al. [26] can tolerate *n* − *n*^{ε} leakage, for an *n*-bit key and 1 > *ε* > 0, which approaches 1 − *o*(1) leakage rate for sufficiently large *n* and small enough *ε*. Thus the corresponding instantiation can tolerate an overall leakage rate of 1 − *o*(1).

### 5.3 Leakage-resilient low-latency key exchange

Low-latency key exchange (LLKE) can be considered as one of the important practical usages of NIKE protocols, which permits the transmission of cryptographically protected data, without prior key exchange, while providing perfect forward secrecy (PFS). Leakage resiliency of LLKE remains unstudied.

### 5.3.1 Bounded-memory leakage LLKE-ma model

We refer to the security model under mutual authentication of Hale et al. [23, Section 5] as LLKE-ma model. In this work, we introduce a bounded-memory leakage model on top of the LLKE-ma model (we use the notation BL-LLKE-ma to identify our model whenever necessary).

Let *d* be the number of clients and *ℓ* the number of servers. Each client is represented by a collection of *n* oracles C_{i,1}, …, C_{i,n}, and each server is represented by a collection of *k* oracles S_{j,1}, …, S_{j,k}. Each oracle represents an instance of the protocol. Each principal has a long-term key pair (*sk*_{i}, *pk*_{i}). Let *κ* be the security parameter and *λ* the leakage parameter. Each oracle C_{i,s}, (*i*, *s*) ∈ [*d*] × [*n*] (or S_{j,t}, (*j*, *t*) ∈ [*ℓ*] × [*k*], respectively) maintains

- two variables $k_i^{\mathrm{tmp}}$ and $k_i^{\mathrm{main}}$ to store the temporary and main keys of a session,
- a variable Partner_{i} containing the identity of the intended communication partner,
- variables $M_{i,s}^{\mathrm{in}}$ and $M_{i,s}^{\mathrm{out}}$ containing the messages received and sent by the oracle.

### Adversarial powers

- Send(C_{i,s}/S_{j,t}, *m*): The adversary sends the message *m* to the requested oracle; the oracle processes *m* according to the protocol specification, and the response is returned to the adversary.
- Reveal(C_{i,s}/S_{j,t}, tmp/main): This query returns the key of the given stage if it has already been computed, or ⊥ otherwise.
- Corrupt(*i*/*j*): This query returns the long-term secret key of the client or the server accordingly. If Corrupt(*i*/*j*) is the *τ*-th query issued by the adversary, we say the party is *τ*-corrupted. For parties that are not corrupted, we define *τ* := ∞.
- Test(C_{i,s}/S_{j,t}, tmp/main): This query is used to test a key. If the variable for the requested key is not empty, the challenger chooses *b* $\stackrel{\$}{\leftarrow}$ {0, 1}; if *b* = 0, the requested key is returned, else a random key is returned. Otherwise, ⊥ is returned.
- Leakage(*i*/*j*, *f*_{i}): The leakage *f*_{i}(*sk*_{i/j}) is computed and returned to the adversary iff $\sum_{i} |f_i(sk_{i/j})| \leq \lambda$.

### BL-LLKE-*ma* security game

The adversary interacts with the challenger by issuing any combination of Send(), Corrupt(), Reveal() and Leakage() queries. At some point, the adversary issues a Test() query to an oracle that satisfies the conditions in Definition 5.7. Then the adversary may continue asking Send(), Corrupt(), Reveal() and Leakage() queries, without violating the conditions of Definition 5.7, and finally outputs an answer bit *b*′ for the challenge. 𝓐 wins if *b*′ = *b*. Let Succ_{𝓐} denote the event that the adversary 𝓐 wins the above security game.

(Leakage-resilient key security (under mutual authentication)). A protocol *π* is said to be *BL-LLKE-ma*-secure if there is no PPT adversary 𝓐 that can win the *BL-LLKE-ma* security game with non-negligible advantage, while holding the following conditions.

- All the conditions in [23, Definition 8] hold.
- Before activation of the test session on C_{i}, for all Leakage(*i*, *f*_{i}) queries, $\sum_{i} |f_i(sk_i)| \leq \lambda$, and before activation of the test session on S_{j}, for all Leakage(*j*, *f*_{i}) queries, $\sum_{i} |f_i(sk_j)| \leq \lambda$.
- After activation of the test session on C_{i}, no leakage is allowed from *sk*_{i} (and similarly for S_{j}).

The advantage of 𝓐 is defined as $\mathrm{Adv}^{\text{BL-LLKE-ma}}_{\pi}(\mathcal{A}) = \bigl|2\Pr[\mathrm{Succ}_{\mathcal{A}}] - 1\bigr|$.

### 5.3.2 Generic construction of BL-LLKE-ma-secure LLKE from NIKE

Hale et al. [23] use a CKS-light-secure NIKE scheme NIKE and a UF-CMA-secure signature scheme SIG. We simply replace these primitives with their respective leakage-resilient versions.

Let LR-NIKE = (NIKEcommon_setup, NIKEgen, NIKEkey) be a BLR-CKS-heavy-secure NIKE scheme, and let LR-SIG = (SIGkg, SIGsign, SIGvfy) be a UF-CMLA-secure signature scheme. Then we construct an LLKE protocol
Since the generic construction of the LLKE protocol is unchanged with respect to Hale et al., except that the underlying primitives are replaced by their leakage-resilient counterparts (in the bounded-memory leakage model), the resulting protocol retains LLKE-ma-style security, augmented with leakage resilience in the bounded-memory leakage model. Therefore, the security theorem and the flow of the security proof are similar to [23, Appendix 6.2, Theorem 2] and its proof.

*If the underlying NIKE protocol* LR-NIKE *is* BLR-CKS-heavy-*secure and the signature scheme* LR-SIG *is* UF-CMLA-*secure in the bounded-memory leakage model*, *then the* LR-LLKE *protocol is* BL-LLKE-ma-*secure*.

*Let* *d* *be the number of clients and* *ℓ* *the number of servers. Each client and each server is represented by a collection of* *n* *and* *k* *oracles, respectively. Let* 𝓐 *be an adversary against the above protocol* LR-LLKE. *We construct attackers* 𝓑_{sig} *and* 𝓑_{nike} *against the underlying leakage-resilient signature scheme and the leakage-resilient NIKE protocol such that*

*Proof sketch*. We distinguish between four different attackers:

- an A1-type attacker asks the Test query to a client oracle and the temporary key;
- an A2-type attacker asks the Test query to a client oracle and the main key;
- an A3-type attacker asks the Test query to a server oracle and the temporary key;
- an A4-type attacker asks the Test query to a server oracle and the main key.

The four different attackers correspond to all possible combinations of queries that are allowed in our BL-LLKE-ma security model. The four distinct lines of the equation in Theorem 5.8 correspond to each of the above cases, respectively. We can easily simulate the leakage by giving the adversary 𝓐 the leakage obtained from the underlying leakage-resilient NIKE challenger and the signature scheme challenger. Apart from that, the simulation is the same as that of Hale et al. [23]. □

### 5.3.3 Leakage tolerance of the generic LR-LLKE protocol

This generic protocol can tolerate the leakage according to the leakage tolerance of the underlying leakage-resilient NIKE and the leakage-resilient signature scheme. Our LR-NIKE can tolerate 1 − *o*(1) leakage, and the UF-CMLA signature scheme of Katz et al. [26] can tolerate *n* − *n*^{ε} leakage, for an *n*-bit key and 1 > *ε* > 0, which approaches 1 − *o*(1) leakage rate for sufficiently large *n* and small enough *ε*. Thus the corresponding instantiation can tolerate an overall leakage rate of 1 − *o*(1).

## 6 Conclusion and future works

Our work provides a new direction for constructing several leakage-resilient cryptographic primitives, such as leakage-resilient PKE schemes, AKE protocols and LLKE protocols, using leakage-resilient NIKE protocols as the main building block. Our construction of LR-NIKE in the bounded-leakage setting achieves an optimal leakage rate, i.e., 1 − *o*(1), and the resulting leakage-resilient constructions from it also preserve the same leakage rate, upon an appropriate choice of parameters. Our work also initiates the study of leakage-resilient LLKE protocols, and we hope it will stimulate further work in this area. We leave open the following main problems:

- construction of leakage-resilient NIKE in the (1 − *o*(1))-bounded-memory leakage model, without the leak-free hardware assumption;
- construction of leakage-resilient NIKE in the (1 − *o*(1))-continuous-memory leakage model in the non-split-state model.

## A.1 Basics of information theory

(Min-entropy). The min-entropy of a random variable *X*, denoted H_{∞}(*X*), is defined as H_{∞}(*X*) = −log(max_{x} Pr[*X* = *x*]). This is a standard notion of entropy used in cryptography, since it measures the worst-case predictability of *X*.
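For intuition, the definition can be evaluated directly on small explicit distributions; the following Python sketch (the `min_entropy` helper is ours, purely illustrative) computes H_{∞} for a uniform byte and for a heavily biased bit:

```python
import math

def min_entropy(dist):
    """H_inf(X) = -log2(max_x Pr[X = x]) for an explicit distribution."""
    return -math.log2(max(dist.values()))

# A uniform 8-bit value has exactly 8 bits of min-entropy ...
uniform8 = {x: 1 / 256 for x in range(256)}
# ... while a bit that is predictable with probability 0.9 has only
# -log2(0.9) ~ 0.152 bits, reflecting its worst-case predictability.
biased = {0: 0.9, 1: 0.1}
```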

(Average conditional min-entropy). The average conditional min-entropy of a random variable *X* conditioned on a (possibly) correlated variable *Z*, denoted H͠_{∞}(*X*|*Z*), is defined as

$$\widetilde{H}_{\infty}(X|Z) = -\log\Big(\mathbb{E}_{z \leftarrow Z}\big[\max_{x} \Pr[X = x \mid Z = z]\big]\Big) = -\log\Big(\mathbb{E}_{z \leftarrow Z}\big[2^{-H_{\infty}(X|Z=z)}\big]\Big).$$

This measures the worst-case predictability of *X* by an adversary that may observe a correlated variable *Z*.

The following bound on average min-entropy was proved by Dodis et al. [17].

([17]). *For any random variables* *X*, *Y* *and* *Z*, *if* *Y* *takes on values in* {0, 1}^{ℓ}, *then* H͠_{∞}(*X*|*Y*, *Z*) ≥ H͠_{∞}(*X*|*Z*) − ℓ.
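The bound can be checked numerically on a toy joint distribution; the sketch below (our own illustrative helper, not part of [17]) computes the average conditional min-entropy of a uniform 4-bit *X* given its least significant bit, and verifies that at most ℓ = 1 bit of entropy is lost:

```python
import math

def avg_cond_min_entropy(joint):
    """H~_inf(X|Z) = -log2( sum_z max_x Pr[X = x, Z = z] ),
    since sum_z Pr[Z=z] * max_x Pr[X=x|Z=z] = sum_z max_x Pr[X=x, Z=z].
    joint: dict mapping (x, z) -> Pr[X = x, Z = z]."""
    best = {}
    for (x, z), p in joint.items():
        best[z] = max(best.get(z, 0.0), p)
    return -math.log2(sum(best.values()))

# X uniform over 4-bit values, Y = least significant bit of X (1-bit leak).
joint = {(x, x & 1): 1 / 16 for x in range(16)}
hx = 4.0                                  # H_inf(X) for uniform 4-bit X
hxy = avg_cond_min_entropy(joint)         # equals 3.0 here
assert hxy >= hx - 1 - 1e-9               # the lemma's bound with ell = 1
```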

## A.2 Leakage-resilient (LR) chameleon hash functions

In this section, we give the definition of LR chameleon hash functions (CHF) in the *bounded* memory leakage model following [39].

**LR-CHF in the bounded leakage model**: Informally, a chameleon hash function (CHF) is a collision-resistant hash function with the additional property that collisions are easy to find given a trapdoor; without the trapdoor, it is hard to find any collision. Leakage-resilient chameleon hash functions (LR-CHF) require that it remains hard to find collisions even when the adversary learns bounded leakage about the secret key. Formally, a *λ*-LR-CHF ChamH : 𝓓 × 𝓡_{cham} → 𝓘, where 𝓓 is the domain, 𝓡_{cham} the randomness space and 𝓘 the range, consists of the algorithms (Cham.KeyGen, Cham.Eval, Cham.TCF).

- Cham.KeyGen(1^{κ}, *λ*): The key generation algorithm takes as input 1^{κ} and the leakage bound *λ*, and outputs an evaluation key along with a trapdoor, (*hk*, *ck*). The public key *hk* defines a chameleon hash function, denoted ChamH_{hk}( ⋅ , ⋅ ).
- Cham.Eval(*hk*, *m*, *r*): The hash evaluation algorithm takes as input *hk*, a message *m* ∈ 𝓓 and a randomizer *r* ∈ 𝓡_{cham}, and outputs a hash value *h* = ChamH_{hk}(*m*, *r*).
- Cham.TCF(*ck*, (*m*, *r*), *m*^{′}): The trapdoor collision-finder algorithm takes as input the trapdoor *ck*, a message-randomizer pair (*m*, *r*) and an additional message *m*^{′}, and outputs a value *r*^{′} ∈ 𝓡_{cham} such that ChamH_{hk}(*m*, *r*) = ChamH_{hk}(*m*^{′}, *r*^{′}).

A *λ*-LR-CHF must satisfy the following three properties.

- *Reversibility*: The reversibility property is satisfied if *r*^{′} = Cham.TCF(*ck*, (*m*, *r*), *m*^{′}) is equivalent to *r* = Cham.TCF(*ck*, (*m*^{′}, *r*^{′}), *m*).
- *Random trapdoor collisions*: The random trapdoor collision property is satisfied if, for a trapdoor *ck*, an arbitrary message pair (*m*, *m*^{′}) and a randomizer *r*, the value *r*^{′} = Cham.TCF(*ck*, (*m*, *r*), *m*^{′}) is uniformly distributed over the randomness space 𝓡_{cham}.
- *LR collision resistance*: The LR collision-resistance property is satisfied if, for any PPT adversary 𝓐, the following advantage is negligible:

$$\mathrm{Adv}^{\mathrm{coll}}_{\mathcal{A},\mathrm{ChamH}}(\kappa) = \Pr\big[(\mathrm{hk}, \mathrm{ck}) \leftarrow \text{Cham.KeyGen}(1^{\kappa}, \lambda);\ (m, r), (m^{\prime}, r^{\prime}) \leftarrow \mathcal{A}^{O^{\kappa,\lambda}_{\mathrm{ck}}}(\mathrm{hk}) : (m, r) \ne (m^{\prime}, r^{\prime})\ \text{and}\ \mathrm{ChamH}_{\mathrm{hk}}(m, r) = \mathrm{ChamH}_{\mathrm{hk}}(m^{\prime}, r^{\prime})\big],$$

where $O^{\kappa,\lambda}_{\mathrm{ck}}$ is the leakage oracle which 𝓐 can adaptively query to learn at most *λ* bits of information about the trapdoor *ck*.
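For intuition about the trapdoor collision-finder, the following toy Python sketch instantiates a discrete-log-style chameleon hash in the spirit of the classical (non-leakage-resilient) construction; the tiny parameters are insecure and chosen purely for illustration, and this is not the LR-CHF of [39]:

```python
# Toy chameleon hash ChamH_hk(m, r) = g^m * h^r mod p, with trapdoor
# ck = log_g h. Tiny INSECURE parameters, for illustration only.
p, q, g = 23, 11, 2          # g = 2 generates the order-q subgroup of Z_23^*
ck = 3                        # trapdoor
hk = pow(g, ck, p)            # evaluation key h = g^ck

def cham_eval(m: int, r: int) -> int:
    """Cham.Eval: hash the pair (m, r) under the public key hk."""
    return (pow(g, m, p) * pow(hk, r, p)) % p

def cham_tcf(m: int, r: int, m2: int) -> int:
    """Cham.TCF: given (m, r) and m2, solve m + ck*r = m2 + ck*r2 (mod q)
    for r2, so that ChamH(m, r) = ChamH(m2, r2).
    (pow(ck, -1, q) is the modular inverse; Python 3.8+.)"""
    return (r + (m - m2) * pow(ck, -1, q)) % q

r2 = cham_tcf(5, 2, 7)
assert cham_eval(5, 2) == cham_eval(7, r2)   # trapdoor collision
assert cham_tcf(7, r2, 5) == 2               # reversibility property
```

Without the trapdoor *ck*, finding such a collision would amount to computing a discrete logarithm in the subgroup.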

## A.3 Pseudo-random functions

*F* : Σ^{k} × Σ^{m} → Σ^{n} is an (*ε*_{prf}, *s*_{prf}, *q*_{prf})-secure pseudo-random function (PRF) if no adversary of size *s*_{prf} can distinguish *F* (instantiated with a random key) from a uniformly random function, i.e., for any 𝓐 of size *s*_{prf} making *q*_{prf} oracle queries, we have

$$\Big|\Pr_{k \leftarrow \Sigma^{k}}\big[\mathcal{A}^{F(k,\cdot)} = 1\big] - \Pr_{R \leftarrow R(m,n)}\big[\mathcal{A}^{R(\cdot)} = 1\big]\Big| \le \varepsilon_{\mathrm{prf}},$$

where *R*(*m*, *n*) is the set of all functions from Σ^{m} to Σ^{n}.
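As an illustration of the two oracles in this definition, the sketch below heuristically instantiates *F* with HMAC-SHA256 (a common PRF candidate; this is not a claim about concrete parameters) and contrasts it with a lazily sampled random function:

```python
import hmac, hashlib, os

def F(key: bytes, x: bytes) -> bytes:
    """Heuristic PRF candidate: HMAC-SHA256 (illustrative, no proof)."""
    return hmac.new(key, x, hashlib.sha256).digest()

class RandomFunction:
    """Lazily sampled uniform function, playing the role of R <- R(m, n)
    with n = 32 bytes: each fresh input gets an independent random output."""
    def __init__(self):
        self.table = {}
    def __call__(self, x: bytes) -> bytes:
        if x not in self.table:
            self.table[x] = os.urandom(32)
        return self.table[x]

# The PRF game gives the distinguisher oracle access to F(k, .) or R(.).
k = os.urandom(32)
R = RandomFunction()
# Both oracles answer repeated queries consistently, as the game requires:
assert F(k, b"query") == F(k, b"query")
assert R(b"query") == R(b"query")
```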

## A.4 UF-CMLA-secure signature schemes

We review the definition of UF-CMLA security according to Katz et al. [26]. The leakage function *f*_{i} is an adversary-chosen, efficiently computable, adaptive leakage function, which leaks *f*_{i}(*sk*) from a secret key *sk*.

(Unforgeability against chosen message leakage attacks (UF-CMLA)). Let *κ* be the security parameter and *λ* the leakage parameter. Let LR-SIG = (SIGkg, SIGsign, SIGvfy) be a signature scheme. We define the advantage of an adversary 𝓑_{sig} as its probability of winning the following game:

- (*sk*^{sig}, *pk*^{sig}) $\stackrel{\$}{\leftarrow}$ SIGkg(1^{κ}).
- (*m*^{*}, *σ*^{*}) ← 𝓑_{sig}^{𝓞( ⋅ , ⋅ )}(*pk*^{sig}).
- If SIGvfy(*pk*^{sig}, *m*^{*}, *σ*^{*}) = “true” and *m*^{*} has not been previously signed, then 𝓑_{sig} wins.

Oracle 𝓞(*m*, *f*_{i}):

- *σ* $\stackrel{\$}{\leftarrow}$ SIGsign(*sk*^{sig}, *m*).
- *γ*_{i} ← *f*_{i}(*sk*^{sig}).
- If ∑_{i} |*γ*_{i}| ≤ *λ*, then *γ* ← *γ*_{i}; otherwise *γ* ← ⊥.
- Return (*σ*, *γ*).

We say the signature scheme LR-SIG is *UF-CMLA*-secure if the above advantage is negligible in *κ*.

Katz et al. [26] constructed a UF-CMLA-secure signature scheme in the bounded leakage model in which *n* = 1. Its signing and verification operations are based on NIZK proofs: a signature can be generated at a cost of two exponentiations and verified at a cost of four exponentiations (with a simple NIZK proof).
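The bookkeeping of the oracle 𝓞(*m*, *f*_{i}) above can be sketched as follows. This is a toy illustration only: HMAC stands in for SIGsign (it is not the scheme of [26]), and the budget check mirrors the condition ∑_{i} |*γ*_{i}| ≤ *λ*:

```python
import hmac, hashlib

LAMBDA = 16  # leakage bound lambda in bits (illustrative value)

class SignLeakOracle:
    """Sketch of the UF-CMLA oracle O(m, f_i): signs m and answers the
    adaptive leakage query f_i(sk), enforcing sum_i |gamma_i| <= lambda."""
    def __init__(self, sk: bytes, leak_bound_bits: int):
        self.sk, self.budget = sk, leak_bound_bits
    def query(self, m: bytes, f):
        sigma = hmac.new(self.sk, m, hashlib.sha256).digest()
        gamma = f(self.sk)                 # adversary-chosen leakage function
        if len(gamma) * 8 <= self.budget:  # cumulative bound still respected
            self.budget -= len(gamma) * 8
            return sigma, gamma
        return sigma, None                 # budget exhausted: gamma = ⊥

oracle = SignLeakOracle(b"\x00" * 32, LAMBDA)
_, g1 = oracle.query(b"m1", lambda sk: sk[:1])   # 8 bits: within the bound
_, g2 = oracle.query(b"m2", lambda sk: sk[:2])   # 16 more bits: exceeds it
assert g1 is not None and g2 is None
```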

## B Survey of leakage models

During the past two decades, side-channel attacks have arisen as a popular method of attacking cryptographic systems. In order to abstractly model side-channel attacks and analyze the security of cryptographic primitives against them, cryptographers have proposed the notion of *leakage-resilient* cryptography, introducing various leakage models [2, 6, 29, 30].

In the work of Micali and Reyzin [30], a general framework was introduced to model the leakage that occurs during computation with secret parameters, widely known as the *only computation leaks information (OCLI)* axiom. The axiom states that leakage occurs only from the portions of secret memory that are actively involved in computation. The leakage amount is bounded per computation, though the adversary is allowed to obtain leakage from many computations; therefore, the overall leakage amount is unbounded. Since this assumption enforces that leakage occurs only due to computation, it does not cover attacks that exploit leakage from memory, such as malware attacks, cold-boot attacks, etc.

Inspired by cold-boot attacks, Akavia et al. [2] constructed a general framework to model bounded leakage attacks, widely known as the *bounded-memory leakage* model. The adversary chooses an arbitrary polynomial-time leakage function *f* and sends it to the leakage oracle, which returns *f*(*sk*) to the adversary, where *sk* is the secret key. The only restriction is that the sum of the output lengths of all the leakage functions that the adversary can obtain is bounded by some parameter *λ*, which is smaller than the size of *sk*. This leakage model does not address continuous leakage from memory, which can happen due to attacks such as malware attacks.

The works of Brakerski et al. [9] and Dodis et al. [14] presented the *continual-memory leakage* model, in which it is assumed that leakage happens from the entire secret memory. The other characteristics of this model are the same as those of the OCLI model. This leakage model is stronger than the OCLI model because the adversary can obtain leakage from the entire memory, regardless of computation.

Differently, Dodis et al. [16] introduced a leakage model where the adversary is allowed to obtain leakage in the form of any computationally uninvertible function of the secret key, given as auxiliary input. This model eliminates the leakage parameter and instead imposes a hardness requirement on the leakage function.

## C A leakage attack on the pairing-based NIKE of Freire et al. [19]

In this section, we show that the NIKE protocol of Freire et al. [19] from pairings in the standard model is completely insecure, even if the adversary is given only a *single* bit of leakage on the secret key. The attack exploits the fact that the adversary can ask for any arbitrary leakage function, as long as the output of the function is length-shrinking in its input. In particular, the secret key of a party in the NIKE protocol of Freire et al. [19] is a field element *x* ∈ ℤ_{p}, and one of the components of the public key is *Z* = *g*^{x}. The shared key between two parties ID_{i} and ID_{j} has the structure *e*(*S*^{x_{i}}, *Z*_{j}), where *S* is a public element, *Z*_{j} = *g*^{x_{j}}, and *x*_{i} and *x*_{j} are the secret keys of parties ID_{i} and ID_{j}, respectively.

Now, given the public key, the adversary can encode a function that leaks the hardcore bit of the discrete logarithm of *Z*. In other words, it can specify the leakage function so that it leaks exactly the most significant bit (MSB) of *x*; note that the MSB of *x* is a hardcore bit of the discrete logarithm function. So, with a single bit of leakage, the adversary can recover *x* completely, and hence distinguish the shared key from a random key with probability 1 and win the indistinguishability game. In fact, here, with only a single bit of leakage, the adversary can mount a *key recovery attack*, which is stronger than an attack on the indistinguishability game.
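The spirit of the attack can be illustrated on a stripped-down Diffie-Hellman analogue without pairings (our own simplification, with toy parameters): because the leakage function is arbitrary, it may compute on the secret key anything the party itself could, so even one leaked bit of the derived shared key already breaks indistinguishability:

```python
# Simplified DH analogue of the attack (no pairing). The adversary-chosen
# leakage function computes the shared key from the secret key and leaks
# a single bit of it. Tiny INSECURE parameters, for illustration only.
p, g = 23, 5                     # toy group; g = 5 has order 22 mod 23

x_i, x_j = 7, 9                  # secret keys of parties ID_i and ID_j
Z_j = pow(g, x_j, p)             # public key component of ID_j

def leak(sk_i: int) -> int:
    """Adversary-chosen 1-bit leakage function: low bit of the shared key."""
    return pow(Z_j, sk_i, p) & 1

real_key = pow(Z_j, x_i, p)      # shared key g^{x_i * x_j} mod p
leaked_bit = leak(x_i)
# The leaked bit always matches the real shared key, so the adversary wins
# the indistinguishability game whenever a random key's low bit differs.
assert leaked_bit == real_key & 1
```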


**Funding**: Janaka would like to acknowledge the university research grant URG/2018/19/E of the University of Peradeniya, Sri Lanka. The authors would like to acknowledge the support of the Indian Institute of Technology Madras and the University of Peradeniya for this collaborative work.

## References

- [1]↑
S. Agrawal, Y. Dodis, V. Vaikuntanathan and D. Wichs, On continual leakage of discrete log representations, in: Advances in Cryptology—ASIACRYPT 2013. Part II, Lecture Notes in Comput. Sci. 8270, Springer, Heidelberg (2013), 401–420.

- [2]↑
A. Akavia, S. Goldwasser and V. Vaikuntanathan, Simultaneous hardcore bits and cryptography against memory attacks, in: Theory of Cryptography, Lecture Notes in Comput. Sci. 5444, Springer, Berlin (2009), 474–495.

- [3]↑
J. Alawatugoda, C. Boyd and D. Stebila, Continuous after-the-fact leakage-resilient key exchange, in: Information Security and Privacy—ACISP 2014, Lecture Notes in Comput. Sci. 8544, Springer, Cham (2014), 258–273.

- [4]↑
J. Alawatugoda, D. Stebila and C. Boyd, Modelling after-the-fact leakage for key exchange, in: Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security—ASIACCS 2014, ACM, New York (2014), 207–216.

- [5]↑
J. Alawatugoda, D. Stebila and C. Boyd, Continuous after-the-fact leakage resilient eCK-secure key exchange, in: Cryptography and Coding, Lecture Notes in Comput. Sci. 9496, Springer, Cham (2015), 277–294.

- [6]↑
J. Alwen, Y. Dodis and D. Wichs, Leakage-resilient public-key cryptography in the bounded-retrieval model, in: Advances in Cryptology—CRYPTO 2009, Lecture Notes in Comput. Sci. 5677, Springer, Berlin (2009), 36–54.

- [7]↑
F. Benhamouda, G. Couteau, D. Pointcheval and H. Wee, Implicit zero-knowledge arguments and applications to the malicious setting, in: Advances in Cryptology—CRYPTO 2015. Part II, Lecture Notes in Comput. Sci. 9216, Springer, Heidelberg (2015), 107–129.

- [8]↑
F. Bergsma, T. Jager and J. Schwenk, One-round key exchange with strong security: an efficient and generic construction in the standard model, in: Public-Key Cryptography—PKC 2015, Lecture Notes in Comput. Sci. 9020, Springer, Heidelberg (2015), 477–494.

- [9]↑
Z. Brakerski, Y. T. Kalai, J. Katz and V. Vaikuntanathan, Overcoming the hole in the bucket: public-key cryptography resilient to continual memory leakage, in: 2010 IEEE 51st Annual Symposium on Foundations of Computer Science—FOCS 2010, IEEE Computer Soc., Los Alamitos, CA (2010), 501–510.

- [10]↑
S. Chakraborty, J. Alawatugoda and C. P. Rangan, Leakage-resilient non-interactive key exchange in the continuous-memory leakage setting, in: Provable Security, Lecture Notes in Comput. Sci. 10592, Springer, Cham (2017), 167–187.

- [11]↑
R. Chen, Y. Mu, G. Yang, W. Susilo and F. Guo, Strongly leakage-resilient authenticated key exchange, in: Topics in Cryptology—CT-RSA 2016, Lecture Notes in Comput. Sci. 9610, Springer, Cham (2016), 19–36.

- [12]↑
R. Chen, Y. Mu, G. Yang, W. Susilo and F. Guo, Strong authenticated key exchange with auxiliary inputs, Des. Codes Cryptogr. 85 (2017), no. 1, 145–173.

- [13]↑
W. Diffie and M. E. Hellman, New directions in cryptography, IEEE Trans. Inform. Theory IT-22 (1976), no. 6, 644–654.

- [14]↑
Y. Dodis, K. Haralambiev, A. López-Alt and D. Wichs, Cryptography against continuous memory attacks, in: 2010 IEEE 51st Annual Symposium on Foundations of Computer Science—FOCS 2010, IEEE Computer Soc., Los Alamitos, CA (2010), 511–520.

- [15]↑
Y. Dodis, K. Haralambiev, A. López-Alt and D. Wichs, Efficient public-key cryptography in the presence of key leakage, in: Advances in Cryptology—ASIACRYPT 2010, Lecture Notes in Comput. Sci. 6477, Springer, Berlin (2010), 613–631.

- [16]↑
Y. Dodis, Y. T. Kalai and S. Lovett, On cryptography with auxiliary input, in: STOC’09—Proceedings of the 2009 ACM International Symposium on Theory of Computing, ACM, New York (2009), 621–630.

- [17]↑
Y. Dodis, R. Ostrovsky, L. Reyzin and A. Smith, Fuzzy extractors: How to generate strong keys from biometrics and other noisy data, SIAM J. Comput. 38 (2008), no. 1, 97–139.

- [18]↑
S. Dziembowski and S. Faust, Leakage-resilient circuits without computational assumptions, in: Theory of Cryptography Conference, Springer, Berlin (2012), 230–247.

- [19]↑
E. S. V. Freire, D. Hofheinz, E. Kiltz and K. G. Paterson, Non-interactive key exchange, in: Public-Key Cryptography—PKC 2013, Lecture Notes in Comput. Sci. 7778, Springer, Heidelberg (2013), 254–271.

- [20]↑
A. Fujioka, K. Suzuki, K. Xagawa and K. Yoneyama, Strongly secure authenticated key exchange from factoring, codes, and lattices, Des. Codes Cryptogr. 76 (2015), no. 3, 469–504.

- [21]↑
D. Galindo, Boneh-Franklin identity based encryption revisited, in: Automata, Languages and Programming, Lecture Notes in Comput. Sci. 3580, Springer, Berlin (2005), 791–802.

- [22]↑
S. Goldwasser and G. N. Rothblum, Securing computation against continuous leakage, in: Advances in Cryptology—CRYPTO 2010, Lecture Notes in Comput. Sci. 6223, Springer, Berlin (2010), 59–79.

- [23]↑
B. Hale, T. Jager, S. Lauer and J. Schwenk, Speeding: On low-latency key exchange, preprint (2015), https://eprint.iacr.org/2015/1214.

- [24]↑
S. Halevi and H. Lin, After-the-fact leakage in public-key encryption, in: Theory of Cryptography, Lecture Notes in Comput. Sci. 6597, Springer, Heidelberg (2011), 107–124.

- [25]↑
A. Juma and Y. Vahlis, Protecting cryptographic keys against continual leakage, in: Advances in Cryptology—CRYPTO 2010, Lecture Notes in Comput. Sci. 6223, Springer, Berlin (2010), 41–58.

- [26]↑
J. Katz and V. Vaikuntanathan, Signature schemes with bounded leakage resilience, in: Advances in Cryptology—ASIACRYPT 2009, Lecture Notes in Comput. Sci. 5912, Springer, Berlin (2009), 703–720.

- [27]↑
E. Kiltz and K. Pietrzak, Leakage resilient ElGamal encryption, in: Advances in Cryptology—ASIACRYPT 2010, Lecture Notes in Comput. Sci. 6477, Springer, Berlin (2010), 595–612.

- [28]↑
B. LaMacchia, K. Lauter and A. Mityagin, Stronger security of authenticated key exchange, in: International Conference on Provable Security, Springer, Berlin (2007), 1–16.

- [29]↑
T. Malkin, I. Teranishi, Y. Vahlis and M. Yung, Signatures resilient to continual leakage on memory and computation, in: Theory of Cryptography, Lecture Notes in Comput. Sci. 6597, Springer, Heidelberg (2011), 89–106.

- [30]↑
S. Micali and L. Reyzin, Physically observable cryptography (extended abstract), in: Theory of Cryptography, Lecture Notes in Comput. Sci. 2951, Springer, Berlin (2004), 278–296.

- [31]↑
H. Morita, J. C. N. Schuldt, T. Matsuda, G. Hanaoka and T. Iwata, On the security of non-interactive key exchange against related-key attacks, IEICE Trans. Fundam. Electron. Comm. Comput. Sci. 100 (2017), no. 9, 1910–1923.

- [32]↑
D. Moriyama and T. Okamoto, Leakage resilient eck-secure key exchange protocol without random oracles, in: Proceedings of the 6th ACM Symposium on Information, Computer and Communications Security—ASIACCS 2011, ACM, New York (2011), 441–447.

- [33]↑
M. Naor and G. Segev, Public-key cryptosystems resilient to key leakage, in: Advances in Cryptology—CRYPTO 2009, Lecture Notes in Comput. Sci. 5677, Springer, Berlin (2009), 18–35.

- [34]↑
N. Nisan and D. Zuckerman, Randomness is linear in space, J. Comput. System Sci. 52 (1996), no. 1, 43–52.

- [35]↑
B. Qin and S. Liu, Leakage-resilient chosen-ciphertext secure public-key encryption from hash proof system and one-time lossy filter, in: International Conference on the Theory and Application of Cryptology and Information Security, Springer, Heidelberg (2013), 381–400.

- [36]↑
B. Qin and S. Liu, Leakage-flexible CCA-secure public-key encryption: Simple construction and free of pairing, in: Public-Key Cryptography—PKC 2014, Lecture Notes in Comput. Sci. 8383, Springer, Heidelberg (2014), 19–36.

- [38]↑
V. Shoup, Sequences of games: A tool for taming complexity in security proofs, preprint (2004).

- [39]↑
Y. Wang and K. Tanaka, Generic transformation to strongly existentially unforgeable signature schemes with leakage resiliency, in: Provable Security, Lecture Notes in Comput. Sci. 8782, Springer, Cham (2014), 117–129.

- [40]↑
J.-D. Wu, Y.-M. Tseng, S.-S. Huang and W.-C. Chou, Leakage-resilient certificateless key encapsulation scheme, Informatica (Vilnius) 29 (2018), no. 1, 125–155.

## Footnotes

^{1}

Note that this circumvents the impossibility result of Dodis et al. [15] since the analysis of [15] considered the fact that the LR-IND-CCA-secure PKE was constructed solely from HPS; whereas, in [35, 36], they do not solely use HPS and instead rely on both HPS and OTLF for their construction.

^{3}

Jumping ahead, in our LR-NIKE protocol, the leak-free hardware is only used to store a short seed used for randomness extraction.

^{4}

Note that, we are *not* in the identity-based setting.

^{5}

In our construction, we use the leak-free hardware only to store a short seed used for randomness extraction.

^{6}

In our construction, the oracle tape generates a short random string and stores it as a response. When required, a party can look up the contents of its oracle tape and use the string as a seed for randomness extraction in the LR-NIKE protocol.

^{7}

In our LR-NIKE, the public key is *pk* = (*pk*_{1}, *pk*_{2}, *pk*_{3}, *pk*_{4}, *pk*_{5}), and *F*(*pk*) = *pk*_{5}.