Open Access Published by De Gruyter February 19, 2019

# Generic constructions of PoRs from codes and instantiations

Julien Lavauzelle and Françoise Levy-dit-Vehel

# Abstract

In this paper, we show how to construct – from any linear code – a Proof of Retrievability ( 𝖯𝗈𝖱 ) which features very low computation complexity on both the client ( 𝖵𝖾𝗋𝗂𝖿𝗂𝖾𝗋 ) and the server ( 𝖯𝗋𝗈𝗏𝖾𝗋 ) sides, as well as small client storage (typically 512 bits). We adapt the security model initiated by Juels and Kaliski [PoRs: Proofs of retrievability for large files, Proceedings of the 2007 ACM Conference on Computer and Communications Security—CCS 2007, ACM, New York 2007, 584–597] to fit into the framework of Paterson, Stinson and Upadhyay [A coding theory foundation for the analysis of general unconditionally secure proof-of-retrievability schemes for cloud storage, J. Math. Cryptol. 7 2013, 3, 183–216], from which our construction evolves. We thus provide a rigorous treatment of the security of our generic design; more precisely, we sharply bound the extraction failure of our protocol according to this security model. Next we instantiate our formal construction with codes built from tensor-products as well as with Reed–Muller codes and lifted codes, yielding 𝖯𝗈𝖱 s with moderate communication complexity and (server) storage overhead, in addition to the aforementioned features.

MSC 2010: 11T71

## 1 Introduction

### 1.1 Motivation

Cloud computing and storage have evolved quite spectacularly over the past decade. In particular, data outsourcing allows users and companies to lighten their storage burden and maintenance costs. However, it raises several issues: for example, how can someone efficiently check that a massive file, which he uploaded to a distant server and then erased from his personal system, can be retrieved without any loss?

Proofs of retrievability (𝖯𝗈𝖱s) address this issue. They are cryptographic protocols involving two parties: a client (or verifier) and a server (or prover). 𝖯𝗈𝖱s usually consist of the following phases. First, a key generation process creates secret material related to the file, meant to be kept by the client only. Then the file is initialised, that is, it is encoded and/or encrypted according to the secret data held by the client. This processed file is uploaded to the server. In order to check retrievability, the client can run a verification procedure, which is the core of the 𝖯𝗈𝖱. Finally, if the client is convinced that the server still holds his file, the client can proceed at any time to the extraction of the file.

Several parameters must be taken into account. Plainly, the verification process has to feature a low communication complexity, as the main goal is to avoid downloading a large part of the file to only check its extractability. Second, the storage overhead induced by the protocol must be low, as large server overhead would imply high fees for the customer. Third, the computation cost of the verification procedure must be low, both for the client (which is likely to own a lightweight device) and the server (whose computation work could also be expensive for the client).

Notice that proofs of data possession ( 𝖯𝖣𝖯 ) represent protocols close to what is needed in 𝖯𝗈𝖱 s. However, in 𝖯𝖣𝖯 s, one does not require the client to be able to extract the file from the server. Instances of 𝖯𝖣𝖯 s are given by Ateniese et al. [2]. Besides, protocols of Lillibridge et al. [8] and Naor and Rothblum [10] are very often seen as precursors for 𝖯𝗈𝖱 s. For instance, the work of Naor and Rothblum [10] considers a setting in which the client directly accesses the file stored by the prover/server (while the actual 𝖯𝗈𝖱 definition uses “an arbitrary program as opposed to a simple memory layout and this program may answer these questions in an arbitrary manner” [14]).

### 1.2 Previous work

Juels and Kaliski [6] gave the first formal definition of 𝖯𝗈𝖱s. They also proposed a first construction based on so-called sentinels (namely, random parts of the file to be checked during the verification step) that the client secretly keeps on his device. Additionally, an erasure code ensures the integrity of the file to be extracted. This seminal work also raised several interesting points. On the one hand, it revealed that (i) the client must store secret data to be used in the verification step and (ii) coding is needed in order to retrieve the file without erasures or errors. On the other hand, in Juels and Kaliski's construction, the verification step can only be performed a finite number of times, since sentinels cannot be reused endlessly.

As a consequence, Shacham and Waters proposed to consider unbounded-use 𝖯𝗈𝖱 s in [14], where they built two kinds of 𝖯𝗈𝖱 s. The first one is based on linear combinations of authenticators produced via pseudo-random functions; its security was proved using cryptographic tools such as unforgeable MAC scheme, semantically secure symmetric encryption and secure PRFs. The second one is a publicly verifiable scheme based on the Diffie–Hellman problem in bilinear groups.

Bowers, Juels and Oprea [3] adopted a coding-theoretic approach (inner code, outer code) to compare variants of Shacham–Waters and Juels–Kaliski schemes. They focused on the efficiency of the schemes, and proved that, despite bounded use, new variants of Juels–Kaliski construction are highly competitive compared to other existing schemes.

In [11], Paterson, Stinson and Upadhyay provide a general framework for 𝖯𝗈𝖱s in the unconditional security model. They show that retrievability of the file can be expressed as error correction of a so-called response code. This allows them to precisely quantify the extraction success as a function of the success probability of a proving algorithm: indeed, in this setting, extraction can be naturally seen as nearest-neighbour decoding in the response code. They notably apply their framework to prove the security of a modified version of the Shacham–Waters scheme. Also, notice that, prior to [11], Dodis, Vadhan and Wichs [4] proposed another coding-theoretic model for 𝖯𝗈𝖱s that allowed them to build efficient bounded-use and unbounded-use 𝖯𝗈𝖱 schemes.

With practicality in mind, other features have been deployed on 𝖯𝗈𝖱 s. For instance, Wang et al. [15] presented a 𝖯𝗈𝖱 construction based on Merkle hash trees, which allows efficient file updates on the server. Their scheme is provably secure under cryptographic assumptions (hardness of Diffie–Hellman in bilinear groups, unforgeable signatures, etc.) and has been improved by Mo, Zhou and Chen [9] in order to prevent unbalanced trees. More recently, other features have been proposed for 𝖯𝗈𝖱 s, such as multi-prover 𝖯𝗈𝖱 s (see [12]) or public verifiability (for instance in [13]).

### 1.3 Our approach

As we remarked before, most 𝖯𝗈𝖱 schemes rely on two techniques: (i) the client locally stores secret data in order to check the integrity of the file, and (ii) the client encodes the file in order to repair a small number of erasures and errors that could have been missed during the verification step.

In this work, we propose to build 𝖯𝗈𝖱 schemes using codes that fulfil the two previous goals, when equipped with a suitable family of efficiently computable random permutations. More precisely, our idea is the following. Given a file F, a code 𝒞 and a family of random permutations σ_K, the client sends to the server an encoded and scrambled version σ_K(𝒞(F)) of his file. Then the verification step consists in checking “short” relations among descrambled symbols of w = 𝒞(F), which come, for instance, from low-weight parity-check equations for 𝒞. Moreover, during the extraction step, the code 𝒞 provides the redundancy necessary to repair erasures and potential unnoticed errors.

In the present work, we develop a seminal idea that appeared in [7], where the authors proposed a construction of 𝖯𝗈𝖱 s based on lifted codes. We here provide a more generic construction and give a deeper analysis of its security.

While our scheme features neither updatability nor public verifiability, we emphasise the genericity of our construction, which is based on well-studied algebraic and combinatorial structures, namely, codes and their parity-check equations. Moreover, since the code 𝒞 is public, the client only needs to store the secret material associated to the random permutations σ_K, which consists of a few bytes. Besides, an honest server simply needs to read pieces of w during the verification step, and therefore has a very low computational burden compared to many other 𝖯𝗈𝖱 schemes.

### 1.4 Organisation

Section 2 is devoted to the definition and security model of proofs of retrievability. Despite the great disparity of models in the 𝖯𝗈𝖱 literature, we try to stay close to the definitions given in [6, 11] for the sake of uniformity.

Section 3 presents our construction of 𝖯𝗈𝖱 . Precisely, in Section 3.1, we introduce objects called verification structures for a code 𝒞 that will be used in the definition of our 𝖯𝗈𝖱 scheme (Section 3.2). A rigorous analysis of our scheme is the purpose of the remainder of that section.

The performance of our generic construction is given in Section 4. We then provide several instances in Section 5, proving the practicality of our 𝖯𝗈𝖱 schemes for some classes of codes.

## 2 Proofs of retrievability

### 2.1 Definition of underlying protocols

We recall that, in proofs of retrievability, a user wants to estimate whether a message m can be retrieved from an encoded version w of the message stored on a server. In all that follows, the user will be known as the 𝖵𝖾𝗋𝗂𝖿𝗂𝖾𝗋 (who wants to verify the retrievability of the message), while the server is the 𝖯𝗋𝗈𝗏𝖾𝗋 (who aims at proving the retrievability). The message space is denoted by ℳ, while 𝒲, the (server) file space, is the set of encoded versions of the messages. We also denote by 𝒦 the set of secret values (or keys) kept by the 𝖵𝖾𝗋𝗂𝖿𝗂𝖾𝗋, and by ℛ the space of responses to challenges.

Throughout the paper, the symbols ←_R and ← respectively denote the output of randomised and deterministic algorithms.

### Definition 2.1.

A keyed proof of retrievability ( 𝖯𝗈𝖱 ) is a tuple of algorithms ( 𝖪𝖾𝗒𝖦𝖾𝗇 , 𝖨𝗇𝗂𝗍 , 𝖵𝖾𝗋𝗂𝖿𝗒 , 𝖤𝗑𝗍𝗋𝖺𝖼𝗍 ) running as follows:

1. (1)

The key generation algorithm 𝖪𝖾𝗒𝖦𝖾𝗇 generates uniformly at random a key κ ←_R 𝒦. The key κ is secretly kept by the 𝖵𝖾𝗋𝗂𝖿𝗂𝖾𝗋.

2. (2)

The initialisation algorithm 𝖨𝗇𝗂𝗍 is a deterministic algorithm which takes, as input, a message m ∈ ℳ and a key κ ∈ 𝒦, and outputs a file w ∈ 𝒲. 𝖨𝗇𝗂𝗍 is run by the 𝖵𝖾𝗋𝗂𝖿𝗂𝖾𝗋, which initially holds the message m. After the process, the file w is sent to the 𝖯𝗋𝗈𝗏𝖾𝗋, and the message m is erased on the 𝖵𝖾𝗋𝗂𝖿𝗂𝖾𝗋's side. Upon receipt of w, the 𝖯𝗋𝗈𝗏𝖾𝗋 sets a deterministic algorithm 𝖯(w) that will be run during the verification procedure.

3. (3)

The verification algorithm 𝖵𝖾𝗋𝗂𝖿𝗒 is a randomised algorithm initiated by the 𝖵𝖾𝗋𝗂𝖿𝗂𝖾𝗋, which needs a secret key κ ∈ 𝒦 and interacts with the 𝖯𝗋𝗈𝗏𝖾𝗋. 𝖵𝖾𝗋𝗂𝖿𝗒 is depicted in Figure 1 and works as follows:

1. (i)

the 𝖵𝖾𝗋𝗂𝖿𝗂𝖾𝗋 runs a random query generator that outputs a challenge u ←_R 𝒬 (the set 𝒬 being the so-called query set);

2. (ii)

the challenge u is sent to the 𝖯𝗋𝗈𝗏𝖾𝗋 ;

3. (iii)

the 𝖯𝗋𝗈𝗏𝖾𝗋 outputs a response r_u ← 𝖯(w)(u);

4. (iv)

the 𝖵𝖾𝗋𝗂𝖿𝗂𝖾𝗋 checks the validity of r_u according to u and κ; the algorithm 𝖵𝖾𝗋𝗂𝖿𝗒 finally outputs the Boolean value 𝖢𝗁𝖾𝖼𝗄(u, r_u, κ).

4. (4)

The extraction algorithm 𝖤𝗑𝗍𝗋𝖺𝖼𝗍 is run by the 𝖵𝖾𝗋𝗂𝖿𝗂𝖾𝗋. It takes, as input, κ and r = (r_u : u ∈ 𝒬) ∈ ℛ^𝒬, and outputs either a message m′ or a failure symbol ⊥. We say that extraction succeeds if 𝖤𝗑𝗍𝗋𝖺𝖼𝗍(r, κ) = m.

The vector r = (r_u ← 𝖯(w)(u))_{u ∈ 𝒬} ∈ ℛ^𝒬 is called the response word associated to 𝖯(w).

### Figure 1

Definition of the algorithm 𝖵𝖾𝗋𝗂𝖿𝗒 .

Note that, in assuming that the response algorithm 𝖯 ( w ) is deterministic and non-adaptive[1], we follow the work of Paterson, Stinson and Upadhyay [11]. The authors justify determinism of response algorithms by the fact that any probabilistic prover can be replaced by a deterministic prover whose success probability is at least as good as the probabilistic one.

In Definition 2.1, we can see that a deterministic algorithm 𝖯(w) can be represented by the vector of its outputs r = (𝖯(w)(u))_{u ∈ 𝒬}, called the response word of 𝖯(w). Therefore, we can assume that, before the verification step, the 𝖯𝗋𝗈𝗏𝖾𝗋 produces a word r(w) ∈ ℛ^𝒬 related to the file w he holds. In other words, we model provers as algorithms 𝖯 which, given as input w, return a word r ∈ ℛ^𝒬.

Following [11], we also assume in this paper that the extraction algorithm 𝖤𝗑𝗍𝗋𝖺𝖼𝗍 is deterministic, though, in general, it can be randomised. Finally, notice that proofs of retrievability aim at proving the extractability of a file. The extraction algorithm is therefore a tool to retrieve the whole file; hence its computational efficiency is not a crucial feature.
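The flow of Definition 2.1 can be made concrete with a toy instantiation in the spirit of the construction to come (all names and the encoding are hypothetical stand-ins: the "code" is a length-2 repetition of each byte and the scrambling is a key-derived XOR mask; a real scheme would use the codes and permutations of Section 3):

```python
import hashlib
import secrets

# Toy PoR sketch: KeyGen / Init / Check, with hypothetical stand-in algorithms.
# Encoding: each message byte is repeated twice (a length-2 repetition code);
# scrambling: each position is XOR-masked by a pad derived from the key κ.

def pad(kappa, i):
    """Position-dependent one-byte pad derived from κ (illustrative choice)."""
    return hashlib.sha256(f"{kappa}:{i}".encode()).digest()[0]

def key_gen():
    return secrets.randbits(128)                          # κ ∈ 𝒦

def init(m, kappa):
    c = [b for b in m for _ in range(2)]                  # 𝒞(m): repetition code
    return [b ^ pad(kappa, i) for i, b in enumerate(c)]   # w: scrambled codeword

def check(u, r_u, kappa):
    """A challenge u = (2j, 2j+1) asks for both copies of message byte j;
    the descrambled copies must agree (a weight-2 parity check)."""
    i0, i1 = u
    return r_u[0] ^ pad(kappa, i0) == r_u[1] ^ pad(kappa, i1)

kappa = key_gen()
w = init(b"hello", kappa)
honest = lambda u: (w[u[0]], w[u[1]])                     # P(w) just reads w
assert all(check(u, honest(u), kappa) for u in ((2*j, 2*j + 1) for j in range(5)))
```

Here the client stores only κ, and an honest server merely reads two symbols of w per audit, which mirrors the low server cost discussed in the introduction.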

Table 1 summarises the information held by each entity after the initialisation step. Table 2 reports the inputs and outputs of the algorithms involved in a 𝖯𝗈𝖱 .

Table 1

Information held by each entity after the initialisation step.

| 𝖵𝖾𝗋𝗂𝖿𝗂𝖾𝗋 | 𝖯𝗋𝗈𝗏𝖾𝗋 |
| --- | --- |
| κ | w |
Table 2

Inputs and outputs of the algorithms involved in a 𝖯𝗈𝖱 .

| Algorithm | 𝖪𝖾𝗒𝖦𝖾𝗇 | 𝖨𝗇𝗂𝗍 | 𝖵𝖾𝗋𝗂𝖿𝗒 | 𝖢𝗁𝖾𝖼𝗄 | 𝖤𝗑𝗍𝗋𝖺𝖼𝗍 |
| --- | --- | --- | --- | --- | --- |
| Input | 1^λ | m, κ | r, κ | u, r_u, κ | r, κ |
| Output | κ | w | True or False | True or False | m′ or ⊥ |

### 2.2 Security models

One should first notice that, despite many efforts, proofs of retrievability lack a general agreement on the definition of their security model. Nevertheless, our definitions remain very close to the ones given in the original work of Juels and Kaliski [6].

For a response word r ∈ ℛ^𝒬 given by the 𝖯𝗋𝗈𝗏𝖾𝗋 and a key κ ∈ 𝒦 kept by the 𝖵𝖾𝗋𝗂𝖿𝗂𝖾𝗋, we first define the success of r according to κ as

succ(r, κ) := Pr_u(𝖢𝗁𝖾𝖼𝗄(u, r_u, κ) = 𝚃𝚛𝚞𝚎),

where the probability is taken over the internal randomness of 𝖵𝖾𝗋𝗂𝖿𝗒 . A first security model can be defined as follows.

### Definition 2.2 (Security model, strong version).

Let ε, τ ∈ [0, 1]. A proof of retrievability (𝖪𝖾𝗒𝖦𝖾𝗇, 𝖨𝗇𝗂𝗍, 𝖵𝖾𝗋𝗂𝖿𝗒, 𝖤𝗑𝗍𝗋𝖺𝖼𝗍) is strongly (ε, τ)-sound if, for every initial file m ∈ ℳ, every uploaded file w ∈ 𝒲 and every prover 𝖯 : 𝒲 → ℛ^𝒬, we have

(2.1) Pr( 𝖤𝗑𝗍𝗋𝖺𝖼𝗍(r, κ) ≠ m and succ(r, κ) ≥ 1 - ε | κ ←_R 𝖪𝖾𝗒𝖦𝖾𝗇(1^λ), w ← 𝖨𝗇𝗂𝗍(m, κ), r ← 𝖯(w) ) ≤ τ,

the probability being taken over the internal randomness of 𝖪𝖾𝗒𝖦𝖾𝗇 under the constraint that w = 𝖨𝗇𝗂𝗍 ( m , κ ) .

#### A remark concerning parameters ε and τ

In proofs of retrievability, we aim at making the extraction of the desired file m as sure as possible when the audit succeeds. Hence it is desirable to have τ small. On the other hand, the parameter ε measures the rate of unsuccessful audits beyond which the 𝖵𝖾𝗋𝗂𝖿𝗂𝖾𝗋 is led to believe that the extraction will fail. Therefore, one does not necessarily need to look for large values of ε, though, in practice, a large ε affords more flexibility, for instance, when communication errors occur between the 𝖯𝗋𝗈𝗏𝖾𝗋 and the 𝖵𝖾𝗋𝗂𝖿𝗂𝖾𝗋 during the verification procedure.

Definition 2.2 provides a strong security model, in the sense that (i) it does not require any bound on the response algorithms given by the 𝖯𝗋𝗈𝗏𝖾𝗋 and (ii) the probability in (2.1) is taken for a fixed message m (informally, this means the 𝖯𝗋𝗈𝗏𝖾𝗋 knows m).

However, keyed proofs of retrievability are usually insecure according to the security model given in Definition 2.2. For instance, in [11], Paterson, Stinson and Upadhyay noticed that in the Shacham–Waters scheme [14], given the knowledge of m and w, an unbounded 𝖯𝗋𝗈𝗏𝖾𝗋 may be able to

1. (i)

compute (or at least randomly guess) a key κ′ such that 𝖨𝗇𝗂𝗍(m, κ′) = w,

2. (ii)

build m′ ≠ m such that 𝖨𝗇𝗂𝗍(m′, κ′) = w,

3. (iii)

set 𝖯(w) = r which (a) successfully passes every audit and (b) leads to the extraction of m′ ≠ m.

Hence we choose to use a weaker but still realistic security model, where, informally, the 𝖯𝗋𝗈𝗏𝖾𝗋 only knows what he stores (that is, w) and has no information on the initial message m. The following security model thus remains consistent with the one given by Paterson, Stinson and Upadhyay [11].

#### Definition 2.3 (Security model, weak version).

Let ε, τ ∈ [0, 1]. A proof of retrievability (𝖪𝖾𝗒𝖦𝖾𝗇, 𝖨𝗇𝗂𝗍, 𝖵𝖾𝗋𝗂𝖿𝗒, 𝖤𝗑𝗍𝗋𝖺𝖼𝗍) is weakly (ε, τ)-sound (or simply (ε, τ)-sound) if, for every polynomial-time prover 𝖯 : 𝒲 → ℛ^𝒬 and every uploaded file w ∈ 𝒲, we have

(2.2) Pr( 𝖤𝗑𝗍𝗋𝖺𝖼𝗍(r, κ) ≠ m and succ(r, κ) ≥ 1 - ε | m ←_R ℳ, κ ←_R 𝖪𝖾𝗒𝖦𝖾𝗇(1^λ), w ← 𝖨𝗇𝗂𝗍(m, κ), r ← 𝖯(w) ) ≤ τ.

In equation (2.2), the randomness comes from pairs (m, κ) ∈ ℳ × 𝒦 picked uniformly at random among those satisfying w = 𝖨𝗇𝗂𝗍(m, κ).

Since we deal with values of τ very close to 0, we also say that a strongly (ε, τ)-sound 𝖯𝗈𝖱 admits λ = -log₂(τ) bits of security against ε-adversaries.

Informally, saying that a 𝖯𝗈𝖱 is not weakly sound amounts to finding a polynomial-time deterministic algorithm 𝖯 which

• takes, as input, a file w ∈ 𝒲 and outputs a response word r ∈ ℛ^𝒬,

• makes the extraction fail with non-negligible probability (over messages m and keys κ such that the corresponding response words are successfully audited).

## 3 Our generic construction

Schematically, in the initialisation phase of our construction, the 𝖵𝖾𝗋𝗂𝖿𝗂𝖾𝗋

1. (i)

encodes his file according to a code 𝒞 ,

2. (ii)

scrambles the resulting codeword using a tuple of permutations over the base field,

3. (iii)

uploads the result to the 𝖯𝗋𝗈𝗏𝖾𝗋 .

As we explained in the introduction, the verification step then consists in checking that the server is still able to give answers that, once descrambled, satisfy low-weight parity-check equations for 𝒞 .

For this purpose, we next introduce objects called verification structures for codes, which will be used in the definition of our generic 𝖯𝗈𝖱 scheme.

### 3.1 Verification structures: A tool for our PoR scheme

We here consider 𝔽_q, the finite field with q elements. Following well-known coding theory terminology, the support of a word w ∈ 𝔽_q^n is supp(w) := {i ∈ [1, n], w_i ≠ 0}, and its weight is wt(w) := |supp(w)|.

In this work, we need to consider codes whose alphabets are finite-dimensional spaces over 𝔽_q, typically Σ = 𝔽_q^s. Precisely, a code 𝒞 of length n over Σ is a subset of Σ^n. A code 𝒞 ⊆ Σ^n is 𝔽_q-linear if 𝒞 is a vector space over 𝔽_q. When Σ = 𝔽_q, we get the usual definition of linear codes over finite fields. Unless stated otherwise, we only consider 𝔽_q-linear codes, which we simply refer to as codes.

We usually denote by k the dimension over 𝔽_q of a code 𝒞. Its minimum distance d_min(𝒞) is the smallest Hamming distance between two distinct codewords. If n is the length of 𝒞, then d_min(𝒞)/n ∈ [0, 1] is the relative minimum distance of the code 𝒞, while k/n represents its rate. If 𝒞 ⊆ 𝔽_q^n, its dual code 𝒞^⊥ is defined as {h ∈ 𝔽_q^n, Σ_{i=1}^n h_i c_i = 0 for all c ∈ 𝒞}. Codewords in 𝒞^⊥ are also called parity-check equations for 𝒞.

### Definition 3.1 (Verification structure).

Let 1 ≤ ℓ ≤ n and let 𝒞 ⊆ 𝔽_q^n be a code. Let also 𝒬 be a non-empty set of ℓ-subsets of [1, n]. Set ℛ = 𝔽_q^ℓ. We define the restriction map R associated to 𝒬 as

R : 𝒬 × 𝔽_q^n → ℛ,
(u, w) ↦ w|_u.

Given an integer s ≥ 1 and a map V : 𝒬 × ℛ → 𝔽_q^s, we say that (𝒬, V) is a verification structure for 𝒞 if the following holds:

1. (1)

For all i ∈ [1, n], there exists u ∈ 𝒬 such that i ∈ u.

2. (2)

For all u ∈ 𝒬, the map 𝔽_q^n → 𝔽_q^s given by a ↦ V(u, R(u, a)) is surjective and vanishes on the code 𝒞. Explicitly,

V(u, R(u, c)) = 0 for all c ∈ 𝒞.

The map V is then called a verification map for 𝒞, and the set 𝒬 a query set for 𝒞. By convention, for w ∈ 𝔽_q^n and r ∈ ℛ^𝒬, we define

R(w) := (R(u, w) : u ∈ 𝒬) ∈ ℛ^𝒬,  V(r) := (V(u, r_u) : u ∈ 𝒬) ∈ (𝔽_q^s)^𝒬.

Finally, the code R(𝒞) := {R(c), c ∈ 𝒞} is called the response code of 𝒞.

### Example 3.2 (Fundamental example).

Let 𝒞 be a code, and let ℋ be a set of parity-check equations for 𝒞 of Hamming weight ℓ whose supports are pairwise distinct. Define the query set 𝒬 = {supp(h), h ∈ ℋ} and, for any u ∈ 𝒬, let h(u) be the unique parity-check equation in ℋ whose support is u. Finally, we define a map V by

V : 𝒬 × 𝔽_q^ℓ → 𝔽_q,  (u, r) ↦ Σ_{i=1}^ℓ h(u)_{u_i} r_i.

Notice that we set s = 1 here. By construction, it is clear that ( 𝒬 , V ) is a verification structure for 𝒞 .

### Example 3.3 (Toy example).

Let 𝒞 ⊆ 𝔽_2^7 be a binary Hadamard code of length n = 7 and dimension k = 3. In other words, 𝒞 is defined by a parity-check matrix

H = ( 1 1 1 0 0 0 0
      1 0 0 1 1 0 0
      1 0 0 0 0 1 1
      0 1 0 0 1 1 0
      0 1 0 1 0 0 1
      0 0 1 1 0 1 0
      0 0 1 0 1 0 1 ).

According to Example 3.2, we define 𝒬 to be the set of supports of rows of H. In other words,

𝒬 = { { 1 , 2 , 3 } , { 1 , 4 , 5 } , { 1 , 6 , 7 } , { 2 , 5 , 6 } , { 2 , 4 , 7 } , { 3 , 4 , 6 } , { 3 , 5 , 7 } } .

Then the verification map V : 𝒬 × 𝔽_2^3 → 𝔽_2 can be defined as follows. If u = {u_1, u_2, u_3} ∈ 𝒬 and b ∈ 𝔽_2^u is indexed according to u, then we define

V(u, b) = Σ_{i=1}^3 b_{u_i}.

Now let m = (m_1, m_2, m_3) ∈ 𝔽_2^3. The message m can be encoded into

c = (m_1, m_2, m_1+m_2, m_3, m_1+m_3, m_1+m_2+m_3, m_2+m_3) ∈ 𝒞.

Hence the word r = R(c) ∈ (𝔽_2^3)^7 is

r = ((c_1, c_2, c_3), (c_1, c_4, c_5), (c_1, c_6, c_7), (c_2, c_5, c_6), (c_2, c_4, c_7), (c_3, c_4, c_6), (c_3, c_5, c_7))
  = ((m_1, m_2, m_1+m_2), (m_1, m_3, m_1+m_3), (m_1, m_1+m_2+m_3, m_2+m_3), (m_2, m_1+m_3, m_1+m_2+m_3), (m_2, m_3, m_2+m_3), (m_1+m_2, m_3, m_1+m_2+m_3), (m_1+m_2, m_1+m_3, m_2+m_3)).

For each vector-coordinate b ∈ 𝔽_2^3 of r = R(c), one can now check that Σ_j b_j = 0. Hence we get V(R(c)) = 0, as expected.
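A minimal script may help check Example 3.3 mechanically (the function names `encode`, `R` and `V` are ad hoc; only the parity-check supports of H come from the example):

```python
import itertools

# Example 3.3 in code: the [7,3] binary code with 7 weight-3 parity checks,
# query set Q = supports of the rows of H, and an s = 1 verification map V.

H = [(1, 2, 3), (1, 4, 5), (1, 6, 7), (2, 5, 6), (2, 4, 7), (3, 4, 6), (3, 5, 7)]

def encode(m):
    """Encode m = (m1, m2, m3) into the codeword of Example 3.3."""
    m1, m2, m3 = m
    return (m1, m2, m1 ^ m2, m3, m1 ^ m3, m1 ^ m2 ^ m3, m2 ^ m3)

def R(u, w):
    """Restriction map: the queried symbols w|_u (1-based positions)."""
    return tuple(w[i - 1] for i in u)

def V(u, b):
    """Verification map: the sum over F_2 of the 3 queried symbols."""
    return b[0] ^ b[1] ^ b[2]

# V(R(c)) = 0 for every codeword c, i.e. every parity check is satisfied.
for m in itertools.product((0, 1), repeat=3):
    c = encode(m)
    assert all(V(u, R(u, c)) == 0 for u in H)
print("all 8 codewords pass every parity check")
```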

From now on, we denote by N = | 𝒬 | the length of the response code R ( 𝒞 ) of a code 𝒞 equipped with a verification structure ( 𝒬 , V ) .

### 3.2 Definition of our PoR scheme

Let (𝒬, V) be a verification structure for 𝒞 ⊆ 𝔽_q^n, and let σ ∈ 𝔖(𝔽_q)^n, where 𝔖(𝔽_q) denotes the set of permutations of 𝔽_q. Any n-tuple of permutations σ = (σ_1, …, σ_n) ∈ 𝔖(𝔽_q)^n naturally acts on c ∈ 𝔽_q^n by

σ(c) := (σ_1(c_1), …, σ_n(c_n)),

and we define σ(𝒞) = {σ(c), c ∈ 𝒞}. Let finally

V^σ : 𝒬 × 𝔽_q^ℓ → 𝔽_q^s,
(u, y) ↦ V(u, σ|_u^{-1}(y)),

where σ|_u^{-1}(y) = (σ_{u_1}^{-1}(y_1), …, σ_{u_ℓ}^{-1}(y_ℓ)). The map V^σ has been defined in order to satisfy

V^σ(u, R(u, σ(c))) = V(u, R(u, c))

for every (c, u) ∈ 𝒞 × 𝒬.
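As a sanity check, this identity for V^σ can be verified on the code of Example 3.3, where a permutation of 𝔽_2 is either the identity or the swap x ↦ x ⊕ 1 (a sketch; variable names are ad hoc):

```python
import random

# Scramble a codeword of Example 3.3 with per-position permutations of F_2,
# then check that V^σ accepts exactly the honest (scrambled) responses.

H = [(1, 2, 3), (1, 4, 5), (1, 6, 7), (2, 5, 6), (2, 4, 7), (3, 4, 6), (3, 5, 7)]
c = (1, 0, 1, 1, 0, 0, 1)                  # codeword for m = (1, 0, 1)

# A permutation of F_2 is x ↦ x ^ t for t in {0, 1}: identity or swap.
sigma = [random.randint(0, 1) for _ in range(7)]

w = tuple(c[i] ^ sigma[i] for i in range(7))      # w = σ(c), the uploaded file

def V(b):
    """Verification map of Example 3.3 (s = 1): parity of the 3 symbols."""
    return b[0] ^ b[1] ^ b[2]

def V_sigma(u, y):
    """V^σ: descramble the queried symbols with σ|_u^{-1}, then apply V."""
    return V(tuple(y[j] ^ sigma[u[j] - 1] for j in range(3)))

for u in H:
    r_u = tuple(w[i - 1] for i in u)              # honest response R(u, w)
    assert V_sigma(u, r_u) == V(tuple(c[i - 1] for i in u)) == 0
print("V^σ accepts every honest response")
```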

Based on this, our 𝖯𝗈𝖱 construction is given in Figure 2.

### Figure 2

Definition of our 𝖯𝗈𝖱 scheme.

### Figure 3

Our extraction procedure 𝖤𝗑𝗍𝗋𝖺𝖼𝗍 ( r , σ ) .

### 3.3 Analysis

#### 3.3.1 Preliminary results

We first give results concerning verification structures and response codes. The following two lemmata are straightforward to prove.

#### Lemma 3.4.

Let (𝒬, V) be a verification structure for a code 𝒞 ⊆ 𝔽_q^n. Then (𝒬, V^σ) is a verification structure for σ(𝒞).

#### Lemma 3.5.

Let 𝒬 be any query set for a code 𝒞 ⊆ 𝔽_q^n whose elements have cardinality ℓ ≥ 1. Then its response code R(𝒞) is an 𝔽_q-linear code over the alphabet ℛ = 𝔽_q^ℓ.

#### Remark 3.6.

By considering σ(𝒞) instead of 𝒞, we lose the 𝔽_q-linearity, but one can check that verification structures still make sense and provide the result claimed in Lemma 3.4.

The next result states that the map 𝒞 ↦ σ(𝒞) does not modify the distance between codewords.

#### Lemma 3.7.

Let 𝒞 ⊆ 𝔽_q^n be a linear code, (𝒬, V) a verification structure for 𝒞, and σ ∈ 𝔖(𝔽_q)^n. Then it holds that

• the distributions of distances in 𝒞 and in σ(𝒞) are the same,

• the distributions of distances in R(𝒞) and in R(σ(𝒞)) are the same.

#### Proof.

Since every σ_i is one-to-one, for any c, c′ ∈ 𝒞, we get

d(c, c′) = |{i ∈ [1, n], c_i ≠ c′_i}| = |{i ∈ [1, n], σ_i(c_i) ≠ σ_i(c′_i)}| = d(σ(c), σ(c′)).

The proof for response codes relies on the same argument. ∎

Remark that these results imply that, if 𝒞 is linear, then the minimum distance of R(σ(𝒞)) is the minimum weight of R(𝒞).

#### Definition 3.8.

Let ε ∈ [0, 1] and let (𝒬, V) be a verification structure for a code 𝒞 ⊆ 𝔽_q^n. We say that r ∈ ℛ^𝒬 is ε-close to (𝒬, V) if

wt(V(r)) := |{u ∈ 𝒬, V(u, r_u) ≠ 0}| ≤ εN.

Let now c ∈ 𝒞 and β ∈ [0, 1]. We say that r ∈ ℛ^𝒬 is a β-liar for (𝒬, V, c) if

|{u ∈ 𝒬, V(u, r_u) = 0 and r_u ≠ R(u, c)}| ≤ βN.
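Both quantities of Definition 3.8 are simple counts; a sketch (the maps `r`, `authentic` and `V` are hypothetical stand-ins for a response word, the authentic responses R(u, c) and the verification map):

```python
def closeness_and_liars(r, authentic, V):
    """Count the two corruption types of Definition 3.8 for a response word r.

    Returns (e, b): r is (e/N)-close to (Q, V), where e counts responses
    rejected by V, and r is a (b/N)-liar, where b counts responses accepted
    by V that nevertheless differ from the authentic ones.
    """
    e = sum(1 for u in r if V(u, r[u]) != 0)                           # rejected
    b = sum(1 for u in r if V(u, r[u]) == 0 and r[u] != authentic[u])  # accepted lies
    return e, b

# Toy verification map: a response (a triple of bits) is accepted iff its parity is 0.
V = lambda u, b: b[0] ^ b[1] ^ b[2]
authentic = {0: (0, 0, 0), 1: (1, 1, 0), 2: (1, 0, 1)}
r = {0: (0, 0, 0), 1: (1, 0, 0), 2: (0, 1, 1)}   # one rejected response, one lie
print(closeness_and_liars(r, authentic, V))       # → (1, 1)
```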

##### Bounded-distance error-and-erasure decoder

Let 𝒜 ⊆ 𝔽_q^n be any code of minimum distance d, and let a ∈ 𝒜 be corrupted with b errors and e erasures, resulting in a word r ∈ (𝔽_q ∪ {⊥})^n. Then it is well known that, as long as 2b + e < d, it is possible to retrieve a from r thanks to a so-called bounded-distance error-and-erasure decoding algorithm. This is precisely the decoding algorithm that we employ in Figure 3 on the code 𝒜 = R(𝒞).
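The decoding condition 2b + e < d can be illustrated on the length-5 binary repetition code, whose minimum distance is d = 5 (an illustrative sketch, not the decoder used for R(𝒞)):

```python
# Error-and-erasure decoding on the length-5 binary repetition code (d = 5):
# decoding succeeds whenever 2b + e < d.

ERASE = None  # stands for the erasure symbol ⊥

def decode_repetition(r):
    """Majority decoding of a word in (F_2 ∪ {⊥})^5, ignoring erasures."""
    known = [x for x in r if x is not ERASE]
    zeros = known.count(0)
    ones = len(known) - zeros
    if zeros == ones:
        return ERASE          # ambiguous: too many corruptions
    return 0 if zeros > ones else 1

# b = 1 error and e = 2 erasures: 2·1 + 2 = 4 < 5, decoding succeeds.
assert decode_repetition([1, 0, ERASE, 1, ERASE]) == 1
# b = 1 error and e = 3 erasures: 2·1 + 3 = 5 ≥ 5, decoding may fail.
assert decode_repetition([0, 1, ERASE, ERASE, ERASE]) == ERASE
```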

Our framework allows us to reformulate the extraction success in terms of a probability to decode corrupted codewords. More precisely:

##### Proposition 3.9.

Let σ ∈ 𝔖(𝔽_q)^n, m ∈ 𝔽_q^k, and denote by d the minimum distance of the response code R(𝒞) of length N. Let also r ∈ ℛ^𝒬 be the response word output by a proving algorithm 𝖯 taking w = σ(𝒞(m)) as input. Finally, assume that r is ε-close to (𝒬, V^σ) and a β-liar for (𝒬, V^σ, w), with (ε + 2β)N < d. Then 𝖤𝗑𝗍𝗋𝖺𝖼𝗍(r, σ) = m, where 𝖤𝗑𝗍𝗋𝖺𝖼𝗍(r, σ) is defined in Figure 3.

##### Proof.

Recall that r′ ∈ (ℛ ∪ {⊥})^𝒬 represents the word we get from r after step (ii) of the algorithm given in Figure 3. Let us now translate our assumptions on r into coding-theoretic terminology:

• r is ε-close to (𝒬, V^σ): this means that there are at most εN challenges u ∈ 𝒬 for which we know that the coordinate r_u is not authentic. This justifies that we assign erasure symbols to these coordinates.

• r is a β-liar for (𝒬, V^σ, w): this means that there are at most βN other corrupted values r_u, but we cannot identify them. Therefore, we can assimilate these coordinates to errors.

To sum up, we see r as a corruption of R(𝒞(m)) with at most εN erasures and at most βN errors, where N = |𝒬|. Since we assume that (ε + 2β)N < d, we know from the previous discussion that the decoding succeeds in retrieving m. ∎
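Figure 3 is not reproduced in this text, but the proof above suggests the following shape for the extraction procedure (a sketch under that reading; every argument is a hypothetical stand-in, and `decode_bd` denotes any bounded-distance error-and-erasure decoder for the response code):

```python
ERASE = None  # stands for the erasure symbol ⊥

def extract(r, queries, V_sigma, descramble, decode_bd):
    """Sketch of Extract(r, σ) as read from the proof of Proposition 3.9.

    V_sigma plays the role of the scrambled verification map V^σ,
    descramble inverts σ on the queried positions, and decode_bd is a
    bounded-distance error-and-erasure decoder for the response code R(C);
    it returns the decoded message or ⊥ on failure.  The permutations σ
    enter only through V_sigma and descramble.
    """
    r_prime = {}
    for u in queries:
        if V_sigma(u, r[u]) != 0:
            r_prime[u] = ERASE                  # rejected response -> erasure
        else:
            r_prime[u] = descramble(u, r[u])    # accepted response, descrambled
    return decode_bd(r_prime)                   # decode; yields m or ⊥
```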

#### 3.3.2 Bounding the extraction failure

According to Definition 2.3, our 𝖯𝗈𝖱 scheme is weakly (ε, τ)-sound if, for every polynomial-time algorithm 𝖯 outputting a response word r(w) from a file w, we have

Pr( decoding r(w) into m fails and wt(V^σ(r(w))) ≤ εN | m ←_R 𝔽_q^k, σ ←_R 𝔖(𝔽_q)^n, w = σ(𝒞(m)) ) ≤ τ.

Using Proposition 3.9, the security analysis of our 𝖯𝗈𝖱 scheme reduces to measuring the ability of the 𝖯𝗋𝗈𝗏𝖾𝗋 to produce a response word r which is ε-close to (𝒬, V^σ) and a β-liar for (𝒬, V^σ, w), with (ε + 2β)N ≥ d.

For fixed r ∈ ℛ^𝒬, σ ∈ 𝔖(𝔽_q)^n and w = σ(𝒞(m)) the authentic file given to the prover, we define three subsets of 𝒬:

• 𝒟(r, w) := {u ∈ 𝒬, r_u ≠ R(w)_u} and D(r, w) := |𝒟(r, w)| = wt(r - R(w)). This set gathers the challenges u on which the response word r differs from the authentic one R(w).

• ℰ(r, σ) := {u ∈ 𝒬, V^σ(u, r_u) ≠ 0} and E(r, σ) := |ℰ(r, σ)| = wt(V^σ(r)). These are the challenges u on which the associated coordinate r_u is not accepted by the verification map (they correspond to erasures in the decoding process).

• ℬ(r, σ, w) := {u ∈ 𝒬, r_u ≠ R(w)_u and V^σ(u, r_u) = 0} and B(r, σ, w) := |ℬ(r, σ, w)|. These are the challenges u on which the associated coordinate r_u is accepted by the verification map but differs from the authentic response R(w)_u (they correspond to errors in the decoding process).

One can easily check that, for every σ, the sets ℰ(r, σ) and ℬ(r, σ, w) define a partition of 𝒟(r, w). The probability of extraction failure can thus be written as

(3.1) Pr( 2D(r, w) - E(r, σ) ≥ d_min(R(𝒞)) and E(r, σ) ≤ εN | m ←_R 𝔽_q^k, σ ←_R 𝔖(𝔽_q)^n, w = σ(𝒞(m)) ).

For w ∈ 𝔽_q^n, let us define the set of admissible permutations and messages

Φ_w := {(σ, m) ∈ 𝔖(𝔽_q)^n × 𝔽_q^k, w = σ(𝒞(m))},

so that equation (3.1) can be rewritten as

Pr( 2D(r, w) - E(r, σ) ≥ d_min(R(𝒞)) and E(r, σ) ≤ εN | (σ, m) ←_R Φ_w ).

Later on, we will use the notation Pr_{Φ_w} to refer to the fact that (σ, m) is uniformly drawn from Φ_w. Similarly, we will use the notation 𝔼_{Φ_w} for the expectation and Var_{Φ_w} for the variance.

Given r ∈ ℛ^𝒬, we also define

α(r, w) := max_{u ∈ 𝒟(r, w)} Pr_{Φ_w}(V^σ(u, r_u) = 0)

and α := max_{(r, w)} α(r, w), where the maximum is taken over pairs (r, w) such that D(r, w) ≠ 0. The parameter α ∈ (0, 1) is called the bias of the verification structure (𝒬, V) for 𝒞. It corresponds to the maximum probability that a response is accepted but not authentic.

#### Lemma 3.10.

For all r ∈ ℛ^𝒬 and w ∈ 𝔽_q^n, we have

𝔼_{Φ_w}(E(r, σ)) ≥ (1 - α)·D(r, w).

#### Proof.

A simple computation shows

𝔼_{Φ_w}(E(r, σ)) = 𝔼_{Φ_w}( Σ_{u ∈ 𝒟(r, w)} 𝟙_{V^σ(u, r_u) ≠ 0} ) = Σ_{u ∈ 𝒟(r, w)} Pr_{Φ_w}(V^σ(u, r_u) ≠ 0) ≥ Σ_{u ∈ 𝒟(r, w)} (1 - α) = (1 - α)·D(r, w). ∎

Lemma 3.10 essentially means that, if an adversary to our 𝖯𝗈𝖱 scheme wants its response word to be (on average) ε-close to the verification structure, then he should modify at most D(r, w) ≤ εN/(1 - α) responses. Below, we take advantage of this result, and we measure the probability of an extraction failure.

First, for δ, ε ∈ (0, 1), let

p(r, w; ε, δ) := Pr_{Φ_w}( 2D(r, w) - E(r, σ) ≥ δN and E(r, σ) ≤ εN ) = Pr_{Φ_w}( E(r, σ) ≤ min{εN, 2D(r, w) - δN} ).

The probability p ( r , w ; ε , δ ) represents the probability that the extraction fails for a response code of relative distance δ and an adversarial response word r associated to w, which is ε-close to the verification structure. Let us bound p ( r , w ; ε , δ ) .

#### Proposition 3.11.

Let δ, ε ∈ (0, 1) be such that δ·(1 - α)/(1 + α) > ε. Let also r ∈ ℛ^𝒬 and w ∈ 𝔽_q^n. Then we have

p(r, w; ε, δ) ≤ Var_{Φ_w}(E(r, σ)) / ( ((1 + α)/2)²·(δ·(1 - α)/(1 + α) - ε)²·N² ).

#### Proof.

We distinguish three cases.

(i) 2D(r, w) - δN < 0. The event E(r, σ) ≤ min{εN, 2D(r, w) - δN} never occurs since E(r, σ) ≥ 0. Hence p(r, w; ε, δ) = 0.

(ii) εN ≤ 2D(r, w) - δN, that is, D(r, w) ≥ (ε + δ)N/2. The inequality E(r, σ) ≤ εN then implies

E(r, σ) - 𝔼_{Φ_w}(E) ≤ εN - (1 - α)·D(r, w) ≤ εN - (1 - α)·((ε + δ)/2)·N = -((1 + α)/2)·(δ·(1 - α)/(1 + α) - ε)·N.

Hence, using Chebyshev's inequality,

p(r, w; ε, δ) = Pr_{Φ_w}(E(r, σ) ≤ εN) ≤ Pr_{Φ_w}( |E(r, σ) - 𝔼_{Φ_w}(E)| ≥ ((1 + α)/2)·(δ·(1 - α)/(1 + α) - ε)·N ) ≤ Var_{Φ_w}(E(r, σ)) / ( ((1 + α)/2)²·(δ·(1 - α)/(1 + α) - ε)²·N² ).

(iii) 0 ≤ 2D(r, w) - δN < εN, that is, D(r, w) < (ε + δ)N/2. In this case, E(r, σ) ≤ 2D(r, w) - δN implies

E(r, σ) - 𝔼_{Φ_w}(E) ≤ (1 + α)·D(r, w) - δN ≤ (1 + α)·((ε + δ)/2)·N - δN = -((1 + α)/2)·(δ·(1 - α)/(1 + α) - ε)·N.

Therefore, similarly to the previous case, we obtain the claimed result. ∎

For any u ∈ 𝒟(r, w), denote by X_u the {0, 1}-valued random variable 𝟙_{V^σ(u, r_u) = 0} when σ is uniformly drawn from Φ_w. It holds that E(r, σ) = Σ_{u ∈ 𝒟(r, w)} (1 - X_u).

Recall that two real random variables Y , Z are uncorrelated if 𝔼 ( Y Z ) = 𝔼 ( Y ) 𝔼 ( Z ) . For instance, two independent random variables are uncorrelated.

#### Lemma 3.12.

Let r ∈ ℛ^𝒬 and w ∈ 𝔽_q^n. If the random variables {X_u}_{u ∈ 𝒟(r, w)} are pairwise uncorrelated, then

Var_{Φ_w}(E(r, σ)) ≤ D(r, w).

#### Proof.

By assumption, the variables {X_u}_{u ∈ 𝒟(r, w)} are pairwise uncorrelated; hence

Var_{Φ_w}(E(r, σ)) = Σ_{u ∈ 𝒟(r, w)} Var_{Φ_w}(1 - X_u).

The trivial bound Var_{Φ_w}(1 - X_u) ≤ 1 gives the result. ∎

As a corollary of Proposition 3.11 and Lemma 3.12, under the same hypotheses and assuming δ·(1 - α)/(1 + α) > ε, we get

p(r, w; ε, δ) ≤ 4 / ( N·((1 - α)δ - (1 + α)ε)² )

since Var_{Φ_w}(E(r, σ)) ≤ D(r, w) ≤ N. Moreover, if lim_{N→∞} δ > 0 and lim_{N→∞} α = 0, then p(r, w; ε, δ) = 𝒪(1/N).

Therefore, we end up with the following theorem.

#### Theorem 3.13.

Let (𝒬, V) be a verification structure for 𝒞 with bias α. Let N = |𝒬|, and let δ = d_min(R(𝒞))/N be the relative distance of the associated response code. Finally, assume that, for any r ∈ ℛ^𝒬 and any w ∈ 𝔽_q^n, the variables {X_u}_{u ∈ 𝒟(r,w)} are pairwise uncorrelated. Then, for any ε < δ·(1−α)/(1+α), the PoR scheme associated to 𝒞 and (𝒬, V) is (ε, τ)-sound, where

τ = 4 / ( N·( (1−α)δ − (1+α)ε )² ).

For asymptotically small α, a code 𝒞 equipped with a verification structure satisfying the conditions of Theorem 3.13 thus gives an ( ε , τ ) -sound 𝖯𝗈𝖱 scheme for every ε < ( 1 + o ( 1 ) ) δ and τ = 𝒪 ( 1 / N ) .
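As a quick numerical sanity check, the bound of Theorem 3.13 can be evaluated directly. The sketch below (with hypothetical values of N, α, δ and ε chosen by us, not taken from the paper) computes τ = 4/(N·((1−α)δ − (1+α)ε)²) and checks the 𝒪(1/N) decay:

```python
def soundness_tau(N, alpha, delta, eps):
    """Soundness bound tau of Theorem 3.13; requires eps < delta*(1-alpha)/(1+alpha)."""
    margin = (1 - alpha) * delta - (1 + alpha) * eps
    assert margin > 0, "eps too large for the theorem to apply"
    return 4 / (N * margin ** 2)

# Hypothetical parameters: bias alpha = 0.01, response-code relative
# distance delta = 0.5, adversarial threshold eps = 0.2.
tau_small = soundness_tau(N=1_000, alpha=0.01, delta=0.5, eps=0.2)
tau_large = soundness_tau(N=100_000, alpha=0.01, delta=0.5, eps=0.2)
assert tau_large < tau_small / 99   # tau decays like 1/N
```

Once δ and α are fixed by the code, enlarging the query set 𝒬 (i.e., increasing N) is thus the main lever to drive the extraction-failure bound τ down.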

According to Theorem 3.13, we thus need to look for (sequences of) codes 𝒞 and associated verification structures ( 𝒬 , V ) such that

1. (i)

the response code R ( 𝒞 ) admits a good relative distance δ = d min ( R ( 𝒞 ) ) / N ,

2. (ii)

the bias α is small,

3. (iii)

the random variables {X_u}_{u ∈ 𝒟(r,w)} are pairwise uncorrelated.

Sections 3.4 and 3.5 characterise conditions under which the last two points are fulfilled. Then, in Section 5, we discuss which response codes can achieve good relative distance.

### 3.4 Estimating α

In this section, we prove that, assuming Φ_w approximates the uniform distribution over 𝔖(𝔽_q)^n in a sense made precise below, the bias α can be bounded in terms of the parameters of the verification structure.

Let us fix r ∈ ℛ^𝒬, w ∈ 𝔽_q^n and u ∈ 𝒬. We recall that α is defined by

α = max_{r,w} max_{u ∈ 𝒟(r,w)} Pr_{Φ_w}( V_σ(u, r_u) = 0 ),

where the randomness comes from σ ←_R Φ_w = {(σ, m) ∈ 𝔖(𝔽_q)^n × 𝔽_q^k : w = σ(𝒞(m))}. We notice that this is equivalent to writing σ ←_R {σ ∈ 𝔖(𝔽_q)^n : σ^{−1}(w) ∈ 𝒞}.

For convenience, we will view r_u ∈ 𝔽_q^ℓ as a vector indexed by u = (u_1, …, u_ℓ), so that we can easily denote by r_u[u_j] ∈ 𝔽_q its j-th coordinate, 1 ≤ j ≤ ℓ. We define the code K_u := ker V(u, ·) ⊆ 𝔽_q^ℓ, and up to re-indexing coordinates, 𝒞_{|u} ⊆ K_u. This allows us to write that, for every σ, we have V_σ(u, r_u) = 0 if and only if σ_u^{−1}(r_u) ∈ K_u. Finally, we denote by Z_u := {i ∈ u : r_u[i] ≠ R(w)_u[i]} the set of coordinates of r_u that are not authentic.

Let Y_u(σ) represent the event “σ_u^{−1}(r_u) ∈ K_u, conditioned on supp(σ_u^{−1}(r_u)) = Z_u”. Informally, the reason why we condition on supp(σ_u^{−1}(r_u)) = Z_u is that the 𝖯𝗋𝗈𝗏𝖾𝗋 is free to choose any support Z_u on which he can modify the original file. More formally, this constraint will help us bound the probability Pr_{Φ_w}(V_σ(u, r_u) = 0) in Lemma 3.14. We say that Φ_w is sufficiently uniform if, for every u ∈ 𝒬, we have

γ_u := ( Pr[Y_u(σ) : σ ←_R Φ_w] − Pr[Y_u(σ) : σ ←_R 𝔖(𝔽_q)^n] ) / Pr[Y_u(σ) : σ ←_R 𝔖(𝔽_q)^n] = o(1)

when the file size n·log q → ∞. In other words, Φ_w is sufficiently uniform if it is a good approximation of the uniform distribution over the whole set of n-tuples of permutations, as far as the probability that Y_u(σ) happens is concerned.

#### Lemma 3.14.

Let r, w, u and Z_u be defined as above. Let also A_u = |{x ∈ K_u : supp(x) = Z_u}|. Then

Pr_{Φ_w}( V_σ(u, r_u) = 0 ) ≤ (1 + γ_u) · A_u / (q − 1)^{|Z_u|}.

#### Proof.

For every σ such that (σ, m) ∈ Φ_w, we know that σ_u^{−1}(R(w)_u) ∈ K_u, and we recall that V_σ(u, r_u) = 0 if and only if σ_u^{−1}(r_u) ∈ K_u. Since K_u is linear, and up to considering σ_u^{−1}(R(w)_u − r_u) instead, we can assume without loss of generality that σ_u^{−1}(r_u)[i] = 0 for every i ∈ u ∖ Z_u. In other words, we assume that supp(σ_u^{−1}(r_u)) = Z_u.

Remark that

Pr_{σ ←_R 𝔖(𝔽_q)^n}[ σ_u^{−1}(r_u) ∈ K_u ∣ supp(σ_u^{−1}(r_u)) = Z_u ] = Pr_{x ←_R 𝔽_q^ℓ}[ x ∈ K_u ∣ supp(x) = Z_u ] = A_u / (q − 1)^{|Z_u|}

since A_u counts the number of codewords in K_u whose support is Z_u.

Therefore, we get

Pr Φ w ( V σ ( u , r u ) = 0 ) Pr Φ w [ V σ ( u , r u ) = 0 supp ( σ u - 1 ( r u ) ) = Z u ] = ( 1 + γ u ) Pr 𝔖 ( 𝔽 q ) n [ V σ ( u , r u ) = 0 supp ( σ u - 1 ( r u ) ) = Z u ] = ( 1 + γ u ) Pr x R 𝔽 q [ x K u supp ( x ) = Z u ] = ( 1 + γ u ) A u ( q - 1 ) | Z u | .

#### Lemma 3.15.

Let S_u be the 𝔽_q-vector space {x ∈ K_u : supp(x) ⊆ Z_u}, and assume that S_u ≠ {0}. We have

A_u ≤ q^{|Z_u| − d_min(S_u) + 1}.

#### Proof.

We prove that, if A_u > q^e for some integer e ≥ 0, then d_min(S_u) ≤ |Z_u| − e, which clearly implies our result. If A_u > q^e, then dim S_u > e since |S_u| ≥ A_u. The Singleton bound then provides

d_min(S_u) ≤ |Z_u| − dim S_u + 1 ≤ |Z_u| − e. ∎
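Lemmas 3.14 and 3.15 can be replayed by brute force on a toy kernel code. In the sketch below (our own illustrative choices, not from the paper), K_u is the parity code Par(4) over 𝔽₃ and Z_u = {0, 1, 2}; the script enumerates A_u and verifies the Singleton-type bound A_u ≤ q^{|Z_u| − d_min(S_u) + 1}:

```python
from itertools import product

q, n = 3, 4
Z = {0, 1, 2}                       # support chosen by the malicious server

# Toy kernel code K_u: the parity code Par(4) over F_3.
K = [x for x in product(range(q), repeat=n) if sum(x) % q == 0]

support = lambda x: {i for i in range(n) if x[i] != 0}

# A_u: number of codewords of K_u whose support is exactly Z.
A = sum(1 for x in K if support(x) == Z)

# S_u: the subcode supported inside Z (a vector space), and its minimum distance.
S = [x for x in K if support(x) <= Z]
d_S = min(len(support(x)) for x in S if any(x))

assert A == 2 and d_S == 2           # hand-checked for this toy example
assert A <= q ** (len(Z) - d_S + 1)  # Lemma 3.15: A_u <= q^(|Z_u| - d_min(S_u) + 1)
```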

Finally, we get the following upper bound on α.

#### Proposition 3.16.

Let Δ = min{d_min(K_u) : u ∈ 𝒬}. Then

α ≤ (1 + γ) · (1 + 1/(q−1)) · q^{−Δ+1},

where γ = max_u γ_u.

#### Proof.

Remark that S_u, defined in the previous lemma, is a subcode of K_u shortened on u ∖ Z_u. Hence

d_min(K_u) ≤ d_min(S_u),

and we can apply the previous results and obtain the desired bound

α ≤ max_{u,r} (1 + γ_u) · (q/(q−1))^{|Z_u|} · q^{−d_min(K_u)+1} ≤ (1 + γ) · (1 + 1/(q−1)) · q^{−Δ+1},

where γ = max_u γ_u. ∎

If every Φ_w is sufficiently uniform, then, by definition, we have γ = o(1) when the file size n·log q → ∞. This assumption is significant since we desire a small bias α, which is deeply linked to the soundness of 𝖯𝗈𝖱s (see Theorem 3.13). In Appendix A, we present experimental estimates of α validating the assumption that Φ_w is sufficiently uniform.

### 3.5 Pairwise uncorrelation of { X u } u ∈ 𝒟

This section is devoted to proving that the variables {X_u}_{u ∈ 𝒟(r,w)} are pairwise uncorrelated as soon as the supports of the challenges u ∈ 𝒟(r,w) have small pairwise intersections. For this purpose, let us recall that, for fixed r ∈ ℛ^𝒬, w ∈ 𝔽_q^n and u ∈ 𝒟(r,w), the random variable X_u represents 𝟙[V_σ(u, r_u) = 0] when σ is uniformly picked in Φ_w.

We first state a technical lemma that will be useful to prove Proposition 3.18 below. For clarity, we denote by d^⊥(𝒞) the minimum distance of the dual code 𝒞^⊥ of a linear code 𝒞.

#### Lemma 3.17.

Let 𝒞 ⊆ 𝔽_q^n be a linear code and T ⊆ [1, n], |T| = t, where t < d^⊥(𝒞). For a ∈ 𝔽_q^T, we define

𝒱_a = {c ∈ 𝒞 : c_{|T} = a}  and  N_a = |𝒱_a|.

Then

1. (i)

𝒱_0 = {v ∈ 𝒞 : v_{|T} = 0} is a linear subcode of 𝒞;

2. (ii)

for every non-zero a ∈ 𝔽_q^T, there exists a non-zero c^{(a)} ∈ 𝒞 such that 𝒱_a = 𝒱_0 + {c^{(a)}};

3. (iii)

for every a ∈ 𝔽_q^T, N_a = q^{k−t}, where k = dim 𝒞.

#### Proof.

(i) The set 𝒱_0 = {v ∈ 𝒞 : v_{|T} = 0} is precisely the well-known shortening of the code 𝒞 on T. It is easy to prove that it defines a linear code.

(ii) Let a ∈ 𝔽_q^T be non-zero, and let us first prove that there exists c^{(a)} ∈ 𝒞 such that c^{(a)}_{|T} = a. If it were not the case, then, by definition, we would have 𝒞_{|T} ⊊ 𝔽_q^t. But this is impossible: it would yield a non-zero codeword of 𝒞^⊥ supported on T, hence of weight at most t < d^⊥(𝒞). It is then easy to check that 𝒱_a = 𝒱_0 + {c^{(a)}}.

(iii) First notice that 𝒱_a ∩ 𝒱_b = ∅ if a ≠ b. Since

𝒞 = ⊔_{a ∈ 𝔽_q^T} 𝒱_a

and, by (i) and (ii), every 𝒱_a has cardinality |𝒱_0|, we get q^k = q^t · N_a, hence the expected result. ∎
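Lemma 3.17 (iii) can be checked by enumeration on a toy code. Below (our own example, not from the paper), 𝒞 is the [4, 3]₂ parity code, whose dual Rep(4) has minimum distance 4, and T = {0, 1} has size t = 2 < 4:

```python
from itertools import product

q, n, t = 2, 4, 2
T = (0, 1)

# Toy code: the [4, 3] binary parity code; its dual is Rep(4), so d_perp = 4 > t.
C = [c for c in product(range(q), repeat=n) if sum(c) % q == 0]
k = 3  # dim C

# Check N_a = |{c in C : c restricted to T equals a}| for every a in F_q^T.
for a in product(range(q), repeat=t):
    N_a = sum(1 for c in C if tuple(c[i] for i in T) == a)
    assert N_a == q ** (k - t)   # Lemma 3.17 (iii): N_a = q^(k-t) = 2
```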

#### Proposition 3.18.

If max{|u ∩ v| : u ≠ v ∈ 𝒬} < min{d^⊥(𝒞_{|u}) : u ∈ 𝒬}, then the random variables {X_u}_{u ∈ 𝒬} are pairwise uncorrelated.

#### Proof.

Recall that K_u := ker V(u, ·) and that, by definition of a verification structure, we have 𝒞_{|u} ⊆ K_u. For u ≠ v ∈ 𝒬, let us prove that 𝔼(X_u X_v) = 𝔼(X_u)·𝔼(X_v). First,

𝔼(X_u X_v) = Pr( V_σ(u, r_u) = 0 and V_σ(v, r_v) = 0 ) = Pr( σ_u^{−1}(r_u) ∈ K_u and σ_v^{−1}(r_v) ∈ K_v ).

Denote t = |u ∩ v|, and let (𝐚, 𝐛) ∈ (𝔽_q^t)². We denote by Z(σ, 𝐚, 𝐛) the event

“σ_u^{−1}(r_u)_{|u∩v} = 𝐚 and σ_v^{−1}(r_v)_{|u∩v} = 𝐛”.

We first notice that {σ_{|u∩v}^{−1} : σ ∈ Φ_w} = 𝔖(𝔽_q)^t. Indeed, we can here use an argument similar to the proof of Lemma 3.17: the constraint σ^{−1}(w) ∈ 𝒞 is ineffective on σ_{|u∩v}^{−1} since |u ∩ v| = t < d^⊥(𝒞_{|z}) for every z ∈ 𝒬. Therefore, for every (𝐚, 𝐛) ∈ (𝔽_q^t)², we have

Pr( Z(σ, 𝐚, 𝐛) ) = q^{−2t},

and it follows that

𝔼(X_u X_v) = (1/q^{2t}) · Σ_{(𝐚,𝐛) ∈ (𝔽_q^t)²} Pr( σ_u^{−1}(r_u) ∈ K_u and σ_v^{−1}(r_v) ∈ K_v ∣ Z(σ, 𝐚, 𝐛) ).

Recall now that t < min{d^⊥(𝒞_{|u}) : u ∈ 𝒬} ≤ min{d^⊥(K_u) : u ∈ 𝒬}. Hence, for fixed 𝐚 and 𝐛, the events “σ_u^{−1}(r_u) ∈ K_u ∣ Z(σ, 𝐚, 𝐛)” and “σ_v^{−1}(r_v) ∈ K_v ∣ Z(σ, 𝐚, 𝐛)” are independent (once again, this is a consequence of the structure results of Lemma 3.17). Therefore,

𝔼(X_u X_v) = (1/q^{2t}) · Σ_{(𝐚,𝐛) ∈ (𝔽_q^t)²} Pr( σ_u^{−1}(r_u) ∈ K_u ∣ Z(σ, 𝐚, 𝐛) ) · Pr( σ_v^{−1}(r_v) ∈ K_v ∣ Z(σ, 𝐚, 𝐛) ).

Then

𝔼(X_u X_v) = (1/q^{2t}) · Σ_{(𝐚,𝐛) ∈ (𝔽_q^t)²} Pr( σ_u^{−1}(r_u) ∈ K_u ∣ σ_u^{−1}(r_u)_{|u∩v} = 𝐚 ) · Pr( σ_v^{−1}(r_v) ∈ K_v ∣ σ_v^{−1}(r_v)_{|u∩v} = 𝐛 ),

and we conclude since

𝔼(X_u) = q^{−t} · Σ_{𝐚 ∈ 𝔽_q^t} Pr( σ_u^{−1}(r_u) ∈ K_u ∣ σ_u^{−1}(r_u)_{|u∩v} = 𝐚 ). ∎
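The conclusion of Proposition 3.18 can be verified exhaustively on a toy instance of our own: over 𝔽₃, take 𝒞 = {c ∈ 𝔽₃⁴ : c₁+c₂ = c₃+c₄ = 0} with the two disjoint challenges u = {1, 2} and v = {3, 4} (so |u ∩ v| = 0 < d^⊥), and enumerate Φ_w exactly:

```python
from itertools import permutations, product

# Toy check over F_3 (0-indexed coordinates): C = {c : c0+c1 = c2+c3 = 0},
# challenges u = {0,1} and v = {2,3}; K_u = K_v = Par(2).
q = 3
perms = list(permutations(range(q)))          # S(F_3): the 6 permutations of F_3

def uncorrelated(w, ru, rv):
    n_all = n_u = n_v = n_uv = 0
    for s0, s1, s2, s3 in product(perms, repeat=4):
        # Keep sigma only if sigma^{-1}(w) is a codeword, i.e. sigma in Phi_w.
        if (s0.index(w[0]) + s1.index(w[1])) % q or (s2.index(w[2]) + s3.index(w[3])) % q:
            continue
        n_all += 1
        xu = (s0.index(ru[0]) + s1.index(ru[1])) % q == 0   # X_u = 1 iff accepted
        xv = (s2.index(rv[0]) + s3.index(rv[1])) % q == 0
        n_u += xu; n_v += xv; n_uv += xu and xv
    # Pairwise uncorrelated: E(XuXv) = E(Xu)E(Xv), checked with exact integer counts.
    return n_uv * n_all == n_u * n_v

assert uncorrelated((0, 1, 2, 0), (1, 1), (0, 2))
assert uncorrelated((1, 1, 0, 2), (2, 0), (1, 1))
```

Because the constraint σ^{−1}(w) ∈ 𝒞 factors over the two disjoint blocks, X_u and X_v are in fact independent here, which the integer identity detects exactly, without floating-point probabilities.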

## 4 Performance

### 4.1 Efficient scrambling of the encoded file

In the 𝖯𝗈𝖱 scheme we propose, the storage cost of an n-tuple of permutations in 𝔖(𝔽_q)^n is excessive, since it is superlinear in the original file size. In this subsection, we propose a storage-efficient way to scramble the codeword c ∈ 𝒞 produced by the 𝖵𝖾𝗋𝗂𝖿𝗂𝖾𝗋.

Precisely, we want to define a family of maps (σ^{(κ)})_κ, where σ^{(κ)}: 𝒞 → 𝔽_q^n, c ↦ w ∈ 𝔽_q^n, with the following requirements:

• For every κ, the map σ^{(κ)} is efficiently computable and requires low storage.

• For every κ and every c ∈ 𝒞, if w = σ^{(κ)}(c), then, for every i ∈ [1, n], the local inverse map w_i ↦ c_i is efficiently computable.

• If κ is randomly generated but unknown, then, given the knowledge of w = σ^{(κ)}(c) and 𝒞, it is hard to produce a response word r ∈ ℛ^𝒬 such that, for many u ∈ 𝒬, both V_{σ^{(κ)}}(u, r_u) = 0 and r_u ≠ w_{|u} hold. To be more specific, and in light of the security analysis of Section 3.3, we require that it be hard to distinguish σ^{(κ)}(c) from a random (z_1, …, z_n) ∈ 𝔽_q^n whose symbols z_i are picked independently and uniformly at random.

We here propose to derive σ^{(κ)} from a suitable block cipher, yielding the explicit construction given below. Of course, other proposals can be envisioned.

#### The construction

Let IV denote a random initialisation vector for AES in CTR mode (IV could be a nonce concatenated with a random value). The vector IV is kept secret by the 𝖵𝖾𝗋𝗂𝖿𝗂𝖾𝗋, as well as a randomly chosen key κ for the cipher. Let also f be a permutation polynomial over 𝔽_q of degree d > 1; for instance, one could choose f(x) = x^d with gcd(d, q−1) = 1. Notice that the polynomial f can be made public.

Let s = ⌊256 / log₂ q⌋ be the number of 𝔽_q-symbols one can store in a 256-bit word[2]. Up to appending a few random bits to c, we assume that s ∣ n, and we define t = n/s. Let us fix a partition of [1, n] into s-tuples 𝐢 = (i_1, …, i_s); it can be, for instance, (1, …, s), (s+1, …, 2s), …, ((t−1)s+1, …, n). Notice that this partition does not need to be chosen at random. Given c = (c_1, …, c_n) ∈ 𝒞 and 𝐢 an element of the above partition, we now define

b_𝐢 = ( f(c_{i_1}) ∥ ⋯ ∥ f(c_{i_s}) ) ⊕ AES_κ(IV ∥ 𝐢) ∈ {0, 1}^{256}.

If log₂ q does not divide 256, trailing zeroes can be added to the evaluations of f. Finally, the pseudo-random permutation σ is defined by

σ(c) := (b_1, …, b_t).

#### Design rationale

AES is a natural choice when one needs a (secret-)keyed pseudo-random permutation. Also notice that, with this construction, one only needs to store the key κ and the vector IV since the other objects (the polynomial f, the partition) are made public. Hence our objectives in terms of storage are met.

We now point out the necessity of using 𝐢 as part of the input of the AES cipher. Assume that we did not. Then the local permutation σ_j, 1 ≤ j ≤ n, would not depend on j. As a consequence, for a certain class of codes, the local verification map r_u ↦ V_σ(u, r_u) would not depend on u, and a malicious 𝖯𝗋𝗈𝗏𝖾𝗋 would then be able to produce accepted answers while storing only a small piece of the file w (e.g., w_{|u} for only one u ∈ 𝒬).

Another mandatory feature is the non-linearity of the permutation polynomial f. Indeed, assume for instance that f = id. Then, given the knowledge of w = σ(c), it would be very easy for a malicious 𝖯𝗋𝗈𝗏𝖾𝗋 to produce a word w′ ≠ w such that r = R(w′) is always accepted by the 𝖵𝖾𝗋𝗂𝖿𝗂𝖾𝗋: the 𝖯𝗋𝗈𝗏𝖾𝗋 simply defines w′ = w + c′, where c′ is any non-zero codeword of 𝒞. Hence one sees that the polynomial f must be non-linear in order to prevent this kind of attack.

### 4.2 Parameters

We here consider a 𝖯𝗈𝖱 built upon a code 𝒞 ⊆ 𝔽_q^n with a verification structure (𝒬, V) satisfying ℛ = 𝔽_q^ℓ and V(u, ·): ℛ → 𝔽_q^s for every u ∈ 𝒬. We also assume that we use an n-tuple of pseudo-random permutations as described in the previous subsection.

#### Communication complexity

At each verification step, the client sends an ℓ-tuple of coordinates (u_1, …, u_ℓ), u_i ∈ [1, n]. The server then answers with the corresponding symbols w_{u_i} ∈ 𝔽_q. Therefore, the upload communication cost is ℓ·⌈log₂ n⌉ bits, while the download communication cost is ℓ·⌈log₂ q⌉ bits, for a total of ℓ·(⌈log₂ n⌉ + ⌈log₂ q⌉) bits.
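In code, the count reads as follows (the instance plugged in at the end is illustrative, not one of the paper's):

```python
from math import ceil, log2

def comm_bits(l, n, q):
    """Total cost of one verification: l * (ceil(log2 n) + ceil(log2 q)) bits."""
    return l * (ceil(log2(n)) + ceil(log2(q)))

# Illustrative instance: challenges of length l = 256 over a file of
# n = 256**2 symbols of F_256.
assert comm_bits(256, 256 ** 2, 256) == 256 * (16 + 8)
```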

#### Computation complexity

In the initialisation phase, following the encryption described in Section 4.1, the client essentially has

• to compute the codeword c 𝒞 associated to its message,

• to make n evaluations of the permutation polynomial f over 𝔽 q ,

• to compute t = ⌈n·log₂ q / 256⌉ AES ciphertexts to produce the word w to be sent to the server.

Given a generator matrix of 𝒞, the codeword c can be computed in 𝒪(kn) operations over 𝔽_q with a matrix-vector product. Notice that quasi-linear-time encoding algorithms exist for some classes of codes. Besides, if a monomial or a sparse permutation polynomial is used, then the cost of each evaluation of f is 𝒪((log₂ q)³) bit operations. If we denote by c_{AES} the bitcost of an AES encryption, we get a total bitcost of 𝒪(nk·(log₂ q)² + n·(log₂ q)³ + c_{AES}·n·log₂ q) for the initialisation phase. Recall this is a worst-case scenario in which the encoding process is inefficient.

At each verification step, an honest server only needs to read ℓ symbols from the file it stores. Hence its computation complexity is 𝒪(ℓ). The client has to compute a matrix-vector product over 𝔽_q, where the matrix has size s × ℓ and the vector has size ℓ, hence a computation cost of 𝒪(sℓ) operations over 𝔽_q.

#### Storage needs

The client stores 2 × 256 bits for the secret material κ and IV used in AES. The server storage overhead exactly corresponds to the redundancy of the linear code 𝒞, that is, (n − dim 𝒞)·log₂ q bits.

#### Other features

Our 𝖯𝗈𝖱 scheme is unbounded-use since a challenge reveals nothing about the secret data held by the client. It does not support dynamic updates of files. However, we must emphasise that the file w the client produces can be split among several servers, and the verification step remains possible even if the servers do not communicate with each other: computing a response to a challenge does not require mixing distinct symbols w_i of the uploaded file. Therefore, our scheme is well suited to the storage of large, static, distributed databases. Parameters of the 𝖯𝗈𝖱 schemes we propose are reported in Figure 4.

### Figure 4

Summary of parameters of our 𝖯𝗈𝖱 construction for an original file of size k·log₂ q bits and a code 𝒞 of dimension k over 𝔽_q equipped with a verification structure (𝒬, V) such that |u| = ℓ and rank V(u, ·) ≤ s for all u ∈ 𝒬.

## 5 Instantiations

In this section, we present several instantiations of our 𝖯𝗈𝖱 construction. We first recall basics and notation from coding theory.

The code Rep(ℓ) ⊆ 𝔽_q^ℓ denotes the repetition code generated by (1, …, 1). We recall that Rep(ℓ)^⊥ is the parity code Par(ℓ) := {c ∈ 𝔽_q^ℓ : Σ_{i=1}^ℓ c_i = 0}. Let 𝒞, 𝒞′ be two linear codes over 𝔽_q of respective parameters [n, k, d] and [n′, k′, d′]. Their tensor product 𝒞 ⊗ 𝒞′ is the 𝔽_q-linear code generated by the words

(c_i·c′_j : 1 ≤ i ≤ n, 1 ≤ j ≤ n′) ∈ 𝔽_q^{n·n′},  for c ∈ 𝒞 and c′ ∈ 𝒞′.

It has dimension k·k′ and minimum distance d·d′. We also denote by

𝒞^{⊗s} := 𝒞 ⊗ ⋯ ⊗ 𝒞 (s times) ⊆ 𝔽_q^{n^s}

the s-fold tensor product of 𝒞 with itself.

### 5.1 Tensor-product codes

The upcoming subsection illustrates our construction with a simple but impractical instance. The next ones lead to practical 𝖯𝗈𝖱 instances.

#### 5.1.1 A simple but impractical instance

Let n = ℓN and 𝒬 = {u_i = {iℓ+1, iℓ+2, …, (i+1)ℓ} : i ∈ [0, N−1]}. The set 𝒬 defines a partition of [1, n]. We define the code

𝒞 = {c ∈ 𝔽_q^n : Σ_{j∈u} c_j = 0 for all u ∈ 𝒬} ⊆ 𝔽_q^n.

In other words, 𝒞 = Par(ℓ) ⊗ 𝔽_q^N, and a parity-check matrix H for 𝒞 is given by the N × n block-diagonal matrix

H = diag(𝟏, …, 𝟏),  where 𝟏 = (1, …, 1) ∈ 𝔽_q^ℓ,

i.e., the i-th row of H is the indicator vector of the block u_{i−1}. The verification map V: 𝒬 × 𝔽_q^ℓ → 𝔽_q is defined by V(u, b) := Σ_{j=1}^ℓ b_j for all (u, b) ∈ 𝒬 × 𝔽_q^ℓ. By construction (see the fundamental Example 3.2), the pair (𝒬, V) defines a verification structure for 𝒞.

#### Lemma 5.1.

Let 𝒞 = Par(ℓ) ⊗ 𝔽_q^N be as above. Then the response code R(𝒞) has minimum distance 1.

#### Proof.

We see that the restriction map R sends the codeword (1, −1, 0, 0, …, 0) ∈ 𝒞 to a word of weight 1. Besides, R is injective, so d_min(R(𝒞)) > 0. ∎

Since δ = d_min(R(𝒞))/N = 1/N → 0 when N goes to infinity, an attempt to build a 𝖯𝗈𝖱 scheme from 𝒞 cannot be practical.
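This degeneracy is easy to replay numerically. In the sketch below (toy parameters ℓ = 4, N = 3, q = 5 of our own choosing), the response map R restricts a codeword to each block, and the codeword (1, −1, 0, …, 0) indeed yields a response of weight 1:

```python
q, l, N = 5, 4, 3
n = l * N
blocks = [tuple(range(i * l, (i + 1) * l)) for i in range(N)]   # the queries u_i

in_C = lambda c: all(sum(c[j] for j in u) % q == 0 for u in blocks)

def R(c):
    """Response word of c: the restriction of c to each query."""
    return [tuple(c[j] for j in u) for u in blocks]

c = [1, q - 1] + [0] * (n - 2)      # the codeword (1, -1, 0, ..., 0) of Lemma 5.1
assert in_C(c)

# Weight of R(c) in the response code: number of non-zero restrictions.
wt = sum(1 for ru in R(c) if any(ru))
assert wt == 1                      # d_min(R(C)) = 1, hence delta = 1/N -> 0
```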

#### 5.1.2 Higher order tensor-product codes

Let 𝒜 ⊆ 𝔽_q^ℓ be a non-degenerate [ℓ, k_𝒜, d_𝒜]_q linear code, and define 𝒞 = 𝒜^{⊗s} ⊆ 𝔽_q^n, where n = ℓ^s. Notice that it will be more convenient to see coordinates of words w ∈ 𝔽_q^n as elements of [1, ℓ]^s.

For 𝐚 ∈ [1, ℓ]^s and 1 ≤ i ≤ s, we define L_{i,𝐚} ⊆ [1, ℓ]^s, the “i-th axis-parallel line with basis 𝐚”, as

L_{i,𝐚} := {𝐱 ∈ [1, ℓ]^s such that x_j = a_j for all j ≠ i}.

By definition of 𝒞, a word c lies in 𝒞 if and only if, for every line L = L_{i,𝐚}, the restriction c_{|L} lies in 𝒜. This means that we can define

• a set of queries 𝒬 = {L_{i,𝐚} : i ∈ [1, s], 𝐚 ∈ [1, ℓ]^s},

• a verification map

V: 𝒬 × 𝔽_q^ℓ → 𝔽_q^{ℓ−k_𝒜},  (L, r) ↦ H·r,

where H is a parity-check matrix for 𝒜 whose columns are ordered according to the line L.

By the previous discussion, it is clear that c ∈ 𝒞 implies V(L, c_{|L}) = 0 for every L ∈ 𝒬 (in fact, these two assertions are equivalent). Hence (𝒬, V) defines a verification structure for 𝒞, and we have N = |𝒬| = s·ℓ^{s−1}.

#### Lemma 5.2.

Let 𝒞 = 𝒜^{⊗s} be as above. Then R(𝒞) has minimum distance s·d_𝒜^{s−1}.

#### Proof.

Let us first prove that the minimum distance of R(𝒞) is at least s·d_𝒜^{s−1}. Let r = R(c) ∈ R(𝒞), and assume r ≠ 0. Then there exists L ∈ 𝒬 such that 0 ≠ r_L = c_{|L} ∈ 𝒜. Therefore, c_𝐱 ≠ 0 for some 𝐱 ∈ L ⊆ [1, ℓ]^s. Consider the set

S_{i,𝐱} = {𝐲 ∈ [1, ℓ]^s : y_i = x_i}.

Very informally, the set S_{i,𝐱} corresponds to the hyperplane passing through 𝐱 and “orthogonal” to the i-th axis. By definition of 𝒞 = 𝒜^{⊗s}, we know that c_{|S_{i,𝐱}} ∈ 𝒜^{⊗(s−1)} ∖ {0} for every 1 ≤ i ≤ s. Let

U_i = supp(c_{|S_{i,𝐱}}) = {𝐮^{(i,1)}, …, 𝐮^{(i,t_i)}}

with t_i ≥ d_min(𝒜^{⊗(s−1)}) = (d_𝒜)^{s−1}. Every 𝐮^{(i,j)} ∈ U_i defines a line L_{i,𝐮^{(i,j)}} on which c_{|L_{i,𝐮^{(i,j)}}} is a non-zero codeword of 𝒜. Equivalently, r is non-zero on index L_{i,𝐮^{(i,j)}} ∈ 𝒬. Therefore,

wt(r) = |{L ∈ 𝒬 : r_L ≠ 0}| ≥ | ⋃_{i=1}^s {L_{i,𝐮^{(i,j)}} : 1 ≤ j ≤ t_i} | ≥ Σ_{i=1}^s t_i ≥ s·(d_𝒜)^{s−1}.

Let us now build a word r ∈ R(𝒞) of weight s·(d_𝒜)^{s−1}. Let w ∈ 𝒜 ∖ {0} be a minimum-weight codeword of 𝒜, and define W := supp(w), of size d_𝒜. Define c = w^{⊗s} ∈ 𝒞; then supp(c) = W^s. Let finally r = R(c). We see that r_{L_{i,𝐱}} ≠ 0 if and only if 𝐱 ∈ W^s. Hence we get

wt(r) = |{L ∈ 𝒬 : r_L ≠ 0}| = | ⋃_{i=1}^s {L_{i,𝐱} : 𝐱 ∈ W^s} | = s·(d_𝒜)^{s−1}

since each line L_{i,𝐱} is counted d_𝒜 times when 𝐱 runs over W^s. ∎
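Lemma 5.2 can be checked exhaustively on a small tensor square. In the sketch below (our own toy choice), 𝒜 is the [3, 2, 2]₂ parity code and s = 2, so the minimum response weight should be s·d_𝒜^{s−1} = 4:

```python
from itertools import product

l, s = 3, 2

# C = A tensor A for A = Par(3) over F_2: l x l arrays all of whose rows
# and columns have even parity.
C = [c for c in product((0, 1), repeat=l * l)
     if all(sum(c[i * l + j] for j in range(l)) % 2 == 0 for i in range(l))
     and all(sum(c[i * l + j] for i in range(l)) % 2 == 0 for j in range(l))]
assert len(C) == 2 ** 4             # dim = k_A^s = 4

def resp_weight(c):
    """Number of lines (rows + columns) on which c restricts to a non-zero word."""
    rows = sum(1 for i in range(l) if any(c[i * l + j] for j in range(l)))
    cols = sum(1 for j in range(l) if any(c[i * l + j] for i in range(l)))
    return rows + cols

d_resp = min(resp_weight(c) for c in C if any(c))
assert d_resp == s * 2 ** (s - 1)   # s * d_A^(s-1) = 2 * 2 = 4
```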

#### Proposition 5.3.

Let δ > 0, and let 𝒜 be an [ℓ, (1−δ)ℓ+1, δℓ]_q MDS code. Define 𝒞 = 𝒜^{⊗s} and (𝒬, V) as above. If every Φ_w is sufficiently uniform, then the PoR scheme associated to 𝒞 and (𝒬, V) is (ε, τ)-sound for τ = 𝒪(1/((δℓ)^s·s)) and every ε < ε₀, where ε₀ = (1 + 𝒪(q^{−δℓ+1}))·δ^s when ℓ → ∞.

#### Proof.

First, the relative distance of R(𝒞) is δ^s according to Lemma 5.2. Then the random variables {X_u}_{u∈𝒟} are pairwise uncorrelated because the inequality

max_{u ≠ v ∈ 𝒬²} |u ∩ v| = 1 < (1−δ)ℓ + 2 = min_{u∈𝒬} d_min((𝒞_{|u})^⊥)

allows us to apply Proposition 3.18. Besides, if every Φ_w is sufficiently uniform, then the bias α satisfies α = 𝒪(q^{−δℓ+1}), and hence (1−α)/(1+α) = 1 + 𝒪(q^{−δℓ+1}). Therefore, we can use Theorem 3.13, and we get the desired result. ∎

##### Parameters

We mainly focus on the download communication complexity in the verification step and on the server storage overhead since these are the most crucial parameters which depend on the family of codes 𝒞 we use. Besides, we consider that it is more relevant to analyse the ratio between these quantities and the file size than their absolute values.

Here, for an initial file of size |F| = ((1−δ)q+1)^s·log₂ q bits, we get

• a redundancy rate

n·log₂ q / |F| = ( q/((1−δ)q+1) )^s ≈ 1/(1−δ)^s,

• a communication complexity rate

ℓ·log₂ q / |F| = q/((1−δ)q+1)^s ≈ (1/(1−δ)^s)·q^{1−s}.

##### Example 5.4.

In Table 3, we present various parameters of 𝖯𝗈𝖱 instances admitting 0.10 ≤ ε₀ ≤ 0.16, for files of size approaching 10⁴, 10⁶ and 10⁹ bits. Here 𝒜 is a [q, (1−δ)q+1, δq]_q MDS code (e.g., a Reed–Solomon code), and 𝒞 = 𝒜^{⊗s}.

Table 3

Parameters of 𝖯𝗈𝖱 instances admitting 0.10 ≤ ε₀ ≤ 0.16.

| q | δq | s | File size (bits) | Comm. rate | Redundancy rate | ε₀ |
|---|----|---|------------------|------------|-----------------|-----|
| 16 | 10 | 4 | 9,604 | 6.664 × 10⁻³ | 27.3 | 0.153 |
| 25 | 13 | 3 | 10,985 | 1.138 × 10⁻² | 7.112 | 0.141 |
| 64 | 24 | 2 | 10,086 | 3.807 × 10⁻² | 2.437 | 0.141 |
| 32 | 21 | 5 | 1,244,160 | 1.286 × 10⁻⁴ | 134.8 | 0.122 |
| 47 | 28 | 4 | 960,000 | 2.938 × 10⁻⁴ | 30.5 | 0.126 |
| 101 | 47 | 3 | 1,164,625 | 6.071 × 10⁻⁴ | 6.193 | 0.101 |
| 512 | 180 | 2 | 998,001 | 4.617 × 10⁻³ | 2.364 | 0.124 |
| 128 | 85 | 5 | 1,154,413,568 | 7.762 × 10⁻⁷ | 208.3 | 0.129 |
| 256 | 150 | 4 | 1,048,636,808 | 1.953 × 10⁻⁶ | 32.77 | 0.118 |
| 1,024 | 550 | 3 | 1,071,718,750 | 9.555 × 10⁻⁶ | 10.02 | 0.155 |
| 12,167 | 3,900 | 2 | 957,037,536 | 1.78 × 10⁻⁴ | 2.166 | 0.103 |
| 16,384 | 5,500 | 2 | 1,658,765,150 | 1.383 × 10⁻⁴ | 2.266 | 0.113 |

The previous example shows that, while the communication rate is reasonable for these 𝖯𝗈𝖱 instances over large files, the storage overhead remains large.

### 5.2 Reed–Muller and related codes

Low-degree Reed–Muller codes are known to admit many distinct low-weight parity-check equations, whose supports correspond to affine subspaces of the ambient space. Therefore, they seem naturally adapted to our construction. Let us first consider the plane (or bivariate) Reed–Muller code case.

#### 5.2.1 The plane Reed–Muller code RM q ⁢ ( 2 , q - 2 )

Let 𝒞 be the Reed–Muller code

𝒞 = RM_q(2, q−2) := {(f(x, y))_{(x,y) ∈ 𝔽_q²} : f ∈ 𝔽_q[X, Y], deg f ≤ q−2}.

It is well known that 𝒞 has length q² and dimension (q−1)(q−2)/2. Besides, for every line

L = {𝐱 = (at+b, ct+d) : t ∈ 𝔽_q} ⊆ 𝔽_q²

and every c ∈ 𝒞, we can check that Σ_{𝐱∈L} c_𝐱 = 0. Indeed, let f ∈ 𝔽_q[X, Y] with deg f = a ≤ q−2. The restriction of f to an affine line L can be interpolated as a univariate polynomial f_{|L} of degree at most a. Our claim follows since Σ_{z∈𝔽_q} z^i = 0 for every i ≤ q−2.

Therefore, we can define 𝒬 as the set of affine lines L of 𝔽_q² and V(L, r) = Σ_{j=1}^q r_j ∈ 𝔽_q. From the previous discussion, we see that (𝒬, V) is a verification structure for 𝒞. Also notice that there are q(q+1) distinct affine lines in 𝔽_q²; hence N = q(q+1).
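This verification structure is easy to test for a small prime q. The sketch below (q = 5 and an arbitrary polynomial of total degree ≤ 3, both our own choices) checks that the evaluation of f sums to zero on each of the q(q+1) = 30 affine lines:

```python
q = 5

# A fixed bivariate polynomial of total degree <= q - 2 = 3 ((a, b): coeff of X^a Y^b).
poly = {(0, 0): 2, (1, 0): 4, (0, 2): 1, (2, 1): 3, (3, 0): 1}
f = lambda x, y: sum(c * pow(x, a, q) * pow(y, b, q) for (a, b), c in poly.items()) % q

# The q*(q+1) affine lines of F_q^2: q^2 non-vertical ones plus q vertical ones.
lines = [[(x, (m * x + b) % q) for x in range(q)] for m in range(q) for b in range(q)]
lines += [[(a, y) for y in range(q)] for a in range(q)]
assert len(lines) == q * (q + 1)

# Every line-sum of evaluations vanishes, as required by the verification map V.
for L in lines:
    assert sum(f(x, y) for (x, y) in L) % q == 0
```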

#### Lemma 5.5.

Let 𝒞 = RM_q(2, q−2), equipped with its verification structure defined as above. Then the response code R(𝒞) has minimum distance q² + 2.

#### Proof.

Any non-zero codeword c ∈ 𝒞 consists in the evaluation of a non-zero polynomial f(X, Y) ∈ 𝔽_q[X, Y] of degree at most q−2. Denote by L_1, …, L_a ⊆ 𝔽_q² the affine lines on which f vanishes, i.e., f(P) = 0 for every P ∈ L_i, 1 ≤ i ≤ a. We claim that a ≤ q−2. Indeed, since f has total degree less than q−1, it also vanishes on the closed lines L̄_1, …, L̄_a, considered as affine lines in 𝔽̄_q², where 𝔽̄_q denotes the algebraic closure of 𝔽_q. Denote by g_i ∈ 𝔽_q[X, Y] the monic polynomial of degree 1 which defines L̄_i. From Hilbert’s Nullstellensatz, there exists r > 0 such that (∏_{i=1}^a g_i) ∣ f^r. Since the g_i’s have degree 1 and are distinct, we get a ≤ deg f ≤ q−2. Hence the affine lines different from L_1, …, L_a correspond to non-zero coordinates of R(c). There are q(q+1) − a ≥ q² + 2 such lines, so d_min(R(𝒞)) ≥ q² + 2.

Now we claim there exists a word r ∈ R(𝒞) of weight N − (q−2) = q² + 2. Let L^{(0)} and L^{(1)} be the two distinct parallel affine lines respectively defined by X = 0 and X = 1. We build the word c which is −1 on coordinates corresponding to points in L^{(0)}, 1 on those corresponding to points in L^{(1)}, and 0 elsewhere. One can check that c ∈ 𝒞; indeed, c corresponds to the evaluation of ∏_{z ∈ 𝔽_q∖{0,1}} (z − X). Now, if we want to compute wt(R(c)), we only need to count the number of lines which intersect neither L^{(0)} nor L^{(1)}. Clearly, there are only q − 2 such lines. Hence wt(R(c)) = q(q+1) − (q−2), and this concludes the proof. ∎

#### Proposition 5.6.

Let 𝒞 = RM_q(2, q−2), and let (𝒬, V) be its associated verification structure. If every Φ_w is sufficiently uniform, then the PoR scheme associated to 𝒞 and (𝒬, V) is (ε, τ)-sound for ε = 1 − o(1) and τ = 𝒪(1/((1−ε)·q²)) when q → ∞.

#### Proof.

One can check that the random variables {X_u}_{u∈𝒟} are pairwise uncorrelated: two distinct affine lines meet in at most one point, while the restriction 𝒞_{|u} of 𝒞 to a line is a parity code whose dual is the [q, 1, q] repetition code, so that

max_{u ≠ v ∈ 𝒬²} |u ∩ v| = 1 < q = min_{u∈𝒬} d_min((𝒞_{|u})^⊥)

allows us to apply Proposition 3.18. Besides, the relative distance of R(𝒞) is (q²+2)/(q(q+1)) → 1 according to Lemma 5.5. If every Φ_w is sufficiently uniform, the bias α satisfies α = 𝒪(1/q), and hence (1−α)/(1+α) = 1 + 𝒪(1/q). Therefore, we can use Theorem 3.13, and we get the desired result. ∎

##### Parameters

For an initial file of size |F| = ½·(q−1)(q−2)·log₂ q bits, we get

• a redundancy rate

q²·log₂ q / |F| = 2 / ((1 − 1/q)(1 − 2/q)) ⟶ 2,

• a communication complexity rate

q·log₂ q / |F| = (2/q) · 1/((1 − 1/q)(1 − 2/q)) = 𝒪(1/q).

#### 5.2.2 Storage improvements via lifted codes

The redundancy rate of the Reed–Muller codes presented above stays stuck above 2. Affine lifted codes, introduced by Guo, Kopparty and Sudan [5], allow one to break this barrier while keeping the same verification structure. Generically, they are defined as follows:

Lift(m, d) := {(f(𝐏))_{𝐏 ∈ 𝔽_q^m} : f ∈ 𝔽_q[X_1, …, X_m] such that, for every affine line L ⊆ 𝔽_q^m, (f(𝐐))_{𝐐∈L} ∈ RS_q(d+1)}.

We refer to [5] for more details about the construction. Here we focus on Lift(2, q−2) since it can be compared to RM_q(2, q−2). Indeed, one sees that

(5.1) RM_q(2, q−2) ⊆ Lift(2, q−2),

and equation (5.1) turns into a proper inclusion as long as q is not a prime. Besides, by definition of lifted codes, Lift ( 2 ,