Privacy-preserving verifiable delegation of polynomial and matrix functions

Abstract Outsourcing computation has gained significant popularity in recent years due to the development of cloud computing and mobile services. In a basic outsourcing model, a client delegates computation of a function f on an input x to a server. There are two main security requirements in this setting: guaranteeing the server performs the computation correctly, and protecting the client's input (and hence the function value) from the server. The verifiable computation model of Gennaro, Gentry and Parno achieves the above requirements, but the resulting schemes lack efficiency. This is due to the use of computationally expensive primitives such as fully homomorphic encryption (FHE) and garbled circuits, and the need to represent f as a Boolean circuit. Also, the security model does not allow verification queries, which implies the server cannot learn if the client accepts the computation result. This is a weak security model that does not match many real-life scenarios. In this paper, we construct efficient (i.e., without using FHE, garbled circuits or Boolean circuit representations) verifiable computation schemes that provide privacy for the client's input, and prove their security in a strong model that allows verification queries. We first propose a transformation that provides input privacy for a number of existing schemes for verifiable delegation of a multivariate polynomial f over a finite field. Our transformation is based on a noisy encoding of x and keeps x semantically secure under the noisy curve reconstruction (CR) assumption. We then propose a construction for verifiable delegation of matrix-vector multiplication, where the delegated function f is a matrix and the input to the function is a vector. The scheme uses PRFs with amortized closed-form efficiency and achieves high efficiency. We outline applications of our results to outsourced two-party protocols.


Introduction
Outsourcing computation has gained significant popularity in recent years due to the development of cloud computing and mobile devices. Computationally weak devices such as smartphones and netbooks can outsource expensive computations to powerful cloud servers.
The first security concern that arises in outsourcing is to guarantee that the cloud server correctly performs the delegated computation. Cloud servers may have incentives, such as saving computation time or other malicious goals, to produce results that may be incorrect. The verifiable computation (VC) model of Gennaro, Gentry and Parno [18] allows a client to outsource the computation of a function f on an input x and then verify the correctness of the server's work. The outsourcing is meaningful only if the client's work spent on preparing x for delegation and verifying the server's results is substantially less than the cost of computing f(x) locally. A second security concern is the privacy of the client's data, including the input x and the output f(x).
Resolving both security issues simultaneously in an efficient way is a nontrivial problem. The proposals in [18] and several following works [2,4,13] address both security concerns but use expensive cryptographic primitives such as fully homomorphic encryption (FHE) and/or garbled circuits, and represent the function f as a Boolean circuit. The result is inefficient verifiable computation schemes. From a security viewpoint, an important shortcoming is that these schemes can only tolerate adversaries that do not make verification queries, i.e., the adversary is not allowed to learn if the client has accepted the computation result.

Our work
In this paper, we develop the first verifiable computation schemes where the client's input is kept private from the server; both the client and the server computations are free of FHE, garbled circuits and Boolean circuit representations, and the security is proved in a strong model that allows verification queries. We achieve these properties for two types of functions: multivariate polynomials and functions that are represented by a matrix over a finite field.

A transformation for polynomial delegation schemes
Our first contribution is a transformation T that can be applied to a number of existing verifiable computation schemes so that the input x and the output f(x) remain private (semantically secure) against the server. Our transformation works for all schemes in [6,10,16,37] where the function f is a multivariate polynomial over a finite field; it uses no FHE, garbled circuits or Boolean circuits, and it allows verification queries whenever the underlying scheme does.
Verifiable delegation of high-degree polynomial computations on private inputs is highly nontrivial. On one hand, the client has to provide a semantically secure encryption of x (say σ x ) to the cloud server. On the other hand, the cloud server has to compute f on σ x without knowing the decryption key and produce an encoding σ y of the output y = f(x). A generic way to enable such computations is to use FHE, which however should be avoided in our schemes.
We resolve this difficulty using techniques from multivariate polynomial interpolation and reconstruction [12,43]. Let f(x) = f(x_1, . . . , x_h) be an h-variate polynomial of degree ≤ d over a finite field 𝔽, and let a = (a_1, . . . , a_h) ∈ 𝔽^h be any input to the function. We observe that f(a) can be learned from the restriction of f to a random (parametric) curve that passes through a. More precisely, let γ(z) = a + r_1⋅z + ⋅⋅⋅ + r_k⋅z^k (where r_1, . . . , r_k ∈ 𝔽^h) be a degree-k parametric curve passing through a (i.e., γ(0) = a), and let g(z) = f(γ(z)) be the restriction of f to the curve. Since g has degree ≤ kd, given any t > kd points {(z_i, g(z_i)) : 1 ≤ i ≤ t}, one can interpolate g(z) and learn f(a) = g(0). The t points {(z_i, γ(z_i)) : 1 ≤ i ≤ t} can be regarded as a noiseless encoding of a, where any ≤ k points perfectly hide a. Distributing the t points to t different servers, one to each server, would enable each server to return a value f(γ(z_i)) = g(z_i), and the t values jointly give f(a) in a way such that any ≤ k servers learn no information about a.
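The interpolation trick above can be sketched in a few lines of Python over a small prime field. The polynomial f and all parameters below are toy choices for illustration only, not part of the scheme:

```python
import random

P = 2**31 - 1  # a prime; all arithmetic is over the field GF(P)

def f(x):
    # an example 3-variate polynomial of total degree d = 3
    return (x[0]**3 + 5 * x[1] * x[2] + 7 * x[2] + 11) % P

h, d, k = 3, 3, 2                      # variables, degree of f, curve degree
a = [random.randrange(P) for _ in range(h)]

# random degree-k parametric curve gamma(z) = a + r_1*z + ... + r_k*z^k
r = [[random.randrange(P) for _ in range(h)] for _ in range(k)]
def gamma(z):
    return [(a[i] + sum(r[j][i] * pow(z, j + 1, P) for j in range(k))) % P
            for i in range(h)]

# g(z) = f(gamma(z)) has degree <= k*d, so k*d + 1 points determine it
t = k * d + 1
zs = random.sample(range(1, P), t)     # distinct nonzero abscissae
gs = [f(gamma(z)) for z in zs]         # in the VC scheme the *server* computes these

def interpolate_at_zero(zs, ys):
    # Lagrange interpolation evaluated at z = 0, i.e., g(0) = f(gamma(0)) = f(a)
    total = 0
    for i, (zi, yi) in enumerate(zip(zs, ys)):
        num, den = 1, 1
        for j, zj in enumerate(zs):
            if i != j:
                num = num * (-zj) % P
                den = den * (zi - zj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

assert interpolate_at_zero(zs, gs) == f(a)
```

Note that `pow(den, -1, P)` (modular inverse) requires Python 3.8+.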
Unfortunately, we cannot use the noiseless encoding to attain input privacy in VC, where the single cloud server knows all t points {(z_i, γ(z_i)) : 1 ≤ i ≤ t} and so is able to learn a. To overcome this difficulty, we mix the t points of a noiseless encoding with n − t random points {(z_j, u_j) : t + 1 ≤ j ≤ n} and form a noisy encoding of a that consists of n points (with locations randomly permuted). The values of f on these points suffice to compute f(a). The problem of decoding a from its noisy encoding (known as noisy curve reconstruction) has been extensively studied in [7,15,23,26,40]. There is no known polynomial-time algorithm for the problem for t ≤ (nk^h)^{1/(h+1)} + k + 1. The noisy curve reconstruction (CR) assumption [26] is that the noisy curve reconstruction problem is intractable when t = o((nk^h)^{1/(h+1)} + k + 1).
While the noisy encoding gives a "semantically secure encryption" of a, it results in a long encoding of the input and hence a significant efficiency loss. Fortunately, the noisy encoding can be extended to "encrypt" a polynomial number of inputs a_1, . . . , a_s at the same time, resulting in an encoding of size O(n) elements for s = O(n/t). With this extended noisy encoding, one can delegate the computation of f(a_1), . . . , f(a_s) simultaneously and significantly reduce the average cost of delegation. This inspires our transformation T that extends polynomial delegation schemes to have input (and output) privacy: the cloud server computes f on the O(n) points of the noisy encoding of a_1, . . . , a_s and gives the results to the client; the client computes f(a_1), . . . , f(a_s) using polynomial interpolation. Using a verifiable polynomial delegation scheme Σ, the O(n) computations of f are verifiable by the client, which determines whether the server has worked correctly.
Applying T to the existing schemes [6,10,16,37] (which provide no data privacy) results in schemes with input (and output) privacy. Furthermore, T preserves additional properties of the underlying schemes, such as private/public verifiability. For example, applying T to [37] results in a publicly verifiable scheme that allows efficient update of the function.

An input-private construction for matrix delegation
We interpret an n × n matrix M = (M_{i,j}) as a function that takes a vector x = (x_1, . . . , x_n) as input and outputs M ⋅ x^T, where x^T is the transpose of x. We propose an input-private verifiable computation of matrix-vector multiplications of the form M ⋅ x^T. The scheme remains secure against an adversary that makes verification queries and provides very efficient verification while avoiding the second level of amortization used above.
The construction uses three primitives: a somewhat homomorphic encryption (SHE) scheme adapted from [8], a homomorphic hash from [17] and a PRF with closed-form efficiency from [17]. The input (and output) privacy is obtained by the client encrypting x using the SHE and giving it to the server. The server has the matrix M and a tag matrix T = (T_{i,j}) for the matrix elements, each tag computed using the PRF with closed-form efficiency. The SHE scheme allows the server to perform homomorphic scalar multiplications and additions on the elements of M (in the clear) and the ciphertext of x, which gives an encrypted version of M ⋅ x^T for the client. The server is able to compute the homomorphic hash digests of the SHE ciphertexts of x and combine these digests with the tags of M to generate a proof of correctness for the computation. The homomorphic property of the hash and the amortized closed-form efficiency of the PRF make the client's verification significantly faster than the computation of M ⋅ x^T from scratch. In particular, the verification can be done in constant time after a one-time computation that is substantially more efficient than computing M ⋅ x^T.
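The amortization principle behind this (one expensive preprocessing, then cheap per-input verification) can be illustrated without any of the cryptographic machinery. The sketch below is not the paper's construction: instead of SHE, hashes and PRFs, it assumes the client keeps a secret random vector a and precomputes w = aᵀM once; each verification then costs O(n) instead of O(n²), and a cheating server passes with probability only 1/p per secret a:

```python
import random

p = 2**61 - 1                      # prime modulus
n = 200

# function: an n x n matrix M; input: a vector x
M = [[random.randrange(p) for _ in range(n)] for _ in range(n)]

# one-time preprocessing, O(n^2): secret vector a and w = a^T * M
a = [random.randrange(p) for _ in range(n)]
w = [sum(a[i] * M[i][j] for i in range(n)) % p for j in range(n)]

def server_compute(x):
    # the server does the O(n^2) work
    return [sum(M[i][j] * x[j] for j in range(n)) % p for i in range(n)]

def client_verify(x, y):
    # O(n) per input: accept iff <a, y> == <w, x>;
    # if y != M*x then equality requires <a, y - M*x> = 0, which a random
    # secret a satisfies with probability 1/p
    lhs = sum(a[i] * y[i] for i in range(n)) % p
    rhs = sum(w[j] * x[j] for j in range(n)) % p
    return lhs == rhs

x = [random.randrange(p) for _ in range(n)]
y = server_compute(x)
assert client_verify(x, y)
bad = y[:]; bad[0] = (bad[0] + 1) % p
assert not client_verify(x, bad)
```

Unlike this sketch, the actual scheme additionally hides x from the server and makes the per-input check constant time via the closed-form-efficient PRF.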

Application
Our verifiable computation schemes could be used for the outsourcing of two-party protocols. We show an example of such applications: the outsourcing of private information retrieval (PIR). PIR [12] allows a client to retrieve any block f_i of a database f = (f_1, f_2, . . . , f_N) from a server such that i ∈ [N] is not revealed to the server. Outsourced PIR [25,34] has been suggested to offload the PIR server computation [5] to the cloud. Both of our constructions give outsourced PIR with security against malicious cloud servers.

Related work
Securely outsourcing computation dates back to the work on interactive proofs [3,22], PCP-based efficient arguments [29,30], CS proofs [35] and the interactive proofs for "muggles" [21]. While these schemes are either interactive or in the random oracle model, the verifiable computation of Gennaro, Gentry and Parno [18] is non-interactive and in the standard model.
The verifiable computation schemes of [2,4,13,18] attain input (and output) privacy and thus resolve both security issues simultaneously. However, they have to use expensive cryptographic primitives such as FHE and/or garbled circuits and, in some cases, represent the function f as a Boolean circuit. As a result, these schemes are not efficient enough, both in terms of server computation and in terms of client computation. Furthermore, these schemes are only secure against adversaries that do not make verification queries. Goldwasser et al. [20] show how to construct reusable garbled circuits and obtain private schemes but again make use of FHE. Ananth et al. [1] constructed a verifiable computation scheme achieving input (and output) privacy using multiple servers, where FHE is not used but the security requires that at least one of the servers be honest.
Fiore, Gennaro and Pastro [17] consider verifiable computation schemes where the data on cloud server (the function f in our setting) is kept private. In our schemes, the data on server is not necessarily encrypted, but the client's input x should be kept semantically secure in order to achieve input (and output) privacy. That is, we are studying a problem orthogonal to [17]. The schemes of [27,32] consider the same problem as [17].
The verifiable computation schemes of [6,9,11,14,16,21,39] require the client to send its input to the cloud server in clear and thus attain no input (or output) privacy.

Preliminaries
Let λ be a security parameter. We denote by poly(λ) an arbitrary polynomial function in λ. We denote by negl(λ) an arbitrary negligible function in λ, i.e., any function ϵ(λ) from the natural numbers to the nonnegative real numbers such that, for any c > 0, there is an integer λ_c > 0 such that ϵ(λ) < λ^{−c} for all λ ≥ λ_c. Let A(⋅) be any probabilistic polynomial-time (p.p.t.) algorithm. We denote by "y ← A(x)" the procedure of running A on input x and assigning the output to y. Let Ω be any finite set. We denote by "y ← Ω" the procedure of choosing an element y from Ω uniformly at random. For every integer m > 0, we denote [m] = {1, 2, . . . , m}.

Verifiable computation
A verifiable computation scheme [6,18] is a two-party protocol between a client and a server. The client provides a function f and an input x to the server. The server is expected to compute f(x) and respond with the (possibly encoded) output together with a proof that the output is correct. The client then verifies the output is indeed correct. The goal of verifiable computation is to make the client's verification as efficient as possible, and in particular much faster than the computation of f(x) from scratch. In the amortized model of [6,18], the client is allowed to do an expensive preprocessing on f to produce a key pair and then use the key pair to efficiently verify the server's computation of f on many different inputs. The scheme is said to be outsourceable if each individual verification is much faster than the corresponding computation.
A verifiable computation scheme VC = (KeyGen, ProbGen, Compute, Verify) for an admissible function family F consists of four polynomial-time algorithms defined below.
• (PK f , SK f ) ← KeyGen(1 λ , f): Based on the security parameter λ, the randomized key generation algorithm generates a public key that encodes the target function f and the matching secret key. The public key is provided to the server, and the secret key is kept private by the client.
• (σ_x, τ_x) ← ProbGen(SK_f, x): The problem generation algorithm uses the secret key SK_f to encode the function input x as a public value σ_x, which is given to the server, and a secret value τ_x, which is kept private by the client.
• σ y ← Compute(PK f , σ x ): Using the client's public key and the encoded input, the server computes an encoded version (i.e., σ y ) of the function's output y = f(x).
• {y, ⊥} ← Verify(SK_f, τ_x, σ_y): Using the secret key SK_f and the secret "decoding" value τ_x, the verification algorithm converts the server's encoded output into the output of the function, i.e., y = f(x), or outputs ⊥, indicating that σ_y does not represent a valid output of f on x.
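The four-algorithm interface can be sketched as a small Python skeleton. The instantiation below is purely illustrative: its Verify simply recomputes f (which a real scheme must avoid, since verification has to be cheap), so it only exercises the data flow and the {y, ⊥} output contract:

```python
from typing import Callable, Optional, Tuple

class VC:
    """Skeleton of the (KeyGen, ProbGen, Compute, Verify) interface.

    This toy instantiation is NOT outsourceable: verify() recomputes f.
    A real scheme replaces it with a check much cheaper than computing f(x).
    """

    def __init__(self, f: Callable[[int], int]):
        # KeyGen: here PK_f and SK_f both trivially encode f itself
        self.f = f

    def prob_gen(self, x: int) -> Tuple[int, int]:
        # returns (sigma_x for the server, tau_x kept secret by the client)
        return x, x

    def compute(self, sigma_x: int) -> int:
        # server side: produce the encoded output sigma_y
        return self.f(sigma_x)

    def verify(self, tau_x: int, sigma_y: int) -> Optional[int]:
        # returns y = f(x) on success, or None standing in for ⊥
        return sigma_y if sigma_y == self.f(tau_x) else None

vc = VC(lambda x: x * x + 1)
sigma_x, tau_x = vc.prob_gen(5)
sigma_y = vc.compute(sigma_x)
assert vc.verify(tau_x, sigma_y) == 26
assert vc.verify(tau_x, 999) is None
```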
We are interested in verifiable computation schemes that are correct, secure, private and outsourceable. The scheme is said to be correct if the problem generation algorithm produces values that allow an honest server to compute values that will verify successfully and be converted to the evaluation of f on the client's input x.

Definition 1 (Correctness).
The scheme VC is correct if, for any function f in the admissible function family F and any input x, the keys (PK_f, SK_f) ← KeyGen(1^λ, f) and the encodings (σ_x, τ_x) ← ProbGen(SK_f, x) are such that Verify(SK_f, τ_x, Compute(PK_f, σ_x)) = f(x) with probability 1.

Intuitively, a verifiable computation scheme is secure if a malicious server cannot persuade the verification algorithm to accept an incorrect output. In other words, for a given function f and input x, a malicious server should not be able to convince the verification algorithm to output a value ŷ such that ŷ ≠ f(x). This intuition can be formalized by the following experiment Exp^Ver_A(VC, f, λ):
• run (PK_f, SK_f) ← KeyGen(1^λ, f) and give PK_f to the adversary A;
• for ℓ = 1, . . . , L, the adversary chooses an input x_ℓ, obtains σ_{x_ℓ} from a ProbGen(SK_f, ⋅) oracle, and may query a Verify(SK_f, τ_{x_ℓ}, ⋅) oracle on arbitrary strings of its choice;
• if in some trial ℓ the adversary produces a string σ̂ such that Verify(SK_f, τ_{x_ℓ}, σ̂) = ŷ with ŷ ∉ {f(x_ℓ), ⊥}, output "1"; otherwise, output "0".
In the experiment Exp^Ver_A(VC, f, λ), the adversary A is given a polynomial number (i.e., L) of opportunities to persuade the verification algorithm to accept a wrong output value for an input value. In each trial, the adversary is given oracle access to generate the encoding of a problem instance, and also oracle access to the result of the verification algorithm on an arbitrary string on that instance. The adversary succeeds if it ever convinces the verification algorithm in a trial to accept a wrong output value for the input value. The security of VC requires that the adversary succeed only with negligible probability.

Definition 2 (Security). The scheme VC is secure if, for any function f ∈ F and for any probabilistic polynomial-time adversary A, there is a negligible function negl such that Pr[Exp^Ver_A(VC, f, λ) = 1] ≤ negl(λ).

Intuitively, a verifiable computation scheme is (input) private when the public outputs of the problem generation algorithm ProbGen for two different inputs are indistinguishable; i.e., nobody can decide which encoding is the correct one for a given input. The input privacy can be defined based on a typical indistinguishability argument and yields output privacy. Let PubProbGen(SK_f, ⋅) be an oracle that computes (σ_x, τ_x) ← ProbGen(SK_f, x) on any input x and returns only the public value σ_x. We formalize the intuition (on input privacy) with the following experiment:
• run (PK_f, SK_f) ← KeyGen(1^λ, f) and give PK_f to the adversary A;
• the adversary, with access to the PubProbGen(SK_f, ⋅) oracle, chooses two inputs x_0, x_1;
• flip a bit b ← {0, 1} and give the public encoding σ_{x_b} to the adversary;
• the adversary, still with oracle access, outputs a guess b̂; output "1" if b̂ = b and "0" otherwise.

Definition 3 (Privacy).
The scheme VC is private if, for any function f ∈ F and for any probabilistic polynomial-time adversary A, there is a negligible function negl such that |Pr[b̂ = b] − 1/2| ≤ negl(λ), where b is the challenge bit and b̂ is A's guess in the privacy experiment above.

Informally, a verifiable computation scheme is outsourceable if the time to encode the input and verify the output is smaller than the time to compute the function from scratch.

Definition 4 (Outsourceable).
The scheme VC is outsourceable if it permits efficient problem generation and output verification. That is, for any x and any σ_y, the time required for ProbGen(SK_f, x) plus the time required for Verify(SK_f, τ_x, σ_y) must be substantially smaller than the time required to compute f(x) from scratch.

We work in the amortized model of [6,18], where the time required for KeyGen(1^λ, f) is not included in the above definition. In this model, computing the key pair (PK_f, SK_f) is a one-time operation (per function) that can be amortized over the computation of f on many (in fact, any poly(λ) number of) different inputs. Apart from this amortization, we also occasionally consider a second level of amortization, where a number of different inputs, say x_1, . . . , x_s, are processed by ProbGen together and the delegation and verification of the computations f(x_1), . . . , f(x_s) are done simultaneously.

Adding privacy to polynomial delegation
In this section, we show a transformation that can add (input and output) privacy to a verifiable computation scheme whose admissible function family consists of multivariate polynomials over a finite field. Our transformation is based on the noisy curve reconstruction assumption.

Noisy curve reconstruction assumption
The noisy curve reconstruction assumption generalizes the noisy polynomial reconstruction assumption [28,36], which was widely used in protocol design [26,42] and is based on the hardness of noisy polynomial list reconstruction problems.
Definition 5 (Noisy polynomial list reconstruction). Let 𝔽 be a finite field, and let n, k, t > 0 be integers. Let (z_1, y_1), . . . , (z_n, y_n) ∈ 𝔽^2. The noisy polynomial list reconstruction problem with input (n, k, t, {(z_i, y_i) : 1 ≤ i ≤ n}) asks for all polynomials γ(z) of degree ≤ k such that γ(z_i) = y_i for at least t values of i ∈ [n].

When t ≥ (n + k)/2, the noisy polynomial list reconstruction problem has a unique solution and can be solved in polynomial time by Berlekamp and Massey's algorithm [33]. Goldreich, Rubinfeld and Sudan [19] showed that, for t > √(kn), the noisy polynomial list reconstruction problem has ≤ poly(n) solutions. Sudan [41] and Guruswami and Sudan [24] proposed polynomial-time algorithms for t ≥ √(2kn) and t ≥ √(kn), respectively. For t ≤ √(kn), no polynomial-time algorithms are known. Naor and Pinkas [36] introduced the noisy polynomial reconstruction assumption, which asserts that, for appropriately chosen n = n(λ), k = k(λ), t = t(λ) and 𝔽 = 𝔽(λ), the output distribution of the following procedure keeps a ∈ 𝔽 semantically secure: choose
- n distinct nonzero field elements z_1, z_2, . . . , z_n ∈ 𝔽,
- a random polynomial γ(z) of degree ≤ k with γ(0) = a,
- a subset T ⊆ [n] of cardinality t, and set y_i = γ(z_i) for every i ∈ T,
- a uniformly random field element y_i ∈ 𝔽 for every i ∈ [n] \ T;
output the n points {(z_i, y_i) : 1 ≤ i ≤ n}.

Ishai, Kushilevitz, Ostrovsky and Sahai [26] considered a multi-dimensional variant of the noisy polynomial list reconstruction problem and introduced the noisy curve reconstruction (CR) assumption.

Definition 6 (CR assumption). Let k be a degree parameter, which will also serve as a security parameter. Given functions 𝔽(k) (field), h(k) (dimension), t(k) (the number of points on the curve) and n(k) (the total number of points), the CR assumption holds with parameters (𝔽, h, t, n) if the output distribution D^a_{n,k,t,h} of the following procedure keeps a = (a_1, . . . , a_h) ∈ 𝔽(k)^{h(k)} semantically secure: choose
- n distinct nonzero field elements z_1, . . . , z_n ∈ 𝔽,
- a random degree-k parametric curve γ : 𝔽 → 𝔽^h with γ(0) = a,
- a subset T ⊆ [n] of cardinality t, and set y_i = γ(z_i) for every i ∈ T,
- a uniformly random point y_i ∈ 𝔽^h for every i ∈ [n] \ T;
output the n points {(z_i, y_i) : 1 ≤ i ≤ n}.
Formally, the CR assumption holds if, for any points a_0, a_1 ∈ 𝔽(k)^{h(k)} and any probabilistic polynomial-time algorithm A, there is a negligible function negl such that |Pr[A(D^{a_0}_{n,k,t,h}) = 1] − Pr[A(D^{a_1}_{n,k,t,h}) = 1]| ≤ negl(k). The noisy curve reconstruction problem was resolved in [15] when t > (nk^h)^{1/(h+1)} + k + 1. The problem remains hard when t = o((nk^h)^{1/(h+1)}), and the CR assumption remains plausible despite the progress in list decoding [7,15,23,26,40].

Multivariate polynomial interpolation and noisy encoding
Multivariate polynomial interpolation allows one to learn the value of a multivariate polynomial at a point, given its restriction to a parametric curve passing through that point. Let h, d > 0 be integers, let f(x) = f(x_1, . . . , x_h) be an h-variate polynomial of total degree ≤ d over a finite field 𝔽, and let a ∈ 𝔽^h. The multivariate polynomial interpolation technique of learning f(a) can be described as the following procedure: choose a random degree-k parametric curve γ(z) = a + r_1⋅z + ⋅⋅⋅ + r_k⋅z^k (so that γ(0) = a); pick t ≥ kd + 1 distinct nonzero field elements z_1, . . . , z_t; evaluate g(z_i) = f(γ(z_i)) for every i ∈ [t]; interpolate the polynomial g(z) of degree ≤ kd from the t points and output f(a) = g(0). This procedure allows one to hide a from a subset of the players in distributed protocols for evaluating a multivariate polynomial f(x), such as in the private information retrieval (PIR) protocols [12,43], where a client gives t points γ(z_1), . . . , γ(z_t) to t servers such that no k or fewer servers can learn any information about a; the i-th server returns g(z_i), and the client recovers f(a) from the t values g(z_1), . . . , g(z_t). We consider the t points γ(z_1), . . . , γ(z_t) as a noiseless encoding of a, which leaks absolutely no information about a to any adversary that observes ≤ k of the t points.
We shall construct verifiable computation schemes where the client's input a is kept private from a single cloud server. While sending a noiseless encoding (γ(z_1), . . . , γ(z_t)) of a to the server simply reveals a to that server, the CR assumption allows us to develop a noisy encoding {y_i : 1 ≤ i ≤ n} of a (as in the procedure of Definition 6) that keeps a semantically secure. Unfortunately, we cannot directly use this noisy encoding in the constructions due to the efficiency loss. On one hand, the CR assumption requires that t ≤ (nk^h)^{1/(h+1)} + k + 1. On the other hand, one has to choose t ≥ kd + 1 to enable the interpolation of g(z) = f(γ(z)). As a result, n ≥ (d − 1)^{h+1} ⋅ k, which is comparable to the binomial coefficient C(h + d, d), the number of coefficients of f. The noisy encoding therefore only yields a scheme that is not outsourceable.
We bypass this difficulty with a second level of amortization, i.e., by processing multiple function inputs a_1, . . . , a_s together so that the average encoding length per input is short, which results in outsourceable schemes. In [26], it was shown that if n − t noisy points suffice to keep one point semantically secure, then, for any s = poly(k), they suffice to keep s points semantically secure. With this observation, we describe an extended noisy encoding algorithm (pk_⃗a, rk_⃗a) ← NEnc(k, ⃗a) that takes ⃗a = (a_1, . . . , a_s) ∈ (𝔽^h)^s as input and outputs a public noisy encoding pk_⃗a and a private value rk_⃗a for reconstruction.
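One plausible instantiation of NEnc can be sketched as follows (an illustrative sketch only; the paper's exact procedure may differ in details such as how the curve positions are chosen). Each input a_j gets its own random curve through disjoint positions T_j, and all remaining positions are pure noise; the secret rk records which positions belong to which curve:

```python
import random

P = 2**31 - 1          # prime field GF(P)
h, k, d = 2, 2, 2      # dimension, curve degree, degree of f
t = k * d + 1          # points needed to interpolate g_j (degree <= k*d)
s, n = 3, 40           # encode s inputs with n points in total (s*t <= n)

def rand_vec():
    return [random.randrange(P) for _ in range(h)]

def nenc(inputs):
    zs = random.sample(range(1, P), n)             # distinct nonzero abscissae
    idx = random.sample(range(n), s * t)           # curve-point positions
    groups = [idx[j*t:(j+1)*t] for j in range(s)]  # disjoint set T_j per input a_j
    ys = [rand_vec() for _ in range(n)]            # start with pure noise
    for a, T in zip(inputs, groups):
        r = [rand_vec() for _ in range(k)]         # gamma_j(z) = a_j + r_1*z + ... + r_k*z^k
        for i in T:
            ys[i] = [(a[c] + sum(r[m][c] * pow(zs[i], m + 1, P)
                                 for m in range(k))) % P for c in range(h)]
    pk = list(zip(zs, ys))                         # public noisy encoding
    rk = groups                                    # secret: which points lie on which curve
    return pk, rk

def f(x):  # an example bivariate polynomial of total degree d = 2
    return (x[0]**2 + 3 * x[0] * x[1] + x[1]) % P

def interp_zero(pts):
    # Lagrange interpolation at z = 0
    total = 0
    for i, (zi, yi) in enumerate(pts):
        num = den = 1
        for j, (zj, _) in enumerate(pts):
            if i != j:
                num = num * (-zj) % P
                den = den * (zi - zj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

inputs = [rand_vec() for _ in range(s)]
pk, rk = nenc(inputs)
evals = [f(y) for (_, y) in pk]           # the server evaluates f at all n points
for a, T in zip(inputs, rk):
    g_pts = [(pk[i][0], evals[i]) for i in T]
    assert interp_zero(g_pts) == f(a)     # g_j(0) = f(gamma_j(0)) = f(a_j)
```

The average encoding cost per input is n/s points, matching the O(n) total size for s = O(n/t) inputs.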

The transformation
The algorithm NEnc allows one to hide s function inputs, say ⃗a = (a_1, . . . , a_s), with a public noisy encoding pk_⃗a such that no information about ⃗a is leaked (under the CR assumption). Let Σ be a non-private verifiable computation scheme [6,10,16,37] with an admissible function family of multivariate polynomials over a finite field. We shall present a transformation T that adds (input and output) privacy to Σ. The idea of our transformation is to let the client encode ⃗a as pk_⃗a and give pk_⃗a to the server; the server runs Σ.Compute on every element (which is a point) of pk_⃗a and provides the public values of evaluating the polynomial f on all points to the client; at last, the client runs Σ.Verify to both verify the server's work and recover the results f(a_1), . . . , f(a_s). This idea gives a new scheme Π = T(Σ) as below.
• (PK_f, SK_f) ← Π.KeyGen(1^k, f): run Σ.KeyGen(1^k, f) to generate a public key pk_f and the matching secret key sk_f; output PK_f = pk_f and SK_f = sk_f.
• (σ_⃗a, τ_⃗a) ← Π.ProbGen(SK_f, ⃗a): run (pk_⃗a, rk_⃗a) ← NEnc(k, ⃗a); run Σ.ProbGen(sk_f, ⋅) on every point of pk_⃗a; output the public values as σ_⃗a and keep the secret values, together with rk_⃗a, as τ_⃗a.
• σ_y ← Π.Compute(PK_f, σ_⃗a): run Σ.Compute(pk_f, ⋅) on every element of σ_⃗a and output the results as σ_y.
• {(f(a_1), . . . , f(a_s)), ⊥} ← Π.Verify(SK_f, τ_⃗a, σ_y): run Σ.Verify on every element of σ_y; if any verification fails, output ⊥; otherwise, recover f(a_1), . . . , f(a_s) from the verified values by polynomial interpolation.

Privacy:
In the scheme Π, ⃗a = (a_1, . . . , a_s) is encoded with NEnc and then given to the server. The CR assumption implies that ⃗a will be kept semantically secure against the server as long as t = o((nk^h)^{1/(h+1)} + k + 1). Therefore, Π achieves input privacy (and thus output privacy) under the CR assumption.

Security:
The security of Π requires that no probabilistic polynomial-time adversary should be able to persuade the verification algorithm to accept and output incorrect values on the input values. The proof of the following theorem is straightforward and is deferred to Appendix A.

Theorem 1. If the scheme Σ is secure under Definition 2, then Π is a secure verifiable computation scheme under this security definition.
Efficiency: A verifiable computation scheme is outsourceable if the time to encode the input and verify the output is smaller than the time to compute the function from scratch. The existing verifiable computation schemes [6,18] work in an amortized model, where the one-time cost of KeyGen(1^λ, f) is amortized over many different inputs, and, for each input x, the total time required for ProbGen(SK_f, x) and Verify(SK_f, τ_x, σ_y) is substantially less than the time required for computing f(x) from scratch.
Since Σ is outsourceable, Π is outsourceable as well.
Our transformation gives efficient verifiable computation schemes that enable the delegation of high-degree polynomial computations on private (encrypted) function inputs. In particular, our scheme neither relies on expensive primitives such as fully homomorphic encryption (FHE) and garbled circuits nor has to represent the function f as a Boolean circuit. Even for very small k and d, our schemes are the first ensuring security and privacy without using expensive primitives. We can easily extend T such that it is applicable not only to privately verifiable schemes [6] but also to publicly verifiable schemes [10,16,37]. Furthermore, T never changes the verifiability of the underlying scheme Σ.

Implementation: Applying our transformation to the privately delegatable and verifiable computation scheme Σ_bgv for multivariate polynomials of bounded total degree from Benabbas, Gennaro and Vahlis [6] gives a new scheme Π_bgv that achieves input and output privacy for the client. Let T_c(⋅) and T_s(⋅) denote the average client running time and server running time, respectively. Our implementation shows that T_c(Π_bgv) = O(T_c(Σ_bgv)) and T_s(Π_bgv) = O(T_s(Σ_bgv)), where the constants hidden in O depend on k and d. The moderate efficiency loss stems from T, which adds privacy to Σ_bgv. In contrast, the FHE-based schemes [2,4,13,18] achieve input privacy but provide no implementations for polynomial computations. We implemented Π_bgv on a Dell Optiplex 9020 desktop with an Intel Core i7-4790 processor running at 3.6 GHz and 4 GB of RAM, running Ubuntu 16.04.1 with the g++ compiler version 5.4.0. All our programs are single-threaded and built on top of NTL (and GMP). In order to achieve 128-bit security, the underlying scheme requires a cyclic group of order ≥ 2^1024, where the strong DDH assumption [6] is supposed to hold.
We consider the computation of a 4-variate polynomial of total degree ≤ 6 at 504 points. The test shows that the client-side computations (Π_bgv.ProbGen and Π_bgv.Verify) can be done very efficiently, with a total running time of 27.616 seconds; the average client's work for each of the 504 delegated computations is ≤ 0.88 milliseconds, and the one-time work of running Π_bgv.KeyGen takes 0.636 seconds. On the other hand, the server's work of running Π_bgv.Compute takes 4444.24 seconds, which gives an amortized cost of 8.818 seconds for each of the 504 function inputs. Compared with the cost of 0.142 seconds in the non-private scheme, this high cost is the price of converting a non-private scheme to one that achieves privacy. This cost becomes reasonable if the work of executing Π_bgv.Compute is done in parallel. The performance of our implementation shows that the schemes resulting from our transformation in this section are potentially practical.

Private delegation of matrix-vector multiplication
We interpret any matrix M = (M_{i,j}) as a function that takes a vector x as input and outputs M ⋅ x^T, where x^T is the transpose of x. In this section, we present a verifiable computation scheme with an admissible function family of all matrix functions over a finite field, where the function input and output are kept private. Our construction is based on somewhat homomorphic encryption, a homomorphic hash and a PRF with amortized closed-form efficiency.

Somewhat homomorphic encryption
A somewhat homomorphic encryption scheme allows one to evaluate low-degree polynomials on encrypted data. Fiore, Gennaro and Pastro [17] described a slight variation HE = (ParamGen, KeyGen, Eval, Enc, Dec) of the somewhat homomorphic encryption scheme of Brakerski and Vaikuntanathan [8], based on the hardness of the polynomial learning with errors (LWE) problem. The variation is specialized to evaluate circuits of multiplicative depth 1 and is sketched below.
• HE.ParamGen(λ): Given the security parameter λ, generate public parameters that include a modulus q and the m-th cyclotomic polynomial of degree ϕ(m), where ϕ(⋅) is the Euler totient function, and a ciphertext space C ⊆ ℤ_q[X, Y] that consists of two kinds of elements: level-0 ciphertexts (fresh encryptions) and level-1 ciphertexts (the results of one homomorphic multiplication).
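For the matrix-vector scheme, only homomorphic additions and scalar multiplications on ciphertexts are needed. The toy sketch below deliberately substitutes textbook Paillier encryption for the SHE of [8] (Paillier is additively homomorphic and much simpler to write down) to show how a server can turn Enc(x) into Enc(M ⋅ xᵀ) with M in the clear:

```python
import random

def paillier_keygen(bits=256):
    # toy key generation with Fermat-tested primes (illustration only)
    def prime(b):
        while True:
            p = random.getrandbits(b) | (1 << (b - 1)) | 1
            if all(pow(a, p - 1, p) == 1 for a in (2, 3, 5, 7, 11, 13)):
                return p
    p, q = prime(bits), prime(bits)
    n = p * q
    lam = (p - 1) * (q - 1)
    return n, lam                          # public n, secret lambda

def enc(n, m):
    r = random.randrange(1, n)
    return pow(n + 1, m, n * n) * pow(r, n, n * n) % (n * n)

def dec(n, lam, c):
    u = pow(c, lam, n * n)                 # u = 1 + m*lam*n (mod n^2)
    return ((u - 1) // n) * pow(lam, -1, n) % n

def c_add(n, c1, c2):                      # Enc(m1) * Enc(m2) = Enc(m1 + m2)
    return c1 * c2 % (n * n)

def c_smul(n, c, k):                       # Enc(m)^k = Enc(k * m)
    return pow(c, k, n * n)

n_dim = 3
M = [[random.randrange(100) for _ in range(n_dim)] for _ in range(n_dim)]
x = [random.randrange(100) for _ in range(n_dim)]

N, lam = paillier_keygen()
cx = [enc(N, xi) for xi in x]              # client sends Enc(x)

# server: scalar multiplications and additions yield Enc(M . x^T)
cy = []
for i in range(n_dim):
    acc = enc(N, 0)
    for j in range(n_dim):
        acc = c_add(N, acc, c_smul(N, cx[j], M[i][j]))
    cy.append(acc)

y = [dec(N, lam, c) for c in cy]           # client decrypts M . x^T
assert y == [sum(M[i][j] * x[j] for j in range(n_dim)) for i in range(n_dim)]
```

The SHE of [8,17] supports the same operations (plus one level of multiplication) while also being compatible with the homomorphic hash used in the proof of correctness.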

Homomorphic hash
A keyed homomorphic hash (H.KeyGen, H, H.Eval) is defined by three algorithms: H.KeyGen generates two keys K (public) and κ (private); H uses K or κ to map any input μ ∈ D to a digest H_K(μ) ∈ R; and H.Eval allows homomorphic computations (addition "+", multiplication "*" and scalar multiplication "⋅") over R. Let bgpp = (q, 𝔾_1, 𝔾_2, 𝔾_T, e, g, h) be a tuple of bilinear group parameters. The following homomorphic hash with domain D and range R = 𝔾_1 × 𝔾_2 (or 𝔾_T) is from [17].
• H.KeyGen(1^λ): Choose α, β ← ℤ_q; output a public key K and the matching secret key κ = (α, β). Both keys allow the computation of hash digests, and the latter usually makes the computation more efficient.
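A toy single-variable analogue conveys the idea (working in Z_P^* rather than a bilinear group, so only the additive and scalar homomorphisms are shown): hash a polynomial μ(X) to g^{μ(α)} for a secret α, computable either from the public key (g, g^α, g^{α²}, . . .) or, faster, from α itself:

```python
import random

P = 2**127 - 1                  # prime; the toy group is Z_P^* (exponents mod P-1)
g = 3
D = 4                           # hash domain: polynomials mu(X) of degree < D

def keygen():
    alpha = random.randrange(1, P - 1)
    # public key: g^(alpha^i) for i = 0, ..., D-1
    K = [pow(g, pow(alpha, i, P - 1), P) for i in range(D)]
    return K, alpha

def hash_pub(K, mu):
    # H(mu) = g^(mu(alpha)) computed from the public key alone
    d = 1
    for Ki, mi in zip(K, mu):
        d = d * pow(Ki, mi, P) % P
    return d

def hash_sec(alpha, mu):
    # the same digest, computed faster with the secret key
    val = sum(mi * pow(alpha, i, P - 1) for i, mi in enumerate(mu)) % (P - 1)
    return pow(g, val, P)

K, alpha = keygen()
mu1 = [random.randrange(1000) for _ in range(D)]
mu2 = [random.randrange(1000) for _ in range(D)]
assert hash_pub(K, mu1) == hash_sec(alpha, mu1)

# additive homomorphism: H(mu1 + mu2) = H(mu1) * H(mu2)
mu_sum = [(a + b) for a, b in zip(mu1, mu2)]
assert hash_pub(K, mu_sum) == hash_pub(K, mu1) * hash_pub(K, mu2) % P

# scalar multiplication: H(c * mu) = H(mu)^c
c = 7
assert hash_pub(K, [c * m for m in mu1]) == pow(hash_pub(K, mu1), c, P)
```

The hash of [17] follows the same pattern with two secret evaluation points (α, β) over ℤ_q[X, Y] and uses the pairing e to realize the multiplicative homomorphism into 𝔾_T.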

PRFs with amortized closed-form efficiency
A pseudorandom function (F.KG, F) is defined by two algorithms: the key generation algorithm F.KG takes as input the security parameter 1^λ and outputs a secret key k and some public parameters pp that specify the domain X and range R of the function, and F_k(x) takes an input x ∈ X and uses the secret key k to compute a value F_k(x) ∈ R. The PRF (F.KG, F) is said to be secure (i.e., to satisfy the pseudorandomness property) if, for any p.p.t. adversary A, |Pr[A^{F_k(⋅)}(1^λ, pp) = 1] − Pr[A^{Φ(⋅)}(1^λ, pp) = 1]| ≤ negl(λ), where (k, pp) ← F.KG(1^λ) and Φ : X → R is a random function.

Let C be a computation that takes as input n random values R_1, . . . , R_n ∈ R and a vector of m arbitrary values z = (z_1, . . . , z_m), and assume that the computation of C(R_1, . . . , R_n; z_1, . . . , z_m) requires time t(n, m). Let L = ((ξ, η_1), . . . , (ξ, η_n)) ∈ X^n and η = (η_1, . . . , η_n). The PRF (F.KG, F) is said to satisfy amortized closed-form efficiency for (C, L) if there exist two polynomial-time algorithms CFEval^off_{C,η} and CFEval^on_{C,ξ} such that (1) for any ω ← CFEval^off_{C,η}(k, z), it holds that CFEval^on_{C,ξ}(k, ω) = C(F_k(ξ, η_1), . . . , F_k(ξ, η_n); z_1, . . . , z_m), and (2) the running time of CFEval^on_{C,ξ}(k, ω) is o(t(n, m)).

Let f(x_1, . . . , x_n) = ∑_{i,j=1}^n α_{i,j} x_i x_j + ∑_{i=1}^n β_i x_i be a degree-2 arithmetic circuit defined by f = {α_{i,j}, β_i}_{i,j=1}^n, and let C : (𝔾_1 × 𝔾_2)^n × ℤ_q^{n²+n} → 𝔾_T be the computation that evaluates f "in the exponent" of the bilinear group. Let bgpp = (q, 𝔾_1, 𝔾_2, 𝔾_T, e, g, h) be a tuple of bilinear group parameters, and let F be a PRF with domain {0, 1}* and range ℤ_q^2. Fiore, Gennaro and Pastro [17] proposed a PRF with amortized closed-form efficiency for (C, L).
• F.KG(1^λ): Choose two secret keys k_1, k_2 for the PRF F; output k = (k_1, k_2) and pp, where pp defines the domain X = ({0, 1}*)^2 and the range R = 𝔾_1 × 𝔾_2 (or 𝔾_T).
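A simplified multiplicative analogue (not the pairing-based PRF of [17]; the HMAC-derived "random functions" and fixed keys below are illustrative assumptions) exhibits the amortized closed-form structure: the offline value ω depends only on η and z and is computed once in O(n), after which the online phase is a single exponentiation for every ξ:

```python
import hashlib, hmac, random

P = 2**127 - 1
g = 3

def prf_int(key, msg):
    # keyed "random function" into Z_{P-1}, derived from HMAC-SHA256
    d = hmac.new(key, msg.encode(), hashlib.sha256).digest()
    return int.from_bytes(d, "big") % (P - 1)

k1, k2 = b"key-one", b"key-two"             # F.KG output (fixed here for brevity)

def F(xi, eta):
    # F_k(xi, eta) = g^(u(xi) * v(eta)) with u keyed by k1, v keyed by k2
    return pow(g, prf_int(k1, xi) * prf_int(k2, eta) % (P - 1), P)

def C(Rs, zs):
    # the computation: prod_i R_i^{z_i}, costing n exponentiations
    out = 1
    for R, z in zip(Rs, zs):
        out = out * pow(R, z, P) % P
    return out

etas = [f"eta{i}" for i in range(50)]
zs = [random.randrange(P - 1) for _ in etas]

# offline (CFEval^off): omega depends only on eta and z, reusable for every xi
omega = sum(prf_int(k2, e) * z for e, z in zip(etas, zs)) % (P - 1)

def cfeval_on(xi, omega):
    # online (CFEval^on): one exponentiation, since C(...) = g^(u(xi) * omega)
    return pow(g, prf_int(k1, xi) * omega % (P - 1), P)

for xi in ("input-a", "input-b"):
    Rs = [F(xi, e) for e in etas]
    assert cfeval_on(xi, omega) == C(Rs, zs)
```

The PRF of [17] realizes the same ω-then-one-operation structure for the degree-2 computation C over the bilinear group.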

The construction
In this section, we present a private verifiable computation scheme Γ whose admissible function family consists of all matrix functions over a finite field. In this scheme, the function to be delegated is a square matrix M = (M_{i,j}) of order n, and the input is a vector x = (x_1, . . . , x_n) of dimension n; the server is required to compute and reply with an encoding of M ⋅ x^⊤, where x^⊤ is the transpose of x. The input (and output) privacy of Γ is attained by the client encrypting x (as HE.Enc(x)) before giving it to the server. The somewhat homomorphic encryption scheme used here allows the server to perform homomorphic scalar multiplications and additions on the elements of M (in the clear) and the ciphertext of x, which yields an encrypted version of M ⋅ x^⊤ for the client. The server can also compute the homomorphic hash digests of HE.Enc(x) and combine these digests with the tags of M to generate a proof that its computation is correct. The homomorphic property of the hash and the amortized closed-form efficiency of the PRF make the client's verification significantly faster than computing M ⋅ x^⊤ from scratch. Below is the description of Γ.

• Γ.KeyGen(1^λ, M):
- run HE.KeyGen(1^λ) to generate an encryption key pk and a decryption key dk for the encryption scheme HE;
- choose a tuple bgpp = (q, 𝔾_1, 𝔾_2, 𝔾_T, e, g, h) of bilinear group parameters;
- run H.KeyGen(1^λ) to choose two keys K (public) and κ (private) for the homomorphic hash H;
- run F.KG(1^λ) to generate a secret key k = (k_1, k_2) for the PRF F; choose a ← ℤ_q;
- compute T_{i,j} = g^{aM_{i,j}} ⋅ [F_k(i, j)]_1 for all (i, j) ∈ [n]²;
- output PK_M = (p, m, n, bgpp, pk, K, M, T = (T_{i,j})) and SK_M = (dk, κ, k, a).
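Stripping away the pairing, the hash and the encryption, the role of the tags can be sketched at the exponent level (a schematic of ours, not the actual scheme: real tags live in 𝔾_1, the check uses pairings, and the client's PRF-dependent term is obtained quickly via CFEval rather than recomputed per row):

```python
import random

q = 2**61 - 1   # toy prime modulus
n = 8
random.seed(1)

# Delegated function: an n x n matrix M over Z_q
M = [[random.randrange(q) for _ in range(n)] for _ in range(n)]

# KeyGen (exponent level): secret a and PRF stand-ins r_{i,j};
# public tags t_{i,j} = a*M_{i,j} + r_{i,j} mod q
a = random.randrange(1, q)
r = [[random.randrange(q) for _ in range(n)] for i in range(n)]
t = [[(a * M[i][j] + r[i][j]) % q for j in range(n)] for i in range(n)]

# Client input
x = [random.randrange(q) for _ in range(n)]

# Server: result y = M x and per-row proofs pi_i = sum_j t_{i,j} x_j
y = [sum(M[i][j] * x[j] for j in range(n)) % q for i in range(n)]
pi = [sum(t[i][j] * x[j] for j in range(n)) % q for i in range(n)]

# Client: accept iff pi_i = a*y_i + sum_j r_{i,j} x_j for every row
for i in range(n):
    assert pi[i] == (a * y[i] + sum(r[i][j] * x[j] for j in range(n))) % q

# A tampered result fails the check
y_bad = y[:]
y_bad[0] = (y_bad[0] + 1) % q
assert pi[0] != (a * y_bad[0] + sum(r[0][j] * x[j] for j in range(n))) % q
```

Here t_{i,j} = a⋅M_{i,j} + r_{i,j} plays the role of T_{i,j} = g^{aM_{i,j}} ⋅ [F_k(i, j)]_1, with r_{i,j} standing in for the PRF output in the exponent.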
Privacy: In the scheme Γ, the client's input x is encrypted under HE, a slight variation of the somewhat homomorphic encryption scheme of Brakerski and Vaikuntanathan [8]. That encryption scheme is semantically secure under the hardness of the polynomial learning with errors (PLWE) problem. The input (and output) privacy of Γ follows from HE's semantic security.
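The linear homomorphism the server relies on can be illustrated with a deliberately toy scheme (a one-time pad over ℤ_q; this stand-in is ours, for illustration only, and is neither the BV scheme nor reusable): scalar multiplications and additions performed on ciphertexts commute with decryption.

```python
import random

q = 2**61 - 1
random.seed(7)
n = 4

M = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
x = [random.randrange(q) for _ in range(n)]

# Toy "encryption": Enc(x_i) = x_i + pad_i (mod q).
# One-time pad stand-in for HE.Enc -- illustration only, NOT the BV scheme.
pad = [random.randrange(q) for _ in range(n)]
ct = [(xi + pi) % q for xi, pi in zip(x, pad)]

# Server: homomorphic scalar multiplications and additions,
# with M in the clear and x only as ciphertext
ct_y = [sum(M[i][j] * ct[j] for j in range(n)) % q for i in range(n)]

# Client: decrypt by subtracting M . pad^T (derivable from the secret pad)
y = [(ct_y[i] - sum(M[i][j] * pad[j] for j in range(n))) % q
     for i in range(n)]

assert y == [sum(M[i][j] * x[j] for j in range(n)) % q for i in range(n)]
```

With a real somewhat homomorphic scheme the client decrypts without redoing any O(n²) work; the pad version only shows that the server's linear operations on ciphertexts carry over to the underlying plaintexts.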

Security:
The security of Γ requires that no adversary running in probabilistic polynomial time can persuade the verification algorithm to accept a wrong result.

Theorem 2. If F is a secure PRF and H is a collision-resistant homomorphic hash, then Γ is a secure verifiable computation scheme.
Proof. Let λ be any security parameter, let M = (M_{i,j}) be any n × n matrix, and let A be any p.p.t. adversary. We define the following security experiments.

E_0: This is the standard security experiment Exp^Ver_{Γ,A}(M, λ) of Definition 2.

E_1: This experiment is identical to E_0, except that, at step (d) of the standard security experiment, W_i is computed as W_i = ∏_{j=1}^{n} e(F_k(i, j), h)^{μ_j(β,α)} for every i ∈ [n], instead of using the key ω for efficient verification (thereby avoiding the use of CFEval^on).

E_2: This is identical to E_1, except that F_k is replaced with a random function R : ({0, 1}*)² → 𝔾_1 × 𝔾_2. Below is the description of E_2, where C ⊆ ℤ_q[X, Y] is the ciphertext space:
- run HE.KeyGen(1^λ) to generate (pk, dk);
- choose bgpp = (q, 𝔾_1, 𝔾_2, 𝔾_T, e, g, h) ← G(1^λ);
- run H.KeyGen(1^λ) to generate keys (K, κ) for the homomorphic hash H;
- choose a random function R : ({0, 1}*)² → 𝔾_1 × 𝔾_2 and a ← ℤ_q;
- compute T_{i,j} = g^{aM_{i,j}} ⋅ [R(i, j)]_1 for all (i, j) ∈ [n]²;
- output PK_M = (p, m, n, bgpp, pk, K, M, T) and SK_M = (dk, κ, R, a).
Due to the union bound, we have that

Pr[E_2 = 1] ≤ ∑_{(ϕ,ψ)} Pr[X_0 = (ϕ, ψ)] + ∑_{(ϕ,ψ)} Pr[X_1 = (ϕ, ψ)].

Note that X_0 = (ϕ, ψ) means that (ϕ, ψ) is the first index at which a collision of H_K is found by the adversary.
As H is collision-resistant, we must have Pr[X_0 = (ϕ, ψ)] ≤ negl(λ). On the other hand, X_1 = (ϕ, ψ) means that (ϕ, ψ) is the first index at which an equation about a is determined by A, which would reveal a to a computationally unbounded A. Note that a computationally unbounded A can rule out one possibility of a via any one of the inequalities of the form δ′_{ℓ,i}/δ_{ℓ,i} ≠ e(g, h)^{a⋅(γ′_{ℓ,i}(β,α) − γ_{ℓ,i}(β,α))}. Since a is uniformly distributed over ℤ_q and A can rule out at most polynomially many candidates, Pr[X_1 = (ϕ, ψ)] ≤ negl(λ).

Efficiency:
A VC scheme is outsourceable if the time to encode the input and verify the output is smaller than the time to compute the function from scratch. For every x ∈ ℤ_q^n, the time spent on Γ.ProbGen(SK_M, x) equals the time of computing {HE.Enc_pk(x_i)}_{i=1}^{n} plus the time of computing ω (n PRF computations and O(n) field operations). For every σ_y, the cost of Γ.Verify(SK_M, τ_x, σ_y) is dominated by O(n) group operations and n executions of HE.Dec_dk. Note that computing M ⋅ x^⊤ requires O(n²) field operations. When n is large enough, the client's cost of running Γ.ProbGen and Γ.Verify is o(n²) and substantially less than that of computing M ⋅ x^⊤ from scratch. Hence, Γ is outsourceable.

Implementation:
We implemented the scheme Γ on a Dell Optiplex 9020 desktop with an Intel Core i7-4790 processor running at 3.6 GHz and 4 GB of RAM, on which we run Ubuntu 16.04.1 and the g++ compiler version 5.4.0. All our programs are single-threaded and built on top of GMP. We consider the multiplication between a random square matrix of n rows (and columns) over a finite field of order > 2^256 and a random vector of dimension n over the same field, for n = 100, 200, . . . , 1000. We record the client's time for running Γ.ProbGen and Γ.Verify, and obtain Figure 3.
The experiment shows that, for n = 100, the client-side computation can be done in 0.89 seconds. If we use the scheme Γ in a natural way to delegate the multiplication of two 100 × 100 matrices, then the client-side computation can be done in at most 89 seconds. Parno, Howell, Gentry and Raykova [38] implemented the scheme of [18] to delegate the same computation; their experiment shows that the client of [18] has to spend at least 10^11 seconds on problem generation and result verification. Compared with [18], the client in our scheme is thus faster by about nine orders of magnitude. The performance of our implementation shows that the scheme presented in this section is nearly practical.

Application
Our verifiable computation schemes have interesting applications in the design of outsourced two-party protocols such as outsourced private information retrieval (PIR). PIR [12] allows a client to retrieve any block f_i of a database f = (f_1, f_2, . . . , f_N) from a server such that i ∈ [N] is not revealed to the server. PIR can trivially be achieved by the client downloading all of f, but that requires a communication cost of O(N). There are PIR schemes [12,31] in the semi-honest server model that achieve nontrivial communication cost o(N). Recently, outsourced PIR [25,34] has been suggested to offload the PIR server computation [5] to the cloud. Outsourcing requires PIR schemes that are secure against untrusted cloud servers, which may not faithfully execute the protocol.
It is trivial to see that f(E(j)) = f_j for every j ∈ [N]. Suppose a number of clients are to retrieve s bits of the database, say f_{i_1}, . . . , f_{i_s}, where i_1, . . . , i_s ∈ [N]. The encodings a_1 = E(i_1), . . . , a_s = E(i_s) form a set of s function inputs. Our scheme T(Σ) from Section 3.3 allows the clients to produce a noisy encoding of ⃗a = (a_1, . . . , a_s) and delegate the computations of {f_{i_j}}_{j=1}^{s} = {f(a_j)}_{j=1}^{s} to a PIR server such that ⃗a is kept private and any incorrect response from the server will be detected. The amortized communication cost for each of the s retrievals is dominated by O(t) vectors from 𝔽^h, which gives a nontrivial outsourced PIR.

Solution based on Γ:
We model the database as an n × n matrix M = (M_{i,j}). Suppose that the client is interested in M_{i,j}. Then it suffices for the client to retrieve the j-th column of M, i.e., (M_{1,j}, . . . , M_{n,j}). This retrieval is captured by M ⋅ x^⊤ with x = e_j = (0, . . . , 1_j, . . . , 0), the vector whose j-th component is 1 and all other components are 0. The scheme Γ allows the client to delegate the computation of M ⋅ x^⊤ with x kept private, and then verify the server's response efficiently. In particular, the client and the server only need to communicate O(n) = o(n²) HE ciphertexts, which gives a nontrivial outsourced PIR.
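The selection trick above is plain matrix-vector multiplication against a standard basis vector; the following sketch (ours, with the HE layer and the verification tags omitted, so the server here would learn j) shows only the retrieval step:

```python
# Retrieving the block M[i][j] by delegating M . e_j^T, which equals the
# j-th column of M. In Gamma the client would send HE.Enc(e_j) instead of
# e_j itself, so the server never learns which column is fetched.
n = 4
M = [[10 * i + j for j in range(n)] for i in range(n)]  # toy database matrix

def delegate_column(M, j):
    n = len(M)
    e_j = [1 if t == j else 0 for t in range(n)]  # unit vector e_j
    # server-side computation: y = M . e_j^T
    return [sum(M[i][t] * e_j[t] for t in range(n)) for i in range(n)]

col = delegate_column(M, 2)
assert col == [M[i][2] for i in range(n)]  # the j-th column, containing M[i][j]
```

The client then reads M_{i,j} off as the i-th entry of the decrypted result.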
Extension: By applying T to the publicly delegatable and verifiable schemes of [37], one would obtain schemes that are publicly delegatable and verifiable as well. These schemes would allow one to store f on a cloud server such that, later, any client can freely retrieve a block of f on its own. Our schemes can also be used in other two-party protocols, such as oblivious polynomial evaluation and oblivious transfer [36], in order to obtain schemes secure against malicious parties.

Conclusion
In this paper, we proposed a transformation that adds privacy to a number of existing verifiable outsourcing schemes for the function family of multivariate polynomials over finite fields. The transformation is based on a noisy encoding of inputs and gives the first nearly practical verifiable computation scheme that has input (and output) privacy and does not limit the degree of the delegated polynomials. We also gave a verifiable computation scheme for the delegation of matrix-vector multiplication which has very efficient verification. We showed an application of our schemes to the outsourcing of PIR. Applications of the schemes to other problems, such as oblivious polynomial evaluation and oblivious transfer, are interesting directions for future work.

A Security proof for the transformation T
Let f(x_1, . . . , x_h) be any h-variate polynomial, and let k be a security parameter. Consider the security definition, Definition 2. Let A be any p.p.t. adversary attacking Γ, and let ϵ = Pr[Exp^Ver_{Γ,A}(f, k) = 1]. We need to show that ϵ is a negligible function of k. This can be done by constructing a p.p.t. adversary B that executes A as a subroutine and attacks Σ with at least the same success probability, i.e., Pr[Exp^Ver_{Σ,B}(f, k) = 1] ≥ ϵ. Given (f, k), the adversary B simply works as below:
• first of all, B's challenger computes (pk_f, sk_f) ← Σ.KeyGen(1^k, f) and then gives pk_f to B;
• the adversary B invokes A with PK_f = pk_f;
• for i = 1 to q = q(k), the adversaries B, A and the challenger of B proceed as below:
- based on its current view, i.e., (PK_f, {⃗a_j, σ_{⃗a_j}, b_j}_{j=1}^{i−1}), the adversary A produces a new set ⃗a_i = (a_{i,1}, . . . , a_{i,s}) ∈ (𝔽^h)^s of points and gives the set to B;
- the adversary B computes (pk_{⃗a_i}, rk_{⃗a_i}) ← NEnc(1^k, ⃗a_i) and parses pk_{⃗a_i} as {c_{i,j}}_{j=1}^{m};
- for j = 1 to m: the adversary B gives c_{i,j} to its challenger; the challenger computes an encoding of c_{i,j} with Σ.ProbGen and returns it to B, who relays it to A as part of σ_{⃗a_i}; B likewise forwards A's answers to its challenger for verification and passes the resulting bit b_i back to A.
Any wrong result that A gets accepted in Γ corresponds to a wrong result that B gets accepted in Σ. Due to the security of Σ under Definition 2, the function ϵ must be negligible in k, the security parameter.
Funding: This work was supported by National Natural Science Foundation of China (No. 61602304).