Open Physics

Volume 15, Issue 1
A privacy-preserving parallel and homomorphic encryption scheme

Zhaoe Min / Geng Yang / Jingqi Shi
Corresponding author: Zhaoe Min, School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing, 210003, China
Published Online: 2017-04-06 | DOI: https://doi.org/10.1515/phys-2017-0014

Abstract

In order to protect data privacy whilst allowing efficient access to data in multi-node cloud environments, a parallel homomorphic encryption (PHE) scheme is proposed based on the additive homomorphism of the Paillier encryption algorithm. In the proposed PHE algorithm, the plaintext is divided into several blocks and the blocks are encrypted in parallel. Experimental results demonstrate that the encryption algorithm reaches a speed-up ratio of about 7.1 in a MapReduce environment with 16 cores and 4 nodes.

Keywords: privacy protection; homomorphic encryption; parallel encryption; parallel homomorphic encryption; cloud computing; MapReduce

PACS: 89.70.-a; 89.70.Eg

1 Introduction

Since 2009, cloud providers such as Microsoft, Amazon and Google have experienced security incidents, and the security of cloud computing has become the paramount problem constraining its development. In 2011, the Cloud Security Alliance (CSA) presented its cloud security model, emphasising the issue of privacy protection in cloud computing. The privacy problem in cloud computing results from the features of data outsourcing and service renting: users lose direct control of their data once they store it in the cloud, which can lead to the leakage and abuse of personal data. Some privacy-preserving methods are proposed in [1, 2]. Furthermore, data privacy can be protected from the two aspects of data encryption and user identity management. The homomorphic encryption (HE) technique supports the management of ciphertext data under privacy protection: retrieval, computation and statistics can be applied directly to ciphertexts in the cloud, and the results are returned to the user as ciphertexts. In contrast to conventional encryption algorithms, this eliminates the need for frequent encryption and decryption operations between the cloud and the users, thus reducing communication overhead and computation resources.

Cloud computing clusters possess large numbers of computational nodes and can be used for storing and processing massive data. Because it is easy to construct a cloud computing environment, many enterprises and research institutes have built private cloud computing platforms. Data that needs to be stored in public clouds can therefore be encrypted using the parallel features of the private cloud clusters and the computational capacities of their nodes. Given the massive volume of data and the frequency of cloud accesses, it is of great importance to speed up mass-data encryption by parallelising the encryption computations in combination with the features of cloud computing and the network.

2 Related works

HE was first proposed by Rivest et al. in 1978 as the concept of "privacy homomorphism" [3]; it is an encryption scheme in which computations can be conducted directly on ciphertexts. Subsequently, RSA [4] and the ElGamal algorithm [5] with multiplicative homomorphism, as well as Paillier [6] and the BGN algorithm [7] with additive homomorphism, were successively proposed. All of the above algorithms support only a limited set of homomorphic operations.

In 2009, Gentry proposed the first fully homomorphic scheme, based on ideal lattices [8], which can perform any number of addition and multiplication operations on ciphertexts. Since then, more fully homomorphic schemes have been proposed, such as the DGHV scheme [9] based on integers, the BGV scheme [10] based on RLWE, the GSW13 scheme [11] based on approximate eigenvectors, the multi-key fully homomorphic scheme [12] based on NTRU, and so on.

Low encryption efficiency is a problem faced by both somewhat homomorphic encryption (SHE) and fully homomorphic encryption (FHE) algorithms. In order to improve encryption efficiency, various schemes have been proposed. For example, papers [13-16] introduce schemes that improve the efficiency of the RSA algorithm through different parallel methods, such as GPUs, CPUs or tree architectures. Wang et al. elaborated a parallel scheme on MapReduce to improve the efficiency of the Hill Cipher [17].

As for fully homomorphic schemes, Gu proposed an HE scheme based on ideal lattices together with the SIMD technique [18]. Cheon improved the DGHV scheme and proposed an optimised method for batch processing of plaintexts [19]. Considering that the homomorphic addition and multiplication of ciphertexts in the GSW scheme are simply matrix addition and multiplication, Hiromasa proposed a method to compress ciphertexts and optimize bootstrapping [20]; moreover, this was the first fully homomorphic scheme able to encrypt matrices and support homomorphic operations on them.

Among the above-mentioned HE algorithms, the Hill Cipher has been proven to lack sufficient security. For the same key length, the Paillier algorithm offers higher security than RSA. Moreover, the Paillier algorithm is more suitable for parallelisation due to its additive homomorphism. Much research has been done on fully homomorphic encryption in recent years; however, due to its low encryption efficiency, most schemes are still at the experimental stage and are not widely applied in practice.

With the goal of privacy protection, this paper realizes a parallel encryption algorithm based on MapReduce. Firstly, using the additive homomorphism of the Paillier algorithm, each data packet to be encrypted is divided into two approximately equal addends according to its parity. Then, exploiting the parallelism of MapReduce, we realize parallel encryption by grouping the plaintext. Experimental results show that the proposed scheme achieves a speed-up of 7.1 while maintaining security, thus mitigating the low computational efficiency of the somewhat homomorphic encryption algorithm.

3 Background Knowledge

3.1 Paillier algorithm

In 1999, Paillier proposed a probabilistic public-key cryptosystem based on composite degree residuosity classes, i.e. the Paillier algorithm [6]. The concrete encryption scheme is defined as follows.

Key generation: Set n = p × q and λ = lcm(p − 1, q − 1), where p and q are large primes, and then randomly select a base g ∈ Z*_{n²}. The validity of g can be checked efficiently by verifying that
$$\gcd\!\left(L(g^{\lambda} \bmod n^2),\, n\right) = 1 \qquad (1)$$
where the function L is defined as L(u) = (u − 1)/n. Now (n, g) is the public key, while the pair (p, q) (or, equivalently, λ) remains private.

Encryption: For a plaintext m ∈ Z_n, a random number r ∈ Z*_n is selected, and the ciphertext is defined as
$$c = E(m) = g^{m} r^{n} \bmod n^2 \qquad (2)$$

Decryption: For a ciphertext c ∈ Z*_{n²}, the plaintext m is recovered as
$$m = D(c) = \frac{L(c^{\lambda} \bmod n^2)}{L(g^{\lambda} \bmod n^2)} \bmod n \qquad (3)$$

It can be observed that Paillier's public-key encryption algorithm is additively homomorphic; the proof is as follows:
$$E(m_1) \times E(m_2) = (g^{m_1} r_1^{n} \bmod n^2)(g^{m_2} r_2^{n} \bmod n^2) = g^{m_1+m_2}(r_1 r_2)^{n} \bmod n^2$$
$$E(m_1+m_2) = g^{m_1+m_2} r^{n} \bmod n^2 \qquad (4)$$
where r is a random number. Setting r = r1 × r2 gives E(m1 + m2) = E(m1) × E(m2).
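For concreteness, the following is a minimal, self-contained sketch of these primitives in Java (the language suggested by the BigInteger type used later in Section 4), using toy key sizes and the common choice g = n + 1, which satisfies condition (1). It is illustrative only, not the implementation used in the experiments.

    import java.math.BigInteger;
    import java.security.SecureRandom;

    // Illustrative Paillier primitives (Eqs. 1-4); toy key sizes, not the authors' code.
    public class Paillier {
        final BigInteger n, nSquared, g, lambda;
        static final SecureRandom rnd = new SecureRandom();

        Paillier(int modulusBits) {
            BigInteger p = BigInteger.probablePrime(modulusBits / 2, rnd);
            BigInteger q = BigInteger.probablePrime(modulusBits / 2, rnd);
            n = p.multiply(q);
            nSquared = n.multiply(n);
            BigInteger p1 = p.subtract(BigInteger.ONE), q1 = q.subtract(BigInteger.ONE);
            lambda = p1.multiply(q1).divide(p1.gcd(q1));   // lambda = lcm(p-1, q-1)
            g = n.add(BigInteger.ONE);                     // g = n + 1 satisfies gcd(L(g^lambda mod n^2), n) = 1
        }

        BigInteger L(BigInteger u) {                       // L(u) = (u - 1) / n
            return u.subtract(BigInteger.ONE).divide(n);
        }

        BigInteger encrypt(BigInteger m) {                 // c = g^m * r^n mod n^2   (Eq. 2)
            BigInteger r;
            do { r = new BigInteger(n.bitLength(), rnd); }
            while (r.signum() == 0 || r.compareTo(n) >= 0 || !r.gcd(n).equals(BigInteger.ONE));
            return g.modPow(m, nSquared).multiply(r.modPow(n, nSquared)).mod(nSquared);
        }

        BigInteger decrypt(BigInteger c) {                 // m = L(c^lambda mod n^2) / L(g^lambda mod n^2) mod n   (Eq. 3)
            BigInteger mu = L(g.modPow(lambda, nSquared)).modInverse(n);
            return L(c.modPow(lambda, nSquared)).multiply(mu).mod(n);
        }

        public static void main(String[] args) {
            Paillier ph = new Paillier(256);               // 256-bit modulus, as in Section 4 (a toy size)
            BigInteger m1 = BigInteger.valueOf(21), m2 = BigInteger.valueOf(34);
            BigInteger product = ph.encrypt(m1).multiply(ph.encrypt(m2)).mod(ph.nSquared);
            System.out.println(ph.decrypt(product));       // additive homomorphism (Eq. 4): prints 55
        }
    }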

3.2 MapReduce Model

From the perspective of the software system, the Hadoop system includes two parts: distributed storage and parallel computation. The Hadoop Distributed File System (HDFS) is mainly used for the distributed storage of massive-scale data; the master node, named the NameNode, controls and manages the whole distributed file system, while each slave node responsible for data storage is called a DataNode.

In order to realize parallel processing of the large data stored in HDFS, Hadoop provides a parallel computation framework named MapReduce. This framework contains a master node, named the JobTracker, and a multitude of slave nodes called TaskTrackers. The JobTracker is mainly responsible for scheduling and managing the tasks of a job, while each TaskTracker executes the tasks distributed to it by the JobTracker.

The MapReduce parallel computing framework is a parallelized program execution system that provides a parallel processing model and a process comprising two stages of Map and Reduce. In the Map phase, the input tuples are partitioned by each Mapper, and tuples with the same partition value are passed to the same Reducer for correlation calculation.

4 An Encryption Scheme Based on the Additive Homomorphism

The encryption algorithm in this work mainly uses the additive homomorphism of the Paillier algorithm to encrypt, and is defined as follows.

$$E(m) = E(m_1 + m_2) = E(m_1) \times E(m_2)$$

To ensure the security of the encryption algorithm, the length of the public key is set to 256 bits, and the specific encryption scheme is as follows.

  1. According to the definition of the Paillier encryption algorithm, the public key (n, g) and private key (p, q) are generated.

  2. Assume that the plaintext, of length L MB, is divided into x groups of 256 bits each; the number of packets x is computed as
$$x = \frac{L \times 8 \times 2^{20}}{256} = \frac{L \times 2^{20}}{32} \qquad (5)$$

  3. For each packet, the plaintext is first converted to BigInteger type; its value is denoted m, with m = m1 + m2. The values of m1 and m2 are then computed: if m is even (m mod 2 = 0), then m1 = m2 = m/2; otherwise m1 and m2 are given as follows.

    $$m_1 = \frac{m-1}{2}, \quad m_2 = \frac{m+1}{2} \qquad (m \bmod 2 = 1) \qquad (6)$$

  4. E(m1) and E(m2) are calculated respectively, and the specific formula is given as follows.

    $$E(m_1) = g^{m_1} r_1^{n} \bmod n^2, \quad E(m_2) = g^{m_2} r_2^{n} \bmod n^2 \qquad (7)$$

The ciphertexts are E(m1) and E(m2). When m1 = m2 we let r1 = r2, so only E(m1) needs to be calculated, since then E(m1) = E(m2).

During the decryption process, according to the additive homomorphism of Paillier, the ciphertext of m is recombined as follows.

$$E(m) = E(m_1) \times E(m_2) \bmod n^2 \qquad (8)$$

Then, using the decryption formula (Eq. 3), the plaintext can be recovered.
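A small sketch of this blockwise procedure, reusing the illustrative Paillier class from Section 3.1: each packet value m is split by parity into m1 + m2 (Eq. 6), both halves are encrypted (Eq. 7), and m is recovered through the additive homomorphism (Eq. 8).

    import java.math.BigInteger;

    // Illustrative parity split and homomorphic recombination for one packet.
    public class SplitEncrypt {
        public static void main(String[] args) {
            Paillier ph = new Paillier(256);
            BigInteger m = new BigInteger("123456789");
            BigInteger two = BigInteger.valueOf(2);

            BigInteger m1, m2;
            if (m.testBit(0)) {                                  // m odd:  m1 = (m-1)/2, m2 = (m+1)/2   (Eq. 6)
                m1 = m.subtract(BigInteger.ONE).divide(two);
                m2 = m.add(BigInteger.ONE).divide(two);
            } else {                                             // m even: m1 = m2 = m/2
                m1 = m.divide(two);
                m2 = m1;
            }

            BigInteger c1 = ph.encrypt(m1);                      // Eq. (7)
            BigInteger c2 = m1.equals(m2) ? c1 : ph.encrypt(m2); // reuse c1 when m1 = m2 (r1 = r2)

            BigInteger c = c1.multiply(c2).mod(ph.nSquared);     // E(m) = E(m1) * E(m2) mod n^2   (Eq. 8)
            System.out.println(ph.decrypt(c));                   // prints 123456789
        }
    }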

5 Parallel Encryption Scheme based on the MapReduce

The parallel encryption scheme based on MapReduce presented in this paper combines the cloud environment with the additive homomorphism of the Paillier algorithm, and realizes parallel encryption by dividing the plaintext into blocks. The encryption efficiency can thus be greatly enhanced.

5.1 Algorithm Flow

In the MapReduce programming model, the Split function divides the input data into fixed-size splits according to the user's needs. The master node then distributes the splits to different processors based on the relevant scheduling mechanism. The Map function performs user-defined operations on each split, with each Map producing one part of the results, and the Reduce function is responsible for integrating the results of all the Maps. In the parallel scheme, the encryption of each split is independent, so the splits can be distributed to multiple Maps and encrypted at the same time; the scheme can thus be defined as a tuple of polynomial-time algorithms
$$(\mathrm{Split}, \mathrm{Map}, \mathrm{Reduce}) \qquad (9)$$

The specific process is as shown in Fig. 1.

Figure 1: Parallel encryption process of Paillier
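The flow in Fig. 1 corresponds to an ordinary Hadoop job. A hypothetical driver wiring the tuple of Eq. (9) might look as follows; EncryptMapper and MergeReducer are the illustrative classes sketched in Section 5.3, and a single reduce task is configured, as in the experiments.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Hypothetical job driver for the (Split, Map, Reduce) scheme; class names are illustrative.
    public class ParallelPaillierJob {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "parallel Paillier encryption");
            job.setJarByClass(ParallelPaillierJob.class);
            job.setMapperClass(EncryptMapper.class);        // encrypts each split (Section 5.3, Fig. 3)
            job.setReducerClass(MergeReducer.class);        // splices ciphertext blocks by offset
            job.setNumReduceTasks(1);                       // one Reducer, as in the experiments
            job.setMapOutputKeyClass(LongWritable.class);   // key = byte offset of the split
            job.setMapOutputValueClass(Text.class);         // value = ciphertext of the split
            job.setOutputKeyClass(NullWritable.class);      // final output: ciphertext only
            job.setOutputValueClass(Text.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }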

5.2 Split Algorithm

We assume that p is the number of processing cores, that the primary file is divided into n blocks (n > 1), and that Bsize_i indicates the size of the i-th block (i ≤ n). As the length of the encryption key is 256 bits, the size of each plaintext split is made an integral multiple of 32 bytes; specifically, the size of each data block is calculated with Eq. (10).

$$Bsize_i(\mathrm{B}) = \begin{cases} \left\lfloor \dfrac{L \times 2^{20}}{32 \times n} \right\rfloor \times 32, & i < n \\[1ex] L \times 2^{20} - (n-1)\left\lfloor \dfrac{L \times 2^{20}}{32 \times n} \right\rfloor \times 32, & i = n \end{cases} \qquad (10)$$
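As an illustration, the block sizes of Eq. (10) can be computed as follows; variable names are ours, every block except the last is a whole number of 256-bit (32-byte) packets, and the last block absorbs the remainder.

    // Illustrative computation of the block sizes in Eq. (10).
    public class SplitSizes {
        // lMegabytes = plaintext size L in MB, n = number of blocks (n > 1)
        static long[] blockSizes(long lMegabytes, int n) {
            long totalBytes = lMegabytes * (1L << 20);                // L * 2^20 bytes
            long packets = totalBytes / 32;                           // x = L * 2^20 / 32   (Eq. 5)
            long regular = (packets / n) * 32;                        // floor(x / n) packets per block, in bytes
            long[] sizes = new long[n];
            for (int i = 0; i < n - 1; i++) sizes[i] = regular;
            sizes[n - 1] = totalBytes - (long) (n - 1) * regular;     // last block takes the rest
            return sizes;
        }

        public static void main(String[] args) {
            for (long s : blockSizes(256, 4)) System.out.println(s);  // 256 MB in 4 blocks: 67108864 bytes each
        }
    }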

Fig. 2 shows the specific block algorithm. The content of the i-th split is held in a file buffer, the key indicates the offset of the start of the i-th split within the file, and the value is the content stored in the file buffer.

Figure 2: The specific block algorithm

5.3 Map Functions and Reduce Functions

Map() is a single task, and the Mapper class is the class that realizes the Map tasks. Hadoop provides an abstract Mapper base class; we inherit from this base class and implement the relevant interface methods. The detailed interface of the map() method is: public void map(Object key, Text value, Context context) throws IOException, InterruptedException. The parameter key is the key passed into the map, value is the corresponding value, and context is the environment object through which the program accesses Hadoop. The map function is applied repeatedly to each group of data in each data block. Fig. 3 shows the encryption algorithm implemented in each map().

Figure 3: Map function
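A hedged sketch of such a Mapper is given below. For simplicity it assumes the default TextInputFormat, so each record is a line keyed by its byte offset; the custom Split of Fig. 2 would instead deliver whole blocks. The Paillier helper is the illustrative class from Section 3.1, and in a real job the public key would be passed in through the job configuration rather than generated inside the Mapper.

    import java.io.IOException;
    import java.math.BigInteger;
    import java.util.Arrays;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Illustrative encryption Mapper (cf. Fig. 3): encrypts its input 256-bit packet by packet
    // and emits <offset, ciphertext> so the Reducer can restore the original order.
    public class EncryptMapper extends Mapper<LongWritable, Text, LongWritable, Text> {
        private final Paillier ph = new Paillier(256);   // placeholder: key generation belongs to the client

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            byte[] data = value.copyBytes();
            StringBuilder cipher = new StringBuilder();
            for (int off = 0; off < data.length; off += 32) {                    // one 256-bit packet at a time
                int len = Math.min(32, data.length - off);
                BigInteger m = new BigInteger(1, Arrays.copyOfRange(data, off, off + len));
                BigInteger m1 = m.shiftRight(1);                                 // floor(m/2)
                BigInteger m2 = m.subtract(m1);                                  // m1 + m2 = m   (Eq. 6)
                cipher.append(ph.encrypt(m1).toString(16)).append(':')
                      .append(ph.encrypt(m2).toString(16)).append(' ');          // Eq. (7)
            }
            context.write(key, new Text(cipher.toString()));                     // key = offset of this record
        }
    }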

The detailed interface of reduce() is: protected void reduce(KEYIN key, Iterable<VALUEIN> values, Context context) throws IOException, InterruptedException. The input parameter key is the key input to reduce, values is the list of values for that key, and context is the environment object through which the program accesses Hadoop.

The <key, value> pairs output by the Map functions are merged on their keys: the different values under the same key are merged into one list, and the lists are sorted by key. When the files are written, only the value portion is output, without recording the keys. As the keys are the byte offsets within the text, the partial ciphertexts can be spliced in order into the complete ciphertext output.
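A corresponding Reducer sketch: because the offsets arrive sorted, writing only the values in key order splices the per-block ciphertexts into the complete ciphertext, and NullWritable suppresses the keys in the output file.

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    // Illustrative merging Reducer: outputs ciphertexts in offset order, keys omitted.
    public class MergeReducer extends Reducer<LongWritable, Text, NullWritable, Text> {
        @Override
        protected void reduce(LongWritable key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            for (Text ciphertext : values) {
                context.write(NullWritable.get(), ciphertext);   // value only; offsets are not written
            }
        }
    }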

5.4 Performance Analysis

We denote the entire encryption time of the Paillier algorithm, the key-generation time and the encryption time by Tseq, Tkey and TEnc respectively; then Tseq = Tkey + TEnc. The main computational overhead of the encryption scheme is the modular exponentiation.

Assuming that the time cost of a single modular exponentiation is Tpd, the time cost of one encryption is 2Tpd. Under the above assumptions, a plaintext of length L MB is divided into x groups, so the time needed to encrypt it is TEnc = 2 × x × Tpd and the entire encryption time is Tseq = Tkey + 2 × x × Tpd. Based on Eq. (5), the following is obtained:
$$T_{seq} = T_{key} + 2 \times \frac{L \times 2^{20}}{32} \times T_{pd} \qquad (11)$$

When L is large, Tkey can be ignored; it can be seen that as L increases, the encryption time Tseq rises essentially linearly.

During the parallel encryption process, assuming that the operations do not overlap, the execution time Tn of the parallel encryption algorithm is composed of four parts:
$$T_n = T_{comm} + T_{key} + T_{Map} + T_{reduce} \qquad (12)$$
where Tcomm is the communication time, TMap is the parallel encryption time of the Map phase, and Treduce is the time for merging the ciphertexts by key after encryption.

Assume that the cluster provides p CPU cores in total, the plaintext of length L MB is divided into n blocks, and the processing time of the i-th block by a Map on one CPU core is Ti; then
$$T_n = T_{comm} + T_{key} + \max(T_i) \times \left\lceil \frac{n}{p} \right\rceil + T_{reduce} \qquad (13)$$

Based on the split sizes given by Eq. (10), each plaintext block contains Bx_i = L × 2^20 / (32 × n) packets, so
$$T_i = 2 \times \frac{L \times 2^{20}}{32 \times n} \times T_{pd} \qquad (14)$$

Assuming equally sized plaintext blocks, the generated ciphertext blocks are also equally sized; let Tric be the Reduce time needed per group of ciphertext data. The Reduce time Treduce needed for the entire ciphertext is then
$$T_{reduce} = x \times T_{ric} = \frac{L \times 2^{20}}{32} \times T_{ric} \qquad (15)$$

Treduce is thus proportional to the size of the generated ciphertext, and the overall time can be written as follows.

$$T_n = T_{comm} + T_{key} + 2 \times \frac{L \times 2^{20}}{32 \times n} \times T_{pd} \times \left\lceil \frac{n}{p} \right\rceil + \frac{L \times 2^{20}}{32} \times T_{ric} \qquad (16)$$

If L is large, Tkey and Tcomm can be ignored. According to Eq. (16), as the plaintext size (and hence the number of blocks n) increases, the main overhead of the encryption time Tn shifts from the Map phase to the Reduce phase.

The overall speed-up ratio Sp can be expressed as
$$S_p = \frac{T_{seq}}{T_n} = \frac{T_{key} + T_{Enc}}{T_{comm} + T_{key} + T_{Map} + T_{reduce}} \qquad (17)$$

As Tcomm and Tkey can be ignored, the following is obtained.

$$S_p \approx \frac{2 \times \frac{L \times 2^{20}}{32} \times T_{pd}}{2 \times \frac{L \times 2^{20}}{32 \times n} \times T_{pd} \times \left\lceil \frac{n}{p} \right\rceil + \frac{L \times 2^{20}}{32} \times T_{ric}} = \frac{1}{\frac{1}{n} \times \left\lceil \frac{n}{p} \right\rceil + \frac{T_{ric}}{2 \times T_{pd}}} \qquad (18)$$

It follows that when the total number of cores p is fixed and n ∈ (kp, kp + p] with k a natural number, the speed-up ratio Sp increases with n, and its maximum does not exceed p.
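Concretely, for n ∈ (kp, kp + p] we have ⌈n/p⌉ = k + 1, so Eq. (18) gives
$$S_p \approx \frac{1}{\frac{k+1}{n} + \frac{T_{ric}}{2 T_{pd}}},$$
which increases with n on this interval and is largest at n = (k + 1) × p, where (k + 1)/n = 1/p and therefore
$$S_p^{\max} \approx \frac{1}{\frac{1}{p} + \frac{T_{ric}}{2 T_{pd}}} = \frac{p}{1 + p \cdot \frac{T_{ric}}{2 T_{pd}}} < p.$$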

5.5 Security Analysis

This scheme is homomorphic and, under chosen-plaintext attacks, it is semantically secure. In paper [6], the encryption scheme proposed by Paillier is designed over the group Z*_{N²}, where N is an RSA modulus. For a valid parameter pair ⟨N, g⟩ ∈ P_n, the encryption map
$$P_{N,g}: \mathbb{Z}_N \times \mathbb{Z}_N^{*} \to \mathbb{Z}_{N^2}^{*}, \qquad (m, r) \mapsto g^{m} r^{N} \bmod N^2$$
is well defined.

Paillier proved that P_{N,g} is a one-way trapdoor permutation. Since P_{N,g} is a bijection for ⟨N, g⟩ ∈ P_n, for every element c ∈ Z*_{N²} there exists a unique pair (m, r) ∈ Z_N × Z*_N satisfying c = g^m r^N mod N², where m is called the class of c with respect to N and g, written Class_{N,g}(c). If the factorisation of N is known, Class_{N,g}(c) is easy to compute; if it is not known, no polynomial-time algorithm for computing Class_{N,g}(c) has been found so far. The one-wayness of P_{N,g} is denoted Paillier^{-1}[N, g], and it rests exactly on the hardness of computing composite residuosity classes, i.e. Paillier^{-1}[N, g] = Class[N, g].

The security of the MapReduce operation itself derives mainly from the job submission and task/shuffle stages. In the job submission phase, all job submission and job tracking operations use RPC (Remote Procedure Call) with Kerberos authentication. In the task stage, each task submitted by a user is launched under that user's identity, so one user's task cannot send operating-system signals to the TaskTracker or to other users' tasks. In the shuffle stage, when a map task finishes, it hands its results to the managing TaskTracker; each reduce task then requests the data assigned to it from the TaskTracker over HTTP, and Hadoop ensures that other users cannot obtain the intermediate results of the map tasks. Based on the above, the safety of data during the MapReduce process can be guaranteed.

6 Experiment Testing and Analysis

6.1 Experiment settings

The hardware platform used in this experiment includes one Master node and three slave nodes. The Master node is responsible for the monitoring and scheduling of jobs, while the slave nodes are in charge of distributed file storage and computation. The specific hardware configuration and software environment of each node are listed in Table 1.

Table 1: Software and hardware configuration

6.2 Experiment Results Analysis

In this experiment, encryption tests are conducted on plaintexts of sizes 256 MB, 512 MB, 768 MB, 1024 MB, 1280 MB, 1536 MB, 1792 MB and 2048 MB. Both serial and parallel tests are carried out. In the serial environment, the plaintexts are encrypted directly without splitting, whereas in the parallel environment the data is split into blocks. The block size in this experimental environment is set to 64 MB, so the files are divided into 4, 8, 12, 16, 20, 24, 28 and 32 data blocks respectively.

In the experiments, four computational nodes are used, each CPU has 4 cores, and the total number of cores is 16. Table 2 records the sizes of the different files and their encryption times and overall speed-up ratios under the serial and parallel environments. FS, SET, PET, MMT and RT denote File size, Serial encryption time, Parallel encryption time, Maximum Map time and Reduce time respectively. Fig. 4 shows the encryption time for files of different sizes under the serial and parallel environments, and Fig. 5 shows the overall speed-up ratio for files of different sizes.

Figure 4: Contrast diagram of encryption time

Figure 5: Speed-up ratio for files of different sizes

Table 2: Test results for the different files

From the above experimental results (shown in Table 2, Fig. 4 and Fig. 5), the following conclusions can be drawn.

  1. With a fixed number of nodes, the time needed by serial encryption is proportional to the plaintext size.

  2. The time needed by parallel encryption also increases with the plaintext size.

  3. When n < p, the Map functions account for most of the encryption time; as n approaches p, the difference between the time consumed by the Reduce functions and by the Map functions gradually decreases.

  4. When n > p, as n increases further the Reduce functions become the main time consumer of the encryption process. The main reason is that only one node is responsible for Reduce, and this also constrains the speed-up ratio.

  5. When n < p, the speed-up ratio Sp increases quickly, and a maximum is reached when n = p. When n > p, within each interval n ∈ (kp, kp + p) the speed-up ratio increases slowly, and the next maximum is reached when n = (k + 1) × p.

7 Conclusion

In this paper, we propose a novel privacy-preserving PHE scheme based on MapReduce, which uses the additive homomorphism of the Paillier algorithm. The proposed scheme is suitable for cloud computing environments due to its parallelism. Experimental results indicate that the scheme reaches a speed-up ratio of 7.1 through the parallelism of the cluster.

In the future, we will extend this work in the following aspects: (1) optimising the Reduce function by adopting two or more Reducers; (2) studying the parallelisation of other HE algorithms and comparing their efficiency with this algorithm, so as to identify the optimal HE algorithm.

Acknowledgement

This research is supported by the National Natural Science Foundation of China under Grants nos. 61572263, 61502251 and 61502243, the Natural Science Foundation of Jiangsu Province under Grants nos. BK20151511, BK20141429 and BK20161516, the Jiangsu Province Colleges and Universities Graduate Research and Innovation Plan under Grant no. KYLX_0816, the Natural Science Research Projects of Jiangsu Universities under Grants nos. 14KJB520031 and 15KJB520027, and the Natural Science Project of Nanjing University of Posts and Telecommunications under Grants nos. NY215097 and NY214127.

References

[1] Huang R.W., Gui X.L., Yu S., et al., Privacy-preserving computable encryption scheme of cloud computing, CJC, 2011, 34, 2391-2402.

[2] Gao W., Zali M.R., Degree-based indices computation for special chemical molecular structures using edge dividing method, AMNS, 2016, 1, 94-117.

[3] Rivest R.L., Adleman L., Dertouzos M.L., On data banks and privacy homomorphisms, Foundations of Secure Computation, 1978, 169-179.

[4] Rivest R., Shamir A., Adleman L., A method for obtaining digital signatures and public-key cryptosystems, Commun. ACM, 1978, 21, 120-126.

[5] Elgamal T., A public key cryptosystem and a signature scheme based on discrete logarithms, IEEE Trans. Inf. Theory, 1985, 31, 469-472.

[6] Paillier P., Public-key cryptosystems based on composite degree residuosity classes, EUROCRYPT '99 (2-6 May 1999, Prague, Czech Republic), Springer, Berlin, Heidelberg, 1999, 223-238.

[7] Boneh D., Goh E.J., Nissim K., Evaluating 2-DNF formulas on ciphertexts, Second Theory of Cryptography Conference (10-12 February 2005, Cambridge, MA, United States), Springer, Berlin, Heidelberg, 2005, 325-341.

[8] Gentry C., Fully homomorphic encryption using ideal lattices, ACM Symposium on Theory of Computing (May 31-June 2, 2009, Bethesda, MD, United States), ACM, New York, 2009, 169-178.

[9] Dijk M., Gentry C., Halevi S., et al., Fully homomorphic encryption over the integers, Advances in Cryptology - EUROCRYPT 2010, Springer, Berlin, Heidelberg, 2010, 24-43.

[10] Brakerski Z., Gentry C., Vaikuntanathan V., (Leveled) fully homomorphic encryption without bootstrapping, 3rd Innovations in Theoretical Computer Science Conference (8-10 January 2012, Cambridge, MA, United States), ACM, New York, 2012, 309-325.

[11] Gentry C., Sahai A., Waters B., Homomorphic encryption from learning with errors: Conceptually-simpler, asymptotically-faster, attribute-based, 33rd Annual International Cryptology Conference (18-22 August 2013, Santa Barbara, CA, United States), Springer, Heidelberg, 2013, 75-92.

[12] López-Alt A., Tromer E., On-the-fly multiparty computation on the cloud via multikey fully homomorphic encryption, 44th Annual ACM Symposium on Theory of Computing (19-22 May 2012, New York, NY, United States), ACM, New York, 2012, 1219-1234.

[13] Tembhurne J.V., Sathe S.R., RSA Public Key Acceleration on CUDA GPU, Proceedings of ICAIECES 2015 (22-23 April 2015, Chennai, India), Springer, 2016, 365-375.

[14] Victor M., Pérez G., Susan F., et al., Applied mathematics and nonlinear sciences in the war on cancer, AMNS, 2016, 1, 423-436.

[15] Lin C.H., Liu J.C., Li C.C., Parallel modulus operations in RSA encryption by CPU/GPU hybrid computation, 9th Asia Joint Conference on Information Security (January 26, 2014, Wuchang, Wuhan, China), IEEE, 2014, 71-75.

[16] Ithnin M.D.N., Parallel RSA encryption based on tree architecture, J. Chin. Inst. Eng., 2013, 36, 658-666.

[17] Wang X.Y., Min Z., Parallel algorithm for Hill Cipher on MapReduce, 2nd IEEE International Conference on Progress in Informatics and Computing (16-18 May 2014, Shanghai, China), IEEE, 2014, 493-497.

[18] Gu C.S., Fully homomorphic encryption from approximate ideal lattices, JSW, 2015, 26, 2696-2719.

[19] Cheon J.H., Coron J.S., Kim J., et al., Batch fully homomorphic encryption over the integers, Inform. Sciences, 2015, 310, 315-335.

[20] Hiromasa R., Abe M., Okamoto T., Packing messages and optimizing bootstrapping in GSW-FHE, IEICE Trans. Fund. Electron. Commun. Comput. Sci., 2016, E99A, 73-82.

About the article

Received: 2016-12-02

Accepted: 2016-12-16

Published Online: 2017-04-06


Citation Information: Open Physics, Volume 15, Issue 1, Pages 135–142, ISSN (Online) 2391-5471, DOI: https://doi.org/10.1515/phys-2017-0014.


© 2017 Zhaoe Min et al. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License (CC BY-NC-ND 3.0).
