
Proceedings on Privacy Enhancing Technologies

4 Issues per year | Open Access | Online ISSN: 2299-0984

Privacy Games: Optimal User-Centric Data Obfuscation

Reza Shokri
  • Corresponding author
  • University of Texas at Austin
Published Online: 2015-06-22 | DOI: https://doi.org/10.1515/popets-2015-0024

Abstract

Consider users who share their data (e.g., location) with an untrusted service provider in order to obtain a personalized (e.g., location-based) service. Data obfuscation is a prevalent user-centric approach to protecting users’ privacy in such systems: the untrusted entity receives only a noisy version of the user’s data. Perturbing data before sharing it, however, comes at the cost of the user’s utility (service quality), which is an inseparable design factor of obfuscation mechanisms. The entanglement of utility loss and privacy guarantee, together with the lack of a comprehensive notion of privacy, has led to the design of obfuscation mechanisms that are suboptimal in terms of utility loss, ignore the user’s information leakage in the past, or are limited to very specific notions of privacy that, for example, do not protect against adaptive inference attacks or adversaries with arbitrary background knowledge.

In this paper, we design user-centric obfuscation mechanisms that impose the minimum utility loss required to guarantee the user’s privacy. We optimize utility subject to a joint guarantee of differential privacy (indistinguishability) and distortion privacy (inference error). This double shield of protection limits both the information leakage through the obfuscation mechanism and the adversary’s posterior inference. We show that the privacy achieved by joint differential-distortion mechanisms against optimal attacks is as large as the maximum privacy achievable by either mechanism separately, while their utility cost is no larger than that of either the differential or the distortion mechanism alone. We model the optimization problem as a leader-follower (Stackelberg) game between the designer of the obfuscation mechanism and the potential adversary, and design adaptive mechanisms that anticipate and protect against optimal inference algorithms. The resulting obfuscation mechanism is thus optimal against any inference algorithm.

Keywords: Data Privacy; Obfuscation; Utility; Differential Privacy; Distortion Privacy; Inference Attack; Prior Knowledge; Optimization; Game Theory


About the article

Received: 2015-02-15

Revised: 2015-05-10

Accepted: 2015-05-15

Published Online: 2015-06-22

Published in Print: 2015-06-01


Citation Information: Proceedings on Privacy Enhancing Technologies, ISSN (Online) 2299-0984, DOI: https://doi.org/10.1515/popets-2015-0024.


© Reza Shokri. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License. BY-NC-ND 3.0
