Property Inference from Poisoning

Type: Article

Publication Date: 2022-05-01

Citations: 44

DOI: https://doi.org/10.1109/sp46214.2022.9833623

Abstract

Property inference attacks consider an adversary who has access to a trained ML model and tries to extract global statistics of the training data. In this work, we study property inference in scenarios where the adversary can maliciously control part of the training data (poisoning data) with the goal of increasing the leakage. Previous works on poisoning attacks focused on decreasing the accuracy of models. Here, for the first time, we study poisoning attacks where the goal of the adversary is to increase the information leakage of the model. We show that poisoning attacks can boost the information leakage significantly and should be considered a stronger threat model in sensitive applications where some of the data sources may be malicious. We theoretically prove that our attack can always succeed as long as the learning algorithm used has good generalization properties. We then experimentally evaluate our attack on different datasets (the Census dataset, the Enron email dataset, MNIST, and CelebA), properties (ones present in the training data as features, ones not present as features, and ones uncorrelated with the rest of the training data or the classification task), and model architectures (including ResNet-18 and ResNet-50). We achieve high attack accuracy with relatively low poisoning rates, namely 2–3% in most of our experiments. We also evaluate our attacks on models trained with differential privacy (DP) and show that even with very small values of $\epsilon$, the attack is still quite successful. Code is available at https://github.com/smahloujifar/PropertyInferenceFromPoisoning.git
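To make the attack concrete, below is a minimal, hypothetical sketch of the distinguishing game the abstract describes: the adversary injects a small fraction of poisoned points that tie the target property to the task label, then queries the trained model to tell apart two candidate property frequencies. The synthetic data, function names, and the hand-picked query statistic are all illustrative assumptions; the authors' actual implementation is in the repository linked above.

```python
# Hypothetical sketch of poisoning-aided property inference (illustrative
# assumptions throughout; not the paper's implementation).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample_training_set(n, prop_frac):
    """Training data where a prop_frac fraction of records carry the
    sensitive property (modeled here as feature 0 being positive)."""
    X = rng.normal(size=(n, 10))
    has_prop = rng.random(n) < prop_frac
    X[:, 0] = np.where(has_prop, np.abs(X[:, 0]), -np.abs(X[:, 0]))
    y = (X[:, 1] > 0).astype(int)  # task label is independent of the property
    return X, y

def poison_points(n_poison):
    """Poison ties the property feature to label 1; how strongly the
    trained model then links the two depends on the property's true
    frequency in the clean data, which is the leakage being amplified."""
    Xp = rng.normal(size=(n_poison, 10))
    Xp[:, 0] = np.abs(Xp[:, 0])
    return Xp, np.ones(n_poison, dtype=int)

def attack_statistic(prop_frac, n=5000, poison_rate=0.03):
    X, y = sample_training_set(n, prop_frac)
    Xp, yp = poison_points(int(poison_rate * n))
    model = LogisticRegression(max_iter=1000).fit(
        np.vstack([X, Xp]), np.concatenate([y, yp]))
    # Black-box distinguisher: mean confidence on property-carrying queries.
    Q = rng.normal(size=(500, 10))
    Q[:, 0] = np.abs(Q[:, 0])
    return model.predict_proba(Q)[:, 1].mean()

# The attacker thresholds the gap between the two candidate worlds.
print("property frequency 0.1:", attack_statistic(0.1))
print("property frequency 0.5:", attack_statistic(0.5))
```

In the paper, the distinguisher is derived from models trained in each candidate world rather than hand-picked as above, and the attack also covers properties that do not appear as features; this sketch keeps only the core mechanism, at the abstract's 2–3% poisoning rate.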

Locations

  • arXiv (Cornell University)
  • 2022 IEEE Symposium on Security and Privacy (SP)

Similar Works

  • Property Inference From Poisoning (2021) - Melissa Chase, Esha Ghosh, Saeed Mahloujifar
  • SNAP: Efficient Extraction of Private Properties with Poisoning (2022) - Harsh Chaudhari, John Abascal, Alina Oprea, Matthew Jagielski, Florian Tramèr, Jonathan Ullman
  • SNAP: Efficient Extraction of Private Properties with Poisoning (2023) - Harsh Chaudhari, John Abascal, Alina Oprea, Matthew Jagielski, Florian Tramèr, Jonathan Ullman
  • Machine Learning Security against Data Poisoning: Are We There Yet? (2022) - Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
  • Lessons Learned: Defending Against Property Inference Attacks (2022) - Joshua Stock, Jens Wettlaufer, Daniel Demmler, Hannes Federrath
  • Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks (2020) - Avi Schwarzschild, Micah Goldblum, Arjun K. Gupta, John P. Dickerson, Tom Goldstein
  • Property Unlearning: A Defense Strategy Against Property Inference Attacks (2022) - Joshua Stock, Jens Wettlaufer, Daniel Demmler, Hannes Federrath
  • Subpopulation Data Poisoning Attacks (2021) - Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea
  • Subpopulation Data Poisoning Attacks (2020) - Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea
  • Poisoning Attacks and Defenses on Artificial Intelligence: A Survey (2022) - Miguel A. Ramirez, Song-Kyoo Kim, Hussam Al Hamadi, Ernesto Damiani, Young-Ji Byon, Tae‐Yeon Kim, Chung-Suk Cho, Chan Yeob Yeun
  • A Survey on Poisoning Attacks Against Supervised Machine Learning (2022) - Wenjun Qiu
  • Machine Learning Security Against Data Poisoning: Are We There Yet? (2024) - Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
  • Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks (2023) - Yiwei Lu, Gautam Kamath, Yaoliang Yu
  • Poison is Not Traceless: Fully-Agnostic Detection of Poisoning Attacks (2023) - Xinglong Chang, Katharina Dost, Gillian Dobbie, Jörg Wicker
  • A Separation Result Between Data-oblivious and Data-aware Poisoning Attacks (2020) - Samuel Deng, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Abhradeep Thakurta
  • APBench: A Unified Benchmark for Availability Poisoning Attacks and Defenses (2023) - Tianrui Qin, Xitong Gao, Juanjuan Zhao, Kejiang Ye, Cheng‐Zhong Xu
  • Certified Defenses for Data Poisoning Attacks (2017) - Jacob Steinhardt, Pang Wei Koh, Percy Liang

Works That Cite This (20)

  • SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning (2023) - Harsh Chaudhari, Matthew Jagielski, Alina Oprea
  • Exploiting Defenses against GAN-Based Feature Inference Attacks in Federated Learning (2020) - Xianglong Zhang, Xinjian Luo
  • PreCurious: How Innocent Pre-Trained Language Models Turn into Privacy Traps (2024) - Ruixuan Liu, Tianhao Wang, Yang Cao, Li Xiong
  • Analyzing Inference Privacy Risks Through Gradients In Machine Learning (2024) - Zhuohang Li, Andrew Lowy, Jing Liu, Toshiaki Koike‐Akino, Kieran Parsons, Bradley Malin, Ye Wang
  • Data Isotopes for Data Provenance in DNNs (2023) - Emily Wenger, Xiuyu Li, Ben Y. Zhao, Vitaly Shmatikov
  • Formalizing and Estimating Distribution Inference Risks (2022) - Anshuman Suri, David Evans
  • Survey: Leakage and Privacy at Inference Time (2022) - Marija Jegorova, Chaitanya Kaul, Charlie Mayor, Alison Q. O’Neil, Alexander Weir, Roderick Murray‐Smith, Sotirios A. Tsaftaris
  • Distribution Inference Risks: Identifying and Mitigating Sources of Leakage (2023) - Valentin N. Hartmann, Léo Meynent, Maxime Peyrard, Dimitrios Dimitriadis, Shruti Tople, Robert West
  • SNAP: Efficient Extraction of Private Properties with Poisoning (2023) - Harsh Chaudhari, John Abascal, Alina Oprea, Matthew Jagielski, Florian Tramèr, Jonathan Ullman
  • SoK: Let the Privacy Games Begin! A Unified Treatment of Data Inference Privacy in Machine Learning (2023) - Ahmed Salem, Giovanni Cherubin, David Evans, Boris Köpf, Andrew Paverd, Anshuman Suri, Shruti Tople, Santiago Zanella-Béguelin