Releasing Malevolence from Benevolence: The Menace of Benign Data on Machine Unlearning

Type: Preprint

Publication Date: 2024-07-06

Citations: 0

DOI: https://doi.org/10.48550/arxiv.2407.05112

Abstract

Machine learning models trained on vast amounts of real or synthetic data often achieve outstanding predictive performance across various domains. However, this utility comes with growing privacy concerns, as the training data may include sensitive information. To address these concerns, machine unlearning has been proposed to erase specific data samples from models. While some unlearning techniques remove data efficiently and at low cost, recent research highlights vulnerabilities in which malicious users request unlearning on manipulated data to compromise the model. Although such attacks are effective, the perturbed data differ from the original training data and therefore fail hash verification. Existing attacks on machine unlearning also suffer from practical limitations, requiring substantial additional knowledge and resources. To fill these gaps, we introduce the Unlearning Usability Attack. This model-agnostic, unlearning-agnostic, and budget-friendly attack distills data-distribution information into a small set of benign data. These data are identified as benign by automatic poisoning detection tools because of their positive impact on model training. Yet while benign for machine learning, unlearning them strips a disproportionate amount of learned information from the model. Our evaluation demonstrates that unlearning this benign data, comprising no more than 1% of the total training data, can reduce model accuracy by up to 50%. Furthermore, our findings show that well-prepared benign data poses challenges for recent unlearning techniques, as erasing these synthetic instances demands more resources than erasing regular data. These insights underscore the need for future research to reconsider "data poisoning" in the context of machine unlearning.
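
The pipeline sketched in the abstract (distill the training distribution into a tiny benign-looking set, let it join training, then request its unlearning) can be made concrete with a toy example. The sketch below is illustrative only and uses stand-ins not taken from the paper: per-class k-means centroids in place of the paper's dataset-distillation step, a linear softmax classifier as the victim model, and naive gradient ascent on the forget set as the approximate unlearning routine. It will not reproduce the reported numbers; it only shows where each step of the attack sits.

```python
# Toy sketch of the Unlearning Usability Attack pipeline from the abstract.
# Stand-ins (assumptions, not the paper's methods): k-means centroids for
# dataset distillation, a linear classifier for the victim model, and
# gradient ascent on the forget set for approximate unlearning.
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X = (X / 16.0).astype(np.float32)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
n_classes = len(np.unique(y))

# Step 1: "distill" the training distribution into a tiny benign-looking set
# (~1% of the training data, the budget quoted in the abstract).
per_class = max(1, int(0.01 * len(X_tr)) // n_classes)
X_syn, y_syn = [], []
for c in range(n_classes):
    km = KMeans(n_clusters=per_class, n_init=10, random_state=0)
    km.fit(X_tr[y_tr == c])
    X_syn.append(km.cluster_centers_.astype(np.float32))
    y_syn += [c] * per_class
X_syn, y_syn = np.vstack(X_syn), np.array(y_syn)

def train(Xs, ys, epochs=200, lr=0.5):
    """Full-batch SGD training of a linear softmax classifier."""
    w = torch.zeros(Xs.shape[1], n_classes, requires_grad=True)
    b = torch.zeros(n_classes, requires_grad=True)
    opt = torch.optim.SGD([w, b], lr=lr)
    Xt, yt = torch.tensor(Xs), torch.tensor(ys)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(Xt @ w + b, yt).backward()
        opt.step()
    return w, b

def accuracy(w, b, Xs, ys):
    with torch.no_grad():
        pred = (torch.tensor(Xs) @ w + b).argmax(1).numpy()
    return (pred == ys).mean()

# Step 2: the victim trains on real data plus the distilled points; since the
# distilled points help training, a poisoning detector would flag nothing.
X_full = np.vstack([X_tr, X_syn])
y_full = np.concatenate([y_tr, y_syn])
w, b = train(X_full, y_full)
print("before unlearning:", accuracy(w, b, X_te, y_te))

# Step 3: the attacker requests unlearning of the distilled points; here a
# naive gradient-ascent routine erases their influence from the model.
Xf, yf = torch.tensor(X_syn), torch.tensor(y_syn)
opt = torch.optim.SGD([w, b], lr=0.5)
for _ in range(50):
    opt.zero_grad()
    (-F.cross_entropy(Xf @ w + b, yf)).backward()  # ascent = negated loss
    opt.step()
print("after unlearning:", accuracy(w, b, X_te, y_te))
```

The point the abstract makes is that the distilled set passes automated poisoning checks precisely because it helps training, so the damage only surfaces once a routine unlearning request arrives and the unlearning method tries to erase points that concentrate the distribution's information.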

Locations

  • arXiv (Cornell University)

Similar Works

  • Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearning (2024). Hongsheng Hu, Shuo Wang, Tian Dong, Minhui Xue
  • Threats, Attacks, and Defenses in Machine Unlearning: A Survey (2024). Ziyao Liu, Huanyi Ye, Chen Chen, Kwok‐Yan Lam
  • Unlearn and Burn: Adversarial Machine Unlearning Requests Destroy Model Accuracy (2024). Yangsibo Huang, Daogao Liu, Lynn Chua, Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Milad Nasr, Amer Sinha, Chiyuan Zhang
  • Verification of Machine Unlearning is Fragile (2024). Binchi Zhang, Zihan Chen, Cong Shen, Jundong Li
  • Machine Unlearning: Solutions and Challenges (2023). Jie Xu, Zihan Wu, Cong Wang, Xiaohua Jia
  • Machine Unlearning: Solutions and Challenges (2024). Jie Xu, Zihan Wu, Cong Wang, Xiaohua Jia
  • Hard to Forget: Poisoning Attacks on Certified Machine Unlearning (2022). Neil G. Marchant, Benjamin I. P. Rubinstein, Scott Alfeld
  • Hard to Forget: Poisoning Attacks on Certified Machine Unlearning (2021). Neil G. Marchant, Benjamin I. P. Rubinstein, Scott Alfeld
  • Exploring the Landscape of Machine Unlearning: A Comprehensive Survey and Taxonomy (2024). Thanveer Shaik, Xiaohui Tao, Haoran Xie, Lin Li, Xiaofeng Zhu, Qing Li
  • Exploring the Landscape of Machine Unlearning: A Comprehensive Survey and Taxonomy (2023). Thanveer Shaik, Xiaohui Tao, Haoran Xie, Lin Li, Xiaofeng Zhu, Qing Li
  • A Duty to Forget, a Right to be Assured? Exposing Vulnerabilities in Machine Unlearning Services (2023). Hongsheng Hu, Shuo Wang, Jiamin Chang, Haonan Zhong, Ruoxi Sun, Shuang Hao, Haojin Zhu, Minhui Xue
  • A Duty to Forget, a Right to be Assured? Exposing Vulnerabilities in Machine Unlearning Services (2024). Hongsheng Hu, Shuo Wang, Jiamin Chang, Haonan Zhong, Ruoxi Sun, Shuang Hao, Haojin Zhu, Minhui Xue
  • Learn to Unlearn: A Survey on Machine Unlearning (2023). Youyang Qu, Xin Yuan, Ming Ding, Wei Ni, Thierry Rakotoarivelo, David J. Smith
  • Corrective Machine Unlearning (2024). Shashwat Goel, Ameya Prabhu, Philip H. S. Torr, Ponnurangam Kumaraguru, Amartya Sanyal
  • A Survey on Machine Unlearning: Techniques and New Emerged Privacy Risks (2024). Hengzhu Liu, Ping Xiong, Tianqing Zhu, Philip S. Yu
  • Adversarial Machine Unlearning (2024). Zonglin Di, Sixie Yu, Yevgeniy Vorobeychik, Yang Liu
  • A Survey of Machine Unlearning (2022). Thành Tâm Nguyên, Thanh Trung Huynh, Phi Le Nguyen, Alan Wee‐Chung Liew, Hongzhi Yin, Quoc Viet Hung Nguyen
  • When Machine Unlearning Jeopardizes Privacy (2021). Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang
  • Machine Learning Security against Data Poisoning: Are We There Yet? (2022). Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
  • Machine Learning Security Against Data Poisoning: Are We There Yet? (2024). Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

Works That Cite This (0)

Works Cited by This (0)
