M-to-N Backdoor Paradigm: A Multi-Trigger and Multi-Target Attack to Deep Learning Models

Type: Article

Publication Date: 2024-01-01

Citations: 1

DOI: https://doi.org/10.1109/tcsvt.2024.3417410

Abstract

Deep neural networks (DNNs) are vulnerable to backdoor attacks, where a backdoored model behaves normally on clean inputs but exhibits attacker-specified behaviors on inputs containing triggers. Most previous backdoor attacks follow either the all-to-one or the all-to-all paradigm, allowing an attacker to manipulate an input to attack only a single target class. Moreover, both paradigms rely on a single trigger for backdoor activation, so the attack becomes ineffective if the trigger is destroyed. In light of the above, we propose a new M-to-N attack paradigm that allows an attacker to manipulate any input to attack N target classes, where the backdoor of each target class can be activated by any one of its M triggers. Our attack selects M clean images from each target class as triggers and leverages our proposed poisoned image generation framework to inject the triggers into clean images invisibly. Because the triggers share the same distribution as clean training images, the targeted DNN models generalize to the triggers during training, which enhances the effectiveness of our attack on multiple target classes. Extensive experimental results demonstrate that our new backdoor attack is highly effective in attacking multiple target classes and robust against pre-processing operations and existing defenses.
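The abstract describes the poisoning pipeline only at a high level, so the sketch below illustrates one way an M-to-N poisoned training set could be assembled: pick M clean images from each of the N target classes as triggers, blend a trigger into a small fraction of the training images, and relabel those images to the corresponding target class. The invisible injection step is approximated here by low-opacity alpha blending, since the paper's actual poisoned image generation framework is not detailed in this abstract; all names (inject_trigger, make_poisoned_dataset, blend_alpha, poison_rate) are illustrative, not the authors' API.

```python
import random

import numpy as np


def inject_trigger(clean_img, trigger_img, blend_alpha=0.1):
    """Embed a trigger into a clean image so the change stays subtle.

    Stand-in for the paper's poisoned image generation framework:
    low-opacity alpha blending is only an assumption made here.
    Both images are float arrays in [0, 1] with identical shapes.
    """
    mixed = (1.0 - blend_alpha) * clean_img + blend_alpha * trigger_img
    return np.clip(mixed, 0.0, 1.0)


def make_poisoned_dataset(images, labels, target_classes, m_triggers=3,
                          poison_rate=0.05, blend_alpha=0.1, seed=0):
    """M-to-N poisoning sketch: each of the N target classes gets M triggers
    drawn from its own clean images, and a small fraction of the training set
    is blended with a randomly chosen trigger and relabeled to that target.
    """
    rng = random.Random(seed)
    images, labels = list(images), list(labels)

    # Select M clean images from each target class to serve as its triggers.
    triggers = {
        t: rng.sample([img for img, y in zip(images, labels) if y == t],
                      m_triggers)
        for t in target_classes
    }

    # Poison a fraction of the training set.
    for i in rng.sample(range(len(images)), int(poison_rate * len(images))):
        t = rng.choice(target_classes)      # any of the N target classes
        trigger = rng.choice(triggers[t])   # any of its M triggers activates it
        images[i] = inject_trigger(images[i], trigger, blend_alpha)
        labels[i] = t                       # relabel to the chosen target class
    return images, labels
```

Per the abstract, at inference time blending any one of class t's M triggers into an arbitrary input should steer the backdoored model toward class t, while unmodified inputs are classified normally.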

Locations

  • IEEE Transactions on Circuits and Systems for Video Technology
  • arXiv (Cornell University)

Similar Works

  • M-to-N Backdoor Paradigm: A Stealthy and Fuzzy Attack to Deep Learning Models (2022). Linshan Hou, Zhongyun Hua, Yuhong Li, Leo Yu Zhang
  • An Effective and Resilient Backdoor Attack Framework against Deep Neural Networks and Vision Transformers (2024). Xueluan Gong, Bowei Tian, Meng Xue, Yuan Wu, Yanjiao Chen, Qian Wang
  • Hidden Trigger Backdoor Attacks (2020). Aniruddha Saha, Akshayvarun Subramanya, Hamed Pirsiavash
  • Hidden Trigger Backdoor Attacks (2019). Aniruddha Saha, Akshayvarun Subramanya, Hamed Pirsiavash
  • Poison Ink: Robust and Invisible Backdoor Attack (2022). Jie Zhang, Dongdong Chen, Qidong Huang, Jing Liao, Weiming Zhang, Huamin Feng, Gang Hua, Nenghai Yu
  • Universal Backdoor Attacks (2023). Benjamin Schneider, Nils Lukas, Florian Kerschbaum
  • NoiseAttack: An Evasive Sample-Specific Multi-Targeted Backdoor Attack Through White Gaussian Noise (2024). Abdullah Arafat Miah, Kaan Icer, Resit Sendag, Yu Bi
  • CASSOCK: Viable Backdoor Attacks against DNN in The Wall of Source-Specific Backdoor Defences (2022). Shang Wang, Yansong Gao, Anmin Fu, Zhi Zhang, Yuqing Zhang, Willy Susilo
  • Expose Before You Defend: Unifying and Enhancing Backdoor Defenses via Exposed Models (2024). Yige Li, Hanxun Huang, Jiaming Zhang, Xingjun Ma, Yu-Gang Jiang
  • Enhancing Clean Label Backdoor Attack with Two-phase Specific Triggers (2022). Nan Luo, Yuanzhang Li, Yajie Wang, Shangbo Wu, Yu-an Tan, Quanxin Zhang
  • Model-Contrastive Learning for Backdoor Defense (2022). Zhihao Yue, Jun Xia, Zhiwei Ling, Ming Hu, Ting Wang, Xian Wei, Mingsong Chen
  • Imperceptible Backdoor Attack: From Input Space to Feature Representation (2022). Nan Zhong, Zhenxing Qian, Xinpeng Zhang
  • Backdoor Attack with Sparse and Invisible Trigger (2023). Ying-Hua Gao, Yiming Li, Xueluan Gong, Shu-Tao Xia, Qian Wang
  • Poisoning-based Backdoor Attacks for Arbitrary Target Label with Positive Triggers (2024). Binxiao Huang, Jason Chun Lok, Chang Liu, Ngai Wong
  • Progressive Backdoor Erasing via connecting Backdoor and Adversarial Attacks (2023). Bingxu Mu, Zhenxing Niu, Le Wang, Xue Wang, Qiguang Miao, Rong Jin, Gang Hua
  • Backdoor Learning: A Survey (2020). Yiming Li, Yong Jiang, Zhifeng Li, Shu-Tao Xia
  • Progressive Backdoor Erasing via connecting Backdoor and Adversarial Attacks (2022). Bingxu Mu, Zhenxing Niu, Le Wang, Xue Wang, Rong Jin, Gang Hua
  • Backdoor Attack in the Physical World (2021). Yiming Li, Tongqing Zhai, Yong Jiang, Zhifeng Li, Shu-Tao Xia

Works That Cite This (0)
