SentiNet: Detecting Physical Attacks Against Deep Learning Systems

Type: Article

Publication Date: 2018-12-04

Citations: 94

Locations

  • arXiv (Cornell University)

Works That Cite This (66)

  • TrojanZoo: Towards Unified, Holistic, and Practical Evaluation of Neural Backdoors (2022). Ren Pang, Zheng Zhang, Xiangshan Gao, Zhaohan Xi, Shouling Ji, Peng Cheng, Xiapu Luo, Ting Wang
  • Detecting AI Trojans Using Meta Neural Analysis (2021). Xiaojun Xu, Qi Wang, Huichen Li, Nikita Borisov, Carl A. Gunter, Bo Li
  • AdVersarial: Perceptual Ad Blocking meets Adversarial Machine Learning (2018). Florian Tramèr, Pascal Dupré, Gili Rusak, Giancarlo Pellegrino, Dan Boneh
  • AdVersarial: Perceptual Ad Blocking meets Adversarial Machine Learning (2019). Florian Tramèr, Pascal Dupré, Gili Rusak, Giancarlo Pellegrino, Dan Boneh
  • CleaNN (2020). Mojan Javaheripi, Mohammad Samragh, Gregory Fields, Tara Javidi, Farinaz Koushanfar
  • Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification (2021). Siyuan Cheng, Yingqi Liu, Shiqing Ma, Xiangyu Zhang
  • Exposing Backdoors in Robust Machine Learning Models (2020). Ezekiel Soremekun, Sakshi Udeshi, Sudipta Chattopadhyay, Andreas Zeller
  • Transferable Graph Backdoor Attack (2022). Shuiqiao Yang, Bao Gia Doan, Paul Montague, Olivier De Vel, Tamas Abraham, Seyit Camtepe, Damith C. Ranasinghe, Salil S. Kanhere
  • Revealing Perceptible Backdoors, without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic (2019). Zhen Xiang, David J. Miller, George Kesidis
  • DBIA: Data-free Backdoor Injection Attack against Transformer Networks (2021). Peizhuo Lv, Hualong Ma, Jiachen Zhou, Ruigang Liang, Kai Chen, Shengzhi Zhang, Yunfei Yang