Michael Moore

Commonly Cited References
Each entry: Title (Year). Authors. Number of times referenced.

Early Methods for Detecting Adversarial Images (2017). Dan Hendrycks, Kevin Gimpel. Referenced 2 times.
Identifying ECUs Using Inimitable Characteristics of Signals in Controller Area Networks (2018). Wonsuk Choi, Hyo Jin Jo, Samuel Woo, Ji Young Chun, Jooyoung Park, Dong Hoon Lee. Referenced 2 times.
Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization (2017). Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil Lupu, Fabio Roli. Referenced 2 times.
Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks (2016). Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, Ananthram Swami. Referenced 2 times.
Towards Deep Learning Models Resistant to Adversarial Attacks (2018). Aleksander Mądry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu. Referenced 2 times.
Isolation Forest (2008). Fei Tony Liu, Kai Ming Ting, Zhi-Hua Zhou. Referenced 2 times.
Is feature selection secure against training data poisoning? (2018). Huang Xiao, Battista Biggio, Gavin Brown, Giorgio Fumera, Claudia Eckert, Fabio Roli. Referenced 2 times.
Poisoning Attacks against Support Vector Machines (2012). Battista Biggio, Blaine Nelson, Pavel Laskov. Referenced 2 times.
Scikit-learn: Machine Learning in Python (2012). Fabián Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron J. Weiss, Vincent Dubourg, et al. Referenced 2 times.
An introduction to diffusion maps (2008). J. de la Porte, B. M. Herbst, Willy Hereman, Stéfan van der Walt. Referenced 2 times.
Practical Black-Box Attacks against Machine Learning (2016). Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, Ananthram Swami. Referenced 1 time.
Adversarial Machine Learning at Scale (2016). Alexey Kurakin, Ian Goodfellow, Samy Bengio. Referenced 1 time.
Wild patterns: Ten years after the rise of adversarial machine learning (2018). Battista Biggio, Fabio Roli. Referenced 1 time.
Detecting Adversarial Samples from Artifacts (2017). Reuben Feinman, Ryan R. Curtin, Saurabh Shintre, Andrew B. Gardner. Referenced 1 time.
Towards Deep Learning Models Resistant to Adversarial Attacks (2017). Aleksander Mądry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu. Referenced 1 time.
Adversarial and Clean Data Are Not Twins (2017). Zhitao Gong, Wenlu Wang, Wei-Shinn Ku. Referenced 1 time.
Explaining and Harnessing Adversarial Examples (2014). Ian Goodfellow, Jonathon Shlens, Christian Szegedy. Referenced 1 time.
Evasion Attacks against Machine Learning at Test Time (2013). Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, Fabio Roli. Referenced 1 time.
Is data clustering in adversarial settings secure? (2013). Battista Biggio, Ignazio Pillai, Samuel Rota Bulò, Davide Ariu, Marcello Pelillo, Fabio Roli. Referenced 1 time.
Poisoning Attacks against Support Vector Machines (2012). Battista Biggio, Blaine Nelson, Pavel Laskov. Referenced 1 time.
Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples (2016). Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, Ananthram Swami. Referenced 1 time.
Unifying Adversarial Training Algorithms with Flexible Deep Data Gradient Regularization (2016). Alexander G. Ororbia, C. Lee Giles, Daniel Kifer. Referenced 1 time.
Detecting Adversarial Samples from Artifacts (2017). Reuben Feinman, Ryan R. Curtin, Saurabh Shintre, Andrew B. Gardner. Referenced 1 time.
Adversarial and Clean Data Are Not Twins (2023). Zhitao Gong, Wenlu Wang. Referenced 1 time.
Early Methods for Detecting Adversarial Images (2016). Dan Hendrycks, Kevin Gimpel. Referenced 1 time.
Extending Defensive Distillation (2017). Nicolas Papernot, Patrick McDaniel. Referenced 1 time.
Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning (2018). Battista Biggio, Fabio Roli. Referenced 1 time.