A simple way to make neural networks robust against diverse image corruptions

Type: Preprint

Publication Date: 2020-01-16

Citations: 31

Locations

  • arXiv (Cornell University)

Similar Works

  • A simple way to make neural networks robust against diverse image corruptions (2020): Evgenia Rusak, Lukas Schott, R. Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, Wieland Brendel
  • Increasing the robustness of DNNs against image corruptions by playing the Game of Noise (2020): Evgenia Rusak, Lukas Schott, R. Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, Wieland Brendel
  • Defending Against Image Corruptions Through Adversarial Augmentations (2021): Dan A. Calian, Florian Stimberg, Olivia Wiles, Sylvestre-Alvise Rebuffi, András György, Timothy Mann, Sven Gowal
  • Benchmarking Neural Network Robustness to Common Corruptions and Surface Variations (2018): Dan Hendrycks, Thomas G. Dietterich
  • Beyond Pretrained Features: Noisy Image Modeling Provides Adversarial Defense (2023): Zunzhi You, Daochang Liu, Chang Xu
  • PRIME: A few primitives can boost robustness to common corruptions (2021): Apostolos Modas, Rahul Rade, Guillermo Ortiz-Jiménez, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
  • MNIST-C: A Robustness Benchmark for Computer Vision (2019): Norman Mu, Justin Gilmer
  • Improving robustness against common corruptions by covariate shift adaptation (2020): Steffen Schneider, Evgenia Rusak, Luisa Eck, Oliver Bringmann, Wieland Brendel, Matthias Bethge
  • Frequency-Based Vulnerability Analysis of Deep Learning Models against Image Corruptions (2023): Harshitha Machiraju, Michael H. Herzog, Pascal Frossard
  • Improving robustness against common corruptions with frequency biased models (2021): Tonmoy Saikia, Cordelia Schmid, Thomas Brox
  • On the effectiveness of adversarial training against common corruptions (2021): Klim Kireev, Maksym Andriushchenko, Nicolas Flammarion
  • Improving robustness to corruptions with multiplicative weight perturbations (2024): Trung Trinh, Markus Heinonen, Luigi Acerbi, Samuel Kaski
  • Diverse Gaussian Noise Consistency Regularization for Robustness and Uncertainty Calibration (2023): Theodoros Tsiligkaridis, Athanasios Tsiligkaridis
  • Towards Robust Rain Removal Against Adversarial Attacks: A Comprehensive Benchmark Analysis and Beyond (2022): Yi Yu, Wenhan Yang, Yap‐Peng Tan, Alex C. Kot

Cited by (28)

  • Multimodal Co-learning: Challenges, applications with datasets, recent advances and future directions (2021): Anil Rahate, Rahee Walambe, Sheela Ramanna, Ketan Kotecha
  • Learning perturbation sets for robust machine learning (2020): Eric Wong, J. Zico Kolter
  • Prepare for the Worst: Generalizing across Domain Shifts with Adversarial Batch Normalization (2021): Manli Shu, Zuxuan Wu, Micah Goldblum, Tom Goldstein
  • Untapped Potential of Data Augmentation: A Domain Generalization Viewpoint (2020): Vihari Piratla, Shiv Shankar
  • Improved Handling of Motion Blur in Online Object Detection (2020): Mohamed Sayed, Gabriel Brostow
  • A Closer Look at the Robustness of Vision-and-Language Pre-trained Models (2020): Linjie Li, Zhe Gan, Jingjing Liu
  • The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization (2021): Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Fengqiu Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo
  • Adversarial momentum-contrastive pre-training (2022): Cong Xu, Dan Li, Min Yang
  • Bio-inspired Robustness: A Review (2021): Harshitha Machiraju, Oh-Hyeon Choung, Pascal Frossard, Michael H. Herzog
  • Improving Model Robustness by Adaptively Correcting Perturbation Levels with Active Queries (2021): Kun-Peng Ning, Lue Tao, Songcan Chen, Sheng-Jun Huang
  • Utilizing a null class to restrict decision spaces and defend against neural network adversarial attacks (2020): Matthew J. Roos
  • Using Learning Dynamics to Explore the Role of Implicit Regularization in Adversarial Examples (2020): Josue Ortega, Yilong Ju, Ryan Pyle, Ankit Patel
  • CrossNorm and SelfNorm for Generalization under Distribution Shifts (2021): Zhiqiang Tang, Yunhe Gao, Yi Zhu, Zhi Zhang, Mu Li, Dimitris Metaxas
  • Test-Time Adaptation to Distribution Shift by Confidence Maximization and Input Transformation (2021): Chaithanya Kumar Mummadi, Robin Hutmacher, Kilian Rambach, Evgeny Levinkov, Thomas Brox, Jan Hendrik Metzen
  • Impact of Aliasing on Generalization in Deep Convolutional Networks (2021): Cristina Nader Vasconcelos, Hugo Larochelle, Vincent Dumoulin, Rob Romijnders, Nicolas Le Roux, Ross Goroshin
  • SelfNorm and CrossNorm for Out-of-Distribution Robustness (2021): Zhiqiang Tang, Yunhe Gao, Yi Zhu, Zhi Zhang, Mu Li, Dimitris Metaxas
  • Rethinking the Design Principles of Robust Vision Transformer (2021): Xiaofeng Mao, Gege Qi, Yuefeng Chen, Xiaodan Li, Shaokai Ye, Yuan He, Hui Xue
  • An Effective Anti-Aliasing Approach for Residual Networks (2020): Cristina Nader Vasconcelos, Hugo Larochelle, Vincent Dumoulin, Nicolas Le Roux, Ross Goroshin
  • Defending Against Image Corruptions Through Adversarial Augmentations (2021): Dan A. Calian, Florian Stimberg, Olivia Wiles, Sylvestre-Alvise Rebuffi, András György, Timothy Mann, Sven Gowal
  • Theoretical Study of Random Noise Defense against Query-Based Black-Box Attacks (2021): Zeyu Qin, Yanbo Fan, Hongyuan Zha, Baoyuan Wu
  • Combining Different V1 Brain Model Variants to Improve Robustness to Image Corruptions in CNNs (2021): Avinash Baidya, Joel Dapello, James J. DiCarlo, Tiago Marques
  • Nuisance-Label Supervision: Robustness Improvement by Free Labels (2021): Xinyue Wei, Weichao Qiu, Yi Zhang, Zihao Xiao, Alan Yuille
  • Robust Image Classification Using a Low-Pass Activation Function and DCT Augmentation (2021): Md. Tahmid Hossain, Shyh Wei Teng, Ferdous Sohel, Guojun Lu
  • On the effectiveness of adversarial training against common corruptions (2021): Klim Kireev, Maksym Andriushchenko, Nicolas Flammarion
  • Multispectral Object Detection with Deep Learning (2021): Osman Gani, Somenath Kuiry, Alaka Das, Mita Nasipuri, Nibaran Das
  • Deepfake Forensics via an Adversarial Game (2022): Wang Zhi, Yiwen Guo, Wangmeng Zuo
  • On Interaction Between Augmentations and Corruptions in Natural Corruption Robustness (2021): Eric Mintun, Alexander Kirillov, Saining Xie
  • Improving robustness against common corruptions with frequency biased models (2021): Tonmoy Saikia, Cordelia Schmid, Thomas Brox

Citing (31)

  • ImageNet Large Scale Visual Recognition Challenge (2015): Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein
  • Deep Residual Learning for Image Recognition (2016): Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
  • Achieving Human Parity in Conversational Speech Recognition (2016): Wayne Xiong, Jasha Droppo, Xuedong Huang, Frank Seide, Mike Seltzer, Andreas Stolcke, Dong Yu, Geoffrey Zweig
  • Universal Adversarial Perturbations (2017): Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, Pascal Frossard
  • Quality Resilient Deep Neural Networks (2017): Samuel Dodge, Lina J. Karam
  • A Study and Comparison of Human and Deep Learning Recognition Performance under Visual Distortions (2017): Samuel Dodge, Lina J. Karam
  • Machine Learning as an Adversarial Service: Learning Black-Box Adversarial Examples (2017): Jamie Hayes, George Danezis
  • A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations (2017): Logan Engstrom, Brandon Tran, Dimitris Tsipras, Ludwig Schmidt, Aleksander Mądry
  • AutoAugment: Learning Augmentation Policies from Data (2018): Ekin D. Cubuk, Barret Zoph, Dandelion Mané, Vijay Vasudevan, Quoc V. Le
  • Evaluating and Understanding the Robustness of Adversarial Logit Pairing (2018): Logan Engstrom, Andrew Ilyas, Anish Athalye
  • Playing the Game of Universal Adversarial Perturbations (2018): Julien Pérolat, Mateusz Malinowski, Bilal Piot, Olivier Pietquin
  • ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness (2018): Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, Wieland Brendel
  • Quantifying Perceptual Distortion of Adversarial Examples (2019): Matt Jordan, Naren Sarayu Manoj, Surbhi Goel, Alexandros G. Dimakis
  • Transfer of Adversarial Robustness Between Perturbation Types (2019): Daniel Kang, Yi Sun, T. B. Brown, Dan Hendrycks, Jacob Steinhardt
  • Improving Robustness Without Sacrificing Accuracy with Patch Gaussian Augmentation (2019): Raphael Gontijo Lopes, Dong Yin, Ben Poole, Justin Gilmer, Ekin D. Cubuk
  • Adversarial Examples Are a Natural Consequence of Test Error in Noise (2019): Nic Ford, Justin Gilmer, Nicolas Carlini, Dogus Cubuk
  • Making Convolutional Networks Shift-Invariant Again (2019): Richard Zhang
  • Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming (2019): Claudio Michaelis, Benjamin Mitzkus, Robert Geirhos, Evgenia Rusak, Oliver Bringmann, Alexander S. Ecker, Matthias Bethge, Wieland Brendel
  • Feature Denoising for Improving Adversarial Robustness (2019): Cihang Xie, Yuxin Wu, Laurens van der Maaten, Alan Yuille, Kaiming He
  • Robustness of classifiers: from adversarial to random noise (2016): Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
  • Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses (2019): Jérôme Rony, Luiz G. Hafemann, Luiz S. Oliveira, Ismail Ben Ayed, Robert Sabourin, Éric Granger
  • Exploring the Limits of Weakly Supervised Pretraining (2018): Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, Laurens van der Maaten
  • Understanding how image quality affects deep neural networks (2016): Samuel Dodge, Lina J. Karam
  • Towards Deep Learning Models Resistant to Adversarial Attacks (2018): Aleksander Mądry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
  • Adversarial training for free (2019): Ali Shafahi, Mahyar Najibi, Mohammad Ghiasi, Zheng Xu, John P. Dickerson, Christoph Studer, Larry S. Davis, Gavin Taylor, Tom Goldstein
  • A Fourier Perspective on Model Robustness in Computer Vision (2019): Dong Yin, Raphael Gontijo Lopes, Jonathon Shlens, Ekin D. Cubuk, Justin Gilmer
  • Defending Against Universal Perturbations With Shared Adversarial Training (2019): Chaithanya Kumar Mummadi, Thomas Brox, Jan Hendrik Metzen
  • Universal Adversarial Training (2020): Ali Shafahi, Mahyar Najibi, Zheng Xu, J.W.T. Dickerson, Larry S. Davis, Tom Goldstein
  • Self-Training With Noisy Student Improves ImageNet Classification (2020): Qizhe Xie, Minh-Thang Luong, Eduard Hovy, Quoc V. Le
  • Fast Differentiable Clipping-Aware Normalization and Rescaling (2020): Jonas Rauber, Matthias Bethge
  • Achieving Generalizable Robustness of Deep Neural Networks by Stability Training (2019): Jan Laermann, Wojciech Samek, Nils Strodthoff