A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples
The pervasiveness of neural networks (NNs) in critical computer vision and image processing applications makes them very attractive targets for adversarial manipulation. A large body of existing research thoroughly investigates two broad categories of attacks targeting the integrity of NN models. The first category of attacks, commonly called Adversarial Examples, perturbs …