On the Reversibility of Adversarial Attacks

Adversarial attacks modify images with perturbations that change the prediction of classifiers. These modified images, known as adversarial examples, expose the vulnerabilities of deep neural network classifiers. In this paper, we investigate the predictability of the mapping between the classes predicted for original images and for their corresponding adversarial examples. …
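To make the notion of a perturbation that flips a classifier's prediction concrete, here is a minimal sketch of a fast-gradient-sign (FGSM-style) step on a toy linear classifier. This is a generic illustration, not the method studied in the paper; the model, dimensions, and epsilon value are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def grad_wrt_input(W, x, label):
    # Gradient of cross-entropy loss w.r.t. the input x, for logits = W @ x.
    p = softmax(W @ x)
    p[label] -= 1.0          # dL/dlogits for the labeled class
    return W.T @ p           # chain rule back to the input

# Toy 3-class linear classifier on an 8-dimensional "image" vector.
W = rng.normal(size=(3, 8))
x = rng.normal(size=8)
y = int(np.argmax(W @ x))    # treat the clean prediction as the true class

eps = 0.5                    # hypothetical perturbation budget
x_adv = x + eps * np.sign(grad_wrt_input(W, x, y))  # FGSM step

print("clean:", y, "adversarial:", int(np.argmax(W @ x_adv)))
```

The sign of the loss gradient gives the perturbation direction that increases the loss fastest under an L-infinity budget, so every pixel (here, feature) moves by exactly `eps`.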