Semantic Adversarial Examples

Deep neural networks are known to be vulnerable to adversarial examples, i.e., images that are maliciously perturbed to fool the model. Generating adversarial examples has been mostly limited to finding small perturbations that maximize the model prediction error. Such images, however, contain artificial perturbations that make them somewhat distinguishable from …
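The small-perturbation attacks described above are typically gradient-based: the input is nudged in the direction that increases the model's loss. A minimal sketch of this idea (not the paper's method) is the fast gradient sign method (FGSM) applied to a hypothetical toy linear classifier, where the gradient with respect to the input can be computed in closed form:

```python
import numpy as np

# Hypothetical toy setup: a fixed linear "model" standing in for a network.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))  # 3 classes, 4 input features
x = rng.normal(size=4)       # clean input
y = 0                        # assumed true label

def loss_and_grad(x, y):
    # Softmax cross-entropy loss and its gradient w.r.t. the input x.
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    loss = -np.log(p[y])
    # d(loss)/d(logits) = p - one_hot(y); chain rule through W @ x.
    dlogits = p.copy()
    dlogits[y] -= 1.0
    grad_x = W.T @ dlogits
    return loss, grad_x

# FGSM: a single signed-gradient step of size eps increases the loss.
eps = 0.5
loss_clean, g = loss_and_grad(x, y)
x_adv = x + eps * np.sign(g)
loss_adv, _ = loss_and_grad(x_adv, y)
print(loss_adv > loss_clean)
```

Because the perturbation follows the loss gradient, the adversarial input is only a small, artificial deviation from the original, which is exactly the kind of attack the semantic approach contrasts with.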