Exploiting Vulnerabilities of Deep Neural Networks for Privacy Protection

Adversarial perturbations can be added to images to protect their content from unwanted inferences. These perturbations may, however, be ineffective against classifiers that were not seen during the generation of the perturbation, or against defenses based on re-quantization, median filtering, or JPEG compression. To address these limitations, we present an …
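The input-transform defenses the abstract names (re-quantization and median filtering; JPEG compression works analogously but needs a codec) can be sketched as simple image operations. This is a minimal NumPy illustration of what such defenses do, not the paper's implementation; the function names and default parameters are assumptions.

```python
import numpy as np

def requantize(img: np.ndarray, levels: int = 16) -> np.ndarray:
    # Map each 8-bit value to the centre of one of `levels` uniform bins,
    # discarding the low-amplitude structure an adversarial perturbation relies on.
    step = 256.0 / levels
    out = np.floor(img.astype(np.float32) / step) * step + step / 2.0
    return np.clip(out, 0, 255).astype(np.uint8)

def median_filter(img: np.ndarray, k: int = 3) -> np.ndarray:
    # Naive k x k median filter over an H x W x C image,
    # with reflection padding at the borders.
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    windows = np.stack(
        [padded[i:i + img.shape[0], j:j + img.shape[1]]
         for i in range(k) for j in range(k)],
        axis=0,
    )
    return np.median(windows, axis=0).astype(np.uint8)

if __name__ == "__main__":
    noisy = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
    print(requantize(noisy).shape, median_filter(noisy).shape)
```

Both transforms preserve the image shape while removing fine-grained pixel detail, which is why a perturbation generated without anticipating them can lose its effect.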