Towards Adversarially Robust Deep Image Denoising

This work systematically investigates the adversarial robustness of deep image denoisers (DIDs), i.e., how well DIDs can recover the ground truth from noisy observations degraded by adversarial perturbations. First, to evaluate DIDs' robustness, we propose a novel adversarial attack, namely the Observation-based Zero-mean Attack (OBSATK), to craft adversarial zero-mean perturbations on …
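To make the attack idea concrete, below is a minimal sketch of an observation-based, zero-mean, PGD-style perturbation of the noisy input. All names, the L∞ budget, and the projection order are assumptions for illustration, not the paper's reference implementation (OBSATK itself may use a different norm constraint and optimizer):

```python
import torch
import torch.nn.functional as F

def obs_zero_mean_attack(denoiser, noisy, clean, eps=0.05, alpha=0.01, steps=10):
    """Hypothetical sketch: find a zero-mean, norm-bounded perturbation of the
    noisy observation that maximizes the denoiser's reconstruction error."""
    delta = torch.zeros_like(noisy, requires_grad=True)
    for _ in range(steps):
        # Reconstruction error of the denoiser on the perturbed observation.
        loss = F.mse_loss(denoiser(noisy + delta), clean)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()                       # gradient ascent step
            delta.clamp_(-eps, eps)                            # L-infinity budget (assumed)
            delta -= delta.mean(dim=(-2, -1), keepdim=True)    # project onto the zero-mean set
    return (noisy + delta).detach()
```

Re-centering after the clamp keeps the perturbation zero-mean but may slightly exceed the element-wise budget; a stricter alternating projection could be used instead in a real implementation.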