FoolHD: Fooling Speaker Identification by Highly Imperceptible Adversarial Disturbances
Speaker identification models are vulnerable to carefully designed adversarial perturbations of their input signals that induce misclassification. In this work, we propose a white-box steganography-inspired adversarial attack that generates imperceptible adversarial perturbations against a speaker identification model. Our approach, FoolHD, uses a Gated Convolutional Autoencoder that operates in the DCT …