Type: Article
Publication Date: 2020-01-01
Citations: 5
DOI: https://doi.org/10.1109/access.2020.3029270
Recent research has shown that neural network (NN) classifiers are vulnerable to adversarial examples: inputs containing special perturbations that are imperceptible to human eyes yet can mislead NN classifiers. In this paper, we propose a practical black-box adversarial example generator, dubbed ManiGen. ManiGen does not require any knowledge of the inner state of the target classifier. It generates adversarial examples by searching along the manifold, which is a concise representation of the input data. Through an extensive set of experiments on different datasets, we show that (1) adversarial examples generated by ManiGen can mislead standalone classifiers as successfully as the state-of-the-art white-box generator, Carlini, and (2) adversarial examples generated by ManiGen can more effectively attack classifiers equipped with state-of-the-art defenses.
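The abstract's core idea, searching a learned manifold for label-flipping inputs using only black-box queries, can be illustrated with a minimal sketch. This is not ManiGen's actual algorithm; the `decode` and `classify` functions below are hypothetical stand-ins for a trained decoder (mapping latent codes onto the data manifold) and the target classifier's query interface.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical decoder: maps 2-D latent codes onto a manifold
# embedded in 4-D input space (a real system would use a trained
# autoencoder's decoder here).
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.0],
              [0.0, 0.5]])

def decode(z):
    return W @ z

# Toy black-box classifier: the attacker may only query labels,
# never gradients or internal activations.
w = np.array([1.0, 0.0, 0.0, 0.0])

def classify(x):
    return int(w @ x > 0)

def manifold_attack(z0, steps=500, sigma=0.2):
    """Random search in latent space for the closest on-manifold
    input whose black-box label differs from the original's."""
    x0 = decode(z0)
    y0 = classify(x0)
    best = None
    for _ in range(steps):
        z = z0 + sigma * rng.normal(size=z0.shape)  # perturb along manifold
        x = decode(z)
        if classify(x) != y0:  # label flipped: candidate adversarial example
            d = np.linalg.norm(x - x0)
            if best is None or d < best[0]:
                best = (d, x)  # keep the least perceptible flip
    return best

z0 = np.array([0.1, 0.0])
result = manifold_attack(z0)
```

Because candidates are decoded from latent codes, every perturbed input stays on the data manifold, which is what distinguishes this style of search from pixel-space attacks such as Carlini's white-box method.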