Black-Box Attacks on Image Activity Prediction and its Natural Language Explanations
Explainable AI (XAI) methods aim to describe the decision process of deep neural networks. Early XAI methods produced visual explanations, whereas more recent techniques generate multimodal explanations that include textual information and visual representations. Visual XAI methods have been shown to be vulnerable to white-box and gray-box adversarial attacks, with …