Privacy-preserving Neural Representations of Text

This article deals with adversarial attacks against deep learning systems for Natural Language Processing (NLP), in the context of privacy protection. We study a specific type of attack: an attacker eavesdrops on the hidden representations of a neural text classifier and tries to recover information about the input text. Such …
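To make the threat model concrete, here is a minimal sketch (not the paper's method) of how such an eavesdropping attack is commonly set up: the attacker collects intercepted hidden representations together with a private attribute of the corresponding inputs, then trains a probe classifier to recover that attribute. All names, dimensions, and the synthetic data below are illustrative assumptions.

```python
# Hypothetical sketch of an eavesdropping attack on hidden representations.
# The synthetic "hidden" vectors stand in for intercepted encoder outputs
# z = encoder(x); the attribute leakage is simulated by shifting them.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

d, n = 64, 1000                                   # assumed representation size / sample count
private_attr = rng.integers(0, 2, size=n)         # e.g. a binary demographic label of the author
hidden = rng.normal(size=(n, d))
hidden += private_attr[:, None] * 0.5             # simulated leakage: representations shift with the attribute

z_train, z_test, y_train, y_test = train_test_split(
    hidden, private_attr, random_state=0
)

# The attacker's probe: a simple classifier trained on eavesdropped representations.
probe = LogisticRegression(max_iter=1000).fit(z_train, y_train)
print(f"private attribute recovered with accuracy {probe.score(z_test, y_test):.2f}")
```

If the probe's accuracy is well above chance, the hidden representations leak the private attribute; privacy-preserving representations aim to drive such probes back toward chance performance.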