The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks

This paper studies model-inversion attacks, in which access to a model is abused to infer information about the training data. Since their first introduction by~\cite{fredrikson2014privacy}, such attacks have raised serious concerns, given that training data usually contain privacy-sensitive information. Thus far, successful model-inversion attacks have only been demonstrated …
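To make the threat model concrete, below is a minimal sketch of the classic gradient-based model-inversion idea: with white-box access to a trained classifier, the attacker optimizes an input to maximize the model's confidence in a target class, thereby recovering a representative of the corresponding training data. The `invert_class` helper, the PyTorch framework choice, and all hyperparameters here are assumptions for illustration; the paper's generative variant additionally constrains the search with a GAN trained on public data, which is not shown in this sketch.

```python
# Hypothetical sketch of a plain gradient-based model-inversion attack.
# Assumes `model` is a trained PyTorch image classifier; the GAN prior
# used by the generative attack in the paper is omitted.
import torch
import torch.nn.functional as F

def invert_class(model, target_class, input_shape=(1, 3, 64, 64),
                 steps=500, lr=0.1):
    """Recover a representative input for `target_class` by gradient
    ascent on the input, maximizing the model's confidence."""
    model.eval()
    x = torch.zeros(input_shape, requires_grad=True)  # start from a blank image
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        # Identity loss: negative log-likelihood of the target class
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        loss.backward()
        optimizer.step()
        x.data.clamp_(0.0, 1.0)  # keep pixels in a valid range
    return x.detach()
```

Without any prior on what natural inputs look like, this optimization tends to produce noisy, uninformative reconstructions for deep networks, which is the gap the generative approach in this paper targets.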