Latent Space Interpretation for Stylistic Analysis and Explainable
Authorship Attribution
Recent state-of-the-art authorship attribution methods learn authorship representations of texts in a latent, non-interpretable space, hindering their usability in real-world applications. Our work proposes a novel approach to interpreting these learned embeddings by identifying representative points in the latent space and utilizing LLMs to generate informative natural language descriptions of …
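The abstract above is truncated, but the core idea it describes, picking representative points in a learned latent space and then asking an LLM to describe them, can be illustrated with a minimal sketch. The sketch below assumes (purely for illustration; the paper's actual procedure may differ) that representative points are obtained by K-means clustering of the authorship embeddings and that the document nearest each centroid serves as the exemplar handed to an LLM for description. The embeddings here are random stand-ins, not outputs of any real attribution model.

```python
# A minimal, assumption-laden sketch: cluster authorship embeddings and
# select one exemplar document per cluster as a "representative point".
# K-means and nearest-to-centroid selection are illustrative choices only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for embeddings produced by a trained attribution model:
# 1,000 documents, 256-dimensional latent vectors (synthetic data).
embeddings = rng.normal(size=(1000, 256))

# Partition the latent space; each centroid is a candidate representative point.
kmeans = KMeans(n_clusters=10, n_init="auto", random_state=0).fit(embeddings)

# For each centroid, find the closest document. In the described pipeline,
# such exemplars would then be passed to an LLM with a prompt asking for a
# natural-language description of the shared stylistic traits.
for i, centroid in enumerate(kmeans.cluster_centers_):
    distances = np.linalg.norm(embeddings - centroid, axis=1)
    exemplar_idx = int(np.argmin(distances))
    print(f"cluster {i}: representative document index = {exemplar_idx}")
```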