SEAT: Stable and Explainable Attention

The attention mechanism has become a standard fixture in many state-of-the-art natural language processing (NLP) models, not only because of its outstanding performance but also because it provides plausible innate explanations for neural architectures. However, recent studies show that attention is unstable against randomness and perturbations during training or testing, such …