Attention Is All You Need In Speech Separation

Recurrent Neural Networks (RNNs) have long been the dominant architecture in sequence-to-sequence learning. RNNs, however, are inherently sequential models that do not allow parallelization of their computations. Transformers are emerging as a natural alternative to standard RNNs, replacing recurrent computations with a multi-head attention mechanism. In this paper, we propose the …
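
To make the contrast concrete, here is a minimal sketch (assuming PyTorch; tensor shapes and hyperparameters are illustrative, and this is not the paper's actual model) of a recurrent layer versus a multi-head self-attention layer applied to the same sequence. The recurrent layer must step through time one frame at a time, while self-attention relates all time steps in a single batched operation that can be parallelized.

```python
# Minimal sketch contrasting recurrence with multi-head self-attention.
# Assumes PyTorch; shapes and hyperparameters are illustrative only.
import torch
import torch.nn as nn

batch, time_steps, feat_dim = 2, 100, 64
x = torch.randn(batch, time_steps, feat_dim)

# Recurrent baseline: the hidden state at step t depends on step t-1,
# so computation along the time dimension is inherently sequential.
rnn = nn.LSTM(input_size=feat_dim, hidden_size=feat_dim, batch_first=True)
rnn_out, _ = rnn(x)  # (batch, time_steps, feat_dim)

# Multi-head self-attention: every time step attends to every other step
# via batched matrix multiplications, so all steps are computed in parallel.
mha = nn.MultiheadAttention(embed_dim=feat_dim, num_heads=4, batch_first=True)
attn_out, attn_weights = mha(x, x, x)  # (batch, time_steps, feat_dim)

print(rnn_out.shape, attn_out.shape)
```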