Semantic Mask for Transformer based End-to-End Speech Recognition

Type: Preprint

Publication Date: 2019-01-01

Citations: 23

DOI: https://doi.org/10.48550/arxiv.1912.03010

Locations

  • arXiv (Cornell University)
  • DataCite API

Similar Works

  • Semantic Mask for Transformer Based End-to-End Speech Recognition (2020): Chengyi Wang, Yu Wu, Yujiao Du, Jinyu Li, Shujie Liu, Liang Lu, Shuo Ren, Guoli Ye, Sheng Zhao, Ming Zhou
  • Effective Decoder Masking for Transformer Based End-to-End Speech Recognition (2020): Shi-Yan Weng, Berlin Chen
  • Pre-Training Transformer Decoder for End-to-End ASR Model with Unpaired Speech Data (2022): Junyi Ao, Ziqiang Zhang, Long Zhou, Shujie Liu, Haizhou Li, Tom Ko, Li-Rong Dai, Jinyu Li, Yao Qian, Furu Wei
  • Correction of Automatic Speech Recognition with Transformer Sequence-to-Sequence Model (2019): Oleksii Hrinchuk, Mariya Popova, Boris Ginsburg
  • Correction of Automatic Speech Recognition with Transformer Sequence-to-Sequence Model (2020): Oleksii Hrinchuk, Mariya Popova, Boris Ginsburg
  • Recycle-and-Distill: Universal Compression Strategy for Transformer-based Speech SSL Models with Attention Map Reusing and Masking Distillation (2023): Kangwook Jang, Sungnyun Kim, Se-Young Yun, Hoirin Kim
  • Relaxed Attention: A Simple Method to Boost Performance of End-to-End Automatic Speech Recognition (2021): Timo Lohrenz, Patrick Schwarz, Zhengyang Li, Tim Fingscheidt
  • WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing (2022): Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao
  • Optimizing Alignment of Speech and Language Latent Spaces for End-to-End Speech Recognition and Understanding (2021): Wei Wang, Shuo Ren, Yao Qian, Shujie Liu, Yu Shi, Yanmin Qian, Michael Zeng
  • Optimizing Alignment of Speech and Language Latent Spaces for End-to-End Speech Recognition and Understanding (2022): Wei Wang, Shuo Ren, Yao Qian, Shujie Liu, Yu Shi, Yanmin Qian, Michael Zeng
  • Joint Encoder-Decoder Self-Supervised Pre-training for ASR (2022): A. Arunkumar, S. Umesh
  • Transformer-Based ASR Incorporating Time-Reduction Layer and Fine-Tuning with Self-Knowledge Distillation (2021): Md. Akmal Haidar, Chao Xing, Mehdi Rezagholizadeh
  • A Transformer with Interleaved Self-attention and Convolution for Hybrid Acoustic Models (2019): Liang Lu

Works That Cite This (22)

  • High-Accuracy and Low-Latency Speech Recognition with Two-Head Contextual Layer Trajectory LSTM Model (2020): Jinyu Li, Rui Zhao, Eric Sun, Jeremy H. M. Wong, Amit Das, Zhong Meng, Yifan Gong
  • Low Latency End-to-End Streaming Speech Recognition with a Scout Network (2020): Chengyi Wang, Yu Wu, Shujie Liu, Jinyu Li, Liang Lu, Guoli Ye, Ming Zhou
  • Investigation of Practical Aspects of Single Channel Speech Separation for ASR (2021): Jian Wu, Zhuo Chen, Sanyuan Chen, Yu Wu, Takuya Yoshioka, Naoyuki Kanda, Shujie Liu, Jinyu Li
  • Curriculum Pre-training for End-to-End Speech Translation (2020): Chengyi Wang, Yu Wu, Shujie Liu, Ming Zhou, Zhenglu Yang
  • Don't Shoot Butterfly with Rifles: Multi-Channel Continuous Speech Separation with Early Exit Transformer (2020): Sanyuan Chen, Yu Wu, Zhuo Chen, Takuya Yoshioka, Shujie Liu, Jinyu Li
  • Don't Shoot Butterfly with Rifles: Multi-Channel Continuous Speech Separation with Early Exit Transformer (2021): Sanyuan Chen, Yu Wu, Zhuo Chen, Takuya Yoshioka, Shujie Liu, Jinyu Li, Xiangzhan Yu
  • MixSpeech: Data Augmentation for Low-Resource Automatic Speech Recognition (2021): Linghui Meng, Jin Xu, Xu Tan, Jindong Wang, Tao Qin, Bo Xu
  • Ultra Fast Speech Separation Model with Teacher Student Learning (2021): Sanyuan Chen, Yu Wu, Zhuo Chen, Jian Wu, Takuya Yoshioka, Shujie Liu, Jinyu Li, Xiangzhan Yu
  • Optimizing Alignment of Speech and Language Latent Spaces for End-to-End Speech Recognition and Understanding (2022): Wei Wang, Shuo Ren, Yao Qian, Shujie Liu, Yu Shi, Yanmin Qian, Michael Zeng