Fine-Tuning Strategies for Faster Inference Using Speech Self-Supervised Models: A Comparative Study

Type: Article

Publication Date: 2023-06-04

Citations: 8

DOI: https://doi.org/10.1109/icasspw59220.2023.10193042

Locations

  • arXiv (Cornell University)

Works Cited by This (19)

  • A Comparison of Techniques for Language Model Integration in Encoder-Decoder Speech Recognition (2018). Shubham Toshniwal, Anjuli Kannan, Chung‐Cheng Chiu, Yonghui Wu, Tara N. Sainath, Karen Livescu.
  • Reducing Transformer Depth on Demand with Structured Dropout (2019). Angela Fan, Édouard Grave, Armand Joulin.
  • wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations (2020). Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
  • On the Effect of Dropping Layers of Pre-trained Transformer Models (2022). Hassan Sajjad, Fahim Dalvi, Nadir Durrani, Preslav Nakov.
  • SpeechBrain: A General-Purpose Speech Toolkit (2021). Titouan Parcollet, Mirco Ravanelli, Peter Plantinga, Aku Rouhe, Samuele Cornell, Loren Lugosch, Cem Subakan, Nauman Dawalatabad, Abdelwahab Heba, Jianyuan Zhong.
  • DistilHuBERT: Speech Representation Learning by Layer-Wise Distillation of Hidden-Unit BERT (2022). Heng-Jui Chang, Shu-Wen Yang, Hung-yi Lee.
  • WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing (2022). Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao.
  • Self-supervised Learning with Random-projection Quantizer for Speech Recognition (2022). Chung‐Cheng Chiu, James Qin, Yu Zhang, Jiahui Yu, Yonghui Wu.
  • HuBERT-EE: Early Exiting HuBERT for Efficient Speech Recognition (2022). Ji Won Yoon, Beom Jun Woo, Nam Soo Kim.
  • LightHuBERT: Lightweight and Configurable Speech Representation Learning with Once-for-All Hidden-Unit BERT (2022). Rui Wang, Qibing Bai, Junyi Ao, Long Zhou, Zhixiang Xiong, Zhihua Wei, Yu Zhang, Tom Ko, Haizhou Li.