ShrinkML: End-to-End ASR Model Compression Using Reinforcement Learning

Type: Preprint

Publication Date: 2019-07-08

Citations: 0

Locations

  • arXiv (Cornell University)

Similar Works

  • ShrinkML: End-to-End ASR Model Compression Using Reinforcement Learning (2019). Łukasz Dudziak, Mohamed S. Abdelfattah, Ravichander Vipperla, Stefanos Laskaridis, Nicholas D. Lane
  • Iterative Compression of End-to-End ASR Model using AutoML (2020). Abhinav Mehrotra, Łukasz Dudziak, Jinsu Yeo, Young-yoon Lee, Ravichander Vipperla, Mohamed S. Abdelfattah, Sourav Bhattacharya, Samin Ishtiaq, Alberto Gil C. P. Ramos, Sang-Jeong Lee
  • Continual Learning Optimizations for Auto-regressive Decoder of Multilingual ASR systems (2024). Chin Yuen Kwok, Jia Qi Yip, Eng Siong Chng
  • Compressing Transformer-based self-supervised models for speech processing (2022). Tzu-Quan Lin, Tsung-Huan Yang, Chun-Yao Chang, Kuang-Ming Chen, Tzu-hsun Feng, Hung-yi Lee, Hao Tang
  • An Empirical Study of Efficient ASR Rescoring with Transformers (2019). Hongzhao Huang, Fuchun Peng
  • Neural Language Model Pruning for Automatic Speech Recognition (2023). Leonardo Emili, Thiago Fraga-Silva, Ernest Pusateri, Markus Nußbaum-Thom, Youssef Oualil
  • Efficiently Train ASR Models that Memorize Less and Perform Better with Per-core Clipping (2024). Lun Wang, Om Thakkar, Zhong Meng, Nicole Rafidi, Rohit Prabhavalkar, Arun Narayanan
  • You Only Prune Once: Designing Calibration-Free Model Compression With Policy Learning (2025). Ayan Sengupta, Sardar Chaudhary, Tanmoy Chakraborty
  • Dynamically Hierarchy Revolution: DirNet for Compressing Recurrent Neural Network on Mobile Devices (2018). Jie Zhang, Xiaolong Wang, Dawei Li, Yalin Wang
  • RankAdaptor: Hierarchical Dynamic Low-Rank Adaptation for Structural Pruned LLMs (2024). Changhai Zhou, Shijie Han, S. Zhang, Shichao Weng, Zekai Liu, Cheng Jin
  • Losses Can Be Blessings: Routing Self-Supervised Speech Representations Towards Efficient Multilingual and Multitask Speech Processing (2022). Yonggan Fu, Yang Zhang, Kaizhi Qian, Zhifan Ye, Zhongzhi Yu, Cheng-I Lai, Yingyan Lin
  • RAND: Robustness Aware Norm Decay For Quantized Seq2seq Models (2023). David Qiu, David Rim, Shaojin Ding, Oleg Rybakov, Yanzhang He
  • CoLLD: Contrastive Layer-to-Layer Distillation for Compressing Multilingual Pre-Trained Speech Encoders (2024). Heng-Jui Chang, Ning Dong, Ruslan Mavlyutov, Sravya Popuri, Yu-An Chung
  • CoLLD: Contrastive Layer-to-layer Distillation for Compressing Multilingual Pre-trained Speech Encoders (2023). Heng-Jui Chang, Ning Dong, Ruslan Mavlyutov, Sravya Popuri, Yu-An Chung

Works That Cite This (0)