ShrinkML: End-to-End ASR Model Compression Using Reinforcement Learning
Łukasz Dudziak, Mohamed S. Abdelfattah, Ravichander Vipperla, Stefanos Laskaridis, Nicholas D. Lane
Type: Preprint
Publication Date: 2019-07-08
Citations: 0
Locations: arXiv (Cornell University)
Similar Works

- ShrinkML: End-to-End ASR Model Compression Using Reinforcement Learning (2019). Łukasz Dudziak, Mohamed S. Abdelfattah, Ravichander Vipperla, Stefanos Laskaridis, Nicholas D. Lane
- Iterative Compression of End-to-End ASR Model using AutoML (2020). Abhinav Mehrotra, Łukasz Dudziak, Jinsu Yeo, Young-Yoon Lee, Ravichander Vipperla, Mohamed S. Abdelfattah, Sourav Bhattacharya, Samin Ishtiaq, Alberto Gil C. P. Ramos, Sang-Jeong Lee
- Continual Learning Optimizations for Auto-regressive Decoder of Multilingual ASR systems (2024). Chin Yuen Kwok, Jia Qi Yip, Eng Siong Chng
- Compressing Transformer-based self-supervised models for speech processing (2022). Tzu-Quan Lin, Tsung-Huan Yang, Chun-Yao Chang, Kuang-Ming Chen, Tzu-hsun Feng, Hung-yi Lee, Hao Tang
- An Empirical Study of Efficient ASR Rescoring with Transformers (2019). Hongzhao Huang, Fuchun Peng
- Neural Language Model Pruning for Automatic Speech Recognition (2023). Leonardo Emili, Thiago Fraga-Silva, Ernest Pusateri, Markus Nußbaum-Thom, Youssef Oualil
- Efficiently Train ASR Models that Memorize Less and Perform Better with Per-core Clipping (2024). Lun Wang, Om Thakkar, Zhong Meng, Nicole Rafidi, Rohit Prabhavalkar, Arun Narayanan
- You Only Prune Once: Designing Calibration-Free Model Compression With Policy Learning (2025). Ayan Sengupta, Sardar Chaudhary, Tanmoy Chakraborty
- Dynamically Hierarchy Revolution: DirNet for Compressing Recurrent Neural Network on Mobile Devices (2018). Jie Zhang, Xiaolong Wang, Dawei Li, Yalin Wang
- RankAdaptor: Hierarchical Dynamic Low-Rank Adaptation for Structural Pruned LLMs (2024). Changhai Zhou, Shijie Han, S. Zhang, Shichao Weng, Zekai Liu, Cheng Jin
- Losses Can Be Blessings: Routing Self-Supervised Speech Representations Towards Efficient Multilingual and Multitask Speech Processing (2022). Yonggan Fu, Yang Zhang, Kaizhi Qian, Zhifan Ye, Zhongzhi Yu, Cheng-I Lai, Yingyan Lin
- RAND: Robustness Aware Norm Decay For Quantized Seq2seq Models (2023). David Qiu, David Rim, Shaojin Ding, Oleg Rybakov, Yanzhang He
- CoLLD: Contrastive Layer-to-layer Distillation for Compressing Multilingual Pre-trained Speech Encoders (2023). Heng-Jui Chang, Ning Dong, Ruslan Mavlyutov, Sravya Popuri, Yu-An Chung
Works That Cite This (0)
Works Cited by This (13)
- End-to-end Continuous Speech Recognition using Attention-based Recurrent NN: First Results (2014). Jan Chorowski, Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio
- Effective Approaches to Attention-based Neural Machine Translation (2015). Thang Luong, Hieu Pham, Christopher D. Manning
- Federated Learning: Strategies for Improving Communication Efficiency (2016). Jakub Konečný, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, Dave Bacon
- Improved Training of End-to-end Attention Models for Speech Recognition (2018). Albert Zeyer, Kazuki Irie, Ralf Schlüter, Hermann Ney
- RETURNN as a Generic Flexible Neural Toolkit with Application to Translation and Speech Recognition (2018). Albert Zeyer, Tamer Alkhouli, Hermann Ney
- AMC: AutoML for Model Compression and Acceleration on Mobile Devices (2018). Yihui He, Ji Lin, Zhijian Liu, Hanrui Wang, Li-Jia Li, Song Han
- Streaming End-to-end Speech Recognition for Mobile Devices (2019). Yanzhang He, Tara N. Sainath, Rohit Prabhavalkar, Ian McGraw, Raziel Álvarez, Ding Zhao, David Rybach, Anjuli Kannan, Yonghui Wu, Ruoming Pang
- Neural Machine Translation of Rare Words with Subword Units (2016). Rico Sennrich, Barry Haddow, Alexandra Birch
- Neural Architecture Search with Reinforcement Learning (2016). Barret Zoph, Quoc V. Le
- Monotonic Chunkwise Attention (2018). Chung-Cheng Chiu, Colin Raffel