Chao Sun

Commonly Cited References
Each reference below is cited 1 time.

Best Practices for Scientific Computing (2014): Greg Wilson, D. A. Aruliah, C. Titus Brown, Neil Chue Hong, M. Ryleigh Davis, Richard Guy, Steven H. D. Haddock, Kathryn D. Huff, Ian M. Mitchell, Mark D. Plumbley
The NumPy Array: A Structure for Efficient Numerical Computation (2011): Stéfan van der Walt, Steven C. Colbert, Gaël Varoquaux
Deep Residual Learning for Image Recognition (2016): Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
Proceedings of the 25th international conference on Machine learning - ICML '08 (2008): (no authors listed)
AMPNet: Asynchronous Model-Parallel Training for Dynamic Neural Networks (2017): Alexander L. Gaunt, Matthew Johnson, Maik Riechert, Daniel Tarlow, Ryota Tomioka, Dimitrios Vytiniotis, Sam Webster
Efficient and Robust Parallel DNN Training through Model Parallelism on Multi-GPU Platform (2018): Chi‐Chung Chen, Chia-Lin Yang, Hsiang-Yun Cheng
Proceedings of the 25th international conference on Machine learning (2008): William W. Cohen, Andrew McCallum, Sam T. Roweis
ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks (2018): Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu, Chao Dong, Chen Change Loy, Yu Qiao, Xiaoou Tang
Distributed Deep Neural Networks Over the Cloud, the Edge and End Devices (2017): Surat Teerapittayanon, Bradley McDanel, H. T. Kung
Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation (2018): Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, Hartwig Adam
Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism (2019): Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, Bryan Catanzaro
JointDNN: An Efficient Training and Inference Engine for Intelligent Mobile Cloud Computing Services (2019): Amir Erfan Eshratifar, Mohammad Saeed Abrishami, Massoud Pedram
XPipe: Efficient Pipeline Model Parallelism for Multi-GPU DNN Training (2019): Lei Guan, Wotao Yin, Dongsheng Li, Xicheng Lu
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (2020): Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly
LCP: A Low-Communication Parallelization Method for Fast Neural Network Inference in Image Recognition (2020): Ramyad Hadidi, Bahar Asgari, Jiashen Cao, Younmin Bae, Da Eun Shim, Hyojong Kim, Sung Kyu Lim, Michael S. Ryoo, Hyesoon Kim
Sparse Communication for Distributed Gradient Descent (2017): Alham Fikri Aji, Kenneth Heafield
SSD: Single Shot MultiBox Detector (2016): Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg
Learned Gradient Compression for Distributed Deep Learning (2021): Lusine Abrahamyan, Yiming Chen, Giannis Bekoulis, Nikos Deligiannis
Automatic Graph Partitioning for Very Large-scale Deep Learning (2021): Masahiro Tanaka, Kenjiro Taura, Toshihiro Hanawa, Kentaro Torisawa
DEFER: Distributed Edge Inference for Deep Neural Networks (2022): Arjun Parthasarathy, Bhaskar Krishnamachari
DistrEdge: Speeding up Convolutional Neural Network Inference on Distributed Edge Devices (2022): Xueyu Hou, Yongjie Guan, Tao Han, Ning Zhang
Nested Dithered Quantization for Communication Reduction in Distributed Training (2019): Afshin Abdi, Faramarz Fekri
PipeDream: Fast and Efficient Pipeline Parallel DNN Training (2018): Aaron Harlap, Deepak Narayanan, Amar Phanishayee, Vivek Seshadri, Nikhil R. Devanur, Greg Ganger, Phil Gibbons
SplitNets: Designing Neural Architectures for Efficient Distributed Computing on Head-Mounted Systems (2022): Xin Dong, B. De Salvo, Meng Li, Chiao Liu, Zhongnan Qu, H. T. Kung, Ziyun Li
Optimization framework for splitting DNN inference jobs over computing networks (2023): Sehun Jung, Hyang-Won Lee