PipeMare: Asynchronous Pipeline Parallel DNN Training

Type: Preprint

Publication Date: 2019

Citations: 10

DOI: https://doi.org/10.48550/arxiv.1910.05124

Locations

  • arXiv (Cornell University)
  • DataCite API

Similar Works

  • PipeMare: Asynchronous Pipeline Parallel DNN Training (2019). Bowen Yang, Jian Zhang, Jonathan Li, Christopher Ré, Christopher R. Aberger, Christopher De Sa
  • Efficient Pipeline Planning for Expedited Distributed DNN Training (2022). Ziyue Luo, Xiaodong Yi, Guoping Long, Shiqing Fan, Chuan Wu, Jun Yang, Wei Lin
  • PipeDream: Fast and Efficient Pipeline Parallel DNN Training (2018). Aaron Harlap, Deepak Narayanan, Amar Phanishayee, Vivek Seshadri, Nikhil R. Devanur, Gregory R. Ganger, Phillip B. Gibbons
  • GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism (2018). Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Fırat, Mia Xu Chen, Dehao Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V. Le, Yonghui Wu
  • GraphPipe: Improving Performance and Scalability of DNN Training with Graph Pipeline Parallelism (2024). Byungsoo Jeon, Mengdi Wu, Shiyi Cao, Sunghyun Kim, Sunghyun Park, Neeraj Aggarwal, Colin Unger, Daiyaan Arfeen, Peiyuan Liao, Xupeng Miao
  • BitPipe: Bidirectional Interleaved Pipeline Parallelism for Accelerating Large Models Training (2024). Houming Wu, Ling Chen, Wenjie Yu
  • 2BP: 2-Stage Backpropagation (2024). Christopher Rae, Joseph K. L. Lee, James Richings
  • Scaling Deep Learning Training with MPMD Pipeline Parallelism (2024). Anxhelo Xhebraj, Sean Lee, Hanfeng Chen, Vinod Grover
  • DAPPLE: A Pipelined Data Parallel Approach for Training Large Models (2020). Shiqing Fan, Yi Rong, Meng Chen, Zongyan Cao, Siyu Wang, Zhen Zheng, Chuan Wu, Guoping Long, Jun Yang, Lixue Xia
  • XPipe: Efficient Pipeline Model Parallelism for Multi-GPU DNN Training (2019). Lei Guan, Wotao Yin, Dongsheng Li, Xicheng Lu
  • Scaling Distributed Deep Learning Workloads beyond the Memory Capacity with KARMA (2020). Mohamed Wahib, Haoyu Zhang, Truong Thao Nguyen, Aleksandr Drozd, Jens Domke, Lingqi Zhang, Ryousei Takano, Satoshi Matsuoka
  • HetPipe: Enabling Large DNN Training on (Whimpy) Heterogeneous GPU Clusters through Integration of Pipelined Model Parallelism and Data Parallelism (2020). Jay Park, Gyeongchan Yun, Chang M. Yi, Nguyen T. Nguyen, Seungmin Lee, Jaesik Choi, Sam H. Noh, Young-ri Choi
  • LayerPipe: Accelerating Deep Neural Network Training by Intra-Layer and Inter-Layer Gradient Pipelining and Multiprocessor Scheduling (2021). Nanda K. Unnikrishnan, Keshab K. Parhi
  • TiMePReSt: Time and Memory Efficient Pipeline Parallel DNN Training with Removed Staleness (2024). A. Dutta, Nabendu Chaki, Rajat K. De
  • AMPNet: Asynchronous Model-Parallel Training for Dynamic Neural Networks (2017). Alexander L. Gaunt, Matthew Johnson, Maik Riechert, Daniel Tarlow, Ryota Tomioka, Dimitrios Vytiniotis, Sam Webster
  • How to Train Your Neural Network: A Comparative Evaluation (2021). Shu-Huai Lin, Daniel Nichols, Siddharth Singh, Abhinav Bhatelé

Works Cited by This (0)
