Linearly Converging Error Compensated SGD
Eduard Gorbunov, Dmitry Kovalev, Dmitry I. Makarenko, Peter Richtárik
Type: Article
Publication Date: 2020-01-01
Citations: 19
Locations: arXiv (Cornell University)
Similar Works
Linearly Converging Error Compensated SGD (2020). Eduard Gorbunov, Dmitry Kovalev, Dmitry I. Makarenko, Peter Richtárik.
Distributed Methods with Absolute Compression and Error Compensation (2022). Marina Danilova, Eduard Gorbunov.
On Biased Compression for Distributed Learning (2020). Aleksandr Beznosikov, Samuel Horváth, Peter Richtárik, Mher Safaryan.
Truncated Non-Uniform Quantization for Distributed SGD (2024). Guangfeng Yan, Li Tan, Yuanzhang Xiao, Congduan Li, Linqi Song.
Communication-Censored Distributed Stochastic Gradient Descent (2019). Weiyu Li, Tianyi Chen, Liping Li, Zhaoxian Wu, Qing Ling.
Error Compensated Distributed SGD Can Be Accelerated (2020). Xun Qian, Peter Richtárik, Tong Zhang.
Distributed and Stochastic Optimization Methods with Gradient Compression and Local Steps (2021). Eduard Gorbunov.
DoubleSqueeze: Parallel Stochastic Gradient Descent with Double-Pass Error-Compensated Compression (2019). Hanlin Tang, Xiangru Lian, Chen Yu, Tong Zhang, Ji Liu.
Communication-Censored Distributed Stochastic Gradient Descent (2021). Weiyu Li, Zhaoxian Wu, Tianyi Chen, Liping Li, Qing Ling.
Error Compensated Loopless SVRG, Quartz, and SDCA for Distributed Optimization (2021). Xun Qian, Hanze Dong, Peter Richtárik, Tong Zhang.
Stochastic Distributed Learning with Gradient Quantization and Variance Reduction (2019). Samuel Horváth, Dmitry Kovalev, Konstantin Mishchenko, Sebastian U. Stich, Peter Richtárik.
Fundamental Limits of Communication Efficiency for Model Aggregation in Distributed Learning: A Rate-Distortion Approach (2022). Naifu Zhang, Meixia Tao, Jia Wang, Fan Xu.
CD-SGD: Distributed Stochastic Gradient Descent with Compression and Delay Compensation (2021). Enda Yu, Dezun Dong, Yemao Xu, Shuo Ouyang, Xiangke Liao.
Error Compensated Quantized SGD and its Applications to Large-scale Distributed Optimization (2018). Jiaxiang Wu, Weidong Huang, Junzhou Huang, Tong Zhang.
On Communication Compression for Distributed Optimization on Heterogeneous Data (2020). Sebastian U. Stich.
On the Convergence of Quantized Parallel Restarted SGD for Serverless Learning (2020). Feijie Wu, Shiqi He, Yutong Yang, Haozhao Wang, Zhihao Qu, Song Guo.
Works That Cite This (18)
Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices (2021). Max Ryabinin, Eduard Gorbunov, Vsevolod Plokhotnyuk, Gennady Pekhimenko.
Permutation Compressors for Provably Faster Distributed Nonconvex Optimization (2021). Rafał Szlendak, Alexander Tyurin, Peter Richtárik.
EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback (2021). Ilyas Fatkhullin, Igor Sokolov, Eduard Gorbunov, Zhize Li, Peter Richtárik.
A Field Guide to Federated Optimization (2021). Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H. Brendan McMahan, Blaise Agüera y Arcas, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly.
On Communication Compression for Distributed Optimization on Heterogeneous Data (2020). Sebastian U. Stich.
Parallel and Distributed algorithms for ML problems (2020). Darina Dvinskikh, Alexander Gasnikov, Alexander Rogozin, Aleksandr Beznosikov.
Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients (2021). Aritra Mitra, Rayana H. Jaafar, George J. Pappas, Hamed Hassani.
What Do We Mean by Generalization in Federated Learning? (2021). Honglin Yuan, Warren R. Morningstar, Lin Ning, Karan Singhal.
MARINA: Faster Non-Convex Distributed Learning with Compression (2021). Eduard Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik.
Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks (2021). Dmitry Kovalev, Elnur Gasanov, Peter Richtárik, Alexander Gasnikov.
Works Cited by This (0)