Lower Bounds and Nearly Optimal Algorithms in Distributed Learning with Communication Compression

Type: Preprint

Publication Date: 2022-01-01

Citations: 0

DOI: https://doi.org/10.48550/arxiv.2206.03665

Locations

  • arXiv (Cornell University)
  • DataCite API

Similar Works

  • Lower Bounds and Accelerated Algorithms in Distributed Stochastic Optimization with Communication Compression (2023). Yutong He, Xinmeng Huang, Yiming Chen, Wotao Yin, Kun Yuan
  • EF-BV: A Unified Theory of Error Feedback and Variance Reduction Mechanisms for Biased and Unbiased Compression in Distributed Optimization (2022). Laurent Condat, Kai Yi, Peter Richtárik
  • On Biased Compression for Distributed Learning (2020). Aleksandr Beznosikov, Samuel Horváth, Peter Richtárik, Mher Safaryan
  • Bidirectional compression in heterogeneous settings for distributed or federated learning with partial participation: tight convergence guarantees (2020). Constantin Philippenko, Aymeric Dieuleveut
  • EF21-P and Friends: Improved Theoretical Communication Complexity for Distributed Optimization with Bidirectional Compression (2022). Kaja Gruntkowska, Alexander Tyurin, Peter Richtárik
  • A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning (2020). Samuel Horváth, Peter Richtárik
  • Artemis: tight convergence guarantees for bidirectional compression in Federated Learning (2020). Constantin Philippenko, Aymeric Dieuleveut
  • Unbiased Compression Saves Communication in Distributed Optimization: When and How Much? (2023). Yutong He, Xinmeng Huang, Kun Yuan
  • Preserved central model for faster bidirectional compression in distributed settings (2021). Constantin Philippenko, Aymeric Dieuleveut
  • Accelerated Methods with Compressed Communications for Distributed Optimization Problems under Data Similarity (2024). Dmitry Bylinkin, Aleksandr Beznosikov
  • Stochastic Distributed Learning with Gradient Quantization and Variance Reduction (2019). Samuel Horváth, Dmitry Kovalev, Konstantin Mishchenko, Sebastian U. Stich, Peter Richtárik
  • Faster Rates for Compressed Federated Learning with Client-Variance Reduction (2021). Haoyu Zhao, Konstantin Burlachenko, Zhize Li, Peter Richtárik
  • Faster Rates for Compressed Federated Learning with Client-Variance Reduction (2024). Haoyu Zhao, Konstantin Burlachenko, Zhize Li, Peter Richtárik
  • MARINA: Faster Non-Convex Distributed Learning with Compression (2021). Eduard Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik

Works That Cite This (0)


Works Cited by This (0)
