Accelerated Methods with Compressed Communications for Distributed Optimization Problems under Data Similarity

Type: Preprint

Publication Date: 2024-12-20

Citations: 0

DOI: https://doi.org/10.48550/arxiv.2412.16414

Abstract

In recent years, as data and problem sizes have grown, distributed learning has become an essential tool for training high-performance models. However, the communication bottleneck, especially for high-dimensional data, remains a challenge. Several techniques have been developed to overcome this problem, including communication compression and the use of local steps, which work particularly well when the local data samples are similar. In this paper, we study the synergy of these approaches for efficient distributed optimization. We propose the first theoretically grounded accelerated algorithms that use unbiased and biased compression under data similarity, leveraging the variance reduction and error feedback frameworks. Our results improve on the best known rates and are confirmed by experiments on various average losses and datasets.
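
The abstract refers to unbiased and biased compression operators combined with an error-feedback mechanism. The code below is not taken from the paper; it is a minimal sketch, assuming rand-k as the unbiased compressor, top-k as the biased compressor, and a toy single-worker quadratic problem (all hypothetical choices), to illustrate what these building blocks look like.

```python
import numpy as np


def rand_k(x, k, rng):
    """Unbiased rand-k sparsification: keep k random coordinates, rescaled by d/k
    so that E[rand_k(x)] = x."""
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = x[idx] * (d / k)
    return out


def top_k(x, k):
    """Biased top-k sparsification: keep the k largest-magnitude coordinates."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out


def error_feedback_step(grad, error, k, lr):
    """One error-feedback update: compress (step + accumulated error),
    send the compressed message, keep the residual locally for the next round."""
    corrected = lr * grad + error
    message = top_k(corrected, k)      # what would actually be communicated
    new_error = corrected - message    # residual stored on the worker
    return message, new_error


rng = np.random.default_rng(0)

# Monte Carlo check that rand-k is unbiased (illustration only).
x_test = rng.standard_normal(10)
avg = np.mean([rand_k(x_test, 3, rng) for _ in range(5000)], axis=0)
print("rand-k max deviation from x:", np.max(np.abs(avg - x_test)))

# Toy quadratic f(x) = 0.5 * ||A x - b||^2 optimized with error-feedback top-k steps.
A, b = rng.standard_normal((20, 10)), rng.standard_normal(20)
x, error = np.zeros(10), np.zeros(10)
for _ in range(200):
    grad = A.T @ (A @ x - b)
    message, error = error_feedback_step(grad, error, k=2, lr=0.01)
    x -= message                       # "server" applies the compressed update
print("final loss:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```

In the distributed setting studied in the paper, each worker would transmit only its compressed message per communication round, which is where the savings, and the role of local steps and data similarity, come in; the sketch above only illustrates the compression and error-feedback primitives themselves.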

Locations

  • arXiv (Cornell University)

Similar Works

  • Accelerated Methods with Compression for Horizontal and Vertical Federated Learning (2024) - Sergey Stanko, Timur Karimullin, Aleksandr Beznosikov, Alexander Gasnikov
  • Error Compensated Distributed SGD Can Be Accelerated (2020) - Xun Qian, Peter Richtárik, Tong Zhang
  • EF-BV: A Unified Theory of Error Feedback and Variance Reduction Mechanisms for Biased and Unbiased Compression in Distributed Optimization (2022) - Laurent Condat, Kai Yi, Peter Richtárik
  • Unbiased Compression Saves Communication in Distributed Optimization: When and How Much? (2023) - Yutong He, Xinmeng Huang, Kun Yuan
  • Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization (2020) - Zhize Li, Dmitry Kovalev, Xun Qian, Peter Richtárik
  • Bidirectional compression in heterogeneous settings for distributed or federated learning with partial participation: tight convergence guarantees (2020) - Constantin Philippenko, Aymeric Dieuleveut
  • GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity (2022) - Artavazd Maranjyan, Mher Safaryan, Peter Richtárik
  • Communication Compression for Distributed Learning without Control Variates (2024) - Tomàs Ortega, Chun-Hsiang Huang, Xiaoxiao Li, Hamid Jafarkhani
  • LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression (2024) - Laurent Condat, Artavazd Maranjyan, Peter Richtárik
  • Accelerated Stochastic ExtraGradient: Mixing Hessian and Gradient Similarity to Reduce Communication in Distributed and Federated Learning (2024) - Dmitry Bylinkin, Kirill Degtyarev, Aleksandr Beznosikov
  • A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning (2020) - Samuel Horváth, Peter Richtárik
  • MARINA: Faster Non-Convex Distributed Learning with Compression (2021) - Eduard Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik
  • A Double Residual Compression Algorithm for Efficient Distributed Learning (2019) - Xiaorui Liu, Li Yao, Jiliang Tang, Ming Yan
  • TAMUNA: Doubly Accelerated Federated Learning with Local Training, Compression, and Partial Participation (2023) - Laurent Condat, Grigory Malinovsky, Peter Richtárik
  • EControl: Fast Distributed Optimization with Compression and Error Control (2023) - Yuan Gao, Rustem Islamov, Sebastian U. Stich
  • Provably Doubly Accelerated Federated Learning: The First Theoretically Successful Combination of Local Training and Communication Compression (2022) - Laurent Condat, Ivan Agarský, Peter Richtárik

Cited by (0)

Citing (0)