MARINA: Faster Non-Convex Distributed Learning with Compression

Type: Article

Publication Date: 2021-07-18

Citations: 10


Locations

  • International Conference on Machine Learning

Similar Works

  • MARINA: Faster Non-Convex Distributed Learning with Compression (2021) - Eduard Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik
  • Faster Rates for Compressed Federated Learning with Client-Variance Reduction (2021) - Haoyu Zhao, Konstantin Burlachenko, Zhize Li, Peter Richtárik
  • Faster Rates for Compressed Federated Learning with Client-Variance Reduction (2024) - Haoyu Zhao, Konstantin Burlachenko, Zhize Li, Peter Richtárik
  • Bidirectional compression in heterogeneous settings for distributed or federated learning with partial participation: tight convergence guarantees (2020) - Constantin Philippenko, Aymeric Dieuleveut
  • Artemis: tight convergence guarantees for bidirectional compression in Federated Learning (2020) - Constantin Philippenko, Aymeric Dieuleveut
  • TAMUNA: Doubly Accelerated Federated Learning with Local Training, Compression, and Partial Participation (2023) - Laurent Condat, Grigory Malinovsky, Peter Richtárik
  • CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression (2021) - Zhize Li, Peter Richtárik
  • Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top (2022) - Eduard Gorbunov, Samuel Horváth, Peter Richtárik, Gauthier Gidel
  • EF21-P and Friends: Improved Theoretical Communication Complexity for Distributed Optimization with Bidirectional Compression (2022) - Kaja Gruntkowska, Alexander Tyurin, Peter Richtárik
  • Federated Learning with Compression: Unified Analysis and Sharp Guarantees (2020) - Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, Mehrdad Mahdavi
  • GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity (2022) - Artavazd Maranjyan, Mher Safaryan, Peter Richtárik
  • Provably Doubly Accelerated Federated Learning: The First Theoretically Successful Combination of Local Training and Communication Compression (2022) - Laurent Condat, Ivan Agarský, Peter Richtárik
  • Faster Non-Convex Federated Learning via Global and Local Momentum (2020) - Rudrajit Das, Anish Acharya, Abolfazl Hashemi, Sujay Sanghavi, Inderjit S. Dhillon, Ufuk Topcu
  • Byzantine-Robust and Communication-Efficient Distributed Learning via Compressed Momentum Filtering (2024) - Changxin Liu, Yonghui Li, Yuhao Yi, Karl Henrik Johansson
  • Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates (2023) - Ahmad Rammal, Kaja Gruntkowska, Nikita Fedin, Eduard Gorbunov, Peter Richtárik
  • Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization (2020) - Zhize Li, Dmitry Kovalev, Xun Qian, Peter Richtárik

Citing (0)

No citing works are listed.