Faster Rates for Compressed Federated Learning with Client-Variance Reduction

Type: Article

Publication Date: 2024-03-11

Citations: 0

DOI: https://doi.org/10.1137/23m1553820

Locations

  • SIAM Journal on Mathematics of Data Science
  • arXiv (Cornell University)

Similar Works

  • Faster Rates for Compressed Federated Learning with Client-Variance Reduction (2021): Haoyu Zhao, Konstantin Burlachenko, Zhize Li, Peter Richtárik
  • Artemis: tight convergence guarantees for bidirectional compression in Federated Learning (2020): Constantin Philippenko, Aymeric Dieuleveut
  • MARINA: Faster Non-Convex Distributed Learning with Compression (2021): Eduard Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik
  • Federated Learning with Compression: Unified Analysis and Sharp Guarantees (2020): Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, Mehrdad Mahdavi
  • CFedAvg: Achieving Efficient Communication and Fast Convergence in Non-IID Federated Learning (2021): Haibo Yang, Jia Liu, Elizabeth Serena Bentley
  • Bidirectional compression in heterogeneous settings for distributed or federated learning with partial participation: tight convergence guarantees (2020): Constantin Philippenko, Aymeric Dieuleveut
  • Analysis of Error Feedback in Federated Non-Convex Optimization with Biased Compression (2022): Xiaoyun Li, Ping Li
  • TAMUNA: Doubly Accelerated Federated Learning with Local Training, Compression, and Partial Participation (2023): Laurent Condat, Grigory Malinovsky, Peter Richtárik
  • Provably Doubly Accelerated Federated Learning: The First Theoretically Successful Combination of Local Training and Communication Compression (2022): Laurent Condat, Ivan Agarský, Peter Richtárik
  • Communication-Efficient Adaptive Federated Learning (2022): Yujia Wang, Lu Lin, Jinghui Chen
  • Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization (2020): Zhize Li, Dmitry Kovalev, Xun Qian, Peter Richtárik
  • Better Methods and Theory for Federated Learning: Compression, Client Selection and Heterogeneity (2022): Samuel Horváth
  • Fed-CVLC: Compressing Federated Learning Communications with Variable-Length Codes (2024): Xiaoxin Su, Yipeng Zhou, Laizhong Cui, John C. S. Lui, Jiangchuan Liu
  • Communication-efficient Vertical Federated Learning via Compressed Error Feedback (2024): Pedro Valdeira, João Xavier, Cláudia Soares, Yuejie Chi
  • Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates (2023): Ahmad Rammal, Kaja Gruntkowska, Nikita Fedin, Eduard Gorbunov, Peter Richtárik

Works That Cite This (0)


Works Cited by This (1)

  • Artificial intelligence and statistics (2018): Bin Yu, Karl Kumbier