MARINA: Faster Non-Convex Distributed Learning with Compression
Eduard Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik
Type: Article
Publication Date: 2021-07-18
Citations: 10
Locations
International Conference on Machine Learning
Similar Works
MARINA: Faster Non-Convex Distributed Learning with Compression (2021). Eduard Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik
Faster Rates for Compressed Federated Learning with Client-Variance Reduction (2021). Haoyu Zhao, Konstantin Burlachenko, Zhize Li, Peter Richtárik
Faster Rates for Compressed Federated Learning with Client-Variance Reduction (2024). Haoyu Zhao, Konstantin Burlachenko, Zhize Li, Peter Richtárik
Bidirectional compression in heterogeneous settings for distributed or federated learning with partial participation: tight convergence guarantees (2020). Constantin Philippenko, Aymeric Dieuleveut
Artemis: tight convergence guarantees for bidirectional compression in Federated Learning (2020). Constantin Philippenko, Aymeric Dieuleveut
TAMUNA: Doubly Accelerated Federated Learning with Local Training, Compression, and Partial Participation (2023). Laurent Condat, Grigory Malinovsky, Peter Richtárik
CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression (2021). Zhize Li, Peter Richtárik
Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top (2022). Eduard Gorbunov, Samuel Horváth, Peter Richtárik, Gauthier Gidel
EF21-P and Friends: Improved Theoretical Communication Complexity for Distributed Optimization with Bidirectional Compression (2022). Kaja Gruntkowska, Alexander Tyurin, Peter Richtárik
Federated Learning with Compression: Unified Analysis and Sharp Guarantees (2020). Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, Mehrdad Mahdavi
GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity (2022). Artavazd Maranjyan, Mher Safaryan, Peter Richtárik
Provably Doubly Accelerated Federated Learning: The First Theoretically Successful Combination of Local Training and Communication Compression (2022). Laurent Condat, Ivan Agarský, Peter Richtárik
Faster Non-Convex Federated Learning via Global and Local Momentum (2020). Rudrajit Das, Anish Acharya, Abolfazl Hashemi, Sujay Sanghavi, Inderjit S. Dhillon, Ufuk Topcu
Byzantine-Robust and Communication-Efficient Distributed Learning via Compressed Momentum Filtering (2024). Changxin Liu, Yonghui Li, Yuhao Yi, Karl Henrik Johansson
Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates (2023). Ahmad Rammal, Kaja Gruntkowska, Nikita Fedin, Eduard Gorbunov, Peter Richtárik
Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization (2020). Zhize Li, Dmitry Kovalev, Xun Qian, Peter Richtárik
Cited by (8)
CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression (2021). Zhize Li, Peter Richtárik
Optimal Data Splitting in Distributed Optimization for Machine Learning (2024). Daniil Medyakov, Gleb Molodtsov, Aleksandr Beznosikov, Alexander Gasnikov
Federated Learning is Better with Non-Homomorphic Encryption (2023). Konstantin Burlachenko, Abdulmajeed Alrowithi, Fahad Albalawi, Peter Richtárik
Real Acceleration of Communication Process in Distributed Algorithms with Compression (2023). С. Й. Ткаченко, Artem Andreev, Aleksandr Beznosikov, Alexander Gasnikov
Federated Learning with Flexible Control (2023). Shiqiang Wang, Jake Perazzone, Mingyue Ji, Kevin Chan
Activations and Gradients Compression for Model-Parallel Training (2024). Mikhail Rudakov, Aleksandr Beznosikov, Yaroslav Kholodov, Alexander Gasnikov
DSAG: A Mixed Synchronous-Asynchronous Iterative Method for Straggler-Resilient Learning (2022). Albin Severinson, Eirik Rosnes, Salim El Rouayheb, Alexandre Graell i Amat
Det-CGD: Compressed Gradient Descent with Matrix Stepsizes for Non-Convex Optimization (2023). Hanmin Li, Avetik Karagulyan, Peter Richtárik
Citing (0)