Preserved central model for faster bidirectional compression in distributed settings

Type: Preprint

Publication Date: 2021-01-01

Citations: 3

DOI: https://doi.org/10.48550/arxiv.2102.12528

Locations

  • arXiv (Cornell University)
  • DataCite API

Similar Works

  • Preserved central model for faster bidirectional compression in distributed settings (2021). Constantin Philippenko, Aymeric Dieuleveut.
  • Bidirectional compression in heterogeneous settings for distributed or federated learning with partial participation: tight convergence guarantees (2020). Constantin Philippenko, Aymeric Dieuleveut.
  • Communication Compression for Decentralized Training (2018). Hanlin Tang, Shaoduo Gan, Ce Zhang, Tong Zhang, Ji Liu.
  • Artemis: tight convergence guarantees for bidirectional compression in Federated Learning (2020). Constantin Philippenko, Aymeric Dieuleveut.
  • Lower Bounds and Nearly Optimal Algorithms in Distributed Learning with Communication Compression (2022). Xinmeng Huang, Yiming Chen, Wotao Yin, Kun Yuan.
  • Communication Compression for Distributed Learning without Control Variates (2024). Tomàs Ortega, Chun-Hsiang Huang, Xiaoxiao Li, Hamid Jafarkhani.
  • EF21-P and Friends: Improved Theoretical Communication Complexity for Distributed Optimization with Bidirectional Compression (2022). Kaja Gruntkowska, Alexander Tyurin, Peter Richtárik.
  • MARINA: Faster Non-Convex Distributed Learning with Compression (2021). Eduard Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik.
  • 2Direction: Theoretically Faster Distributed Training with Bidirectional Communication Compression (2023). Alexander Tyurin, Peter Richtárik.
  • Faster Rates for Compressed Federated Learning with Client-Variance Reduction (2021). Haoyu Zhao, Konstantin Burlachenko, Zhize Li, Peter Richtárik.
  • TAMUNA: Doubly Accelerated Federated Learning with Local Training, Compression, and Partial Participation (2023). Laurent Condat, Grigory Malinovsky, Peter Richtárik.
  • DePRL: Achieving Linear Convergence Speedup in Personalized Decentralized Learning with Shared Representations (2023). Guojun Xiong, Gang Yan, Shiqiang Wang, Jian Li.
  • DePRL: Achieving Linear Convergence Speedup in Personalized Decentralized Learning with Shared Representations (2024). Guojun Xiong, Gang Yan, Shiqiang Wang, Jian Li.
  • Faster Rates for Compressed Federated Learning with Client-Variance Reduction (2024). Haoyu Zhao, Konstantin Burlachenko, Zhize Li, Peter Richtárik.
  • Accelerated Methods with Compressed Communications for Distributed Optimization Problems under Data Similarity (2024). Dmitry Bylinkin, Aleksandr Beznosikov.
  • Improving Accelerated Federated Learning with Compression and Importance Sampling (2023). Michał Grudzień, Grigory Malinovsky, Peter Richtárik.
  • Federated Learning with Compression: Unified Analysis and Sharp Guarantees (2020). Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, Mehrdad Mahdavi.

Citing (37)

  • DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients (2016). Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou.
  • Federated Optimization: Distributed Machine Learning for On-Device Intelligence (2016). Jakub Konečný, H. Brendan McMahan, Daniel Ramage, Peter Richtárik.
  • TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning (2017). Wei Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li.
  • Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms (2017). Han Xiao, Kashif Rasul, Roland Vollgraf.
  • Gradient Sparsification for Communication-Efficient Distributed Optimization (2017). Jianqiao Wangni, Jialei Wang, Ji Liu, Tong Zhang.
  • QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding (2016). Dan Alistarh, Demjan Grubic, Jerry Li, Ryota Tomioka, Milan Vojnović.
  • D²: Decentralized Training over Decentralized Data (2018). Hanlin Tang, Xiangru Lian, Ming Yan, Ce Zhang, Ji Liu.
  • cpSGD: Communication-efficient and differentially-private distributed SGD (2018). Naman Agarwal, Ananda Theertha Suresh, Felix Yu, Sanjiv Kumar, H. Brendan McMahan.
  • Optimal Algorithms for Non-Smooth Distributed Optimization in Networks (2018). Kevin Scaman, Francis Bach, Sébastien Bubeck, Yin Tat Lee, Laurent Massoulié.
  • Stochastic Distributed Learning with Gradient Quantization and Variance Reduction (2019). Samuel Horváth, Dmitry Kovalev, Konstantin Mishchenko, Sebastian U. Stich, Peter Richtárik.
  • Communication-Efficient Distributed Blockwise Momentum SGD with Error-Feedback (2019). Shuai Zheng, Ziyue Huang, James T. Kwok.
  • DoubleSqueeze: Parallel Stochastic Gradient Descent with Double-Pass Error-Compensated Compression (2019). Hanlin Tang, Xiangru Lian, Chen Yu, Tong Zhang, Ji Liu.
  • Decentralized Collaborative Learning of Personalized Models over Networks (2016). Paul Vanhaesebrouck, Aurélien Bellet, Marc Tommasi.
  • TensorFlow: A system for large-scale machine learning (2016). Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard.
  • Randomized Smoothing for Stochastic Optimization (2012). John C. Duchi, Peter L. Bartlett, Martin J. Wainwright.
  • Perturbed Iterate Analysis for Asynchronous Stochastic Optimization (2017). Horia Mania, Xinghao Pan, Dimitris Papailiopoulos, Benjamin Recht, Kannan Ramchandran, Michael I. Jordan.
  • The Convergence of Sparsified Gradient Methods (2018). Dan Alistarh, Torsten Hoefler, Mikael Johansson, Nikola Konstantinov, Sarit Khirirat, Cédric Renggli.
  • A data-driven statistical model for predicting the critical temperature of a superconductor (2018). Kam Hamidieh.
  • Error Compensated Quantized SGD and its Applications to Large-scale Distributed Optimization (2018). Jiaxiang Wu, Weidong Huang, Junzhou Huang, Tong Zhang.
  • A Double Residual Compression Algorithm for Efficient Distributed Learning (2019). Xiaorui Liu, Yao Li, Jiliang Tang, Ming Yan.
  • Privacy for Free: Communication-Efficient Learning with Differential Privacy Using Sketches (2019). Tian Li, Zaoxing Liu, Vyas Sekar, Virginia Smith.
  • Robust and Communication-Efficient Federated Learning From Non-i.i.d. Data (2019). Felix Sattler, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek.
  • Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization (2020). Zhize Li, Dmitry Kovalev, Xun Qian, Peter Richtárik.
  • Communication Efficient Sparsification for Large Scale Machine Learning (2020). Sarit Khirirat, Sindri Magnússon, Arda Aytekin, Mikael Johansson.
  • RATQ: A Universal Fixed-Length Quantizer for Stochastic Optimization (2021). Prathamesh Mayekar, Himanshu Tyagi.
  • Artemis: tight convergence guarantees for bidirectional compression in Federated Learning (2020). Constantin Philippenko, Aymeric Dieuleveut.
  • Sparsified Privacy-Masking for Communication-Efficient and Privacy-Preserving Federated Learning (2020). Rui Hu, Yanmin Gong, Yuanxiong Guo.
  • Training Faster with Compressed Gradient (2020). An Xu, Zhouyuan Huo, Heng Huang.
  • Linearly Converging Error Compensated SGD (2020). Eduard Gorbunov, Dmitry Kovalev, Dmitry I. Makarenko, Peter Richtárik.
  • Free-rider Attacks on Model Aggregation in Federated Learning (2020). Yann Fraboni, R. Vidal, Marco Lorenzi.
  • Green Algorithms: Quantifying the Carbon Footprint of Computation (2021). Loïc Lannelongue, Jason Grealey, Michael Inouye.
  • On Biased Compression for Distributed Learning (2020). Aleksandr Beznosikov, Samuel Horváth, Peter Richtárik, Mher Safaryan.
  • Communication-efficient distributed SGD with Sketching (2019). Nikita Ivkin, Daniel Rothchild, Enayat Ullah, Vladimir Braverman, Ion Stoica, Raman Arora.
  • LEAF: A Benchmark for Federated Settings (2018). Sebastian Caldas, Sai Meher Karthik Duddu, Peter Wu, Tian Li, Jakub Konečný, H. Brendan McMahan, Virginia Smith, Ameet Talwalkar.
  • Advances and Open Problems in Federated Learning (2021). Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings.
  • Communication-Efficient Learning of Deep Networks from Decentralized Data (2016). H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, Blaise Agüera y Arcas.
  • Gossip Dual Averaging for Decentralized Optimization of Pairwise Functions (2016). Igor Colin, Aurélien Bellet, Joseph Salmon, Stéphan Clémençon.