A Communication-Efficient Decentralized Newton's Method With Provably Faster Convergence

Type: Article

Publication Date: 2023-01-01

Citations: 6

DOI: https://doi.org/10.1109/tsipn.2023.3290397

Abstract

In this paper, we consider a strongly convex finite-sum minimization problem over a decentralized network and propose a communication-efficient decentralized Newton's method for solving it. The main challenges in designing such an algorithm come from three aspects: (i) mismatch between local gradients/Hessians and the global ones; (ii) cost of sharing second-order information; (iii) tradeoff between computation and communication. To handle these challenges, we first apply dynamic average consensus (DAC) so that each node is able to use a local gradient approximation and a local Hessian approximation to track the global gradient and Hessian, respectively. Second, since exchanging Hessian approximations is far from communication-efficient, we require the nodes to exchange the compressed ones instead and then apply an error compensation mechanism to correct for the compression noise. Third, we introduce multi-step consensus for exchanging local variables and local gradient approximations to balance between computation and communication. With novel analysis, we establish the global linear (resp., asymptotically super-linear) convergence rate of the proposed algorithm when $m$ is constant (resp., tends to infinity), where $m \ge 1$ is the number of consensus inner steps. To the best of our knowledge, this is the first super-linear convergence result for a communication-efficient decentralized Newton's method. Moreover, the rate we establish is provably faster than those of first-order methods. Our numerical results on various applications corroborate the theoretical findings.
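The abstract's first ingredient, dynamic average consensus (DAC) tracking, follows a standard recursion in which each node mixes its neighbors' trackers and adds the increment of its own local gradient. The Python sketch below illustrates only that building block, under assumed choices not taken from the paper: a small ring network, quadratic local objectives, hand-picked mixing weights, a toy step size, and a generic gradient-tracking iterate update. The paper's full algorithm (Newton directions, compressed Hessian tracking with error compensation, multi-step consensus) is not reproduced here.

```python
import numpy as np

# Hedged sketch of DAC gradient tracking on a ring network with quadratic
# local objectives f_i(x) = 0.5 * ||A_i x - b_i||^2. All choices here
# (topology, weights, step size) are illustrative assumptions.

rng = np.random.default_rng(0)
n, d = 5, 3                                   # number of nodes, dimension

# Doubly stochastic mixing matrix for a ring graph.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

A = rng.standard_normal((n, 4, d))            # local data
b = rng.standard_normal((n, 4))

def grad(i, x):
    """Gradient of node i's local objective."""
    return A[i].T @ (A[i] @ x - b[i])

x = rng.standard_normal((n, d))                       # local iterates
g = np.array([grad(i, x[i]) for i in range(n)])       # trackers start at local gradients

for t in range(200):
    # Generic gradient-tracking iterate update (not the paper's Newton step):
    # mix neighbors' iterates and step along the tracked average gradient.
    x_new = W @ x - 0.02 * g
    # DAC update: mix neighbors' trackers, then add the local-gradient increment.
    # With a doubly stochastic W this preserves sum_i g_i = sum_i grad_i(x_i),
    # so each g_i tracks the network-average gradient as the iterates settle.
    g = W @ g + np.array([grad(i, x_new[i]) - grad(i, x[i]) for i in range(n)])
    x = x_new

avg_grad = np.mean([grad(i, x[i]) for i in range(n)], axis=0)
print("max tracking error:", np.max(np.linalg.norm(g - avg_grad, axis=1)))
```

According to the abstract, the paper applies the same tracking idea to Hessians as well, with the tracked Hessian approximations compressed before transmission and corrected by error compensation; the sketch keeps only the gradient case to stay short.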

Locations

  • arXiv (Cornell University)
  • IEEE Transactions on Signal and Information Processing over Networks

Similar Works

  • A Communication-Efficient Decentralized Newton's Method with Provably Faster Convergence (2022). Huikang Liu, Jiaojiao Zhang, Anthony Man-Cho So, Qing Ling
  • A Hessian Inversion-Free Exact Second Order Method for Distributed Consensus Optimization (2022). Dušan Jakovetić, Nataša Krejić, Nataša Krklec Jerinkić
  • A Fast Proximal Gradient Algorithm for Decentralized Composite Optimization over Directed Networks (2016). Jinshan Zeng, Tao He, Mingwen Wang
  • Distributed Newton Method for Large-Scale Consensus Optimization (2019). Rasul Tutunov, Haitham Bou-Ammar, Ali Jadbabaie
  • EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization (2014). Wei Shi, Qing Ling, Gang Wu, Wotao Yin
  • Distributed Adaptive Newton Methods with Global Superlinear Convergence (2020). Jiaqi Zhang, Keyou You, Tamer Başar
  • EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization (2015). Wei Shi, Qing Ling, Gang Wu, Wotao Yin
  • Decentralized Approximate Newton Methods for In-Network Optimization (2019). Hejie Wei, Zhihai Qu, Xuyang Wu, Hao Wang, Jie Lu
  • Compressed Distributed Gradient Descent: Communication-Efficient Consensus over Networks (2019). Xin Zhang, Jia Liu, Zhengyuan Zhu, Elizabeth Serena Bentley
  • A Distributed Newton Method for Large Scale Consensus Optimization (2016). Rasul Tutunov, Haitham Bou Ammar, Ali Jadbabaie
  • Gradient-Consensus Method for Distributed Optimization in Directed Networks (2019). Vivek Khatana, Govind Saraswat, Sourav Patel, Murti V. Salapaka
  • Compressed Distributed Gradient Descent: Communication-Efficient Consensus over Networks (2018). Xin Zhang, Jia Liu, Zhengyuan Zhu, Elizabeth Serena Bentley
  • Decentralized Approximate Newton Methods for Convex Optimization on Networked Systems (2021). Hejie Wei, Zhihai Qu, Xuyang Wu, Hao Wang, Jie Lu
  • DADAO: Decoupled Accelerated Decentralized Asynchronous Optimization (2022). Adel Nabli, Edouard Oyallon
  • Multi-consensus Decentralized Accelerated Gradient Descent (2020). Haishan Ye, Luo Luo, Ziang Zhou, Tong Zhang
  • Decentralized Approximate Newton Methods for Convex Optimization on Networked Systems (2019). Hejie Wei, Zhihai Qu, Xuyang Wu, Hao Wang, Jie Lu
  • On the Convergence of Decentralized Gradient Descent (2016). Kun Yuan, Qing Ling, Wotao Yin
  • Communication-Censored Linearized ADMM for Decentralized Consensus Optimization (2019). Weiyu Li, Yaohua Liu, Zhi Tian, Qing Ling