Newton-like Method with Diagonal Correction for Distributed Optimization
We consider distributed optimization problems in which networked nodes cooperatively minimize the sum of their locally known convex costs. A popular class of methods for solving these problems is distributed gradient methods, which are attractive due to their inexpensive iterations but suffer from slow convergence rates. This motivates …
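For concreteness, the setting described above can be sketched as follows. This is the standard distributed (consensus-based) gradient formulation, not text taken from the paper itself; the mixing weights $w_{ij}$, neighbor sets $\mathcal{N}_i$, and step size $\alpha$ are generic placeholders.

% Sketch of the standard problem setup: n nodes jointly minimize the sum of
% locally known convex costs f_i over a common variable x.
\[
  \min_{x \in \mathbb{R}^d} \; f(x) \;=\; \sum_{i=1}^{n} f_i(x).
\]
% A typical distributed gradient iteration: each node i averages its neighbors'
% iterates with consensus weights w_{ij} and takes a step along its local gradient.
\[
  x_i^{k+1} \;=\; \sum_{j \in \mathcal{N}_i \cup \{i\}} w_{ij}\, x_j^{k} \;-\; \alpha \,\nabla f_i\!\left(x_i^{k}\right).
\]

The inexpensive per-iteration cost mentioned in the abstract stems from each node needing only one local gradient evaluation and one exchange with its neighbors per iteration.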