On Maintaining Linear Convergence of Distributed Learning and Optimization Under Limited Communication
In distributed optimization and machine learning, multiple nodes coordinate to solve large problems. To do this, the nodes need to compress important algorithm information into bits so that it can be communicated over a digital channel. The communication time of these algorithms is governed by a complex interplay between a) the algorithm's …