Blake Woodworth

All published works
Gradient Descent Converges Linearly to Flatter Minima than Gradient Flow in Shallow Linear Networks (2025). Pierfrancesco Beneventano, Blake Woodworth.
Two Losses Are Better Than One: Faster Optimization Using a Cheaper Proxy (2023). Blake Woodworth, Konstantin Mishchenko, Francis Bach.
Lower Bounds for Non-Convex Stochastic Optimization (2022). Yossi Arjevani, Yair Carmon, John C. Duchi, Dylan J. Foster, Nathan Srebro, Blake Woodworth.
Non-Convex Optimization with Certificates and Fast Rates Through Kernel Sums of Squares (2022). Blake Woodworth, Francis Bach, Alessandro Rudi.
Asynchronous SGD Beats Minibatch SGD Under Arbitrary Delays (2022). Konstantin Mishchenko, Francis Bach, Mathieu Even, Blake Woodworth.
A Stochastic Newton Algorithm for Distributed Convex Optimization (2021). Brian Bullins, Kumar Kshitij Patel, Ohad Shamir, Nathan Srebro, Blake Woodworth.
An Even More Optimal Stochastic Optimization Algorithm: Minibatching and Interpolation Learning (2021). Blake Woodworth, Nathan Srebro.
The Minimax Complexity of Distributed Optimization (2021). Blake Woodworth.
The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication (2021). Blake Woodworth, Brian Bullins, Ohad Shamir, Nathan Srebro.
On the Implicit Bias of Initialization Shape: Beyond Infinitesimal Mirror Descent (2021). Shahar Azulay, Edward Moroshko, Mor Shpigel Nacson, Blake Woodworth, Nathan Srebro, Amir Globerson, Daniel Soudry.
A Field Guide to Federated Optimization (2021). Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H. Brendan McMahan, Blaise Agüera y Arcas, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, et al.
Implicit Bias in Deep Linear Classification: Initialization Scale vs Training Accuracy (2020). Edward Moroshko, Suriya Gunasekar, Blake Woodworth, Jason D. Lee, Nathan Srebro, Daniel Soudry.
Guaranteed Validity for Empirical Approaches to Adaptive Data Analysis (2020). Ryan Rogers, Aaron Roth, Adam Smith, Nathan Srebro, Om Thakkar, Blake Woodworth.
The Gradient Complexity of Linear Regression (2020). Mark Braverman, Elad Hazan, Max Simchowitz, Blake Woodworth.
Minibatch vs Local SGD for Heterogeneous Distributed Learning (2020). Blake Woodworth, Kumar Kshitij Patel, Nathan Srebro.
Mirrorless Mirror Descent: A More Natural Discretization of Riemannian Gradient Flow (2020). Suriya Gunasekar, Blake Woodworth, Nathan Srebro.
Mirrorless Mirror Descent: A Natural Derivation of Mirror Descent (2020). Suriya Gunasekar, Blake Woodworth, Nathan Srebro.
Kernel and Rich Regimes in Overparametrized Models (2020). Blake Woodworth, Suriya Gunasekar, Jason D. Lee, Edward Moroshko, Pedro Savarese, Itay Golan, Daniel Soudry, Nathan Srebro.
Is Local SGD Better than Minibatch SGD? (2020). Blake Woodworth, Kumar Kshitij Patel, Sebastian U. Stich, Zhen Dai, Brian Bullins, H. Brendan McMahan, Ohad Shamir, Nathan Srebro.
Open Problem: The Oracle Complexity of Convex Optimization with Limited Memory (2019). Blake Woodworth, Nathan Srebro.
The Complexity of Making the Gradient Small in Stochastic Convex Optimization (2019). Dylan J. Foster, Ayush Sekhari, Ohad Shamir, Nathan Srebro, Karthik Sridharan, Blake Woodworth.
Guaranteed Validity for Empirical Approaches to Adaptive Data Analysis (2019). Ryan Rogers, Aaron Roth, Adam Smith, Nathan Srebro, Om Thakkar, Blake Woodworth.
The Gradient Complexity of Linear Regression (2019). Mark Braverman, Elad Hazan, Max Simchowitz, Blake Woodworth.
Lower Bounds for Non-Convex Stochastic Optimization (2019). Yossi Arjevani, Yair Carmon, John C. Duchi, Dylan J. Foster, Nathan Srebro, Blake Woodworth.
Kernel and Rich Regimes in Overparametrized Models (2019). Blake Woodworth, Suriya Gunasekar, Pedro Savarese, Edward Moroshko, Itay Golan, Jason Lee, Daniel Soudry, Nathan Srebro.
Graph Oracle Models, Lower Bounds, and Gaps for Parallel Stochastic Optimization (2018). Blake Woodworth, Jialei Wang, Adam Smith, Brendan McMahan, Nathan Srebro.
Training Well-Generalizing Classifiers for Fairness Metrics and Other Data-Dependent Constraints (2018). Andrew Cotter, Maya R. Gupta, Heinrich Jiang, Nathan Srebro, Karthik Sridharan, Serena Wang, Blake Woodworth, Seungil You.
Implicit Regularization in Matrix Factorization (2018). Suriya Gunasekar, Blake Woodworth, Srinadh Bhojanapalli, Behnam Neyshabur, Nathan Srebro.
The Everlasting Database: Statistical Validity at a Fair Price (2018). Blake Woodworth, Vitaly Feldman, Saharon Rosset, Nathan Srebro.
Lower Bound for Randomized First Order Convex Optimization (2017). Blake Woodworth, Nathan Srebro.
Implicit Regularization in Matrix Factorization (2017). Suriya Gunasekar, Blake Woodworth, Srinadh Bhojanapalli, Behnam Neyshabur, Nathan Srebro.
Learning Non-Discriminatory Predictors (2017). Blake Woodworth, Suriya Gunasekar, Mesrob I. Ohannessian, Nathan Srebro.
Tight Complexity Bounds for Optimizing Composite Objectives (2016). Blake Woodworth, Nathan Srebro.
Commonly Cited References
Lower Bound for Randomized First Order Convex Optimization (2017). Blake Woodworth, Nathan Srebro. Referenced 6 times.
Lower Bounds for Finding Stationary Points I (2019). Yair Carmon, John C. Duchi, Oliver Hinder, Aaron Sidford. Referenced 6 times.
Introductory Lectures on Convex Optimization: A Basic Course (2014). Yu. E. Nesterov. Referenced 5 times.
Cooperative SGD: A Unified Framework for the Design and Analysis of Communication-Efficient SGD Algorithms (2018). Jianyu Wang, Gauri Joshi. Referenced 5 times.
Unified Optimal Analysis of the (Stochastic) Gradient Method (2019). Sebastian U. Stich. Referenced 4 times.
Local SGD Converges Fast and Communicates Little (2018). Sebastian U. Stich. Referenced 4 times.
Characterizing Implicit Bias in Terms of Optimization Geometry (2018). Suriya Gunasekar, Jason D. Lee, Daniel Soudry, Nathan Srebro. Referenced 4 times.
On Lazy Training in Differentiable Programming (2018). Lénaïc Chizat, Edouard Oyallon, Francis Bach. Referenced 4 times.
On the Convergence Properties of a K-step Averaging Stochastic Gradient Descent Algorithm for Nonconvex Optimization (2018). Fan Zhou, Guojing Cong. Referenced 4 times.
Neural Tangent Kernel: Convergence and Generalization in Neural Networks (2018). Arthur Jacot, Franck Gabriel, Clément Hongler. Referenced 4 times.
On the Randomized Complexity of Minimizing a Convex Quadratic Function (2018). Max Simchowitz. Referenced 4 times.
Optimization Methods for Large-Scale Machine Learning (2018). Léon Bottou, Frank E. Curtis, Jorge Nocedal. Referenced 4 times.
Parallel SGD: When Does Averaging Help? (2016). Jian Zhang, Christopher De Sa, Ioannis Mitliagkas, Christopher Ré. Referenced 4 times.
In Search of the Real Inductive Bias: On the Role of Implicit Regularization in Deep Learning (2014). Behnam Neyshabur, Ryota Tomioka, Nathan Srebro. Referenced 4 times.
Better Communication Complexity for Local SGD (2019). Ahmed Khaled, Konstantin Mishchenko, Peter Richtárik. Referenced 4 times.
Minibatch vs Local SGD for Heterogeneous Distributed Learning (2020). Blake Woodworth, Kumar Kshitij Patel, Nathan Srebro. Referenced 4 times.
Parallelized Stochastic Gradient Descent (2010). Martin Zinkevich, Markus Weimer, Lihong Li, Alex Smola. Referenced 4 times.
Graph Oracle Models, Lower Bounds, and Gaps for Parallel Stochastic Optimization (2018). Blake Woodworth, Jialei Wang, Adam Smith, Brendan McMahan, Nathan Srebro. Referenced 3 times.
Cubic Regularization of Newton Method and Its Global Performance (2006). Yurii Nesterov, B. T. Polyak. Referenced 3 times.
Federated Accelerated Stochastic Gradient Descent (2020). Honglin Yuan, Tengyu Ma. Referenced 3 times.
Parallel Restarted SGD with Faster Convergence and Less Communication: Demystifying Why Model Averaging Works for Deep Learning (2019). Hao Yu, Sen Yang, Shenghuo Zhu. Referenced 3 times.
Divide and Conquer Kernel Ridge Regression (2013). Yuchen Zhang, John C. Duchi, Martin J. Wainwright. Referenced 3 times.
An Elementary Introduction to Modern Convex Geometry (1997). Keith Ball. Referenced 3 times.
Don't Use Large Mini-Batches, Use Local SGD (2018). Tao Lin, Sebastian U. Stich, Kumar Kshitij Patel, Martin Jaggi. Referenced 3 times.
Implicit Regularization in Matrix Factorization (2017). Suriya Gunasekar, Blake Woodworth, Srinadh Bhojanapalli, Behnam Neyshabur, Nathan Srebro. Referenced 3 times.
A Convergence Theory for Deep Learning via Over-Parameterization (2018). Zeyuan Allen-Zhu, Yuanzhi Li, Zhao Song. Referenced 3 times.
Tighter Theory for Local SGD on Identical and Heterogeneous Data (2020). Ahmed Khaled, Konstantin Mishchenko, Peter Richtárik. Referenced 3 times.
Information-Theoretic Lower Bounds for Distributed Statistical Estimation with Communication Constraints (2014). John C. Duchi, Michael I. Jordan, Martin J. Wainwright, Yuchen Zhang. Referenced 3 times.
The Error-Feedback Framework: Better Rates for SGD with Delayed Gradients and Compressed Communication (2019). Sebastian U. Stich, Sai Praneeth Karimireddy. Referenced 3 times.
Local SGD with Periodic Averaging: Tighter Analysis and Adaptive Synchronization (2019). Farzin Haddadpour, Mohammad Mahdi Kamani, Mehrdad Mahdavi, Viveck R. Cadambe. Referenced 3 times.
Lower Bounds for Non-Convex Stochastic Optimization (2022). Yossi Arjevani, Yair Carmon, John C. Duchi, Dylan J. Foster, Nathan Srebro, Blake Woodworth. Referenced 3 times.
Algorithmic Regularization in Over-parameterized Matrix Sensing and Neural Networks with Quadratic Activations (2017). Yuanzhi Li, Tengyu Ma, Hongyang Zhang. Referenced 3 times.
Implicit Bias of Gradient Descent on Linear Convolutional Networks (2018). Suriya Gunasekar, Jason D. Lee, Daniel Soudry, Nathan Srebro. Referenced 3 times.
Gradient Descent Provably Optimizes Over-parameterized Neural Networks (2018). Simon S. Du, Xiyu Zhai, Barnabás Póczos, Aarti Singh. Referenced 3 times.
SCAFFOLD: Stochastic Controlled Averaging for Federated Learning (2019). Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank J. Reddi, Sebastian U. Stich, Ananda Theertha Suresh. Referenced 3 times.
Convex Analysis and Monotone Operator Theory in Hilbert Spaces (2017). Heinz H. Bauschke, Patrick L. Combettes. Referenced 3 times.
Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming (2013). Saeed Ghadimi, Guanghui Lan. Referenced 3 times.
SPIDER: Near-Optimal Non-Convex Optimization via Stochastic Path-Integrated Differential Estimator (2018). Cong Fang, Chris Junchi Li, Zhouchen Lin, Tong Zhang. Referenced 3 times.
AIDE: Fast and Communication Efficient Distributed Optimization (2016). Sashank J. Reddi, Jakub Konečný, Peter Richtárik, Barnabás Póczos, Alexander J. Smola. Referenced 3 times.
Communication Lower Bounds for Statistical Estimation Problems via a Distributed Data Processing Inequality (2016). Mark Braverman, Ankit Garg, Tengyu Ma, Huy L. Nguyễn, David P. Woodruff. Referenced 3 times.
Geometry of Optimization and Implicit Regularization in Deep Learning (2017). Behnam Neyshabur, Ryota Tomioka, Ruslan Salakhutdinov, Nathan Srebro. Referenced 3 times.
Non-convex Finite-Sum Optimization via SCSG Methods (2017). Lihua Lei, Cheng Ju, Jianbo Chen, Michael I. Jordan. Referenced 3 times.
DiSCO: Distributed Optimization for Self-Concordant Empirical Loss (2015). Yuchen Zhang, Lin Xiao. Referenced 3 times.
Gradient Descent Maximizes the Margin of Homogeneous Neural Networks (2019). Kaifeng Lyu, Jian Li. Referenced 3 times.
Information-Theoretic Lower Bounds for Distributed Statistical Estimation with Communication Constraints (2013). Yuchen Zhang, John C. Duchi, Michael I. Jordan, Martin J. Wainwright. Referenced 3 times.
Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization (2010). Benjamin Recht, Maryam Fazel, Pablo A. Parrilo. Referenced 3 times.
Is Local SGD Better than Minibatch SGD? (2020). Blake Woodworth, Kumar Kshitij Patel, Sebastian U. Stich, Zhen Dai, Brian Bullins, H. Brendan McMahan, Ohad Shamir, Nathan Srebro. Referenced 3 times.
Gradient Methods for Minimizing Composite Functions (2012). Yu. Nesterov. Referenced 2 times.
On the Universality of Online Mirror Descent (2011). Nathan Srebro, Karthik Sridharan, Ambuj Tewari. Referenced 2 times.
Mirror Descent and Nonlinear Projected Subgradient Methods for Convex Optimization (2003). Amir Beck, Marc Teboulle. Referenced 2 times.