The Complexity of Making the Gradient Small in Stochastic Convex Optimization

Type: Preprint

Publication Date: 2019-01-01

Citations: 4

DOI: https://doi.org/10.48550/arxiv.1902.04686

Locations

  • arXiv (Cornell University)
  • DataCite API

Similar Works

  • The Complexity of Making the Gradient Small in Stochastic Convex Optimization (2019) - Dylan J. Foster, Ayush Sekhari, Ohad Shamir, Nathan Srebro, Karthik Sridharan, Blake Woodworth
  • Near-Optimal High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise (2021) - Eduard Gorbunov, Marina Danilova, Innokentiy Shibaev, Pavel Dvurechensky, Alexander Gasnikov
  • Lower Bounds for Non-Convex Stochastic Optimization (2019) - Yossi Arjevani, Yair Carmon, John C. Duchi, Dylan J. Foster, Nathan Srebro, Blake Woodworth
  • Exploiting Smoothness in Statistical Learning, Sequential Prediction, and Stochastic Optimization (2014) - Mehrdad Mahdavi
  • Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n) (2013) - Francis Bach, Éric Moulines
  • On the Complexity of Finding Small Subgradients in Nonsmooth Optimization (2022) - Guy Kornowski, Ohad Shamir
  • Asymptotic Optimality in Stochastic Optimization (2016) - John C. Duchi, Feng Ruan
  • Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations (2020) - Yossi Arjevani, Yair Carmon, John C. Duchi, Dylan J. Foster, Ayush Sekhari, Karthik Sridharan
  • Beyond Uniform Smoothness: A Stopped Analysis of Adaptive SGD (2023) - Matthew Faw, Litu Rout, Constantine Caramanis, Sanjay Shakkottai
  • Anytime Online-to-Batch Conversions, Optimism, and Acceleration (2019) - Ashok Cutkosky
  • ROOT-SGD: Sharp Nonasymptotics and Asymptotic Efficiency in a Single Algorithm (2020) - Chris Junchi Li, Wenlong Mou, Martin J. Wainwright, Michael I. Jordan
  • An Analysis of Constant Step Size SGD in the Non-convex Regime: Asymptotic Normality and Bias (2021) - Lu Yu, Krishnakumar Balasubramanian, Stanislav Volgushev, Murat A. Erdogdu
  • Sharp Analysis of Stochastic Optimization under Global Kurdyka-Łojasiewicz Inequality (2022) - Ilyas Fatkhullin, Jalal Etesami, Niao He, Negar Kiyavash
  • Never Go Full Batch (in Stochastic Convex Optimization) (2021) - Idan Amir, Yair Carmon, Tomer Koren, Roi Livni
  • Improved Learning Rates for Stochastic Optimization: Two Theoretical Viewpoints (2021) - Shaojie Li, Yong Liu

Works Cited by This (0)
