DTN: A Learning Rate Scheme with Convergence Rate of $\mathcal{O}(1/t)$ for SGD
Lam M. Nguyen, Phuong Ha Nguyen, Dzung T. Phan, Jayant Kalagnanam, Marten van Dijk
Type: Preprint
Publication Date: 2019-01-22
Citations: 0
Locations
arXiv (Cornell University)
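The work recorded on this page proposes a learning-rate schedule for SGD with an $\mathcal{O}(1/t)$ convergence rate. For orientation only, the sketch below runs plain SGD on a least-squares problem with a generic diminishing step size $\eta_t = \eta_0/(1+\lambda t)$; the objective, the data, and the hyperparameters eta0 and lam are hypothetical illustrations, and this standard schedule is not the DTN scheme itself.

# Minimal sketch, for orientation only: plain SGD on a least-squares objective
# with a generic diminishing step size eta_t = eta0 / (1 + lam * t).
# This is NOT the DTN scheme from the paper; data and hyperparameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 20))                 # hypothetical data matrix
x_true = rng.normal(size=20)
b = A @ x_true + 0.1 * rng.normal(size=1000)    # noisy targets

x = np.zeros(20)
eta0, lam = 0.1, 0.01                           # hypothetical hyperparameters
for t in range(10000):
    i = rng.integers(len(b))                    # sample one example uniformly
    g = (A[i] @ x - b[i]) * A[i]                # gradient of 0.5 * (A[i] @ x - b[i])**2
    eta_t = eta0 / (1.0 + lam * t)              # diminishing step size
    x -= eta_t * g

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

Under the usual strong-convexity and bounded-variance assumptions, a step size decaying like $c/t$ (with a suitable constant) yields an expected error of order $1/t$; the paper's scheme and constants differ from this generic illustration.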
Similar Works
DTN: A Learning Rate Scheme with Convergence Rate of $\mathcal{O}(1/t)$ for SGD (2019) - Lam M. Nguyen, Phuong Ha Nguyen, Dzung T. Phan, Jayant Kalagnanam, Marten van Dijk
A strong convergence theorem for $H^1(\mathbb{T})$ (1983) - Brent Smith
Rate of Convergence in Trotter's Approximation Theorem (2008) - Michele Campiti, Cristian Tacelli
Convergence rates and approximation results for SGD and its continuous-time counterpart (2020) - Xavier Fontaine, Valentin De Bortoli, Alain Durmus
On the rate of convergence of 2-term recursions in $\mathbb{R}^d$ (1997) - Luíz Alberto Oliveira Rocha
Lower bound for the rate of convergence in the CLT in a Hilbert space (1990) - Mindaugas Bloznelis
The sampling theorem, LqT-approximation and ϵ-dimension (1992) - Ðinh Dũng
Truncation Error for L.F.T. Algorithms $\{T_n(w)\}$ (2020) - W. J. Thron
A high-dimensional CLT in $\mathcal{W}_2$ distance with near optimal convergence rate (2017) - Alex Zhai
Optimal rate of convergence for sequences of a prescribed form (2013) - Ioan Gavrea, Mircea Ivan
New Approximating Results by Weak Convergence of Forked Sequences (2024) - Bareq Baqi Salman, Salwa Salman Abed
The convergence of the Stochastic Gradient Descent (SGD): a self-contained proof (2021) - Gabriel Turinici
Convergence rate of a penalty-function scheme (1971) - David G. Luenberger
Nonuniform estimate of the rate of convergence in the CLT with stable limit distribution (1989) - V. Bentkus, Mindaugas Bloznelis
Review on Generalized Convergence in Different Spaces (2024) - Gursimran Kaur, Meenakshi Meenakshi
A survey on the high convergence orders and computational convergence orders of sequences (2018) - Emil Cătinaş
Generalization of the Dvoretzky theorem of convergence rate of the stochastic approximation algorithms (2016) - T. P. Krasulina
A remark on a recent paper on the convergence of “amrts” (1984) - Somnath Chatterji
Weak Convergence in $M_1(\mathbb{R}^d)$ (2022) - Karl Stromberg, Kuppusamy Ravindran
Works That Cite This (0)

Works Cited by This (24)
Semi-Stochastic Gradient Descent Methods (2013) - Jakub Konečný, Peter Richtárik
Fast Convergence of Stochastic Gradient Descent under a Strong Growth Condition (2013) - Mark Schmidt, Nicolas Le Roux
Robust Stochastic Approximation Approach to Stochastic Programming (2009) - Arkadi Nemirovski, Anatoli Juditsky, Guanghui Lan, Alexander Shapiro
Principles of mathematical analysis (1964) - Walter Rudin
Optimization with First-Order Surrogate Functions (2013) - Julien Mairal
Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Survey (2011) - Dimitri P. Bertsekas
Stochastic Recursive Gradient Algorithm for Nonconvex Optimization (2017) - Lam M. Nguyen, Jie Liu, Katya Scheinberg, Martin Takáč
Natasha: Faster Non-Convex Stochastic Optimization Via Strongly Non-Convex Parameter (2017) - Zeyuan Allen-Zhu
Theoretical properties of the global optimizer of two layer neural network (2017) - Digvijay Boob, Guanghui Lan
When Does Stochastic Gradient Algorithm Work Well? (2018) - Lam M. Nguyen, Nam Nguyen, Dzung T. Phan, Jayant Kalagnanam, Katya Scheinberg