Finite-Time Analysis of Stochastic Gradient Descent under Markov Randomness

Type: Preprint

Publication Date: 2020-01-01

Citations: 11

DOI: https://doi.org/10.48550/arxiv.2003.10973

Locations

  • arXiv (Cornell University)
  • DataCite API

Similar Works

  • Finite-Time Analysis of Markov Gradient Descent (2022) - Thinh T. Doan
  • Convergence Rates of Accelerated Markov Gradient Descent with Applications in Reinforcement Learning (2020) - Thinh T. Doan, Lam M. Nguyen, Nhan H. Pham, Justin Romberg
  • Stochastic Gradient Descent under Markovian Sampling Schemes (2023) - Mathieu Even
  • Stochastic Variance-Reduced Policy Gradient (2018) - Matteo Papini, Damiano Binaghi, Giuseppe Canonaco, Matteo Pirotta, Marcello Restelli
  • Stochastic First-Order Methods for Average-Reward Markov Decision Processes (2024) - Tianjiao Li, Feiyang Wu, Guanghui Lan
  • Online covariance estimation for stochastic gradient descent under Markovian sampling (2023) - Abhishek Roy, Krishnakumar Balasubramanian
  • Strongly-Polynomial Time and Validation Analysis of Policy Gradient Methods (2024) - Caleb Ju, Guanghui Lan
  • Stochastic first-order methods for average-reward Markov decision processes (2022) - Tianjiao Li, Feiyang Wu, Guanghui Lan
  • A Variational Analysis of Stochastic Gradient Algorithms (2016) - Stephan Mandt, Matthew D. Hoffman, David M. Blei
  • Non-asymptotic bounds for stochastic optimization with biased noisy gradient oracles (2020) - Nirav Bhavsar, Prashanth L. A
  • An Improved Convergence Analysis of Stochastic Variance-Reduced Policy Gradient (2019) - Pan Xu, Felicia Gao, Quanquan Gu
  • Non-asymptotic Analysis of Biased Stochastic Approximation Scheme (2019) - Belhal Karimi, Błażej Miasojedow, Éric Moulines, Hoi-To Wai
  • Nonasymptotic Bounds for Stochastic Optimization With Biased Noisy Gradient Oracles (2022) - Nirav Bhavsar, L. A. Prashanth
  • The ODE Method for Stochastic Approximation and Reinforcement Learning with Markovian Noise (2024) - Shuze Liu, Shuhang Chen, Shangtong Zhang
  • Understanding the Effect of Stochasticity in Policy Optimization (2021) - Jincheng Mei, Bo Dai, Chenjun Xiao, Csaba Szepesvári, Dale Schuurmans
  • Optimality and Approximation with Policy Gradient Methods in Markov Decision Processes (2019) - Alekh Agarwal, Sham M. Kakade, Jason D. Lee, Gaurav Mahajan