Online convex optimization in the bandit setting: gradient descent without a gradient

Type: Preprint

Publication Date: 2004-01-01

Citations: 12

DOI: https://doi.org/10.48550/arxiv.cs/0408007

Locations

  • arXiv (Cornell University)
  • DataCite API
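The technique named in the title, running gradient descent when only single function evaluations (bandit feedback) are available, is commonly realized with a one-point gradient estimator: perturb the query point along a random unit direction and scale the observed value. The sketch below is a minimal illustration of that idea, not the paper's exact algorithm; the function names, step sizes, and the omission of the projection step are my own simplifications.

```python
import numpy as np

def one_point_gradient(f, x, delta, rng):
    """Estimate the gradient of f at x from a single evaluation.

    Samples u uniformly from the unit sphere and returns
    (d / delta) * f(x + delta * u) * u, which in expectation equals
    the gradient of a delta-smoothed version of f.
    """
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)          # uniform direction on the sphere
    d = x.size
    return (d / delta) * f(x + delta * u) * u

def bandit_gradient_descent(f, x0, eta, delta, steps, rng=None):
    """Gradient descent using only one function value per round.

    Projection onto a feasible set (required in the constrained
    setting) is omitted for brevity.
    """
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = one_point_gradient(f, x, delta, rng)
        x = x - eta * g
    return x
```

For a linear function the estimator is unbiased, so averaging many one-point estimates recovers the true gradient; for general convex f it tracks the gradient of a smoothed surrogate, which is what drives the regret analysis in this line of work.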

Similar Works

  • Minimizing Regret in Bandit Online Optimization in Unconstrained and Constrained Action Spaces (2018). Tatiana Tatarenko, Maryam Kamgarpour
  • (Bandit) Convex Optimization with Biased Noisy Gradient Oracles (2016). Xiaowei Hu, L. A. Prashanth, András György, Csaba Szepesvári
  • Comparator-adaptive Convex Bandits (2020). Dirk van der Hoeven, Ashok Cutkosky, Haipeng Luo
  • Minimizing Regret of Bandit Online Optimization in Unconstrained Action Spaces (2018). Tatiana Tatarenko, Maryam Kamgarpour
  • A Modern Introduction to Online Learning (2019). Francesco Orabona
  • Bandit Optimization with Upper-Confidence Frank-Wolfe (2017). Quentin Berthet, Vianney Perchet
  • Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization (2021). Aadirupa Saha, Nagarajan Natarajan, Praneeth Netrapalli, Prateek Jain
  • Trading Regret for Efficiency: Online Convex Optimization with Long Term Constraints (2011). Mehrdad Mahdavi, Rong Jin, Tianbao Yang
  • On the Complexity of Bandit Linear Optimization (2014). Ohad Shamir
  • An optimal algorithm for bandit convex optimization (2016). Elad Hazan, Yuanzhi Li
  • Bandit Convex Optimisation Revisited: FTRL Achieves $\tilde{O}(t^{1/2})$ Regret (2023). D. R. Young, Douglas J. Leith, George Iosifidis
  • Second Order Methods for Bandit Optimization and Control (2024). Arun Sai Suggala, Y. Jennifer Sun, Praneeth Netrapalli, Elad Hazan
  • Fast Rates for Bandit Optimization with Upper-Confidence Frank-Wolfe (2017). Quentin Berthet, Vianney Perchet
  • Regret in Online Combinatorial Optimization (2012). Jean-Yves Audibert, Sébastien Bubeck, Gábor Lugosi