Mostly Exploration-Free Algorithms for Contextual Bandits
Hamsa Bastani, Mohsen Bayati, Khashayar Khosravi
Type: Preprint
Publication Date: 2017-04-28
Citations: 10
Location: arXiv (Cornell University)
Similar Works
- Mostly Exploration-Free Algorithms for Contextual Bandits (2017). Hamsa Bastani, Mohsen Bayati, Khashayar Khosravi
- Mostly Exploration-Free Algorithms for Contextual Bandits (2020). Hamsa Bastani, Mohsen Bayati, Khashayar Khosravi
- Exploiting the Natural Exploration in Contextual Bandits (2017). Hamsa Bastani, Mohsen Bayati, Khashayar Khosravi
- The Unreasonable Effectiveness of Greedy Algorithms in Multi-Armed Bandit with Many Arms (2020). Mohsen Bayati, Nima Hamidi, Ramesh Johari, Khashayar Khosravi
- Truthful mechanisms for linear bandit games with private contexts (2025). Hu Yiting, Lingjie Duan
- Thompson Sampling in Partially Observable Contextual Bandits (2024). Hongju Park, Mohamad Kazem Shirani Faradonbeh
- Adaptive Exploration in Linear Contextual Bandit (2019). Botao Hao, Tor Lattimore, Csaba Szepesvári
- Squeeze All: Novel Estimator and Self-Normalized Bound for Linear Contextual Bandits (2022). Wonyoung Kim, Min-hwan Oh, Myunghee Cho Paik
- Forced Exploration in Bandit Problems (2024). Qi Han, Li Zhu, Fei Guo
- Forced Exploration in Bandit Problems (2023). Han Qi, Fei Guo, Li Zhu
- Robustly Improving Bandit Algorithms with Confounded and Selection Biased Offline Data: A Causal Approach (2024). Wen Huang, Xintao Wu
- Robustly Improving Bandit Algorithms with Confounded and Selection Biased Offline Data: A Causal Approach (2023). Wen Huang, Xintao Wu
- Only Pay for What Is Uncertain: Variance-Adaptive Thompson Sampling (2023). Aadirupa Saha, Branislav Kveton
- Bayesian bandits: balancing the exploration-exploitation tradeoff via double sampling (2017). Iñigo Urteaga, Chris H. Wiggins
- The Role of Contextual Information in Best Arm Identification (2021). Masahiro Kato, Kaito Ariu
- CORe: Capitalizing On Rewards in Bandit Exploration (2021). Nan Wang, Branislav Kveton, Maryam Karimzadehgan
- Effects of Model Misspecification on Bayesian Bandits: Case Studies in UX Optimization (2020). Mack Sweeney, Matthew van Adelsberg, Kathryn Blackmond Laskey, Carlotta Domeniconi
Works That Cite This (10)
- Dynamic Batch Learning in High-Dimensional Sparse Linear Contextual Bandits (2020). Zhimei Ren, Zhengyuan Zhou
- Model selection for contextual bandits (2019). Dylan J. Foster, Akshay Krishnamurthy, Haipeng Luo
- OSOM: A simultaneously optimal algorithm for multi-armed and linear contextual bandits (2019). Niladri S. Chatterji, V. Sai Muthukumar, Peter L. Bartlett
- Diversity and Exploration in Social Learning (2019). Nicole Immorlica, Jieming Mao, Christos Tzamos
- Input perturbations for adaptive control and learning (2020). Mohamad Kazem Shirani Faradonbeh, Ambuj Tewari, George Michailidis
- Rarely-switching linear bandits: optimization of causal effects for the real world (2019). Benjamin Lansdell, Sofia Triantafillou, Konrad P. Körding
- Bayesian Exploration with Heterogeneous Agents (2019). Nicole Immorlica, Jieming Mao, Aleksandrs Slivkins, Zhiwei Steven Wu
- Lifelong Learning in Multi-Armed Bandits (2020). Matthieu Jedor, Jonathan Louëdec, Vianney Perchet
- Incentivising Exploration and Recommendations for Contextual Bandits with Payments (2020). Priyank Agrawal, Theja Tulabandhula
- Outdoor mmWave Base Station Placement: A Multi-Armed Bandit Learning Approach (2020). Fatih Erden, Chethan Kumar Anjinappa, Ender Öztürk, İsmail Güvenç
Works Cited by This (19)
- Estimation of the Warfarin Dose with Clinical and Pharmacogenetic Data (2009). Teri E. Klein, R. B. Altman, Niclas Eriksson, B. F. Gage, S. E. Kimmel, M. T. Lee, N. A. Limdi, David Page, D. M. Roden, M. J. Wagner
- User-Friendly Tail Bounds for Matrix Martingales (2011). Joel A. Tropp
- One-Armed Bandit Problems with Covariates (1991). Jyotirmoy Sarkar
- On the Likelihood That One Unknown Probability Exceeds Another in View of the Evidence of Two Samples (1933). W. R. Thompson
- Optimal aggregation of classifiers in statistical learning (2004). A. B. Tsybakov
- Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems (2012). Sébastien Bubeck, Nicolò Cesa-Bianchi
- Strong consistency of maximum quasi-likelihood estimators in generalized linear models with fixed and adaptive designs (1999). Kani Chen, Inchi Hu, Zhiliang Ying
- A contextual-bandit approach to personalized news article recommendation (2010). Lihong Li, Wei Chu, John Langford, Robert E. Schapire
- Bandit problems with side observations (2005). Chih-Chun Wang, Sanjeev R. Kulkarni, H. Vincent Poor
- Learning to Optimize via Posterior Sampling (2014). Daniel Russo, Benjamin Van Roy