Provable Fictitious Play for General Mean-Field Games

Type: Preprint

Publication Date: 2020-10-08

Citations: 0

Abstract

We propose a reinforcement learning algorithm for stationary mean-field games, where the goal is to learn a pair of mean-field state and stationary policy that constitutes a Nash equilibrium. Viewing the mean-field state and the policy as two players, we propose a fictitious play algorithm that alternately updates the mean-field state via gradient descent and the policy via proximal policy optimization. Our algorithm stands in stark contrast with the previous literature, which solves each single-agent reinforcement learning problem induced by the iterate mean-field states to optimality. Furthermore, we prove that our fictitious play algorithm converges to the Nash equilibrium at a sublinear rate. To the best of our knowledge, this appears to be the first provably convergent single-loop reinforcement learning algorithm for mean-field games based on iterative updates of both the mean-field state and the policy.
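
To make the alternating update concrete, below is a minimal sketch of such a single-loop fictitious play iteration on a toy finite-state mean-field game. Everything here is an illustrative assumption rather than the paper's exact construction: the random transition kernel P, the congestion-style reward, the step sizes, and the multiplicative (mirror-ascent) policy step, which stands in for proximal policy optimization.

```python
import numpy as np

# Hedged sketch of single-loop fictitious play for a stationary mean-field game.
# The MDP below (P, reward) and all hyperparameters are toy assumptions.

rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.9                          # states, actions, discount factor

P = rng.dirichlet(np.ones(S), size=(S, A))       # P[s, a] is a distribution over next states
base_r = rng.standard_normal((S, A))

def reward(mu):
    # Agents are penalized for crowding: reward drops in congested states.
    return base_r - mu[:, None]

def q_values(pi, mu, iters=200):
    # Policy evaluation: Q^pi for the single-agent MDP induced by freezing mu.
    r = reward(mu)
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = (pi * Q).sum(axis=1)                 # V(s) = E_{a ~ pi(.|s)}[Q(s, a)]
        Q = r + gamma * P @ V
    return Q

def next_mu(pi, mu):
    # One step of the population flow: mu'(s') = sum_{s,a} mu(s) pi(a|s) P(s'|s,a).
    return np.einsum("s,sa,sat->t", mu, pi, P)

pi = np.full((S, A), 1.0 / A)                    # uniform initial policy
mu = np.full(S, 1.0 / S)                         # uniform initial mean-field state

for t in range(500):
    eta, alpha = 0.1, 2.0 / (t + 2)              # policy and mean-field step sizes
    Q = q_values(pi, mu)
    # Proximal policy step (a simple stand-in for PPO): a small multiplicative
    # update toward greedy on Q, NOT a full best response to the current mu.
    pi = pi * np.exp(eta * Q)
    pi /= pi.sum(axis=1, keepdims=True)
    # Fictitious-play averaging of the mean-field state toward the induced flow.
    mu = (1 - alpha) * mu + alpha * next_mu(pi, mu)

print("stationarity gap:", np.abs(mu - next_mu(pi, mu)).max())
```

The loop is single-loop in the sense of the abstract: each iteration takes one small proximal policy step rather than solving the induced single-agent problem to optimality, and the mean-field state is nudged toward the flow induced by the current policy with a decaying averaging weight.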

Locations

  • arXiv (Cornell University)

Similar Works

  • Provable Fictitious Play for General Mean-Field Games (2020) - Qiaomin Xie, Zhuoran Yang, Zhaoran Wang, Andreea Minca
  • Approximate Fictitious Play for Mean Field Games (2019) - Romuald Élie, Julien Pérolat, Mathieu Laurière, Matthieu Geist, Olivier Pietquin
  • Actor-Critic Provably Finds Nash Equilibria of Linear-Quadratic Mean-Field Games (2019) - Zuyue Fu, Zhuoran Yang, Yongxin Chen, Zhaoran Wang
  • Reinforcement Learning for Mean-Field Game (2022) - Mridul Agarwal, Vaneet Aggarwal, Arnob Ghosh, Nilay Tiwari
  • When is Mean-Field Reinforcement Learning Tractable and Relevant? (2024) - Batuhan Yardim, Artur Goldman, Niao He
  • A Single Online Agent Can Efficiently Learn Mean Field Games (2024) - Chenyu Zhang, Xu Chen, Xuan Di
  • Oracle-free Reinforcement Learning in Mean-Field Games along a Single Sample Path (2022) - Muhammad Aneeq uz Zaman, Alec Koppel, Sujay Bhatt, Tamer Başar
  • On the Convergence of Model Free Learning in Mean Field Games (2020) - Romuald Élie, Julien Pérolat, Mathieu Laurière, Matthieu Geist, Olivier Pietquin
  • On the Convergence of Model Free Learning in Mean Field Games (2019) - Romuald Élie, Julien Pérolat, Mathieu Laurière, Matthieu Geist, Olivier Pietquin
  • A Single-Loop Finite-Time Convergent Policy Optimization Algorithm for Mean Field Games (and Average-Reward Markov Decision Processes) (2024) - Sihan Zeng, Sujay Bhatt, Alec Koppel, Sumitra Ganesh
  • Reinforcement Learning for Mean Field Game (2019) - Nilay Tiwari, Arnob Ghosh, Vaneet Aggarwal
  • Reinforcement Learning for Mean Field Game (2019) - Mridul Agarwal, Vaneet Aggarwal, Arnob Ghosh, Nilay Tiwari
  • A General Framework for Learning Mean-Field Games (2022) - Xin Guo, Anran Hu, Renyuan Xu, Junzi Zhang
  • Approximately Solving Mean Field Games via Entropy-Regularized Deep Reinforcement Learning (2021) - Kai Cui, Heinz Koeppl
  • Learning Discrete-Time Major-Minor Mean Field Games (2024) - Kai Cui, Gökçe Dayanıklı, Mathieu Laurière, Matthieu Geist, Olivier Pietquin, Heinz Koeppl
  • Learning Discrete-Time Major-Minor Mean Field Games (2023) - Kai Cui, Gökçe Dayanıklı, Mathieu Laurière, Matthieu Geist, Olivier Pietquin, Heinz Koeppl
  • A General Framework for Learning Mean-Field Games (2020) - Xin Guo, Anran Hu, Renyuan Xu, Junzi Zhang

Works That Cite This (0)

None.

Works Cited by This (0)

None.