Continual Reinforcement Learning in 3D Non-stationary Environments

Type: Article

Publication Date: 2020-06-01

Citations: 31

DOI: https://doi.org/10.1109/cvprw50498.2020.00132

Abstract

High-dimensional, always-changing environments constitute a hard challenge for current reinforcement learning techniques. Artificial agents are nowadays often trained offline in very static and controlled simulated conditions, so that training observations can be thought of as sampled i.i.d. from the entire observation space. In real-world settings, however, the environment is often non-stationary and subject to unpredictable, frequent changes. In this paper we propose and openly release CRLMaze, a new benchmark for learning continually through reinforcement in a complex 3D non-stationary task based on ViZDoom and subject to several environmental changes. We then introduce an end-to-end, model-free continual reinforcement learning strategy that shows competitive results against four different baselines without requiring access to additional supervised signals, previously encountered environmental conditions, or observations.
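The abstract's contrast between i.i.d. offline training and a non-stationary observation stream can be sketched in a few lines of Python. This is a hypothetical illustration, not the paper's CRLMaze code: the phase tokens, `iid_stream`, and `nonstationary_stream` are invented names for the sketch.

```python
import random

# Hypothetical illustration (not the paper's code): contrast the i.i.d.
# training regime described in the abstract with a non-stationary stream
# whose observation distribution shifts over time, as in CRLMaze.

def iid_stream(observations, n, seed=0):
    """Sample n observations i.i.d. from the entire observation space."""
    rng = random.Random(seed)
    return [rng.choice(observations) for _ in range(n)]

def nonstationary_stream(phases, n_per_phase, seed=0):
    """Sample observations phase by phase: the agent only ever sees the
    current environmental condition, never the full mixture at once."""
    rng = random.Random(seed)
    stream = []
    for phase in phases:  # e.g. successive light/texture/object changes
        stream.extend(rng.choice(phase) for _ in range(n_per_phase))
    return stream

# Environmental conditions reduced to labelled tokens for illustration.
phases = [["light:A", "wall:A"], ["light:B", "wall:A"], ["light:B", "wall:B"]]
all_obs = [obs for phase in phases for obs in phase]

iid = iid_stream(all_obs, 30)        # any condition can appear at any time
seq = nonstationary_stream(phases, 10)  # early batches never contain late phases
```

Under the i.i.d. regime every environmental condition is visible from the first batch; under the non-stationary regime the agent must adapt as each change arrives, which is what makes catastrophic forgetting a concern for the strategies benchmarked here.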

Locations

  • arXiv (Cornell University) - View - PDF
  • Archivio istituzionale della ricerca (Alma Mater Studiorum Università di Bologna) - View - PDF
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) - View

Similar Works

Action Title Year Authors
+ Continual Reinforcement Learning in 3D Non-stationary Environments 2019 Vincenzo Lomonaco
Karan Desai
Eugenio Culurciello
Davide Maltoni
+ Flatland: a Lightweight First-Person 2-D Environment for Reinforcement Learning 2018 Hugo Caselles-Dupré
Louis Annabi
Oksana Hagen
Michael Garcia-Ortiz
David Filliat
+ Avalanche RL: a Continual Reinforcement Learning Library 2022 Nicolò Lucchesi
Antonio Carta
Vincenzo Lomonaco
Davide Bacciu
+ DIAMBRA Arena: a New Reinforcement Learning Platform for Research and Experimentation 2022 Alessandro Palmas
+ A Study of Continual Learning Methods for Q-Learning 2022 Benedikt Bagus
Alexander Gepperth
+ APES: a Python toolbox for simulating reinforcement learning environments 2018 Aqeel Labash
Ardi Tampuu
Tambet Matiisen
Jaan Aru
Raúl Vicente
+ From Two-Dimensional to Three-Dimensional Environment with Q-Learning: Modeling Autonomous Navigation with Reinforcement Learning and no Libraries 2024 Ergon Cugler de Moraes Silva
+ Planning to Explore via Self-Supervised World Models 2020 R. Sekar
Oleh Rybkin
Kostas Daniilidis
Pieter Abbeel
Danijar Hafner
Deepak Pathak
+ DisCoRL: Continual Reinforcement Learning via Policy Distillation 2019 Kalifou René Traoré
Hugo Caselles-Dupré
Timothée Lesort
Te Sun
Guanghang Cai
Natalia Díaz-Rodríguez
David Filliat
+ Towards Continual Reinforcement Learning: A Review and Perspectives 2022 Khimya Khetarpal
Matthew Riemer
Irina Rish
Doina Precup
+ Towards Continual Reinforcement Learning: A Review and Perspectives 2020 Khimya Khetarpal
Matthew Riemer
Irina Rish
Doina Precup
+ Sim-to-Real Transfer in Deep Reinforcement Learning for Robotics: a Survey 2020 Wenshuai Zhao
Jorge Peña Queralta
Tomi Westerlund
+ Explore, Exploit or Listen: Combining Human Feedback and Policy Model to Speed up Deep Reinforcement Learning in 3D Worlds 2017 Zhiyu Lin
Brent Harrison
Aaron Keech
Mark Riedl
+ Deep reinforcement learning boosted by external knowledge 2018 Nicolas Bougie
Ryutaro Ichise
+ Autonomous Reinforcement Learning: Formalism and Benchmarking 2021 Archit Sharma
Kelvin Xu
Nikhil Sardana
Abhishek Gupta
Karol Hausman
Sergey Levine
Chelsea Finn
+ Learning when to observe: A frugal reinforcement learning framework for a high-cost world 2023 Colin Bellinger
Mark Crowley
Isaac Tamblyn
+ Transferable Reinforcement Learning via Generalized Occupancy Models 2024 Chuning Zhu
Xinqi Wang
Tyler Han
Simon S. Du
Abhishek Gupta

Works That Cite This (20)

Action Title Year Authors
+ Resilient Robot Teams: a Review Integrating Decentralised Control, Change-Detection, and Learning 2022 David M. Bossens
Sarvapali D. Ramchurn
Danesh Tarapore
+ Continual Reinforcement Learning in 3D Non-stationary Environments 2020 Vincenzo Lomonaco
Karan Desai
Eugenio Culurciello
Davide Maltoni
+ Evaluating Continual Learning Algorithms by Generating 3D Virtual Environments 2022 Enrico Meloni
Alessandro Betti
Lapo Faggi
Simone Marullo
Matteo Tiezzi
Stefano Melacci
+ More Is Better: An Analysis of Instance Quantity/Quality Trade-off in Rehearsal-based Continual Learning. 2021 Francesco Pelosin
Andrea Torsello
+ Dynamics-Adaptive Continual Reinforcement Learning via Progressive Contextualization 2023 Tiantian Zhang
Zichuan Lin
Yuxing Wang
Deheng Ye
Qiang Fu
Wei Yang
Xueqian Wang
Bin Liang
Bo Yuan
Xiu Li
+ Continual learning for robotics: Definition, framework, learning strategies, opportunities and challenges 2019 Timothée Lesort
Vincenzo Lomonaco
Andrei Stoian
Davide Maltoni
David Filliat
Natalia Díaz-Rodríguez
+ A Survey of Zero-shot Generalisation in Deep Reinforcement Learning 2023 Robert Kirk
Amy Zhang
Edward Grefenstette
Tim Rocktäschel
+ Continual Learning : Tackling Catastrophic Forgetting in Deep Neural Networks with Replay Processes 2020 Timothée Lesort
+ Avalanche: an End-to-End Library for Continual Learning 2021 Vincenzo Lomonaco
Lorenzo Pellegrini
Andrea Cossu
Antonio Carta
Gabriele Graffieti
Tyler L. Hayes
Matthias De Lange
Marc Masana
Jary Pomponi
Gido M. van de Ven
+ Catastrophic Interference in Reinforcement Learning: A Solution Based on Context Division and Knowledge Distillation 2021 Tiantian Zhang
Xueqian Wang
Bin Liang
Bo Yuan

Works Cited by This (29)

Action Title Year Authors
+ Overcoming catastrophic forgetting in neural networks 2017 James Kirkpatrick
Razvan Pascanu
Neil C. Rabinowitz
Joel Veness
Guillaume Desjardins
Andrei A. Rusu
Kieran Milan
John Quan
Tiago Ramalho
Agnieszka Grabska‐Barwińska
+ Schema Networks: Zero-shot Transfer with a Generative Causal Model of Intuitive Physics 2017 Ken Kansky
Tom Silver
David A. Mély
Mohamed Eldawy
Miguel Lázaro-Gredilla
Xinghua Lou
Nimrod Dorfman
Szymon Sidor
Scott Phoenix
Dileep George
+ Proximal Policy Optimization Algorithms 2017 John Schulman
Filip Wolski
Prafulla Dhariwal
Alec Radford
Oleg Klimov
+ Some Considerations on Learning to Explore via Meta-Reinforcement Learning 2018 Bradly C. Stadie
Ge Yang
Rein Houthooft
Xi Chen
Yan Duan
Yuhuai Wu
Pieter Abbeel
Ilya Sutskever
+ State representation learning for control: An overview 2018 Timothée Lesort
Natalia Díaz-Rodríguez
Jean-François Goudou
David Filliat
+ Continual lifelong learning with neural networks: A review 2019 German I. Parisi
Ronald Kemker
Jose L. Part
Christopher Kanan
Stefan Wermter
+ Accelerated Methods for Deep Reinforcement Learning 2018 Adam Stooke
Pieter Abbeel
+ Progress & Compress: A scalable framework for continual learning 2018 Jonathan Schwarz
Jelena Luketina
Wojciech Marian Czarnecki
Agnieszka Grabska‐Barwińska
Yee Whye Teh
Razvan Pascanu
Raia Hadsell
+ FeUdal Networks for Hierarchical Reinforcement Learning 2017 Alexander Sasha Vezhnevets
Simon Osindero
Tom Schaul
Nicolas Heess
Max Jaderberg
David Silver
Koray Kavukcuoglu
+ Model Primitive Hierarchical Lifelong Reinforcement Learning 2019 Bo-Han Wu
Jayesh K. Gupta
Mykel J. Kochenderfer