DeepMDP: Learning Continuous Latent Space Models for Representation Learning
Carles Gelada, Saurabh Kumar, Jacob Buckman, Ofir Nachum, Marc G. Bellemare
Type: Preprint
Publication Date: 2019-06-06
Citations: 47
Locations: arXiv (Cornell University)
Similar Works
- DeepMDP: Learning Continuous Latent Space Models for Representation Learning (2019). Carles Gelada, Saurabh Kumar, Jacob Buckman, Ofir Nachum, Marc G. Bellemare.
- Harnessing Discrete Representations For Continual Reinforcement Learning (2023). Edan Meyer, Adam White, Marlos C. Machado.
- Constrained Latent Action Policies for Model-Based Offline Reinforcement Learning (2024). Marvin Alles, Philip Becker-Ehmck, Patrick van der Smagt, Maximilian Karl.
- Learning Temporally-Consistent Representations for Data-Efficient Reinforcement Learning (2021). Trevor McInroe, Lukas Schäfer, Stefano V. Albrecht.
- Low-Dimensional State and Action Representation Learning with MDP Homomorphism Metrics (2021). Nicolò Botteghi, Mannes Poel, Beril Sırmaçek, Christoph Brüne.
- Learning World Models with Identifiable Factorization (2023). Yu-Ren Liu, Biwei Huang, Zhengmao Zhu, Honglong Tian, Mingming Gong, Yang Yu, Kun Zhang.
- Bootstrap Latent-Predictive Representations for Multitask Reinforcement Learning (2020). Daniel Guo, Bernardo Ávila Pires, Bilal Piot, Jean-Bastien Grill, Florent Altché, Rémi Munos, Mohammad Gheshlaghi Azar.
- Understanding and Addressing the Pitfalls of Bisimulation-based Representations in Offline Reinforcement Learning (2023). Hongyu Zang, Xin Li, Leiji Zhang, Liu Yang, Baigui Sun, Riashat Islam, Rémi Tachet des Combes, Romain Laroche.
- Efficient Reinforcement Learning in Block MDPs: A Model-free Representation Learning Approach (2022). Xuezhou Zhang, Yuda Song, Masatoshi Uehara, Mengdi Wang, Alekh Agarwal, Wen Sun.
- Rich-Observation Reinforcement Learning with Continuous Latent Dynamics (2024). Yuda Song, Lili Wu, Dylan J. Foster, Akshay Krishnamurthy.
- Representation Learning in Deep RL via Discrete Information Bottleneck (2022). Riashat Islam, Hongyu Zang, Manan Tomar, Aniket Didolkar, Md Mofijul Islam, Samin Yeasar Arnob, Iqbal Tariq, Xin Li, Anirudh Goyal, Nicolas Heess.
- Learning Invariant Representations for Reinforcement Learning without Reconstruction (2020). Amy Zhang, Rowan McAllister, Roberto Calandra, Yarin Gal, Sergey Levine.
- Representation Learning For Efficient Deep Multi-Agent Reinforcement Learning (2024). Dom Huh, Prasant Mohapatra.
- Learning Markov State Abstractions for Deep Reinforcement Learning (2021). Cameron Allen, Neev Parikh, Omer Gottesman, George Konidaris.
- Deep Model-Based Reinforcement Learning for High-Dimensional Problems, a Survey (2020). Aske Plaat, Walter A. Kosters, Mike Preuß.
- Latent Variable Representation for Reinforcement Learning (2022). Tongzheng Ren, Chenjun Xiao, Tianjun Zhang, Na Li, Zhaoran Wang, Sujay Sanghavi, Dale Schuurmans, Bo Dai.
- Learning Latent State Spaces for Planning through Reward Prediction (2019). Aaron Havens, Yi Ouyang, Prabhat Nagarajan, Yasuhiro Fujita.
- Model-Based Deep Reinforcement Learning for High-Dimensional Problems, a Survey (2020). Aske Plaat, Walter A. Kosters, Mike Preuß.
Works That Cite This (39)
- Mastering Atari, Go, chess and shogi by planning with a learned model (2020). Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel.
- Learning Robust State Abstractions for Hidden-Parameter Block MDPs (2020). Amy Zhang, Shagun Sodhani, Khimya Khetarpal, Joëlle Pineau.
- Return-Based Contrastive Representation Learning for Reinforcement Learning (2021). Guoqing Liu, Chuheng Zhang, Li Zhao, Tao Qin, Jinhua Zhu, Li Jian, Nenghai Yu, Tie-Yan Liu.
- Contrastive Learning of Structured World Models (2019). Thomas Kipf, Elise van der Pol, Max Welling.
- Which Mutual-Information Representation Learning Objectives are Sufficient for Control? (2021). Kate Rakelly, Abhishek Gupta, Carlos Florensa, Sergey Levine.
- Multi-Task Reinforcement Learning as a Hidden-Parameter Block MDP (2020). Amy Zhang, Shagun Sodhani, Khimya Khetarpal, Joëlle Pineau.
- Learning Representations for Pixel-based Control: What Matters and Why? (2021). Manan Tomar, Utkarsh Mishra, Amy Zhang, Matthew E. Taylor.
- Deep Reinforcement Learning and Its Neuroscientific Implications (2020). Matthew Botvinick, Jane X. Wang, Will Dabney, Kevin J. Miller, Zeb Kurth-Nelson.
- A Geometric Perspective on Optimal Representations for Reinforcement Learning (2019). Marc G. Bellemare, Will Dabney, Robert Dadashi, Adrien Ali Taïga, Pablo Samuel Castro, Nicolas Le Roux, Dale Schuurmans, Tor Lattimore, Clare Lyle.
- Towards Robust Bisimulation Metric Learning (2021). Mete Kemertas, Tristan Aumentado-Armstrong.
Works Cited by This (33)
- Equivalence of distance-based and RKHS-based statistics in hypothesis testing (2013). Dino Sejdinović, Bharath K. Sriperumbudur, Arthur Gretton, Kenji Fukumizu.
- Convex Analysis and Nonlinear Optimization (2006). Jonathan M. Borwein, Adrian S. Lewis.
- A kernel two-sample test (2012). Arthur Gretton, Karsten Borgwardt, Malte J. Rasch, Bernhard Schölkopf, Alexander J. Smola.
- Integral Probability Metrics and Their Generating Classes of Functions (1997). Alfred Müller.
- The Cramer Distance as a Solution to Biased Wasserstein Gradients (2017). Marc G. Bellemare, Ivo Danihelka, Will Dabney, Shakir Mohamed, Balaji Lakshminarayanan, Stephan Hoyer, Rémi Munos.
- Visual Interaction Networks (2017). Nicholas Watters, Andrea Tacchetti, Théophane Weber, Razvan Pascanu, Peter Battaglia, Daniel Zoran.
- Probabilistic Recurrent State-Space Models (2018). Andreas Doerr, Christian Daniel, Martin Schiegg, Duy Nguyen-Tuong, Stefan Schaal, Marc Toussaint, Sebastian Trimpe.
- Learning and Querying Fast Generative Models for Reinforcement Learning (2018). Lars Buesing, Théophane Weber, Sébastien Racanière, S. M. Ali Eslami, Danilo Jimenez Rezende, David Reichert, Fabio Viola, Frederic Besse, Karol Gregor, Demis Hassabis.
- Model-Based Value Estimation for Efficient Model-Free Reinforcement Learning (2018). Vladimir Feinberg, Alvin Wan, Ion Stoica, Michael I. Jordan, Joseph E. Gonzalez, Sergey Levine.
- Regularisation of Neural Networks by Enforcing Lipschitz Continuity (2018). Henry Gouk, Eibe Frank, Bernhard Pfahringer, Michael J. Cree.