Continuously Improving Mobile Manipulation with Autonomous Real-World RL
We present a fully autonomous real-world RL framework for mobile manipulation that can learn policies without extensive instrumentation or human supervision. This is enabled by 1) task-relevant autonomy, which guides exploration towards object interactions and prevents stagnation near goal states, 2) efficient policy learning by leveraging basic task knowledge in …
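To make the first ingredient concrete, below is a minimal, hypothetical sketch of a task-relevant autonomy loop in Python. It is not the paper's implementation: the environment interface (`env`), the `policy`, the `replay_buffer`, and helpers such as `sample_exploration_target` are assumed names used only to illustrate the idea of keeping the robot interacting with objects and moving it away from goal states between practice episodes.

```python
# Hypothetical sketch of task-relevant autonomy (illustrative, not the paper's code).
# Assumes `env` exposes object/goal positions, `policy` has act()/update(),
# and `replay_buffer` has add(); all names here are placeholders.
import numpy as np


def sample_exploration_target(obs):
    # Illustrative: pick a nearby target that forces renewed object interaction
    # rather than letting the robot idle at the current goal configuration.
    return obs["object_pos"] + np.random.uniform(-0.3, 0.3, size=3)


def autonomous_practice_loop(env, policy, replay_buffer,
                             n_episodes=100, goal_threshold=0.05):
    """Alternate practice episodes with task-relevant resets, with no human
    intervention between episodes."""
    for _ in range(n_episodes):
        obs = env.get_observation()  # no manually provided reset
        # If the robot ended the last episode near the goal, redirect it toward
        # a new object configuration to prevent stagnation near goal states.
        if np.linalg.norm(obs["object_pos"] - obs["goal_pos"]) < goal_threshold:
            env.set_goal(sample_exploration_target(obs))
        done = False
        while not done:
            action = policy.act(obs)
            next_obs, reward, done, info = env.step(action)
            replay_buffer.add(obs, action, reward, next_obs, done)
            obs = next_obs
        policy.update(replay_buffer)  # off-policy RL update after each episode
```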