Progressive Transfer Learning for Dexterous In-Hand Manipulation with Multi-Fingered Anthropomorphic Hand

Type: Article

Publication Date: 2024-05-29

Citations: 1

DOI: https://doi.org/10.1109/tcds.2024.3406730


Abstract

Dexterous in-hand manipulation poses significant challenges for a multi-fingered anthropomorphic hand due to the high-dimensional state and action spaces and the intricate contact patterns between the fingers and objects. Although deep reinforcement learning has made moderate progress and demonstrated strong potential for manipulation, it faces challenges such as large-scale data collection and high sample complexity. In particular, even slight changes to the scene typically require re-collecting vast amounts of data and many iterations of fine-tuning. Remarkably, humans can quickly transfer learned manipulation skills to different scenarios with minimal supervision. Inspired by this flexible transfer-learning capability, we propose a novel framework called Progressive Transfer Learning (PTL) for dexterous in-hand manipulation. The framework efficiently reuses the collected trajectories and the dynamics model trained on a source dataset: it adopts progressive neural networks to transfer the dynamics model, learning on samples selected by a new method based on dynamics properties, rewards, and trajectory scores. Experimental results on contact-rich anthropomorphic hand manipulation tasks demonstrate that our method can efficiently and effectively learn in-hand manipulation skills with only a few online attempts and a short adaptation phase in the new scene. Moreover, compared to learning from scratch, our method reduces training time costs by 85%.
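
The transfer mechanism described above can be illustrated with a small sketch. The following PyTorch snippet shows one plausible reading of the progressive-network step: a dynamics model trained on the source scene is frozen, and a new target-scene column is trained alongside it while receiving lateral features from the frozen column. All layer sizes, class names, and the forward-dynamics formulation (predicting the next state from state and action) are illustrative assumptions, not details taken from the paper.

# Minimal sketch of progressive-network transfer for a learned dynamics model.
# Assumptions: a frozen source column, one trainable target column with lateral
# adapters, and placeholder state/action dimensions for a multi-fingered hand.
import torch
import torch.nn as nn


class SourceDynamicsColumn(nn.Module):
    """Dynamics model trained on the source scene; frozen during transfer."""

    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.fc1 = nn.Linear(state_dim + action_dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, state_dim)

    def forward(self, state, action):
        x = torch.cat([state, action], dim=-1)
        h1 = torch.relu(self.fc1(x))
        h2 = torch.relu(self.fc2(h1))
        # Return hidden features so the target column can attach lateral connections.
        return self.out(h2), (h1, h2)


class TargetDynamicsColumn(nn.Module):
    """New column for the target scene; reuses features from the source column."""

    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.fc1 = nn.Linear(state_dim + action_dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.lat2 = nn.Linear(hidden, hidden)        # lateral adapter from source h1
        self.out = nn.Linear(hidden, state_dim)
        self.lat_out = nn.Linear(hidden, state_dim)  # lateral adapter from source h2

    def forward(self, state, action, source_feats):
        src_h1, src_h2 = source_feats
        x = torch.cat([state, action], dim=-1)
        h1 = torch.relu(self.fc1(x))
        h2 = torch.relu(self.fc2(h1) + self.lat2(src_h1))
        return self.out(h2) + self.lat_out(src_h2)


# Usage: freeze the source column and train only the target column on selected samples.
state_dim, action_dim = 45, 20   # placeholder dimensions, not taken from the paper
source = SourceDynamicsColumn(state_dim, action_dim)
target = TargetDynamicsColumn(state_dim, action_dim)
for p in source.parameters():
    p.requires_grad_(False)

s = torch.randn(8, state_dim)
a = torch.randn(8, action_dim)
with torch.no_grad():
    _, feats = source(s, a)
pred_next_state = target(s, a, feats)  # trained with, e.g., an MSE loss on (s, a, s') samples

Freezing the source column preserves what was learned in the source scene, while the lateral adapters let the new column reuse those features instead of relearning them, which is what makes adaptation from only a few online attempts plausible.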

Locations

  • IEEE Transactions on Cognitive and Developmental Systems
  • arXiv (Cornell University)

Similar Works

  • Progressive Transfer Learning for Dexterous In-Hand Manipulation with Multi-Fingered Anthropomorphic Hand (2023). Yongkang Luo, Wanyi Li, Peng Wang, Haonan Duan, Wei Wei, Sun Ji.
  • Dexterous Manipulation with Deep Reinforcement Learning: Efficient, General, and Low-Cost (2018). Henry Zhu, Abhishek Gupta, Aravind Rajeswaran, Sergey Levine, Vikash Kumar.
  • Dexterous Manipulation with Deep Reinforcement Learning: Efficient, General, and Low-Cost (2019). Henry Zhu, Abhishek Gupta, Aravind Rajeswaran, Sergey Levine, Vikash Kumar.
  • Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations (2018). Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, Giulia Vezzani, John Schulman, Emanuel Todorov, Sergey Levine.
  • Deep Dynamics Models for Learning Dexterous Manipulation (2019). Anusha Nagabandi, Kurt Konolige, Sergey Levine, Vikash Kumar.
  • Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations (2017). Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, John Schulman, Emanuel Todorov, Sergey Levine.
  • Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations (2017). Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, Giulia Vezzani, John Schulman, Emanuel Todorov, Sergey Levine.
  • Learning to Transfer In-Hand Manipulations Using a Greedy Shape Curriculum (2023). Yunbo Zhang, Alexander Clegg, Sehoon Ha, Greg Turk, Yuting Ye.
  • DexDeform: Dexterous Deformable Object Manipulation with Human Demonstrations and Differentiable Physics (2023). Sizhe Li, Zhiao Huang, Tao Chen, Tao Du, Hao Su, Joshua B. Tenenbaum, Chuang Gan.
  • Dexterous Imitation Made Easy: A Learning-Based Framework for Efficient Dexterous Manipulation (2023). Sridhar Pandian Arunachalam, Sneha Silwal, Ben Evans, Lerrel Pinto.
  • Dexterous In-Hand Manipulation of Slender Cylindrical Objects through Deep Reinforcement Learning with Tactile Sensing (2023). Wenbin Hu, Bidan Huang, Wang Wei Lee, Sicheng Yang, Yu Zheng, Zhibin Li.
  • H-InDex: Visual Reinforcement Learning with Hand-Informed Representations for Dexterous Manipulation (2023). Yanjie Ze, Yuyao Liu, Ruizhe Shi, Jiaxin Qin, Zhecheng Yuan, Jiashun Wang, Huazhe Xu.
  • ViViDex: Learning Vision-based Dexterous Manipulation from Human Videos (2024). Zerui Chen, Shizhe Chen, Etienne Arlaud, Ivan Laptev, Cordelia Schmid.
  • Dexterous Imitation Made Easy: A Learning-Based Framework for Efficient Dexterous Manipulation (2022). Sridhar Pandian Arunachalam, Sneha Silwal, Ben Evans, Lerrel Pinto.
  • Learning Generalizable Dexterous Manipulation from Human Grasp Affordance (2022). Yueh-Hua Wu, Jiashun Wang, Xiaolong Wang.
  • Force-Centric Imitation Learning with Force-Motion Capture System for Contact-Rich Manipulation (2024). Wenhai Liu, Junbo Wang, Yiming Wang, Weiming Wang, Cewu Lu.
  • Leveraging Pretrained Latent Representations for Few-Shot Imitation Learning on a Dexterous Robotic Hand (2024). Davide Liconti, Yasunori Toshimitsu, Robert K. Katzschmann.
  • Object-Centric Dexterous Manipulation from Human Motion Data (2024). Yuanpei Chen, Chen Wang, Yaodong Yang, C. Karen Liu.

Cited by (0)


Citing (26)

  • Prioritized Experience Replay (2015). Tom Schaul, John Quan, Ioannis Antonoglou, David Silver.
  • Domain randomization for transferring deep neural networks from simulation to the real world (2017). Josh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, Pieter Abbeel.
  • Stochastic Neural Networks for Hierarchical Reinforcement Learning (2017). Carlos Florensa, Yan Duan, Pieter Abbeel.
  • Distributed Distributional Deterministic Policy Gradients (2018). Gabriel Barth-Maron, Matthew W. Hoffman, David Budden, Will Dabney, Dan Horgan, Dhruva Tb, Alistair Muldal, Nicolas Heess, Timothy Lillicrap.
  • Learning dexterous in-hand manipulation (2019). OpenAI: Marcin Andrychowicz, Bowen Baker, Maciek Chociej, Rafał Józefowicz, Bob McGrew, Jakub Pachocki, Arthur J Petron, Matthias Plappert, Glenn Powell, Alex Ray.
  • Benchmarking In-Hand Manipulation (2020). Silvia Cruciani, Balakumar Sundaralingam, Kaiyu Hang, Vikash Kumar, Tucker Hermans, Danica Kragić.
  • Domain Adversarial Reinforcement Learning for Partial Domain Adaptation (2020). Jin Chen, Xinxiao Wu, Lixin Duan, Shenghua Gao.
  • Sim-to-Real Transfer of Robotic Control with Dynamics Randomization (2018). Xue Bin Peng, Marcin Andrychowicz, Wojciech Zaremba, Pieter Abbeel.
  • Mastering Atari, Go, chess and shogi by planning with a learned model (2020). Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel.
  • In-Hand Object-Dynamics Inference Using Tactile Fingertips (2021). Balakumar Sundaralingam, Tucker Hermans.
  • How to train your robot with deep reinforcement learning: lessons we have learned (2021). Julian Ibarz, Jie Tan, Chelsea Finn, Mrinal Kalakrishnan, Peter Pástor, Sergey Levine.
  • Transferring Dexterous Manipulation from GPU Simulation to a Remote Real-World TriFinger (2022). Arthur Allshire, Mayank Mittal, Varun Lodaya, Viktor Makoviychuk, Denys Makoviichuk, Felix Widmaier, Manuel Wüthrich, Stefan Bauer, Ankur Handa, Animesh Garg.
  • On the Feasibility of Learning Finger-gaiting In-hand Manipulation with Intrinsic Sensing (2022). Gagan Khandate, Maximilian Haas-Heger, Matei Ciocarlie.
  • Reset-Free Reinforcement Learning via Multi-Task Learning: Learning Dexterous Manipulation Behaviors without Human Intervention (2021). Abhishek Gupta, Justin Yu, Tony Z. Zhao, Vikash Kumar, Aaron Rovinsky, Kelvin Xu, T. Devlin, Sergey Levine.
  • Generalization in Dexterous Manipulation via Geometry-Aware Multi-Task Learning (2021). Wenlong Huang, Igor Mordatch, Pieter Abbeel, Deepak Pathak.
  • Multi-Fingered In-Hand Manipulation With Various Object Properties Using Graph Convolutional Networks and Distributed Tactile Sensors (2022). Satoshi Funabashi, Tomoki Isobe, Fei Hongyi, Atsumu Hiramoto, Alexander Schmitz, Shigeki Sugano, Tetsuya Ogata.
  • Don't Change the Algorithm, Change the Data: Exploratory Data for Offline Reinforcement Learning (2022). Denis Yarats, David Brandfonbrener, Hao Liu, Michael Laskin, Pieter Abbeel, Alessandro Lazaric, Lerrel Pinto.
  • Multi-Source Transfer Learning for Deep Model-Based Reinforcement Learning (2022). Remo Sasso, Matthia Sabatelli, Marco Wiering.
  • A New Representation of Successor Features for Transfer across Dissimilar Environments (2021). Majid Abdolshah, Hung Lê, Thommen George Karimpanal, Sunil Gupta, Santu Rana, Svetha Venkatesh.
  • MT-Opt: Continuous Multi-Task Robotic Reinforcement Learning at Scale (2021). Dmitry Kalashnikov, Jacob Varley, Yevgen Chebotar, Benjamin J. Swanson, Rico Jonschkowski, Chelsea Finn, Sergey Levine, Karol Hausman.
  • Off-Dynamics Reinforcement Learning: Training for Transfer with Domain Classifiers (2020). Benjamin Eysenbach, Swapnil Asawa, Shreyas Chaudhari, Sergey Levine, Ruslan Salakhutdinov.
  • Multi-Goal Reinforcement Learning: Challenging Robotics Environments and Request for Research (2018). Matthias Plappert, Marcin Andrychowicz, Alex Ray, Bob McGrew, Bowen Baker, Glenn Powell, Jonas Schneider, Joshua W.D. Tobin, Maciek Chociej, Peter Welinder.
  • One Solution is Not All You Need: Few-Shot Extrapolation via Structured MaxEnt RL (2020). Saurabh Kumar, Aviral Kumar, Sergey Levine, Chelsea Finn.
  • Progressive Reinforcement Learning with Distillation for Multi-Skilled Motion Control (2018). Glen Berseth, Cheng Xie, Paul Cernek, Michiel van de Panne.
  • Progressive Neural Networks (2016). Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell.
  • A Review of Deep Transfer Learning and Recent Advancements (2023). Mohammadreza Iman, Hamid R. Arabnia, Khaled Rasheed.