Learning to push by grasping: Using multiple tasks for effective learning

Type: Article

Publication Date: 2017-05-01

Citations: 107

DOI: https://doi.org/10.1109/icra.2017.7989249


Abstract

End-to-end learning frameworks have recently gained prevalence in the field of robot control. These frameworks take states/images as input and directly predict torques or action parameters. However, such approaches are often criticized for the huge amounts of data they require to learn a task. The concern about scalability to multiple tasks is well founded, since training each task often requires hundreds or thousands of examples. But do end-to-end approaches really need to learn a unique model for every task? Intuitively, sharing across tasks should help, since all tasks require some common understanding of the environment. In this paper, we attempt to take the next step in data-driven end-to-end learning frameworks: moving from task-specific models to joint learning of multiple robot tasks. In an astonishing result, we show that models trained with multi-task learning tend to perform better than task-specific models trained on the same amount of data. For example, a deep network trained with 2.5K grasp and 2.5K push examples performs better on grasping than a network trained on 5K grasp examples.
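
A minimal PyTorch sketch of the sharing idea described in the abstract: a single convolutional trunk whose weights receive gradients from both grasp and push examples, plus small task-specific output heads. The layer sizes, head dimensions, and names below are illustrative assumptions, not the paper's actual architecture.

```python
# Sketch of a shared-trunk multi-task network (illustrative, not the paper's exact model).
import torch
import torch.nn as nn

class SharedTrunkMultiTaskNet(nn.Module):
    """Shared convolutional trunk with task-specific heads for grasping and pushing."""

    def __init__(self, num_grasp_angles=18, push_action_dim=4):
        super().__init__()
        # Shared feature extractor: updated by gradients from both tasks.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((6, 6)),
            nn.Flatten(),
            nn.Linear(64 * 6 * 6, 512), nn.ReLU(),
        )
        # Task-specific heads: only the relevant head is used for a given example.
        self.grasp_head = nn.Linear(512, num_grasp_angles)  # e.g. discretized grasp angles
        self.push_head = nn.Linear(512, push_action_dim)    # e.g. push action parameters

    def forward(self, image, task):
        features = self.trunk(image)
        if task == "grasp":
            return self.grasp_head(features)
        return self.push_head(features)

if __name__ == "__main__":
    net = SharedTrunkMultiTaskNet()
    images = torch.randn(8, 3, 128, 128)        # dummy image patches
    grasp_logits = net(images, task="grasp")    # shape: (8, 18)
    push_params = net(images, task="push")      # shape: (8, 4)
    print(grasp_logits.shape, push_params.shape)
```

Trained with alternating mini-batches from the two tasks, every push example still updates the shared trunk, which is the mechanism by which 2.5K push examples can improve grasp performance over a grasp-only model.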

Locations

  • arXiv (Cornell University): PDF available

Similar Works

  • Learning to Push by Grasping: Using multiple tasks for effective learning (2016). Lerrel Pinto, Abhinav Gupta.
  • Supersizing self-supervision: Learning to grasp from 50K tries and 700 robot hours (2016). Lerrel Pinto, Abhinav Gupta.
  • Learning to Grasp from a Single Demonstration (2018). Pieter Van Molle, Tim Verbelen, Elias De Coninck, Cedric De Boom, Pieter Simoens, Bart Dhoedt.
  • Deep Learning Approaches to Grasp Synthesis: A Review (2022). R. Newbury, Morris Gu, Lachlan Chumbley, Arsalan Mousavian, Clemens Eppner, Jürgen Leitner, Jeannette Bohg, Antonio Morales, Tamim Asfour, Danica Kragić.
  • GloCAL: Glocalized Curriculum-Aided Learning of Multiple Tasks with Application to Robotic Grasping (2021). Anil Kurkcu, Cihan Acar, Domenico Campolo, Keng Peng Tee.
  • A Grasp Pose is All You Need: Learning Multi-fingered Grasping with Deep Reinforcement Learning from Vision and Touch (2023). Federico Ceola, Elisa Maiettini, Lorenzo Rosasco, Lorenzo Natale.
  • Deep Learning Approaches to Grasp Synthesis: A Review (2023). R. Newbury, Morris Gu, Lachlan Chumbley, Arsalan Mousavian, Clemens Eppner, Jürgen Leitner, Jeannette Bohg, Antonio Morales, Tamim Asfour, Danica Kragić.
  • Improving Data Efficiency of Self-supervised Learning for Robotic Grasping (2019). Lars Berscheid, Thomas Rühr, Torsten Kröger.
  • Learning Bifunctional Push-grasping Synergistic Strategy for Goal-agnostic and Goal-oriented Tasks (2022). Dafa Ren, Shuang Wu, Xiaofan Wang, Yan Peng, Xiaoqiang Ren.
  • Grasp Learning: Models, Methods, and Performance (2022). Robert W. Platt.
  • Learning to See before Learning to Act: Visual Pre-training for Manipulation (2021). Yen-Chen Lin, Andy Zeng, Shuran Song, Phillip Isola, Tsung-Yi Lin.
  • Accelerating Grasp Learning via Pretraining with Coarse Affordance Maps of Objects (2022). Yan-Xu Hou, Jun Li.
  • Learning Bifunctional Push-Grasping Synergistic Strategy for Goal-Agnostic and Goal-Oriented Tasks (2023). Dafa Ren, Shuang Wu, Xiaofan Wang, Yan Peng, Xiaoqiang Ren.
  • Learning to See before Learning to Act: Visual Pre-training for Manipulation (2020). Yen-Chen Lin, Andy Zeng, Shuran Song, Phillip Isola, Tsung-Yi Lin.
  • Geometry Matching for Multi-Embodiment Grasping (2023). Maria Attarian, Muhammad Adil Asif, Jingzhou Liu, Ruthrash Hari, Animesh Garg, Igor Gilitschenski, Jonathan Tompson.
  • Vision-Based Multi-Task Manipulation for Inexpensive Robots Using End-To-End Learning from Demonstration (2017). Rouhollah Rahmatizadeh, Pooya Abolghasemi, Ladislau Bölöni, Sergey Levine.