Imitation Learning of Robot Policies by Combining Language, Vision and Demonstration

Type: Preprint

Publication Date: 2019-01-01

Citations: 3

DOI: https://doi.org/10.48550/arxiv.1911.11744

Locations

  • arXiv (Cornell University)
  • DataCite API

Similar Works

  • ELEMENTAL: Interactive Learning from Demonstrations and Vision-Language Models for Reward Design in Robotics (2024). Letian Chen, Matthew Gombolay
  • CALVIN: A Benchmark for Language-Conditioned Policy Learning for Long-Horizon Robot Manipulation Tasks (2021). Oier Mees, Lukas Hermann, Erick Rosete-Beas, Wolfram Burgard
  • Language-Conditioned Imitation Learning for Robot Manipulation Tasks (2020). Simon Stepputtis, Joseph Campbell, Mariano Phielipp, Stefan Lee, Chitta Baral, Heni Ben Amor
  • CLIP-RT: Learning Language-Conditioned Robotic Policies from Natural Language Supervision (2024). Gi-Cheon Kang, Jung-Hyun Kim, Kyu-Hwan Shim, Jun Ki Lee, Byoung-Tak Zhang
  • Learning with Language-Guided State Abstractions (2024). Andi Peng, Ilia Sucholutsky, Belinda Z. Li, Theodore R. Sumers, Thomas L. Griffiths, Jacob Andreas, Julie Shah
  • RoboCLIP: One Demonstration is Enough to Learn Robot Policies (2023). Sumedh Sontakke, Jesse Zhang, Sébastien M. R. Arnold, Karl Pertsch, Erdem Bıyık, Dorsa Sadigh, Chelsea Finn, Laurent Itti
  • Policy Adaptation via Language Optimization: Decomposing Tasks for Few-Shot Imitation (2024). Vivek Myers, Bill Chunyuan Zheng, Oier Mees, Sergey Levine, Kuan Fang
  • Robotic Control via Embodied Chain-of-Thought Reasoning (2024). Michał Zawalski, William Chen, Karl Pertsch, Oier Mees, Chelsea Finn, Sergey Levine
  • TinyVLA: Towards Fast, Data-Efficient Vision-Language-Action Models for Robotic Manipulation (2024). Junjie Wen, Yichen Zhu, Jinming Li, Minjie Zhu, Kun Wu, Zhiyuan Xu, Ning Liu, Ran Cheng, Chaomin Shen, Yaxin Peng
  • RACER: Rich Language-Guided Failure Recovery Policies for Imitation Learning (2024). Yinpei Dai, Jayjun Lee, Nima Fazeli, Joyce Chai
  • VLMimic: Vision Language Models are Visual Imitation Learner for Fine-grained Actions (2024). Guanyan Chen, Meiling Wang, Te Cui, Yao Mu, Haoyang Lu, Tianxing Zhou, Zicai Peng, Mengxiao Hu, Haizhou Li, Yuan Li
  • Language-Conditioned Semantic Search-Based Policy for Robotic Manipulation Tasks (2023). Jannik Sheikh, Andrew Melnik, Gora Chand Nandi, Robert Haschke
  • Learning Novel Skills from Language-Generated Demonstrations (2024). Ao-Qun Jin, Tian-Yu Xiang, Xiao-Hu Zhou, Mei-Jiang Gui, Xiao-Liang Xie, Shi-Qi Liu, Shuang-Yi Wang, Yue Cao, Sheng-Bin Duan, Feng Xie
  • Benchmarking Vision, Language, & Action Models on Robotic Learning Tasks (2024). Pranav Guruprasad, Harshvardhan Sikka, Jaewoo Song, Yangyue Wang, Paul Pu Liang
  • Towards Generalist Robot Policies: What Matters in Building Vision-Language-Action Models (2024). Xinghang Li, Peiyan Li, Minghuan Liu, Dong Wang, Jirong Liu, Bingyi Kang, Xiao Ma, Tao Kong, Han-Bo Zhang, Huaping Liu
  • Imitation Learning: Progress, Taxonomies and Challenges (2021). Boyuan Zheng, Sunny Verma, Jianlong Zhou, Ivor W. Tsang, Fang Chen
  • Meta-Controller: Few-Shot Imitation of Unseen Embodiments and Tasks in Continuous Control (2024). Seongwoong Cho, Donggyun Kim, Jin Woo Lee, Seunghoon Hong
  • Watch, Try, Learn: Meta-Learning from Demonstrations and Rewards (2019). Allan Zhou, Eric Jang, Daniel Kappler, Alex Herzog, Mohi Khansari, Paul Wohlhart, Yunfei Bai, Mrinal Kalakrishnan, Sergey Levine, Chelsea Finn
  • LIMT: Language-Informed Multi-Task Visual World Models (2024). Elie Aljalbout, Nikolaos Sotirakis, Patrick van der Smagt, Maximilian Karl, Nutan Chen