What Do We Learn from a Large-Scale Study of Pre-Trained Visual Representations in Sim and Real Environments?

Type: Article

Publication Date: 2024-05-13

Citations: 0

DOI: https://doi.org/10.1109/icra57147.2024.10610218


Locations

  • arXiv (Cornell University)

Similar Works

  • What do we learn from a large-scale study of pre-trained visual representations in sim and real environments? (2023). Sneha Silwal, Karmesh Yadav, Tingfan Wu, Jay Vakil, Arjun Majumdar, Sergio Arnaud, Claire Chen, Vincent-Pierre Berges, Dhruv Batra, Aravind Rajeswaran
  • What is learned in visual statistical learning? (2010). Fumitaka Nakahara, Sachio Otsuka, Megumi Nishiyama, Jun Kawaguchi
  • Where are we in the search for an Artificial Visual Cortex for Embodied Intelligence? (2023). Arjun Majumdar, Karmesh Yadav, Sergio Arnaud, Yecheng Jason Ma, Claire Chen, Sneha Silwal, Aryan Jain, Vincent-Pierre Berges, Pieter Abbeel, Jitendra Malik
  • How much "human-like" visual experience do current self-supervised learning algorithms need to achieve human-level object recognition? (2021). A. Emin Orhan
  • Self-supervised video pretraining yields human-aligned visual representations (2022). Nikhil Parthasarathy, S. M. Ali Eslami, João Carreira, Olivier J. Hénaff
  • Learning Structured Representations of Visual Scenes (2022). Meng-Jiun Chiou
  • How much human-like visual experience do current self-supervised learning algorithms need in order to achieve human-level object recognition? (2021). A. Emin Orhan
  • DNN Architecture for High Performance Prediction on Natural Videos Loses Submodule's Ability to Learn Discrete-World Dataset (2019). Lana Sinapayen, Atsushi Noda
  • Aligning Machine and Human Visual Representations across Abstraction Levels (2024). Lukas Muttenthaler, Klaus Greff, Frieda Born, Bernhard Spitzer, Simon Kornblith, Michael C. Mozer, Klaus-Robert Müller, Thomas Unterthiner, Andrew K. Lampinen
  • Dilated Convolution with Learnable Spacings makes visual models more aligned with humans: a Grad-CAM study (2024). Rabih Chamas, Ismail Khalfaoui-Hassani, Timothée Masquelier
  • Vision CNNs trained to estimate spatial latents learned similar ventral-stream-aligned representations (2024). Yu Xie, Weichen Huang, Esther Alter, J. Schwartz, Joshua B. Tenenbaum, James J. DiCarlo
  • Decoding Generic Visual Representations from Human Brain Activity Using Machine Learning (2019). Angeliki Papadimitriou, Nikolaos Passalis, Anastasios Tefas
  • When Does Contrastive Visual Representation Learning Work? (2021). Elijah Cole, Xuan Yang, Michael J. Wilber, Oisin Mac Aodha, Serge Belongie
  • When Does Contrastive Visual Representation Learning Work? (2022). Elijah Cole, Xuan Yang, Michael J. Wilber, Oisin Mac Aodha, Serge Belongie
  • Improving generalization by mimicking the human visual diet (2022). Spandan Madan, You Li, Mengmi Zhang, Hanspeter Pfister, Gabriel Kreiman
  • Contrasting Contrastive Self-Supervised Representation Learning Pipelines (2021). Klemen Kotar, Gabriel Ilharco, Ludwig Schmidt, Kiana Ehsani, Roozbeh Mottaghi
  • Capturing the objects of vision with neural networks (2021). Benjamin Peters, Nikolaus Kriegeskorte
  • DINOv2: Learning Robust Visual Features without Supervision (2023). Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby

Works That Cite This (0)


Works Cited by This (22)

  • Dex-Net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics (2017). Jeffrey Mahler, Jacky Liang, Sherdil Niyaz, Michael Laskey, Richard Doan, Xinyu Liu, Juan Pablo Aparicio, Ken Goldberg
  • Target-driven visual navigation in indoor scenes using deep reinforcement learning (2017). Yuke Zhu, Roozbeh Mottaghi, Eric Kolve, Joseph J. Lim, Abhinav Gupta, Li Fei-Fei, Ali Farhadi
  • Sim-To-Real via Sim-To-Sim: Data-Efficient Robotic Grasping via Randomized-To-Canonical Adaptation Networks (2019). Stephen James, Paul Wohlhart, Mrinal Kalakrishnan, Dmitry Kalashnikov, Alex Irpan, Julian Ibarz, Sergey Levine, Raia Hadsell, Konstantinos Bousmalis
  • Habitat: A Platform for Embodied AI Research (2019). Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik
  • RL-CycleGAN: Reinforcement Learning Aware Simulation-to-Real (2020). Kanishka Rao, C.J. Harris, Alex Irpan, Sergey Levine, Julian Ibarz, Mohi Khansari
  • Sim2Real Transfer for Reinforcement Learning without Dynamics Randomization (2020). Manuel Kaspar, Juan D. Muñoz Osorio, Jürgen Bock
  • RetinaGAN: An Object-aware Approach to Sim-to-Real Transfer (2021). Daniel E. Ho, Kanishka Rao, Zhuo Xu, Eric Jang, Mohi Khansari, Yunfei Bai
  • Contact-GraspNet: Efficient 6-DoF Grasp Generation in Cluttered Scenes (2021). Martin Sundermeyer, Arsalan Mousavian, Rudolph Triebel, Dieter Fox
  • Simple but Effective: CLIP Embeddings for Embodied AI (2022). Apoorv Khandelwal, Luca Weihs, Roozbeh Mottaghi, Aniruddha Kembhavi
  • R3M: A Universal Visual Representation for Robot Manipulation (2022). Suraj Nair, Aravind Rajeswaran, Vikash Kumar, Chelsea Finn, Abhinav Gupta