LXMERT: Learning Cross-Modality Encoder Representations from Transformers

Type: Article

Publication Date: 2019-01-01

Citations: 1890

DOI: https://doi.org/10.18653/v1/d19-1514

Locations

  • arXiv (Cornell University)

Similar Works

  • MetaXL: Meta Representation Transformation for Low-resource Cross-lingual Learning (2021). Mengzhou Xia, Guo‐qing Zheng, Subhabrata Mukherjee, Milad Shokouhi, Graham Neubig, Ahmed Hassan Awadallah
  • Lightweight Cross-Lingual Sentence Representation Learning (2021). Zhuoyuan Mao, Prakhar Gupta, Chenhui Chu, Martin Jaggi, Sadao Kurohashi
  • TxT: Crossmodal End-to-End Learning with Transformers (2021). Jan-Martin O. Steitz, Jonas Pfeiffer, Iryna Gurevych, Stefan Roth
  • MT-SLVR: Multi-Task Self-Supervised Learning for Transformation In(Variant) Representations (2023). Calum Heggan, Tim Hospedales, Sam Budgett, Mehrdad Yaghoobi
  • Hitachi at MRP 2019: Unified Encoder-to-Biaffine Network for Cross-Framework Meaning Representation Parsing (2019). Yuta Koreeda, Gaku Morio, Terufumi Morishita, Hiroaki Ozaki, Kohsuke Yanai
  • Unsupervised Cross-lingual Representation Learning at Scale (2020). Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Édouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov
  • Mittens: An Extension of GloVe for Learning Domain-Specialized Representations (2018). Nicholas Dingwall, Christopher Potts
  • Unicoder: A Universal Language Encoder by Pre-training with Multiple Cross-lingual Tasks (2019). Haoyang Huang, Yaobo Liang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, Ming Zhou
  • Transferable Neural Projection Representations (2019). Chinnadhurai Sankar, Sujith Ravi, Zornitsa Kozareva
  • XNLI: Evaluating Cross-lingual Sentence Representations (2018). Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, Veselin Stoyanov
  • Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning (2023). Clifton Poth, Hannah Sterz, Indraneil Paul, Sukannya Purkayastha, Leon Engländer, Timo Imhof, Ivan Vulić, Sebastian Ruder, Iryna Gurevych, Jonas Pfeiffer
  • How Do Multilingual Encoders Learn Cross-lingual Representation? (2022). Shijie Wu
  • Beyond English-Centric Bitexts for Better Multilingual Language Representation Learning (2023). Barun Patra, Saksham Singhal, Shaohan Huang, Zewen Chi, Li Dong, Furu Wei, Vishrav Chaudhary, Song Xia
  • Lifting the Curse of Multilinguality by Pre-training Modular Transformers (2022). Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James H. Cross, Sebastian Riedel, Mikel Artetxe
  • DiTTO: A Feature Representation Imitation Approach for Improving Cross-Lingual Transfer (2023). Shanu Kumar, Soujanya Abbaraju, Sandipan Dandapat, Sunayana Sitaram, Monojit Choudhury
  • ShanghaiTech at MRP 2019: Sequence-to-Graph Transduction with Second-Order Edge Inference for Cross-Framework Meaning Representation Parsing (2019). Xinyu Wang, Yixian Liu, Zixia Jia, Chengyue Jiang, Kewei Tu
  • ShanghaiTech at MRP 2019: Sequence-to-Graph Transduction with Second-Order Edge Inference for Cross-Framework Meaning Representation Parsing (2020). Xinyu Wang, Yixian Liu, Zixia Jia, Chengyue Jiang, Kewei Tu
  • KILM: Knowledge Injection into Encoder-Decoder Language Models (2023). Yan Xu, Mahdi Namazifar, Devamanyu Hazarika, Aishwarya Padmakumar, Yang Liu, Dilek Hakkani‐Tür
  • Probabilistic Knowledge Transfer for Deep Representation Learning (2018). Nikolaos Passalis, Anastasios Tefas
  • Generative Pre-trained Transformer: A Comprehensive Review on Enabling Technologies, Potential Applications, Emerging Challenges, and Future Directions (2023). Gokul Yenduri, M. Ramalingam, Чурашов А.Г., Y. Supriya, Gautam Srivastava, Praveen Kumar Reddy Maddikunta, Deepti Raj Gurrammagari, Rutvij H. Jhaveri, B. Prabadevi, Weizheng Wang

Works That Cite This (1032)

  • Iconary: A Pictionary-Based Game for Testing Multimodal Communication with Drawings and Text (2021). Christopher Clark, Jordi Salvador, Dustin Schwenk, Derrick Bonafilia, Mark Yatskar, Eric Kolve, Alvaro Herrasti, Jonghyun Choi, Sachin Mehta, Sam Skjonsberg
  • Revisiting Spatio-Temporal Layouts for Compositional Action Recognition (2021). Gorjan Radevski, Marie‐Francine Moens, Tinne Tuytelaars
  • Resource Optimization for Semantic-Aware Networks With Task Offloading (2024). Zelin Ji, Zhijin Qin, Xiaoming Tao, Zhu Han
  • Learning More May Not Be Better: Knowledge Transferability in Vision-and-Language Tasks (2024). Tianwei Chen, Noa García, Mayu Otani, Chenhui Chu, Yuta Nakashima, Hajime Nagahara
  • Human-Centric Spatio-Temporal Video Grounding With Visual Transformers (2021). Zongheng Tang, Yue Liao, Si Liu, Guanbin Li, Xiaojie Jin, Hongxu Jiang, Qian Yu, Dong Xu
  • Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification (2023). Yue Yang, Artemis Panagopoulou, Shenghao Zhou, Daniel Jin, Chris Callison-Burch, Mark Yatskar
  • Multimodal Attention-Based Deep Learning for Alzheimer’s Disease Diagnosis (2022). Michal Golovanevsky, Carsten Eickhoff, Ritambhara Singh
  • Survey: Transformer Based Video-Language Pre-training (2022). Ludan Ruan, Qin Jin
  • Detecting Harmful Content on Online Platforms: What Platforms Need vs. Where Research Efforts Go (2023). Arnav Arora, Preslav Nakov, Momchil Hardalov, Sheikh Muhammad Sarwar, Vibha Nayak, Yoan Dinkov, Dimitrina Zlatkova, Kyle Dent, Ameya Bhatawdekar, Guillaume Bouchard
  • It’s Just a Matter of Time: Detecting Depression with Time-Enriched Multimodal Transformers (2023). Ana-Maria Bucur, Adrian Cosma, Paolo Rosso, Liviu P. Dinu