Leon Engländer

Commonly Cited References
All of the following references are cited 1 time each.

- Shashi Narayan, Shay B. Cohen, and Mirella Lapata (2018). Don’t Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization.
- Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman (2018). GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding.
- Erik F. Tjong Kim Sang (2002). Introduction to the CoNLL-2002 Shared Task: Language-Independent Named Entity Recognition.
- Jeremy Howard and Sebastian Ruder (2018). Universal Language Model Fine-tuning for Text Classification.
- Pranav Rajpurkar, Robin Jia, and Percy Liang (2018). Know What You Don’t Know: Unanswerable Questions for SQuAD.
- Neil Houlsby, Andrei Giurgiu, Stanisław Jastrzȩbski, Bruna Morrone, Quentin de Laroussilhe, Andréa Gesmundo, Mona Attariyan, and Sylvain Gelly (2019). Parameter-Efficient Transfer Learning for NLP.
- Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi (2019). Cosmos QA: Machine Reading Comprehension with Contextual Commonsense Reasoning.
- Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer (2020). BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension.
- Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, and Sylvain Gelly (2020). An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale.
- Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulić, Sebastian Ruder, Kyunghyun Cho, and Iryna Gurevych (2020). AdapterHub: A Framework for Adapting Transformers.
- Jonas Pfeiffer, Ivan Vulić, Iryna Gurevych, and Sebastian Ruder (2020). MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer.
- Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, and Iryna Gurevych (2021). AdapterFusion: Non-Destructive Task Composition for Transfer Learning.
- Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, and Jack Clark (2021). Learning Transferable Visual Models From Natural Language Supervision.
- J. Edward Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, and Weizhu Chen (2021). LoRA: Low-Rank Adaptation of Large Language Models.
- Xiang Lisa Li and Percy Liang (2021). Prefix-Tuning: Optimizing Continuous Prompts for Generation.
- Jonas Pfeiffer, Gregor Geigle, Aishwarya Kamath, Jan-Martin O. Steitz, Stefan Roth, Ivan Vulić, and Iryna Gurevych (2022). xGQA: Cross-Lingual Visual Question Answering.
- Xinyi Wang, Yulia Tsvetkov, Sebastian Ruder, and Graham Neubig (2021). Efficient Test Time Adapter Ensembling for Low-resource Language Varieties.
- Dan Friedman, Ben Dodge, and Danqi Chen (2021). Single-dataset Experts for Multi-dataset Question Answering.
- Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, and Daniel Cer (2022). SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer.
- Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, and Graham Neubig (2021). Towards a Unified View of Parameter-Efficient Transfer Learning.
- Yuning Mao, Lambert Mathias, Rui Hou, Amjad Almahairi, Hao Ma, Jiawei Han, Scott Yih, and Madian Khabsa (2022). UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning.
- Clifton Poth, Jonas Pfeiffer, Andreas Rücklé, and Iryna Gurevych (2021). What to Pre-Train on? Efficient Intermediate Task Selection.
- Brian Lester, Rami Al-Rfou, and Noah Constant (2021). The Power of Scale for Parameter-Efficient Prompt Tuning.
- Andreas Rücklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, and Iryna Gurevych (2021). AdapterDrop: On the Efficiency of Adapters in Transformers.
- Jonas Pfeiffer, Ivan Vulić, Iryna Gurevych, and Sebastian Ruder (2021). UNKs Everywhere: Adapting Multilingual Language Models to New Scripts.
- Haokun Liu, Derek Tam, Abdul Mohammed, Jay Mohta, Tenghao Huang, Mohit Bansal, and Colin Raffel (2022). Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning.
- Rabeeh Karimi Mahabadi, James Henderson, and Sebastian Ruder (2021). Compacter: Efficient Low-Rank Hypercomplex Adapter Layers.
- Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James H. Cross, Sebastian Riedel, and Mikel Artetxe (2022). Lifting the Curse of Multilinguality by Pre-training Modular Transformers.
- Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu (2019). Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer.
- Han Zhou, Xingchen Wan, Ivan Vulić, and Anna Korhonen (2023). AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning.
- Vladislav Lialin, Vijeta Deshpande, and Anna Rumshisky (2023). Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning.
- Tao Lei, Junwen Bai, Siddhartha Brahma, Joshua Ainslie, Kenton Lee, Yanqi Zhou, Nan Du, Vincent Y. Zhao, Yuexin Wu, and Bo Li (2023). Conditional Adapters: Parameter-efficient Transfer Learning with Fast Inference.
- Mohammed Sabry and Anja Belz (2023). PEFT-Ref: A Modular Reference Architecture and Typology for Parameter-Efficient Finetuning Techniques.
- Shengding Hu, Ning Ding, Weilin Zhao, Xingtai Lv, Zhen Zhang, Zhiyuan Liu, and Maosong Sun (2023). OpenDelta: A Plug-and-play Library for Parameter-efficient Adaptation of Pre-trained Models.
- Alexandra Chronopoulou, Matthew E. Peters, Alexander Fraser, and Jesse Dodge (2023). AdapterSoup: Weight Averaging to Improve Generalization of Pretrained Language Models.
- Zhiqiang Hu, Lei Wang, Yihuai Lan, Wanyu Xu, Ee-Peng Lim, Lidong Bing, Xing Xu, Soujanya Poria, and Roy Lee (2023). LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models.