To Tune or Not to Tune? Adapting Pretrained Representations to Diverse Tasks
Matthew E. Peters, Sebastian Ruder, Noah A. Smith
Type: Article
Publication Date: 2019-01-01
Citations: 385
DOI: https://doi.org/10.18653/v1/w19-4302
Locations: arXiv (Cornell University)
Similar Works
To Tune or Not to Tune? Adapting Pretrained Representations to Diverse Tasks (2019). Matthew E. Peters, Sebastian Ruder, Noah A. Smith
Parameter-Efficient Transfer Learning for NLP (2019). Neil Houlsby, Andrei Giurgiu, Stanisław Jastrzȩbski, Bruna Morrone, Quentin de Laroussilhe, Andréa Gesmundo, Mona Attariyan, Sylvain Gelly
When to Use Multi-Task Learning vs Intermediate Fine-Tuning for Pre-Trained Encoder Transfer Learning (2022). Orion Weller, Kevin Seppi, Matt Gardner
On the Domain Adaptation and Generalization of Pretrained Language Models: A Survey (2022). Xu Guo, Han Yu
On the Effectiveness of Adapter-based Tuning for Pretrained Language Model Adaptation (2021). Ruidan He, Linlin Liu, Hai Ye, Qingyu Tan, Bosheng Ding, Liying Cheng, Jia-Wei Low, Lidong Bing, Luo Si
Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data (2020). Jonathan Pilault, Amine Elhattami, Christopher Pal
AdapterFusion: Non-Destructive Task Composition for Transfer Learning (2020). Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, Iryna Gurevych
Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data (2021). Jonathan Pilault, Amine El Hattami, Christopher Pal
Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks (2020). Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, Noah A. Smith
Parameter-Efficient Transfer Learning with Diff Pruning (2020). Demi Guo, Alexander M. Rush, Yoon Kim
Neural Unsupervised Domain Adaptation in NLP—A Survey (2020). Alan Ramponi, Barbara Plank
AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks (2022). Chin-Lun Fu, Zih-Ching Chen, Yun-Ru Lee, Hung-yi Lee
Works That Cite This (206)
The EarlyBIRD Catches the Bug: On Exploiting Early Layers of Encoder Models for More Efficient Code Classification (2023). Anastasiia Grishina, Max Hort, Leon Moonen
Sense representations for Portuguese: experiments with sense embeddings and deep neural language models (2021). Jéssica Rodrigues da Silva, Helena de Medeiros Caseli
Reliable Gradient-free and Likelihood-free Prompt Tuning (2023). Maohao Shen, Soumya K. Ghosh, Prasanna Sattigeri, Subhro Das, Yuheng Bu, Gregory W. Wornell
A Closer Look at How Fine-tuning Changes BERT (2022). Yichu Zhou, Vivek Srikumar
Adapting by Pruning: A Case Study on BERT (2021). Yang Gao, Nicolò Colombo, Wei Wang
Unsupervised Domain Adaptation of Contextualized Embeddings for Sequence Labeling (2019). Xiaochuang Han, Jacob Eisenstein
Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks (2021). Rabeeh Karimi Mahabadi, Sebastian Ruder, Mostafa Dehghani, James Henderson
Investigating Entity Knowledge in BERT with Simple Neural End-To-End Entity Linking (2019). Samuel Broscheit
AdapterFusion: Non-Destructive Task Composition for Transfer Learning (2020). Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, Iryna Gurevych
Towards non-toxic landscapes: Automatic toxic comment detection using DNN (2019). Ashwin Geet d'Sa, Irina Illina, Dominique Fohr
Works Cited by This (34)
Convolutional Neural Networks for Sentence Classification (2014). Yoon Kim
Neural Architectures for Named Entity Recognition (2016). Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer
Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks (2016). Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, Yoav Goldberg
Enhanced LSTM for Natural Language Inference (2017). Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, Diana Inkpen
An efficient framework for learning sentence representations (2018). Lajanugen Logeswaran, Honglak Lee
AllenNLP: A Deep Semantic Natural Language Processing Platform (2018). Matt Gardner, Joël Grus, Mark E Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson Liu, Matthew E. Peters, Michael Schmitz, Luke Zettlemoyer
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding (2018). Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
Do Better ImageNet Models Transfer Better? (2019). Simon Kornblith, Jonathon Shlens, Quoc V. Le
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (2018). Jacob Devlin, Ming‐Wei Chang, Kenton Lee, Kristina Toutanova