A Semantic-based Layer Freezing Approach to Efficient Fine-Tuning of
Language Models
Fine-tuning language models (LMs) is crucial for adapting them to downstream data and tasks. However, full fine-tuning is usually costly. Existing work, such as parameter-efficient fine-tuning (PEFT), often focuses on \textit{how to fine-tune} but neglects the question of \textit{where to fine-tune}. As a pioneering work on answering where to …