RoRA: Efficient Fine-Tuning of LLM with Reliability Optimization for Rank Adaptation
Fine-tuning helps large language models (LLMs) recover degraded information and enhance task performance. Although Low-Rank Adaptation (LoRA) is widely used and effective for fine-tuning, we have observed that its scaling factor can limit or even reduce performance as the rank increases. To address this issue, we propose RoRA (Rank-adaptive Reliability …
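For context on the scaling-factor issue mentioned above, the sketch below illustrates the standard LoRA update, in which the low-rank product is scaled by alpha / r, so the effective magnitude of the adapter's contribution shrinks as the rank r grows. This is a minimal, illustrative implementation assuming common LoRA conventions; the class name LoRALinear and all parameter names are hypothetical and not taken from the paper, and the sketch does not implement RoRA itself.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative standard-LoRA adapter: y = W x + (alpha / r) * B A x.

    Hypothetical example, not the paper's implementation.
    """
    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        # Frozen pretrained weight (stand-in for the original layer).
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)
        # Trainable low-rank factors A (r x d_in) and B (d_out x r); B starts at zero
        # so the adapter initially leaves the base model unchanged.
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))
        # Standard LoRA scaling factor: decreases as the rank r increases.
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.A.t() @ self.B.t())

# The effective update scale alpha / r decays with rank, e.g. for alpha = 16:
for r in (4, 8, 32, 128):
    print(r, 16.0 / r)   # 4.0, 2.0, 0.5, 0.125
```

This attenuation of the adapter update at larger ranks is one way to see why increasing the rank under the standard alpha / r scaling does not necessarily translate into better performance, which is the observation motivating RoRA.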