LoRA Done RITE: Robust Invariant Transformation Equilibration for LoRA Optimization
Low-rank adaptation (LoRA) is a widely used parameter-efficient finetuning method for LLMs that reduces memory requirements. However, current LoRA optimizers lack transformation invariance, meaning the actual updates to the weights depend on how the two LoRA factors are scaled or rotated. This deficiency leads to inefficient learning and suboptimal solutions …
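To make the invariance failure concrete, here is a brief first-order sketch (our illustration using the standard LoRA parameterization $W = W_0 + BA$, learning rate $\eta$, and full-weight gradient $G = \nabla_W \mathcal{L}$; the derivation is ours and not quoted from the paper). For any invertible matrix $S$, the reparameterized factors $\tilde{B} = BS$ and $\tilde{A} = S^{-1}A$ represent the same weights, since $\tilde{B}\tilde{A} = BA$. Yet one step of plain gradient descent on the factors, using $\nabla_B \mathcal{L} = G A^\top$ and $\nabla_A \mathcal{L} = B^\top G$, changes the weights by
\[
\Delta W \;\approx\; -\eta \left( G A^\top A + B B^\top G \right),
\]
whereas the same step applied to the reparameterized factors gives
\[
\Delta \tilde{W} \;\approx\; -\eta \left( G A^\top S^{-\top} S^{-1} A + B S S^\top B^\top G \right),
\]
which depends on $S$. Two parameterizations of the identical function therefore produce different effective weight updates, which is the lack of transformation invariance the abstract refers to.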