Abstract
Fine-tuning large-scale Transformer-based models is computationally expensive due to their enormous parameter counts. Low-Rank Adaptation (LoRA) substantially reduces the number of trainable parameters while maintaining performance; however, identifying the optimal LoRA configuration (such as the rank r, scaling factor α, and insertion positions) remains challenging. To address this issue, we propose a zero-shot proxy metric, termed Gradient Projection Score (GPS), which enables rapid evaluation of candidate configurations using only a few forward and backward passes. Building upon this metric, we further introduce EvoLoRA, a zero-shot evolutionary architecture search method that jointly optimises three objectives: the performance proxy, evaluation stability, and trainable parameter count. EvoLoRA automatically discovers effective LoRA configurations across different models and datasets. Experimental results demonstrate that GPS is strongly correlated with final model performance; moreover, on tasks such as image classification and object detection, EvoLoRA markedly reduces search and training costs while generally outperforming other fine-tuning methods and manually designed LoRA configurations.
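For readers unfamiliar with the LoRA configuration space being searched, the following is a minimal sketch of the standard LoRA reparameterisation (frozen weight plus a low-rank update scaled by α/r). It illustrates only the generic technique and the role of r and α, not the GPS metric or the EvoLoRA search procedure; the dimensions and initialisations chosen here are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not taken from the paper.
d_out, d_in, r, alpha = 8, 8, 2, 4

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (init 0)

def lora_forward(x):
    # Effective weight: W + (alpha / r) * B @ A; only A and B are trained.
    return (W + (alpha / r) * B @ A) @ x

x = rng.standard_normal(d_in)

# With B initialised to zero, the low-rank branch contributes nothing,
# so the adapted layer initially matches the frozen layer exactly.
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameters: r * (d_in + d_out) for LoRA versus
# d_in * d_out for full fine-tuning of this layer.
print(A.size + B.size, "trainable vs", W.size, "full")
```

At realistic model scales the ratio r·(d_in + d_out) / (d_in·d_out) is tiny, which is the parameter saving that makes the configuration search over r, α, and insertion positions worthwhile.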