Not All Layers Are Created Equal: Adaptive LoRA Ranks for Personalized Image Generation

Donald Shenaj, Federico Errica, Antonio Carta
March 23, 2026
cs.CV cs.AI cs.LG

Abstract

Low-Rank Adaptation (LoRA) is the de facto fine-tuning strategy for generating personalized images from pre-trained diffusion models. Choosing a good rank is critical, since it trades off performance against memory consumption, yet today the decision is often left to community consensus, regardless of the personalized subject's complexity. The reason is evident: the cost of selecting a good rank for each LoRA component is combinatorial, so practitioners resort to practical shortcuts such as fixing the same rank for all components. In this paper, we take a first step toward overcoming this challenge. Inspired by variational methods that learn an adaptive width for neural networks, we let the rank of each layer freely adapt during fine-tuning on a subject. We achieve this by imposing an importance ordering on the rank positions, effectively encouraging the creation of higher ranks only when strictly needed. Qualitatively and quantitatively, our approach, LoRA$^2$, achieves a competitive trade-off between DINO, CLIP-I, and CLIP-T scores across 29 subjects while requiring much less memory and a lower rank than high-rank LoRA variants. Code: https://github.com/donaldssh/NotAllLayersAreCreatedEqual.
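The mechanism described above, an importance ordering over rank positions so that higher ranks are used only when needed, can be illustrated with a minimal sketch. This is a hypothetical toy version, not the paper's variational formulation: the names `lora_forward`, `ordered_rank_penalty`, and the per-rank `gates` vector are illustrative inventions. Each rank-one component of the LoRA update gets a gate, and a regularization weight that grows with the rank index makes later components more expensive to keep open, so the effective rank of each layer can shrink independently.

```python
import numpy as np

def lora_forward(x, W, A, B, gates):
    """Forward pass of a LoRA-adapted linear layer with per-rank gates.

    Hypothetical sketch: W is the frozen weight (d_out, d_in);
    A (r, d_in) and B (d_out, r) are the LoRA factors; gates (r,)
    scales each rank-one component, so the effective rank is the
    number of non-zero gates.
    """
    delta = B @ np.diag(gates) @ A   # low-rank update, rank <= r
    return x @ (W + delta).T

def ordered_rank_penalty(gates, base=1e-3):
    """Penalty that grows with the rank index, imposing an importance
    ordering: keeping gate i open costs more than keeping gate i-1 open,
    so higher ranks emerge only when the subject demands them."""
    weights = base * (1.0 + np.arange(len(gates)))
    return float(np.sum(weights * np.abs(gates)))
```

During fine-tuning, the penalty would be added to the training loss so that gates for higher rank indices are driven toward zero unless the reconstruction of the personalized subject genuinely benefits from them.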


Code Implementations (6)

Official repository of "Not All Layers Are Created Equal: Adaptive LoRA Ranks for Personalized Image Generation" by D. Shenaj, F. Errica, A. Carta (2026).


Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in Pytorch


[PKPES01] Personalized Learning: One-size-fits-all education often leaves students behind. How can AI or adaptive learning techniques create personalized education experiences?


Turn your two-bit doodles into fine artworks with deep neural networks, generate seamless textures from photos, transfer style from one image to another, perform example-based upscaling, but wait... there's more! (An implementation of Semantic Style Transfer.)


Traditional study methods can be rigid and one-size-fits-all. This challenge calls on you to leverage AI to create dynamic, personalized learning tools that adapt to individual student needs. Choose a project below and build an application that makes studying smarter, not harder.


Ameena AI is an intelligent and personalized self-study assistant that uses NLP and machine learning to help students learn effectively. It offers concept explanations, summaries, quizzes, and adaptive study guidance, creating an interactive and empowering learning experience for all learners.

