Reward Fine-Tuning Two-Step Diffusion Models via Learning Differentiable Latent-Space Surrogate Reward
Main Authors: | |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Abstract: | Recent research has shown that fine-tuning diffusion models (DMs) with
arbitrary rewards, including non-differentiable ones, is feasible with
reinforcement learning (RL) techniques, enabling flexible model alignment.
However, applying existing RL methods to timestep-distilled DMs is challenging
for ultra-fast ($\le2$-step) image generation. Our analysis identifies several
limitations of policy-based RL methods such as PPO or DPO for this goal. Based
on these insights, we propose fine-tuning DMs with learned differentiable
surrogate rewards. Our method, named LaSRO, learns surrogate reward models in
the latent space of SDXL to convert arbitrary rewards into differentiable ones
for efficient reward gradient guidance. LaSRO leverages pre-trained latent DMs
for reward modeling and specifically targets $\le2$-step image generation for
reward optimization, enhancing generalizability and efficiency. LaSRO is
effective and stable at improving ultra-fast image generation under different
reward objectives, outperforming popular RL methods including PPO and DPO. We
further show LaSRO's connection to value-based RL, providing theoretical
insights. See our webpage at https://sites.google.com/view/lasro. |
---|---|
DOI: | 10.48550/arxiv.2411.15247 |
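
To make the abstract's two ingredients more concrete, the PyTorch sketch below illustrates (1) fitting a small surrogate reward model on SDXL-style latents so that an arbitrary, possibly non-differentiable reward becomes differentiable, and (2) fine-tuning a $\le2$-step generator by backpropagating the surrogate's gradient. This is a minimal sketch under assumed shapes and names; `LatentSurrogateReward`, `fit_surrogate_step`, and `finetune_generator_step` are hypothetical and not the LaSRO implementation from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentSurrogateReward(nn.Module):
    """Hypothetical surrogate reward head scoring SDXL-style latents (B, 4, h, w)."""

    def __init__(self, channels: int = 4, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, stride=2, padding=1), nn.SiLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(hidden, 1),
        )

    def forward(self, latents: torch.Tensor) -> torch.Tensor:
        # Scalar reward prediction per latent sample.
        return self.net(latents).squeeze(-1)


def fit_surrogate_step(reward_model, optimizer, latents, blackbox_rewards):
    """Regress the surrogate onto values of an arbitrary (non-differentiable) reward."""
    pred = reward_model(latents)
    loss = F.mse_loss(pred, blackbox_rewards)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


def finetune_generator_step(generator, reward_model, optimizer, noise, cond):
    """Fine-tune a <=2-step generator by ascending the surrogate's reward gradient."""
    # reward_model is assumed frozen here (its parameters excluded from `optimizer`).
    latents = generator(noise, cond)        # one- or two-step latent sampler
    loss = -reward_model(latents).mean()    # maximize predicted reward
    optimizer.zero_grad()
    loss.backward()                         # gradient flows through the generator
    optimizer.step()
    return -loss.item()
```

As the abstract notes, the paper initializes reward modeling from pre-trained latent DMs rather than a small CNN head as sketched here; the regression target is simply whatever black-box reward is being optimized.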