CRAFT: Aligning Diffusion Models with Fine-Tuning Is Easier Than You Think
AI Summary
CRAFT efficiently aligns diffusion models with only a small amount of data, via Composite Reward Filtering and an enhanced form of SFT.
Main Contributions
- Proposes the CRAFT framework, which reduces data dependency and computational cost
- Proposes the Composite Reward Filtering (CRF) technique for constructing a high-quality dataset
- Theoretically proves that CRAFT optimizes a lower bound of group-based reinforcement learning (a schematic form of this connection is sketched after this list)
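The exact statement and proof of this bound are in the paper; as a rough illustration only (our notation and simplifications, not the authors' theorem), a group-based RL method maximizes an advantage-weighted log-likelihood over a group of $G$ generations $x_1,\dots,x_G$ per prompt $c$, while SFT on reward-filtered data replaces the group-normalized advantage $A_i$ with a binary keep/discard weight:

$$
J_{\mathrm{RL}}(\theta)=\mathbb{E}\left[\frac{1}{G}\sum_{i=1}^{G} A_i \log \pi_\theta(x_i\mid c)\right],\qquad A_i=\frac{r(x_i,c)-\bar r}{\sigma_r},
$$

$$
J_{\mathrm{SFT}}(\theta)=\mathbb{E}\left[\frac{1}{|S|}\sum_{i\in S}\log \pi_\theta(x_i\mid c)\right],\qquad S=\{\,i : r(x_i,c)\ge \tau\,\},
$$

where $\bar r$ and $\sigma_r$ are the per-group mean and standard deviation of the reward $r$, and $\tau$ is a filtering threshold. Intuitively, filtering keeps only high-reward (positive-advantage) generations, so maximizing $J_{\mathrm{SFT}}$ pushes the policy in the same direction as $J_{\mathrm{RL}}$; this is the flavor of lower-bound relationship the paper formalizes.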
Methodology
Composite Reward Filtering is first used to screen for high-quality data; an enhanced supervised fine-tuning is then performed on the filtered set, which optimizes a lower bound of the group-based reinforcement learning objective.
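A minimal Python sketch of this two-stage recipe, assuming hypothetical reward scorers and a generic denoising network; the names (`composite_reward`, `crf_filter`, `sft_step`, `keep_ratio`) are illustrative placeholders, not the paper's actual API:

```python
import torch.nn.functional as F

# Stage 1: Composite Reward Filtering (CRF).
# Each candidate generation carries per-criterion scores from reward models
# (e.g., an aesthetic scorer and a text-image alignment scorer); we combine
# them into one composite score and keep only the top-scoring fraction.
def composite_reward(scores, weights):
    """Weighted sum of per-criterion reward scores."""
    return sum(weights[name] * scores[name] for name in weights)

def crf_filter(candidates, weights, keep_ratio=0.1):
    """Rank candidates by composite reward; keep the best keep_ratio."""
    ranked = sorted(
        candidates,
        key=lambda c: composite_reward(c["scores"], weights),
        reverse=True,
    )
    return ranked[: max(1, int(keep_ratio * len(ranked)))]

# Stage 2: SFT on the filtered set. For a diffusion model this is the
# usual denoising objective, trained only on high-reward samples.
def sft_step(denoiser, noisy_latents, timesteps, cond, noise_target, optimizer):
    pred = denoiser(noisy_latents, timesteps, cond)
    loss = F.mse_loss(pred, noise_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

For example, `crf_filter(samples, {"aesthetic": 0.5, "alignment": 0.5}, keep_ratio=0.05)` would keep the top 5% of generations by composite reward before fine-tuning on them.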
Original Abstract
Aligning diffusion models has achieved remarkable breakthroughs in generating high-quality, human preference-aligned images. Existing techniques, such as supervised fine-tuning (SFT) and DPO-style preference optimization, have become principled tools for fine-tuning diffusion models. However, SFT relies on high-quality images that are costly to obtain, while DPO-style methods depend on large-scale preference datasets, which are often inconsistent in quality. Beyond data dependency, these methods are further constrained by computational inefficiency. To address these two challenges, we propose Composite Reward Assisted Fine-Tuning (CRAFT), a lightweight yet powerful fine-tuning paradigm that requires significantly reduced training data while maintaining computational efficiency. It first leverages a Composite Reward Filtering (CRF) technique to construct a high-quality and consistent training dataset and then performs an enhanced variant of SFT. We also theoretically prove that CRAFT actually optimizes the lower bound of group-based reinforcement learning, establishing a principled connection between SFT with selected data and reinforcement learning. Our extensive empirical results demonstrate that CRAFT with only 100 samples can easily outperform recent SOTA preference optimization methods with thousands of preference-paired samples. Moreover, CRAFT can even achieve 11-220$\times$ faster convergence than the baseline preference optimization methods, highlighting its extremely high efficiency.