InverFill: One-Step Inversion for Enhanced Few-Step Diffusion Inpainting
AI Summary
InverFill injects semantic information via one-step inversion to improve the inpainting quality of few-step diffusion models.
Key Contributions
- Proposes InverFill, a one-step inversion method that improves the inpainting results of few-step diffusion models
- Performs inpainting with text-to-image models, without training a dedicated inpainting model
- Matches specialized inpainting models at low NFEs
Methodology
Semantic information from the masked input image is injected into the initial noise via one-step inversion; inpainting is then performed with a blended sampling pipeline built on a few-step text-to-image model.
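The pipeline above can be sketched in code. This is a minimal, hypothetical illustration, not the paper's implementation: `one_step_inversion` stands in for InverFill's semantically aligned noise initialization, `denoise_step` is a placeholder for the few-step text-to-image denoiser, and the blending schedule is simplified.

```python
import numpy as np

def one_step_inversion(masked_image, noise_level=1.0, rng=None):
    # Hypothetical stand-in for InverFill's one-step inversion: instead of
    # pure Gaussian noise, start from noise mixed with the masked image so
    # the initial latent already carries its semantics.
    rng = np.random.default_rng(0) if rng is None else rng
    return masked_image + noise_level * rng.standard_normal(masked_image.shape)

def blended_sampling(image, mask, denoise_step, timesteps, rng=None):
    """Few-step blended sampling sketch: after each denoising step, re-impose
    the known background so only the masked region is synthesized.
    `mask` is 1 where content must be inpainted, 0 where it is known."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = one_step_inversion(image * (1 - mask), rng=rng)
    for t in timesteps:                       # e.g. 4 steps for few-step models
        x = denoise_step(x, t)                # placeholder for the T2I denoiser
        noisy_bg = image + t * rng.standard_normal(image.shape)
        x = mask * x + (1 - mask) * noisy_bg  # blend: keep the known background
    return x
```

With a final timestep of 0, the blend returns the known background exactly, which is what enforces harmonization between the background and the inpainted region.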
Original Abstract
Recent diffusion-based models achieve photorealism in image inpainting but require many sampling steps, limiting practical use. Few-step text-to-image models offer faster generation, but naively applying them to inpainting yields poor harmonization and artifacts between the background and inpainted region. We trace the cause to random Gaussian noise initialization, which under low numbers of function evaluations (NFEs) causes semantic misalignment and reduced fidelity. To overcome this, we propose InverFill, a one-step inversion method tailored for inpainting that injects semantic information from the input masked image into the initial noise, enabling high-fidelity few-step inpainting. Instead of training inpainting models, InverFill leverages few-step text-to-image models in a blended sampling pipeline with semantically aligned noise as input, significantly improving vanilla blended sampling and even matching specialized inpainting models at low NFEs. Moreover, InverFill does not require real-image supervision and only adds minimal inference overhead. Extensive experiments show that InverFill consistently boosts baseline few-step models, improving image quality and text coherence without costly retraining or heavy iterative optimization.