Modular Energy Steering for Safe Text-to-Image Generation with Foundation Models
AI Summary
Proposes an inference-time energy-guidance framework for safe text-to-image generation that leverages frozen pretrained foundation models.
Main Contributions
- Proposes an energy-based framework for safe text-to-image generation
- Repurposes vision-language foundation models as safety supervision signals
- Achieves modular, training-free safety control
Methodology
Gradient feedback from a vision-language foundation model is injected through clean latent estimates at each sampling step, casting safety control as an energy-guided sampling problem.
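The mechanism described above can be sketched in code. The following is a minimal, hypothetical illustration (not the paper's implementation): at each denoising step, a clean latent estimate is formed from the predicted noise, an energy is evaluated on it, and its gradient with respect to the noisy latent shifts the noise prediction, classifier-guidance style. The `energy` function here is a toy quadratic stand-in for the frozen vision-language scorer, and all names and scaling choices are assumptions.

```python
import torch

def energy(x0_hat, unsafe_dir):
    # Toy stand-in for a frozen VLM scorer: energy is high when the clean
    # latent estimate aligns with a hypothetical "unsafe" direction.
    return (x0_hat * unsafe_dir).sum()

def guided_ddim_step(x_t, denoiser, alpha_bar_t, alpha_bar_prev,
                     unsafe_dir, scale=1.0):
    """One DDIM-style step with energy guidance through the clean estimate."""
    x_t = x_t.detach().requires_grad_(True)
    eps = denoiser(x_t)  # predicted noise at this step
    # Clean latent estimate ("x0-hat") from the current noisy latent.
    x0_hat = (x_t - (1 - alpha_bar_t).sqrt() * eps) / alpha_bar_t.sqrt()
    # Backpropagate the energy through x0-hat to the noisy latent.
    grad = torch.autograd.grad(energy(x0_hat, unsafe_dir), x_t)[0]
    # Shift the noise prediction along the energy gradient.
    eps_g = eps + scale * (1 - alpha_bar_t).sqrt() * grad
    x0_g = (x_t - (1 - alpha_bar_t).sqrt() * eps_g) / alpha_bar_t.sqrt()
    # Deterministic DDIM update toward the previous timestep.
    x_prev = alpha_bar_prev.sqrt() * x0_g + (1 - alpha_bar_prev).sqrt() * eps_g
    return x_prev.detach()
```

Because the generator's weights are never touched, swapping the energy function (e.g. for a different safety concept or a multi-target combination) changes the steering behavior without any retraining, which is the modularity the summary refers to.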
Original Abstract
Controlling the behavior of text-to-image generative models is critical for safe and practical deployment. Existing safety approaches typically rely on model fine-tuning or curated datasets, which can degrade generation quality or limit scalability. We propose an inference-time steering framework that leverages gradient feedback from frozen pretrained foundation models to guide the generation process without modifying the underlying generator. Our key observation is that vision-language foundation models encode rich semantic representations that can be repurposed as off-the-shelf supervisory signals during generation. By injecting such feedback through clean latent estimates at each sampling step, our method formulates safety steering as an energy-based sampling problem. This design enables modular, training-free safety control that is compatible with both diffusion and flow-matching models and can generalize across diverse visual concepts. Experiments demonstrate state-of-the-art robustness against NSFW red-teaming benchmarks and effective multi-target steering, while preserving high generation quality on benign non-targeted prompts. Our framework provides a principled approach for utilizing foundation models as semantic energy estimators, enabling reliable and scalable safety control for text-to-image generation.