Multimodal Learning relevance: 9/10

RL-RIG: A Generative Spatial Reasoner via Intrinsic Reflection

Tianyu Wang, Zhiyuan Ma, Qian Wang, Xinyi Zhang, Xinwei Long, Bowen Zhou
arXiv: 2602.19974v1 Published: 2026-02-23 Updated: 2026-02-23

AI Summary

RL-RIG uses reinforcement learning together with a reflection mechanism to improve the spatial reasoning ability of image generation models.

Key Contributions

  • Proposes the RL-RIG framework, combining reinforcement learning with a reflection mechanism
  • Introduces a Generate-Reflect-Edit paradigm that mimics chain-of-thought reasoning
  • Develops Reflection-GRPO to train the VLM Actor and the Image Editor
  • Evaluates spatial consistency with Scene Graph IoU and a VLM-as-a-Judge strategy
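The summary does not define Scene Graph IoU; a plausible reading is a set-level intersection-over-union between the (subject, relation, object) triples parsed from the generated image and those expected from the prompt. The sketch below assumes that reading; the triple extraction step itself is out of scope here.

```python
def scene_graph_iou(pred_triples, ref_triples):
    """Set-level IoU over (subject, relation, object) triples.

    Hypothetical reading of the Scene Graph IoU metric: compare the
    triple set parsed from the generated image (pred) against the
    triple set expected from the prompt (ref).
    """
    pred, ref = set(pred_triples), set(ref_triples)
    union = pred | ref
    if not union:
        return 1.0  # two empty scene graphs are trivially identical
    return len(pred & ref) / len(union)

ref = {("cat", "on", "mat"), ("lamp", "left_of", "cat")}
pred = {("cat", "on", "mat"), ("lamp", "right_of", "cat")}
print(scene_graph_iou(pred, ref))  # 1 shared triple of 3 total → 0.3333333333333333
```

A mis-ordered spatial relation (here, "right_of" instead of "left_of") produces a distinct triple, so it lowers the score rather than being silently accepted.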

Methodology

The framework builds four components: a Diffuser, a Checker, an Actor, and an Inverse Diffuser. The Actor is trained with reinforcement learning, and a reflection mechanism refines the generation process, improving spatial reasoning.
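The Generate-Reflect-Edit paradigm described above can be sketched as a simple loop. All four callables below are hypothetical placeholders standing in for the paper's components, not its actual interfaces:

```python
def generate_reflect_edit(prompt, diffuser, checker, actor, editor, max_rounds=3):
    """Hypothetical sketch of a Generate-Reflect-Edit loop.

    Assumed placeholder interfaces (not from the paper):
      diffuser(prompt) -> image             # initial generation
      checker(image, prompt) -> bool        # is the spatial layout consistent?
      actor(image, prompt) -> str           # VLM Actor proposes an edit prompt
      editor(image, edit_prompt) -> image   # Image Editor applies the edit
    """
    image = diffuser(prompt)
    for _ in range(max_rounds):
        if checker(image, prompt):
            break  # layout judged consistent with the prompt; stop reflecting
        edit_prompt = actor(image, prompt)
        image = editor(image, edit_prompt)
    return image
```

The loop mirrors chain-of-thought reasoning: each round reflects on the current image and edits it, rather than regenerating from scratch.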

Original Abstract

Recent advancements in image generation have achieved impressive results in producing high-quality images. However, existing image generation models still generally struggle with a spatial reasoning dilemma, lacking the ability to accurately capture fine-grained spatial relationships from the prompt and correctly generate scenes with structural integrity. To mitigate this dilemma, we propose RL-RIG, a Reinforcement Learning framework for Reflection-based Image Generation. Our architecture comprises four primary components: Diffuser, Checker, Actor, and Inverse Diffuser, following a Generate-Reflect-Edit paradigm to spark the Chain of Thought reasoning ability in image generation for addressing the dilemma. To equip the model with better intuition over generation trajectories, we further develop Reflection-GRPO to train the VLM Actor for edit prompts and the Image Editor for better image quality under a given prompt, respectively. Unlike traditional approaches that solely produce visually stunning yet structurally unreasonable content, our evaluation metrics prioritize spatial accuracy, utilizing Scene Graph IoU and employing a VLM-as-a-Judge strategy to assess the spatial consistency of generated images on LAION-SG dataset. Experimental results show that RL-RIG outperforms existing state-of-the-art open-source models by up to 11% in terms of controllable and precise spatial reasoning in image generation.
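The abstract does not detail Reflection-GRPO, but GRPO (Group Relative Policy Optimization) variants typically normalize each sampled trajectory's reward against its sampling group rather than using a learned value function. A minimal sketch of that standard group-relative advantage, assuming Reflection-GRPO keeps this core step:

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantages: normalize each trajectory's reward by the
    mean and std of its sampling group. This is the standard GRPO
    formulation; how Reflection-GRPO extends it for reflection-based
    edits is not specified in this summary."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mean) / std for r in rewards]

# Four rollouts for one prompt, rewarded 1.0 (spatially consistent) or 0.0.
print(group_relative_advantages([1.0, 0.0, 1.0, 0.0]))  # [1.0, -1.0, 1.0, -1.0]
```

Trajectories that beat their group's average get positive advantages, which is what lets the VLM Actor and Image Editor be trained from relative comparisons of sampled edit outcomes.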

Tags

Image Generation · Spatial Reasoning · Reinforcement Learning · Diffusion Models · Vision-Language Models

arXiv Category

cs.CV