Multimodal Learning Relevance: 9/10

SPATIALALIGN: Aligning Dynamic Spatial Relationships in Video Generation

Fengming Liu, Tat-Jen Cham, Chuanxia Zheng
arXiv: 2602.22745v1 Published: 2026-02-26 Updated: 2026-02-26

AI Summary

The SPATIALALIGN framework fine-tunes T2V models via DPO to improve the alignment between dynamic spatial relationships in generated videos and the text prompt.

Main Contributions

  • Proposed SPATIALALIGN, a self-improvement framework
  • Designed DSR-SCORE, a geometry-based evaluation metric
  • Constructed a text-video dataset covering dynamic spatial relationships
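To make the second contribution concrete, here is a minimal sketch of what a geometry-based dynamic spatial relationship score might look like: given per-frame centroids of two tracked objects, it measures how consistently a stated relation (e.g. "left of") holds across the video. The function names, relation vocabulary, and scoring rule are illustrative assumptions; the paper's actual DSR-SCORE formulation is not reproduced in this summary.

```python
# Hypothetical sketch of a geometry-based DSR score (assumed design,
# not the paper's exact metric). Inputs are per-frame (x, y) centroids
# of two objects, e.g. from an off-the-shelf tracker.

def relation_holds(center_a, center_b, relation):
    """Check a simple spatial relation between two (x, y) centroids."""
    ax, ay = center_a
    bx, by = center_b
    if relation == "left_of":
        return ax < bx
    if relation == "right_of":
        return ax > bx
    if relation == "above":
        return ay < by  # image coordinates: y grows downward
    if relation == "below":
        return ay > by
    raise ValueError(f"unknown relation: {relation}")

def dsr_score(track_a, track_b, relation):
    """Fraction of frames in which the specified relation is satisfied."""
    assert track_a and len(track_a) == len(track_b)
    hits = sum(relation_holds(a, b, relation)
               for a, b in zip(track_a, track_b))
    return hits / len(track_a)
```

Because the check is purely geometric, it avoids the cost and subjectivity of querying a VLM per frame, which is the step forward the abstract claims over prior evaluation pipelines.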

Methodology

Fine-tunes T2V models with zeroth-order regularized DPO, with the objective of improving alignment with the dynamic spatial relationships described in the text prompt.
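For orientation, the standard DPO objective that such fine-tuning builds on can be sketched as follows. This shows only the vanilla pairwise preference loss (Rafailov et al., 2023) for one preferred/rejected video pair; the paper's zeroth-order regularization term is not detailed in this summary, so it is deliberately omitted rather than guessed at.

```python
# Sketch of the standard DPO loss for a single preference pair.
# The zeroth-order regularizer from the paper is NOT included.
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one (preferred, rejected) sample pair.

    logp_w, logp_l         : policy log-likelihoods of the preferred
                             and rejected videos
    ref_logp_w, ref_logp_l : frozen reference-model log-likelihoods
    beta                   : strength of the implicit KL constraint
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log(sigmoid(margin)), written stably via log1p(exp(-margin))
    return math.log1p(math.exp(-margin))
```

In this setting, the preferred/rejected pairs would presumably be generated videos ranked by DSR-SCORE, which is what makes the framework self-improving: the model's own outputs, scored geometrically, supply the preference data.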

Original Abstract

Most text-to-video (T2V) generators prioritize aesthetic quality but often ignore the spatial constraints in the generated videos. In this work, we present SPATIALALIGN, a self-improvement framework that enhances T2V models' capability to depict Dynamic Spatial Relationships (DSR) specified in text prompts. We present a zeroth-order regularized Direct Preference Optimization (DPO) to fine-tune T2V models towards better alignment with DSR. Specifically, we design DSR-SCORE, a geometry-based metric that quantitatively measures the alignment between generated videos and the DSRs specified in prompts, a step forward from prior works that rely on VLMs for evaluation. We also construct a dataset of text-video pairs with diverse DSRs to facilitate the study. Extensive experiments demonstrate that our fine-tuned model significantly outperforms the baseline in spatial relationships. The code will be released at Link.

Tags

Text-to-Video Generation  Dynamic Spatial Relationships  Direct Preference Optimization  Video Evaluation

arXiv Categories

cs.CV