Multimodal Learning (Relevance: 9/10)

3DCity-LLM: Empowering Multi-modality Large Language Models for 3D City-scale Perception and Understanding

Yiping Chen, Jinpeng Li, Wenyu Ke, Yang Luo, Jie Ouyang, Zhongjie He, Li Liu, Hongchao Fan, Hao Wu
arXiv: 2603.23447v1 Published: 2026-03-24 Updated: 2026-03-24

AI Summary

Proposes the 3DCity-LLM framework for 3D city-scale vision-language perception and understanding, together with a large-scale dataset.

Main Contributions

  • Proposes the 3DCity-LLM framework
  • Constructs the high-quality 3DCity-LLM-1.2M dataset
  • Proposes an evaluation protocol based on text similarity and LLM-based assessment

Methodology

Adopts a coarse-to-fine feature encoding strategy with three parallel branches that process target-object, inter-object relationship, and global-scene features, followed by large-scale training (see the sketch below).
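To make the three-branch, coarse-to-fine design concrete, here is a minimal sketch of how such an encoder could map per-object, relational, and scene features into LLM token embeddings. All module names, dimensions, and the simple concatenation-based fusion are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a coarse-to-fine, three-branch feature encoder.
# Names, dimensions, and the fusion scheme are assumptions, not the paper's code.
import torch
import torch.nn as nn


class BranchEncoder(nn.Module):
    """Projects features from one granularity into LLM-compatible token embeddings."""

    def __init__(self, in_dim: int, hidden_dim: int, llm_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, llm_dim),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_tokens, in_dim) -> (batch, num_tokens, llm_dim)
        return self.mlp(feats)


class CoarseToFineCityEncoder(nn.Module):
    """Three parallel branches: target object, inter-object relations, global scene."""

    def __init__(self, in_dim: int = 256, hidden_dim: int = 512, llm_dim: int = 4096):
        super().__init__()
        self.object_branch = BranchEncoder(in_dim, hidden_dim, llm_dim)
        self.relation_branch = BranchEncoder(in_dim, hidden_dim, llm_dim)
        self.scene_branch = BranchEncoder(in_dim, hidden_dim, llm_dim)

    def forward(self, obj_feats, rel_feats, scene_feats) -> torch.Tensor:
        # Order tokens coarse-to-fine and concatenate them as a visual prefix
        # that a downstream LLM could attend to alongside the text prompt.
        return torch.cat(
            [
                self.scene_branch(scene_feats),    # global scene context (coarse)
                self.relation_branch(rel_feats),   # inter-object relationships
                self.object_branch(obj_feats),     # target-object details (fine)
            ],
            dim=1,
        )


if __name__ == "__main__":
    enc = CoarseToFineCityEncoder()
    obj = torch.randn(2, 16, 256)    # per-object features
    rel = torch.randn(2, 32, 256)    # pairwise relation features
    scene = torch.randn(2, 8, 256)   # pooled global scene features
    print(enc(obj, rel, scene).shape)  # torch.Size([2, 56, 4096])
```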

Original Abstract

While multi-modality large language models excel in object-centric or indoor scenarios, scaling them to 3D city-scale environments remains a formidable challenge. To bridge this gap, we propose 3DCity-LLM, a unified framework designed for 3D city-scale vision-language perception and understanding. 3DCity-LLM employs a coarse-to-fine feature encoding strategy comprising three parallel branches for target object, inter-object relationship, and global scene. To facilitate large-scale training, we introduce the 3DCity-LLM-1.2M dataset that comprises approximately 1.2 million high-quality samples across seven representative task categories, ranging from fine-grained object analysis to multi-faceted scene planning. This strictly quality-controlled dataset integrates explicit 3D numerical information and diverse user-oriented simulations, enriching the question-answering diversity and realism of urban scenarios. Furthermore, we apply a multi-dimensional protocol based on text-similarity metrics and LLM-based semantic assessment to ensure faithful and comprehensive evaluations for all methods. Extensive experiments on two benchmarks demonstrate that 3DCity-LLM significantly outperforms existing state-of-the-art methods, offering a promising and meaningful direction for advancing spatial reasoning and urban intelligence. The source code and dataset are available at https://github.com/SYSU-3DSTAILab/3D-City-LLM.
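The abstract describes an evaluation protocol that combines text-similarity metrics with LLM-based semantic assessment. Below is a minimal sketch of what such a two-dimensional evaluation loop could look like; the similarity metric, the judge prompt, the 0-10 scoring scale, and the helper names are assumptions for illustration, not the paper's protocol.

```python
# Hypothetical two-dimensional evaluation: surface text similarity plus an
# LLM-judge semantic score. Prompt wording and scale are illustrative assumptions.
from difflib import SequenceMatcher


def text_similarity(prediction: str, reference: str) -> float:
    """Surface-level similarity in [0, 1] (stand-in for BLEU/ROUGE-style metrics)."""
    return SequenceMatcher(None, prediction.lower(), reference.lower()).ratio()


def llm_semantic_score(prediction: str, reference: str, judge) -> float:
    """Ask an LLM judge (a callable: prompt str -> response str) for a 0-10 score."""
    prompt = (
        "Rate from 0 to 10 how well the prediction matches the reference answer "
        f"in meaning.\nReference: {reference}\nPrediction: {prediction}\nScore:"
    )
    return float(judge(prompt).strip()) / 10.0


def evaluate(samples, judge):
    """Average both dimensions over (prediction, reference) pairs."""
    sim = sum(text_similarity(p, r) for p, r in samples) / len(samples)
    sem = sum(llm_semantic_score(p, r, judge) for p, r in samples) / len(samples)
    return {"text_similarity": sim, "llm_semantic": sem}
```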

Tags

3D City · Multimodal Learning · Large Language Models · Vision-Language

arXiv Categories

cs.CV cs.AI