Multimodal Learning Relevance: 9/10

SoPE: Spherical Coordinate-Based Positional Embedding for Enhancing Spatial Perception of 3D LVLMs

Guanting Ye, Qiyan Zhao, Wenhao Yu, Liangyu Yuan, Mingkai Li, Xiaofeng Zhang, Jianmin Ji, Yanyong Zhang, Qing Jiang, Ka-Veng Yuen
arXiv: 2602.22716v1 Published: 2026-02-26 Updated: 2026-02-26

AI Summary

To address the weak 3D spatial perception of 3D LVLMs, the authors propose SoPE, a spherical coordinate-based positional embedding that improves the model's understanding of 3D geometric structure.

Main Contributions

  • Proposes SoPE, a spherical coordinate-based positional embedding
  • Introduces a multi-scale frequency mixing strategy
  • Validates effectiveness on multiple 3D scene benchmarks and in real-world deployment

Methodology

Maps point-cloud token indices into a 3D spherical coordinate space, jointly modeling spatial locations and directional angles, and fuses features across different frequency domains.
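The mapping into spherical coordinates and the frequency mixing can be sketched as follows. This is an illustrative reconstruction, not the paper's actual implementation: the Cartesian-to-spherical conversion is standard, and the sinusoidal multi-frequency embedding here is a simple stand-in for the paper's multi-scale frequency mixing strategy.

```python
import numpy as np

def cartesian_to_spherical(xyz):
    """Map 3D point positions (N, 3) to spherical coordinates (r, theta, phi).

    theta is the polar angle from the +z axis; phi is the azimuth in the xy-plane.
    """
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    r = np.linalg.norm(xyz, axis=1)
    # Guard against division by zero at the origin.
    theta = np.arccos(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))
    phi = np.arctan2(y, x)
    return np.stack([r, theta, phi], axis=1)

def spherical_embedding(xyz, num_freqs=4):
    """Sinusoidal embedding of (r, theta, phi) at several frequency scales,
    a hypothetical stand-in for SoPE's multi-scale frequency mixing."""
    sph = cartesian_to_spherical(xyz)                   # (N, 3)
    freqs = 2.0 ** np.arange(num_freqs)                 # geometric frequency ladder
    angles = sph[:, :, None] * freqs[None, None, :]     # (N, 3, num_freqs)
    emb = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return emb.reshape(xyz.shape[0], -1)                # (N, 6 * num_freqs)

pts = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 2.0]])
print(spherical_embedding(pts).shape)  # (2, 24)
```

Because radius and the two angles enter the embedding jointly, nearby points that differ in direction receive distinct codes, which is the property the paper attributes to unified modeling of locations and angles.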

Original Abstract

3D Large Vision-Language Models (3D LVLMs) built upon Large Language Models (LLMs) have achieved remarkable progress across various multimodal tasks. However, their inherited position-dependent modeling mechanism, Rotary Position Embedding (RoPE), remains suboptimal for 3D multimodal understanding. The vanilla RoPE formulation fails to preserve essential three-dimensional spatial structures when encoding 3D tokens, and its relative distance computation overlooks angular dependencies, hindering the model's ability to capture directional variations in visual representations. To overcome these limitations, we introduce Spherical Coordinate-based Positional Embedding (SoPE). Our method maps point-cloud token indices into a 3D spherical coordinate space, enabling unified modeling of spatial locations and directional angles. This formulation preserves the inherent geometric structure of point-cloud data, enhances spatial awareness, and yields more consistent and expressive geometric representations for multimodal learning. In addition, we introduce a multi-scale frequency mixing strategy to fuse feature information across different frequency domains. Experimental results on multiple 3D scene benchmarks validate the effectiveness of our approach, while real-world deployment experiments further demonstrate its strong generalization capability.
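The abstract's claim that vanilla RoPE's relative distance computation overlooks angular dependencies can be made concrete with a small sketch. The snippet below implements standard 1D RoPE and checks its defining property: the inner product of two rotated vectors depends only on the index difference m - n, so any directional information beyond that scalar offset is invisible to the attention score. The function name and dimensions are illustrative, not from the paper.

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Apply vanilla 1D RoPE: rotate consecutive feature pairs of x
    by angles proportional to the token position pos."""
    d = x.shape[-1]
    inv_freq = base ** (-np.arange(0, d, 2) / d)  # one frequency per pair
    ang = pos * inv_freq
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin
    out[1::2] = x1 * sin + x2 * cos
    return out

q = np.random.default_rng(0).normal(size=8)
k = np.random.default_rng(1).normal(size=8)
# <rope(q, m), rope(k, n)> depends only on m - n:
s1 = rope(q, 5) @ rope(k, 3)    # offset -2
s2 = rope(q, 12) @ rope(k, 10)  # offset -2
assert np.isclose(s1, s2)
```

For 3D tokens laid out as a 1D sequence, this collapses all spatial relations into one scalar offset, which is the limitation SoPE's spherical formulation is designed to remove.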

Tags

3D LVLM Positional Embedding Spherical Coordinates Multimodal Learning Point Cloud

arXiv Categories

cs.CV cs.AI