Multimodal Learning Relevance: 9/10

DriveTok: 3D Driving Scene Tokenization for Unified Multi-View Reconstruction and Understanding

Dong Zhuo, Wenzhao Zheng, Sicheng Zuo, Siming Yan, Lu Hou, Jie Zhou, Jiwen Lu
arXiv: 2603.19219v1 Published: 2026-03-19 Updated: 2026-03-19

AI Summary

DriveTok proposes an efficient 3D driving scene tokenization method for unified multi-view reconstruction and understanding.

Key Contributions

  • Proposes DriveTok, an efficient 3D driving scene tokenizer
  • Uses 3D deformable cross-attention to transform visual features into scene tokens
  • Learns a unified scene token representation through multi-task training

Methodology

Visual features are extracted with vision foundation models and converted into scene tokens via 3D deformable cross-attention; a multi-view transformer then reconstructs multi-view features from the tokens, and a 3D head on the scene tokens performs 3D occupancy prediction.
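The core tokenization step above can be sketched roughly as follows. This is a minimal NumPy illustration of deformable cross-attention from scene tokens to multi-view features, not the paper's implementation: all shapes (6 views, 16x16 feature maps, 64 tokens, 4 sample points) and the random linear projections are hypothetical stand-ins for learned weights, and nearest-neighbour sampling replaces bilinear interpolation for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes (not from the paper): 6 camera views, 16x16 feature
# maps with 32 channels, and 64 learnable scene tokens.
V, H, W, C = 6, 16, 16, 32
N_TOKENS, N_SAMPLES = 64, 4

feats = rng.normal(size=(V, H, W, C))    # multi-view visual features
tokens = rng.normal(size=(N_TOKENS, C))  # scene token queries

def deformable_cross_attention(tokens, feats, n_samples=N_SAMPLES):
    """One deformable cross-attention step (sketch).

    Each scene token predicts per-view sampling locations and attention
    weights, gathers features at those locations, and aggregates them
    into a residual update of the token.
    """
    V, H, W, C = feats.shape
    n_tok = tokens.shape[0]
    # Random stand-ins for the learned projections that predict
    # sampling locations and attention weights.
    W_loc = rng.normal(scale=0.1, size=(C, V * n_samples * 2))
    W_att = rng.normal(scale=0.1, size=(C, V * n_samples))

    # Sampling locations in [0, 1)^2 per token, view, and sample point.
    loc = 1.0 / (1.0 + np.exp(-(tokens @ W_loc)))  # sigmoid -> [0, 1]
    loc = loc.reshape(n_tok, V, n_samples, 2)
    att = tokens @ W_att
    att = np.exp(att) / np.exp(att).sum(-1, keepdims=True)  # softmax
    att = att.reshape(n_tok, V, n_samples)

    # Nearest-neighbour sampling (bilinear in a real implementation).
    ys = np.clip((loc[..., 0] * H).astype(int), 0, H - 1)
    xs = np.clip((loc[..., 1] * W).astype(int), 0, W - 1)
    out = np.zeros_like(tokens)
    for v in range(V):
        sampled = feats[v, ys[:, v], xs[:, v]]  # (n_tok, n_samples, C)
        out += (att[:, v, :, None] * sampled).sum(axis=1)
    return tokens + out  # residual update of the scene tokens

updated = deformable_cross_attention(tokens, feats)
print(updated.shape)  # (64, 32)
```

The key efficiency property this sketch mirrors is that each token attends to only a few sampled locations per view rather than the full H*W feature grid, which is what makes the approach tractable for high-resolution multi-view inputs.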

Original Abstract

With the growing adoption of vision-language-action models and world models in autonomous driving systems, scalable image tokenization becomes crucial as the interface for the visual modality. However, most existing tokenizers are designed for monocular and 2D scenes, leading to inefficiency and inter-view inconsistency when applied to high-resolution multi-view driving scenes. To address this, we propose DriveTok, an efficient 3D driving scene tokenizer for unified multi-view reconstruction and understanding. DriveTok first obtains semantically rich visual features from vision foundation models and then transforms them into the scene tokens with 3D deformable cross-attention. For decoding, we employ a multi-view transformer to reconstruct multi-view features from the scene tokens and use multiple heads to obtain RGB, depth, and semantic reconstructions. We also add a 3D head directly on the scene tokens for 3D semantic occupancy prediction for better spatial awareness. With the multiple training objectives, DriveTok learns unified scene tokens that integrate semantic, geometric, and textural information for efficient multi-view tokenization. Extensive experiments on the widely used nuScenes dataset demonstrate that the scene tokens from DriveTok perform well on image reconstruction, semantic segmentation, depth prediction, and 3D occupancy prediction tasks.

Tags

Autonomous Driving  Multi-View Learning  3D Scene Understanding  Tokenization

arXiv Categories

cs.CV cs.LG