LoST: Level of Semantics Tokenization for 3D Shapes
AI Summary
LoST tokenizes 3D shapes by semantic salience, substantially improving reconstruction and generation quality.
Main Contributions
- Proposes Level-of-Semantics Tokenization (LoST), which tokenizes 3D shapes by semantic salience
- Introduces the Relational Inter-Distance Alignment (RIDA) loss for 3D semantic alignment
- LoST achieves SOTA results on 3D shape reconstruction and generation while significantly reducing the number of tokens required
Methodology
LoST aligns the 3D shape latent space with a semantic feature space via the RIDA loss, enabling tokenization ordered by semantic salience.
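To make the relational-alignment idea concrete, here is a minimal sketch of one way such a loss could be computed: match the normalized pairwise-distance structure of the shape latent space to that of the DINO feature space. The function names, the Euclidean metric, and the mean-distance normalization are illustrative assumptions, not the paper's exact RIDA formulation:

```python
import numpy as np

def pairwise_dist(x):
    # Euclidean distance matrix between all rows of x.
    sq = np.sum(x ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * x @ x.T
    return np.sqrt(np.maximum(d2, 0.0))

def normalize_dists(d):
    # Scale-invariant: divide by the mean off-diagonal distance,
    # so only the *relational* structure matters, not absolute scale.
    n = d.shape[0]
    mean = d.sum() / (n * (n - 1))
    return d / (mean + 1e-8)

def rida_loss_sketch(latents, dino_feats):
    # Hypothetical sketch of a relational inter-distance alignment loss:
    # penalize mismatch between the two normalized distance matrices.
    d_lat = normalize_dists(pairwise_dist(latents))
    d_sem = normalize_dists(pairwise_dist(dino_feats))
    return float(np.mean((d_lat - d_sem) ** 2))
```

Because both distance matrices are normalized, the loss is zero whenever the two spaces agree up to a global scale, which is the sense in which it aligns relational structure rather than raw coordinates.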
Original Abstract
Tokenization is a fundamental technique in the generative modeling of various modalities. In particular, it plays a critical role in autoregressive (AR) models, which have recently emerged as a compelling option for 3D generation. However, optimal tokenization of 3D shapes remains an open question. State-of-the-art (SOTA) methods primarily rely on geometric level-of-detail (LoD) hierarchies, originally designed for rendering and compression. These spatial hierarchies are often token-inefficient and lack semantic coherence for AR modeling. We propose Level-of-Semantics Tokenization (LoST), which orders tokens by semantic salience, such that early prefixes decode into complete, plausible shapes that possess principal semantics, while subsequent tokens refine instance-specific geometric and semantic details. To train LoST, we introduce Relational Inter-Distance Alignment (RIDA), a novel 3D semantic alignment loss that aligns the relational structure of the 3D shape latent space with that of the semantic DINO feature space. Experiments show that LoST achieves SOTA reconstruction, surpassing previous LoD-based 3D shape tokenizers by large margins on both geometric and semantic reconstruction metrics. Moreover, LoST achieves efficient, high-quality AR 3D generation and enables downstream tasks like semantic retrieval, while using only 0.1%-10% of the tokens needed by prior AR models.