LoGeR: Long-Context Geometric Reconstruction with Hybrid Memory
AI Summary
LoGeR introduces a hybrid memory module that improves the global consistency of 3D reconstruction over long video sequences.
Key Contributions
- Proposes a hybrid memory module combining parametric and non-parametric memory
- Achieves dense 3D reconstruction of extremely long sequences without post-optimization
- Substantially outperforms existing methods on long-sequence benchmarks
Methodology
LoGeR processes the video stream in chunks, using strong bidirectional priors for intra-chunk reasoning, and maintains cross-chunk consistency through a hybrid memory module.
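The chunked loop with its two memory components can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name `HybridMemory`, the reconstruction objective used for the TTT update, and all sizes (`CHUNK`, `WINDOW`, `DIM`) are hypothetical stand-ins, and the intra-chunk bidirectional model is reduced to a placeholder.

```python
# Hypothetical sketch of chunked inference with a hybrid memory:
# a parametric TTT weight matrix (global anchor) plus a
# non-parametric sliding window of uncompressed recent tokens.
import numpy as np

CHUNK, WINDOW, DIM = 8, 16, 4  # illustrative sizes, not from the paper

class HybridMemory:
    def __init__(self, dim, window, lr=0.1):
        self.W = np.zeros((dim, dim))  # parametric TTT memory
        self.window = []               # non-parametric sliding-window context
        self.max_window = window
        self.lr = lr

    def read(self, tokens):
        # Parametric read: project tokens through the TTT weights.
        global_ctx = tokens @ self.W
        # Non-parametric read: raw recent context, uncompressed.
        local_ctx = (np.concatenate(self.window)
                     if self.window else np.empty((0, tokens.shape[1])))
        return global_ctx, local_ctx

    def write(self, tokens):
        # TTT-style update: one gradient step on a self-supervised
        # stand-in objective ||t W - t||^2 (the real loss is learned).
        grad = tokens.T @ (tokens @ self.W - tokens) / len(tokens)
        self.W -= self.lr * grad
        # Sliding window keeps only the most recent frames.
        self.window.append(tokens)
        while sum(len(t) for t in self.window) > self.max_window:
            self.window.pop(0)

def reconstruct(frames, mem):
    outputs = []
    for i in range(0, len(frames), CHUNK):
        chunk = frames[i:i + CHUNK]
        g, l = mem.read(chunk)      # condition the chunk model on memory
        outputs.append(chunk + g)   # placeholder for bidirectional inference
        mem.write(chunk)
    return np.concatenate(outputs)

frames = np.random.randn(40, DIM)
mem = HybridMemory(DIM, WINDOW)
out = reconstruct(frames, mem)
print(out.shape)  # (40, 4)
```

The key property this sketch preserves is that per-chunk cost is constant: the parametric memory has fixed size, and the sliding window is bounded, so inference length is limited only by the number of chunks, matching the paper's claim of training on 128 frames and generalizing to thousands.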
Original Abstract
Feedforward geometric foundation models achieve strong short-window reconstruction, yet scaling them to minutes-long videos is bottlenecked by quadratic attention complexity or limited effective memory in recurrent designs. We present LoGeR (Long-context Geometric Reconstruction), a novel architecture that scales dense 3D reconstruction to extremely long sequences without post-optimization. LoGeR processes video streams in chunks, leveraging strong bidirectional priors for high-fidelity intra-chunk reasoning. To manage the critical challenge of coherence across chunk boundaries, we propose a learning-based hybrid memory module. This dual-component system combines a parametric Test-Time Training (TTT) memory, which anchors the global coordinate frame and prevents scale drift, with a non-parametric Sliding Window Attention (SWA) mechanism that preserves uncompressed context for high-precision adjacent alignment. Remarkably, this memory architecture enables LoGeR to be trained on sequences of 128 frames and generalize to thousands of frames during inference. Evaluated across standard benchmarks and a newly repurposed VBR dataset with sequences of up to 19k frames, LoGeR substantially outperforms prior state-of-the-art feedforward methods (reducing ATE on KITTI by over 74%) and achieves robust, globally consistent reconstruction over unprecedented horizons.