$x^2$-Fusion: Cross-Modality and Cross-Dimension Flow Estimation in Event Edge Space
AI Summary
Proposes $x^2$-Fusion, which unifies multimodal features through an Event Edge Space to improve the accuracy of optical flow and scene flow estimation.
Main Contributions
- Proposes the Event Edge Space, a shared latent space for unifying multimodal features
- Proposes reliability-aware adaptive fusion to improve robustness in degraded scenes (see the sketch after this list)
- Proposes cross-dimension contrastive learning to couple 2D optical flow with 3D scene flow (see the sketch at the end of this note)
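To make the fusion idea concrete, here is a minimal PyTorch sketch of reliability-aware adaptive fusion. It assumes the event, image, and LiDAR features are already aligned to a shared C-channel space; the gating network (`ReliabilityAwareFusion`, its 1x1-conv layout, and the softmax weighting) is an illustrative stand-in, not the paper's exact architecture.

```python
# A minimal sketch of reliability-aware adaptive fusion (assumed design,
# not the paper's exact module): a small gating network predicts one
# reliability logit per modality, and softmax weights combine the features.
import torch
import torch.nn as nn

class ReliabilityAwareFusion(nn.Module):
    def __init__(self, channels: int, num_modalities: int = 3):
        super().__init__()
        # Predicts one reliability logit per modality from the concatenated features.
        self.gate = nn.Sequential(
            nn.Conv2d(num_modalities * channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, num_modalities, kernel_size=1),
        )

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        # feats: list of (B, C, H, W) tensors, one per modality.
        stacked = torch.stack(feats, dim=1)          # (B, M, C, H, W)
        logits = self.gate(torch.cat(feats, dim=1))  # (B, M, H, W)
        weights = torch.softmax(logits, dim=1)       # per-pixel modality weights
        # Downweight degraded modalities; emphasize stable cues.
        return (weights.unsqueeze(2) * stacked).sum(dim=1)  # (B, C, H, W)

# Example: fuse event, image, and LiDAR features of shape (2, 64, 48, 64).
fusion = ReliabilityAwareFusion(channels=64)
f_event, f_image, f_lidar = (torch.randn(2, 64, 48, 64) for _ in range(3))
fused = fusion([f_event, f_image, f_lidar])
```

The softmax weights act as per-pixel reliability estimates, so a degraded modality (for instance, a motion-blurred image at night) contributes less to the fused feature while stable cues dominate.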
Methodology
The method constructs an Event Edge Space from the spatiotemporal edge information provided by the event camera, aligns image and LiDAR features into this space for fusion (a sketch follows below), and finally estimates optical flow and scene flow.
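The following sketch illustrates one possible form of the alignment step. It assumes per-modality encoders already produce dense feature maps; the 1x1 projection heads and the cosine loss that pulls image and LiDAR features toward the event edge anchor are assumptions for illustration, not the paper's stated mechanism.

```python
# A minimal sketch of aligning image and LiDAR features to the event-derived
# edge representation. Projection heads and the cosine alignment loss are
# illustrative stand-ins (assumptions), not the paper's exact formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeSpaceAlignment(nn.Module):
    def __init__(self, in_channels: dict[str, int], edge_channels: int):
        super().__init__()
        # One lightweight projection head per modality into the shared edge space.
        self.proj = nn.ModuleDict({
            name: nn.Conv2d(c, edge_channels, kernel_size=1)
            for name, c in in_channels.items()
        })

    def forward(self, feats: dict[str, torch.Tensor]):
        aligned = {name: self.proj[name](f) for name, f in feats.items()}
        # Pull image/LiDAR features toward the event edge anchor via cosine similarity.
        anchor = F.normalize(aligned["event"], dim=1)
        loss = sum(
            (1.0 - (F.normalize(aligned[m], dim=1) * anchor).sum(dim=1)).mean()
            for m in ("image", "lidar")
        )
        return aligned, loss

# Example: align three modality feature maps into a 64-channel edge space.
align = EdgeSpaceAlignment({"event": 32, "image": 64, "lidar": 64}, edge_channels=64)
feats = {"event": torch.randn(2, 32, 48, 64),
         "image": torch.randn(2, 64, 48, 64),
         "lidar": torch.randn(2, 64, 48, 64)}
aligned, align_loss = align(feats)
```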
Original Abstract
Estimating dense 2D optical flow and 3D scene flow is essential for dynamic scene understanding. Recent work combines images, LiDAR, and event data to jointly predict 2D and 3D motion, yet most approaches operate in separate heterogeneous feature spaces. Without a shared latent space that all modalities can align to, these systems rely on multiple modality-specific blocks, leaving cross-sensor mismatches unresolved and making fusion unnecessarily complex. Event cameras naturally provide a spatiotemporal edge signal, which we can treat as an intrinsic edge field to anchor a unified latent representation, termed the Event Edge Space. Building on this idea, we introduce $x^2$-Fusion, which reframes multimodal fusion as representation unification: event-derived spatiotemporal edges define an edge-centric homogeneous space, and image and LiDAR features are explicitly aligned in this shared representation. Within this space, we perform reliability-aware adaptive fusion to estimate modality reliability and emphasize stable cues under degradation. We further employ cross-dimension contrastive learning to tightly couple 2D optical flow with 3D scene flow. Extensive experiments on both synthetic and real benchmarks show that $x^2$-Fusion achieves state-of-the-art accuracy under standard conditions and delivers substantial improvements in challenging scenarios.
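As a rough illustration of the cross-dimension contrastive coupling mentioned above, the sketch below applies a symmetric InfoNCE loss to pooled embeddings from the 2D optical-flow and 3D scene-flow branches. The pooling to per-sample embeddings, the symmetric form, and the temperature value are all assumptions, not the paper's exact loss.

```python
# A minimal sketch of cross-dimension contrastive coupling between the 2D
# and 3D flow branches (symmetric InfoNCE; an assumed form for illustration).
import torch
import torch.nn.functional as F

def cross_dimension_contrastive_loss(z2d, z3d, temperature=0.07):
    # z2d, z3d: (B, D) embeddings; row i of each comes from the same scene.
    z2d = F.normalize(z2d, dim=1)
    z3d = F.normalize(z3d, dim=1)
    logits = z2d @ z3d.t() / temperature  # (B, B) cross-branch similarities
    targets = torch.arange(z2d.size(0), device=z2d.device)
    # Matching 2D/3D pairs are positives; all other pairs are negatives.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

# Example: couple 2D and 3D flow embeddings for a batch of 4 scenes.
loss = cross_dimension_contrastive_loss(torch.randn(4, 128), torch.randn(4, 128))
```

Pulling matched 2D/3D embeddings together while pushing mismatched pairs apart is one standard way to realize the tight 2D-3D coupling the abstract describes.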