Active Asymmetric Multi-Agent Multimodal Learning under Uncertainty
AI Summary
A2MAML proposes an uncertainty-aware multimodal multi-agent learning framework that improves the robustness of collaborative perception systems.
Key Contributions
- Proposes an uncertainty modeling approach tailored to multi-agent multimodal scenarios
- Introduces an active selection mechanism that chooses reliable agent-modality pairs
- Fuses information via Bayesian inverse-variance weighting
Methodology
Each modality-specific feature is modeled as a stochastic estimate with a predicted uncertainty; reliable modalities are actively selected, and information is then fused through Bayesian inverse-variance weighting.
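To make the fusion step concrete, the sketch below illustrates Bayesian inverse-variance weighting combined with a simple variance-threshold form of active selection. The function name, the thresholding rule, and the array layout are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def fuse_inverse_variance(means, variances, var_threshold=1.0):
    """Illustrative sketch (not the paper's code): fuse per-agent,
    per-modality feature estimates via Bayesian inverse-variance
    weighting, after dropping estimates whose predicted variance
    exceeds a threshold (a simple stand-in for active selection).

    means, variances: shape (n_estimates, feature_dim).
    Returns the fused mean and fused variance per feature dimension.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)

    # Active selection: keep only agent-modality estimates whose
    # average predicted variance is below the reliability threshold.
    keep = variances.mean(axis=1) <= var_threshold
    mu, var = means[keep], variances[keep]

    # Inverse-variance weighting: each estimate contributes in
    # proportion to its precision w_i = 1 / sigma_i^2.
    precision = 1.0 / var
    fused_var = 1.0 / precision.sum(axis=0)
    fused_mean = fused_var * (precision * mu).sum(axis=0)
    return fused_mean, fused_var
```

For example, fusing two reliable estimates (means 0 and 2, each with unit variance) while a third high-variance estimate is excluded yields a fused mean of 1.0 with a reduced variance of 0.5, showing how corrupted modalities are suppressed rather than averaged in.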
Original Abstract
Multi-agent systems are increasingly equipped with heterogeneous multimodal sensors, enabling richer perception but introducing modality-specific and agent-dependent uncertainty. Existing multi-agent collaboration frameworks typically reason at the agent level, assume homogeneous sensing, and handle uncertainty implicitly, limiting robustness under sensor corruption. We propose Active Asymmetric Multi-Agent Multimodal Learning under Uncertainty (A2MAML), a principled approach for uncertainty-aware, modality-level collaboration. A2MAML models each modality-specific feature as a stochastic estimate with uncertainty prediction, actively selects reliable agent-modality pairs, and aggregates information via Bayesian inverse-variance weighting. This formulation enables fine-grained, modality-level fusion, supports asymmetric modality availability, and provides a principled mechanism to suppress corrupted or noisy modalities. Extensive experiments on connected autonomous driving scenarios for collaborative accident detection demonstrate that A2MAML consistently outperforms both single-agent and collaborative baselines, achieving up to 18.7% higher accident detection rate.