AI Agents relevance: 8/10

Integrating Deep RL and Bayesian Inference for ObjectNav in Mobile Robotics

João Castelo-Branco, José Santos-Victor, Alexandre Bernardino
arXiv: 2603.25366v1 Published: 2026-03-26 Updated: 2026-03-26

AI Summary

This paper proposes an object-search framework for mobile robots that fuses Bayesian inference with deep reinforcement learning.

Key Contributions

  • Fuses Bayesian inference with deep reinforcement learning
  • Builds a probabilistic spatial belief map over target locations
  • Evaluated in Habitat 3.0

Methodology

Bayesian inference updates a spatial belief map over target locations online from object detections, while a reinforcement learning policy is trained to select navigation actions directly from this probabilistic representation.
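The belief-map update described above can be sketched as a per-cell Bayesian filter over a discrete grid. The following is a minimal illustration, not the paper's implementation: the grid size, the independent per-cell detector model, and the TPR/FPR calibration values are all assumptions made here for clarity.

```python
import numpy as np

def update_belief(belief, observed, detected, tpr=0.9, fpr=0.05):
    """Bayesian update of a 2D belief map over target location.

    belief   : (H, W) array summing to 1 -- P(target in cell)
    observed : (H, W) bool mask of cells inside the current field of view
    detected : (H, W) bool mask of observed cells where the detector fired
    tpr/fpr  : calibrated true/false positive rates of the detector (assumed)

    Assuming the detector fires independently per observed cell, the
    posterior for each hypothesis "target is in cell j" is proportional to
    the prior times a per-cell likelihood ratio.
    """
    factor = np.ones_like(belief)
    # Detection in an observed cell raises its belief by tpr/fpr.
    factor[observed & detected] = tpr / fpr
    # No detection in an observed cell lowers it by (1-tpr)/(1-fpr).
    factor[observed & ~detected] = (1.0 - tpr) / (1.0 - fpr)
    posterior = belief * factor
    return posterior / posterior.sum()

# Example: a uniform prior, then observing a 3x3 corner with no detection
# shifts belief mass away from the observed cells toward the rest of the map.
belief = np.full((10, 10), 1.0 / 100)
observed = np.zeros((10, 10), dtype=bool)
observed[:3, :3] = True
detected = np.zeros((10, 10), dtype=bool)
posterior = update_belief(belief, observed, detected)
```

Feeding this normalized map (rather than raw detections) to the policy is what gives the learned action selection its probabilistic input representation.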

Original Abstract

Autonomous object search is challenging for mobile robots operating in indoor environments due to partial observability, perceptual uncertainty, and the need to trade off exploration and navigation efficiency. Classical probabilistic approaches explicitly represent uncertainty but typically rely on handcrafted action-selection heuristics, while deep reinforcement learning enables adaptive policies but often suffers from slow convergence and limited interpretability. This paper proposes a hybrid object-search framework that integrates Bayesian inference with deep reinforcement learning. The method maintains a spatial belief map over target locations, updated online through Bayesian inference from calibrated object detections, and trains a reinforcement learning policy to select navigation actions directly from this probabilistic representation. The approach is evaluated in realistic indoor simulation using Habitat 3.0 and compared against developed baseline strategies. Across two indoor environments, the proposed method improves success rate while reducing search effort. Overall, the results support the value of combining Bayesian belief estimation with learned action selection to achieve more efficient and reliable object-search behavior under partial observability.
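To make concrete the "handcrafted action-selection heuristics" the abstract contrasts with the learned policy, here is a hedged sketch of two such baselines operating on the belief map. The function names, the goal-cell formulation, and the exploration scheme are illustrative assumptions, not the baselines developed in the paper.

```python
import numpy as np

def greedy_goal(belief):
    """Heuristic baseline: navigate toward the single most likely cell."""
    return np.unravel_index(np.argmax(belief), belief.shape)

def coverage_goal(belief, visited):
    """Heuristic baseline with an exploration bias: among unvisited cells,
    pick the one with the most belief mass, trading exploitation (belief)
    against exploration (coverage of unvisited space)."""
    masked = np.where(visited, -np.inf, belief)
    return np.unravel_index(np.argmax(masked), masked.shape)

# Example: with belief peaks at (2, 3) and (1, 1), the greedy rule heads for
# the larger peak; once (2, 3) is marked visited, the coverage rule diverts
# to the next-best unvisited cell.
belief = np.zeros((5, 5))
belief[2, 3], belief[1, 1] = 0.6, 0.4
visited = np.zeros((5, 5), dtype=bool)
visited[2, 3] = True
```

A learned policy, as in the proposed method, replaces such fixed rules with actions selected directly from the belief map, which is what the reported success-rate and search-effort gains are attributed to.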

Tags

Deep Reinforcement Learning  Bayesian Inference  Mobile Robotics  Object Goal Navigation  Object Search

arXiv Categories

cs.RO cs.AI cs.CV