AI Agents relevance: 9/10

A 4D Representation for Training-Free Agentic Reasoning from Monocular Laparoscopic Video

Maximilian Fehrentz, Nicolas Stellwag, Robert Wiebe, Nicole Thorisch, Fabian Grob, Patrick Remerscheid, Ken-Joel Simmoteit, Benjamin D. Killeen, Christian Heiliger, Nassir Navab
arXiv: 2604.00867v1 · Published: 2026-04-01 · Updated: 2026-04-01

AI Summary

Proposes a training-free agentic reasoning framework for surgery built on an explicit 4D representation, improving spatiotemporal understanding of monocular laparoscopic video.

Key Contributions

  • Proposes a 4D representation built from monocular laparoscopic video
  • Builds a training-free MLLM-based surgical agent
  • Introduces a new dataset of 134 clinically relevant questions

Methodology

Point tracking, depth estimation, and segmentation models are combined into a spatiotemporally consistent 4D model; an MLLM acts as an agent that reasons over tools derived from this 4D representation (e.g., trajectories), without any fine-tuning.
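The core geometric step, lifting tracked 2D points into 3D over time via per-frame depth, can be sketched as below. This is a minimal illustration, not the paper's implementation: the function names (`backproject`, `trajectory_length`), the pinhole-camera assumption, and the camera-space output frame are all assumptions for the sketch.

```python
import numpy as np

def backproject(tracks_2d, depths, K):
    """Lift per-frame 2D point tracks to 3D camera coordinates using depth.

    tracks_2d: (T, N, 2) pixel coordinates (u, v) per frame
    depths:    (T, H, W) depth maps, one per frame
    K:         (3, 3) pinhole camera intrinsics
    Returns (T, N, 3): 3D trajectories over time, i.e. a minimal
    (3D + time) "4D" representation of the tracked points.
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    T, N, _ = tracks_2d.shape
    traj = np.empty((T, N, 3))
    for t in range(T):
        u = tracks_2d[t, :, 0]
        v = tracks_2d[t, :, 1]
        # Sample depth at the (rounded) track locations.
        z = depths[t, v.round().astype(int), u.round().astype(int)]
        # Standard pinhole back-projection.
        traj[t, :, 0] = (u - cx) * z / fx
        traj[t, :, 1] = (v - cy) * z / fy
        traj[t, :, 2] = z
    return traj

def trajectory_length(traj):
    """Total 3D path length per tracked point -- an example of the kind
    of quantity an agent tool could expose for spatiotemporal questions."""
    return np.linalg.norm(np.diff(traj, axis=0), axis=-1).sum(axis=0)
```

An MLLM agent would then call such functions as tools (e.g., "how far did the instrument tip travel?") rather than reasoning over raw pixels.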

Original Abstract

Spatiotemporal reasoning is a fundamental capability for artificial intelligence (AI) in soft tissue surgery, paving the way for intelligent assistive systems and autonomous robotics. While 2D vision-language models show increasing promise at understanding surgical video, the spatial complexity of surgical scenes suggests that reasoning systems may benefit from explicit 4D representations. Here, we propose a framework for equipping surgical agents with spatiotemporal tools based on an explicit 4D representation, enabling AI systems to ground their natural language reasoning in both time and 3D space. Leveraging models for point tracking, depth, and segmentation, we develop a coherent 4D model with spatiotemporally consistent tool and tissue semantics. A Multimodal Large Language Model (MLLM) then acts as an agent on tools derived from the explicit 4D representation (e.g., trajectories) without any fine-tuning. We evaluate our method on a new dataset of 134 clinically relevant questions and find that the combination of a general purpose reasoning backbone and our 4D representation significantly improves spatiotemporal understanding and allows for 4D grounding. We demonstrate that spatiotemporal intelligence can be "assembled" from 2D MLLMs and 3D computer vision models without additional training. Code, data, and examples are available at https://tum-ai.github.io/surg4d/

Tags

4D Representation · Surgical Robotics · Multimodal Learning · AI Agent · Spatiotemporal Reasoning

arXiv Categories

cs.CV