AI Agents relevance: 9/10

A Multimodal Framework for Human-Multi-Agent Interaction

Shaid Hasan, Breenice Lee, Sujan Sarker, Tariq Iqbal
arXiv: 2603.23271v1 Published: 2026-03-24 Updated: 2026-03-24

AI Summary

Proposes a multimodal framework that enables natural interaction and collaborative decision-making between humans and multiple agents.

Key Contributions

  • Proposes a multimodal framework for human-multi-agent interaction.
  • Integrates multimodal perception, embodied expression, and coordinated decision-making.
  • Implements LLM-based embodied planning and a team-level coordination mechanism.

Methodology

Each robot operates as an autonomous cognitive agent that interacts through integrated multimodal perception and LLM-driven planning, while a centralized coordination mechanism handles team-level coordination.

Original Abstract

Human-robot interaction is increasingly moving toward multi-robot, socially grounded environments. Existing systems struggle to integrate multimodal perception, embodied expression, and coordinated decision-making in a unified framework. This limits natural and scalable interaction in shared physical spaces. We address this gap by introducing a multimodal framework for human-multi-agent interaction in which each robot operates as an autonomous cognitive agent with integrated multimodal perception and Large Language Model (LLM)-driven planning grounded in embodiment. At the team level, a centralized coordination mechanism regulates turn-taking and agent participation to prevent overlapping speech and conflicting actions. Implemented on two humanoid robots, our framework enables coherent multi-agent interaction through interaction policies that combine speech, gesture, gaze, and locomotion. Representative interaction runs demonstrate coordinated multimodal reasoning across agents and grounded embodied responses. Future work will focus on larger-scale user studies and deeper exploration of socially grounded multi-agent interaction dynamics.

Tags

Multimodal · Human-Robot Interaction · Multi-Agent · LLM

arXiv Categories

cs.RO cs.AI