AI Agents relevance: 8/10

Designing for Disagreement: Front-End Guardrails for Assistance Allocation in LLM-Enabled Robots

Carmen Ng
arXiv: 2603.16537v1 Published: 2026-03-17 Updated: 2026-03-17

AI Summary

Proposes a front-end guardrail pattern for assistance allocation in LLM-enabled robots, addressing pluralistic values and LLM behavioral uncertainty.

Key Contributions

  • Proposes the bounded calibration with contestability pattern
  • Emphasizes legibility, procedural legitimacy, and actionability in real-time, multi-user assistance allocation
  • Designs a public-concourse robot vignette and outlines an evaluation agenda

Methodology

Proposes a procedural front-end pattern built on three components: constraining prioritization to a governance-approved menu of admissible modes, keeping the active mode legible at the point of deferral, and providing an outcome-specific contest pathway. The pattern is illustrated through a public-concourse case study and an evaluation agenda, and sketched in code below.
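The three components map naturally onto a small interface. The Python below is a minimal, hypothetical sketch for clarity only, not the paper's implementation: the names ADMISSIBLE_MODES, AllocationGuardrail, Contest, explain_deferral, and file_contest, and the example modes, are all assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical governance-approved menu of admissible prioritization
# modes. Each rule identifier is paired with the plain-language
# explanation surfaced at the point of deferral.
ADMISSIBLE_MODES = {
    "first_come": "Requests are served in the order they arrive.",
    "urgency_first": "Requests flagged as urgent are served before others.",
    "accessibility_first": "Users with declared accessibility needs are served first.",
}

@dataclass
class Contest:
    """An outcome-specific contest: it challenges one deferral decision
    without reopening the global prioritization rule."""
    request_id: str
    reason: str
    filed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class AllocationGuardrail:
    active_mode: str
    contests: list[Contest] = field(default_factory=list)

    def __post_init__(self) -> None:
        # Bounded calibration: the active mode must come from the approved
        # menu; free-form user-configurable "value settings" are rejected.
        if self.active_mode not in ADMISSIBLE_MODES:
            raise ValueError(f"mode {self.active_mode!r} is not on the approved menu")

    def explain_deferral(self, deferred_id: str, served_id: str) -> str:
        # Legibility: state the active rule in interaction-relevant terms
        # at the moment one user is deferred in favor of another.
        return (
            f"Request {served_id} was served before {deferred_id} "
            f"because: {ADMISSIBLE_MODES[self.active_mode]}"
        )

    def file_contest(self, request_id: str, reason: str) -> Contest:
        # Contestability: log a challenge to this specific outcome for
        # review; the global rule itself is not renegotiated here.
        contest = Contest(request_id=request_id, reason=reason)
        self.contests.append(contest)
        return contest

# Example: the robot defers one request, explains why, and accepts a contest.
guardrail = AllocationGuardrail(active_mode="urgency_first")
print(guardrail.explain_deferral(deferred_id="req-42", served_id="req-17"))
guardrail.file_contest("req-42", "My need was also urgent.")
```

The design choice mirrored here is the one the abstract stresses: a user can contest a specific outcome but cannot edit the menu itself, which distinguishes the pattern from both silent defaults and wide-open value settings.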

Original Abstract

LLM-enabled robots prioritizing scarce assistance in social settings face pluralistic values and LLM behavioral variability: reasonable people can disagree about who is helped first, while LLM-mediated interaction policies vary across prompts, contexts, and groups in ways that are difficult to anticipate or verify at contact point. Yet user-facing guardrails for real-time, multi-user assistance allocation remain under-specified. We propose bounded calibration with contestability, a procedural front-end pattern that (i) constrains prioritization to a governance-approved menu of admissible modes, (ii) keeps the active mode legible in interaction-relevant terms at the point of deferral, and (iii) provides an outcome-specific contest pathway without renegotiating the global rule. Treating pluralism and LLM uncertainty as standing conditions, the pattern avoids both silent defaults that hide implicit value skews and wide-open user-configurable "value settings" that shift burden under time pressure. We illustrate the pattern with a public-concourse robot vignette and outline an evaluation agenda centered on legibility, procedural legitimacy, and actionability, including risks of automation bias and uneven usability of contest channels.

Tags

LLM, Robotics, Assistance Allocation, Human-Robot Interaction, Fairness

arXiv Categories

cs.AI cs.HC cs.RO