AI Agents relevance: 9/10

PhysicsAgentABM: Physics-Guided Generative Agent-Based Modeling

Kavana Venkatesh, Yinhan He, Jundong Li, Jiaming Cui
arXiv: 2602.06030v1 Published: 2026-02-05 Updated: 2026-02-05

AI Summary

PhysicsAgentABM achieves scalable and calibrated generative agent-based modeling through neuro-symbolic fusion.

Key Contributions

  • Proposes the PhysicsAgentABM framework, fusing physics-based mechanisms with LLMs
  • Introduces the ANCHOR clustering strategy, reducing the number of LLM calls by up to 6-8x
  • Validates the model's accuracy and calibration across multiple domains
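The core idea behind ANCHOR's efficiency gain, inferring at the level of behaviorally coherent clusters rather than individual agents, can be illustrated with a minimal sketch. Note that the clustering interface below is hypothetical: the paper's actual method clusters agents by cross-contextual behavioral responses learned with a contrastive loss, which is not reproduced here; this sketch only shows why one LLM call per cluster scales better than one per agent.

```python
from collections import defaultdict

def cluster_and_query(agents, signature, llm_query):
    """Group agents by a behavioral signature and issue one LLM call
    per cluster instead of one per agent (hypothetical interface;
    ANCHOR's real clustering is learned, not rule-based)."""
    clusters = defaultdict(list)
    for agent in agents:
        clusters[signature(agent)].append(agent)
    # One LLM call per cluster: |clusters| calls total, not |agents|
    responses = {sig: llm_query(sig) for sig in clusters}
    # Broadcast each cluster's response back to its member agents
    return {a["id"]: responses[signature(a)] for a in agents}

# Toy population: 8 agents falling into 2 behavioral clusters
agents = [{"id": i, "risk": "high" if i % 4 == 0 else "low"}
          for i in range(8)]
calls = []
out = cluster_and_query(
    agents,
    signature=lambda a: a["risk"],
    llm_query=lambda sig: calls.append(sig) or f"policy:{sig}",
)
# 8 agents, but only 2 LLM calls (one per cluster)
```

With 8,000 agents and 1,000 clusters the same accounting gives the 8x reduction in LLM calls reported in the abstract's upper range.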

Methodology

State-specialized symbolic agents encode mechanistic priors, a multimodal neural transition model captures temporal and interaction dynamics, and uncertainty-aware fusion yields calibrated cluster-level transition distributions; individual agents then stochastically realize their transitions under local constraints.
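The fusion and realization steps can be sketched as follows. This is an illustrative assumption, not the paper's exact rule: here both the symbolic prior and the neural model emit categorical transition distributions with scalar uncertainties, which are combined by an inverse-uncertainty (precision-weighted) mixture; individual agents then sample their next state from the fused cluster-level distribution.

```python
import numpy as np

def fuse_transitions(p_symbolic, p_neural, u_symbolic, u_neural):
    """Precision-weighted fusion of two categorical transition
    distributions (hypothetical form; the paper's epistemic fusion
    may differ). Lower uncertainty -> higher weight."""
    w_s, w_n = 1.0 / u_symbolic, 1.0 / u_neural
    fused = (w_s * p_symbolic + w_n * p_neural) / (w_s + w_n)
    return fused / fused.sum()  # renormalize for numerical safety

# Cluster-level distributions over next states (e.g. S, I, R)
p_sym = np.array([0.7, 0.2, 0.1])  # mechanistic (symbolic) prior
p_nn  = np.array([0.5, 0.4, 0.1])  # neural transition prediction
p = fuse_transitions(p_sym, p_nn, u_symbolic=0.2, u_neural=0.4)

# Individual agents stochastically realize the cluster distribution
rng = np.random.default_rng(0)
next_states = rng.choice(3, size=100, p=p)
```

The more confident source (here the symbolic prior, with lower uncertainty) pulls the fused distribution toward itself, while sampling at the individual level decouples population inference from entity-level variability, as the abstract describes.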

Original Abstract

Large language model (LLM)-based multi-agent systems enable expressive agent reasoning but are expensive to scale and poorly calibrated for timestep-aligned state-transition simulation, while classical agent-based models (ABMs) offer interpretability but struggle to integrate rich individual-level signals and non-stationary behaviors. We propose PhysicsAgentABM, which shifts inference to behaviorally coherent agent clusters: state-specialized symbolic agents encode mechanistic transition priors, a multimodal neural transition model captures temporal and interaction dynamics, and uncertainty-aware epistemic fusion yields calibrated cluster-level transition distributions. Individual agents then stochastically realize transitions under local constraints, decoupling population inference from entity-level variability. We further introduce ANCHOR, an LLM agent-driven clustering strategy based on cross-contextual behavioral responses and a novel contrastive loss, reducing LLM calls by up to 6-8 times. Experiments across public health, finance, and social sciences show consistent gains in event-time accuracy and calibration over mechanistic, neural, and LLM baselines. By re-architecting generative ABM around population-level inference with uncertainty-aware neuro-symbolic fusion, PhysicsAgentABM establishes a new paradigm for scalable and calibrated simulation with LLMs.

Tags

Agent-Based Modeling, Large Language Models, Neuro-Symbolic AI

arXiv Categories

cs.MA cs.LG