AI Agents relevance: 8/10

Chow-Liu Ordering for Long-Context Reasoning in Chain-of-Agents

Naman Gupta, Vaibhav Singh, Arun Iyer, Kirankumar Shiragur, Pratham Grover, Ramakrishna B. Bairi, Ritabrata Maiti, Sankarshan Damle, Shachee Mishra Gupta, Rishikesh Maurya, Vageesh D. C
arXiv: 2603.09835v1 · Published: 2026-03-10 · Updated: 2026-03-10

AI Summary

For long-context reasoning with Chain-of-Agents, the paper proposes a Chow-Liu-tree-based chunk ordering method that improves how information is propagated and utilized across agents.

Main Contributions

  • Proposes a chunk ordering method based on Chow-Liu trees
  • Improves the performance of the Chain-of-Agents framework on long-context reasoning
  • Empirically outperforms both default document ordering and semantic score-based ordering

Methodology

A Chow-Liu tree is learned to capture the dependency structure among chunks; a breadth-first traversal of this tree then yields a chunk ordering that reduces information loss as evidence is passed between agents.
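The ordering step can be sketched as follows, assuming a precomputed symmetric affinity matrix `sim` standing in for the pairwise mutual-information weights the paper estimates between chunks (the estimator itself and the choice of root are not specified here and are assumptions): build a maximum-weight spanning tree over the chunks (Prim's algorithm below), then emit chunks in breadth-first order.

```python
from collections import deque

def chow_liu_order(sim, root=0):
    """Order chunks by BFS over a maximum spanning tree of pairwise
    affinities. sim[i][j] is the (symmetric) affinity between chunks
    i and j; returns a permutation of chunk indices."""
    n = len(sim)
    in_tree = [False] * n
    best = [float("-inf")] * n   # best edge weight connecting i to the tree
    parent = [-1] * n
    best[root] = 0.0
    children = [[] for _ in range(n)]
    # Prim's algorithm, maximizing edge weight instead of minimizing.
    for _ in range(n):
        u = max((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] >= 0:
            children[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and sim[u][v] > best[v]:
                best[v] = sim[u][v]
                parent[v] = u
    # Breadth-first traversal of the tree gives the processing order,
    # so strongly related chunks are visited close together.
    order, queue = [], deque([root])
    while queue:
        u = queue.popleft()
        order.append(u)
        queue.extend(children[u])
    return order
```

With three chunks where chunk 0 is strongly related to chunk 2 and chunk 2 to chunk 1, the traversal reorders them as `[0, 2, 1]` rather than the default `[0, 1, 2]`.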

Original Abstract

Sequential multi-agent reasoning frameworks such as Chain-of-Agents (CoA) handle long-context queries by decomposing inputs into chunks and processing them sequentially using LLM-based worker agents that read from and update a bounded shared memory. From a probabilistic perspective, CoA aims to approximate the conditional distribution corresponding to a model capable of jointly reasoning over the entire long context. CoA achieves this through a latent-state factorization in which only bounded summaries of previously processed evidence are passed between agents. The resulting bounded-memory approximation introduces a lossy information bottleneck, making the final evidence state inherently dependent on the order in which chunks are processed. In this work, we study the problem of chunk ordering for long-context reasoning. We use the well-known Chow-Liu trees to learn a dependency structure that prioritizes strongly related chunks. Empirically, we show that a breadth-first traversal of the resulting tree yields chunk orderings that reduce information loss across agents and consistently outperform both default document-chunk ordering and semantic score-based ordering in answer relevance and exact-match accuracy across three long-context benchmarks.
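The bounded-memory pass the abstract describes can be reduced to a minimal sketch. Here `worker` is a hypothetical stand-in for an LLM worker-agent call, and truncating the shared memory to `memory_limit` characters is a crude proxy for the bounded summary; both are assumptions for illustration, not the paper's implementation.

```python
def chain_of_agents(chunks, query, worker, memory_limit):
    """Sequential CoA pass: each worker reads one chunk plus the bounded
    shared memory and writes back an updated summary. Because memory is
    bounded, late chunks may be dropped, so the result depends on the
    order in which chunks arrive."""
    memory = ""
    for chunk in chunks:
        memory = worker(query, chunk, memory)[:memory_limit]  # lossy bottleneck
    return memory
```

Even with a trivial worker that concatenates evidence, the truncation makes the final state order-dependent, which is exactly the sensitivity the proposed chunk ordering targets.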

Tags

Chain-of-Agents, long-context reasoning, Chow-Liu tree, chunk ordering

arXiv Categories

cs.CL