AI Agents · Relevance: 9/10

A Benchmark for Deep Information Synthesis

Debjit Paul, Daniel Murphy, Milan Gritta, Ronald Cardenas, Victor Prokhorov, Lena Sophia Bolliger, Aysim Toker, Roy Miles, Andreea-Maria Oncescu, Jasivan Alex Sivakumar, Philipp Borchert, Ismail Elezi, Meiru Zhang, Ka Yiu Lee, Guchun Zhang, Jun Wang, Gerasimos Lampouras
arXiv: 2602.21143v1 · Published: 2026-02-24 · Updated: 2026-02-24

AI Summary

The DEEPSYNTH benchmark evaluates LLMs' ability to synthesize information and reason over it, revealing the shortcomings of current models.

Key Contributions

  • Introduces DEEPSYNTH, a benchmark for evaluating LLMs' information-synthesis ability
  • DEEPSYNTH comprises 120 tasks spanning 7 domains, with data sources covering 67 countries
  • Reveals the limitations of current LLMs in information synthesis and reasoning

Methodology

A multi-stage data collection pipeline requires annotators to collect official data sources, create hypotheses, perform manual analysis, and design tasks with verifiable answers.
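The pipeline above can be pictured as producing one structured record per task. This is a minimal illustrative sketch; the field names and example values are assumptions, not DEEPSYNTH's actual schema:

```python
from dataclasses import dataclass

@dataclass
class SynthesisTask:
    """One DEEPSYNTH-style task record (illustrative; field names are
    assumptions, not the benchmark's published schema)."""
    question: str          # the time-consuming synthesis question posed to the agent
    domain: str            # one of the benchmark's 7 domains
    sources: list          # official data sources collected by annotators
    hypothesis: str        # annotator-created hypothesis driving the manual analysis
    verified_answer: str   # answer validated by the annotators' own analysis

# Hypothetical example instance, just to show the shape of a record.
task = SynthesisTask(
    question="Which of the listed countries saw the largest change on metric X?",
    domain="economics",
    sources=["official-statistics-portal"],
    hypothesis="The change correlates with a policy shift",
    verified_answer="Country A",
)
print(task.domain)
```

Keeping the verified answer alongside the sources and hypothesis is what makes each task checkable: the answer was derived by human analysis of the same sources the agent is expected to consult.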

Original Abstract

Large language model (LLM)-based agents are increasingly used to solve complex tasks involving tool use, such as web browsing, code execution, and data analysis. However, current evaluation benchmarks do not adequately assess their ability to solve real-world tasks that require synthesizing information from multiple sources and inferring insights beyond simple fact retrieval. To address this, we introduce DEEPSYNTH, a novel benchmark designed to evaluate agents on realistic, time-consuming problems that combine information gathering, synthesis, and structured reasoning to produce insights. DEEPSYNTH contains 120 tasks collected across 7 domains and data sources covering 67 countries. DEEPSYNTH is constructed using a multi-stage data collection pipeline that requires annotators to collect official data sources, create hypotheses, perform manual analysis, and design tasks with verifiable answers. When evaluated on DEEPSYNTH, 11 state-of-the-art LLMs and deep research agents achieve at most an F1 score of 8.97 and an LLM-judge score of 17.5, underscoring the difficulty of the benchmark. Our analysis reveals that current agents struggle with hallucinations and reasoning over large information spaces, highlighting DEEPSYNTH as a crucial benchmark for guiding future research.
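The abstract reports an F1 score over predicted answers. The paper's exact F1 variant is not specified in this summary; a common choice for free-text answers is token-overlap F1 (SQuAD-style), sketched here as an assumption:

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a predicted and a reference answer.
    This is the SQuAD-style formulation, shown only as a plausible
    instance of the metric; DEEPSYNTH's exact definition may differ."""
    pred_toks = prediction.lower().split()
    ref_toks = reference.lower().split()
    # Multiset intersection counts each shared token at most as often
    # as it appears in both strings.
    overlap = sum((Counter(pred_toks) & Counter(ref_toks)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(ref_toks)
    return 2 * precision * recall / (precision + recall)

# A verbose prediction is penalized on precision even when it contains
# the reference answer: precision 0.25, recall 1.0 -> F1 0.4.
print(token_f1("the answer is 42", "42"))  # → 0.4
```

Under a metric like this, a maximum score of 8.97 (on a 0–100 scale) means even the best agents' answers share almost no content with the verified references, which is consistent with the hallucination failures the authors report.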

Tags

LLM · Information Synthesis · Benchmark · Reasoning · Agent

arXiv Categories

cs.AI cs.CL cs.IR cs.LG