OfficeQA Pro: An Enterprise Benchmark for End-to-End Grounded Reasoning
AI Summary
OfficeQA Pro: a benchmark for evaluating AI agents on document reasoning in enterprise environments.
Key Contributions
- Introduces the OfficeQA Pro benchmark
- Evaluates frontier LLMs on multi-document reasoning
- Explores the effect of structured document representations on performance
Methodology
Build an enterprise-scale dataset containing a large document corpus, design questions that require document parsing, retrieval, and analytical reasoning, and evaluate LLM accuracy on answering them; a minimal evaluation sketch follows below.
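The evaluation loop implied here can be illustrated with a short sketch. This is a hypothetical harness, not released benchmark code: the QAItem schema, the loader name, and exact-match grading are all assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class QAItem:
    question: str  # e.g., a query over Treasury Bulletin tables
    answer: str    # gold answer string from the benchmark

def evaluate(agent: Callable[[str], str], items: list[QAItem]) -> float:
    """Score an agent by exact-match accuracy over benchmark questions.

    Real grading of numeric/tabular answers would need normalization
    (units, rounding, formatting); exact match is a simplification here.
    """
    correct = 0
    for item in items:
        prediction = agent(item.question)
        if prediction.strip().lower() == item.answer.strip().lower():
            correct += 1
    return correct / len(items) if items else 0.0

# Hypothetical usage: `my_agent` maps a question string to an answer string,
# and `load_officeqa_pro` is an assumed loader, not a released API.
# items = load_officeqa_pro()
# print(f"accuracy: {evaluate(my_agent, items):.1%}")
```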
Original Abstract
We introduce OfficeQA Pro, a benchmark for evaluating AI agents on grounded, multi-document reasoning over a large and heterogeneous document corpus. The corpus consists of U.S. Treasury Bulletins spanning nearly 100 years, comprising 89,000 pages and over 26 million numerical values. OfficeQA Pro consists of 133 questions that require precise document parsing, retrieval, and analytical reasoning across both unstructured text and tabular data. Frontier LLMs including Claude Opus 4.6, GPT-5.4, and Gemini 3.1 Pro Preview achieve less than 5% accuracy on OfficeQA Pro when relying on parametric knowledge, and less than 12% with additional access to the web. When provided directly with the document corpus, frontier agents still struggle on over half of the questions, scoring 34.1% on average. We find that providing agents with a structured document representation produced by Databricks' ai_parse_document yields a 16.1% average relative performance gain across agents. We conduct additional ablations to study the effects of model selection, table representation, retrieval strategy, and test-time scaling on performance. Despite these improvements, significant headroom remains before agents can be considered reliable at enterprise-grade grounded reasoning.
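For scale, a 16.1% relative gain applied to the 34.1% average would put corpus-grounded agents at roughly 34.1% × 1.161 ≈ 39.6% accuracy, though the paper reports an average of per-agent relative gains, so the exact figure may differ. The sketch below shows how such a structured corpus representation might be produced with Databricks' ai_parse_document via PySpark; it runs only on Databricks, and the corpus path, output handling, and downstream use are assumptions for illustration, not the paper's pipeline.

```python
# Minimal PySpark sketch: parse raw PDF bulletins into a structured
# representation with Databricks' ai_parse_document. The exact output
# schema is determined by the function and may differ across releases;
# the volume path below is a placeholder, not a real location.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

parsed = spark.sql("""
    SELECT path, ai_parse_document(content) AS parsed
    FROM READ_FILES('/Volumes/main/treasury/bulletins/', format => 'binaryFile')
""")

# Downstream agents would consume `parsed` (extracted text and tables)
# instead of raw PDF pages.
parsed.show(truncate=False)
```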