LLM Reasoning relevance: 9/10

TopoBench: Benchmarking LLMs on Hard Topological Reasoning

Mayug Maniparambil, Nils Hoehing, Janak Kapuriya, Arjun Karuvally, Ellen Rushe, Anthony Ventresque, Noel O'Connor, Fergal Reid
arXiv: 2603.12133v1 Published: 2026-03-12 Updated: 2026-03-12

AI Summary

The TopoBench benchmark evaluates LLMs on hard topological reasoning puzzles and finds that the bottleneck lies in extracting spatial constraints, not in reasoning over them.

Main Contributions

  • Introduces TopoBench, a benchmark covering six families of topological puzzles
  • Analyzes the error types LLMs make when solving topological puzzles
  • Studies strategies for mitigating LLM failures in topological reasoning

Methodology

Construct a benchmark of topological puzzles at varying difficulty levels, evaluate LLM performance on it, and analyze the causes of LLM errors through intervention experiments.

Original Abstract

Solving topological grid puzzles requires reasoning over global spatial invariants such as connectivity, loop closure, and region symmetry, and remains challenging for even the most powerful large language models (LLMs). To study these abilities under controlled settings, we introduce TopoBench, a benchmark of six puzzle families across three difficulty levels. We evaluate strong reasoning LLMs on TopoBench and find that even frontier models solve fewer than one quarter of hard instances, with two families nearly unsolved. To investigate whether these failures stem from reasoning limitations or from difficulty extracting and maintaining spatial constraints, we annotate 750 chain-of-thought traces with an error taxonomy that surfaces four candidate causal failure modes, then test them with targeted interventions simulating each error type. These interventions show that certain error patterns, like premature commitment and constraint forgetting, have a direct impact on the ability to solve the puzzle, while repeated reasoning is a benign effect of search. Finally, we study mitigation strategies including prompt guidance, cell-aligned grid representations, and tool-based constraint checking, finding that the bottleneck lies in extracting constraints from spatial representations and not in reasoning over them. Code and data are available at github.com/mayug/topobench-benchmark.
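The global invariants the abstract names (e.g. connectivity) are mechanically verifiable, which is what a tool-based constraint checker can exploit. As a hedged illustration only (the function name and the `#`/`.` grid encoding are assumptions for this sketch, not taken from the paper), checking that shaded cells form a single connected region might look like:

```python
from collections import deque

def shaded_cells_connected(grid):
    """Check whether all shaded cells ('#') in a rectangular grid form
    one orthogonally connected region, a common puzzle invariant.
    (Hypothetical helper for illustration; encoding is assumed.)"""
    cells = {(r, c) for r, row in enumerate(grid)
             for c, ch in enumerate(row) if ch == "#"}
    if not cells:
        return True
    # Breadth-first search from an arbitrary shaded cell.
    start = next(iter(cells))
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nbr in cells and nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    # Connected iff BFS reached every shaded cell.
    return seen == cells

print(shaded_cells_connected(["##.", ".#.", ".##"]))  # True
print(shaded_cells_connected(["#..", "...", "..#"]))  # False
```

A solver that can call such a checker offloads the invariant verification; per the abstract's finding, the harder part for LLMs is extracting constraints like these from the spatial representation in the first place.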

Tags

LLM · Topological Reasoning · Benchmarking · Spatial Reasoning · Error Analysis

arXiv Categories

cs.AI cs.CL