X-RAY: Mapping LLM Reasoning Capability via Formalized and Calibrated Probes
AI Summary
X-RAY analyzes LLM reasoning capability with formalized probes, revealing an asymmetry in how models handle constraints.
Key Contributions
- Proposes X-RAY, a system for analyzing LLM reasoning capability based on formalized and calibrated probes
- Reveals a reasoning asymmetry in LLMs between constraint refinement and solution-space restructuring
- Provides a contamination-free framework for training and testing reasoning models
Methodology
Formal tools generate probes with controlled structural variations; through calibration and verification, the system precisely isolates LLM reasoning capability.
Original Abstract
Large language models (LLMs) achieve promising performance, yet their ability to reason remains poorly understood. Existing evaluations largely emphasize task-level accuracy, often conflating pattern matching with reasoning capability. We present X-RAY, an explainable reasoning analysis system that maps LLM reasoning capability using calibrated, formally verified probes. We model reasoning capability as a function of extractable *structure*, operationalized through formal properties such as constraint interaction, reasoning depth, and solution-space geometry. X-RAY generates probes via formal tools with controlled structural variations, enabling precise isolation of incremental structural information through formal calibration and verification. We evaluate state-of-the-art LLMs on problems ranging from junior-level to advanced in mathematics, physics, and chemistry. Our analysis reveals a systematic asymmetry in LLM reasoning: models are relatively robust to constraint refinement, where additional conditions shrink an existing solution space, but degrade sharply under solution-space restructuring, where modifications alter the underlying structural form of the solution manifold. Moreover, calibrated formal probes differentiate models that appear indistinguishable on standard benchmarks and reveal failure modes that are structurally interpretable rather than opaque. Beyond evaluation, our framework is contamination-free and supports the training and testing of reasoning models.
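The refinement/restructuring distinction can be illustrated with a toy example; the helpers below (`solutions`, the lambda constraints, brute-force enumeration in place of the paper's formal tools) are illustrative assumptions, not X-RAY's actual probe generator.

```python
from itertools import product

def solutions(constraints, domain=range(-5, 6)):
    """Enumerate (x, y) pairs in a small integer domain satisfying every constraint."""
    return {(x, y) for x, y in product(domain, repeat=2)
            if all(c(x, y) for c in constraints)}

base = [lambda x, y: x + y == 4]           # base probe: solutions lie on a line

# Constraint refinement: an added condition shrinks the existing solution set.
refined = base + [lambda x, y: x >= 0]

# Solution-space restructuring: the constraint's structural form changes,
# altering the geometry of the solution set (line -> hyperbola).
restructured = [lambda x, y: x * y == 4]

assert solutions(refined) <= solutions(base)   # refinement only ever shrinks
print(len(solutions(base)), len(solutions(refined)), len(solutions(restructured)))
```

Under the abstract's finding, models would be expected to track the `refined` variant far more reliably than the `restructured` one, even though both are small edits to the base probe.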