LLM Reasoning relevance: 8/10

LLM-as-a-Judge for Time Series Explanations

Preetham Sivalingam, Murari Mandal, Saurabh Deshpande, Dhruv Kumar
arXiv: 2604.02118v1 · Published: 2026-04-02 · Updated: 2026-04-02

AI Summary

This paper studies the feasibility of using LLMs as both generators and evaluators of time series explanations, and constructs a synthetic benchmark dataset for the evaluation.

Key Contributions

  • Proposes an LLM-based evaluation method for time series explanations that requires no reference explanations.
  • Constructs a synthetic benchmark dataset of 350 time series cases.
  • Identifies an asymmetry between LLMs' generation and evaluation abilities on these tasks, with evaluation being the more stable of the two.

Methodology

Construct synthetic time series data, use LLMs to both generate and evaluate explanations, and compare their performance across the different tasks.
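The synthetic construction described above can be illustrated with a minimal sketch for two of the seven query types named in the abstract (Structural Break and Volatility Shift). The function names, parameters, and noise model here are illustrative assumptions, not the authors' generation code.

```python
# Hedged sketch: synthetic series for two of the paper's seven query
# types. Parameter choices (n, break index, shift size) are assumptions.
import numpy as np

def structural_break(n=100, break_at=60, shift=5.0, seed=0):
    """Gaussian noise with a level shift of `shift` from index `break_at`."""
    rng = np.random.default_rng(seed)
    series = rng.normal(0.0, 1.0, n)
    series[break_at:] += shift  # abrupt change in mean
    return series

def volatility_shift(n=100, shift_at=60, factor=4.0, seed=0):
    """Gaussian noise whose std is multiplied by `factor` after `shift_at`."""
    rng = np.random.default_rng(seed)
    series = rng.normal(0.0, 1.0, n)
    series[shift_at:] *= factor  # abrupt change in variance, not mean
    return series
```

Each generated series would then be paired with a question (e.g. "Where does the level change?") and correct, partially correct, and incorrect candidate explanations.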

Original Abstract

Evaluating the factual correctness of LLM-generated natural language explanations grounded in time series data remains an open challenge. Although modern models generate textual interpretations of numerical signals, existing evaluation methods are limited: reference-based similarity metrics and consistency-checking models require ground-truth explanations, while traditional time series methods operate purely on numerical values and cannot assess free-form textual reasoning. Thus, no general-purpose method exists to directly verify whether an explanation is faithful to underlying time series data without predefined references or task-specific rules.

We study large language models as both generators and evaluators of time series explanations in a reference-free setting, where given a time series, question, and candidate explanation, the evaluator assigns a ternary correctness label based on pattern identification, numeric accuracy, and answer faithfulness, enabling principled scoring and comparison. To support this, we construct a synthetic benchmark of 350 time series cases across seven query types, each paired with correct, partially correct, and incorrect explanations. We evaluate models across four tasks: explanation generation, relative ranking, independent scoring, and multi-anomaly detection.

Results show a clear asymmetry: generation is highly pattern-dependent and exhibits systematic failures on certain query types, with accuracies ranging from 0.00 to 0.12 for Seasonal Drop and Volatility Shift, to 0.94 to 0.96 for Structural Break, while evaluation is more stable, with models correctly ranking and scoring explanations even when their own outputs are incorrect. These findings demonstrate the feasibility of data-grounded, LLM-based evaluation for time series explanations and highlight their potential as reliable evaluators of data-grounded reasoning in the time series domain.
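The reference-free judging setup in the abstract (a ternary label assigned from the series, question, and candidate explanation) can be sketched as a prompt builder plus a label parser. The prompt wording, the exact label strings, and the parsing strategy are assumptions for illustration; the paper's actual prompts may differ, and the call to a real LLM is left out.

```python
# Hedged sketch of a reference-free ternary judge. Label names and
# prompt text are illustrative assumptions, not the paper's prompts.
LABELS = ("correct", "partially_correct", "incorrect")

def build_judge_prompt(series, question, explanation):
    """Assemble the evaluator prompt from the raw inputs."""
    values = ", ".join(f"{v:.2f}" for v in series)
    return (
        "You are evaluating an explanation of a time series.\n"
        f"Series: [{values}]\n"
        f"Question: {question}\n"
        f"Candidate explanation: {explanation}\n"
        "Judge the explanation on (1) pattern identification, "
        "(2) numeric accuracy, and (3) answer faithfulness.\n"
        f"Reply with exactly one label: {', '.join(LABELS)}."
    )

def parse_label(reply):
    """Map a free-form model reply onto one of the three labels."""
    reply = reply.strip().lower()
    # Check longer labels first so 'correct' does not shadow
    # 'partially_correct' or 'incorrect'.
    for label in sorted(LABELS, key=len, reverse=True):
        if label in reply:
            return label
    return None  # unparseable reply; the caller may retry
```

Feeding the same prompt template to several models and comparing their labels against the benchmark's known correct/partial/incorrect assignments would yield the scoring-task accuracies reported in the abstract.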

Tags

LLM · Time Series · Explanation Evaluation · Benchmark

arXiv Categories

cs.AI cs.CL