LLM Reasoning relevance: 8/10

SciDef: Automating Definition Extraction from Academic Literature with Large Language Models

Filip Kučera, Christoph Mandl, Isao Echizen, Radu Timofte, Timo Spinde
arXiv: 2602.05413v1 Published: 2026-02-05 Updated: 2026-02-05

AI Summary

SciDef proposes an LLM-based pipeline for automatically extracting definitions from academic literature, and evaluates different prompting strategies and evaluation metrics.

Key Contributions

  • Propose SciDef, an LLM-based definition-extraction pipeline
  • Construct the DefExtra & DefSim datasets for evaluation
  • Evaluate different prompting strategies and NLI-based metrics

Methodology

The pipeline uses LLMs with multi-step prompting to extract definitions, evaluates the extracted definitions with an NLI-based method, and validates the approach on the newly constructed datasets.
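One plausible way to use NLI for comparing an extracted definition against a reference is a bidirectional-entailment check: score both directions and require mutual entailment. This is a minimal sketch under that assumption; the summary does not specify SciDef's actual metric, and the function names, threshold, and the toy token-overlap scorer (a hypothetical stand-in for a real NLI model's entailment probability) are all illustrative.

```python
def toy_entail(premise: str, hypothesis: str) -> float:
    """Hypothetical stand-in for an NLI model's entailment probability:
    the fraction of hypothesis tokens that also appear in the premise."""
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    return len(p & h) / len(h) if h else 0.0

def nli_definition_match(extracted: str, reference: str,
                         entail=toy_entail, threshold: float = 0.5):
    """Score a definition pair by bidirectional entailment (assumed scheme).

    Returns (score, is_match): the score is the minimum of the two
    entailment directions, so both definitions must support each other.
    """
    forward = entail(extracted, reference)   # does extracted entail reference?
    backward = entail(reference, extracted)  # does reference entail extracted?
    score = min(forward, backward)
    return score, score >= threshold
```

In practice `entail` would be replaced by a pretrained NLI classifier; the `min` aggregation penalizes definitions that only partially overlap with the reference, which matters given the paper's finding that models tend to over-generate definitions.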

Original Abstract

Definitions are the foundation for any scientific work, but with a significant increase in publication numbers, gathering definitions relevant to any keyword has become challenging. We therefore introduce SciDef, an LLM-based pipeline for automated definition extraction. We test SciDef on DefExtra & DefSim, novel datasets of human-extracted definitions and definition-pairs' similarity, respectively. Evaluating 16 language models across prompting strategies, we demonstrate that multi-step and DSPy-optimized prompting improve extraction performance. To evaluate extraction, we test various metrics and show that an NLI-based method yields the most reliable results. We show that LLMs are largely able to extract definitions from scientific literature (86.4% of definitions from our test-set); yet future work should focus not just on finding definitions, but on identifying relevant ones, as models tend to over-generate them. Code & datasets are available at https://github.com/Media-Bias-Group/SciDef.

Tags

Definition Extraction LLM Natural Language Processing Knowledge Acquisition

arXiv Categories

cs.IR cs.CL