Improving Implicit Discourse Relation Recognition with Natural Language Explanations from LLMs
AI Summary
This paper uses explanations generated by LLMs to improve both the performance and the interpretability of Implicit Discourse Relation Recognition (IDRR).
Key Contributions
- Proposes a method that augments IDRR models with LLM-generated explanations
- Proposes a joint classification-generation framework trained with supervision from LLM-generated explanations
- Validates the effectiveness of the approach on IDRR, sentiment classification, and NLI tasks
Methodology
First, an LLM is prompted to generate an explanation for each training instance. A classification-generation framework is then built and trained with the LLM-generated explanations serving as an additional supervision signal alongside the relation labels.
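The joint training objective described above can be sketched as a weighted sum of a classification loss and an explanation-generation loss. This is an illustrative assumption of how such a framework is commonly trained; the function name `joint_loss`, the weighting hyperparameter `alpha`, and the specific loss decomposition are not taken from the paper.

```python
import math

def joint_loss(class_probs, gold_label, expl_token_probs, alpha=0.5):
    """Hypothetical joint objective for a classification-generation framework.

    class_probs: model's predicted probability for each relation class
    gold_label: index of the gold relation label
    expl_token_probs: model's probability for each token of the
        LLM-generated reference explanation (teacher-forced)
    alpha: assumed hyperparameter balancing the two terms
    """
    # Cross-entropy on the relation label (classification head)
    cls_loss = -math.log(class_probs[gold_label])
    # Mean negative log-likelihood of the reference explanation (generation head)
    gen_loss = -sum(math.log(p) for p in expl_token_probs) / len(expl_token_probs)
    return alpha * cls_loss + (1 - alpha) * gen_loss
```

Because the explanation loss is simply added to the standard classification loss, the scheme is plug-and-play in the sense the abstract describes: any existing IDRR classifier can adopt it by attaching a generation head and extending its loss.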
Original Abstract
Implicit Discourse Relation Recognition (IDRR) remains a challenging task due to the requirement for deep semantic understanding in the absence of explicit discourse markers. A further limitation is that existing methods only predict relations without providing any supporting explanations. Recent advances in large language models (LLMs) have shown strong reasoning capabilities in both deep language understanding and natural language explanation generation. In this work, we propose a simple yet effective approach to distill the reasoning capabilities of LLMs into lightweight IDRR models to improve both performance and interpretability. Specifically, we first prompt an LLM to generate explanations for each training instance conditioned on its gold label. Then, we introduce a novel classification-generation framework that jointly performs relation prediction and explanation generation, and train it with the additional supervision of LLM-generated explanations. Our framework is plug-and-play, enabling easy integration with most existing IDRR models. Experimental results on PDTB demonstrate that our approach significantly improves IDRR performance, while human evaluation further confirms that the generated explanations enhance model interpretability. Furthermore, we validate the generality of our approach on sentiment classification and natural language inference tasks.