Compact Prompting in Instruction-tuned LLMs for Joint Argumentative Component Detection
AI Summary
The paper proposes a new approach to argumentative component detection based on instruction-tuned LLMs and compact prompts, reframing ACD as a generation task and outperforming the state of the art.
Main Contributions
- Reframes ACD as a language generation task
- Uses instruction-tuned LLMs with compact prompts for ACD
- Achieves higher performance on standard benchmarks
Methodology
An instruction-tuned LLM, guided by a compact instruction prompt, generates argumentative components directly from plain text, avoiding any reliance on pre-segmented components.
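The generative reframing can be illustrated with a minimal sketch: build a compact instruction prompt around the raw text, then parse the model's free-form generation back into labeled spans. The instruction wording, the `<label>: <span>` output format, and the parser below are hypothetical illustrations, not the paper's actual templates.

```python
# Hypothetical sketch of ACD as generation with a compact instruction
# prompt; the prompt text and output format are assumptions for
# illustration, not the paper's actual templates.

INSTRUCTION = (
    "Identify all argumentative components in the text. "
    "Copy each span verbatim and label it as Claim or Premise, "
    "one per line, in the form <label>: <span>."
)

def build_prompt(text: str) -> str:
    """Combine the compact instruction with the raw input text."""
    return f"{INSTRUCTION}\n\nText: {text}\n\nComponents:"

def parse_components(generation: str) -> list[tuple[str, str]]:
    """Parse '<label>: <span>' lines from the model's generation into
    (label, span) pairs; malformed lines are skipped."""
    components = []
    for line in generation.strip().splitlines():
        label, sep, span = line.partition(":")
        if sep and label.strip() in {"Claim", "Premise"}:
            components.append((label.strip(), span.strip()))
    return components

# Example with a mocked model generation in the expected format.
mock_generation = (
    "Claim: School uniforms should be mandatory\n"
    "Premise: they reduce peer pressure over clothing"
)
print(parse_components(mock_generation))
```

Because the model copies spans verbatim, the parsed pairs can be aligned back to character offsets in the input, recovering the joint segmentation-and-classification output that ACD requires.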
Original Abstract
Argumentative component detection (ACD) is a core subtask of Argument(ation) Mining (AM) and one of its most challenging aspects, as it requires jointly delimiting argumentative spans and classifying them into components such as claims and premises. While research on this subtask remains relatively limited compared to other AM tasks, most existing approaches formulate it as a simplified sequence labeling problem, component classification, or a pipeline of component segmentation followed by classification. In this paper, we propose a novel approach based on instruction-tuned Large Language Models (LLMs) using compact instruction-based prompts, and reframe ACD as a language generation task, enabling arguments to be identified directly from plain text without relying on pre-segmented components. Experiments on standard benchmarks show that our approach achieves higher performance compared to state-of-the-art systems. To the best of our knowledge, this is one of the first attempts to fully model ACD as a generative task, highlighting the potential of instruction tuning for complex AM problems.