Learning to Generate and Extract: A Multi-Agent Collaboration Framework For Zero-shot Document-level Event Arguments Extraction
AI Summary
Proposes a multi-agent collaboration framework to tackle zero-shot document-level event argument extraction, improving both the quality of generated data and extraction performance.
Main Contributions
- Proposes a multi-agent collaboration framework that simulates the human "Propose-Evaluate-Revise" cognitive process.
- Designs a reward mechanism that incorporates event structure constraints to iteratively optimize the generation and evaluation agents.
- Experiments show the method improves data generation quality and argument extraction performance, and the generated data also boosts the zero-shot performance of other models.
Methodology
Builds a generation agent and an evaluation agent and optimizes them iteratively with reinforcement learning: the generation agent synthesizes data, while the evaluation agent assesses data quality and provides the reward signal.
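The iterative loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `GenerationAgent`, `EvaluationAgent`, and the `structure_reward` term are hypothetical stand-ins (the paper's agents are LLM-based and trained via reinforcement learning, which is omitted here).

```python
def structure_reward(arguments, schema_roles):
    """Toy event-structure constraint: fraction of schema roles filled."""
    filled = {role for role, span in arguments.items() if span}
    return len(filled & set(schema_roles)) / max(len(schema_roles), 1)

class GenerationAgent:
    def propose(self, event_type, seen_examples):
        # Placeholder: a real agent would prompt an LLM with knowledge
        # from seen events to synthesize a document for the unseen type.
        return {"text": f"synthetic doc for {event_type}",
                "event_type": event_type}

class EvaluationAgent:
    def extract(self, doc, schema_roles):
        # Placeholder: a real agent extracts argument spans and scores
        # their semantic consistency with the document context.
        args = {role: f"span_for_{role}" for role in schema_roles}
        consistency = 1.0
        return args, consistency

def propose_evaluate_revise(gen, ev, event_type, schema_roles,
                            seen_examples, rounds=3):
    history = []
    for _ in range(rounds):
        doc = gen.propose(event_type, seen_examples)        # Propose
        args, consistency = ev.extract(doc, schema_roles)   # Evaluate
        # Reward combines semantic consistency with structure constraints;
        # the RL update ("Revise") of both agents is omitted in this sketch.
        reward = consistency * structure_reward(args, schema_roles)
        history.append((doc, args, reward))
    return max(history, key=lambda h: h[2])

doc, args, reward = propose_evaluate_revise(
    GenerationAgent(), EvaluationAgent(),
    "Attack", ["attacker", "target", "instrument"], seen_examples=[])
```

The key design point is that the evaluation agent's output doubles as a training signal: extraction quality on the synthetic data is converted into a reward, closing the loop between generation and evaluation.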
Original Abstract
Document-level event argument extraction (DEAE) is essential for knowledge acquisition, aiming to extract participants of events from documents. In the zero-shot setting, existing methods employ LLMs to generate synthetic data to address the challenge posed by the scarcity of annotated data. However, relying solely on event-type-only prompts makes it difficult for the generated content to accurately capture the contextual and structural relationships of unseen events. Moreover, ensuring the reliability and usability of synthetic data remains a significant challenge due to the absence of quality evaluation mechanisms. To this end, we introduce a multi-agent collaboration framework for zero-shot document-level event argument extraction (ZS-DEAE), which simulates the human collaborative cognitive process of "Propose-Evaluate-Revise." Specifically, the framework comprises a generation agent and an evaluation agent. The generation agent synthesizes data for unseen events by leveraging knowledge from seen events, while the evaluation agent extracts arguments from the synthetic data and assesses their semantic consistency with the context. The evaluation results are subsequently converted into reward signals, with event structure constraints incorporated into the reward design to enable iterative optimization of both agents via reinforcement learning. In three zero-shot scenarios constructed from the RAMS and WikiEvents datasets, our method achieves improvements both in data generation quality and argument extraction performance, while the generated data also effectively enhances the zero-shot performance of other DEAE models.