LLM Reasoning relevance: 8/10

Arabic Morphosyntactic Tagging and Dependency Parsing with Large Language Models

Mohamed Adel, Bashar Alhafni, Nizar Habash
arXiv: 2603.16718v1 Published: 2026-03-17 Updated: 2026-03-17

AI Summary

The paper studies how LLMs perform on Arabic morphosyntactic tagging and dependency parsing, and analyzes their strengths and weaknesses.

Key Contributions

  • Evaluated instruction-tuned LLMs on structured prediction tasks for Arabic
  • Analyzed the impact of prompt design and demonstration selection on performance
  • Identified which aspects of Arabic morphosyntax and syntax LLMs handle well and which remain challenging

Methodology

The paper compares zero-shot prompting with retrieval-based in-context learning (ICL), drawing demonstrations from Arabic treebanks to evaluate LLM performance.
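The retrieval-based ICL setup can be sketched roughly as follows: given an input sentence, retrieve the most similar treebank examples and prepend them as demonstrations in the prompt. This is a minimal illustration only; the `jaccard`/`retrieve`/`build_prompt` names, the token-overlap retriever, and the English toy treebank are all assumptions, not the paper's actual retriever or prompt format.

```python
# Minimal sketch of retrieval-based in-context learning (ICL) for tagging.
# Assumed: a toy treebank of (sentence, tags) pairs and a simple
# token-overlap retriever; the paper's method likely differs in detail.

def jaccard(a, b):
    # Lexical similarity between two whitespace-tokenized sentences.
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def retrieve(query, treebank, k=2):
    # Rank treebank examples by overlap with the query; keep the top k.
    return sorted(treebank, key=lambda ex: jaccard(query, ex[0]), reverse=True)[:k]

def build_prompt(query, treebank, k=2):
    # Assemble a few-shot prompt: instruction, demonstrations, then the query.
    lines = ["Tag each token with its part of speech."]
    for sent, tags in retrieve(query, treebank, k):
        lines.append(f"Sentence: {sent}\nTags: {tags}")
    lines.append(f"Sentence: {query}\nTags:")
    return "\n\n".join(lines)

# Hypothetical mini-treebank (English placeholders standing in for Arabic data).
treebank = [
    ("the cat sleeps", "DET NOUN VERB"),
    ("a dog runs fast", "DET NOUN VERB ADV"),
    ("the dog sleeps", "DET NOUN VERB"),
]
prompt = build_prompt("the dog runs", treebank, k=2)
```

In this sketch the two demonstrations with the highest lexical overlap are selected, mirroring the intuition that similar sentences make more informative in-context examples than random ones.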

Original Abstract

Large language models (LLMs) perform strongly on many NLP tasks, but their ability to produce explicit linguistic structure remains unclear. We evaluate instruction-tuned LLMs on two structured prediction tasks for Standard Arabic: morphosyntactic tagging and labeled dependency parsing. Arabic provides a challenging testbed due to its rich morphology and orthographic ambiguity, which create strong morphology-syntax interactions. We compare zero-shot prompting with retrieval-based in-context learning (ICL) using examples from Arabic treebanks. Results show that prompt design and demonstration selection strongly affect performance: proprietary models approach supervised baselines for feature-level tagging and become competitive with specialized dependency parsers. In raw-text settings, tokenization remains challenging, though retrieval-based ICL improves both parsing and tokenization. Our analysis highlights which aspects of Arabic morphosyntax and syntax LLMs capture reliably and which remain difficult.

Tags

LLM, Arabic, morphosyntactic tagging, dependency parsing, in-context learning

arXiv Categories

cs.CL