Unmasking the Factual-Conceptual Gap in Persian Language Models
AI Summary
This paper exposes serious deficiencies of Persian LLMs in understanding and reasoning about cultural customs.
Key Contributions
- Introduces DivanBench, a benchmark for evaluating the cultural commonsense reasoning of Persian LLMs
- Reveals that existing Persian LLMs exhibit severe acquiescence bias and fail to reliably identify violations of cultural customs
- Finds that continued pretraining can amplify this bias and degrade the model's reasoning ability
Methodology
Constructs DivanBench with 315 questions spanning three task types (factual retrieval, paired scenario verification, and situational reasoning) and evaluates seven Persian LLMs on it.
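To make the task structure concrete, here is a minimal Python sketch of how DivanBench-style items and a per-task accuracy breakdown might be represented. The field names and task labels are hypothetical illustrations; the paper's actual data format is not specified here.

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical item schema covering the three task types described above.
@dataclass
class BenchItem:
    task: str    # "factual_retrieval", "paired_verification", or "situational_reasoning"
    prompt: str  # the question or scenario shown to the model
    gold: str    # expected answer, e.g. "yes"/"no" or an option label

def per_task_accuracy(items, model_answer):
    """Score a model separately on each task type.

    model_answer: callable mapping a prompt string to the model's answer string.
    Returns a dict of task type -> accuracy.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for item in items:
        total[item.task] += 1
        if model_answer(item.prompt).strip().lower() == item.gold.strip().lower():
            correct[item.task] += 1
    return {task: correct[task] / total[task] for task in total}
```

Comparing the `factual_retrieval` score against the scenario-based scores from such a breakdown is what surfaces the retrieval-versus-application gap the paper reports.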
Original Abstract
While emerging Persian NLP benchmarks have expanded into pragmatics and politeness, they rarely distinguish between memorized cultural facts and the ability to reason about implicit social norms. We introduce DivanBench, a diagnostic benchmark focused on superstitions and customs: arbitrary, context-dependent rules that resist simple logical deduction. Through 315 questions across three task types (factual retrieval, paired scenario verification, and situational reasoning), we evaluate seven Persian LLMs and reveal three critical failures: most models exhibit severe acquiescence bias, correctly identifying appropriate behaviors but failing to reject clear violations; continuous Persian pretraining amplifies this bias rather than improving reasoning, often degrading the model's ability to discern contradictions; and all models show a 21% performance gap between retrieving factual knowledge and applying it in scenarios. These findings demonstrate that cultural competence requires more than scaling monolingual data, as current models learn to mimic cultural patterns without internalizing the underlying schemas.
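The acquiescence-bias finding lends itself to a simple metric. Below is a hedged sketch, assuming each paired scenario carries a gold label of "appropriate" (the model should accept it) or "violation" (the model should reject it); this is an illustration, not the paper's published scoring procedure.

```python
def acquiescence_bias(results):
    """results: iterable of (gold, said_yes) pairs, where gold is "appropriate"
    or "violation" and said_yes is True if the model accepted the scenario.
    Returns (accuracy_on_appropriate, accuracy_on_violations, gap); a large
    positive gap means the model accepts appropriate behaviors but fails to
    reject violations, i.e. the acquiescence pattern described above."""
    accepts = [said_yes for gold, said_yes in results if gold == "appropriate"]
    rejects = [not said_yes for gold, said_yes in results if gold == "violation"]
    acc_appropriate = sum(accepts) / len(accepts)
    acc_violation = sum(rejects) / len(rejects)
    return acc_appropriate, acc_violation, acc_appropriate - acc_violation

# A model that says "yes" to everything scores 1.0 on appropriate scenarios,
# 0.0 on violations, and shows the maximal bias gap of 1.0.
print(acquiescence_bias([("appropriate", True), ("violation", True)]))  # (1.0, 0.0, 1.0)
```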