LLM Reasoning relevance: 9/10

Macaron: Controlled, Human-Written Benchmark for Multilingual and Multicultural Reasoning via Template-Filling

Alaa Elsetohy, Sama Hadhoud, Haryo Akbarianto Wibowo, Chenxi Whitehouse, Genta Indra Winata, Fajri Koto, Alham Fikri Aji
arXiv: 2602.10732v1 Published: 2026-02-11 Updated: 2026-02-11

AI Summary

Macaron is a multilingual, multicultural reasoning benchmark designed to test LLMs' reasoning ability across different cultural contexts.

Key Contributions

  • Proposes Macaron, a template-based multilingual, multicultural reasoning benchmark
  • Covers 7 reasoning types and 22 cultural aspects
  • Includes 20 languages and 20 countries/cultural contexts

Methodology

Starting from 100 language-agnostic templates, native annotators create scenario-aligned multiple-choice questions in English and local languages, plus systematically derived True/False questions.
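The template-filling idea can be illustrated with a minimal sketch. Everything below is hypothetical: the slot names, scenario, and distractor scheme are assumptions for illustration, not the dataset's actual templates.

```python
# Hypothetical sketch of Macaron-style template filling: a language-agnostic
# template with culture-specific slots is instantiated into a multiple-choice
# question. Slot names and the distractor rule are illustrative assumptions.

TEMPLATE = (
    "During {festival} in {country}, a family prepares {quantity} servings of "
    "{dish} per guest. How many servings are needed for {guests} guests?"
)

def fill_template(slots: dict) -> dict:
    """Instantiate the template and derive an MCQ with numeric distractors."""
    question = TEMPLATE.format(**slots)
    correct = slots["quantity"] * slots["guests"]
    # Simple distractors around the correct answer (illustrative only).
    options = sorted({correct, correct - slots["guests"],
                      correct + slots["guests"], correct + 1})
    return {"question": question, "options": options, "answer": correct}

item = fill_template({
    "festival": "Eid al-Fitr",
    "country": "Egypt",
    "quantity": 2,
    "dish": "kahk",
    "guests": 12,
})
```

In the paper's actual pipeline, annotators (not code) fill the cultural slots, which is what grounds the scenarios in local knowledge; the sketch only shows how one template factorizes into many culture-specific instances.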

Original Abstract

Multilingual benchmarks rarely test reasoning over culturally grounded premises: translated datasets keep English-centric scenarios, while culture-first datasets often lack control over the reasoning required. We propose Macaron, a template-first benchmark that factorizes reasoning type and cultural aspect across question languages. Using 100 language-agnostic templates that cover 7 reasoning types and 22 cultural aspects, native annotators create scenario-aligned English and local-language multiple-choice questions and systematically derived True/False questions. Macaron contains 11,862 instances spanning 20 countries/cultural contexts, 10 scripts, and 20 languages (including low-resource ones like Amharic, Yoruba, Zulu, Kyrgyz, and some Arabic dialects). In zero-shot evaluation of 21 multilingual LLMs, reasoning-mode models achieve the strongest performance and near-parity between English and local languages, while open-weight models degrade substantially in local languages and often approach chance on T/F tasks. Culture-grounded mathematical and counting templates are consistently the hardest. The data can be accessed here https://huggingface.co/datasets/AlaaAhmed2444/Macaron.
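The abstract's "systematically derived True/False questions" step can be sketched as follows. This is one plausible derivation scheme assumed for illustration, not necessarily the authors' exact rule: each MCQ option becomes a T/F statement, true only for the gold answer.

```python
# Hypothetical sketch: deriving True/False items from a multiple-choice
# question. The pairing rule is an illustrative assumption.

def derive_tf(question: str, options: list, answer) -> list:
    """Turn each MCQ option into a T/F statement; only the gold option is True."""
    items = []
    for opt in options:
        statement = f"{question} Proposed answer: {opt}."
        items.append({"statement": statement, "label": opt == answer})
    return items

tf_items = derive_tf(
    "How many servings are needed for 12 guests at 2 servings each?",
    [12, 24, 25, 36],
    24,
)
```

A derivation like this keeps the T/F items scenario-aligned with the MCQs, which is why chance level on the T/F task is 50% per statement — the baseline that the abstract reports open-weight models often approach in local languages.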

Tags

Multilingual Reasoning Culture Benchmark

arXiv Categories

cs.CL