IndicEval: A Bilingual Indian Educational Evaluation Framework for Large Language Models
AI Summary
IndicEval is a framework for evaluating the multilingual capabilities of LLMs in Indian educational settings.
Main Contributions
- Proposes IndicEval, a multilingual evaluation framework built on authentic examination questions
- Evaluates the reasoning ability and language adaptability of multiple LLMs in educational settings
- Finds that CoT prompting significantly improves model performance, while gaps in multilingual performance persist
Methodology
LLMs are evaluated on authentic examination questions in English and Hindi using Zero-Shot, Few-Shot, and Chain-of-Thought (CoT) prompting strategies.
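As a rough illustration of how the three prompting strategies might be assembled for a single bilingual multiple-choice item: the paper does not publish its prompt templates, so the function names, field names, and exemplar text below are hypothetical.

```python
# Minimal sketch of Zero-Shot / Few-Shot / CoT prompt construction for one
# multiple-choice exam item. All identifiers are illustrative assumptions,
# not IndicEval's actual interface.

def build_prompt(question, options, strategy, exemplars=None):
    """Assemble a prompt for a multiple-choice exam question."""
    option_block = "\n".join(f"({chr(65 + i)}) {opt}" for i, opt in enumerate(options))
    body = f"Question: {question}\n{option_block}\n"

    if strategy == "zero_shot":
        return body + "Answer with the letter of the correct option."
    if strategy == "few_shot":
        shots = "\n\n".join(f"Question: {q}\nAnswer: {a}" for q, a in (exemplars or []))
        return shots + "\n\n" + body + "Answer with the letter of the correct option."
    if strategy == "cot":
        return body + "Think step by step, then state the final answer as a single letter."
    raise ValueError(f"unknown strategy: {strategy}")


# Usage: the same item evaluated under all three strategies.
item = {
    "question": "Which article of the Indian Constitution guarantees equality before the law?",
    "options": ["Article 14", "Article 19", "Article 21", "Article 32"],
}
for strategy in ("zero_shot", "few_shot", "cot"):
    prompt = build_prompt(item["question"], item["options"], strategy,
                          exemplars=[("2 + 2 = ?", "(A) 4")])
    # answer = call_llm(prompt)  # hypothetical model call; the framework automates scoring
```

The same builder would be applied to the Hindi rendering of each item, so that English and Hindi runs differ only in the question text, not in the evaluation pipeline.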
Original Abstract
The rapid advancement of large language models (LLMs) necessitates evaluation frameworks that reflect real-world academic rigor and multilingual complexity. This paper introduces IndicEval, a scalable benchmarking platform designed to assess LLM performance using authentic high-stakes examination questions from UPSC, JEE, and NEET across STEM and humanities domains in both English and Hindi. Unlike synthetic benchmarks, IndicEval grounds evaluation in real examination standards, enabling realistic measurement of reasoning, domain knowledge, and bilingual adaptability. The framework automates assessment using Zero-Shot, Few-Shot, and Chain-of-Thought (CoT) prompting strategies and supports modular integration of new models and languages. Experiments conducted on Gemini 2.0 Flash, GPT-4, Claude, and LLaMA 3-70B reveal three major findings. First, CoT prompting consistently improves reasoning accuracy, with substantial gains across subjects and languages. Second, significant cross-model performance disparities persist, particularly in high-complexity examinations. Third, multilingual degradation remains a critical challenge, with marked accuracy drops in Hindi compared to English, especially under Zero-Shot conditions. These results highlight persistent gaps in bilingual reasoning and domain transfer. Overall, IndicEval provides a practice-oriented, extensible foundation for rigorous, equitable evaluation of LLMs in multilingual educational settings and offers actionable insights for improving reasoning robustness and language adaptability.
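The abstract's claim of "modular integration of new models and languages" could, for instance, be realized with a simple adapter registry. The sketch below is an assumption about such a design, not the framework's actual API; all names are hypothetical.

```python
# Hypothetical adapter registry: each model is a callable that maps a prompt
# string to a completion string, so new models plug in without changing the
# evaluation loop.
from typing import Callable, Dict

MODEL_REGISTRY: Dict[str, Callable[[str], str]] = {}

def register_model(name: str):
    """Decorator that adds a model adapter to the registry under `name`."""
    def wrapper(fn: Callable[[str], str]) -> Callable[[str], str]:
        MODEL_REGISTRY[name] = fn
        return fn
    return wrapper

@register_model("llama-3-70b")
def call_llama(prompt: str) -> str:
    # Placeholder: route to a local server or hosted endpoint here.
    raise NotImplementedError

def evaluate(model_name: str, prompts: Dict[str, str]) -> Dict[str, str]:
    """Run one registered model over prompts keyed by language (e.g. 'en', 'hi')."""
    model = MODEL_REGISTRY[model_name]
    return {lang: model(prompt) for lang, prompt in prompts.items()}
```

Under this kind of design, adding a new language amounts to supplying prompts under a new key, and adding a new model amounts to registering one more adapter.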