Translation or Recitation? Calibrating Evaluation Scores for Machine Translation of Extremely Low-Resource Languages
AI Summary
Proposes the FRED metrics for evaluating extremely low-resource machine translation, revealing that reported performance differences are driven largely by training-data quality and pre-training exposure.
Main Contributions
- Introduced the FRED metrics, comprising Fertility Ratio, Retrieval Proxy, Pre-training Exposure, and Corpus Diversity
- Revealed the impact of train-test overlap and pre-training exposure on extremely low-resource translation performance
- Highlighted the problem of poor tokenization coverage for low-resource languages
Methodology
By analyzing existing datasets, the paper proposes four dataset-intrinsic metrics (FRED) to assess the difficulty and quality of machine translation benchmarks.
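To make the "high token fertility" idea concrete, here is a minimal sketch of a fertility-ratio computation. It assumes fertility is defined as subword tokens emitted per whitespace-separated word, with a toy greedy longest-match tokenizer standing in for a real subword model; the paper's exact formulation and tokenizer may differ.

```python
def toy_tokenize(word, vocab):
    """Greedy longest-match subword split (toy stand-in for BPE/SentencePiece).
    Characters not covered by any vocabulary piece fall back to
    single-character tokens, which is what drives fertility up for
    languages the tokenizer has never seen."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab or j == i + 1:
                tokens.append(piece)
                i = j
                break
    return tokens

def fertility_ratio(sentences, vocab):
    """Average number of subword tokens per whitespace word.
    A ratio near 1.0 means good tokenizer coverage; a high ratio means
    words are being shattered into many pieces (poor coverage)."""
    words = [w for s in sentences for w in s.split()]
    n_tokens = sum(len(toy_tokenize(w, vocab)) for w in words)
    return n_tokens / max(len(words), 1)
```

For example, with a vocabulary containing `"trans"` and `"lation"`, the word "translation" splits into 2 pieces (fertility 2.0), while a fully out-of-vocabulary word degrades to one token per character.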
Original Abstract
The landscape of extremely low-resource machine translation (MT) is characterized by perplexing variability in reported performance, often making results across different language pairs difficult to contextualize. For researchers focused on specific language groups -- such as ancient languages -- it is nearly impossible to determine if breakthroughs reported in other contexts (e.g., native African or American languages) result from superior methodologies or are merely artifacts of benchmark collection. To address this problem, we introduce the FRED Difficulty Metrics, which include the Fertility Ratio (F), Retrieval Proxy (R), Pre-training Exposure (E), and Corpus Diversity (D) and serve as dataset-intrinsic metrics to contextualize reported scores. These metrics reveal that a significant portion of result variability is explained by train-test overlap and pre-training exposure rather than model capability. Additionally, we identify that some languages -- particularly extinct and non-Latin indigenous languages -- suffer from poor tokenization coverage (high token fertility), highlighting a fundamental limitation of transferring models from high-resource languages that lack a shared vocabulary. By providing these indices alongside performance scores, we enable more transparent evaluation of cross-lingual transfer and provide a more reliable foundation for the XLR MT community.
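The abstract attributes much of the score variability to train-test overlap. One simple way to quantify that overlap, sketched below under assumptions of my own (the paper's Retrieval Proxy may be computed differently), is the fraction of test sentences whose closest training sentence exceeds an n-gram Jaccard-similarity threshold:

```python
def ngrams(sentence, n=2):
    """Set of word n-grams in a sentence (lowercased, whitespace-split)."""
    toks = sentence.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def overlap_fraction(train, test, n=2, threshold=0.5):
    """Fraction of test sentences whose best Jaccard similarity against
    any training sentence reaches the threshold. A high value suggests a
    model can score well by reciting memorized training pairs rather
    than translating."""
    train_sets = [ngrams(t, n) for t in train]

    def max_jaccard(test_sentence):
        g = ngrams(test_sentence, n)
        if not g or not train_sets:
            return 0.0
        return max(len(g & tg) / len(g | tg) if (g | tg) else 0.0
                   for tg in train_sets)

    flagged = sum(max_jaccard(t) >= threshold for t in test)
    return flagged / max(len(test), 1)
```

On a test set where one sentence is copied verbatim from training and another shares no bigrams with it, the function reports an overlap fraction of 0.5 at the default threshold.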