BABE: Biology Arena BEnchmark
AI Summary
BABE is a new benchmark in the biology domain designed to evaluate the experimental reasoning capabilities of LLMs.
Key Contributions
- Introduces the BABE benchmark for evaluating the experimental reasoning capabilities of biological AI systems
- BABE is built from peer-reviewed papers and real-world biological studies
- BABE challenges models to perform causal reasoning and cross-scale inference
Methodology
BABE is constructed from peer-reviewed research papers and requires models to integrate experimental results with contextual knowledge in order to perform causal reasoning and cross-scale inference.
Original Abstract
The rapid evolution of large language models (LLMs) has expanded their capabilities from basic dialogue to advanced scientific reasoning. However, existing benchmarks in biology often fail to assess a critical skill required of researchers: the ability to integrate experimental results with contextual knowledge to derive meaningful conclusions. To address this gap, we introduce BABE (Biology Arena BEnchmark), a comprehensive benchmark designed to evaluate the experimental reasoning capabilities of biological AI systems. BABE is uniquely constructed from peer-reviewed research papers and real-world biological studies, ensuring that tasks reflect the complexity and interdisciplinary nature of actual scientific inquiry. BABE challenges models to perform causal reasoning and cross-scale inference. Our benchmark provides a robust framework for assessing how well AI systems can reason like practicing scientists, offering a more authentic measure of their potential to contribute to biological research.