AI Agents relevance: 8/10

MiniAppBench: Evaluating the Shift from Text to Interactive HTML Responses in LLM-Powered Assistants

Zuhao Zhang, Chengyue Yu, Yuante Li, Chenyi Zhuang, Linjian Mo, Shuai Li
arXiv: 2603.09652v1 Published: 2026-03-10 Updated: 2026-03-10

AI Summary

Introduces MiniAppBench, a benchmark for evaluating LLMs' ability to generate interactive HTML applications, together with MiniAppEval, an automated framework for assessing the generated apps.

Key Contributions

  • Proposes the MiniAppBench benchmark for evaluating LLMs' ability to generate interactive HTML applications
  • Proposes the MiniAppEval framework for automatically assessing the quality of generated applications
  • Reveals the challenges current LLMs face in generating high-quality MiniApps

Methodology

Builds a benchmark of 500 tasks and uses a browser-automation-based agent to perform exploratory testing, evaluating each generated application along its intention, static, and dynamic aspects.
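As a rough illustration of how per-task scores along the three dimensions might roll up into a benchmark number, here is a minimal Python sketch. The dimension names (Intention, Static, Dynamic) come from the paper; the 0-1 scale, equal weighting, and class/field names are assumptions for illustration, not the paper's actual scoring formula.

```python
from dataclasses import dataclass

@dataclass
class MiniAppScore:
    """Hypothetical per-task result from an agentic evaluator.

    The three dimensions are from the paper; the 0-1 scale and
    equal weighting below are illustrative assumptions.
    """
    intention: float  # does the app match the task's intent?
    static: float     # visual layout / rendering quality
    dynamic: float    # interaction logic under exploratory testing

    def overall(self) -> float:
        # Equal-weight average (an assumption, not the paper's formula).
        return (self.intention + self.static + self.dynamic) / 3

# Aggregate over tasks: mean of per-task overall scores.
scores = [MiniAppScore(0.9, 0.8, 0.4), MiniAppScore(1.0, 0.7, 0.7)]
benchmark_score = sum(s.overall() for s in scores) / len(scores)
print(round(benchmark_score, 3))
```

In practice the dynamic dimension would be filled in by a browser-automation agent (e.g. clicking through the generated app), which is where most of the evaluation complexity lives.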

Original Abstract

With the rapid advancement of Large Language Models (LLMs) in code generation, human-AI interaction is evolving from static text responses to dynamic, interactive HTML-based applications, which we term MiniApps. These applications require models to not only render visual interfaces but also construct customized interaction logic that adheres to real-world principles. However, existing benchmarks primarily focus on algorithmic correctness or static layout reconstruction, failing to capture the capabilities required for this new paradigm. To address this gap, we introduce MiniAppBench, the first comprehensive benchmark designed to evaluate principle-driven, interactive application generation. Sourced from a real-world application with 10M+ generations, MiniAppBench distills 500 tasks across six domains (e.g., Games, Science, and Tools). Furthermore, to tackle the challenge of evaluating open-ended interactions where no single ground truth exists, we propose MiniAppEval, an agentic evaluation framework. Leveraging browser automation, it performs human-like exploratory testing to systematically assess applications across three dimensions: Intention, Static, and Dynamic. Our experiments reveal that current LLMs still face significant challenges in generating high-quality MiniApps, while MiniAppEval demonstrates high alignment with human judgment, establishing a reliable standard for future research. Our code is available in github.com/MiniAppBench.

Tags

LLM, benchmark, evaluation, interactive HTML, MiniApp

arXiv Categories

cs.AI