Learning to Present: Inverse Specification Rewards for Agentic Slide Generation
AI Summary
The paper proposes an automated slide generation method based on reinforcement learning with an inverse specification reward.
Main Contributions
- Proposes an OpenEnv-compatible reinforcement learning environment in which LLM agents generate slide presentations
- Introduces an inverse specification reward for evaluating slide quality
- Open-sources the SlideRL dataset and accompanying code
Methodology
Fine-tunes Qwen2.5-Coder-7B via GRPO, training an LLM agent to generate slides with a multi-component reward combined with the inverse specification reward.
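The multi-component reward described above can be sketched as a weighted combination of the five signals named in the abstract. This is a minimal illustration only: the component names, value ranges, and weights below are assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch of the multi-component reward. Weights and the
# 0-1 scaling of each component are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RewardComponents:
    structural: float    # structural validation of the HTML slides (0-1)
    render: float        # render quality assessment (0-1)
    aesthetic: float     # LLM-based aesthetic score (0-1)
    content: float       # content quality metrics (0-1)
    inverse_spec: float  # inverse specification reward (0-1)

def total_reward(c: RewardComponents,
                 weights=(0.15, 0.15, 0.2, 0.2, 0.3)) -> float:
    """Weighted sum of the reward components; weights are placeholders."""
    parts = (c.structural, c.render, c.aesthetic, c.content, c.inverse_spec)
    return sum(w * p for w, p in zip(weights, parts))

reward = total_reward(RewardComponents(1.0, 1.0, 0.8, 0.7, 0.6))
```

A scalar reward of this shape is what a GRPO-style trainer consumes per rollout; the relative weighting between the quality heuristics and the inverse specification signal would be a tuning choice.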
Original Abstract
Automated presentation generation remains a challenging task requiring coherent content creation, visual design, and audience-aware communication. This work proposes an OpenEnv-compatible reinforcement learning environment where LLM agents learn to research topics, plan content, and generate professional HTML slide presentations through tool use. We introduce a multi-component reward system combining structural validation, render quality assessment, LLM-based aesthetic scoring, content quality metrics, and an inverse specification reward that measures how faithfully generated slides convey their intended purpose. The inverse specification reward, an "inverse task" where an LLM attempts to recover the original specification from generated slides, provides a holistic quality signal. Our approach fine-tunes Qwen2.5-Coder-7B via GRPO, training only 0.5% of parameters on prompts derived from expert demonstrations collected using Claude Opus 4.6. Experiments on 48 diverse business briefs across six models demonstrate that our fine-tuned 7B model achieves 91.2% of Claude Opus 4.6's quality while improving by 33.1% over the base model. The six-model comparison reveals that instruction adherence and tool-use compliance, rather than raw parameter count, determine agentic task performance. We contribute SlideRL, an open-source dataset of 288 multi-turn rollout trajectories across all six models. Dataset: https://huggingface.co/datasets/KarthikRagunathAnandaKumar/sliderl-multi-turn-rollouts Code: https://github.com/pushing-the-frontier/slide-forge-llm
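The inverse specification reward described in the abstract can be sketched as a two-step check: an LLM reads the generated slides and tries to reconstruct the brief they were made from, and the reward is the similarity between that reconstruction and the true brief. In the sketch below, `recover_spec` is a stub standing in for the LLM call, and token-level F1 is an assumed similarity metric, not necessarily the paper's exact choice.

```python
# Minimal sketch of the inverse specification reward ("inverse task").
# recover_spec is a stub for an LLM call; token F1 is an assumed metric.

def recover_spec(slides_html: str) -> str:
    # In the real system, an LLM would read the rendered slides and write
    # the specification it believes they were generated from.
    return "pitch deck for a b2b saas analytics startup"

def token_f1(pred: str, ref: str) -> float:
    """Token-overlap F1 between the recovered and original specification."""
    p, r = set(pred.lower().split()), set(ref.lower().split())
    common = len(p & r)
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(r)
    return 2 * precision * recall / (precision + recall)

def inverse_spec_reward(slides_html: str, original_brief: str) -> float:
    return token_f1(recover_spec(slides_html), original_brief)
```

The appeal of this signal is that it is holistic: slides that omit or distort the brief's intent make the specification unrecoverable, which no per-slide structural or aesthetic check would catch on its own.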