Learning to Learn-at-Test-Time: Language Agents with Learnable Adaptation Policies
AI Summary
Proposes the Meta-TTL framework, which uses meta-learning to optimize the test-time learning adaptation policies of language agents, improving their generalization ability.
Main Contributions
- Proposes the Meta-TTL framework
- Formulates the learning of adaptation policies as a bi-level optimization problem
- Demonstrates experimentally that the optimized adaptation policies generalize more strongly
Methodology
Meta-TTL uses bi-level optimization: the inner loop executes standard TTL, while the outer loop uses evolutionary search to optimize the adaptation policy, improving the agent's performance on new tasks.
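The bi-level structure can be illustrated with a minimal toy sketch. Everything below is an illustrative assumption, not the paper's actual implementation: the adaptation policy is reduced to a single `learning_rate` parameter, the "task" to a scalar difficulty target, and the outer loop to a simple elitist mutation search.

```python
import random

# Hypothetical sketch of Meta-TTL's bi-level loop; names and the toy
# policy/task representations are illustrative assumptions, not the paper's API.

def inner_loop_ttl(adaptation_policy, task, episodes=5):
    """Inner loop (standard TTL): over sequential episodes, the adaptation
    policy updates the actor's 'skill' from the observed error signal."""
    skill = 0.0
    for _ in range(episodes):
        error = task["difficulty"] - skill          # toy per-episode error
        skill += adaptation_policy["learning_rate"] * error
    # Fitness: how close the agent ends up to the task target (0 is best).
    return -abs(task["difficulty"] - skill)

def outer_loop_evolution(tasks, generations=20, population_size=8, seed=0):
    """Outer loop: evolutionary search over candidate adaptation policies,
    scored by inner-loop TTL performance across the training task set."""
    rng = random.Random(seed)
    population = [{"learning_rate": rng.uniform(0.0, 1.0)}
                  for _ in range(population_size)]
    fitness = lambda p: sum(inner_loop_ttl(p, t) for t in tasks)
    for _ in range(generations):
        elites = sorted(population, key=fitness, reverse=True)[: population_size // 2]
        # Refill the population by mutating the elites.
        population = elites + [
            {"learning_rate": max(0.0, e["learning_rate"] + rng.gauss(0, 0.1))}
            for e in elites
        ]
    return max(population, key=fitness)

tasks = [{"difficulty": d} for d in (0.3, 0.6, 0.9)]
best_policy = outer_loop_evolution(tasks)
```

In this toy setting the search converges toward a learning rate near 1, which lets the inner loop correct its error within the episode budget; in Meta-TTL the same role is played by a much richer adaptation policy evaluated on real agent trajectories.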
Original Abstract
Test-Time Learning (TTL) enables language agents to iteratively refine their performance through repeated interactions with the environment at inference time. At the core of TTL is an adaptation policy that updates the actor policy based on experience from previous episodes, thereby improving future behavior. Existing methods rely on fixed, hand-crafted adaptation policies rather than optimizing them for downstream improvement. We argue that optimal adaptation policies should be learned from task environments, not hand-engineered based on human intuition. To achieve this, we introduce Meta-TTL, a framework that formulates the discovery of effective adaptation policies as a bi-level optimization problem. Within this framework, the inner loop executes the standard TTL process, measuring how effectively a candidate adaptation policy helps an agent correct errors across sequential episodes. Guided by the agent's performance, the outer loop employs evolutionary search over a diverse distribution of training tasks to iteratively refine the adaptation policy. We evaluate Meta-TTL on Jericho and WebArena-Lite across both in-distribution (ID) and out-of-distribution (OOD) settings, using multiple meta-agent backbones. Results on both benchmarks show that Meta-TTL consistently outperforms hand-crafted baselines, suggesting that the optimized adaptation policy encodes transferable strategies that generalize beyond the training task distribution.