LLM Memory & RAG relevance: 7/10

Meta-Sel: Efficient Demonstration Selection for In-Context Learning via Supervised Meta-Learning

Xubin Wang, Weijia Jia
arXiv: 2602.12123v1 · Published: 2026-02-12 · Updated: 2026-02-12

AI Summary

Meta-Sel proposes an efficient demonstration selection method for in-context learning, based on supervised meta-learning and targeted at intent classification.

Key Contributions

  • Proposes Meta-Sel, a lightweight supervised meta-learning method for demonstration selection
  • Constructs a meta-dataset using class agreement as the supervision signal
  • Provides an extensive empirical evaluation across multiple datasets and LLMs

Methodology

A meta-dataset is built by sampling pairs from the training split, with class agreement as the label, and a calibrated logistic regression model is trained on two inexpensive meta-features (TF-IDF cosine similarity and a length-compatibility ratio) to score (candidate, query) pairs. At inference time, a single vectorized scoring pass over the candidate pool returns the top-k demonstrations.
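The pipeline above can be sketched end to end. This is a minimal toy reconstruction, not the paper's implementation: the pool, queries, plain gradient-descent logistic regression (standing in for the calibrated regressor), and the exact TF-IDF weighting are all illustrative assumptions; only the two meta-features, the class-agreement supervision, and the top-k scoring pass come from the source.

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def build_idf(texts):
    """Smoothed IDF over the candidate pool; `default` covers unseen words."""
    n = len(texts)
    df = Counter(w for t in texts for w in set(tokenize(t)))
    default = math.log((1 + n) / 1) + 1
    return {w: math.log((1 + n) / (1 + d)) + 1 for w, d in df.items()}, default

def vectorize(text, idf, default):
    tf = Counter(tokenize(text))
    return {w: c * idf.get(w, default) for w, c in tf.items()}

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u if w in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def meta_features(a, b, idf, default):
    """The two cheap meta-features: TF-IDF cosine and length-compatibility ratio."""
    cos = cosine(vectorize(a, idf, default), vectorize(b, idf, default))
    la, lb = len(tokenize(a)), len(tokenize(b))
    return [cos, min(la, lb) / max(la, lb)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.5, epochs=2000):
    """Plain gradient-descent logistic regression (uncalibrated stand-in)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * len(w), 0.0
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            for j, xj in enumerate(xi):
                gw[j] += err * xj
            gb += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, gw)]
        b -= lr * gb / len(X)
    return w, b

# Toy labeled pool (text, intent label), purely illustrative.
pool = [
    ("book a flight to paris", "travel"),
    ("find me a cheap flight", "travel"),
    ("play some jazz music", "music"),
    ("play relaxing music", "music"),
]
idf, default = build_idf([t for t, _ in pool])

# Meta-dataset: sampled ordered pairs, label 1 iff the two share a class.
X, y = [], []
for i, (ta, ca) in enumerate(pool):
    for j, (tb, cb) in enumerate(pool):
        if i != j:
            X.append(meta_features(ta, tb, idf, default))
            y.append(1 if ca == cb else 0)
w, b = train_logreg(X, y)

def select_demonstrations(query, pool, k):
    """One scoring pass over the full pool; returns the top-k (text, label)."""
    scored = [(sigmoid(sum(wj * xj for wj, xj in
                           zip(w, meta_features(text, query, idf, default))) + b),
               text, label) for text, label in pool]
    scored.sort(reverse=True)
    return [(text, label) for _, text, label in scored[:k]]

top = select_demonstrations("flight to london", pool, k=2)
```

Because the learned scorer is just a weighted sum of two interpretable features, the ranking is deterministic and the feature weights `w` can be inspected directly, which is the auditability property the abstract highlights.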

Original Abstract

Demonstration selection is a practical bottleneck in in-context learning (ICL): under a tight prompt budget, accuracy can change substantially depending on which few-shot examples are included, yet selection must remain cheap enough to run per query over large candidate pools. We propose Meta-Sel, a lightweight supervised meta-learning approach for intent classification that learns a fast, interpretable scoring function for (candidate, query) pairs from labeled training data. Meta-Sel constructs a meta-dataset by sampling pairs from the training split and using class agreement as supervision, then trains a calibrated logistic regressor on two inexpensive meta-features: TF-IDF cosine similarity and a length-compatibility ratio. At inference time, the selector performs a single vectorized scoring pass over the full candidate pool and returns the top-k demonstrations, requiring no model fine-tuning, no online exploration, and no additional LLM calls. This yields deterministic rankings and makes the selection mechanism straightforward to audit via interpretable feature weights. Beyond proposing Meta-Sel, we provide a broad empirical study of demonstration selection, benchmarking 12 methods -- spanning prompt engineering baselines, heuristic selection, reinforcement learning, and influence-based approaches -- across four intent datasets and five open-source LLMs. Across this benchmark, Meta-Sel consistently ranks among the top-performing methods, is particularly effective for smaller models where selection quality can partially compensate for limited model capacity, and maintains competitive selection-time overhead.

Tags

In-Context Learning · Demonstration Selection · Meta-Learning · Intent Classification

arXiv Categories

cs.LG cs.AI cs.CL