A Geometric Analysis of Small-sized Language Model Hallucinations
AI Summary
The paper analyzes the hallucination problem in small language models from a geometric perspective, proposing a method that uses clustering in the embedding space to distinguish genuine from hallucinated responses.
Main Contributions
- Proposes a geometric perspective for analyzing hallucinations
- Shows that genuine responses cluster more tightly in the embedding space
- Introduces an efficient hallucination detection method based on a small number of annotations
Methodology
Hallucinations are detected and classified by sampling multiple responses to the same prompt and analyzing how those responses cluster in the embedding space.
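A minimal sketch of the clustering signal, using mean pairwise cosine distance over the embeddings of sampled responses (lower means tighter clustering, which the paper hypothesizes indicates genuine responses). The toy vectors below are stand-ins for real sentence embeddings; the paper's exact dispersion measure may differ.

```python
import math
import random

def mean_pairwise_cosine_distance(embeddings):
    """Mean pairwise cosine distance among response embeddings.
    Lower values suggest tighter clustering; under the paper's
    hypothesis, genuine responses score lower than hallucinated ones."""
    def cos_dist(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return 1.0 - dot / (na * nb)

    n = len(embeddings)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(cos_dist(embeddings[i], embeddings[j]) for i, j in pairs) / len(pairs)

random.seed(0)
# Toy stand-ins for embeddings of multiple responses to one prompt:
# a tight cluster around one direction vs. scattered random points.
tight = [[1.0 + random.gauss(0, 0.01), random.gauss(0, 0.01), random.gauss(0, 0.01)]
         for _ in range(8)]
loose = [[random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)]
         for _ in range(8)]
assert mean_pairwise_cosine_distance(tight) < mean_pairwise_cosine_distance(loose)
```

In practice the embeddings would come from a sentence encoder applied to each sampled response, and the dispersion score would be thresholded or fed to a classifier.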
Original Abstract
Hallucinations -- fluent but factually incorrect responses -- pose a major challenge to the reliability of language models, especially in multi-step or agentic settings. This work investigates hallucinations in small-sized LLMs from a geometric perspective, starting from the hypothesis that, when a model generates multiple responses to the same prompt, the genuine ones cluster more tightly in the embedding space. We confirm this hypothesis and, leveraging this geometric insight, show that a consistent level of separability between genuine and hallucinated responses can be achieved. This result is then used to introduce a label-efficient propagation method that classifies large collections of responses from just 30-50 annotations, achieving F1 scores above 90%. By framing hallucinations geometrically in the embedding space, our findings complement traditional knowledge-centric and single-response evaluation paradigms, paving the way for further research.
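The label-efficient propagation step can be illustrated with a deliberately simplified stand-in: spread the 30-50 gold labels to unlabeled responses via majority vote among nearest labeled neighbors in the embedding space. The `propagate_labels` helper, the k-NN rule, and the toy 2-D points below are all illustrative assumptions, not the paper's actual algorithm.

```python
import math

def propagate_labels(labeled, unlabeled, k=3):
    """Assign each unlabeled embedding the majority label of its k nearest
    labeled neighbors (Euclidean distance). A simplified stand-in for the
    paper's label-efficient propagation method.
    `labeled` is a list of (embedding, label) pairs."""
    preds = []
    for x in unlabeled:
        # Sort labeled points by distance to x and keep the k closest.
        neighbors = sorted(labeled, key=lambda pair: math.dist(pair[0], x))[:k]
        votes = [label for _, label in neighbors]
        preds.append(max(set(votes), key=votes.count))
    return preds

# Toy setup: genuine responses embed near (0, 0), hallucinated near (5, 5).
labeled = [((0.1, 0.0), "genuine"), ((0.0, 0.2), "genuine"),
           ((5.0, 5.1), "hallucination"), ((4.9, 5.0), "hallucination")]
unlabeled = [(0.05, 0.1), (5.2, 4.8)]
print(propagate_labels(labeled, unlabeled, k=3))  # → ['genuine', 'hallucination']
```

With well-separated clusters, a handful of seed annotations is enough to label the rest of the collection, which is the intuition behind classifying large response sets from only 30-50 annotations.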