LLM Reasoning relevance: 8/10

Concept frustration: Aligning human concepts and machine representations

Enrico Parisini, Christopher J. Soelistyo, Ahab Isaac, Alessandro Barp, Christopher R. S. Banerji
arXiv: 2603.29654v1 Published: 2026-03-31 Updated: 2026-03-31

AI Summary

The paper proposes a "concept frustration" framework for aligning human concepts with the internal representations of machine learning models, with the aim of improving interpretability.

Main Contributions

  • Introduces the notion of "concept frustration" for measuring discrepancies between human and machine concepts
  • Develops task-aligned similarity measures that detect concept frustration
  • Constructs a linear-Gaussian generative model to analyse how concept frustration affects model performance (see the toy sketch after this list)
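The paper's generative model is not spelled out in this digest. The following is a minimal toy sketch, in the linear-Gaussian spirit the abstract describes, of how an unobserved concept `u` can induce a relationship between two known concepts `c1` and `c2` that has no explanation inside an ontology containing only `c1` and `c2`. All variable names and coefficients here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Unobserved ("frustrating") concept, and two known concepts that both load on it.
u = rng.normal(size=n)
c1 = 0.8 * u + rng.normal(scale=0.6, size=n)
c2 = -0.8 * u + rng.normal(scale=0.6, size=n)

# Marginally, c1 and c2 are strongly anti-correlated, but only via u:
# once u is accounted for, the residual dependence vanishes.
print("corr(c1, c2):          %+.3f" % np.corrcoef(c1, c2)[0, 1])
print("corr of residuals | u: %+.3f" % np.corrcoef(c1 - 0.8 * u,
                                                   c2 + 0.8 * u)[0, 1])
```

An ontology containing only `c1` and `c2` must record their anti-correlation as a brute fact; adding `u` makes it consistent, which is the intuition behind using frustration to diagnose an incomplete ontology.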

Methodology

The paper uses a geometric framework to compare supervised human concepts with the model's unsupervised representations, and applies task-aligned similarity measures to detect concept frustration.
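The paper's task-aligned measures are not reproduced in this digest. As a rough sketch of the idea, one can compare a supervised concept direction and an unsupervised direction both in raw embedding space and after mapping through a task head, so that similarity is measured in the geometry the downstream task actually uses. `concept_direction`, `task_aligned_cosine`, and `W_task` below are illustrative stand-ins, not the paper's API.

```python
import numpy as np

def concept_direction(X, y):
    """Least-squares linear probe for a concept; returns the unit weight vector."""
    Xc = X - X.mean(axis=0)
    w, *_ = np.linalg.lstsq(Xc, y - y.mean(), rcond=None)
    return w / np.linalg.norm(w)

def task_aligned_cosine(u, v, W_task):
    """Cosine similarity after projecting both directions through the task head."""
    u_t, v_t = W_task @ u, W_task @ v
    return float(u_t @ v_t / (np.linalg.norm(u_t) * np.linalg.norm(v_t) + 1e-12))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))           # stand-in foundation-model embeddings
y_concept = (X[:, 0] > 0).astype(float)  # stand-in human concept labels
u = concept_direction(X, y_concept)      # supervised concept direction
v = np.linalg.svd(X - X.mean(0), full_matrices=False)[2][0]  # top unsupervised direction
W_task = rng.normal(size=(8, 64))        # stand-in task head

print("Euclidean cosine:    %+.3f" % float(u @ v))
print("Task-aligned cosine: %+.3f" % task_aligned_cosine(u, v, W_task))
```

The point of the contrast is the abstract's claim that frustration is visible in task-aligned geometry while plain Euclidean comparison misses it.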

Original Abstract

Aligning human-interpretable concepts with the internal representations learned by modern machine learning systems remains a central challenge for interpretable AI. We introduce a geometric framework for comparing supervised human concepts with unsupervised intermediate representations extracted from foundation model embeddings. Motivated by the role of conceptual leaps in scientific discovery, we formalise the notion of concept frustration: a contradiction that arises when an unobserved concept induces relationships between known concepts that cannot be made consistent within an existing ontology. We develop task-aligned similarity measures that detect concept frustration between supervised concept-based models and unsupervised representations derived from foundation models, and show that the phenomenon is detectable in task-aligned geometry while conventional Euclidean comparisons fail. Under a linear-Gaussian generative model we derive a closed-form expression for Bayes-optimal concept-based classifier accuracy, decomposing predictive signal into known-known, known-unknown and unknown-unknown contributions and identifying analytically where frustration affects performance. Experiments on synthetic data and real language and vision tasks demonstrate that frustration can be detected in foundation model representations and that incorporating a frustrating concept into an interpretable model reorganises the geometry of learned concept representations, to better align human and machine reasoning. These results suggest a principled framework for diagnosing incomplete concept ontologies and aligning human and machine conceptual reasoning, with implications for the development and validation of safe interpretable AI for high-risk applications.
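The abstract's closed-form expression is not quoted in this digest. For orientation, the standard closed form for Bayes-optimal accuracy under equal-priors, equal-covariance Gaussian class conditionals is given below, and splitting the class-mean difference into known (K) and unknown (U) concept blocks decomposes the discriminative signal into known-known, known-unknown, and unknown-unknown terms. Treat the block decomposition as an assumed reading of the abstract, not the paper's exact formula.

```latex
% Equal priors, equal-covariance Gaussian classes: x | y=k ~ N(mu_k, Sigma).
\[
  \mathrm{Acc}^{*} = \Phi\!\left(\frac{\Delta}{2}\right),
  \qquad
  \Delta^{2} = (\mu_{1}-\mu_{0})^{\top}\Sigma^{-1}(\mu_{1}-\mu_{0}).
\]
% Splitting delta = mu_1 - mu_0 into known (K) and unknown (U) blocks:
\[
  \Delta^{2}
  = \delta_{K}^{\top}(\Sigma^{-1})_{KK}\,\delta_{K}
  + 2\,\delta_{K}^{\top}(\Sigma^{-1})_{KU}\,\delta_{U}
  + \delta_{U}^{\top}(\Sigma^{-1})_{UU}\,\delta_{U}.
\]
```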

Tags

interpretable AI, concept alignment, foundation models, geometric framework

arXiv Categories

cs.LG, cs.AI, stat.ML