Multimodal Learning Relevance: 8/10

UNBOX: Unveiling Black-box visual models with Natural-language

Simone Carnemolla, Chiara Russo, Simone Palazzo, Quentin Bouniot, Daniela Giordano, Zeynep Akata, Matteo Pennisi, Concetto Spampinato
arXiv: 2603.08639v1 Published: 2026-03-09 Updated: 2026-03-09

AI Summary

UNBOX leverages LLMs and diffusion models to reveal the internal logic and potential biases of black-box vision models through purely semantic search.

Key Contributions

  • Proposes the UNBOX framework for class-wise model dissection under fully data-free, gradient-free, and backpropagation-free constraints.
  • Leverages large language models and text-to-image diffusion models to recast activation maximization as a purely semantic search.
  • Demonstrates that, even under the strictest black-box constraints, UNBOX performs competitively with state-of-the-art white-box interpretability methods.

Methodology

Using an LLM together with a text-to-image diffusion model, UNBOX recasts activation maximization as a semantic search driven solely by output probabilities, producing human-interpretable text descriptions of what each class has learned.
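The search loop described above can be sketched roughly as follows. This is a minimal illustration only: `propose_descriptors`, `render_image`, and `black_box_prob` are deterministic placeholders invented here (the paper's actual LLM, diffusion, and API components are not specified in this summary); only the loop structure — refine text descriptors, generate images, score them by the black box's output probability, keep the best — follows the stated idea.

```python
# Hypothetical sketch of UNBOX-style semantic search. Every component below
# is a deterministic stand-in; only the loop structure reflects the method:
# propose descriptors, render images, score via output probabilities, keep best.

def propose_descriptors(seed, n=4):
    """Stand-in for an LLM proposing refined variants of a text descriptor."""
    return [f"{seed}, variant {i}" for i in range(n)]

def render_image(descriptor):
    """Stand-in for a text-to-image diffusion model (returns a fake 'image')."""
    return sum(ord(c) for c in descriptor)

def black_box_prob(image, target_class):
    """Stand-in for the black-box API: exposes only an output probability."""
    return ((image * 2654435761 + target_class * 97) % 1000) / 1000.0

def semantic_search(target_class, seed="a photo", rounds=3, keep=2):
    """Iteratively keep the descriptors whose images score highest for the class."""
    pool = [seed]
    for _ in range(rounds):
        candidates = [d for s in pool for d in propose_descriptors(s)]
        candidates.sort(key=lambda d: black_box_prob(render_image(d), target_class),
                        reverse=True)
        pool = candidates[:keep]  # best descriptors seed the next refinement round
    return pool

best = semantic_search(target_class=7)
print(best)
```

Note that the loop never touches gradients or internal activations: the classifier is queried purely through its output probabilities, which is what makes the approach applicable to proprietary APIs.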

Original Abstract

Ensuring trustworthiness in open-world visual recognition requires models that are interpretable, fair, and robust to distribution shifts. Yet modern vision systems are increasingly deployed as proprietary black-box APIs, exposing only output probabilities and hiding architecture, parameters, gradients, and training data. This opacity prevents meaningful auditing, bias detection, and failure analysis. Existing explanation methods assume white- or gray-box access or knowledge of the training distribution, making them unusable in these real-world settings. We introduce UNBOX, a framework for class-wise model dissection under fully data-free, gradient-free, and backpropagation-free constraints. UNBOX leverages Large Language Models and text-to-image diffusion models to recast activation maximization as a purely semantic search driven by output probabilities. The method produces human-interpretable text descriptors that maximally activate each class, revealing the concepts a model has implicitly learned, the training distribution it reflects, and potential sources of bias. We evaluate UNBOX on ImageNet-1K, Waterbirds, and CelebA through semantic fidelity tests, visual-feature correlation analyses and slice-discovery auditing. Despite operating under the strictest black-box constraints, UNBOX performs competitively with state-of-the-art white-box interpretability methods. This demonstrates that meaningful insight into a model's internal reasoning can be recovered without any internal access, enabling more trustworthy and accountable visual recognition systems.

Tags

Interpretability Black-box model Large Language Models Diffusion Models Vision

arXiv Categories

cs.CV cs.AI