LLM Reasoning relevance: 7/10

A Note on Non-Composability of Layerwise Approximate Verification for Neural Inference

Or Zamir
arXiv: 2602.15756v1 Published: 2026-02-17 Updated: 2026-02-17

AI Summary

Non-composability of layerwise approximate verification: even when each layer's computation error is individually bounded, the error of the overall output may be unbounded.

Main Contributions

  • Shows that layerwise approximate verification is unsound for neural inference
  • Provides a counterexample demonstrating that even when every per-layer error is small, the final output can be adversarially manipulated
  • Identifies a potential flaw in verifiable machine-learning inference over floating-point data

Methodology

By constructing a functionally equivalent network, the paper shows that per-layer approximation errors can be exploited to steer the final output.
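The amplification idea can be sketched in a toy setting (this is our own illustration, not the paper's construction; the names `f_original`, `f_equivalent`, and the scaling factor `s` are ours): rewrite a one-layer network `f(x) = w * x` as two layers that first scale the activation down by a large factor `s` and then scale it back up. The rewritten network is functionally equivalent under exact arithmetic, yet an adversarial per-layer error of magnitude at most `delta` in the scaled-down layer is amplified by `w * s` at the output.

```python
import numpy as np

w = 2.0       # original single-layer weight
delta = 1e-6  # per-layer tolerance accepted by the verifier
s = 1e6       # adversarial rescaling factor, chosen so w * s * delta = O(1)

def f_original(x):
    return w * x

def f_equivalent(x, layer1_err=0.0):
    # Layer 1: scale down by s; any |layer1_err| <= delta passes
    # a layerwise check against tolerance delta.
    h = x / s + layer1_err
    # Layer 2: scale back up and apply the original weight.
    # With layer1_err = 0 this reproduces f_original exactly.
    return (w * s) * h

x = 3.0
exact = f_equivalent(x)                      # matches f_original(x)
steered = f_equivalent(x, layer1_err=delta)  # shifted by w * s * delta
```

Here a "small" error of 1e-6 in one layer moves the final output by 2.0, even though every layer verifies within tolerance; choosing `s` larger steers the output by correspondingly more.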

Original Abstract

A natural and informal approach to verifiable (or zero-knowledge) ML inference over floating-point data is: ``prove that each layer was computed correctly up to tolerance $δ$; therefore the final output is a reasonable inference result''. This short note gives a simple counterexample showing that this inference is false in general: for any neural network, we can construct a functionally equivalent network for which adversarially chosen approximation-magnitude errors in individual layer computations suffice to steer the final output arbitrarily (within a prescribed bounded range).

Tags

Verification Neural Inference Floating-Point Arithmetic Adversarial Attack

arXiv Categories

cs.CR cs.LG