Invariant Transformation and Resampling based Epistemic-Uncertainty Reduction
AI Summary
Reduce epistemic uncertainty by resampling invariant transformations of the input, thereby improving the inference accuracy of AI models.
Main Contributions
- Proposes a resampling-based inference method
- Uses multiple invariant-transformed versions of an input to reduce epistemic uncertainty
- Offers a strategy for balancing model size and performance
Methodology
Apply invariant transformations to the input to generate multiple samples, run inference with the AI model on each, and aggregate the outputs to obtain a more accurate result.
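The procedure above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the toy `model`, the choice of flips as invariant transformations, and mean-aggregation of probability vectors are all assumptions made for the sketch.

```python
import numpy as np

# Hypothetical toy "model": a fixed linear classifier over flattened 4x4
# inputs. In practice this would be a trained AI model (e.g., a neural net).
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 16))  # 2 classes, 16 input features

def model(x):
    """Return class probabilities (softmax over logits) for a 4x4 input."""
    logits = W @ x.reshape(-1)
    e = np.exp(logits - logits.max())
    return e / e.sum()

def invariant_transforms(x):
    """Generate transformed versions of x under assumed invariances
    (here: horizontal/vertical flips, assumed label-preserving)."""
    return [x, np.fliplr(x), np.flipud(x), np.flipud(np.fliplr(x))]

def resampled_inference(x):
    """Run the model on each transformed sample, then aggregate by
    averaging the probability vectors; inference errors that are
    partially independent across transformations tend to cancel."""
    probs = np.stack([model(t) for t in invariant_transforms(x)])
    return probs.mean(axis=0)

x = rng.normal(size=(4, 4))
p = resampled_inference(x)  # aggregated class probabilities
```

The aggregation rule (here a simple mean over probability vectors) is one of several reasonable choices; majority voting over predicted labels is another common option in test-time-augmentation-style schemes.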
Original Abstract
An artificial intelligence (AI) model can be viewed as a function that maps inputs to outputs in high-dimensional spaces. Once designed and well trained, the AI model is applied for inference. However, even optimized AI models can produce inference errors due to aleatoric and epistemic uncertainties. Interestingly, we observed that when inferring multiple samples based on invariant transformations of an input, inference errors can show partial independence due to epistemic uncertainty. Leveraging this insight, we propose a "resampling"-based inference method that applies a trained AI model to multiple transformed versions of an input and aggregates the inference outputs into a more accurate result. This approach has the potential to improve inference accuracy and offers a strategy for balancing model size and performance.