AI Agents relevance: 6/10

Conformalized Neural Networks for Federated Uncertainty Quantification under Dual Heterogeneity

Quang-Huy Nguyen, Jiaqi Wang, Wei-Shinn Ku
arXiv: 2602.23296v1 · Published: 2026-02-26 · Updated: 2026-02-26

AI Summary

Proposes FedWQ-CP, a method for uncertainty quantification in federated learning under dual heterogeneity.

Main Contributions

  • Proposes the FedWQ-CP algorithm, which performs federated uncertainty quantification under dual (data and model) heterogeneity
  • Completes agent-server calibration in a single communication round
  • Experiments show it maintains coverage while producing the smallest prediction sets or intervals

Methodology

Each agent computes conformity scores on its calibration data and derives a local quantile threshold; the server aggregates the local thresholds via a weighted average to obtain the global threshold.
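The agent/server steps above can be sketched as follows. This is a minimal illustration assuming standard split-conformal scoring and calibration-size weighting; the function names `local_threshold` and `global_threshold` are illustrative, not from the paper.

```python
import numpy as np

def local_threshold(scores, alpha=0.1):
    """Agent side: finite-sample-corrected conformal quantile of
    nonconformity scores, plus the calibration sample size."""
    n = len(scores)
    # Standard split-conformal quantile level, clipped to 1.0 for small n
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return float(np.quantile(scores, level, method="higher")), n

def global_threshold(agent_stats):
    """Server side: average local thresholds weighted by calibration size.
    agent_stats is a list of (threshold, n) pairs, one per agent."""
    thresholds = np.array([q for q, _ in agent_stats])
    sizes = np.array([n for _, n in agent_stats], dtype=float)
    return float(np.sum(thresholds * sizes) / np.sum(sizes))

# Toy example: three agents with heterogeneous score distributions
rng = np.random.default_rng(0)
stats = [local_threshold(rng.exponential(scale=s, size=n))
         for s, n in [(1.0, 200), (2.0, 100), (0.5, 400)]]
q_global = global_threshold(stats)
```

Only the `(threshold, n)` pair leaves each agent, which matches the single-round, low-communication design described in the abstract.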

Original Abstract

Federated learning (FL) faces challenges in uncertainty quantification (UQ). Without reliable UQ, FL systems risk deploying overconfident models at under-resourced agents, leading to silent local failures despite seemingly satisfactory global performance. Existing federated UQ approaches often address data heterogeneity or model heterogeneity in isolation, overlooking their joint effect on coverage reliability across agents. Conformal prediction is a widely used distribution-free UQ framework, yet its applications in heterogeneous FL settings remain underexplored. We provide FedWQ-CP, a simple yet effective approach that balances empirical coverage performance with efficiency at both global and agent levels under dual heterogeneity. FedWQ-CP performs agent-server calibration in a single communication round. On each agent, conformity scores are computed on calibration data and a local quantile threshold is derived. Each agent then transmits only its quantile threshold and calibration sample size to the server. The server simply aggregates these thresholds through a weighted average to produce a global threshold. Experimental results on seven public datasets for both classification and regression demonstrate that FedWQ-CP empirically maintains agent-wise and global coverage while producing the smallest prediction sets or intervals.

Tags

Federated Learning · Uncertainty Quantification · Conformal Prediction · Heterogeneity

arXiv Categories

cs.LG cs.AI