AI Agents relevance: 7/10

Private and Robust Contribution Evaluation in Federated Learning

Delio Jaramillo Velez, Gergely Biczok, Alexandre Graell i Amat, Johan Ostman, Balazs Pejo
arXiv: 2602.21721v1 Published: 2026-02-25 Updated: 2026-02-25

AI Summary

Proposes two contribution-evaluation methods for federated learning that are compatible with secure aggregation, jointly achieving fairness, privacy, robustness, and practical utility.

Main Contributions

  • Proposes two contribution-evaluation methods, Fair-Private and Everybody-Else
  • Provides theoretical guarantees for fairness, privacy, robustness, and computational efficiency
  • Experimentally validates the methods' effectiveness, outperforming existing baselines on multiple datasets

Methodology

Designs contribution-evaluation scores based on marginal differences that remain compatible with secure aggregation, and validates their performance through theoretical analysis and experiments.
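To make "compatible with secure aggregation" concrete, here is a toy sketch of the pairwise additive-masking idea behind secure aggregation (in the style of Bonawitz et al., not this paper's specific protocol): each pair of clients shares a random mask that one adds and the other subtracts, so individual masked updates look random but the masks cancel in the sum, revealing only the aggregate. All variable names and values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_updates(updates):
    """Pairwise additive masking: for each pair (i, j), client i adds a
    random mask r_ij and client j subtracts it. The server sees only the
    masked updates; the masks cancel when everything is summed."""
    n = len(updates)
    masked = [u.astype(float) for u in updates]  # work on copies
    for i in range(n):
        for j in range(i + 1, n):
            r = rng.normal(size=updates[0].shape)
            masked[i] += r
            masked[j] -= r
    return masked

# toy client updates (invented for illustration)
updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
masked = mask_updates(updates)
# each masked[i] looks random, but sum(masked) equals sum(updates)
```

The point of the construction is that any score computed only from such sums (a marginal difference between aggregates, for instance) never requires the server to see a raw client update.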

Original Abstract

Cross-silo federated learning allows multiple organizations to collaboratively train machine learning models without sharing raw data, but client updates can still leak sensitive information through inference attacks. Secure aggregation protects privacy by hiding individual updates, yet it complicates contribution evaluation, which is critical for fair rewards and detecting low-quality or malicious participants. Existing marginal-contribution methods, such as the Shapley value, are incompatible with secure aggregation, and practical alternatives, such as Leave-One-Out, are crude and rely on self-evaluation. We introduce two marginal-difference contribution scores compatible with secure aggregation. Fair-Private satisfies standard fairness axioms, while Everybody-Else eliminates self-evaluation and provides resistance to manipulation, addressing a largely overlooked vulnerability. We provide theoretical guarantees for fairness, privacy, robustness, and computational efficiency, and evaluate our methods on multiple medical image datasets and CIFAR10 in cross-silo settings. Our scores consistently outperform existing baselines, better approximate Shapley-induced client rankings, and improve downstream model performance as well as misbehavior detection. These results demonstrate that fairness, privacy, robustness, and practical utility can be achieved jointly in federated contribution evaluation, offering a principled solution for real-world cross-silo deployments.

Tags

Federated Learning · Privacy Preservation · Contribution Evaluation · Secure Aggregation · Fairness

arXiv Categories

cs.CR cs.GT cs.LG