Exponential-Family Membership Inference: From LiRA and RMIA to BaVarIA
AI Summary
This work proposes the BaVarIA attack, which unifies LiRA, RMIA, and BASE under a single exponential-family framework and improves membership-inference performance at low shadow-model budgets.
Main Contributions
- Unifies LiRA, RMIA, and BASE under a single exponential-family framework
- Proposes BaVarIA, an attack based on Bayesian variance inference
- Improves attack performance at low shadow-model budgets
Methodology
Existing attacks are analyzed within an exponential-family likelihood-ratio framework, and Bayesian variance inference is carried out with a conjugate normal-inverse-gamma prior over the per-example Gaussian parameters.
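To make the conjugacy concrete, here is a minimal stdlib-only sketch of the standard normal-inverse-gamma posterior update and the resulting Student-t posterior-predictive log-density. The hyperparameters (`mu0`, `kappa0`, `alpha0`, `beta0`) and the example scores are illustrative assumptions, not values from the paper; the update equations are the textbook NIG conjugacy formulas.

```python
import math

def nig_posterior(xs, mu0=0.0, kappa0=1.0, alpha0=1.0, beta0=1.0):
    """Update a normal-inverse-gamma prior NIG(mu0, kappa0, alpha0, beta0)
    with observed shadow-model scores xs (hyperparameters are illustrative)."""
    n = len(xs)
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)  # sum of squared deviations
    kappa_n = kappa0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
    alpha_n = alpha0 + n / 2
    beta_n = beta0 + 0.5 * ss + kappa0 * n * (xbar - mu0) ** 2 / (2 * kappa_n)
    return mu_n, kappa_n, alpha_n, beta_n

def student_t_logpdf(x, mu_n, kappa_n, alpha_n, beta_n):
    """Posterior-predictive log-density: Student-t with nu = 2*alpha_n
    degrees of freedom, location mu_n, squared scale beta_n*(kappa_n+1)/(alpha_n*kappa_n)."""
    nu = 2 * alpha_n
    scale2 = beta_n * (kappa_n + 1) / (alpha_n * kappa_n)
    z2 = (x - mu_n) ** 2 / scale2
    return (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
            - 0.5 * math.log(nu * math.pi * scale2)
            - (nu + 1) / 2 * math.log1p(z2 / nu))
```

A membership score can then be formed as the difference of the IN and OUT predictive log-densities at the target's observed score; the heavy-tailed Student-t keeps this stable even when only a handful of shadow models inform each posterior.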
Original Abstract
Membership inference attacks (MIAs) are becoming standard tools for auditing the privacy of machine learning models. The leading attacks -- LiRA (Carlini et al., 2022) and RMIA (Zarifzadeh et al., 2024) -- appear to use distinct scoring strategies, while the recently proposed BASE (Lassila et al., 2025) was shown to be equivalent to RMIA, making it difficult for practitioners to choose among them. We show that all three are instances of a single exponential-family log-likelihood ratio framework, differing only in their distributional assumptions and the number of parameters estimated per data point. This unification reveals a hierarchy (BASE1-4) that connects RMIA and LiRA as endpoints of a spectrum of increasing model complexity. Within this framework, we identify variance estimation as the key bottleneck at small shadow-model budgets and propose BaVarIA, a Bayesian variance inference attack that replaces threshold-based parameter switching with conjugate normal-inverse-gamma priors. BaVarIA yields a Student-t predictive (BaVarIA-t) or a Gaussian with stabilized variance (BaVarIA-n), providing stable performance without additional hyperparameter tuning. Across 12 datasets and 7 shadow-model budgets, BaVarIA matches or improves upon LiRA and RMIA, with the largest gains in the practically important low-shadow-model and offline regimes.
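The abstract's point about variance estimation being the bottleneck at small shadow-model budgets can be illustrated with a toy Gaussian log-likelihood-ratio scorer. The `var_floor` stabilization below is a deliberately simple stand-in for the paper's Bayesian treatment (it is not BaVarIA itself); all names and values are assumptions for illustration.

```python
import math

def gaussian_logpdf(x, mu, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def gaussian_llr_score(x, in_scores, out_scores, var_floor=0.0):
    """LiRA-style Gaussian log-likelihood-ratio score for a target score x.
    With few shadow models the sample variance can collapse toward zero,
    producing extreme scores; var_floor is a crude stabilization."""
    def fit(xs):
        n = len(xs)
        mu = sum(xs) / n
        var = sum((v - mu) ** 2 for v in xs) / n
        return mu, max(var, var_floor)
    mu_in, var_in = fit(in_scores)
    mu_out, var_out = fit(out_scores)
    return gaussian_logpdf(x, mu_in, var_in) - gaussian_logpdf(x, mu_out, var_out)
```

With only two shadow models per side, two nearly identical IN scores yield a near-zero sample variance and a wildly extreme score; flooring the variance keeps the score in a sane range, which is the failure mode BaVarIA addresses in a principled Bayesian way rather than by thresholds.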