AI Agents relevance: 5/10

Community Concealment from Unsupervised Graph Learning-Based Clustering

Dalyapraz Manatova, Pablo Moriano, L. Jean Camp
arXiv: 2602.12250v1 Published: 2026-02-12 Updated: 2026-02-12

AI Summary

Studies the risk that GNN-based graph clustering exposes group-level privacy, and proposes a perturbation-based strategy for concealing communities.

Main Contributions

  • Identifies two key factors that drive community concealment: connectivity at the community boundary and feature similarity with adjacent communities
  • Proposes a perturbation strategy that conceals a community by rewiring selected edges and modifying node features
  • Shows experimentally that the method outperforms DICE, improving concealment effectiveness
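DICE (Disconnect Internally, Connect Externally) is the baseline the paper compares against: delete edges inside the community to hide, and add edges from its members to outside nodes. The following is a minimal plain-Python sketch assuming an undirected adjacency-set representation; the even budget split and uniform random selection follow the common DICE heuristic and are not necessarily the paper's exact configuration.

```python
import random

def dice(adj, community, budget, seed=0):
    """DICE baseline: Disconnect Internally, Connect Externally.

    `adj` maps each node to a set of neighbors (undirected graph).
    Half the edge budget deletes edges inside the protected community;
    the rest adds edges from members to outside nodes. Illustrative
    sketch only; the 50/50 split is an assumption.
    """
    rng = random.Random(seed)
    adj = {u: set(ns) for u, ns in adj.items()}  # work on a copy
    comm = set(community)
    outside = sorted(n for n in adj if n not in comm)

    # Disconnect internally: sample and delete intra-community edges.
    internal = [(u, v) for u in sorted(comm) for v in sorted(adj[u])
                if v in comm and u < v]
    for u, v in rng.sample(internal, min(budget // 2, len(internal))):
        adj[u].discard(v)
        adj[v].discard(u)

    # Connect externally: sample and add member-to-outside non-edges.
    candidates = [(u, v) for u in sorted(comm) for v in outside
                  if v not in adj[u]]
    for u, v in rng.sample(candidates, min(budget - budget // 2, len(candidates))):
        adj[u].add(v)
        adj[v].add(u)
    return adj
```

Under an identical budget, the paper's method spends these same perturbations more selectively than DICE's uniform sampling.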

Methodology

Based on an analysis of the factors that affect community concealment, the authors design a perturbation strategy that rewires edges and modifies node features so as to reduce the distinctiveness that GNN message passing exploits.
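The two levers above can be sketched in plain Python: rewire boundary edges so the protected community mixes with its neighbors, and blend member features toward those of adjacent communities. The function name `conceal`, the blending coefficient `alpha`, and the random edge selection are illustrative assumptions; the paper's actual edge and feature scoring is not reproduced here.

```python
import random

def conceal(adj, feats, community, edge_budget, alpha=0.5, seed=0):
    """Hedged sketch of the two perturbation levers:
    (1) rewire: drop internal edges and reconnect members to boundary
        neighbors, weakening the community boundary, and
    (2) blend member features toward the mean feature of boundary
        neighbors, shrinking the feature gap that message passing
        would otherwise amplify.
    """
    rng = random.Random(seed)
    adj = {u: set(ns) for u, ns in adj.items()}      # copy graph
    feats = {u: list(f) for u, f in feats.items()}   # copy features
    comm = set(community)

    # Boundary neighbors: outside nodes adjacent to the community.
    boundary = sorted({v for u in comm for v in adj[u] if v not in comm})

    # (1) Rewire: remove an internal edge, add a member-boundary edge.
    internal = [(u, v) for u in sorted(comm) for v in sorted(adj[u])
                if v in comm and u < v]
    rng.shuffle(internal)
    for _ in range(min(edge_budget, len(internal))):
        u, v = internal.pop()
        adj[u].discard(v)
        adj[v].discard(u)
        w = rng.choice(boundary)
        adj[u].add(w)
        adj[w].add(u)

    # (2) Blend features toward the boundary-neighbor mean.
    dim = len(next(iter(feats.values())))
    mean = [sum(feats[b][i] for b in boundary) / len(boundary)
            for i in range(dim)]
    for u in comm:
        feats[u] = [(1 - alpha) * feats[u][i] + alpha * mean[i]
                    for i in range(dim)]
    return adj, feats
```

Both edits target exactly the two factors the analysis identifies: rewiring raises boundary connectivity, and blending raises feature similarity with adjacent communities.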

Original Abstract

Graph neural networks (GNNs) are designed to use attributed graphs to learn representations. Such representations are beneficial in the unsupervised learning of clusters and community detection. Nonetheless, such inference may reveal sensitive groups, clustered systems, or collective behaviors, raising concerns regarding group-level privacy. Community attribution in social and critical infrastructure networks, for example, can expose coordinated asset groups, operational hierarchies, and system dependencies that could be used for profiling or intelligence gathering. We study a defensive setting in which a data publisher (defender) seeks to conceal a community of interest while making limited, utility-aware changes in the network. Our analysis indicates that community concealment is strongly influenced by two quantifiable factors: connectivity at the community boundary and feature similarity between the protected community and adjacent communities. Informed by these findings, we present a perturbation strategy that rewires a set of selected edges and modifies node features to reduce the distinctiveness leveraged by GNN message passing. The proposed method outperforms DICE in our experiments on synthetic benchmarks and real network graphs under identical perturbation budgets. Overall, it achieves median relative concealment improvements of approximately 20-45% across the evaluated settings. These findings demonstrate a mitigation strategy against GNN-based community learning and highlight group-level privacy risks intrinsic to graph learning.

Tags

Graph Neural Networks  Privacy Protection  Community Detection  Adversarial Learning

arXiv Categories

cs.LG cs.CR cs.SI