AI Agents relevance: 9/10

Differential Privacy in Generative AI Agents: Analysis and Optimal Tradeoffs

Ya-Ting Yang, Quanyan Zhu
arXiv: 2603.17902v1 Published: 2026-03-18 Updated: 2026-03-18

AI Summary

Studies the problem of enterprise data privacy leakage in AI agents, proposing a differential-privacy-based analysis framework and optimizing the privacy-utility tradeoff.

Key Contributions

  • Introduces token-level and message-level differential privacy
  • Derives how privacy leakage depends on generation parameters
  • Formulates a privacy-utility design problem and optimizes temperature selection

Methodology

Builds a probabilistic model that treats response generation as a stochastic mechanism mapping prompts and datasets to distributions over token sequences, then applies differential privacy theory to analyze privacy leakage.

Original Abstract

Large language models (LLMs) and AI agents are increasingly integrated into enterprise systems to access internal databases and generate context-aware responses. While such integration improves productivity and decision support, the model outputs may inadvertently reveal sensitive information. Although many prior efforts focus on protecting the privacy of user prompts, relatively few studies consider privacy risks from the enterprise data perspective. Hence, this paper develops a probabilistic framework for analyzing privacy leakage in AI agents based on differential privacy. We model response generation as a stochastic mechanism that maps prompts and datasets to distributions over token sequences. Within this framework, we introduce token-level and message-level differential privacy and derive privacy bounds that relate privacy leakage to generation parameters such as temperature and message length. We further formulate a privacy-utility design problem that characterizes optimal temperature selection.
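The abstract's claim that leakage scales with temperature and message length can be illustrated via the standard exponential-mechanism view of softmax sampling. This is a hedged sketch, not the paper's actual derivation: it assumes that swapping one enterprise record shifts any logit by at most a sensitivity `delta_logit` (an assumed quantity), and uses basic sequential composition over tokens, whereas the paper's bounds may be tighter or differently parameterized.

```python
def token_level_epsilon(delta_logit: float, temperature: float) -> float:
    """Per-token DP bound for sampling from softmax(logits / T).

    Temperature-T softmax sampling is an instance of the exponential
    mechanism: if one record changes any logit by at most delta_logit,
    each sampled token satisfies (2 * delta_logit / T)-DP.
    """
    return 2.0 * delta_logit / temperature


def message_level_epsilon(delta_logit: float, temperature: float,
                          length: int) -> float:
    """Message-level bound via basic composition over `length` tokens."""
    return length * token_level_epsilon(delta_logit, temperature)


# Raising the temperature flattens the output distribution and tightens
# the bound, at the cost of utility -- the tradeoff the paper optimizes.
eps_cold = message_level_epsilon(delta_logit=1.0, temperature=0.7, length=100)
eps_hot = message_level_epsilon(delta_logit=1.0, temperature=1.5, length=100)
assert eps_hot < eps_cold
```

The sketch makes the qualitative structure visible: the privacy budget grows linearly in message length and shrinks as temperature increases, so choosing the temperature trades response quality against the leakage bound.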

Tags

Differential Privacy · AI Agent · Privacy Protection · Generative Models · LLM

arXiv Categories

cs.CR cs.AI