CacheSolidarity: Preventing Prefix Caching Side Channels in Multi-tenant LLM Serving Systems
AI Summary
CacheSolidarity defends against cache side-channel attacks in multi-tenant LLM serving by monitoring cache reuse and applying selective isolation, improving performance over existing defenses.
Key Contributions
- Proposes CacheSolidarity, a system that defends against cache side-channel attacks in LLM serving
- Secures multi-tenant LLM systems without sacrificing performance
- Uses selective isolation to raise cache reuse rates and lower inference latency
Methodology
CacheSolidarity monitors cache reuse across users, flags suspicious sharing, and selectively isolates the affected prefixes, restricting their reuse only when necessary.
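The monitor/flag/isolate policy above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the class name `PrefixCacheGuard`, the cross-user threshold, and the owner-only isolation rule are all assumptions standing in for CacheSolidarity's real suspicion heuristic.

```python
from collections import defaultdict

class PrefixCacheGuard:
    """Hypothetical sketch of CacheSolidarity-style selective isolation.

    Tracks which users reuse which cached prefixes. When a prefix is
    reused by too many distinct non-owner users (a crude stand-in for
    the paper's suspicion heuristic), it is isolated: only its owner
    gets cache hits on it afterwards.
    """

    def __init__(self, max_cross_users=2):
        self.owner = {}                      # prefix -> user who first cached it
        self.cross_users = defaultdict(set)  # prefix -> other users that hit it
        self.isolated = set()                # prefixes restricted to their owner
        self.max_cross_users = max_cross_users

    def lookup(self, user, prefix):
        """Return True on a permitted cache hit, False on a miss."""
        if prefix not in self.owner:
            self.owner[prefix] = user          # miss: cache it for this user
            return False
        if prefix in self.isolated:
            return user == self.owner[prefix]  # isolated: owner-only reuse
        if user != self.owner[prefix]:
            self.cross_users[prefix].add(user)
            if len(self.cross_users[prefix]) > self.max_cross_users:
                self.isolated.add(prefix)      # flag suspicious sharing
        return True
```

Benign sharing (few cross-user hits) keeps full APC reuse; only prefixes that exhibit suspicious cross-user access lose it, which is what lets the system avoid the blanket per-user isolation of prior defenses.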
Original Abstract
Large Language Models (LLMs) rely on optimizations like Automatic Prefix Caching (APC) to accelerate inference. APC works by reusing previously computed states for the beginning part of a request (the prefix) when another request starts with the same text. While APC improves throughput, it introduces timing side channels: cache hits are faster than misses, creating observable latency differences. In multi-tenant systems, attackers can exploit these differences to infer sensitive information, e.g., by incrementally reconstructing another user's request from observed hit/miss patterns. Current defenses take a sledgehammer approach: they disable APC and cache sharing entirely, isolating users and sacrificing efficiency for regular users. This paper presents CacheSolidarity, a system that secures multi-tenant LLM serving systems against APC side channels without sacrificing performance and efficiency. CacheSolidarity monitors cache reuse across users, flags suspicious sharing, and selectively isolates prefixes, restricting their reuse only when necessary. Evaluation shows that CacheSolidarity enables up to 70% higher cache reuse and 30% lower inference latency compared to existing defenses that isolate users. CacheSolidarity's lightweight design demonstrates that security in LLM serving does not have to come at the cost of unnecessarily reduced performance or unbearable overheads.
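The incremental reconstruction attack the abstract describes can be made concrete with a small simulation. Everything here is illustrative: the latency values, the `simulate_latency` oracle, and the token-by-token search loop are assumptions modeling the attack idea, not code from the paper or any real serving system.

```python
def simulate_latency(cache, prefix):
    """Simulated serving latency in ms: cache hits are fast, misses slow."""
    return 5.0 if prefix in cache else 50.0

def reconstruct(cache, alphabet, max_len=8, threshold=20.0):
    """Extend a guessed prefix one character at a time, keeping any
    extension whose latency indicates a cache hit (timing oracle)."""
    guess = ""
    for _ in range(max_len):
        for ch in alphabet:
            if simulate_latency(cache, guess + ch) < threshold:
                guess += ch
                break
        else:
            break  # no extension produced a hit; stop
    return guess

# APC caches every prefix of the victim's request, so each partial
# guess that matches the secret yields a fast (hit) response.
secret = "cafe"
cache = {secret[:i] for i in range(1, len(secret) + 1)}
recovered = reconstruct(cache, "abcdef")
```

Because APC shares cached prefixes across requests, each correct partial guess is observably faster than an incorrect one, letting the attacker recover the victim's text character by character; this is the leak that blanket isolation (and, more selectively, CacheSolidarity) closes.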