Taming OpenClaw: Security Analysis and Mitigation of Autonomous LLM Agent Threats
AI Summary
Analyzes the security threats facing OpenClaw, an autonomous LLM agent, and proposes a lifecycle-based defense framework.
Key Contributions
- Proposes a five-layer lifecycle security framework for analyzing the security threats of autonomous LLM agents.
- Identifies and analyzes in detail multiple novel security threats present in OpenClaw, such as indirect prompt injection.
- Proposes representative defense strategies for each lifecycle stage to strengthen the security of autonomous LLM agents.
Methodology
Analyzes OpenClaw's security threats through case studies and constructs a security framework to guide defense strategies.
Original Abstract
Autonomous Large Language Model (LLM) agents, exemplified by OpenClaw, demonstrate remarkable capabilities in executing complex, long-horizon tasks. However, their tightly coupled instant-messaging interaction paradigm and high-privilege execution capabilities substantially expand the system attack surface. In this paper, we present a comprehensive security threat analysis of OpenClaw. To structure our analysis, we introduce a five-layer lifecycle-oriented security framework that captures key stages of agent operation, i.e., initialization, input, inference, decision, and execution, and systematically examine compound threats across the agent's operational lifecycle, including indirect prompt injection, skill supply chain contamination, memory poisoning, and intent drift. Through detailed case studies on OpenClaw, we demonstrate the prevalence and severity of these threats and analyze the limitations of existing defenses. Our findings reveal critical weaknesses in current point-based defense mechanisms when addressing cross-temporal and multi-stage systemic risks, highlighting the need for holistic security architectures for autonomous LLM agents. Within this framework, we further examine representative defense strategies at each lifecycle stage, including plugin vetting frameworks, context-aware instruction filtering, memory integrity validation protocols, intent verification mechanisms, and capability enforcement architectures.
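The five lifecycle stages named in the abstract (initialization, input, inference, decision, execution) each pair with a defense class (plugin vetting, instruction filtering, intent verification, capability enforcement). A minimal sketch of how such per-stage hooks might compose is shown below; all identifiers, allowlists, and checks here are illustrative assumptions, not the paper's actual mechanisms.

```python
# Illustrative sketch of per-stage defense hooks in an agent lifecycle.
# All names (ALLOWED_PLUGINS, INJECTION_MARKERS, etc.) are hypothetical.

ALLOWED_PLUGINS = {"calendar", "search"}            # plugin vetting allowlist
INJECTION_MARKERS = ("ignore previous", "system:")  # naive instruction filter
ALLOWED_ACTIONS = {"read_file", "send_message"}     # capability allowlist

def initialize(plugins):
    # Initialization stage: vet the skill/plugin supply chain.
    bad = [p for p in plugins if p not in ALLOWED_PLUGINS]
    if bad:
        raise PermissionError(f"unvetted plugins: {bad}")
    return list(plugins)

def filter_input(message):
    # Input stage: a toy stand-in for context-aware instruction filtering.
    lowered = message.lower()
    if any(marker in lowered for marker in INJECTION_MARKERS):
        raise ValueError("possible indirect prompt injection")
    return message

def verify_intent(user_goal, planned_action):
    # Decision stage: planned action must be allowlisted and tied to a goal.
    return user_goal is not None and planned_action in ALLOWED_ACTIONS

def execute(action, verified):
    # Execution stage: capability enforcement gates every action.
    if not verified or action not in ALLOWED_ACTIONS:
        raise PermissionError(f"blocked action: {action}")
    return f"executed {action}"
```

In this sketch each stage fails closed: an unvetted plugin, a suspicious input, or an unverified action halts the pipeline rather than degrading silently, which is the property point-based defenses (criticized in the abstract) lack across stages.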