AI Agents Relevance: 9/10

Malicious Or Not: Adding Repository Context to Agent Skill Classification

Florian Holzbauer, David Schmidt, Gabriel Gegenhuber, Sebastian Schrittwieser, Johanna Ullrich
arXiv: 2603.16572v1 Published: 2026-03-17 Updated: 2026-03-17

AI Summary

The paper analyzes the security of the AI agent skill ecosystem, proposes a new method that lowers the false-positive rate for malicious skill classification, and uncovers a new attack vector.

Main Contributions

  • Proposes a method for analyzing the maliciousness of agent skills based on repository context
  • Substantially reduces the false-positive rate for malicious skills
  • Uncovers a novel attack vector: hijacking skills hosted on abandoned GitHub repositories

Methodology

Collects 238,180 unique agent skills from three major distribution platforms and GitHub, analyzes them by combining security scanners with GitHub repository information, and compares each skill's description against the content of the repository it is embedded in to identify potential risks.
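The description-vs-repository comparison can be illustrated with a minimal sketch. This is not the paper's actual pipeline: the function names and the use of Jaccard token overlap as a congruence score are assumptions made for illustration only.

```python
import re


def tokenize(text: str) -> set[str]:
    """Lowercase word tokens, ignoring very short words."""
    return {w for w in re.findall(r"[a-z0-9]+", text.lower()) if len(w) > 2}


def congruence(skill_description: str, repo_readme: str) -> float:
    """Illustrative congruence score between a skill's SKILL.md
    description and the README of its host repository: Jaccard
    similarity over word tokens (assumed metric, not the paper's)."""
    a, b = tokenize(skill_description), tokenize(repo_readme)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)
```

A low score flags skills whose advertised behavior diverges from the surrounding repository, which could then be queued for closer inspection rather than immediately marked malicious.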

Original Abstract

Agent skills extend local AI agents, such as Claude Code or Open Claw, with additional functionality, and their popularity has led to the emergence of dedicated skill marketplaces, similar to app stores for mobile applications. Simultaneously, automated skill scanners were introduced that analyze the skill description available in SKILL.md to verify benign behavior. The results for individual marketplaces mark up to 46.8% of skills as malicious. In this paper, we present the largest empirical security analysis of the AI agent skill ecosystem, questioning this high rate of malicious classifications. To this end, we collect 238,180 unique skills from three major distribution platforms and GitHub to systematically analyze their type and behavior. This approach substantially reduces the share of skills flagged as non-benign by security scanners to only 0.52%, which remain in maliciously flagged repositories. Consequently, our methodology substantially reduces false positives and provides a more robust view of the ecosystem's current risk surface. Beyond that, we extend the security analysis from the mere investigation of the skill description to a comparison of its congruence with the GitHub repository the skill is embedded in, providing additional context. Furthermore, our analysis uncovers several previously undocumented real-world attack vectors, namely the hijacking of skills hosted on abandoned GitHub repositories.

Tags

AI Agent, Security Analysis, Malicious Skill Detection, GitHub

arXiv Categories

cs.CR cs.AI