AI Agents Already Hijacked: ClawHub Malicious Skills Turn 10K Agents Into Crypto Mining Botnet

AI Agents Already Hijacked: ClawHub Malicious Skills Turn 10K Agents Into Crypto Mining Botnet

Zero Exploits, 10K Compromised

While we’re discussing what AI Agents can do, the security community is already discussing what AI Agents can be made to do by others.

Manifold security researcher Aks Sharma discovered 30 malicious Skills on ClawHub (the AI Agent skill marketplace) that can turn installed AI Agents into crypto mining botnets. Before being detected by the security community, these malicious Skills had already received over 10,000 downloads.

The entire attack process requires zero exploits — just publish seemingly useful Skills and wait for Agent users to install them.

Attack Chain Breakdown

What makes this attack path so alarming is its simplicity:

1. Attacker writes malicious Skill (disguised as a useful tool)

2. Publish to ClawHub (skill marketplace, low review threshold)

3. Users install Skill ("this tool looks good")

4. Skill executes in the Agent runtime environment

5. Agent is quietly connected to a mining pool, consuming user compute

6. 10K downloads = 10K compromised Agents

This is not a traditional software supply chain attack — it doesn’t require infiltrating infrastructure, social engineering phishing, or zero-day vulnerabilities. It simply exploits the trust mechanism of the Agent ecosystem for Skills.

Bigger Picture: AI Agent Security Is Sounding the Alarm All Around

The ClawHub incident is not an isolated security issue. Over the past few weeks, security incidents in the AI Agent ecosystem have been erupting densely:

IncidentTimeImpact
ClawHub Malicious SkillsLate April30 malicious Skills, 10K+ downloads
LiteLLM CVE-2026-42208April 29SQL injection, exploited within 36 hours, exposing AI gateway databases and cloud credentials
Microsoft Entra ID AI Agent privilege escalationApril 30Agent roles can escalate privileges, potentially enabling tenant-wide takeovers
AgentFlow CVE-2026-7466April 29User-controlled Pipeline path parameter leads to arbitrary code execution
AI coding Agent production credentials uncontrolledApril 306 vulnerabilities, 4 platforms, 9 months — Agents access production with ungoverned credentials

Root of the Problem

The core contradiction in AI Agent security is: Agents are designed for autonomous action, but security infrastructure was designed for “human-in-the-loop” scenarios.

Specifically:

  1. Skill marketplace review lags behind release speed: ClawHub adds hundreds of new Skills daily, manual review can’t keep up
  2. Agent runtime permission boundaries are blurry: Why does a “weather query Skill” need filesystem access?
  3. Credential management is missing: AI coding Agents use hardcoded API keys to connect to production databases, nobody audits
  4. Security toolchains don’t match: Traditional SAST/DAST tools can’t analyze Agent dynamic behavior

Action Items

For teams using or planning to deploy AI Agents:

Immediate Actions

  • Audit installed Skills: Check Skills from ClawHub, GitHub and other sources, remove those from unknown origins
  • Restrict Agent runtime permissions: Use least privilege principle, don’t give Agents unnecessary filesystem/network access
  • Rotate Agent credentials: Treat all API keys and database passwords used by Agents as compromised

Medium-Term Building

  • Establish Skill review process: Conduct code review before installing third-party Skills
  • Deploy Agent behavior monitoring: Record all tool calls and API accesses by Agents, build anomaly detection
  • Isolate Agent runtime environments: Use containers or sandboxes to limit Agent impact scope

Long-Term Strategy

  • Push for Agent security standards: The industry needs an Agent security guide similar to OWASP Top 10
  • Establish Skill signing mechanisms: Like code signing, ensure Skill source is trusted and untampered

The security problem of the AI Agent era is no longer “if it will happen” but “how much has already happened.” The ClawHub incident is just the tip of the iceberg — when autonomous software becomes the norm, security infrastructure must keep up.