Malware

Malicious DeepSeek-Claw AI Skill Delivers Remcos RAT and GhostLoader in Agentic AI Supply Chain Attack

dark6 7 May 2026
Read Time:3 Minute, 52 Second

Security researchers at Zscaler ThreatLabZ have uncovered a sophisticated malware campaign targeting developers and AI engineers who use the OpenClaw agentic AI framework. The attack involves a malicious skill disguised as a legitimate DeepSeek integration — and once executed within an automated AI workflow, it silently deploys a full remote access trojan (RAT) on Windows systems or a credential-harvesting stealerware suite on macOS and Linux.

What Is OpenClaw?

OpenClaw (formerly known as Clawdbot and Moltbot) is an open-source framework designed to enable AI agents to carry out complex, high-privilege tasks on local systems. It supports plugins called “skills” — modular packages that extend the agent’s capabilities, similar to plugins in a software ecosystem. Because OpenClaw agents typically run with elevated privileges to perform their intended tasks, a malicious skill is effectively granted the same level of access.

The DeepSeek-Claw Deception

The threat actor published a fake skill named DeepSeek-Claw on GitHub, presenting it as a legitimate integration between OpenClaw and DeepSeek’s AI models. The packaging and documentation were crafted to appear authentic, targeting developers who routinely pull new skills into automated pipelines without extensive vetting.

The attack’s key innovation was hiding malicious commands inside the skill’s SKILL.md file — an instruction file that AI agents parse to understand how to use the skill. By embedding poisoned instructions in this file, the attacker bypassed traditional phishing and social engineering, instead exploiting the automated, trust-first nature of agentic AI workflows.

Windows Attack Chain: Remcos RAT via DLL Sideloading

On Windows systems, the malicious SKILL.md triggered a hidden PowerShell command that silently downloaded a remote Windows Installer package from an attacker-controlled server. That installer dropped two files onto the system:

  • A genuine, digitally signed GoToMeeting executable
  • A malicious DLL disguised as a legitimate GoToMeeting dependency

When the trusted application ran, it loaded the fake DLL instead — a technique known as DLL sideloading. The malicious DLL then patched key Windows security tools in memory to blind them, before decrypting and launching Remcos RAT. Remcos opened an encrypted command-and-control channel back to the attacker, granting persistent, stealthy remote access to the compromised system.

macOS and Linux Attack Chain: GhostLoader Credential Theft

For macOS and Linux targets, the attack path was equally sophisticated. A heavily obfuscated Node.js file was buried inside npm lifecycle scripts within the skill package. When the install command ran — as it does automatically during skill installation — it silently executed and dropped GhostLoader onto the system.

Once active, GhostLoader performed a comprehensive sweep of the host for valuable data:

  • macOS Keychain data (stored passwords, certificates, keys)
  • SSH private keys
  • Cryptocurrency wallet files
  • Cloud provider API tokens (AWS, GCP, Azure)
  • Browser-stored credentials

All exfiltrated data was transmitted back to attacker-controlled servers over encrypted channels.

Why Agentic AI Pipelines Are a Growing Attack Surface

This campaign highlights a critical and underappreciated security risk: agentic AI pipelines are inherently high-trust environments. AI agents are designed to act autonomously and execute instructions with minimal human oversight — precisely the properties that make them effective at automation, and precisely the properties that make them dangerous when manipulated.

Traditional security controls like email filtering, browser sandboxing, and endpoint detection are poorly positioned to intercept attacks that arrive through AI skill repositories. The attack surface mirrors supply chain attacks targeting npm and PyPI packages, but with the added challenge that AI agents execute with even broader system permissions.

Zscaler analysts note that as AI agents become standard components of development pipelines, supply chain poisoning through fake skills is expected to become an increasingly common attack vector.

Indicators of Compromise

  • Presence of DeepSeek-Claw skill in OpenClaw skills directory
  • Unexpected GoToMeeting executable in non-standard paths
  • PowerShell executions originating from AI agent processes
  • Outbound connections to unfamiliar IP addresses from node or python AI agent processes
  • Unexpected npm install operations during AI agent skill loading

Recommendations for AI and Development Teams

  • Audit installed skills: Review all OpenClaw skills currently installed and verify their provenance against official sources
  • Sandbox AI agent environments: Run AI agents in restricted, sandboxed environments with limited filesystem and network access
  • Verify skill packages before installation: Inspect SKILL.md files and any lifecycle scripts in skill packages before executing them
  • Apply least-privilege principles: AI agents should not run with administrator or root privileges unless strictly necessary
  • Monitor for anomalous behavior: Implement EDR rules to flag unusual network connections and file writes originating from AI agent processes

The OpenClaw campaign is a preview of what security teams will increasingly face as agentic AI becomes embedded in software development workflows. The time to establish security guardrails for AI agent environments is now — before the threat matures further.

Leave a Reply

💬 [[ unisciti alla discussione! ]]


Se vuoi commentare su Malicious DeepSeek-Claw AI Skill Delivers Remcos RAT and GhostLoader in Agentic AI Supply Chain Attack, utilizza la discussione sul Forum.
Condividi esempi, IOCs o tecniche di detection efficaci nel nostro 👉 forum community