An active and wide-ranging malware distribution campaign is abusing two prominent AI platforms — Hugging Face and ClawHub — to deliver trojans, cryptominers, and infostealers disguised as legitimate AI tools and agent extensions. Documented by Acronis Threat Research Unit (TRU), the campaign marks a significant evolution in supply chain attacks, shifting from traditional software repositories to the rapidly growing and less-scrutinized AI ecosystem.
575+ Malicious Skills on ClawHub’s OpenClaw Ecosystem
Within the OpenClaw ecosystem distributed through ClawHub, Acronis TRU identified 575 malicious skills published across 13 developer accounts. Two threat actors dominate the campaign:
- hightower6eu — responsible for 334 malicious skills (58% of the total)
- sakaen736jih — responsible for 199 skills (34.6%)
These trojanized skills masquerade as useful tools — such as a YouTube transcript summarizer — while secretly instructing users to download password-protected archives or execute encoded commands. Because OpenClaw agents are designed to act autonomously based on instructions embedded in skill definitions, attackers can effectively turn these AI agents into unwitting malware delivery mechanisms, dramatically expanding attack impact without requiring direct user action.
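The download-and-execute lures described above can be hunted for with simple static heuristics. The sketch below is illustrative only — the patterns, thresholds, and the `flag_skill_text` helper are assumptions of this article, not Acronis TRU's actual detection logic:

```python
import base64
import re

# Hypothetical heuristics for auditing skill-definition text; illustrative,
# not a reproduction of any vendor's detection rules.
SUSPICIOUS_PATTERNS = [
    r"curl\s+-[a-zA-Z]*s",          # silent downloads
    r"Invoke-WebRequest|\biwr\b",   # PowerShell download cmdlets
    r"password[- ]protected",       # archive-extraction lures
    r"base64\s+-d|FromBase64String",
]

def flag_skill_text(text: str) -> list[str]:
    """Return the reasons a skill definition looks suspicious (empty if none)."""
    reasons = [p for p in SUSPICIOUS_PATTERNS if re.search(p, text, re.I)]
    # Long base64-looking tokens often conceal encoded commands.
    for token in re.findall(r"[A-Za-z0-9+/=]{40,}", text):
        try:
            decoded = base64.b64decode(token, validate=True)
        except Exception:
            continue
        if b"http" in decoded.lower() or b"curl" in decoded.lower():
            reasons.append("decodes to a download command")
    return reasons
```

A benign summarizer prompt produces no flags, while a skill that smuggles an encoded `curl ... | sh` one-liner is caught by the base64 check.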
Indirect Prompt Injection: A New Threat Vector
A critical technique observed across the ClawHub campaign is indirect prompt injection, which embeds hidden, malicious instructions within skill files that AI agents read and execute on behalf of users. Unlike traditional phishing, the victim never has to click a suspicious link — the AI agent itself carries out the attacker’s instructions, making this one of the most insidious new attack vectors to emerge from the agentic AI revolution.
Multi-Platform Payloads Targeting Windows and macOS
For Windows targets, payloads were detected as trojans packed with VMProtect — a commercial obfuscation tool commonly used to hinder analysis. A second Windows payload used a 30-byte XOR key to decrypt strings at runtime, dynamically resolved NT APIs, and performed in-memory process injection into explorer.exe. The injected code established AES-encrypted C2 communication over HTTPS, downloaded a cryptominer disguised as svchost.exe, and maintained persistence via scheduled tasks and Windows Defender exclusion path manipulation.
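Repeating-key XOR of the kind described is trivial for an analyst to replicate once the key is recovered, since the same routine both encrypts and decrypts. A minimal sketch — the 30-byte key below is made up, not one extracted from the sample:

```python
from itertools import cycle

def xor_decrypt(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR: involutive, so one routine encrypts and decrypts."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

# Illustrative only: a placeholder 30-byte key, not the campaign's actual key.
KEY = bytes(range(30))

ciphertext = xor_decrypt(b"svchost.exe", KEY)
print(xor_decrypt(ciphertext, KEY))  # b'svchost.exe'
```

Applying the routine twice with the same key recovers the plaintext, which is why spotting the key blob in the binary is usually enough to dump every encrypted string at once.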
For macOS targets, a base64-encoded command connects to an external IP (91.92.242[.]30) and silently downloads and executes AMOS Stealer — a macOS-focused information stealer commonly sold as malware-as-a-service (MaaS) through Telegram and underground forums.
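Base64-encoded one-liners like this one can be decoded offline for inspection before anything runs. The encoded string below is a harmless stand-in, not the actual AMOS Stealer stage:

```python
import base64

def decode_command(encoded: str) -> str:
    """Decode a base64 shell one-liner for offline inspection.

    Never pipe the result to a shell; read it.
    """
    return base64.b64decode(encoded).decode("utf-8", errors="replace")

# Harmless stand-in for the real encoded stage (which fetched a payload
# from 91.92.242[.]30); this example merely echoes text.
sample = base64.b64encode(b'echo "inspect me"').decode()
print(decode_command(sample))  # echo "inspect me"
```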
Hugging Face Abused as Payload Staging Infrastructure
On Hugging Face, which hosts over one million machine learning models, Acronis TRU identified repositories being used as multi-stage infection chain staging points, hosting payloads across Windows, Linux, and Android. Two notable campaigns illustrate this abuse:
The ITHKRPAW Campaign (targeting Vietnamese financial sector organizations) used a malicious LNK file to invoke Cloudflare Workers, which served a PowerShell dropper that fetched a payload from a Hugging Face dataset repository while opening a decoy cat image to mask activity. Researchers assess with moderate confidence that the PowerShell script was LLM-generated, based on embedded Vietnamese-language comments in the code.
The FAKESECURITY Campaign used a batch script containing an encoded PowerShell blob that downloaded a heavily obfuscated secondary batch script from a Hugging Face repository. After stripping the Mark-of-the-Web to bypass Windows SmartScreen, the malware injected shellcode into explorer.exe and dropped a file masquerading as Windows Security.
Defensive Recommendations
Organizations and developers should treat AI models, datasets, and agent extensions as untrusted inputs requiring the same validation applied to any third-party code. Specific steps include:
- Audit installed OpenClaw skills for encoded commands or external download instructions
- Monitor for unexpected process injection into explorer.exe
- Block known malicious indicators: 91.92.242[.]30 and velvet-parrot[.]com
- Restrict Windows Defender exclusion path modifications via Group Policy
- Treat Hugging Face repositories referenced in scripts as potentially untrusted third-party code
- Implement network monitoring for anomalous outbound connections from AI agent processes
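The indicator-blocking step can be approximated today with a simple sweep over proxy or DNS logs. A sketch using the two indicators published for this campaign (defanged above as 91.92.242[.]30 and velvet-parrot[.]com) — the log lines are invented for illustration:

```python
# The campaign's two published network indicators, refanged for matching.
IOCS = ("91.92.242.30", "velvet-parrot.com")

def flag_log_lines(lines: list[str]) -> list[str]:
    """Return log lines that reference a known-bad indicator."""
    return [line for line in lines if any(ioc in line for ioc in IOCS)]

# Invented proxy-log entries for illustration.
logs = [
    "CONNECT velvet-parrot.com:443 from agent-host",
    "GET https://huggingface.co/models from dev-laptop",
]
print(flag_log_lines(logs))  # flags only the first entry
```

Substring matching like this is a stopgap; production blocking belongs in the firewall, DNS resolver, or EDR, fed by the same indicator list.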
This campaign signals a broader shift in the threat landscape: as AI platforms become ubiquitous in developer and enterprise workflows, they are increasingly becoming high-value targets for supply chain attacks. The combination of trusted brand names, massive user bases, and minimal vetting creates fertile ground for threat actors willing to invest in this emerging attack surface.