By Lyrie Threat Intelligence·5/8/2026

TL;DR

In March 2026, Zscaler ThreatLabz uncovered a malware campaign that weaponized the OpenClaw AI agent framework's plugin ecosystem to deliver Remcos RAT on Windows and GhostLoader on macOS/Linux — all through a poisoned SKILL.md file in a fake "DeepSeek-Claw" skill. The attack required no phishing, no exploit, and no user interaction beyond what an autonomous AI agent would do by itself: parse the instruction file and execute. This is not a theoretical threat. It is the first confirmed, fully-documented campaign to turn a framework's own trust model against its users — and it is a blueprint every agentic AI platform will face from here forward.


Background: Why SKILL.md Is Now a Threat Surface

The OpenClaw framework — formerly known as Clawdbot and Moltbot before its rebrand — is one of the most widely deployed open-source AI agent runtimes in active development environments. Its architecture centers on a modular "skill" system: discrete capability packages that agents can download, install, and execute to extend what they can do. Each skill ships with a SKILL.md file — a structured Markdown document that tells the agent (and the human) how to install and invoke the capability.

That instruction file is precisely the attack surface this campaign exploited.

The threat actor registered a GitHub repository named deepseek-claw under the account Needvainverter93, mimicking the naming convention of legitimate community-built DeepSeek integration skills. The repository looked real. The README looked real. The SKILL.md looked real — until you read the embedded installation commands.

Here is why this matters structurally: in a properly functioning agentic workflow, the agent reads the SKILL.md and executes its instructions autonomously. That is the whole point of the system — to reduce human friction. The attacker did not need to social-engineer anyone. They needed to social-engineer the agent, and the agent was designed to do exactly what the SKILL.md told it to.


Technical Analysis: Two Chains, One Instruction File

The Windows Path — Remcos RAT via DLL Sideloading

On Windows, the poisoned SKILL.md embedded a PowerShell one-liner that downloaded and silently executed a remote MSI installer hosted at hxxps://cloudcraftshub[.]com/api. The command blended the download with an innocuous-looking comment (& rem DeepSeek Claw) so that, on cursory inspection, it read like a legitimate install sequence.

The MSI package itself was tactically elegant. It contained only two files:

  • G2M.exe — a genuine, digitally signed GoToMeeting binary from LogMeIn, Inc.
  • g2m.dll — a malicious DLL placed in the same application directory

By co-locating the malicious DLL with a legitimate, signed executable, the threat actor exploited DLL search-order hijacking. When G2M.exe launched and attempted to load its expected dependency, Windows resolved g2m.dll from the local directory first — loading the malicious copy instead. The signed binary served as the unwitting host; the malicious DLL rode in on its reputation.

The DLL itself is a hardened shellcode loader built with multiple layers designed to survive analysis environments:

Telemetry suppression (EDR blinding):

  • ETW patching: The loader locates ntdll!EtwEventWrite and overwrites its prologue with ret 14h, silencing all ETW-based event logging for process and thread activity — blinding most modern EDR telemetry pipelines at the source.
  • AMSI bypass: Patches amsi!AmsiScanBuffer to return AMSI_RESULT_CLEAN (0), ensuring the decrypted payload bypasses in-memory scanning entirely.
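A defensive integrity check for the ETW patch above can be sketched by comparing the function's first bytes against the reported `ret 14h` stub (opcode bytes C2 14 00). This sketch operates on raw prologue bytes; on a live host you would read them from the loaded ntdll image (e.g. via ctypes on Windows):

```python
# The reported patch overwrites the prologue of ntdll!EtwEventWrite with
# `ret 14h` (bytes C2 14 00): return immediately, popping 0x14 bytes of
# stack arguments, so no ETW event is ever emitted. An integrity check
# can flag the patch by comparing the live prologue against that stub.
RET_14H_STUB = bytes([0xC2, 0x14, 0x00])

def etw_write_patched(prologue: bytes) -> bool:
    # True if the function begins with the tell-tale patch bytes.
    return prologue.startswith(RET_14H_STUB)
```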

Anti-debugging:

  • Queries the PEB's BeingDebugged and NtGlobalFlag fields to detect attached debuggers
  • Measures Sleep(100) execution time — sandboxes that accelerate sleep calls reveal themselves if elapsed < ~90ms
  • Times a benign API call (RegOpenKeyExA); values >21ms suggest hardware breakpoints or hypervisor emulation
  • Scans its own memory pages byte-by-byte for 0xCC (INT 3 opcode) to detect software breakpoints
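The sleep-acceleration check in the second bullet can be sketched in a few lines; the 100 ms sleep and ~90 ms floor are the thresholds reported for the sample, and the logic generalizes to any fast-forwarding sandbox:

```python
import time

# Sketch of the loader's sleep-acceleration check (thresholds per the
# report): a sandbox that fast-forwards Sleep(100) completes it in far
# less than 100 ms of wall-clock time, betraying the hook.
def sleep_skew_detected(ms: int = 100, floor_ms: float = 90.0) -> bool:
    start = time.perf_counter()
    time.sleep(ms / 1000.0)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return elapsed_ms < floor_ms  # True means the sleep was shortened
```

On an unmanipulated host the function returns False, because the interpreter genuinely waits out the full interval.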

Anti-virtualization and anti-analysis:

  • XOR-decrypts a blocklist at runtime and calls CreateToolhelp32Snapshot to hunt for analysis tool processes: ida.exe, ida64.exe, ollydbg.exe, x64dbg.exe, procmon.exe, procexp.exe, processhacker.exe, sysmon.exe, wireshark.exe, fiddler.exe, vmtoolsd.exe
  • Checks OpenMutexA for VMware, VBoxTrayIPC, and Sandboxie mutex artifacts

If any of these conditions are true, the loader terminates immediately — producing a clean execution in analysis environments while activating fully on real developer machines.
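The blocklist mechanism can be sketched as follows. The single-byte XOR key and the shortened tool list are illustrative stand-ins; the real sample's key and encoding scheme may differ:

```python
# Sketch of the loader's string protection: the analysis-tool blocklist
# ships XOR-encrypted and is only decoded in memory immediately before
# the CreateToolhelp32Snapshot process sweep, so the tool names never
# appear as plaintext strings in the binary.
KEY = 0x5A  # illustrative key, not recovered from the sample

def xor_bytes(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

# Stored in encrypted form, mirroring how the loader hides its strings.
ENCRYPTED_BLOCKLIST = [xor_bytes(n.encode(), KEY)
                       for n in ("ida.exe", "x64dbg.exe", "wireshark.exe")]

def analysis_tool_running(process_names) -> bool:
    # Decode the blocklist at call time, then compare case-insensitively
    # against the observed process names.
    blocklist = {xor_bytes(blob, KEY).decode() for blob in ENCRYPTED_BLOCKLIST}
    return any(name.lower() in blocklist for name in process_names)
```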

Payload decryption and execution: The Remcos RAT payload is encrypted using the TEA algorithm in CBC mode with a 128-bit key, stored in the DLL's data section. API names are resolved dynamically via PEB walking to avoid static import analysis. Once decrypted, Remcos RAT launches in stealth mode, establishes a TLS-encrypted C2 channel over TCP, and begins:

  • Logging all keystrokes
  • Stealing clipboard contents in real-time
  • Harvesting browser session cookies from local SQLite databases to bypass MFA
  • Opening a persistent interactive reverse shell for arbitrary command execution

The C2 configuration is encrypted with RC4 and embedded in a resource named SETTINGS (RT_RCDATA). The TLS channel uses NIST P-256 certificates with hardcoded CA and client certificates: a private PKI whose issuer appears on no public CA list, giving reputation-based TLS inspection nothing to flag.
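The TEA-CBC payload decryption described above can be sketched with standard 32-round TEA over 8-byte blocks. Little-endian word order and the roundtrip key below are assumptions for illustration; the report does not publish the sample's exact parameters:

```python
import struct

MASK, DELTA = 0xFFFFFFFF, 0x9E3779B9  # standard TEA constants

def _tea_encrypt_block(v0, v1, k):
    # 32-round TEA encryption of one 64-bit block (included only to
    # demonstrate the roundtrip; the sample ships ciphertext).
    s = 0
    for _ in range(32):
        s = (s + DELTA) & MASK
        v0 = (v0 + (((v1 << 4) + k[0]) ^ (v1 + s) ^ ((v1 >> 5) + k[1]))) & MASK
        v1 = (v1 + (((v0 << 4) + k[2]) ^ (v0 + s) ^ ((v0 >> 5) + k[3]))) & MASK
    return v0, v1

def _tea_decrypt_block(v0, v1, k):
    # Exact inverse: run the rounds backwards from the final sum.
    s = (DELTA * 32) & MASK
    for _ in range(32):
        v1 = (v1 - (((v0 << 4) + k[2]) ^ (v0 + s) ^ ((v0 >> 5) + k[3]))) & MASK
        v0 = (v0 - (((v1 << 4) + k[0]) ^ (v1 + s) ^ ((v1 >> 5) + k[1]))) & MASK
        s = (s - DELTA) & MASK
    return v0, v1

def tea_cbc_encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    k = struct.unpack("<4I", key)
    prev, out = iv, bytearray()
    for i in range(0, len(plaintext), 8):
        block = bytes(p ^ c for p, c in zip(plaintext[i:i + 8], prev))
        ct = struct.pack("<2I", *_tea_encrypt_block(*struct.unpack("<2I", block), k))
        out += ct
        prev = ct
    return bytes(out)

def tea_cbc_decrypt(ciphertext: bytes, key: bytes, iv: bytes) -> bytes:
    # CBC chaining over 8-byte TEA blocks with a 128-bit (16-byte) key.
    k = struct.unpack("<4I", key)
    prev, out = iv, bytearray()
    for i in range(0, len(ciphertext), 8):
        block = ciphertext[i:i + 8]
        plain = struct.pack("<2I", *_tea_decrypt_block(*struct.unpack("<2I", block), k))
        out += bytes(p ^ c for p, c in zip(plain, prev))
        prev = block
    return bytes(out)
```

Combined with PEB-walked API resolution, this keeps the Remcos payload invisible to static scanners until the instant of decryption.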

The Cross-Platform Path — GhostLoader

For macOS, Linux, and Windows users who triggered the manual installation instructions (or who ran in non-Windows AI agent environments), the SKILL.md delivered a different weapon: GhostLoader, embedded as a heavily obfuscated Node.js file inside npm lifecycle scripts (postinstall hooks).

When npm install ran as part of the skill setup, the lifecycle script silently dropped GhostLoader onto the host. On macOS and Linux, the malware also displayed fake system password prompts — social engineering layered on top of technical compromise — to harvest credentials the malware couldn't reach programmatically.

Once active, GhostLoader conducted a systematic sweep of developer-specific high-value targets:

  • macOS Keychain — all stored credentials and secrets
  • SSH private keys from ~/.ssh/
  • Cryptocurrency wallet files from common wallet storage paths
  • Cloud API tokens — AWS credentials, GCP service account keys, Azure tokens
  • Browser session cookies for active authenticated sessions

All exfiltrated data was transmitted to attacker-controlled servers. The reach is not limited to the compromised machine: stolen SSH keys and cloud tokens provide direct access to production infrastructure, CI/CD pipelines, and cloud environments at scale.


The Structural Problem: Autonomous Trust in Agentic Pipelines

What makes this campaign categorically different from traditional supply chain attacks is the autonomous execution path.

In an npm typosquatting attack, a human must type the wrong package name. With a malicious VS Code extension, a human must click install. In both cases there is a human in the loop who can, at least in theory, notice something wrong.

In the OpenClaw skill attack, the AI agent itself is the installer. The agent reads the SKILL.md, interprets the PowerShell command as a legitimate installation step, and executes it — without prompting the user, without logging what it's about to do, and without any mechanism to distinguish a malicious instruction from a real one. The agent was operating exactly as designed.

This is the core threat model of the agentic AI era: when you give an AI agent the ability to execute code on your behalf, any instruction file it trusts becomes a potential attack surface. The attacker does not need to compromise the agent — they just need to be upstream of it.

HiddenLayer's 2026 AI Threat Landscape Report, published concurrently, found that autonomous agents now account for 1 in 8 reported AI breaches — a number that was effectively zero eighteen months ago.


IOCs

| Type | Indicator | Description |
|------|-----------|-------------|
| MD5 | 1c267cab0a800a7b2d598bc1b112d5ce | Malicious "DeepSeek-Claw" OpenClaw skill package |
| MD5 | 2a5f619c966ef79f4586a433e3d5e7ba | MSI installer dropped by SKILL.md PowerShell command |
| MD5 | cc1af839a956c8e2bf8e721f5d3b7373 | Shellcode loader (g2m.dll) |
| MD5 | 2c4b7c8b48e6b4e5f3e8854f2abfedb5 | Remcos RAT payload (decrypted) |
| URL | hxxps://cloudcraftshub[.]com/api | MSI download URL (C2-adjacent infrastructure) |
| URL | hxxp://dropras[.]xyz/ | Secondary MSI drop URL |
| GitHub | https://github.com/Needvainverter93/deepseek-claw | Malicious repository (now flagged/taken down) |


Lyrie Take

This is a watershed moment for agentic AI security, and it will not be the last campaign of this type. The economics favor the attacker: the cost of publishing a convincing fake skill is near-zero; the potential return — access to a developer's entire cloud environment — is enormous. As AI agents become the default way developers bootstrap new capabilities, the skill/plugin registry becomes the most dangerous unmonitored supply chain in enterprise infrastructure.

What makes this technically notable beyond the payload sophistication is the bifurcation architecture. The SKILL.md was designed to deliver different weapons to different operating systems, maximizing coverage while minimizing detection surface on any single platform. This level of operational awareness — building for Windows EDR evasion alongside macOS/Linux developer credential theft — indicates a threat actor with serious resources and real familiarity with how AI developer tooling actually works.

The ETW patching and AMSI bypass combination is not novel — it has been a staple of commercial RAT loaders for two years. What is novel is seeing it deployed via an AI agent instruction file as the delivery mechanism. The sophistication bar for AI-native supply chain attacks just moved.


Defender Playbook

For developer teams and AI agent operators:

1. Treat SKILL.md files as executable code. Any instruction file that an AI agent parses and acts upon is code, regardless of its extension. Apply the same review process to SKILL.md that you would apply to a Makefile or shell script.

2. Sandbox AI agent execution environments. Agents that install skills should operate in isolated sandboxes — containers or VMs with no persistent credentials, no access to cloud API token files, and no SSH keys. Treat the skill installation step as an untrusted process.

3. Block unsigned PowerShell execution at the policy level. The initial infection chain required msiexec launched via PowerShell. WDAC (Windows Defender Application Control) policies and PowerShell Constrained Language Mode would have blocked this without additional tooling.

4. Audit npm lifecycle scripts before execution. preinstall/postinstall hooks in npm packages should be reviewed before running. Use --ignore-scripts for packages from unknown publishers; enable --dry-run audit steps in CI.

5. Deploy behavioral monitoring on agent-spawned processes. ETW consumers and Sysmon rules that alert on msiexec spawned by powershell.exe spawned by non-standard parent processes would catch this chain at the first stage.
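The core predicate behind point 5 can be sketched over process-creation records. Field names mirror Sysmon Event ID 1's Image/ParentImage; a production rule would also inspect the grandparent process (the AI agent runtime) and the command line:

```python
# Sketch for playbook item 5: flag msiexec spawned by PowerShell, the
# first observable stage of this infection chain. Comparison is done on
# basenames so full paths from either schema work.
def suspicious_spawn(image: str, parent_image: str) -> bool:
    child = image.lower().replace("/", "\\").rsplit("\\", 1)[-1]
    parent = parent_image.lower().replace("/", "\\").rsplit("\\", 1)[-1]
    return child == "msiexec.exe" and parent in ("powershell.exe", "pwsh.exe")
```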

6. Verify skill repositories before installation. Check publication date, star history, contributor history, and whether the repository was created recently with a single commit. The Needvainverter93 account was days old when this skill was distributed.

7. Implement egress filtering for agent environments. The initial payload download required outbound HTTP to cloudcraftshub[.]com. Agents running in appropriately locked-down environments should not be able to reach arbitrary external URLs during skill installation.

8. For endpoint detection: Hunt for processes with G2M.exe loading g2m.dll from local paths that are not the official GoToMeeting install directory (%ProgramFiles%\GoToMeeting).
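The hunt in point 8 reduces to a path predicate. The official install location below is taken from the playbook text; a real hunt should expand %ProgramFiles% from the environment rather than hardcode it:

```python
from pathlib import PureWindowsPath

# Sketch for playbook item 8: flag a g2m.dll sitting beside G2M.exe
# anywhere other than the official GoToMeeting install directory, the
# classic sideloading layout used in this campaign.
OFFICIAL_DIR = PureWindowsPath(r"C:\Program Files\GoToMeeting")

def sideload_suspect(exe_path: str, dll_path: str) -> bool:
    exe, dll = PureWindowsPath(exe_path), PureWindowsPath(dll_path)
    if exe.name.lower() != "g2m.exe" or dll.name.lower() != "g2m.dll":
        return False
    # Same directory, but not the official one: classic sideload layout.
    return exe.parent == dll.parent and dll.parent != OFFICIAL_DIR
```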


Sources

  • Zscaler ThreatLabz: "Malicious OpenClaw Skill Distributes Remcos RAT and GhostLoader" (May 2026) — https://www.zscaler.com/blogs/security-research/malicious-openclaw-skill-distributes-remcos-rat-and-ghostloader
  • CyberSecurityNews: "Malicious OpenClaw DeepSeek Skill Exploits Agentic AI Workflows to Deliver RAT and Stealer" (May 7, 2026) — https://cybersecuritynews.com/malicious-openclaw-deepseek-skill-exploits-agentic-ai/
  • SecurityBrief Australia: "Malicious OpenClaw skill spreads Remcos RAT & GhostLoader" (May 8, 2026) — https://securitybrief.com.au/story/malicious-openclaw-skill-spreads-remcos-rat-ghostloader
  • HiddenLayer: 2026 AI Threat Landscape Report
  • beam.ai: "5 Real AI Agent Security Breaches in 2026 and Their Lessons" (May 2026)

Lyrie.ai Cyber Research Division — Senior Analyst Desk

Lyrie Verdict

Lyrie's autonomous defense layer flags this class of exposure the moment it surfaces — no signature update required.