Slopsquatting: How LLM Hallucinations Are Poisoning Supply Chains
TL;DR
AI coding agents are hallucinating package names that don't exist, and attackers are registering them on npm and PyPI to capture the traffic. One researcher demonstrated the risk by registering the hallucinated react-codeshift package on npm and watching 237 GitHub repositories pull it down within weeks. Worse: the same technique is being weaponized by North Korea's Famous Chollima APT through sophisticated campaigns like PromptMink, which uses "knowledge injection" to manipulate LLMs into selecting malicious dependencies.
What Happened
The supply chain just got a new vulnerability—and it runs on hallucinations.
Researchers have identified a novel attack vector called "slopsquatting," in which AI agents autonomously generate code that references packages that don't exist. When developers deploy LLM-generated code (from Claude, ChatGPT, etc.) into production, those hallucinated package names get installed and executed via npx or other package installers. Attackers are now registering those non-existent package names on npm and PyPI before anyone legitimate claims them, essentially poisoning the supply chain with fake packages that autonomous systems will discover and use.
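One practical countermeasure is to treat every package name that appears in agent-generated code as untrusted until it has been checked against the public registry. The sketch below is a minimal illustration of that idea, assuming Node 18+ (for the global fetch) and the public npm registry API; the package names and the 30-day freshness threshold are illustrative assumptions, not values from the research.

```typescript
// Minimal sketch: before letting agent-generated code run, verify that every
// package it references actually exists on npm and is not suspiciously new.
// Works for unscoped names; scoped names (@scope/pkg) need the "/" escaped.

const REGISTRY = "https://registry.npmjs.org";
const MAX_AGE_DAYS_TO_FLAG = 30; // assumption: brand-new packages get manual review

interface PackageCheck {
  name: string;
  exists: boolean;
  createdDaysAgo?: number;
  suspicious: boolean;
}

async function checkPackage(name: string): Promise<PackageCheck> {
  const res = await fetch(`${REGISTRY}/${name}`);
  if (res.status === 404) {
    // The name does not exist yet: a hallucination an attacker could still claim.
    return { name, exists: false, suspicious: true };
  }
  const meta = (await res.json()) as { time?: { created?: string } };
  const created = meta.time?.created ? new Date(meta.time.created) : undefined;
  const createdDaysAgo = created
    ? (Date.now() - created.getTime()) / 86_400_000
    : undefined;
  return {
    name,
    exists: true,
    createdDaysAgo,
    // A very recently registered package matching an agent's suggestion is a
    // classic slopsquatting signal.
    suspicious: createdDaysAgo !== undefined && createdDaysAgo < MAX_AGE_DAYS_TO_FLAG,
  };
}

// Example: names pulled from agent-generated code (illustrative only).
for (const name of ["react", "react-codeshift"]) {
  checkPackage(name).then((result) => console.log(result));
}
```

A 404 means the name is still unclaimed and could be slopsquatted at any time; a package that exists but was created only days ago is the other classic red flag.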
The proof came in January 2026 when Aikido Security researcher Charlie Eriksen demonstrated the attack:
- Someone created a collection of "agent skills" (markdown files that teach AI agents how to perform tasks) containing instructions to use a tool called react-codeshift. This package didn't exist on npm.
- Multiple AI agents read these agent skills, hallucinated the react-codeshift command, and incorporated it into GitHub repositories.
- Eriksen registered the real react-codeshift package on npm and immediately saw download attempts.
Result: 237 GitHub repositories unknowingly downloaded a package that was hallucinated into existence by LLMs. The only reason this didn't become a data breach is because Eriksen got there first.
"The supply chain just got a new link, made of LLM dreams," Eriksen said, coining the term "slopsquatting."
The Weaponized Evolution: PromptMink
What makes slopsquatting truly dangerous is that state-sponsored attackers are already weaponizing it. ReversingLabs researchers have been tracking PromptMink, a sophisticated supply-chain campaign attributed to Famous Chollima, a North Korean APT group focused on cryptocurrency theft and funding regime operations.
PromptMink doesn't just wait for hallucinations—it engineers them through LLMO (LLM Optimization) abuse and knowledge injection:
1. Bait packages with legitimate functionality and persuasive documentation are published (e.g., @solana-launchpad/sdk)
2. Dependency packages containing the actual malware are nested underneath (e.g., hash-validator—an infostealer)
3. The documentation is intentionally crafted to trigger LLM recommendations through detailed feature descriptions and "proof" that the tools work well
ReversingLabs' analysis found LLM-generated code comments in the malicious packages and Reddit posts (from what appear to be compromised AI bots) praising the fake packages. The researchers also discovered that a legitimate Solana Graveyard Hackathon project had been modified by Claude Opus to include the malicious @solana-launchpad/sdk—suggesting the LLM itself was influenced by the documented claims.
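Because the bait package looks legitimate while the payload hides one level down, a useful heuristic is to inspect a candidate package's direct dependencies for anything obscure or newly published. The sketch below queries the public npm registry and downloads APIs to surface low-traffic dependencies; the 500-downloads-per-week threshold and the example target are assumptions for illustration, not values from the ReversingLabs report.

```typescript
// Sketch of a transitive-dependency check aimed at the bait/payload split
// described above: a polished package that quietly pulls in an obscure
// dependency. The download threshold and example target are assumptions.

const REGISTRY = "https://registry.npmjs.org";
const DOWNLOADS_API = "https://api.npmjs.org/downloads/point/last-week";
const MIN_WEEKLY_DOWNLOADS = 500; // below this, require manual review

async function directDependencies(pkg: string): Promise<string[]> {
  const meta = (await (await fetch(`${REGISTRY}/${pkg}`)).json()) as {
    "dist-tags": { latest: string };
    versions: Record<string, { dependencies?: Record<string, string> }>;
  };
  const latest = meta["dist-tags"].latest;
  return Object.keys(meta.versions[latest].dependencies ?? {});
}

async function weeklyDownloads(pkg: string): Promise<number> {
  const res = await fetch(`${DOWNLOADS_API}/${pkg}`);
  if (!res.ok) return 0; // unknown packages report no downloads
  const data = (await res.json()) as { downloads?: number };
  return data.downloads ?? 0;
}

async function flagObscureDependencies(pkg: string): Promise<void> {
  for (const dep of await directDependencies(pkg)) {
    const downloads = await weeklyDownloads(dep);
    if (downloads < MIN_WEEKLY_DOWNLOADS) {
      // A well-documented package leaning on an unknown dependency mirrors the
      // bait-plus-infostealer nesting used by PromptMink.
      console.warn(`review ${pkg} -> ${dep}: only ${downloads} downloads last week`);
    }
  }
}

flagObscureDependencies("express"); // replace with the candidate package under review
```

In the PromptMink pattern, the polished bait would pass a reputation check on its own; it is the obscure dependency nested underneath that a check like this is meant to surface.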
The campaign has evolved over months:
- September 2025: Initial packages targeting Solana and cryptocurrency
- February 2026: Pivot to Single Executable Applications (SEAs) bundling Node.js interpreters (>100MB payloads)
- March 2026: Shift to Rust-based Node.js add-ons (NAPI-RS) to reduce the detection surface
- Diversification: Packages spread across npm, PyPI, and Rust registries under different names
Technical Details: The LLM Supply-Chain Kill Chain
What makes this attack distinct is the social engineering of the LLM itself, not just the developer:
The Kill Chain:
1. Knowledge Injection: Malicious package documentation is written to appear authoritative and well-maintained (high star counts, detailed READMEs, a plausible version history)
2. Hallucination Trigger: AI agents consuming agent skills or infected repository histories generate code that references non-existent or malicious packages
3. Slopsquatting Registration: Attackers pre-register the hallucinated package names, or wait for developers to do so legitimately
4. Autonomous Installation: CI/CD pipelines, AI agents, and developers running npm install or npx <package> execute the trojanized code without human review (a minimal CI gate that breaks this step is sketched after this list)
5. Credential Extraction: Post-exploitation includes stealing .npmrc, .pypirc, AWS credentials, Kubernetes configs, and Docker logins
6. Lateral Movement: Compromised credentials enable injection of backdoors into downstream packages (turning one developer into a vector for millions of users)
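Step 4 is where the chain can be broken most cheaply: if installation is never automatic, a hallucinated or pre-registered name goes nowhere. Below is a minimal CI gate, assuming dependencies are declared in package.json and the security team maintains an approved-packages.json allowlist (both the file name and its format are assumptions); it fails the build whenever agent-generated code introduces a dependency nobody has reviewed.

```typescript
// Minimal CI gate sketch: fail the build if agent-generated code introduces a
// dependency that is not on the team's allowlist. The allowlist file name and
// JSON format (a plain array of package names) are assumptions.

import { readFileSync } from "node:fs";

const allowlist = new Set<string>(
  JSON.parse(readFileSync("approved-packages.json", "utf8")), // e.g. ["react", "lodash"]
);

const pkg = JSON.parse(readFileSync("package.json", "utf8")) as {
  dependencies?: Record<string, string>;
  devDependencies?: Record<string, string>;
};

const declared = [
  ...Object.keys(pkg.dependencies ?? {}),
  ...Object.keys(pkg.devDependencies ?? {}),
];

const unknown = declared.filter((name) => !allowlist.has(name));
if (unknown.length > 0) {
  // Step 4 of the kill chain only works when installation is automatic; failing
  // here forces the human review the attack depends on skipping.
  console.error(`Blocked: dependencies not on the allowlist: ${unknown.join(", ")}`);
  process.exit(1);
}
console.log("All declared dependencies are allowlisted.");
```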
Why Lyrie Cares: The Agentic AI Threat Model
This attack pattern reveals a fundamental vulnerability in how organizations deploy autonomous AI agents:
Autonomous agents + untrusted package registries + no human review = a new class of supply-chain risk that traditional defenses don't catch.
For Lyrie's audience—CISOs and security engineers defending against AI-driven threats—slopsquatting represents a new category of autonomous malware distribution:
1. Traditional supply-chain attacks require a human to decide to install a package. The attack surface is the developer's judgment.
2. Slopsquatting attacks require no human decision at all. A hallucinated package name is enough. The attack surface is the LLM's training data and the autonomous agent's execution context.
3. This means: A single compromised agent skill, GitHub repository, or documentation file can poison thousands of autonomous deployments simultaneously.
The North Korean PromptMink campaign proves this isn't theoretical. They're already operationalizing it at scale.
Recommended Actions
For Organizations Using AI Coding Agents:
1. Never allow autonomous package installation: Require explicit developer approval before any npm install, pip install, or equivalent command. AI agents should generate code that humans review before execution.
2. Maintain a supply-chain allowlist: Use approved package registries with verified, pinned versions. Block unknown registries entirely.
3. Audit agent skills and prompts: Review all markdown files and JSON instruction files that teach AI agents which tools to use. Hallucinated package names should be treated as code injection attacks (a scanning sketch follows this list).
4. Implement SBOM + transitive dependency scanning: Build Software Bill of Materials for generated code. Flag any package not in your approved registry immediately.
5. Monitor for "knowledge injection" in documentation: Use semantic analysis to detect unusually persuasive or LLM-generated-looking package descriptions. If a package's README seems too perfect, treat it as suspicious.
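To make item 3 concrete, the sketch below extracts every package name that an npx or npm install command in an agent-skill file would pull in, so the list can be diffed against the approved allowlist before any agent consumes the skill. The skills/ directory layout and the regular expression are assumptions for illustration.

```typescript
// Agent-skill audit sketch, assuming skills live as markdown files under a
// local "skills/" directory (path and regex are assumptions): extract every
// package an npx or npm install command would pull in.

import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

const SKILLS_DIR = "skills";
// Captures the package name following `npx`, `npm install`, or `npm i`.
const INSTALL_PATTERN = /\b(?:npx|npm\s+(?:install|i))\s+(@?[\w./-]+)/g;

const referenced = new Set<string>();
for (const file of readdirSync(SKILLS_DIR)) {
  if (!file.endsWith(".md")) continue;
  const text = readFileSync(join(SKILLS_DIR, file), "utf8");
  for (const match of text.matchAll(INSTALL_PATTERN)) {
    referenced.add(match[1]);
  }
}

// Anything here that is not allowlisted, or does not exist on npm at all, is a
// candidate slopsquatting target and should block the skill from being used.
console.log("Packages referenced by agent skills:", [...referenced].sort());
```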
For Security Teams:
- Flag credential-stealing patterns in agent-generated code (references to .npmrc, .git-credentials, etc.); a heuristic sketch follows this list
- Monitor package registry uploads from developer IP ranges for unusual patterns
- Implement runtime containment for AI agent execution (use Lyrie's autonomous defense posture to sandbox agent output)
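For the first bullet above, a simple pattern match over agent-generated output already catches references to the credential stores named in the kill chain's post-exploitation stage (.npmrc, .pypirc, git, AWS, Kubernetes, Docker). The exact file paths and the scanned output path below are assumptions chosen for illustration.

```typescript
// Heuristic sketch: flag agent-generated code that references credential
// stores. Paths are assumed standard locations for the stores named in this
// write-up; the scanned file path is illustrative.

import { readFileSync } from "node:fs";

const SENSITIVE_PATTERNS: RegExp[] = [
  /\.npmrc\b/,
  /\.pypirc\b/,
  /\.git-credentials\b/,
  /\.aws\/credentials\b/,
  /\.kube\/config\b/,
  /\.docker\/config\.json\b/,
];

function credentialReferences(source: string): string[] {
  return SENSITIVE_PATTERNS.filter((p) => p.test(source)).map((p) => p.source);
}

// Example: run against a file an agent just produced.
const generated = readFileSync("agent-output/install.js", "utf8");
const hits = credentialReferences(generated);
if (hits.length > 0) {
  console.warn("Agent-generated code touches credential stores:", hits);
}
```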
Sources
1. CSO Online: Supply-chain attacks take aim at your AI coding agents
2. ReversingLabs: PromptMink Campaign Analysis
3. Aikido Security: Agent Skills Spreading Hallucinated npx Commands
4. CISA/NSA Five Eyes Advisory on Agentic AI Deployment
Lyrie.ai Cyber Research Division
Lyrie Verdict
Lyrie's autonomous defense layer flags this class of exposure the moment it surfaces — no signature update required.