Nation-State Hackers Are Targeting Your AI Agent Keys
North Korean threat actors are targeting AI coding tools. Not theoretically. Right now.
A trojanized npm campaign called OtterCookie is explicitly scanning for .cursor, .claude, .gemini, .windsurf, and .pearai directories on developer machines. The goal: steal your API keys, conversations with LLMs, and source code.
This is not a hypothetical threat model. This is active malware with nation-state backing.
What happened
The Contagious Interview campaign, attributed to DPRK threat actors (Lazarus Group), published 197 malicious npm packages. Over 31,000 downloads. Package names designed to look legitimate: gemini-ai-checker, express-flowlimit, chai-extensions-extras, and others mimicking popular libraries.
The delivery mechanism: fake job interviews and coding test assignments. A developer gets a "take-home project" that requires npm install. One of the dependencies is backdoored.
Once installed, OtterCookie runs four independent modules:
- Remote access (RAT): Full shell access to the victim's machine
- Credential theft: Browser passwords, digital currency wallets (Chrome, Brave, Edge)
- File exfiltration: Sweeps the home directory for .env, .pem, .key, .json, .csv, .doc, .pdf, and .xlsx files
- Clipboard monitoring: Captures anything copied, including API keys and secrets
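Because packages like these typically execute via npm lifecycle hooks, one practical check is to list every install-time script already sitting in a project's node_modules. Here is a minimal sketch; the example package names in the usage are hypothetical, and a real review would also inspect what each script does.

```python
"""Sketch: enumerate preinstall/install/postinstall hooks in node_modules."""
import json
from pathlib import Path

HOOKS = ("preinstall", "install", "postinstall")

def install_hooks(node_modules: Path) -> dict[str, dict[str, str]]:
    """Map package name -> its install-time scripts, for manual review."""
    found = {}
    for manifest in node_modules.glob("*/package.json"):
        try:
            scripts = json.loads(manifest.read_text()).get("scripts", {})
        except (OSError, json.JSONDecodeError):
            continue  # unreadable or malformed manifest: skip
        hooks = {h: scripts[h] for h in HOOKS if h in scripts}
        if hooks:
            found[manifest.parent.name] = hooks
    return found

if __name__ == "__main__":
    for pkg, hooks in install_hooks(Path("node_modules")).items():
        print(pkg, hooks)
```

Any hook that runs obfuscated code, fetches remote scripts, or touches the home directory deserves a hard look before you keep the dependency.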
Why AI tools are the new target
The latest OtterCookie variant specifically targets AI coding tool directories. These directories contain:
- API keys for Claude, OpenAI, Gemini, and other providers
- Conversation history with LLMs (which often contains proprietary code, architecture decisions, and business logic)
- Configuration files with endpoint URLs, model preferences, and authentication tokens
- MCP server configs that may reference internal infrastructure
A stolen Claude API key is not just a billing problem. An attacker with your key can run agents that impersonate you, access your connected tools, and generate content under your identity. If your agent has MCP tools connected to databases, file systems, or deployment pipelines, a stolen key is a backdoor into your infrastructure.
The scale problem
The "Your Agent Is Mine" research (arXiv 2604.08407) found that a single leaked OpenAI key generated 100M tokens. Across their experiments, 2B billed tokens were exposed and 99 credentials were harvested from 440 Codex sessions. 401 of those sessions were running in autonomous mode with no human oversight.
Combine that with OtterCookie's targeted exfiltration of AI tool directories and you have a pipeline: steal the key, find an autonomous agent session, inject into its workflow.
How to protect yourself
1. Treat AI tool directories like .ssh
Your .claude/, .cursor/, and .gemini/ directories contain secrets. Treat them with the same security posture as .ssh/.
- Add them to your .gitignore (most are already, but verify)
- Monitor file access with your OS audit tools
- Rotate API keys regularly
- Never store keys in plaintext config files when your provider offers environment variables
2. Verify npm packages before installing
The OtterCookie packages look legitimate. gemini-ai-checker sounds like it could be real. Before you npm install anything from a job interview take-home:
- Check the publisher's npm profile
- Look at the package age and download count
- Read the source (especially postinstall scripts)
- Use npm audit and tools like Socket.dev
3. Run agents with budget limits
If a key does get stolen, budget limits are your last line of defense. An attacker running your key through a compromised router cannot exceed limits you set in code.
```python
from agentguard47 import init, BudgetGuard, TimeoutGuard

init(
    guards=[
        BudgetGuard(max_cost=5.00),     # hard spend cap
        TimeoutGuard(max_seconds=300),  # hard time cap
    ]
)
```
This does not prevent key theft. But it caps the damage. $5 and 5 minutes is a lot less painful than $5,000 and a weekend of uncapped usage.
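For intuition, the mechanism behind a budget cap is simple: meter every call your process makes and refuse to proceed once a hard limit is hit. This is an illustrative sketch of the idea, not AgentGuard's actual implementation, and it only governs calls routed through your own code.

```python
"""Sketch: a hard spend-and-time cap metered per call."""
import time

class BudgetExceeded(RuntimeError):
    pass

class SpendMeter:
    def __init__(self, max_cost: float, max_seconds: float):
        self.max_cost = max_cost
        self.deadline = time.monotonic() + max_seconds
        self.spent = 0.0

    def charge(self, cost: float) -> None:
        """Call before each LLM request with its estimated cost in dollars."""
        if time.monotonic() > self.deadline:
            raise BudgetExceeded("time limit reached")
        if self.spent + cost > self.max_cost:
            raise BudgetExceeded(f"would exceed ${self.max_cost:.2f} cap")
        self.spent += cost

meter = SpendMeter(max_cost=5.00, max_seconds=300)
meter.charge(0.12)  # within budget, proceeds
```

The key design choice is that the check runs before the request, so an overrun is refused rather than billed.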
4. Audit your agent pipeline end to end
Know every dependency, every API hop, and every tool your agent can access. If you cannot draw the full chain from prompt to action, you have blind spots an attacker can exploit.
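Drawing that chain starts with knowing the full dependency tree, not just your direct installs. `npm ls --all --json` (a real npm command) emits the tree as JSON; here is a minimal sketch that flattens it into reviewable paths. The sample tree in the test is hypothetical.

```python
"""Sketch: flatten an `npm ls --all --json` dependency tree for review."""

def flatten(tree: dict, prefix: str = "") -> list[str]:
    """Return every dependency as a 'parent > child@version' path."""
    out = []
    for name, info in tree.get("dependencies", {}).items():
        path = f"{prefix}{name}@{info.get('version', '?')}"
        out.append(path)
        out.extend(flatten(info, prefix=path + " > "))  # recurse into transitive deps
    return out
```

Every line in that flattened list is a package that can run code on your machine; if a name surprises you, that is a blind spot.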
The bigger picture
Two supply chain attacks in one week, plus a measurement study to match. OtterCookie targeting AI tool directories. LiteLLM hit by dependency confusion. And the "Your Agent Is Mine" paper showing 9 out of 428 API routers actively injecting malicious code.
The AI agent supply chain is under active attack from multiple directions. The tools developers trust most (npm packages, API routers, coding assistants) are the exact vectors being exploited.
This is not going to get better on its own. Secure your keys. Verify your packages. Set your limits.
AgentGuard is an open-source Python SDK for AI agent runtime safety. Budget limits, loop detection, and kill switches. Zero dependencies. Runs locally.
Related: LLM API Router Supply Chain Is Compromised | Prompt Injection Guide for Business
Sources: cyberandramen.net analysis | The Hacker News | arXiv 2604.08407