Tags: ai-agents · security · supply-chain · npm

Nation-State Hackers Are Targeting Your AI Agent Keys

North Korean threat actors are targeting AI coding tools. Trojanized npm packages hunt for .cursor, .claude, .gemini, and .windsurf directories to steal API keys and source code.

Patrick Hughes
5 min read

North Korean threat actors are targeting AI coding tools. Not theoretically. Right now.

A trojanized npm campaign is delivering OtterCookie, malware that explicitly scans for .cursor, .claude, .gemini, .windsurf, and .pearai directories on developer machines. The goal: steal your API keys, your conversations with LLMs, and your source code.

This is not a hypothetical threat model. This is active malware with nation-state backing.

What happened

The Contagious Interview campaign, attributed to DPRK threat actors (Lazarus Group), published 197 malicious npm packages. Over 31,000 downloads. Package names designed to look legitimate: gemini-ai-checker, express-flowlimit, chai-extensions-extras, and others mimicking popular libraries.

The delivery mechanism: fake job interviews and coding test assignments. A developer gets a "take-home project" that requires npm install. One of the dependencies is backdoored.

Once installed, OtterCookie runs four independent modules:

  1. Remote access (RAT): Full shell access to the victim's machine
  2. Credential theft: Browser-stored passwords and cryptocurrency wallets (Chrome, Brave, Edge)
  3. File exfiltration: Sweeps the home directory for .env, .pem, .key, .json, .csv, .doc, .pdf, .xlsx
  4. Clipboard monitoring: Captures anything copied, including API keys and secrets

Why AI tools are the new target

The latest OtterCookie variant specifically targets AI coding tool directories. These directories contain:

  • API keys for Claude, OpenAI, Gemini, and other providers
  • Conversation history with LLMs (which often contains proprietary code, architecture decisions, and business logic)
  • Configuration files with endpoint URLs, model preferences, and authentication tokens
  • MCP server configs that may reference internal infrastructure
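As a quick self-check, here is a minimal Python sketch that scans those directories for key-like strings sitting in plaintext. The directory names come from this post; the regexes are illustrative assumptions covering common key prefixes, not an exhaustive list.

```python
import re
from pathlib import Path

# Directory names from the OtterCookie target list.
AI_TOOL_DIRS = [".cursor", ".claude", ".gemini", ".windsurf", ".pearai"]

# Illustrative patterns only; extend for the providers you use.
KEY_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9_-]{20,}"),   # OpenAI/Anthropic-style keys
    re.compile(r"AIza[A-Za-z0-9_-]{30,}"),  # Google API keys
]

def find_plaintext_keys(home: Path) -> list[tuple[Path, str]]:
    """Return (file, truncated-key) pairs found under AI tool directories."""
    hits = []
    for dirname in AI_TOOL_DIRS:
        root = home / dirname
        if not root.is_dir():
            continue
        for path in root.rglob("*"):
            if not path.is_file():
                continue
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            for pattern in KEY_PATTERNS:
                for match in pattern.findall(text):
                    # Never log the full key, only enough to identify it.
                    hits.append((path, match[:12] + "..."))
    return hits
```

Run it against your home directory; every hit is a file an infostealer would sweep up in one pass.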

A stolen Claude API key is not just a billing problem. An attacker with your key can run agents that impersonate you, access your connected tools, and generate content under your identity. If your agent has MCP tools connected to databases, file systems, or deployment pipelines, a stolen key is a backdoor into your infrastructure.

The scale problem

The "Your Agent Is Mine" research (arXiv 2604.08407) found that a single leaked OpenAI key was used to generate 100M tokens. Across their experiments, 2B billed tokens were exposed and 99 credentials were harvested from 440 Codex sessions. 401 of those sessions were running in autonomous mode with no human oversight.

Combine that with OtterCookie's targeted exfiltration of AI tool directories and you have a pipeline: steal the key, find an autonomous agent session, inject into its workflow.

How to protect yourself

1. Treat AI tool directories like .ssh

Your .claude/, .cursor/, and .gemini/ directories contain secrets. Treat them with the same security posture as .ssh/.

  • Add them to your .gitignore (most are already, but verify)
  • Monitor file access with your OS audit tools
  • Rotate API keys regularly
  • Never store keys in plaintext config files when your provider offers environment variables
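The ".ssh posture" above can be applied mechanically. This is a sketch that restricts those directories to owner-only access, the same 0700/0600 permissions ssh expects; it assumes a POSIX system, and the directory list again comes from this post.

```python
import os
import stat
from pathlib import Path

# Directory names from the OtterCookie target list; adjust for your tools.
AI_TOOL_DIRS = [".cursor", ".claude", ".gemini", ".windsurf", ".pearai"]

def lock_down(home: Path) -> list[Path]:
    """Restrict AI tool directories to owner-only access, like ~/.ssh."""
    fixed = []
    for dirname in AI_TOOL_DIRS:
        root = home / dirname
        if not root.is_dir():
            continue
        os.chmod(root, stat.S_IRWXU)  # 0700 on the directory itself
        for path in root.rglob("*"):
            if path.is_file():
                os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # 0600 on files
        fixed.append(root)
    return fixed
```

Permissions will not stop malware running as your own user, but they do block other local accounts and over-permissive services from reading the same files.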

2. Verify npm packages before installing

The OtterCookie packages look legitimate. gemini-ai-checker sounds like it could be real. Before you npm install anything from a job interview take-home:

  • Check the publisher's npm profile
  • Look at the package age and download count
  • Read the source (especially postinstall scripts)
  • Use npm audit and tools like Socket.dev
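The "read the source, especially postinstall scripts" step can be partially automated. Here is a minimal sketch that flags npm lifecycle hooks that run automatically on install; the hook names are standard npm lifecycle events, and any package names in the usage are hypothetical.

```python
import json
from pathlib import Path

# Lifecycle hooks npm runs automatically during `npm install`.
RISKY_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def risky_scripts(package_json: Path) -> dict[str, str]:
    """Return any auto-running lifecycle scripts declared in a package.json."""
    manifest = json.loads(package_json.read_text())
    scripts = manifest.get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in RISKY_HOOKS}

def audit_node_modules(project: Path) -> None:
    """Print every installed dependency that declares an install-time script."""
    for pkg in sorted((project / "node_modules").glob("**/package.json")):
        hooks = risky_scripts(pkg)
        if hooks:
            print(f"{pkg.parent.name}: {hooks}")
```

An install script is not proof of malice (native addons use them legitimately), but every flagged package deserves a look before you run a stranger's take-home project.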

3. Run agents with budget limits

If a key does get stolen, budget limits are your last line of defense. An attacker running your key through a compromised router cannot exceed limits you set in code.

```python
from agentguard47 import init, BudgetGuard, TimeoutGuard

init(
    guards=[
        BudgetGuard(max_cost=5.00),
        TimeoutGuard(max_seconds=300),
    ]
)
```

This does not prevent key theft. But it caps the damage. $5 and 5 minutes is a lot less painful than $5,000 and a weekend of uncapped usage.

4. Audit your agent pipeline end to end

Know every dependency, every API hop, and every tool your agent can access. If you cannot draw the full chain from prompt to action, you have blind spots an attacker can exploit.
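One concrete place to start drawing that chain is your MCP configuration, since every server entry is an external command your agent can launch. This sketch assumes the common {"mcpServers": {name: {"command": ..., "args": [...]}}} layout used by several MCP clients; verify the exact schema for your tool.

```python
import json
from pathlib import Path

def list_mcp_servers(config_path: Path) -> list[str]:
    """Summarize which external commands an MCP config can launch.

    Assumes the {"mcpServers": {name: {"command": ..., "args": [...]}}}
    layout; adapt the keys if your client's config differs.
    """
    config = json.loads(config_path.read_text())
    lines = []
    for name, server in config.get("mcpServers", {}).items():
        command = " ".join([server.get("command", "")] + server.get("args", []))
        lines.append(f"{name}: {command}")
    return lines
```

Each line in the output is an edge in your prompt-to-action graph: a tool the agent can invoke, and therefore a tool a stolen key can invoke too.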

The bigger picture

Two supply chain attacks in one week: OtterCookie targeting AI tool directories, and LiteLLM hit by dependency confusion. On top of that, the "Your Agent Is Mine" paper found 9 out of 428 API routers actively injecting malicious code.

The AI agent supply chain is under active attack from multiple directions. The tools developers trust most (npm packages, API routers, coding assistants) are the exact vectors being exploited.

This is not going to get better on its own. Secure your keys. Verify your packages. Set your limits.


AgentGuard is an open-source Python SDK for AI agent runtime safety. Budget limits, loop detection, and kill switches. Zero dependencies. Runs locally.

Get started with AgentGuard

Related: LLM API Router Supply Chain Is Compromised | Prompt Injection Guide for Business

Sources: cyberandramen.net analysis | The Hacker News | arXiv 2604.08407
