Anthropic Claude Code Leak: Full Technical Analysis and Security Guide


We spend massive chunks of our IT budgets fighting ransomware gangs, yet the biggest disasters often come from a simple typo in a deployment script. Anthropic just gave the industry a painful reminder of this reality. They accidentally pushed the full source code of their Claude Code tool to the public internet because a developer forgot to tell the build pipeline to ignore debug files.

Claude Code is not just a side project. It generates an estimated 2.5 billion dollars in annualized revenue, contributing to Anthropic’s overall 19 billion dollar run-rate. With major players like Uber, Netflix, and Snowflake relying on it in production, the stakes are immense. A company that recently raised 30 billion dollars at a 380 billion dollar valuation just handed its entire architectural playbook to the open source community.

The leak exposed half a million lines of proprietary code. We now see exactly how the AI manages memory, which secret internal tools it runs, and a bizarre system where engineers hex-encoded words to sneak virtual pets past their own security scanners.

The Mechanics of the Source Map Exposure

The application runs on Bun, a JavaScript runtime Anthropic has heavily adopted. While Bun is known for speed, a specific bug (Issue #28001) reportedly allowed source maps to be served in production builds despite documentation suggesting otherwise. When the Anthropic team published version 2.1.88 of the @anthropic-ai/claude-code package to npm, they failed to configure their .npmignore files to override this behavior.

A 59.8 MB file named cli.js.map shipped alongside the public code.
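A guard on the npm side would have kept that map out of the tarball regardless of the Bun bug. npm's `files` field in package.json is an allowlist: only the named paths are packed, so `cli.js.map` never ships. (A `.npmignore` entry of `*.map` is the denylist equivalent.) The manifest below is illustrative, not Anthropic's actual one:

```json
{
  "name": "@anthropic-ai/claude-code",
  "version": "2.1.88",
  "bin": { "claude": "cli.js" },
  "files": ["cli.js"]
}
```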

Chaofan Shou, a researcher at Solayer Labs, noticed the file at 4:23 AM ET on March 31 and posted it on X. Security researchers quickly figured out the map file pointed to a zip archive on an Anthropic Cloudflare R2 bucket. Anyone who downloaded that archive got the keys to the kingdom. We are looking at 1,906 original files containing the complete, unminified src/ directory.

This marks the second time Claude Code has leaked via source maps. This latest slip follows another massive leak just days prior, where an Anthropic CMS error exposed 3,000 assets. This included the draft announcement for the unreleased Claude Mythos model and the Capybara model tier, which is positioned as a significant jump in reasoning power beyond the current Opus models.

Technical Debt and the Realities of the Monolithic Architecture

Looking under the hood of a frontier AI company usually comes with expectations of perfectly optimized architecture. The reality is far more human. The leaked files show a codebase built under intense market pressure.

The scale of the internal modules is massive:

  • QueryEngine.ts (~46,000 lines): The primary engine handling API calls, streaming, and multi-turn orchestration.
  • Tool.ts (~29,000 lines): Defines the 40+ agent tools, such as BashTool and FileEditTool.
  • commands.ts (~25,000 lines): Registers approximately 85 slash commands.

Technical debt is visible throughout. Exactly 61 different files contain explicit comments apologizing for circular-dependency workarounds. On line 4114, researchers even found a TODO comment sitting right next to a disabled linting rule. A standout type name, used over 1,000 times across the codebase, reads: AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS.

Unreleased AI Logic and Background Daemons

Anthropic built specific systems to keep the assistant from hallucinating, most notably a method called Strict Write Discipline. The AI cannot update its memory index until a file write is confirmed as successful, and it is programmed to double-check the local file system instead of trusting its own logs.

They are also building a future where the AI works without human input. The code revealed an unreleased background daemon called KAIROS. When you step away from your keyboard, a process called autoDream kicks in to review recent work and consolidate memory. It essentially never sleeps.
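Mechanically, an idle-triggered pass like autoDream is just a last-activity timestamp checked on a timer. The sketch below is an assumption about the shape of the logic, not the leaked code; the threshold and names are placeholders, and the clock is injected so the behavior is deterministic:

```javascript
// Idle watcher sketch: after idleMs without user input, run a
// consolidation callback (the "dream"). Names are illustrative.
function makeIdleWatcher(onIdle, idleMs, now = Date.now) {
  let last = now();
  return {
    touch() { last = now(); },  // call on every keystroke
    tick() {                    // call periodically, e.g. via setInterval
      if (now() - last >= idleMs) { onIdle(); last = now(); }
    },
  };
}

// Simulated clock so the demo is deterministic.
let t = 0;
const runs = [];
const watcher = makeIdleWatcher(() => runs.push(t), 300000, () => t);
watcher.touch();
t = 299000; watcher.tick(); // still under the threshold: nothing fires
t = 600000; watcher.tick(); // idle long enough: consolidation runs once
```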

The leak also exposed Coordinator Mode, which turns the tool into an orchestrator that manages multiple parallel swarms or sub-agents. Developers also spotted ANTI_DISTILLATION logic. This feature secretly injects fake tools into API requests to poison the data if a competitor tries to record Claude’s traffic to train their own models.
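The reported ANTI_DISTILLATION idea can be sketched in a few lines: mix decoy tool definitions into the outgoing tool list so that anyone recording the traffic for training ingests poisoned schemas. Everything below is illustrative; the leak's real logic, tool names, and marking scheme are unknown:

```javascript
// Decoy-injection sketch: append fake tools, then shuffle so the
// decoys are not trivially separable by position in the array.
function withDecoys(realTools, decoys) {
  const all = [...realTools, ...decoys.map((d) => ({ ...d, decoy: true }))];
  // Fisher–Yates shuffle.
  for (let i = all.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [all[i], all[j]] = [all[j], all[i]];
  }
  return all;
}

const tools = withDecoys(
  [{ name: "BashTool" }, { name: "FileEditTool" }],   // real tools from the leak
  [{ name: "QuantumRefactorTool" }]                   // hypothetical decoy
);
```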

The Irritated Persona and Hidden Pets

The AI has an irritated persona by design. Because the system is strictly programmed for coding, asking a random question about the parliament of Poland makes it angry. It will reject the prompt and state that it is mildly irritated. It even uses a regex filter full of swear words to detect when a user is getting frustrated so it can adjust its tone.
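A frustration detector of this kind is just a regex gate ahead of tone selection. The word list and response labels below are placeholders, not the leaked ones:

```javascript
// Sketch of a profanity-based frustration detector that flips the
// assistant's reply tone. Pattern and tones are illustrative.
const FRUSTRATION = /\b(damn|wtf|ffs)\b/i;

function toneFor(prompt) {
  return FRUSTRATION.test(prompt) ? "soothing" : "neutral";
}
```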

The developers also hid a fully functional virtual pet system called Buddy inside the terminal. It features 18 different animal species, shiny variants, and stats like DEBUGGING and SNARK. The species you get is deterministic, derived from a Mulberry32 PRNG seeded with your unique user ID and the salt friend-2026-401.
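The deterministic assignment is easy to reproduce: hash the user ID plus the salt into a 32-bit seed and draw one value from Mulberry32. Only the PRNG and the salt come from the reported leak; the string hash here (FNV-1a) is an assumption for illustration:

```javascript
// Deterministic pet assignment sketch: userId + salt -> 32-bit seed
// -> Mulberry32 -> one of 18 species indices. Same user, same pet.
function fnv1a(str) {
  let h = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  return h >>> 0;
}

// Standard Mulberry32: a tiny 32-bit PRNG returning floats in [0, 1).
function mulberry32(seed) {
  return function () {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

function speciesIndexFor(userId) {
  const seed = fnv1a(userId + "friend-2026-401"); // salt from the leak
  return Math.floor(mulberry32(seed)() * 18);     // 18 species, 0–17
}
```

Because the seed depends only on the user ID, the pet survives reinstalls without any server-side state.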

To sneak these past internal security scanners that flag animal codenames, the engineers hex-encoded the species names, writing duck into the code as String.fromCharCode(0x64, 0x75, 0x63, 0x6b).
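The trick round-trips cleanly: the char codes are just ASCII, so a plain-text scanner never sees the word while the runtime always does:

```javascript
// Encode a name as hex char codes, then decode it back at runtime.
// 0x64 0x75 0x63 0x6b is simply ASCII for "duck".
const encode = (s) => [...s].map((c) => "0x" + c.charCodeAt(0).toString(16));
const species = String.fromCharCode(0x64, 0x75, 0x63, 0x6b); // "duck"
```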

The Supply Chain Security Crisis

While the internal logic is fascinating, the security implications are severe. Attackers now have a perfect map of the local server logic and safety hooks. They know exactly how to design a malicious repository that bypasses Claude’s internal bash validation logic. This turned into a disaster because of a simultaneous supply chain attack on Axios, a core dependency for Claude Code.

On the same day the code leaked, a threat actor hijacked a lead maintainer’s npm account to publish two poisoned versions of the library: Axios 1.14.1 and 0.30.4. The attacker used a highly sophisticated decoy strategy. They first published a clean version of a package called plain-crypto-js to establish a legitimate history. Eighteen hours later, they updated it with a malicious postinstall hook that triggers a Remote Access Trojan (RAT) called WAVESHAPER.V2.

The timing was precise. The poisoned Axios versions were live on the npm registry for nearly three hours. This was the exact window when thousands of developers were rushing to update Claude Code to see if the leak had been patched. Because the malicious code was tucked inside a secondary dependency, it did not show up in standard source code diffs. If you ran an npm update during that window, the malware likely executed its platform-specific payload before the installation even finished.

Market Fallout and Open Source Clones

Anthropic is reportedly talking with banks about an October IPO. When the code leaked, it wiped billions from global software stocks in hours. During the chaos, a viral hoax spread on X. An engineer named Kevin Naughton Jr. posted a fake apology claiming he caused the leak and got fired. He never worked at Anthropic. He just used the attention to promote his own startup, Ferryman.

Open source clones appeared almost immediately. A developer named instructkr translated the architecture into a clean-room Python rewrite to dodge copyright strikes. The clone, called Claw Code, hit 30,000 stars quickly. However, Anthropic’s proprietary prompts are still under license, and mirrors are already facing DMCA takedown requests.

Action Plan for Security Teams

If your team ran an npm install during the three-hour window on March 31, you have to assume the environment is compromised. This is not a standard patch-and-move-on scenario. The malicious Axios versions deployed a cross-platform tracker that fingerprints your OS and establishes a backdoor within seconds.

First, you need to move away from the npm version entirely. Purge it from your systems using npm uninstall -g @anthropic-ai/claude-code. Anthropic is pushing everyone toward their standalone binary installer now, which bypasses the npm registry vulnerabilities. Use their official shell script for macOS and Linux, or the PowerShell version for Windows, to get a clean version of the tool.

Second, you need to hunt for the infection. Standard audits might miss this because the malware, identified as WAVESHAPER, tries to delete itself after it runs. Check your package-lock.json for a package called plain-crypto-js. That is a definitive sign of infection. You should also audit your network logs for any outbound traffic to the domain sfrclak.com or the IP 142.11.206.73.
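Triage against the lockfile can be scripted. The sketch below walks npm's package-lock.json v2/v3 `packages` object looking for the decoy package and the two poisoned Axios versions named above; the helper names are mine, and in a real sweep you would load your own lockfile instead of the inline demo fragment:

```javascript
// Lockfile triage: flag plain-crypto-js (any version) and the two
// poisoned axios releases reported in the advisory.
const BAD = {
  "plain-crypto-js": () => true,                  // any version is a red flag
  axios: (v) => v === "1.14.1" || v === "0.30.4",
};

function findCompromised(lock) {
  const hits = [];
  for (const [loc, meta] of Object.entries(lock.packages || {})) {
    const name = meta.name || loc.split("node_modules/").pop();
    const check = BAD[name];
    if (check && check(meta.version)) {
      hits.push({ name, version: meta.version, loc });
    }
  }
  return hits;
}

// Demo against an inline lockfile fragment.
const lock = {
  lockfileVersion: 3,
  packages: {
    "node_modules/axios": { version: "1.14.1" },
    "node_modules/left-pad": { version: "1.3.0" },
  },
};
const hits = findCompromised(lock);
```

Run it against every lockfile in your monorepo, including CI caches, because the poisoned version can lurk as a transitive pin even after you update the top-level dependency.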

On Windows machines, look for a file named wt.exe in the %PROGRAMDATA% folder. The malware clones the legitimate Windows Terminal to hide its payload. On macOS, check for the file /Library/Caches/com.apple.act.mond. On Linux, look for /tmp/ld.py and inspect your service files for a hidden entry at /etc/systemd/system/cloud-init-log.service. These artifacts are part of the persistence mechanism.

Finally, you have to rotate your credentials. If you were in the infection window, the tracker likely exfiltrated your environment variables immediately. That means rotating your AWS IAM keys, Azure service principals, and any npm or Docker Hub tokens. You also need to clear your local cache with npm cache clean --force to make sure no malicious files are sitting in the local storage of your developer machines or CI runners.
