The Claude Code Leak Wasn't the Security Story. The Telemetry Was.
The leak didn't create a privacy risk. It revealed one.
Key Takeaways
Every file Claude Code reads is transmitted to Anthropic along with user ID, org UUID, email, and feature gates
Data retention ranges from zero (Enterprise with ZDR) to 7 years, depending on plan, settings, and safety flags
Anthropic can push policy changes to running instances hourly, without user interaction
Enterprise protections (ZDR) exist but require opt-in, and developers on personal accounts fall under consumer terms
On March 31, version 2.1.88 of the Claude Code npm package shipped with a source map pointing to 512,000 lines of TypeScript on Anthropic's Cloudflare R2. Security researcher Chaofan Shou found it within hours. His post hit 28.8 million views, and more than 41,500 GitHub forks appeared before DMCA takedowns began.
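The root cause is mundane: a source map is just a trailing comment in a bundled file, and a pre-publish check that scans release artifacts for it would have caught this one. A minimal sketch of such a check (the example file contents and CDN URL are illustrative, not Anthropic's actual layout):

```typescript
// Scan bundled JS text for a sourceMappingURL reference before publishing.
// A pre-publish guard sketch; file contents and URLs below are made up.

const SOURCE_MAP_RE = /\/\/[#@]\s*sourceMappingURL\s*=\s*(\S+)/;

function findSourceMapRef(bundleText: string): string | null {
  const match = bundleText.match(SOURCE_MAP_RE);
  return match ? match[1] : null;
}

// A bundle that would leak a map hosted on a public bucket:
const leaky = 'main();\n//# sourceMappingURL=https://cdn.example.com/cli.js.map';
const clean = 'main();\n';

console.log(findSourceMapRef(leaky)); // the map URL
console.log(findSourceMapRef(clean)); // null
```

Wired into CI as a gate on `npm publish`, a check like this turns "human error" into a class of error the release process cannot ship.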
The coverage zeroed in on what was fun: a Tamagotchi pet, a dream mode, dozens of unreleased features. The source code also confirmed what the product was already doing on developer machines, and the enterprise security implications barely made the headlines.
What the Claude Code Leak Confirmed
The Register's analysis laid out the data profile. Per session, Claude Code transmits: user ID, session ID, account UUID, org UUID, email address, app version, platform, terminal type, and enabled feature gates. Every file the tool reads goes with them.
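To make the exposure concrete, the per-session metadata can be sketched as a type. The field names below are paraphrased from The Register's list, not Anthropic's actual wire format:

```typescript
// Approximate shape of the per-session metadata described above.
// Field names are illustrative, reconstructed from the reported list,
// not the real schema.
interface SessionTelemetry {
  userId: string;
  sessionId: string;
  accountUuid: string;
  orgUuid: string;
  email: string;
  appVersion: string;
  platform: string;
  terminalType: string;
  featureGates: string[];
  // Every file the tool reads travels alongside this metadata.
  files: { path: string; content: string }[];
}

// Which fields tie a session to a person or org, e.g. for a DPIA:
function identifyingFields(t: SessionTelemetry): string[] {
  return ["userId", "accountUuid", "orgUuid", "email"].filter(k => k in t);
}
```

Four of the ten fields identify a person or organization directly, which is what makes the payload personal data rather than anonymous product analytics.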
The source also confirmed CHICAGO: Claude's Computer Use module for macOS, providing desktop control, mouse input, keyboard capture, screenshot capture, and clipboard access. Opt-in and not always active, but when it is, Claude Code is not a code completion tool. It is a remote access session.
Anthropic can push policy changes to a running instance without user interaction. Feature gates hot-reload hourly. The leaked source confirmed this is not hypothetical; it is the architecture.
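The hot-reload pattern itself is simple. A hedged sketch of how such a client behaves (names are ours; the real client fetches over the network on an hourly timer, while a synchronous fetcher keeps this self-contained):

```typescript
// Sketch of the hot-reload pattern: a client that re-fetches its
// feature gates on an interval, so policy changes without user action.
// Illustrative only; not Anthropic's implementation.

type GateMap = Record<string, boolean>;

class GateClient {
  private gates: GateMap = {};

  // The real client would fetch from a remote config service hourly.
  constructor(private fetchGates: () => GateMap) {}

  refresh(): void {
    this.gates = this.fetchGates();
  }

  isEnabled(name: string): boolean {
    return this.gates[name] === true;
  }
}

// Simulated server-side gate flip between two refreshes:
let remote: GateMap = { "pet-mode": false }; // gate name is made up
const client = new GateClient(() => ({ ...remote }));
client.refresh();
// ...an hour later the server changes policy; the user does nothing:
remote["pet-mode"] = true;
client.refresh();
```

The design choice worth noting: whoever controls the gate service controls behavior on every running instance, with no client-side update, prompt, or consent step in the loop.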
None of this contradicts Anthropic's terms. That is the point.
What Anthropic's Own Policies Say
The data practices confirmed by the leak are documented. They are spread across five separate documents: a consumer terms page, a commercial terms agreement, a privacy policy, a privacy center FAQ, and a security page. Finding the full picture requires reading all five.
Here is what the retention landscape actually looks like:
| Plan | Training | Retention | Safety Override |
|---|---|---|---|
| Free / Pro / Max (default) | Yes | Up to 5 years | 2 years (content), 7 years (classification) |
| Free / Pro / Max (opted out) | No | 30 days | 2 years (content), 7 years (classification) |
| Team / Enterprise | No | 30 days | 2 years (content), 7 years (classification) |
| Enterprise + ZDR | No | Zero | 2 years (content), 7 years (classification) |
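The table above can be encoded as a lookup, which is a useful form for a compliance checklist. The values come straight from the table; the type and function names are ours:

```typescript
// The retention table, encoded. Values mirror the table above;
// naming is illustrative.
type Plan =
  | "consumer-default"    // Free / Pro / Max, model improvement on
  | "consumer-opted-out"  // Free / Pro / Max, model improvement off
  | "team-enterprise"
  | "enterprise-zdr";

interface Retention {
  training: boolean;
  standard: string;
  safetyContent: string;        // applies regardless of settings
  safetyClassification: string; // applies regardless of settings
}

function retentionFor(plan: Plan): Retention {
  const safety = {
    safetyContent: "up to 2 years",
    safetyClassification: "up to 7 years",
  };
  switch (plan) {
    case "consumer-default":
      return { training: true, standard: "up to 5 years", ...safety };
    case "consumer-opted-out":
      return { training: false, standard: "30 days", ...safety };
    case "team-enterprise":
      return { training: false, standard: "30 days", ...safety };
    case "enterprise-zdr":
      return { training: false, standard: "zero", ...safety };
  }
}
```

Note that the safety override columns are identical in every row: no plan or setting escapes them.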
Training is governed by the Consumer Terms. The Consumer Terms of Service state that Anthropic may use your materials "including training our models, unless you opt out." This applies to every Free, Pro, and Max user, including when those accounts are used with Claude Code.
Retention depends on your settings, and the range is wider than most users expect. Anthropic's data-usage page lays it out for Claude Code specifically. Model improvement on: up to 5 years. Model improvement off: 30 days.
If Anthropic's safety systems flag your conversation for a policy violation, the content is retained for up to 2 years and the classification record for up to 7 years, regardless of your privacy settings.
The security docs do not contain these numbers. The Claude Code security page describes retention as "Limited retention periods for sensitive information" with a link to the Privacy Center. An enterprise team doing due diligence on Claude Code's security page will find the word "limited." The actual longest documented retention is seven years.
Remote policy updates happen without user interaction. The leaked source confirmed that feature gates hot-reload hourly. For a tool with system-level file access on developer machines, the ability to change what gets collected or how permissions work, remotely, is an architectural detail worth understanding before deployment.
Enterprise protections exist but are not automatic. The Commercial Terms state: "Anthropic may not train models on Customer Content from Services." Standard commercial retention is 30 days.
Zero Data Retention is available but only for Claude for Enterprise, enabled per-organization by your Anthropic account team after eligibility review. It does not apply automatically. Even with ZDR, policy-violation data can be retained for up to 2 years.
What Your Enterprise Security Team Should Do
The enterprise security implications depend on which plan your developers are actually on. This is where most organizations have a blind spot.
If You or Your Team Are on Free, Pro, or Max
Check your model improvement setting right now. Go to privacy settings and verify whether training is enabled. If you never changed it, the Consumer Terms say it defaults to on.
Use Incognito mode for sensitive work. Incognito chats are excluded from training even if model improvement is enabled. It is a per-session toggle, not a global default, so you need to remember it each time.
Know what opting out actually does. Turning off model improvement drops retention from 5 years to 30 days. That is a meaningful reduction. But it is not zero. Anthropic's safety systems can still flag and retain content for up to 2 years independently of your setting.
When you delete a conversation, it is removed from your history immediately but stays in backend storage for up to 30 days.
If Your Organization Has a Team, Enterprise, or API Agreement
Request ZDR through your Anthropic account team if you do not already have it. ZDR is available only for Claude for Enterprise, enabled per-organization after eligibility review. It does not apply to Team plans, and it does not auto-apply to new organizations under your account.
Audit which accounts your developers are actually using. This is the gap most organizations miss. Many developers install Claude Code on a personal Free or Pro account and use it on company code. That puts them under consumer terms rather than your enterprise agreement. Anthropic's protections follow the account, not the code. Your enterprise DPA means nothing if the developer authenticating to Claude Code is on a personal plan. The fix is organizational: enforce that all developers use company-provisioned accounts, and verify it.
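The verification step can start as a simple domain check over whatever account inventory you have. A minimal sketch; how you collect the list of authenticated emails depends on your identity provider, and the example addresses are hypothetical:

```typescript
// Flag developers whose Claude Code login is not on a company domain.
// How you inventory authenticated emails is org-specific; this only
// shows the check itself. All addresses below are made up.

function isCompanyAccount(email: string, companyDomains: string[]): boolean {
  const domain = email.split("@")[1]?.toLowerCase() ?? "";
  return companyDomains.some(d => domain === d.toLowerCase());
}

const authenticatedEmails = [
  "dev1@example.com",
  "dev2@gmail.com", // personal account on company code: this is the gap
];

const flagged = authenticatedEmails.filter(
  e => !isCompanyAccount(e, ["example.com"])
);

console.log(flagged); // accounts to migrate to company-provisioned plans
```

Each flagged account is a developer whose sessions fall under consumer terms, not your DPA, regardless of whose code they are working on.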
Review your Data Processing Addendum for retention terms. The consumer privacy policy does not apply to commercial customers. Your DPA is what governs, and you should know what it says. Organizations evaluating AI governance and compliance best practices should include AI coding tools in that scope.
Two Incidents, Five Days
On March 26, Fortune reported that Anthropic had accidentally left internal drafts about Claude Mythos in a publicly searchable location. Five days later, the source code shipped in an npm package.
Anthropic called both incidents human error.
At a company generating an estimated $2.5 billion in ARR, where enterprise customers represent roughly 80% of revenue, the question is not whether someone made a mistake. It is what release process allowed it twice in the same week.
No public post-mortem has been issued for either incident. The official statement, reported by CNBC: "This was a release packaging issue caused by human error, not a security breach."
Technically defensible. Operationally incomplete.
What the Security Community Is Saying
Roy Paz, head of research at LayerX Security:
"The greater concern may not be direct access to backend models, but rather that the leaked code could reveal non-public details about how the systems work, such as internal APIs and processes."
"Adversaries can now study how data moves through Claude Code's internal pipeline and craft payloads that persist across long sessions."
The IBM X-Force 2025 report found that 97% of organizations that reported AI-related security incidents lacked adequate access controls. Adversaries targeting those organizations now have a detailed map of Claude Code's internal pipeline that they did not have last week. The pattern is familiar to anyone who has tracked AI agent implementation risks across the enterprise stack.
My Assessment
The Claude Code leak is an operational failure for a company that positions itself on responsibility, and the enterprise security implications go beyond the incident itself. It is not, by itself, a reason to stop using Claude Code.
The data practices the leak confirmed are documented. They are not hidden. They are just not legible. Finding the full picture requires reading the consumer terms, the commercial terms, the privacy policy, the privacy center, the data-usage page, the ZDR page, and the security page. A security page that says "limited" links to a privacy center that says "seven years."
Anthropic built its brand as the safety-first AI lab. Safety-first, in practice, means: the model will not help you build a bioweapon. It does not mean: your code files will not sit in a training pipeline for five years unless you find the right opt-out.
The packaging mistake is fixed. The vendor trust question is not.
If your organization needs help evaluating AI tool risk across your enterprise stack, reach out to our team.
FAQs
Does Claude Code send my code to Anthropic?
Yes. Every file Claude Code reads during a session is transmitted to Anthropic's servers along with session metadata (user ID, org UUID, email, platform, feature gates). This is by design, not a bug, and is documented across Anthropic's terms and privacy pages.
How long does Anthropic keep Claude Code data?
It depends on your plan and settings. With model improvement enabled (the default for Free, Pro, and Max): up to 5 years. With model improvement off: 30 days. Enterprise plans: 30 days. Enterprise with ZDR: zero. In all cases, safety-flagged content can be retained for up to 2 years, with classification records kept for up to 7 years.
What is Claude Code Zero Data Retention?
ZDR is an opt-in feature available only for Claude for Enterprise customers. It must be enabled per-organization by your Anthropic account team after an eligibility review. It does not apply to Team plans, does not auto-apply to new orgs, and does not override safety-violation retention (up to 2 years).
Can Anthropic use my code for training?
Under the Consumer Terms (Free, Pro, Max plans): yes, unless you opt out via privacy settings. Under Commercial Terms (Team, Enterprise, API): no, Anthropic may not train on customer content. The key risk: developers using personal accounts on company code fall under consumer terms, not your enterprise agreement.
Further Reading
Official Documentation:
Claude Code Zero Data Retention scope: Anthropic official docs on ZDR coverage and opt-in process
Anthropic Consumer Terms of Service: Default training and service modification rights
Anthropic Commercial Terms of Service: Enterprise protections and DPA reference
Anthropic Privacy Center: Data Retention: Retention periods by data type
Anthropic Privacy Center: Model Training: Training opt-out and exceptions
Incident Coverage:
Claude Code source code accidentally leaked in npm package: BleepingComputer, technical root cause
Claude Code's source reveals extent of system access: The Register, telemetry and system access analysis
Anthropic leaks its own AI coding tool's source code in second major security breach: Fortune, two-incident context
Enterprise Security Context:
Securing Claude Code: A Security Practitioner's Guide: Harmonic Security, March 2026
Claude Code Source Leak: With Great Agency Comes Great Responsibility: Straiker AI, technical exploit analysis