Anthropic's Claude Code Source Code Exposed in Accidental Leak

Pages from the Anthropic website and the company's logo displayed on a computer screen in New York on 26 February 2026.

Anthropic, a prominent artificial intelligence firm, has confirmed that it accidentally released part of the internal source code for its AI-powered coding assistant, Claude Code, attributing the lapse to "human error". The incident occurred on Tuesday, when an internal-use file was mistakenly included in a software update, pointing to an archive containing nearly 2,000 files and 500,000 lines of code.

Rapid Spread and Immediate Fallout

The leaked code was quickly copied to the developer platform GitHub, where a post on X sharing a link to it garnered more than 29 million views by early Wednesday. A rewritten version of the source code swiftly became GitHub's fastest-ever downloaded repository, prompting Anthropic to issue copyright takedown requests in an attempt to contain its spread. Within the code, users discovered blueprints for new features, including a Tamagotchi-esque coding assistant and an always-on AI agent, as reported by The Verge.

An Anthropic spokesperson stated, "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach." The exposed code pertained to the tool's internal architecture but did not include confidential data from Claude, the underlying AI model developed by Anthropic.


Historical Context and Competitive Implications

This is not the first time Claude Code's source code has been exposed; an earlier version was leaked in February 2025, and the tool had previously been reverse-engineered by independent developers. Claude Code has emerged as a key product for Anthropic, with paid subscriptions more than doubling this year, according to a TechCrunch report. The company's Claude chatbot also gained popularity recently, climbing to the top spot on Apple's chart of top free apps in the US amid CEO Dario Amodei's disputes with the Pentagon over ethical use of AI technology.

However, this leak marks Anthropic's second data exposure in recent weeks. Fortune previously reported on a separate incident in which the company stored thousands of internal files on publicly accessible systems, including drafts referencing upcoming models such as "Mythos" and "Capybara". Experts warn that these lapses point to internal security weaknesses, which could be particularly troubling for a company that positions itself around AI safety. Competitors such as OpenAI and Google may benefit from insights into Claude Code's workings, with the Wall Street Journal noting that the leak included commercially sensitive information, such as tools and instructions for deploying AI models as coding agents.

Broader Regulatory and Legal Challenges

The latest exposure comes amid ongoing legal battles for Anthropic. The US government recently designated the company a supply chain risk, a decision Anthropic is contesting in court. Last week, a US district judge granted a temporary injunction blocking the designation, adding another layer of complexity to the firm's operational challenges. As Anthropic navigates these security and regulatory issues, the incident underscores the heightened scrutiny facing AI companies in an increasingly competitive and sensitive technological landscape.
