At 4:23 AM on Tuesday, an intern at a crypto startup spotted something odd in a routine software update.

Buried inside Claude Code version 2.1.88, pushed to the public npm registry in the early hours, was a 59.8-megabyte file that should never have been there. It was a source map - the kind of thing developers use internally and never publish.

This one pointed directly to a zip archive on Anthropic's own cloud storage. And that archive contained the full, unobfuscated source code for Claude Code.

500,000 lines, 1,900 files, all of it Public.

Within 30 minutes, the code was downloaded, mirrored on GitHub, and forking across the internet. By the time Anthropic pulled the package, the repository had been forked over 41,500 times.

Anthropic confirmed it in a statement: "This was a release packaging issue caused by human error, not a security breach."

True, but the damage is about strategy.

Claude Code is not just a chatbot wrapper. It's a sophisticated engineering system, and the source code reveals exactly how it's built:

  • A three-layer memory architecture that explains why Claude Code stays coherent over long, complex sessions

  • A feature called KAIROS, referenced 150+ times, that allows Claude Code to work autonomously in the background even when the user is idle, consolidating its own memory and resolving contradictions

  • An "Undercover Mode" that strips all AI attribution from public git commit messages when working in open-source repos, so no one sees Anthropic's model names in public code history

  • 44 unreleased features, fully built and hidden behind feature flags, including persistent background agents and cross-device control

Every competitor, OpenAI, Cursor, and Google now has a free engineering education on how to build a production-grade AI coding agent.

And the timing couldn't be worse for Anthropic.

This is their second data leak in under a week. Just days earlier, Anthropic had accidentally left nearly 3,000 internal files publicly accessible, including a draft blog post about their next model, internally called "Mythos" and "Capybara."

So in the span of seven days, Anthropic accidentally revealed:

  • What is the most powerful upcoming model that they can do

  • Exactly how their most profitable product is built

Claude Code generates $2.5 billion in annualised revenue. 80% of that comes from enterprise clients. Those clients pay, in part, for the belief that the technology running their workflows is proprietary and protected.

That belief took a serious hit this week.

Here's the uncomfortable irony. Anthropic has spent years positioning itself as the "safety-first" AI lab - the one that's careful, deliberate, and thoughtful. Two leaks in one week suggest that operational discipline hasn't kept pace with growth. That's a warning about what happens when a company scales from a research lab to a $19 billion revenue run rate in under two years.

The leak won't sink Anthropic. Claude Code is still the best AI coding tool in the world. But every competitor just got a free map of how it was built. In AI, that head start is measured in months, and months matter enormously.

Europe decided it wants its own AI stack. And it's paying $830M for it.

France's Mistral is building a data center outside Paris with 13,800 Nvidia chips, 44 megawatts of capacity, the largest European AI infrastructure bet yet.

Europe's governments and enterprises don't want their AI running on American servers. Mistral's pitch: sovereign AI, data that never touches a US hyperscaler.

Their revenue grew 20x in one year, from $20M to $400M ARR. They're targeting $1B by year-end.

15% of Americans say they'd work for an AI boss. 70% think AI will shrink their job market.

These split emerged in a Quinnipiac poll of 1,397 Americans. Amazon has already laid off thousands of middle managers and replaced their functions with AI workflows. Uber engineers built an AI model of their own CEO to screen pitches before meetings with the actual CEO.

The "Great Flattening" isn't a prediction anymore. It's already being deployed.

AI data centers are literally heating up the neighbourhoods around them.

A new study found this is affecting an estimated 340 million people across India, Spain, Mexico, and other countries. The heat isn't metaphorical.

Data centers exhaust enormous amounts of thermal energy. Neighbourhoods within a few kilometres are measurably warmer, agricultural land is affected, and water usage increases.

Every ChatGPT query, every Claude response, every Grok search - all of it runs on hardware that produces real, physical heat in real, physical places.

FAST BREAK

A startup called Starcloud recently raised $170 million to build data centres in space. Not near space or high altitude, but in orbit.

The idea is to launch servers into orbit, power them with solar panels, and beam their computing power back to Earth. It sounds like science fiction. Apparently, it costs $170 million to find out if it isn't.

Elon Musk's Terafab wants a terawatt of compute in space. Blue Origin entered the space data centre business last month. Now a startup is joining them.

The most expensive real estate on Earth for AI infrastructure might end up being 400 kilometres above it.

Keep Reading