On March 10, Yann LeCun, a Turing Award winner and the man who spent 12 years building Meta's AI research into one of the most influential labs on earth, announced that his new company, AMI Labs, had raised $1.03 billion at a $3.5 billion valuation.

The largest seed round in European startup history.

Three weeks earlier, Fei-Fei Li, the researcher whose ImageNet dataset essentially started the deep learning revolution, raised another billion for World Labs at a $5 billion valuation.

Two of the most decorated researchers in AI history are pointing at the same thing: world models. Both say, in different but equally direct language, that the technology currently running every AI product you use is architecturally insufficient for what comes next.

That is a founding generation betting against its own founding idea.

What's Actually Being Said?

LLMs, the engine behind ChatGPT, Claude, Gemini, and the rest, are trained to predict the next token in a sequence. That is literally what they do. Scale it to trillions of tokens across the internet, and the results are remarkable. But the architecture has a hard ceiling. It has no internal model of physical reality, no sense of gravity, no grasp of cause and effect beyond what language describes.
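A minimal illustration of that training objective, using a toy bigram counter over a made-up corpus. Everything here is hypothetical: real LLMs learn these statistics with neural networks over trillions of tokens, but the shape of the task is the same.

```python
from collections import Counter, defaultdict

# Made-up toy corpus; each word stands in for a token.
corpus = "the cat sat on the mat the cat ran".split()

# Count how often each token follows each preceding token (a bigram model).
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def predict_next(token):
    """Return the continuation most often seen after `token`, if any."""
    counts = transitions[token]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — seen twice after "the", vs "mat" once
```

The model only knows which strings tend to follow which other strings; nothing in it represents what a cat or a mat actually is.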

LeCun's phrase for this: LLMs "can't predict the consequences of their actions." 

His architecture, JEPA (Joint Embedding Predictive Architecture), doesn't predict the next word or the next pixel. It predicts the next state of an environment in an abstract representation space. The difference is between a system that has read everything about how the world works and a system that has actually modelled it.
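A rough sketch of that contrast, with made-up names, sizes, and a random stand-in encoder. Nothing below is the actual JEPA implementation; it only shows where the prediction is scored: in the abstract representation space, not against raw observations.

```python
import random

random.seed(0)

DIM_OBS, DIM_LATENT = 16, 4  # hypothetical observation and latent sizes

# Stand-in for a learned encoder: a fixed random linear map.
# In a real system the encoder and predictor are trained jointly.
encoder = [[random.gauss(0, 0.25) for _ in range(DIM_LATENT)]
           for _ in range(DIM_OBS)]

def encode(obs):
    """Project a raw observation into the abstract latent space."""
    return [sum(o * w for o, w in zip(obs, col)) for col in zip(*encoder)]

def predict_latent(z):
    """Trivial stand-in predictor: assume the latent state persists."""
    return list(z)

obs_now = [random.gauss(0, 1) for _ in range(DIM_OBS)]
obs_next = [o + 0.01 * random.gauss(0, 1) for o in obs_now]  # slight change

z_next = encode(obs_next)                  # the target lives in latent space
z_pred = predict_latent(encode(obs_now))   # so does the prediction

# The loss compares embeddings, never raw pixels or tokens.
latent_loss = sum((p - t) ** 2 for p, t in zip(z_pred, z_next)) / DIM_LATENT
print(f"latent loss: {latent_loss:.6f}")
```

The design choice doing the work is the loss: by scoring predictions in representation space, the system is pushed to model what changes about the world's state, not to reproduce every surface detail of the next frame.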

The practical consequence?

Every domain where AI has to operate in physical reality, like robotics, autonomous vehicles, surgical systems, and industrial automation, hits the same wall. Language models that are extraordinary at understanding and generating text become unreliable the moment the real world starts talking back.

This is what $2 billion just said out loud.

The Players and Their Bets

The race has a clear field.

LeCun's AMI Labs is a pure research bet. It's Paris-based, has no declared product timeline, and targets industrial control, robotics, healthcare, and wearables. The first commercial discussions are expected in 6-12 months. What he's building is the foundational architecture, not the application layer.

Fei-Fei Li's World Labs is further along. Marble, their first product, already generates persistent, editable 3D environments from text or image prompts, which can be exported into Unreal and Unity. Subscription pricing runs from $20 to $95 a month. The primary markets today are robotics simulation and game development.

Her thesis is sharper and more personal: "Spatial intelligence is the scaffolding upon which our cognition is built."

Google DeepMind launched Genie 3 in January, the first real-time interactive world model that generates navigable 3D environments at 24 fps and 720p. It's available now to AI Ultra subscribers in the US. The use case is agent training for AGI research.

NVIDIA Cosmos, trained on 20 million hours of real-world video, crossed 2 million downloads. Waymo, Figure AI, and Agility Robotics are already using it. Jensen Huang's framing: world foundation models are to physical AI what LLMs are to generative AI.

And Travis Kalanick, who spent 8 years in stealth, launched Atoms on March 13. It is a robotics company targeting industrial automation in mining and logistics, and it reportedly already has thousands of employees.

His framing: treat physical-world problems the way engineers treat software. Structured, computable, automatable.

Every one of these is a different approach to the same problem. The architecture debate is real, but the direction of travel is the same across all of them.

The Tension Nobody Is Resolving

Here is where the analysis gets genuinely interesting.

LeCun says LLMs will never reach genuine intelligence. Geoffrey Hinton, also a Turing Award winner and a founding figure of this field, says AI is already powerful enough to replace most knowledge work within the decade.

These positions sound contradictory. They aren't. They're answering different questions.

LLMs can be transformative for the economy right now, reshaping white-collar work, automating workflows, and compressing team sizes, yet be the wrong architecture for the next level of AI capability. Both things are true. The industry keeps trying to force a choice between them because a middle position doesn't make for good Twitter arguments.

What LeCun is actually saying is more precise and more interesting. The ceiling on LLMs is lower than the market believes, and when that ceiling becomes undeniable, the paradigm shift will be abrupt. He isn't disputing the economic power of LLMs today; he's arguing about where that power hits its limit.

The AMI Labs CEO, Alexandre LeBrun, said something telling to Bloomberg: 

"Generative architecture trained by self-supervised learning mimics intelligence; it doesn't genuinely understand the world." And then he immediately added: "My prediction is that 'world models' will be the next buzzword. In six months, every company will call itself a world model to raise funding."

A CEO launching a company with world models in its DNA, pre-emptively warning about world-model hype: that kind of self-awareness is worth noting.

What This Moment Actually Is

World models are not a product you will use next quarter. The honest timelines across all these companies place meaningful commercial deployment 3-5 years out for most applications.

What is happening right now is a paradigm shift in where the serious research money is going and who's making that call. The people who know these systems most intimately, who watched LLMs go from research curiosity to trillion-dollar industry, are placing their biggest personal bets on what comes after.

That is a signal.

The $2 billion raised in six weeks answers a question the current paradigm cannot: what does AI look like when it understands the world, not just the words we use to describe it?
