François Chollet is the man who built Keras, the deep learning library that trained half the AI researchers working today.

He left Google, started an AI lab, and on Tuesday released a test. It's called ARC-AGI-3. The idea is simple: give an AI a never-before-seen environment, a small interactive world with no instructions, no stated goals, and no explanation of the rules. The AI has to figure out what's happening, develop a strategy, and solve the problem. Just like a human would.

Reportedly, every human who tried it solved it. First attempt, no training.

The same models that pass bar exams, write legal briefs, debug enterprise codebases, and generate entire marketing campaigns cannot figure out a simple, unfamiliar environment that any child can navigate in minutes.

Why?

Because there's a difference between remembering and reasoning. AI is extraordinarily good at the first one. It has read essentially everything ever written. But ARC-AGI-3 is designed so that memory is useless. You can't memorize your way through a world you've never seen before. You have to think.

This is the exact gap Chollet has been pointing at since 2019. The industry ignored him because the models kept outperforming on every benchmark. The problem, he kept insisting, was that those benchmarks were testing memory, not intelligence.

Chollet said it directly:

"If you want to be among the first to know when an AGI breakthrough happens, monitor the ARC-AGI-3 leaderboard. Any sudden score jump will mean something important has changed."

That's an early warning system.

Right now, it's silent. The best non-LLM system in the preview scored 12.58%. The best frontier model: 0.37%.

The companies spending hundreds of billions of dollars on AI have built the world's most powerful memory machines. They just haven't built anything that can think.

Melania Trump appeared at the White House this week alongside a Figure AI robot named "Plato."

The robot gave a brief speech, then left. The event was part of a global summit on AI and children's education attended by 45 nations. Her stated vision: a humanoid robot that replaces the school teacher, personalized for every child.

The government isn't watching AI from a distance anymore. It's walking it down the red carpet.

Google built a compression algorithm that makes AI 6x cheaper to run. The internet immediately called it Pied Piper.

AI's biggest inference cost is the memory needed to serve models in real time. Google's TurboQuant compresses that memory by at least 6x with no reported loss in quality. Cloudflare's CEO called it Google's DeepSeek moment.
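To see why compressing memory translates directly into cheaper serving, here is a toy sketch of weight quantization in general. This is not Google's TurboQuant algorithm (whose details aren't described here); it's the standard idea of storing weights as small integers plus a scale factor. Going from float32 to int8 cuts weight memory roughly 4x; 4-bit schemes push toward 8x, the regime where "6x cheaper" claims live.

```python
import numpy as np

np.random.seed(0)

def quantize_int8(w: np.ndarray):
    """Symmetric per-row int8 quantization: w is approximated by q * scale."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Recover an approximate float32 matrix from the int8 codes."""
    return q.astype(np.float32) * scale

# A small stand-in for a weight matrix.
w = np.random.randn(4, 1024).astype(np.float32)
q, scale = quantize_int8(w)

# Memory: int8 codes plus per-row fp16 scales vs. the original float32.
ratio = w.nbytes / (q.nbytes + scale.astype(np.float16).nbytes)
err = np.abs(w - dequantize(q, scale)).max()
print(f"compression ~{ratio:.1f}x, max abs error {err:.4f}")
```

The per-row scale keeps the rounding error proportional to each row's magnitude, which is why quality degrades far less than the raw bit reduction would suggest.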

AI memory company stocks dropped 3–5% the same day.

Manus, the AI agent that broke the internet in February, is already falling apart.

Two months ago, Manus was the most waitlisted AI product in history. It could browse, code, book flights, and manage your calendar autonomously. Everyone called it the future.

As of now, the founding team is in conflict, key engineers have left, the product is breaking in production, and the hype has outrun the reality by months.

FAST BREAK

India is quietly becoming the engine room of global AI training, and almost nobody is talking about it.

Deccan AI raised $25 million to expand its India-based workforce that labels, annotates, and cleans the data that AI models learn from. Every major frontier model (GPT, Gemini, Claude) is trained on data that thousands of workers in India, Kenya, and the Philippines have manually reviewed and tagged.

The AI industry calls this "human feedback." But what is it actually? Millions of hours of low-wage human labour, quietly making the models smarter, invisibly, at scale.

The AI revolution runs on automation. The automation runs on humans. And those humans are largely invisible.

Keep Reading