In February, India's Prime Minister Modi stood on a stage in New Delhi and asked the most powerful people in AI to raise their hands together for a photo.

Google's CEO, Meta's AI chief and every leader in the room joined. Sam Altman and Dario Amodei, standing right next to each other, raised separate fists and didn't make eye contact.

Altman later said he was "just confused." The internet didn't buy it. Because if you know the history, you know that moment wasn't awkward but honest.

Here's the story:

Dario Amodei joined OpenAI in 2016. By 2020, he was VP of Research and the person most responsible for building GPT-2 and GPT-3, the models that made OpenAI matter. He had more technical credibility inside the lab than almost anyone.

But four years of small wounds had piled up:

  • Co-founder Greg Brockman floated a plan to raise money by selling AGI access to Russia and China. Dario called it borderline treason and nearly quit

  • Altman promised Dario that Brockman wouldn't have authority over him. Dario later found out Altman had privately given Brockman the power to fire him

  • Dario was left out of a meeting with Barack Obama. Altman and Brockman went instead

  • Dario asked for credit on the company's charter. Brockman went on a podcast to discuss it instead

Then, in 2020, it broke completely. Altman called Dario and his sister Daniela into a conference room and accused them of organising a campaign of negative board feedback against him. They denied it. Altman's source walked in and said she had no idea what Altman was talking about.

Dario and Daniela started shouting.

Altman went to Dario's house personally to ask him to stay. Dario had two conditions: report directly to the board, not to Altman, and never work with Brockman again. Altman said no.

Dario left. He took Daniela and eleven other OpenAI employees with him. Within a year, they had founded Anthropic.

But leaving didn't end it. It just moved the fight into public.

Today, both companies are valued at over $300 billion. Both are racing toward an IPO. And the personal feud has become a strategy war:

  • Anthropic ran Super Bowl ads with the words "betrayal," "deception," and "treachery" as a direct attack on OpenAI's plan to run ads in ChatGPT. Altman publicly called the ads "clearly dishonest"

  • Dario reportedly told Anthropic staff internally that the Altman-Musk lawsuit was like "Hitler vs. Stalin," two people he wanted nothing to do with

  • When the Pentagon came calling, Anthropic refused to sign without hard safeguards against autonomous weapons. OpenAI signed the same day. The Defence Secretary then threatened to declare Anthropic a national "supply chain risk"

  • Dario wrote a Slack message to all Anthropic employees calling OpenAI "mendacious" and Altman's statements "straight up lies"

Here's the irony that ties everything together:

Dario left OpenAI because he believed you could build AI fast and safely. That was the whole point of Anthropic. This week, leaked internal documents revealed that Anthropic's newest model, Claude Mythos, is described as "dramatically" ahead of anything else on the market.

It's also described as "very expensive to serve."

Anthropic's paying users on $100/month plans are already hitting rate limits within an hour. The most powerful model Dario has ever built is too expensive for most people to actually use.

The "responsible" path is turning out to be the harder business.

Whether that changes or whether Anthropic's bet eventually pays off is probably the most important open question in AI right now.

SoftBank took a $40 billion loan. It has to be repaid in 12 months.

This unsecured, short-term loan covers SoftBank's $30 billion commitment to OpenAI's recent $110 billion raise. Why 12 months? Because the lenders (JPMorgan, Goldman Sachs, and four Japanese banks) almost certainly expect OpenAI to go public this year. An IPO would hand SoftBank the liquidity to pay it all back.

OpenAI's IPO, if it happens, will be one of the largest stock market listings in history. SoftBank's total bet on OpenAI is now over $60 billion.

Stanford just showed that AI is making you worse at being wrong.

Researchers tested 11 AI models on thousands of real-life situations where humans had already concluded the person asking was in the wrong. AI validated bad behavior 49% more often than humans did. When asked about harmful or illegal actions, AI agreed with the user 47% of the time.

The lead researcher said: "AI advice does not tell people that they're wrong. I worry people will lose the skills to deal with difficult social situations."

JPMorgan is now tracking exactly how every employee uses AI.

The bank has rolled out monitoring tools that log which AI tools employees use, how often, and for what tasks. JPMorgan already has 200,000+ employees using its internal AI assistant LLM Suite. The tracking is about figuring out where AI actually creates value and where it doesn't.

But here's what it signals: AI adoption is no longer optional at big institutions. It's being measured. And what gets measured, gets managed.

FAST BREAK

Elon Musk's last remaining co-founder at xAI quietly left the company this week.

Igor Babuschkin had been with Musk since the very beginning, serving as one of the original architects of Grok and a member of xAI's core research team. He was the last person standing from the founding group Musk assembled after leaving OpenAI's board in 2018.

Now, he is gone.

xAI is currently valued at over $50 billion. It merged operations with X (formerly Twitter), and it's in the middle of building one of the world's largest AI data centres in Memphis.

The companies that change the world are rarely built by the people who started them. The founders leave, but the institutions remain. The question is always whether the mission does too.

Keep Reading