Think about what happens in your brain when you watch a movie.

Your eyes process the visuals, your ears process the sound, and your mind connects the two, understands the story, and feels the emotion. Billions of neurons firing across dozens of regions, all in real time, all invisibly, all unique to you.

For decades, scientists have studied this using fMRI scans. The problem was that each study took months. Each experiment needed new participants, and each finding was specific to one person, one task, one language.

Meta just changed that.

TRIBE v2 (Trimodal Brain Encoder) is a foundation model that predicts how the human brain responds to almost anything it sees, hears, or reads. Feed it a video, a podcast, or a sentence, and it tells you which parts of the brain activate, how strongly, and in what pattern.

Here's what makes it genuinely different:

  • Trained on 500+ hours of fMRI recordings from 700+ people watching movies, listening to podcasts, and reading text

  • 70x higher resolution than previous brain prediction models

  • Zero-shot capability. It can predict brain responses for people it has never scanned, in languages it has never been tested on, for tasks it has never seen before

  • Built on Meta's own models: Llama 3.2 for text, V-JEPA2 for video, Wav2Vec-BERT for audio - all working together

Basically, it's a digital twin of neural activity. A simulation of the human brain that you can run on a computer.
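The core idea behind that kind of trimodal encoder (fuse embeddings from separate text, video, and audio backbones, then map the fused representation to predicted brain activations) can be sketched in plain Python. Every name, dimension, and weight below is illustrative, not taken from the TRIBE codebase:

```python
import random

random.seed(0)

# Illustrative sizes; the real model's dimensions are not given in this summary.
TEXT_DIM, VIDEO_DIM, AUDIO_DIM = 8, 8, 8
N_VOXELS = 4  # number of brain regions/voxels to predict

def linear(vec, weights, bias):
    """Dense layer: `weights` holds one row of coefficients per output unit."""
    return [sum(w * x for w, x in zip(row, vec)) + b
            for row, b in zip(weights, bias)]

def predict_brain_response(text_emb, video_emb, audio_emb, weights, bias):
    """Concatenate the three modality embeddings, then project to voxel space."""
    fused = text_emb + video_emb + audio_emb  # simple concatenation fusion
    return linear(fused, weights, bias)

# Stand-in embeddings; in TRIBE these would come from Llama 3.2 (text),
# V-JEPA2 (video), and Wav2Vec-BERT (audio).
text_emb  = [random.gauss(0, 1) for _ in range(TEXT_DIM)]
video_emb = [random.gauss(0, 1) for _ in range(VIDEO_DIM)]
audio_emb = [random.gauss(0, 1) for _ in range(AUDIO_DIM)]

in_dim = TEXT_DIM + VIDEO_DIM + AUDIO_DIM
weights = [[random.gauss(0, 0.1) for _ in range(in_dim)] for _ in range(N_VOXELS)]
bias = [0.0] * N_VOXELS

activations = predict_brain_response(text_emb, video_emb, audio_emb, weights, bias)
print([round(a, 3) for a in activations])  # one predicted value per voxel
```

The fusion here is the simplest possible (concatenation plus one linear layer); the actual model almost certainly uses something richer, but the input-to-output shape of the problem is the same.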

Meta has open-sourced the entire thing - the model, codebase, research paper, and a live demo. Any researcher anywhere in the world can use it today.

This is Meta's Fundamental AI Research team, not the Instagram team. These are the people who spend years on problems that don't generate a single ad dollar because the science matters.

Most people will never hear about TRIBE v2. They'll hear about the next ChatGPT update or the next viral AI image tool. But quietly, in a research lab, Meta made it possible to run experiments on the human brain without a single human in the room.

That's not a product launch. That's a shift in what's possible.

OpenAI had the most chaotic week of self-correction in AI history.

In seven days, the company killed or paused three separate products: Sora, Instant Checkout, and Erotic mode. The third was paused indefinitely after internal staff reportedly called it a "sexy suicide coach."

All of it traces back to one thing: Anthropic is eating OpenAI's enterprise lunch. The response is a hard pivot: cut everything that isn't coding or business software, and focus.

Wikipedia just banned AI-written content. The vote was 40 to 2.

The world's largest free encyclopedia, built entirely by volunteers over 25 years, passed a policy this week: no AI-generated text in articles. Editors can use AI to fix grammar in their own writing, but the content itself must come from a human.

The reason is simple and worth sitting with: AI confidently writes things that aren't true. Wikipedia's entire value is accuracy. The two don't mix.

Meta rolled out AI-generated message drafts inside WhatsApp this week.

The app reads your conversation, understands the context, and suggests a full reply. You just tap send. Three billion people use WhatsApp every month. Most of them don't think of it as an AI product. After this week, it quietly became one.

The question nobody is asking yet: if AI writes your messages and AI reads them on the other end, what exactly is the human doing?

FAST BREAK

The US Senate introduced a bill requiring every AI data center to disclose how much electricity it uses. Why does this matter?

Because nobody actually knows. AI data centers are private. Their power consumption is reported voluntarily, if at all. One hyperscale data center can consume as much electricity as a small city, and right now, there's no law requiring them to tell anyone.

A single large AI training run already uses more electricity than 100 average Indian households consume in a year. The Senate is trying to understand what AI actually costs before the bill arrives and nobody can explain it.
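A back-of-envelope check of that comparison, with both inputs as loose, illustrative assumptions (a large training run on the order of 1 GWh; an average Indian household around 1,200 kWh per year):

```python
# Both figures below are rough assumptions for illustration, not measured data.
TRAINING_RUN_MWH = 1_000        # ~1 GWh for one large training run (assumption)
HOUSEHOLD_KWH_PER_YEAR = 1_200  # avg. Indian household annual use (assumption)

training_run_kwh = TRAINING_RUN_MWH * 1_000
households = training_run_kwh / HOUSEHOLD_KWH_PER_YEAR
print(f"~{households:.0f} household-years of electricity")
```

Under these assumptions the run covers several hundred household-years, so "more than 100 households" holds with plenty of margin; the exact multiple depends entirely on which training run and which household you pick.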

The most powerful technology in history is being built with no public accounting of what it consumes. That's about to change.
