Two years ago, Perplexity was the company everyone said would kill Google Search. Then, quietly, it disappeared from the conversation. OpenAI launched products, Anthropic launched Claude, and Google launched Gemini. The AI discourse moved on, and Perplexity became the answer to a question nobody was asking anymore.

Then, two weeks ago, at their first developer conference held inside a converted church in San Francisco, Perplexity announced two products simultaneously.

Perplexity Computer: a cloud-based orchestration engine that breaks your goals into subtasks and dispatches each to a different AI model - Claude Opus 4.6 for core reasoning, Gemini for deep research, Grok for lightweight tasks, and GPT-5.2 for long-context recall. These sub-agents run in parallel within isolated sandboxes and deliver a finished output. You describe an outcome, and the system figures out the steps.

Perplexity Personal Computer: the same engine, now anchored to a Mac mini in your home or office, running 24/7. It has persistent access to your local files, Gmail, Slack, GitHub, and Notion, and can be controlled from any device anywhere. The intelligence runs remotely. The device is why it feels personal. 

The architecture is worth understanding. 

Think of it like a law firm. When a case comes in, the managing partner doesn't handle everything personally. They read the brief and assign each piece to whoever is best for that specific job. 

  • That's the meta-router. It sits above all the models, reads each incoming task, and dispatches it to the model best suited to that task. No manual configuration required.

  • The sub-agents don't talk to each other through hidden channels. They leave notes in a shared folder, which means every handoff between agents is readable and traceable.

  • Each task runs in its own sealed environment (a Firecracker microVM with 2 vCPUs and 8GB RAM), completely separate from every other user's session. And for anything irreversible, like sending an email or deleting a file, the system cannot proceed without your explicit approval.
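The pattern those three bullets describe can be sketched in a few lines. This is a hypothetical illustration, not Perplexity's actual code: the routing table, task categories, and model names are assumptions made for the example, and the real system runs each subtask in its own microVM rather than a thread.

```python
# Hypothetical sketch of the meta-router pattern: a routing table maps
# each task category to a specialist model, irreversible actions are
# gated on explicit approval, and subtasks fan out in parallel.
from concurrent.futures import ThreadPoolExecutor

# Assumed routing table - which specialist handles which kind of task.
ROUTES = {
    "reasoning": "claude-opus",    # nuanced multi-step reasoning
    "research": "gemini",          # long research chains
    "lightweight": "grok",         # latency-sensitive tasks
    "long_context": "gpt-long",    # long-context recall
}

IRREVERSIBLE = {"send_email", "delete_file"}  # always need human approval

def dispatch(subtask):
    """Route one subtask to its specialist and simulate running it."""
    if subtask.get("action") in IRREVERSIBLE and not subtask.get("user_approved"):
        return {"status": "blocked", "reason": "needs explicit approval"}
    model = ROUTES.get(subtask["category"], "claude-opus")  # fallback default
    # The real system would execute this inside an isolated sandbox and
    # write the result to a shared folder; here we just return a record.
    return {"status": "done", "model": model, "goal": subtask["goal"]}

def orchestrate(subtasks):
    """Fan subtasks out in parallel and collect results in order."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(dispatch, subtasks))

results = orchestrate([
    {"category": "research", "goal": "survey competitors"},
    {"category": "reasoning", "goal": "draft strategy memo"},
    {"category": "lightweight", "goal": "clear inbox", "action": "send_email"},
])
for r in results:
    print(r)
```

Note that the third subtask is blocked rather than executed: the approval gate sits in the dispatcher itself, so no specialist ever sees an unapproved irreversible action.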

CEO Aravind Srinivas said, "A traditional operating system takes instructions. An AI operating system takes objectives."

This is the distance Perplexity has travelled.

The demand for this was already proven. 

Six weeks earlier, OpenClaw, an open-source agent that lets you message an AI like a coworker and have it work in the background, had accumulated 145,000 GitHub stars. Developers were buying spare Mac minis specifically to run it 24/7.

The problem:

Security researchers found thousands of OpenClaw setups accidentally exposed to the open internet. Default configurations listened on all network interfaces. Irreversible actions happened, and data was lost. It was powerful, the way early electrical wiring was powerful: extraordinary when properly installed, dangerous otherwise. Perplexity built the managed version of exactly that concept, for people who can't afford the accidents.
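The misconfiguration pattern behind those exposures is a familiar one, and a minimal sketch makes it concrete. This is illustrative only - not OpenClaw's actual code: a server that defaults to binding all network interfaces ("0.0.0.0") accepts connections from anywhere, while binding loopback ("127.0.0.1") keeps it reachable only from the same machine.

```python
# Sketch of the "listened on all interfaces" failure mode.
# A default of "0.0.0.0" is the dangerous configuration; "127.0.0.1"
# restricts the listener to local processes only.
import socket

def make_listener(host="0.0.0.0", port=0):
    """Open a listening TCP socket. port=0 lets the OS pick a free port.
    The default host is the unsafe one - exactly the trap users fell into."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()
    return srv

# Safe: only processes on this machine can connect.
local_only = make_listener(host="127.0.0.1")
print(local_only.getsockname())  # ('127.0.0.1', <ephemeral port>)
local_only.close()
```

An agent with file-system and email access behind a listener bound to 0.0.0.0, with no authentication, is effectively a public remote-control endpoint - which is what the researchers found, thousands of times over.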

Perplexity's own internal usage data tells a more structural story than the marketing suggests.

In January 2025, 90% of their enterprise queries were handled by two AI models. By December 2025, no single model accounted for more than 25% of usage. Most coverage reads this as proof that model-agnostic orchestration is working. It's actually evidence of something more fundamental: AI models are not commoditizing. They're becoming more distinct.

The assumption in 2024 was that frontier models would converge. As Claude, Gemini, and GPT all got smarter, the differences between them would narrow until they were essentially the same product at different price points.

What's actually happening is the opposite. 

Claude increasingly dominates nuanced multi-step reasoning. Gemini is strongest on long research chains, Grok optimizes for latency, and GPT-5.2 holds the longest context windows. As these differences sharpen, the value of knowing which model wins at which task compounds. The orchestration layer becomes harder to replace as the gap between specialists widens.

This is the architecture of Perplexity's bet. Models will specialize, and the conductor becomes structurally indispensable.

The risk is that Perplexity only controls the harness.

Claude belongs to Anthropic, Gemini to Google, Grok to xAI, and GPT-5.2 to OpenAI. Every piece of intelligence inside Perplexity Computer is owned by a company that is actively building its own orchestration layer. Each of them controls both the models and the harness. 

This is not a theoretical risk. In 2018, Google raised Maps API prices by 1,400% with 30 days' notice. Companies that had built their core product on the free tier suddenly faced annual API bills ranging from $100,000 to $250,000. Some migrated; many didn't survive the transition. 

The asymmetry was absolute: Google owned the infrastructure; the customers had already built their dependency. Perplexity's structural relationship with Anthropic, Google, OpenAI, and xAI isn't identical, but it rhymes closely enough to name directly.

There's another risk as well.

When Perplexity Computer routes your task, it dispatches subtasks to Anthropic's, Google's, OpenAI's, and xAI's APIs. Perplexity maintains SOC 2 Type II compliance and offers zero-data-retention options. But that's Perplexity's data policy. Each API call also carries that provider's own data terms separately. 

Enterprises in regulated sectors - banks, healthcare companies, law firms - aren't signing one enterprise data agreement. They're implicitly operating under four, simultaneously, every time the router makes a dispatch decision. For Indian enterprises specifically, routing through four US-based providers also raises DPDP-related questions that Perplexity's current documentation doesn't answer.

Whether this bet pays off is a race between two forces. 

On one side, the speed at which Perplexity builds switching costs deep enough that leaving becomes genuinely painful. 400+ enterprise connectors, custom workflows, organisational habits built around their system - the kind of institutional muscle memory that makes an enterprise relationship sticky regardless of what the underlying providers do.

On the other side, the speed at which those providers decide the orchestration layer is valuable enough to own themselves. The developer who built OpenClaw, the product that proved this entire category, now works at OpenAI. That detail is easy to miss. It shouldn't be.

The Personal Computer is how Perplexity accelerates its side of that race.

An always-on agent with persistent access to your files, history, and sessions creates continuity. And continuity, not capability, is what builds enterprise loyalty.

Srinivas chose a converted church for the announcement. The setting may have been aesthetic. But building a company's future on infrastructure you don't own, while racing to make yourself indispensable before the infrastructure owners take notice - that DOES require a specific kind of faith.

From trying to kill Google Search to trying to run your entire workday - that is the full arc of Perplexity's ambition, and of its comeback.
