🤖 Ghostwritten by Claude · Curated by Tom Hundley
This article was written by Claude and curated for publication by Tom Hundley.
The buttons say Keep. The docs say Apply. Five agents are running. Which one wins? Here's what's actually happening.
If you've tried Cursor's multi-agent feature and found yourself staring at five competing implementations, wondering how to merge them together, you've discovered something important: Cursor's multi-agent isn't collaborative orchestration. It's competitive selection.
Understanding this distinction will save you hours of merge-conflict hell and unlock the actual power of parallel agents.
When most developers hear "multi-agent," they imagine something like Claude Code's approach: a manager agent that breaks down work, assigns subtasks to worker agents, then synthesizes results. One coherent workflow, many hands.
That's not what Cursor does. At least, not yet.
Cursor's multi-agent is Best-of-N execution. When you set Agent Count to 3x or 5x, Cursor runs the same prompt in parallel across isolated environments. Each agent operates in its own git worktree—a separate working directory attached to your repository. They can't see each other. They don't collaborate. They race.
Your job? Pick the winner.
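If you haven't used git worktrees directly, here's roughly what's happening at the git level. Cursor manages these directories and branch names itself, so the paths below are purely illustrative:

```bash
# One repository, several attached working directories:
# each agent gets its own worktree on its own branch.
git worktree add ../agent-1 -b cursor/agent-1
git worktree add ../agent-2 -b cursor/agent-2
git worktree add ../agent-3 -b cursor/agent-3

# All worktrees share the same object database, but edits in one
# are invisible to the others until something gets merged.
git worktree list
```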
According to Cursor's official documentation, having multiple models attempt the same problem and picking the best result significantly improves the final output, especially for harder tasks.
This is powerful, but it's not orchestration. It's an audition.
Cursor's UI has evolved rapidly, and the terminology has left a trail of confused developers in its wake. Here's the definitive breakdown:
The Accept buttons appear in the diff editor when you review changes. They work within the current workspace.
If you're reviewing an agent's worktree, accepting a change keeps it in that agent's isolated workspace. It doesn't merge anything into your main branch.
Cursor later introduced Keep as alternative wording for the same action.
The confusion comes from deletions. "Keep a deletion" sounds backwards—you're keeping the act of deleting, not the deleted code. Users report a 500ms mental pause every time they hit it, and community feedback has been vocal about this friction.
This is the critical action. Apply takes changes from an agent's worktree and merges them into your main working tree.
The workflow Cursor intends:
1. Run multiple agents on the same prompt.
2. Review each agent's changes inside its own worktree (Accept/Keep stays local to that workspace).
3. Pick the single best result.
4. Apply it to your main working tree.

Apply is the merge-to-main step. Everything else is workspace-local review.
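Cursor does this for you when you click Apply, but it helps to know what the move amounts to in plain git. Roughly (branch names are illustrative, and Cursor's actual mechanism may differ):

```bash
# From your main working tree: inspect the winner, then bring its changes in.
git diff main cursor/agent-2          # what did the winning agent actually change?
git merge --squash cursor/agent-2     # stage those changes onto your branch
git commit -m "Apply agent 2's implementation"
```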
The nightmare scenario: you run 5 agents on "refactor the authentication module." Each one works. Each one is different. You Accept all 5. Now what?
You've just created 5 parallel universes of your codebase. They share the same repository object database (git worktrees are efficient that way), but they have incompatible changes. Trying to merge them is like trying to combine 5 different answers to an essay question into one coherent response.
The simple rule: if you ran N agents on "solve problem X" and they all touched the same core files, pick one, Apply it, and discard the rest. Don't try to stitch competing implementations together.
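Discarding the losers is cheap at the git level. Cursor's UI normally cleans up for you, but if you ever find stray worktrees lying around, the manual version looks like this (paths and branch names are illustrative):

```bash
# Throw away the implementations you didn't pick.
git worktree remove --force ../agent-1        # delete the working directory
git worktree remove --force ../agent-3
git branch -D cursor/agent-1 cursor/agent-3   # delete the branches they lived on
git worktree prune                            # clear out any leftover metadata
```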
Cursor 2.2 introduced Multi-Agent Judging. After all parallel agents finish, Cursor evaluates each solution and recommends a winner with reasoning.
This is genuinely helpful. Instead of manually comparing 5 implementations, you get an AI-powered recommendation. But it's still selecting one winner from N candidates—not synthesizing all N into something greater than the sum of its parts.
Judging works best on harder tasks, where the differences between candidates are big enough to matter.
This is the closest you'll get to orchestrated multi-agent behavior in Cursor today: have one agent implement the change, then point other agents at the result to review and critique it, and fold the worthwhile feedback back into the builder's branch before you Apply it.
This gives you the benefit of multiple perspectives without the merge nightmare. One agent builds; others review.
Multi-agent shines when you're genuinely uncertain which approach is right: an unfamiliar refactor, competing architectural options, a problem with several plausible solutions.
Then crank the agent count to 3x or 5x, let the candidates race, and use Judging to help pick the one worth Applying.
Cursor doesn't natively assign different tasks to different agents. But the community has developed a workaround using .cursor/worktrees.json.
This lets you say "Agent 1: implement auth. Agent 2: implement logging. Agent 3: write tests." But it requires manual setup, and you're still applying results sequentially.
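If you'd rather not depend on the worktrees.json schema at all, you can get the same divide-and-conquer effect with plain git: create one worktree per task, point each agent at its own directory, and merge the results back one at a time. A sketch, with illustrative paths and branch names:

```bash
# One worktree per task, not per competing attempt.
git worktree add ../wt-auth    -b feat/auth      # agent 1: authentication
git worktree add ../wt-logging -b feat/logging   # agent 2: logging
git worktree add ../wt-tests   -b feat/tests     # agent 3: tests

# Merge the results back sequentially, resolving overlaps as you go.
git merge feat/auth
git merge feat/logging
git merge feat/tests
```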
Cursor 2.2's Plan Mode improvements move closer to this by letting you send selected to-dos to new agents—a step toward true manager-to-worker orchestration.
Claude Code feels orchestrated because it operates as manager + workers + synthesis in one thread. It can break a large task into subtasks, delegate them to worker subagents, and fold their results back into one coherent change.
This is fundamentally different from Cursor's "run N candidates and pick one" approach. Neither is wrong—they're different tools for different workflows.
| Aspect | Cursor Multi-Agent | Claude Code CLI |
|---|---|---|
| Model | Best-of-N selection | Orchestrated delegation |
| Isolation | Git worktrees (separate) | Shared context |
| Synthesis | Manual (pick winner) | Automatic (consolidation) |
| Best for | Uncertain approaches | Complex, coordinated tasks |
| Merge work | You handle it | Agent handles it |
Cursor's trajectory suggests they're moving toward true orchestration. Multi-Agent Judging is a step. Plan Mode with delegated to-dos is another step. The infrastructure (worktrees, isolated execution) is already there.
But today, if you want orchestrated multi-agent behavior, you either need to coordinate manually (the builder-plus-critics workflow above) or use a tool built for it (Claude Code).
Cursor multi-agent is Best-of-N, not orchestration. Agents race; they don't collaborate.
Accept/Keep is workspace-local; Apply is the merge. Don't Accept 5 implementations and expect magic.
Multi-Agent Judging helps pick winners. Let Cursor recommend, then Apply that one.
One builder, many critics. The best workflow uses one implementing agent and others for review.
Different tools, different strengths. Cursor excels at exploration; Claude Code excels at coordination.
Understanding these distinctions transforms multi-agent from a source of confusion into a genuine productivity multiplier. The key is working with the tool's design, not against it.
This article is a live example of the AI-enabled content workflow we build for clients.
| Stage | Who | What |
|---|---|---|
| Research | Claude Opus 4.5 | Analyzed current industry data, studies, and expert sources |
| Curation | Tom Hundley | Directed focus, validated relevance, ensured strategic alignment |
| Drafting | Claude Opus 4.5 | Synthesized research into structured narrative |
| Fact-Check | Human + AI | All statistics linked to original sources below |
| Editorial | Tom Hundley | Final review for accuracy, tone, and value |
The result: Research-backed content in a fraction of the time, with full transparency and human accountability.
We're an AI enablement company. It would be strange if we didn't use AI to create content. But more importantly, we believe the future of professional content isn't AI vs. Human—it's AI amplifying human expertise.
Every article we publish demonstrates the same workflow we help clients implement: AI handles the heavy lifting of research and drafting, humans provide direction, judgment, and accountability.
Want to build this capability for your team? Let's talk about AI enablement →