
Zen MCP Server

by BeehiveInnovations · BeehiveInnovations/zen-mcp-server

Multi-LLM orchestration MCP — let Claude consult Gemini, OpenAI o3, Grok, OpenRouter, or Ollama mid-task and merge their answers.

Zen MCP gives Claude chat, thinkdeep, codereview, debug, consensus, and precommit tools, each of which can route to a different model behind the scenes. It lets you use Claude as the orchestrator while delegating heavy thinking to Gemini Pro, code review to o3, or fast lookups to a local Ollama model. Built for multi-model workflows that a single provider can't cover.


Install

Pick your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "zen-mcp-server": {
      "command": "uvx",
      "args": [
        "zen-mcp-server"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.
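Zen only surfaces providers whose API keys it can see. A hedged sketch of passing keys through the standard mcpServers env field (key names are taken from this page; values are placeholders, and the exact set of supported keys is in the project README):

```json
{
  "mcpServers": {
    "zen-mcp-server": {
      "command": "uvx",
      "args": ["zen-mcp-server"],
      "env": {
        "GEMINI_API_KEY": "your-gemini-key",
        "OPENAI_API_KEY": "your-openai-key"
      }
    }
  }
}
```

The same env block works in the Cursor, Cline, and Windsurf configs below, since they share the mcpServers schema.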

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "zen-mcp-server": {
      "command": "uvx",
      "args": [
        "zen-mcp-server"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project config wins over global.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "zen-mcp-server": {
      "command": "uvx",
      "args": [
        "zen-mcp-server"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "zen-mcp-server": {
      "command": "uvx",
      "args": [
        "zen-mcp-server"
      ]
    }
  }
}

Same shape as Claude Desktop. Restart Windsurf to pick up changes.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "zen-mcp-server",
      "command": "uvx",
      "args": [
        "zen-mcp-server"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "zen-mcp-server": {
      "command": {
        "path": "uvx",
        "args": [
          "zen-mcp-server"
        ]
      }
    }
  }
}

Add to context_servers. Zed hot-reloads on save.

claude mcp add zen-mcp-server -- uvx zen-mcp-server

One-liner. Verify with claude mcp list. Remove with claude mcp remove zen-mcp-server.

Use Cases

Real-world ways to use Zen MCP Server

Get consensus from 3 different models on a hard architectural decision

👤 Tech leads making high-stakes calls · ⏱ ~20 min · intermediate

When to use: Claude alone gives you a single point of view; you want diversity.

Prerequisites
  • API keys for at least two alternate providers — set GEMINI_API_KEY, OPENAI_API_KEY, or OPENROUTER_API_KEY
Flow
  1. Frame the question
    Use zen.consensus. Question: should we move our queue from Redis to NATS JetStream? Stakes: 50M msgs/day, 99.99% uptime. Ask Gemini Pro and o3.
    → Both models respond, Claude summarizes diff between answers
  2. Drill into the disagreement
    Where do they disagree? Use zen.chat to push back on each model's weakest point.
    → Per-model rebuttal pass
  3. Synthesize
    Give me a final recommendation with the strongest argument from each side surfaced.
    → Merged decision doc

Outcome: An architectural call you can defend with cross-model evidence.

Pitfalls
  • Models hallucinate API quirks differently, producing false consensus — always verify factual claims against the actual docs or source
Combine with: context7 · git-mcp-idosal
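Under the hood, each step above is a standard MCP tools/call request. A minimal sketch of the consensus call, assuming the argument names from the Tools table (question, models[]) and illustrative model IDs — check the README for the exact model naming scheme:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "consensus",
    "arguments": {
      "question": "Should we move our queue from Redis to NATS JetStream at 50M msgs/day with 99.99% uptime?",
      "models": ["gemini-2.5-pro", "o3"]
    }
  }
}
```

In practice you rarely write this by hand — Claude issues it when you prompt as in step 1 — but it is useful to know what crosses the wire.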

Use thinkdeep with Gemini Pro to plan before Claude executes

👤 Devs who hit Claude's planning ceiling on huge tasks · ⏱ ~40 min · intermediate

When to use: Task is so big that even extended thinking loses the thread.

Flow
  1. Hand off the brief
    Use zen.thinkdeep with gemini-2.5-pro. Plan the migration from monolith to 3 microservices. Identify all sequencing risks.
    → Long plan from Gemini, returned as structured doc
  2. Critique the plan
    Now you (Claude) review that plan. What's missing? What's risky?
    → Gap analysis
  3. Execute step 1
    Implement the first phase of the plan in this repo.
    → Real edits

Outcome: Plans you trust because two models agreed on their shape.

Combine with: desktop-commander-mcp

Run zen.codereview from a different model than the one that wrote the code

👤 Solo devs who want adversarial review · ⏱ ~25 min · intermediate

When to use: You vibe-coded a feature with Claude and want a fresh pair of eyes from another model.

Flow
  1. Stage the diff
    Save the current branch's diff. Pass it to zen.codereview with o3.
    → Review comments organized by file
  2. Apply or reject
    Show me each suggestion. Apply only the ones I approve.
    → Per-suggestion accept/reject prompt

Outcome: Code reviewed by a model with different priors.

Combine with: git-mcp-idosal

Combinations

Pair with other MCPs for 10× leverage

zen-mcp-server + desktop-commander-mcp

Plan with Gemini, execute with Claude+desktop-commander

thinkdeep the migration → desktop-commander to apply step 1

zen-mcp-server + git-mcp-idosal

Pull repo via gitmcp, codereview via zen with o3

Codereview the PR diff from owner/repo using o3 — second opinion.

Tools

What this MCP exposes

Tool        Inputs                            When to call                      Cost
chat        prompt, model?, continuation_id?  General Q&A to an alt model       1 API call to chosen model
thinkdeep   problem, model?                   Planning hard problems            Heavy — uses thinking budget
codereview  files[], model?                   Fresh-eyes PR review              1 API call
debug       error, context, model?            Stuck on a bug Claude can't see   1 API call
consensus   question, models[]                High-stakes decisions             N API calls
precommit   diff, model?                      Final check before commit         1 API call
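Cross-model context is threaded through continuation_id: a tool result returns one, and passing it back into the next call keeps the conversation stitched together. A hedged sketch of a follow-up chat call using the argument names from the table above (the ID value is purely illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "chat",
    "arguments": {
      "prompt": "Push back on the weakest point in your previous answer.",
      "model": "o3",
      "continuation_id": "abc123"
    }
  }
}
```

Drop the continuation_id and the alt model starts cold, with none of the prior exchange.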

Cost & Limits

What this costs to run

API quota
Per-provider; you pay each one separately
Tokens per call
Variable by tool — thinkdeep can hit 30k+
Monetary
Free server; pay each LLM provider directly
Tip
Set DEFAULT_MODEL=ollama:llama3.1 for casual chat and escalate to paid models only when needed
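A minimal sketch of the cost-control env vars named on this page, set in the shell that launches your MCP client (variable names come from the Tip above and the Troubleshooting section; exact semantics and defaults are in the project README):

```shell
# Cheap local model for casual chat; escalate to paid models explicitly.
export DEFAULT_MODEL=ollama:llama3.1
# Cap thinkdeep's token spend (assumed knob from the Troubleshooting section).
export MAX_THINKING_TOKENS=8192
```

The same pairs can go in the mcpServers env block instead, if you prefer per-client configuration.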

Security

Permissions, secrets, blast radius

Minimum scopes: outbound:llm-providers
Credential storage: Provider keys in env vars
Data egress: Whichever provider(s) you configure

Troubleshooting

Common errors and fixes

Model not available

Check the env var for that provider; only configured providers show up

Verify: List zen tools — model defaults appear there

Continuation ID lost

Each tool returns a continuation_id; pass it back to keep cross-model context

Cost runaway on thinkdeep

Set MAX_THINKING_TOKENS in env; default can be high

Alternatives

Zen MCP Server vs others

Alternative                When to use it instead                              Tradeoff
context-mode-mcp           You want context shaping, not multi-model routing   Different problem
Direct OpenAI/Gemini MCPs  You only need one alt model, not orchestration      Less ceremony, but no consensus/thinkdeep workflows

More

Resources

📖 Read the official README on GitHub

🐙 Browse open issues

🔍 Browse all 400+ MCP servers and Skills