
MCPJam Inspector

by MCPJam · MCPJam/inspector

Postman for MCP — connect any server, list its tools, hand-call them, chat with it as an agent, and evaluate output across LLMs in one local UI.

MCPJam Inspector is a development platform for MCP. Spin up the local UI, point it at any stdio/SSE/streaming-HTTP MCP server, and you get tool listing, hand-call forms, a built-in chat that uses the server as agent tools, and an eval runner. Authoring or debugging a server? Use this before you ship.



Install

Pick your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "mcpjam-inspector": {
      "command": "npx",
      "args": [
        "-y",
        "@mcpjam/inspector"
      ]
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "mcpjam-inspector": {
      "command": "npx",
      "args": [
        "-y",
        "@mcpjam/inspector"
      ]
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project config wins over global.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "mcpjam-inspector": {
      "command": "npx",
      "args": [
        "-y",
        "@mcpjam/inspector"
      ]
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "mcpjam-inspector": {
      "command": "npx",
      "args": [
        "-y",
        "@mcpjam/inspector"
      ]
    }
  }
}

Same shape as Claude Desktop. Restart Windsurf to pick up changes.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "mcpjam-inspector",
      "command": "npx",
      "args": [
        "-y",
        "@mcpjam/inspector"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "mcpjam-inspector": {
      "command": {
        "path": "npx",
        "args": [
          "-y",
          "@mcpjam/inspector"
        ]
      }
    }
  }
}

Add to context_servers. Zed hot-reloads on save.

claude mcp add mcpjam-inspector -- npx -y @mcpjam/inspector

One-liner. Verify with claude mcp list. Remove with claude mcp remove mcpjam-inspector.

Use Cases

Real-world ways to use MCPJam Inspector

Debug why your MCP tool is being called wrong

👤 MCP server authors ⏱ ~20 min intermediate

When to use: You shipped a tool, and Claude keeps calling it with the wrong arguments.

Prerequisites
  • Inspector running — npx -y @mcpjam/inspector (opens browser at localhost:6274)
  • Your MCP server — Have it ready to launch via stdio command or SSE URL
Flow
  1. Connect the server
    In the inspector UI, add a stdio server: command=node, args=[./dist/server.js].
    → Tool list appears with descriptions + schemas
  2. Read the LLM's view
    Look at the rendered description in the tool detail panel — that's exactly what the model sees.
    → Spot the ambiguity (e.g. "id" should be "task_id", or a usage example is missing)
  3. Reproduce the misuse
    Open the Chat tab. Send the user prompt that caused the failure. Watch the tool_use payload.
    → Same wrong call you saw in production
  4. Fix description, retest
    Update the tool's description and example in your server code, restart, retry the same prompt.
    → Correct call this time

Outcome: Concrete fix backed by an evidence-driven before/after.

Pitfalls
  • Stale schema cached after server restart — Click 'Reconnect' in the server panel; the inspector re-fetches list_tools
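
In practice, the step-4 fix is a small edit to the tool definition. A hedged "after" sketch (complete_task and its fields are hypothetical; the inputSchema shape follows the MCP tool-definition format), renaming "id" to "task_id" and adding an inline example:

```json
{
  "name": "complete_task",
  "description": "Mark a task as done. Pass the task's task_id from list_tasks, e.g. task_id=\"T-42\".",
  "inputSchema": {
    "type": "object",
    "properties": {
      "task_id": {
        "type": "string",
        "description": "Stable task identifier, e.g. \"T-42\" (not the task title)"
      }
    },
    "required": ["task_id"]
  }
}
```

Re-running the step-3 prompt against this version is the before/after evidence the outcome describes.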

Evaluate how different models use your MCP server's tools

👤 Server authors targeting multiple clients ⏱ ~45 min advanced

When to use: You want to know if your server works as well with Sonnet as with Haiku/GPT-5.

Flow
  1. Build an eval set
    Author 10 representative user prompts in the Eval tab. Mix easy and adversarial.
    → Eval saved with prompts + expected tool sequences
  2. Run across models
    Run the eval against Sonnet 4.6, Haiku 4.5, and GPT-5. Compare tool-use traces.
    → Per-model trace; pass/fail per prompt
  3. Tighten weakest schema
    For the failing prompts, find the description change that fixes the cheaper model without breaking Sonnet.
    → Concrete description rewrite

Outcome: A server that works across the model lineup, not just the one you tested.

Pitfalls
  • Eval only tests happy path — Add adversarial prompts: missing args, contradictory inputs, partial info
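
The exact eval-set format is whatever the Eval tab saves; as a sketch of the content you'd author in step 1 (field names here are illustrative, not MCPJam's schema), note the second case is deliberately adversarial per the pitfall above:

```json
{
  "name": "task-server-evals",
  "cases": [
    {
      "prompt": "Mark the deploy task as done",
      "expected_tools": ["list_tasks", "complete_task"]
    },
    {
      "prompt": "Finish the task",
      "expected_tools": []
    }
  ]
}
```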

Explore an unfamiliar third-party MCP server safely

👤 Anyone evaluating a community MCP ⏱ ~15 min intermediate

When to use: You're considering adding someone's MCP to your config and want to see what tools it exposes first.

Flow
  1. Spin it up isolated
    Add the server in the inspector — don't put it in your real client config yet.
    → Tools listed with full descriptions
  2. Audit the surface
    Scan the tool list. Anything that writes/deletes/runs code? Anything that calls external URLs?
    → Risk-categorized tool list
  3. Hand-test risky tools
    Hand-call each write tool with a no-op payload to see what it actually does.
    → You confirm behavior before exposing to an autonomous agent

Outcome: Informed install/skip decision instead of blind trust.

Pitfalls
  • Tool shells out — even hand-call can mutate your system — Run in a container or a scratch dir; never inspect untrusted MCP on your daily driver
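
The step-2 audit can be mechanized. A minimal sketch, assuming you have exported the server's tool names and descriptions from the inspector; the keyword list is a crude heuristic for a first pass, not a real security scanner:

```python
# Flag tools whose names or descriptions suggest writes, deletes,
# code execution, or network egress. Purely keyword-based triage.
RISKY = ("write", "delete", "remove", "exec", "run", "shell", "fetch", "http")

def triage(tools):
    flagged = []
    for t in tools:
        text = (t["name"] + " " + t.get("description", "")).lower()
        if any(k in text for k in RISKY):
            flagged.append(t["name"])
    return flagged

tools = [
    {"name": "list_tasks", "description": "Read-only task listing"},
    {"name": "delete_task", "description": "Removes a task permanently"},
    {"name": "run_script", "description": "Executes arbitrary shell"},
]
print(triage(tools))  # → ['delete_task', 'run_script']
```

Anything flagged goes on the step-3 hand-test list; anything clean is still worth a skim, since a benign-sounding description can hide a shell-out.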

Combinations

Pair with other MCPs for 10x leverage

mcpjam-inspector + github

Profile your own MCP server in CI

On every PR, run the inspector eval against main and the PR head; comment the diff on the PR via the github MCP.

Tools

What this MCP exposes

Tool | Inputs | When to call | Cost
list_tools | server connection | Auto-runs on connect — seldom called manually | 0
call_tool | tool_name, args | Hand-call any tool with form inputs | depends on tool
chat | model, messages | Drive an LLM through your tools to see emergent behavior | depends on model API
run_eval | eval_set, models[] | Cross-model regression check before shipping | API calls × models × prompts
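
Under the hood, a hand-call via call_tool issues a standard MCP tools/call request to the connected server. Roughly (tool name and arguments are placeholders):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "complete_task",
    "arguments": { "task_id": "T-42" }
  }
}
```

The form inputs in the UI map one-to-one onto the arguments object, built from the tool's declared inputSchema.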

Cost & Limits

What this costs to run

API quota
Eval mode hits LLM provider quotas — bring your own keys
Tokens per call
0 for inspector itself; full agent cost for chat/eval
Monetary
Free (open source) — you only pay model API costs
Tip
Use Haiku for eval iteration; flip to Sonnet only for final cross-model check

Security

Permissions, secrets, blast radius

Minimum scopes: Local network only by default
Credential storage: API keys for eval models stored in browser localStorage by default — clear after use on shared machines
Data egress: Only to model providers you configure (Anthropic, OpenAI, etc.) and the MCP servers you connect
Never grant: Public-internet access — keep inspector on localhost

Troubleshooting

Common errors and fixes

Server fails to connect (stdio)

Check the command path is absolute and the working dir is set; check stderr in the inspector logs panel

Verify: Run the server command manually in a terminal first
SSE server hangs on connect

CORS or auth header issue — check the SSE endpoint accepts cross-origin requests from localhost:6274

Verify: curl -N <sse_url> with -H 'Accept: text/event-stream'
Eval runs but all models fail

Check API key validity in settings; check model names match provider's current naming

Inspector port already in use

PORT=6275 npx @mcpjam/inspector

Verify: lsof -i :6274

Alternatives

MCPJam Inspector vs others

Alternative | When to use it instead | Tradeoff
modelcontextprotocol/inspector (official) | You want the canonical reference inspector with the most conservative feature set | Lacks chat/eval modes; lower-level
wong2/mcp-cli | You prefer terminal over UI | No visual eval comparison or schema rendering

More

Resources

📖 Read the official README on GitHub

🐙 Browse open issues

🔍 Browse all 400+ MCP servers and Skills