● Official tavily-ai 🔑 Needs your key

Tavily MCP

by tavily-ai · tavily-ai/tavily-mcp

Tavily MCP gives your agent web search, page extract, site map, and crawl — already formatted for LLMs so you don't waste tokens on scraped markup.

Tavily is a search API designed for AI agents: answers come back as clean text with sources, not 50 KB of HTML. The MCP server exposes four tools (search, extract, map, crawl) that you can compose into real research workflows. Requires a free API key from tavily.com. Works out of the box in Claude Desktop, Cursor, Windsurf, Claude Code — install via npx.


Install

Pick your client

~/Library/Application Support/Claude/claude_desktop_config.json  · Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "tavily-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "tavily-mcp@latest"
      ],
      "env": {
        "TAVILY_API_KEY": "tvly-..."
      }
    }
  }
}

Open Claude Desktop → Settings → Developer → Edit Config. Restart after saving.

~/.cursor/mcp.json · .cursor/mcp.json
{
  "mcpServers": {
    "tavily-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "tavily-mcp@latest"
      ],
      "env": {
        "TAVILY_API_KEY": "tvly-..."
      }
    }
  }
}

Cursor uses the same mcpServers schema as Claude Desktop. Project config wins over global.

VS Code → Cline → MCP Servers → Edit
{
  "mcpServers": {
    "tavily-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "tavily-mcp@latest"
      ],
      "env": {
        "TAVILY_API_KEY": "tvly-..."
      }
    }
  }
}

Click the MCP Servers icon in the Cline sidebar, then "Edit Configuration".

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "tavily-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "tavily-mcp@latest"
      ],
      "env": {
        "TAVILY_API_KEY": "tvly-..."
      }
    }
  }
}

Same shape as Claude Desktop. Restart Windsurf to pick up changes.

~/.continue/config.json
{
  "mcpServers": [
    {
      "name": "tavily-mcp",
      "command": "npx",
      "args": [
        "-y",
        "tavily-mcp@latest"
      ]
    }
  ]
}

Continue uses an array of server objects rather than a map.

~/.config/zed/settings.json
{
  "context_servers": {
    "tavily-mcp": {
      "command": {
        "path": "npx",
        "args": [
          "-y",
          "tavily-mcp@latest"
        ]
      }
    }
  }
}

Add to context_servers. Zed hot-reloads on save.

claude mcp add tavily-mcp -e TAVILY_API_KEY=tvly-... -- npx -y tavily-mcp@latest

One-liner; pass your key with -e. Verify with claude mcp list. Remove with claude mcp remove tavily-mcp.

Use Cases

Real-world ways to use Tavily MCP

Answer a question about something that happened after the model cutoff

👤 Anyone using Claude with questions that need fresh info · ⏱ ~5 min · beginner

When to use: You ask about a 2026 release, a recent CVE, a new pricing page, or today's market — the model doesn't know and must go look.

Prerequisites
  • Tavily API key — Sign up at tavily.com (free tier = 1,000 calls/mo)
  • Tavily MCP installed — Paste the config block above into your client's MCP settings
Flow
  1. Ask directly
    What did Anthropic ship in Claude Sonnet 4.7 this month? Use Tavily to find the announcement and summarize with sources.
    → Agent calls tavily_search, returns a summary with linked sources
  2. Drill in on one source
    The second source looks most authoritative — use tavily_extract to pull its full text and quote the exact line about context window.
    → Direct quote with URL + paragraph number

Outcome: Current, cited answer in one turn — no manual googling.

Pitfalls
  • Search returned SEO junk first — Add site hints: '... from anthropic.com or anthropic's official blog'
  • Summaries drift from sources — Require direct quotes — 'paraphrase but preserve numbers, dates, names exactly'
Combine with: filesystem · memory

Do a competitive product scan in one session

👤 PMs, founders, marketers · ⏱ ~30 min · intermediate

When to use: You need a one-page brief on every competitor for a given feature category by end of day.

Flow
  1. Discover competitors
    Using Tavily, find the top 8 products competing with us in 'AI-native CRM for SMB'. For each, return name, URL, one-liner, founding year.
    → Structured 8-row table with source links
  2. Map each site
    For each competitor, tavily_map their site to find their pricing and features pages. Return the URLs.
    → 2 URLs per competitor
  3. Extract pricing
    tavily_extract each pricing page and build a comparison grid: plan name, monthly price, top 3 differentiators.
    → Clean grid; cells cite the pricing page URL

Outcome: A sharable brief with sources — ready for a PMM slide in 30 minutes.

Pitfalls
  • Pricing is JS-rendered and extract misses it — Fall back to tavily_crawl with render=on, or hit the /pricing sitemap directly
Combine with: filesystem

Write a tutorial with live-verified links

👤 Technical writers, DevRel · ⏱ ~25 min · intermediate

When to use: You're publishing a how-to and every external link has to resolve to the right content today.

Flow
  1. Collect candidate refs
    Using tavily_search, find the top 5 canonical docs pages for 'OAuth 2.1 PKCE flow'. Prefer RFCs and vendor docs over blogs.
    → 5 URLs with a short rationale each
  2. Verify each one
    tavily_extract each URL. For each, confirm the page still covers PKCE and flag any that look redirected or stale.
    → Per-URL live verdict
  3. Embed in the draft
    Rewrite my draft tutorial to cite only the verified URLs, with anchor text that matches the page's actual heading.
    → Updated draft; every link text matches the real page heading

Outcome: Published tutorial with zero dead links and accurate anchor text.

Combine with: filesystem

Combinations

Pair with other MCPs for 10× leverage

tavily-mcp + filesystem

Search, extract to disk, then analyze locally without re-fetching

Search Tavily for recent OWASP top-10 sources, extract them, save to /research/owasp/, then compare the content offline.
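The same search → extract → save workflow can also be scripted directly, without an agent, against Tavily's Python SDK (tavily-python). This is a minimal sketch, assuming the SDK's TavilyClient.search/extract calls and their response fields as documented at time of writing; slugify and save_extracted are hypothetical helpers, not part of the SDK:

```python
import os
import re
from pathlib import Path


def slugify(url: str) -> str:
    """Turn a URL into a safe filename stem, e.g. 'https://owasp.org/Top10/' -> 'owasp-org-top10'."""
    bare = re.sub(r"^https?://", "", url.lower()).rstrip("/")
    return re.sub(r"[^a-z0-9]+", "-", bare).strip("-")


def save_extracted(client, query: str, out_dir: str = "research/owasp") -> list[Path]:
    """One search call, then one extract call for the hits; pages land on disk as .txt."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    hits = client.search(query, max_results=5)
    urls = [r["url"] for r in hits.get("results", [])]
    extracted = client.extract(urls=urls)
    paths = []
    for page in extracted.get("results", []):
        path = out / f"{slugify(page['url'])}.txt"
        path.write_text(page.get("raw_content", ""), encoding="utf-8")
        paths.append(path)
    return paths


if __name__ == "__main__" and os.environ.get("TAVILY_API_KEY"):
    from tavily import TavilyClient  # pip install tavily-python
    client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])
    save_extracted(client, "OWASP top 10 official")
```

Once the pages are on disk, follow-up comparison questions cost zero API calls.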
tavily-mcp + memory

Build a research journal that persists between sessions

For each Tavily search, save a one-line note and the URLs to memory under 'project:acme'. Next session, reuse.
tavily-mcp + context7

Tavily for web context + Context7 for library docs — don't confuse them

Use Context7 for docs questions; Tavily for news, blog posts, and anything not in library indexes.

Tools

What this MCP exposes

Tool · Inputs · When to call · Cost
tavily_search · query: str, max_results?: int, search_depth?: 'basic'|'advanced', include_domains?: str[] · Primary tool — one query, LLM-ready snippets with URLs · 1 API call
tavily_extract · urls: str[], extract_depth?: 'basic'|'advanced' · You already have a URL and want clean text — no HTML, no ads · 1 API call per URL
tavily_map · url: str, max_depth?: int, categories?: str[] · Discover a site's structure — useful before extract/crawl · 1 API call
tavily_crawl · url: str, max_depth?: int, limit?: int, instructions?: str · Broad ingest of a small site or doc section — expensive, prefer extract when you already know the URLs · Multiple API calls (one per page)
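The same parameters can be exercised outside the MCP when prototyping a query, via Tavily's Python SDK (tavily-python). A sketch, assuming the SDK's TavilyClient.search signature and response shape; format_hits is a hypothetical helper:

```python
import os


def format_hits(response: dict) -> str:
    """Render a tavily_search-style response as 'title -- url' lines."""
    return "\n".join(
        f"{r['title']} -- {r['url']}" for r in response.get("results", [])
    )


if __name__ == "__main__" and os.environ.get("TAVILY_API_KEY"):
    from tavily import TavilyClient  # pip install tavily-python
    client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])
    resp = client.search(
        "OAuth 2.1 PKCE flow",
        search_depth="advanced",  # costs more, better for niche topics
        max_results=5,
        include_domains=["datatracker.ietf.org"],  # bias toward authoritative sources
    )
    print(format_hits(resp))
```

If a query returns junk here, it will return junk through the MCP too — tune search_depth and include_domains before blaming the agent.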

Cost & Limits

What this costs to run

API quota
Free tier = 1,000 API calls/month; scaling plans from $30/mo
Tokens per call
Returns ~500–5,000 tokens of clean content — far less than the equivalent raw HTML
Monetary
Free tier covers individual daily use; heavy workflows need paid
Tip
Prefer tavily_search over tavily_crawl — search is one call, crawl is N. Only crawl when you truly need breadth.

Security

Permissions, secrets, blast radius

Credential storage: TAVILY_API_KEY in env var (set in the MCP config's env block)
Data egress: Queries and URLs you pass are sent to api.tavily.com. Don't paste proprietary info into the query string.

Troubleshooting

Common errors and fixes

401 Unauthorized

Double-check TAVILY_API_KEY in your MCP config. The env block lives inside the server config, not at top level.

Verify: Call any Tavily tool; if the error persists, rotate the key in tavily.com dashboard
Empty results despite a real query

Switch search_depth from 'basic' to 'advanced' for niche topics; add include_domains to bias toward authoritative sources

Verify: Repeat with search_depth: 'advanced'
tavily_extract returns paywalled gibberish

Tavily follows robots.txt and respects paywalls. For paywalled content, note it's unreachable — don't try to bypass.

429 Rate limit

Free tier = 60 RPM. Space out calls, or upgrade at tavily.com. The MCP server backs off and retries once automatically, then surfaces the error.

Verify: Check usage in tavily.com dashboard

Alternatives

Tavily MCP vs others

Alternative · When to use it instead · Tradeoff
exa-mcp-server · You want Exa's neural/embedding-style search and similarity · Different result quality on different query types; both are LLM-tuned
fetch · You have the URL and just want HTML → markdown, no search · No search; no structured extract; you handle the URL discovery
perplexity-ask · You want a research-level answer synthesized by Perplexity, not raw results · Higher latency, higher cost, less composable — it's one big tool
firecrawl-mcp-server · You need heavy crawling or JS rendering as a core workflow · Pricier; overkill for one-off lookups

More

Resources

📖 Read the official README on GitHub

🐙 Browse open issues

🔍 Browse all 400+ MCP servers and Skills