Claude Code can connect to external tools through MCP (Model Context Protocol) servers — Sentry for production errors, JetBrains for IDE introspection, Context7 for library docs, Perplexity for web search. The problem: with six MCP servers available, Claude doesn't always pick the right one. It might grep through 20 files to find a Symfony route when JetBrains can return it in one call, or query Context7 for "latest PHP version" when only Perplexity has current data.

The fix is a global CLAUDE.md — a persistent instruction file that teaches Claude which tool to reach for based on the query type. This post walks through my setup, the decision tree I built, and how you can build one for your own stack.

The Problem: Too Many Tools, No Strategy

MCP servers give Claude Code superpowers: browser automation, error tracking, documentation lookup, IDE integration. But more tools means more choices, and without guidance Claude makes reasonable but suboptimal picks.

Common failure modes I observed:

  • Wrong tool for the job — querying Context7 for "latest Symfony version" (it only has docs, not release metadata) instead of Perplexity
  • Expensive path when a cheap one exists — using Grep to search for Symfony routes across multiple files instead of asking JetBrains MCP for a structured route list
  • Missing the specialist — not checking Sentry for a production error when the stack trace would immediately reveal the root cause
  • Redundant queries — trying multiple tools sequentially when the memory could route to the right one immediately

The solution is explicit routing rules stored in Claude Code's global CLAUDE.md. Claude reads this file at the start of every session, so the decision tree is always available.

My MCP Stack

Here are the six MCP servers I run and what each does best.

Context7 — Library and Framework Docs

Context7 serves versioned documentation with code examples. You give it a library name and a question, and it returns the relevant section from official docs.

Workflow: resolve-library-id (find the library) → query-docs (fetch docs for a specific question). It supports versioned lookups — I can query /sylius/sylius/v1.14.6 specifically, not just "latest."

Best for:

  • Method signatures and configuration examples
  • Migration guides between framework versions
  • How-to patterns from official docs (Symfony forms, Doctrine mappings, API Platform filters)

Fails for:

  • Current version numbers or release dates (it has docs, not metadata)
  • PHP language features (PHP itself isn't a library with versioned docs in Context7)
  • Security CVEs or advisories
  • Anything that requires real-time information

Perplexity — Current Facts and Research

Perplexity is an AI-powered web search tool with four modes: search (web results with citations), ask (AI-synthesized answers via sonar-pro), research (deep multi-source analysis, 30+ seconds), and reason (step-by-step logical reasoning).

Best for:

  • Latest version numbers, release dates, EOL schedules
  • Security advisories and CVE details
  • PHP language features and RFCs (property hooks, asymmetric visibility — these aren't in Context7)
  • External service pricing (API costs, hosting comparisons)
  • General programming best practices and benchmarks

Key role: Perplexity fills every gap Context7 has. When Context7 returns nothing or returns stale docs, Perplexity almost always has the answer.

JetBrains — IDE-Level Code Intelligence

This is the MCP server that saves the most tokens. JetBrains MCP connects Claude Code to your IDE's index — the same index that powers autocomplete, go-to-definition, and refactoring. The base JetBrains MCP provides generic tools (file search, symbol lookup, text search, terminal commands), and framework plugins extend it with specialized tools — the Symfony plugin adds route listing, service lookup, Doctrine entity inspection, and Twig analysis.

My typical workflow: I paste a Jira ticket link into the conversation. Claude reads the task via the Atlassian MCP, then uses JetBrains MCP to do a quick code reconnaissance — finding the relevant services, checking route definitions, inspecting entity fields — all before writing a single line of code. This "analyze first" step catches misunderstandings early and gives Claude the context to ask better clarifying questions.

Symfony-specific capabilities (via the Symfony plugin):

  • list_symfony_routes_controllers — all routes with controller, path, methods. One call instead of grepping through attributes across dozens of files
  • locate_symfony_service — find any service definition by its fully-qualified class name
  • list_doctrine_entity_fields — entity fields, types, and relationships in structured format
  • list_symfony_commands, list_symfony_forms — console commands and form types at a glance

Token savings example: Finding all routes matching /api/ in a Symfony project:

  • Without JetBrains: Grep for #[Route across src/Controller/, read each matching file, parse the route attributes, cross-reference with routes.yaml. Easily 5-10 tool calls and thousands of tokens of file content.
  • With JetBrains: One list_symfony_routes_controllers call returns a structured, filterable list. Done.
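To make the difference concrete, here's a sketch of what filtering that structured output looks like. The route dicts below are a hypothetical shape, not the plugin's exact schema:

```python
# Hypothetical structured output from list_symfony_routes_controllers;
# the real schema may differ, but the filtering idea is the same.
routes = [
    {"name": "api_products_list", "path": "/api/products", "methods": ["GET"]},
    {"name": "api_orders_create", "path": "/api/orders", "methods": ["POST"]},
    {"name": "app_homepage", "path": "/", "methods": ["GET"]},
]

# One structured call plus a trivial filter replaces 5-10 grep/read cycles.
api_routes = [r for r in routes if r["path"].startswith("/api/")]
```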

Also useful for: Indexed text search (search_in_files_by_text), file search by name, symbol lookup, running terminal commands in the IDE, build/test execution.

Chrome DevTools — Browser Automation

Chrome DevTools MCP lets Claude control a browser: navigate to URLs, click elements, fill forms, take screenshots, inspect network requests, run JavaScript, and execute Lighthouse audits.

Best for:

  • Testing UI changes visually after modifying templates or CSS
  • Running Lighthouse performance/accessibility audits
  • Debugging frontend issues (checking console errors, network requests)
  • Verifying responsive behavior at different viewport sizes

Sentry — Production Error Tracking

Sentry MCP connects Claude to your error tracking system. It can search issues, retrieve stack traces, analyze errors with Sentry's AI (Seer), and look up release and deployment information.

The workflow that makes this valuable:

  1. You notice an error (or a user reports one)
  2. Claude queries Sentry: "search for 500 errors in the last 24 hours"
  3. Sentry returns the stack trace, affected users count, first/last seen timestamps
  4. Claude reads the relevant source file, identifies the root cause, and proposes a fix
  5. The entire debug cycle happens without leaving the terminal

Best for:

  • Investigating production errors with full stack traces
  • Understanding error frequency and patterns (is it new? is it getting worse?)
  • Correlating errors with recent deployments

Atlassian/Jira — Task Management

Jira MCP provides full issue lifecycle management: create, read, edit, transition, comment, and search with JQL.

Best for:

  • Reading task specifications before starting work
  • Updating issue status as work progresses
  • Adding technical comments to issues for team visibility
  • JQL searches to find related issues or check what's in the current sprint
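For example, a sprint-scoped JQL query might look like this (the project key SHOP is a placeholder):

```
project = SHOP AND sprint in openSprints() AND status != Done ORDER BY priority DESC
```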

The Decision Tree

The core of the global CLAUDE.md is a routing table that maps query types to the best tool. Here's the actual content from mine:

## Search & Research — Tool Decision Tree

### When to use Context7
**Best for**: library/framework API docs with clean code examples
- Versioned library docs (Sylius, Doctrine, API Platform, Symfony, GitHub Actions)
- Official method signatures, configuration examples, how-to patterns
- Concise, authoritative answers directly from official source
- `resolve-library-id` first, then `query-docs`
- **Fails for**: current version/release info, PHP language features,
  security CVEs, pricing, general programming

### When to use Perplexity
**Best for**: anything current, factual, or not a library doc
- Latest versions, release dates, EOL schedules
- Security advisories, CVEs, vulnerability details
- PHP language features (property hooks, new syntax)
- External service pricing
- General programming best practices, benchmarks
- Supplement when Context7 fails or for real-world context

### When to use WebSearch
- Official blog posts / release announcements
- As last resort or to supplement

The key pattern: each section leads with "best for" (when to pick this tool) and ends with "fails for" (when to skip it). Claude uses both signals — positive routing and negative routing.
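The same two-signal logic can be sketched as code — the category names below are simplified from the sections above, not an exhaustive taxonomy:

```python
# Simplified sketch of the routing the CLAUDE.md encodes: positive routing
# ("best for") selects a tool, negative routing ("fails for") skips it.
CONTEXT7_BEST = {"versioned library docs", "framework how-to", "official code examples"}
CONTEXT7_FAILS = {"current versions", "php language features", "security cves", "pricing"}
PERPLEXITY_BEST = CONTEXT7_FAILS | {"best practices", "benchmarks"}

def pick_tool(query_type: str) -> str:
    q = query_type.lower()
    if q in CONTEXT7_BEST and q not in CONTEXT7_FAILS:
        return "Context7"
    if q in PERPLEXITY_BEST:
        return "Perplexity"
    return "WebSearch"  # last resort / supplement, per the rules above
```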

The Benchmark Table

I ran the same 10 query types through Context7, Perplexity, and WebSearch and rated the results. This table lives in the global CLAUDE.md so Claude can reference it when deciding:

| Query type | Context7 | Perplexity | WebSearch |
|---|---|---|---|
| Versioned library docs | ★★★★★ | ★★★★ | ★★★ |
| Current version/release info | — | ★★★★★ | ★★★★ |
| Code examples from official docs | ★★★★★ | ★★★★★ | ★★★★ |
| PHP language features | — | ★★★★★ | ★★★★ |
| Framework how-to (API Platform etc.) | ★★★★★ | ★★★★★ | ★★★★ |
| Security CVEs / advisories | — | ★★★★★ | ★★★★ |
| General programming (benchmarks etc.) | — | ★★★★ | ★★★★ |
| CI/DevOps workflows | ★★★★ | ★★★★ | ★★★★ |
| Release notes / new features | ★★★★ | ★★★ | ★★★★★ |
| External service pricing | — | ★★★★★ | ★★★★★ |

The pattern is clear: Context7 is excellent for versioned docs but scores zero on anything requiring current or real-world data. Perplexity covers nearly everything. WebSearch is the strongest for blog posts and release announcements.

Including this table in the global CLAUDE.md gives Claude a quantitative basis for tool selection, not just rules.

How Global Memory Works

Claude Code supports a global instruction file at ~/.claude/CLAUDE.md. This file is loaded at the start of every conversation, regardless of which project you're working in. It's the right place for tool routing rules because MCP servers are configured globally, not per-project.
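Setting it up is just creating the file — a minimal sketch (the seed heading is the one from the decision tree above):

```python
# Create the global memory file Claude Code loads at session start.
from pathlib import Path

memory = Path.home() / ".claude" / "CLAUDE.md"
memory.parent.mkdir(parents=True, exist_ok=True)
if not memory.exists():
    memory.write_text("## Search & Research — Tool Decision Tree\n")
print(memory)
```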

Compare this with project-level AGENTS.md:

| | AGENTS.md | ~/.claude/CLAUDE.md |
|---|---|---|
| Scope | One project | All projects |
| Content | Code conventions, architecture, commands | Tool routing, personal preferences |
| Example | "Use ddev exec for all PHP commands" | "Use Context7 for Symfony docs" |

Structuring the Global CLAUDE.md

The global CLAUDE.md works best when it's structured like a reference manual, not a narrative. Claude scans it at session start — clear headings and explicit rules make that scan effective.

Tips from my experience:

  1. Lead with the decision rule, not the description. "Best for: versioned library docs" is more useful than "Context7 is a documentation server that..."
  2. Include failure modes. "Fails for: current versions" prevents Claude from trying Context7 for queries it can't handle.
  3. Add a server inventory. List every MCP server with its tools — Claude can reference this when it needs a capability it hasn't used before.
  4. Use concrete examples. "Latest Symfony version → Perplexity" beats "use Perplexity for current data."
  5. Update when tools change. Added a new MCP server? Update the file. Removed one? Remove its entry. Stale routing rules are worse than no rules.
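For tip 3, the inventory entries can be one line each — a sketch drawn from the servers above, with tool lists abbreviated:

```markdown
## MCP Server Inventory
- **Context7** — resolve-library-id, query-docs (versioned library docs)
- **Perplexity** — search, ask, research, reason (current facts, research)
- **JetBrains** — file/symbol search, Symfony plugin tools (code intelligence)
- **Sentry** — issue search, stack traces, Seer analysis (production errors)
```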

Build Your Own

The specific MCP servers don't matter — the pattern does. Whether you use Cursor, Windsurf, or Claude Code, whether you write Python or Go, the principle is the same: teach your AI assistant which tool to reach for.

Step-by-step:

  1. List your MCP servers (or equivalent tool integrations) and what each one does
  2. Identify overlaps — where can two tools answer the same query? (Context7 and Perplexity both handle Symfony docs, but with different strengths)
  3. Benchmark — run the same 5-10 representative queries through each overlapping tool. Rate the results. This gives you data, not gut feeling.
  4. Write routing rules — for each tool, write "best for" and "fails for" sections with specific query types
  5. Include the benchmark — the table gives your AI a quantitative reference, not just instructions
  6. Iterate — the first version won't be perfect. When Claude picks the wrong tool, update the file. Over a few sessions, the routing gets tight.

My global CLAUDE.md started as a list of MCP servers with one-line descriptions. After a few weeks of observing where Claude made suboptimal choices, it evolved into the decision tree above. The benchmark table was the biggest single improvement — it turned vague "prefer X over Y" rules into concrete data Claude could act on.

The investment is small (an hour to set up, a few minutes per update) and the payoff compounds: fewer wasted tokens, faster answers, and less time correcting tool choices mid-conversation.