Model Context Protocol (MCP): Why It Matters (And When to Skip It)

2026-04-02 · Nico Brandt

Every AI tool announcement in 2026 drops the same acronym. Your Slack is full of “we should integrate MCP.” Your team lead sent a blog post about it. You still don’t know if the Model Context Protocol actually matters — or if it’s another standard that’ll be irrelevant by next year’s conference season.

Here’s the honest version: MCP solves a real problem, but not every problem. By the end of this, you’ll know exactly when it saves you engineering time and when a plain REST endpoint is the smarter call.

What Problem MCP Actually Solves

Every AI model has invented its own way to talk to external tools. OpenAI has function calling. LangChain has tool abstractions. Every agent framework rolls its own integration layer. You build a database connector for one client, then rewrite it for the next.

The Model Context Protocol is a standardized interface. Build one MCP server, and any compatible client — Claude, Cursor, VS Code Copilot, your custom agent — can discover and call your tools without custom integration code.

The analogy that actually holds up isn’t USB-C. It’s LSP. Before the Language Server Protocol, every editor needed its own language plugin. After LSP, you build one language server and every editor speaks to it. MCP does the same thing for AI tool integration: one server, many clients.

The concrete version: connecting your database tool to three AI clients used to mean three custom integrations. With MCP, it’s one server. The clients just connect. If you’ve built integrations for AI-powered development tools, you already know how painful the “rewrite for every client” loop gets.

That sounds useful in theory. But if you’ve been shipping software long enough, you know that “standardized protocol” can also mean “extra abstraction layer for no reason.”

MCP vs Traditional APIs: When Each One Wins

Let’s be direct: if you have one AI client calling one backend service, a REST API is simpler and you should use it. MCP earns its complexity when the integration landscape gets wider. For more on when to use REST vs other integration patterns, see building with AI APIs.

MCP wins when:

- Multiple AI clients will consume the same tools, and you don't want to maintain one integration per client.
- The AI needs to discover capabilities at runtime instead of being hard-coded against fixed endpoints.
- Your users live in AI-native surfaces — Claude, Cursor, VS Code Copilot — that already ship MCP support.

REST wins when:

- One known client calls one backend service, and the integration surface isn't going to grow.
- You want the mature HTTP ecosystem: caching, load balancers, API gateways, existing auth and monitoring.
- Non-AI consumers — web apps, mobile, other services — are the primary callers.

The decision heuristic: will more than one AI client use this? Will the AI need to discover capabilities at runtime? Yes to either → MCP. No to both → REST is fine.
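The heuristic is mechanical enough to write down. A throwaway sketch (the function name and flags are mine, purely illustrative):

```python
def choose_protocol(multiple_ai_clients: bool, runtime_discovery: bool) -> str:
    """Decision heuristic from the text: yes to either question -> MCP."""
    return "MCP" if (multiple_ai_clients or runtime_discovery) else "REST"

print(choose_protocol(multiple_ai_clients=True, runtime_discovery=False))   # MCP
print(choose_protocol(multiple_ai_clients=False, runtime_discovery=False))  # REST
```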

Now that you know when MCP fits, here’s what you actually need to build with it.

The Three Primitives: Tools, Resources, and Prompts

MCP has exactly three building blocks. You’ll use tools 80% of the time, resources 15%, and prompts 5%. Don’t overthink it.

Tools — The Workhorses

Tools are functions the AI can call. They’re the closest thing to API endpoints, but with built-in schema descriptions so the AI knows what each tool does, what parameters it takes, and when to use it — without you writing integration docs.

A query_database tool takes a SQL string and returns results. A create_issue tool takes a title and body and files a GitHub issue. The AI reads the tool’s description and decides when to invoke it. You define the shape once:

tool: query_database
  description: "Run a read-only SQL query against the analytics database"
  input: { query: string }
  output: { rows: array, row_count: number }

The description field matters more than you’d think. It’s how the AI decides whether to use the tool. Vague descriptions like “database stuff” lead to wrong tool selection. Specific descriptions like “run a read-only SQL query against the analytics database” lead to correct, predictable behavior.
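In wire terms, a tool definition is just a name, a description, and a JSON Schema for its inputs. A sketch of what a client might see for this tool when it lists the server's capabilities (the field layout follows the spec's tools/list shape; the exact schema contents here are illustrative):

```python
import json

# Illustrative tool definition, shaped like an MCP tools/list entry.
query_database_tool = {
    "name": "query_database",
    "description": "Run a read-only SQL query against the analytics database",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string"},
        },
        "required": ["query"],
    },
}

# The client reads this JSON to decide when and how to call the tool.
print(json.dumps(query_database_tool, indent=2))
```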

Resources — Context on Demand

Resources are read-only data the AI can pull in for context. Think files, configs, or live data feeds the AI reads to make better decisions — but can’t modify through the resource interface.

A project_readme resource exposes your repo’s README. A recent_errors resource serves the last 50 error log entries. A database_schema resource shows table definitions so the AI can write accurate queries for your tools.

Resources are what make the difference between an AI that guesses at your codebase and one that knows it. If you’ve ever wished your AI agent had more context about your specific project, resources are the mechanism.

Prompts — Templates You’ll Add Later

Prompts are reusable templates stored server-side. A code_review prompt includes your team’s checklist. A sql_query prompt includes few-shot examples for your specific schema. The AI client discovers and uses them instead of you copy-pasting context every session.

Most developers skip prompts at first. That’s fine. The practical starting pattern: build 2–3 tools that expose your most common operations. Add resources for context the AI needs to make good tool calls. Add prompts when you notice your team pasting the same instructions repeatedly.

Here’s the shape of a minimal MCP server — language-agnostic, just the structure:

server "analytics-mcp"
  tool "query_database"
    description: "Run read-only SQL against analytics DB"
    input_schema: { query: string, limit?: number }
    handler: → execute query, return rows

  resource "database_schema"
    description: "Current table definitions"
    handler: → return schema DDL

  prompt "write_query"
    description: "Few-shot examples for analytics queries"
    template: → system prompt with example queries

A tool, a resource for context, a prompt for standardization: extend the same pattern to the two or three tools you need, and that’s a production-ready MCP server for most use cases. The TypeScript and Python SDKs turn that pseudocode into running code in under 50 lines.
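Under the SDK decorators, a server is structurally just a registry of named handlers for the three primitives. A dependency-free Python sketch of that registry (this is not the real SDK API — the actual mcp package wires these into the protocol for you; all names here are illustrative):

```python
from typing import Any, Callable, Dict


class MiniServer:
    """Toy registry mirroring MCP's three primitives. Illustrative only."""

    def __init__(self, name: str):
        self.name = name
        self.tools: Dict[str, Callable[..., Any]] = {}
        self.resources: Dict[str, Callable[[], Any]] = {}
        self.prompts: Dict[str, Callable[[], str]] = {}


server = MiniServer("analytics-mcp")


def query_database(query: str, limit: int = 100) -> dict:
    # A real handler would execute read-only SQL; stubbed here.
    return {"rows": [], "row_count": 0}


server.tools["query_database"] = query_database
server.resources["database_schema"] = lambda: "CREATE TABLE events (...);"
server.prompts["write_query"] = lambda: "You write analytics SQL. Examples: ..."

# A client's tools/list call reduces to enumerating the registry.
print(sorted(server.tools), sorted(server.resources), sorted(server.prompts))
```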

That covers the building blocks. But knowing the primitives and surviving production are two different things.

Production Gotchas Nobody Mentions

Every tutorial shows you how to build an MCP server. None of them mention what breaks when real users connect.

Security is your problem. MCP servers run with whatever permissions you give them. A tool that can query your database can query all of it unless you scope access. The spec defines OAuth 2.1-based authorization for remote servers — use it. For local servers, run with least-privilege credentials. Always validate tool inputs server-side. The AI will sometimes send unexpected parameters, and your server shouldn’t trust them any more than you’d trust user input in a web application. Never expose write operations without authentication and authorization checks.
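One concrete version of “validate tool inputs server-side”: a read-only gate that rejects anything that isn’t a single SELECT-style statement before it ever reaches the database. A deliberately blunt sketch — a real deployment would use database-level read-only roles, not string checks alone:

```python
def assert_read_only(query: str) -> str:
    """Reject obviously unsafe SQL input before executing. Illustrative, not exhaustive."""
    stripped = query.strip().rstrip(";")
    if ";" in stripped:
        raise ValueError("multiple statements are not allowed")
    if not stripped.lower().startswith(("select", "with")):
        raise ValueError("only read-only queries are allowed")
    return stripped


print(assert_read_only("SELECT * FROM events LIMIT 10"))  # passes through
# assert_read_only("DROP TABLE events")  # raises ValueError
```

The real safety net is the credential: give the server a database role that physically cannot write, and treat checks like this as defense in depth.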

Transport matters. MCP supports stdio (for local tools like Cursor and Claude Desktop) and Streamable HTTP / SSE (for remote deployments). Stdio is fast — same-machine, no network overhead. Remote transports add latency. If your tool does heavy computation, make it async: return a job ID and let the AI poll for results. Don’t block the connection while crunching numbers.
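The async pattern in minimal form: the tool call returns a job ID immediately, and a second tool lets the AI poll for the result. An in-memory dict stands in for a real job queue here, and all names are mine:

```python
import threading
import time
import uuid

jobs: dict = {}


def start_heavy_job(params: dict) -> dict:
    """Tool handler: kick off work in the background, return a job ID immediately."""
    job_id = uuid.uuid4().hex
    jobs[job_id] = {"status": "running", "result": None}

    def work():
        time.sleep(0.1)  # stand-in for heavy computation
        jobs[job_id] = {"status": "done", "result": 42}

    threading.Thread(target=work, daemon=True).start()
    return {"job_id": job_id}


def get_job_status(job_id: str) -> dict:
    """Second tool: the AI polls this until status is 'done'."""
    return jobs.get(job_id, {"status": "unknown"})


job = start_heavy_job({})
while get_job_status(job["job_id"])["status"] == "running":
    time.sleep(0.05)
print(get_job_status(job["job_id"]))  # {'status': 'done', 'result': 42}
```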

Error handling is inconsistent across clients. Some AI clients retry on MCP errors. Some hallucinate a result. Some give up silently. Your server should return structured errors with clear messages — the AI will surface these to the user or adjust its approach. Assume the worst and make your errors descriptive.
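What “structured errors with clear messages” can look like at the handler level: catch, classify, and return a machine-readable error instead of letting the exception escape. The error shape below is illustrative, not the spec’s exact wire format:

```python
def call_tool_safely(handler, arguments: dict) -> dict:
    """Wrap a tool handler so every failure becomes a structured result."""
    try:
        return {"ok": True, "result": handler(**arguments)}
    except TypeError as exc:
        # The AI sent unexpected or missing parameters.
        return {"ok": False, "error": {"code": "invalid_params", "message": str(exc)}}
    except Exception as exc:
        return {"ok": False, "error": {"code": "tool_error", "message": str(exc)}}


def lookup_user(user_id: int) -> dict:
    # Hypothetical tool handler for illustration.
    if user_id < 0:
        raise ValueError("user_id must be non-negative")
    return {"user_id": user_id, "name": "example"}


print(call_tool_safely(lookup_user, {"user_id": 7}))
print(call_tool_safely(lookup_user, {"wrong_param": 1}))  # structured invalid_params error
```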

The “too many tools” trap. Giving an AI 50 tools degrades its ability to pick the right one. This is the same problem as building a UI with 50 buttons — more options, worse decisions. Keep your MCP server focused: 5–10 well-described tools beat 50 vague ones. If you genuinely have many capabilities, split them across multiple focused servers.

The Bottom Line

You wanted to know if MCP is worth your attention or just protocol hype. Here’s the answer: the Model Context Protocol is a genuinely useful standard if you’re building tools that multiple AI clients will use, or if runtime capability discovery matters for your workflow. It’s overkill if you have a single integration point and a REST call does the job.

The one-sentence decision: build an MCP server when you’d otherwise build the same integration twice.

The spec is clean, the TypeScript and Python SDKs are mature, and Claude Desktop, Cursor, and VS Code Copilot all support MCP clients today — you can test a server in minutes. No need to wait for the ecosystem. It’s already here.

MCP doesn’t change the world. It just means you stop rewriting the same tool integration for every new AI client that shows up. For a protocol, that’s exactly enough.