Model Context Protocol (MCP) is an open standard introduced by Anthropic in November 2024 that lets AI agents connect to external tools and data through a uniform server and client interface. By April 2026 it is the default integration layer for Claude, Cursor, ChatGPT, and Gemini agents. If you build with any of them, you are already using MCP, whether you noticed or not.
I wrote my first MCP server in early 2025 to wire Claude Code into our internal content pipeline. By the end of that year we had eight of them running across AI Agency. So this is a builder's tour, not a press release. Read /blog/being-gen-ai for our broader thesis on the operator stack, then follow along here.
What is MCP, in one paragraph?
MCP is a small, open protocol that defines how an AI agent host talks to an external server that exposes tools, resources, and prompts. Anthropic published the spec on November 25, 2024 (per their announcement at anthropic.com/news/model-context-protocol). Servers expose capabilities. Clients (the agent runtimes) discover and call them. The wire format is JSON-RPC over stdio or HTTP. That is it. The protocol does not specify which model you use, which language you write in, or which vendor hosts the server. It is deliberately small. The whole specification fits in a single afternoon of reading. That smallness is why adoption was so fast: anyone could implement a client or a server in a weekend, and tens of thousands of developers did exactly that through 2025.
Why does MCP exist?
Before MCP, every AI integration was bespoke. If you wanted Claude to read your Notion, you wrote a custom Notion connector for Claude. If you wanted Cursor to do the same, you wrote a separate Notion connector for Cursor. ChatGPT plugins were a third silo. Each model vendor reinvented the same plumbing, badly, in slightly different shapes. The connector explosion was N times M, where N was every model and M was every tool. With ten models and a hundred tools, that's a thousand integrations to build and maintain. Nobody did it. So agents stayed mostly disconnected from real systems, and the "AI agent" pitch in 2023 and most of 2024 was largely vapor. MCP collapses that into N plus M. Each model implements MCP once. Each tool implements MCP once. Now any model can use any tool. Same pattern as USB replacing per-device cables, HTTP replacing per-network application protocols, and POSIX replacing per-vendor system calls. We have seen this movie before, and it always ends with the protocol winning.
How does MCP work technically?
An MCP server is a small process that exposes three primitives: tools (callable functions with JSON schema arguments), resources (read-only data the agent can fetch), and prompts (reusable templates the agent can invoke). The host application (Claude Desktop, Cursor, ChatGPT) launches one or more servers and treats them as a flat capability pool. When the user asks the agent to do something, the agent picks tools from that pool, calls them, and uses the results in its next turn.
The message format is JSON-RPC 2.0. Local servers run over stdio (the host spawns the process and pipes JSON over stdin/stdout). Remote servers run over HTTP with Server-Sent Events for streaming responses. Authentication for remote servers uses standard OAuth flows. Errors are typed. Streaming is built in.
Critically, the protocol is stateful per session. The host opens a connection, the server advertises its capabilities, and the host caches them for the session. When you install a new MCP server, the host restarts the connection and the new tools appear inside the agent on the next turn. No retraining, no fine-tuning, no model updates required.
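To make the wire format concrete, here is roughly what a single tool call looks like on the wire. The tool name, arguments, and result are hypothetical, and the messages are trimmed to the fields that matter; the full shapes are in the spec at modelcontextprotocol.io. The first object is the host's request, the second is the server's response.

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "search_posts",
    "arguments": { "query": "mcp launch" }
  }
}

{
  "jsonrpc": "2.0",
  "id": 7,
  "result": {
    "content": [
      { "type": "text", "text": "[{\"title\":\"Introducing MCP\",\"url\":\"https://example.com/mcp\"}]" }
    ]
  }
}
```

Discovery and the other primitives (tools/list, resources/read, prompts/get) follow the same request-and-response pattern.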
Which agents support MCP in 2026?
By April 2026 the list of supported clients includes:
- Claude (desktop, code, web, API tool use). MCP launched here. Native first-class support.
- Cursor (since v0.45, released early 2025 per the Cursor changelog). MCP servers configurable from the settings panel.
- ChatGPT (announced at OpenAI DevDay 2025). MCP replaces the older plugin system.
- Gemini (in agent mode, rolled out across 2025). Google contributed remote-server transport improvements.
- GitHub Copilot (agent mode). Microsoft added MCP after the Anthropic + OpenAI dual-vendor convergence in mid-2025.
- Open-source frameworks: LangChain, LlamaIndex, Continue, AutoGen, and the smaller ones too.
The ecosystem of servers is now into the thousands. The official directory at modelcontextprotocol.io lists hundreds of vetted servers. Real ones we use daily: Notion MCP, Linear MCP, Postgres MCP, Cloudflare MCP, Filesystem MCP, Brave Search MCP, GitHub MCP, Stripe MCP. Companies are now shipping MCP servers the way they used to ship REST APIs five years ago, and the gap between "we have an API" and "we have an MCP server" is now a meaningful product signal. Read /blog/category/ai-tools-reviews for our walkthroughs of the specific servers worth installing.
How do I write an MCP server?
The fastest path is to clone the official quickstart repo on GitHub, swap in your tool definitions, and run it locally. Here's the shape, in plain English.
You declare a server. You give it a name and a version. You register one or more tools, each with a name, a human-readable description, and a JSON schema for its arguments. You implement a handler for each tool that does the actual work (hits an API, queries a database, writes a file). You run the server. You add a single JSON entry to your Claude Desktop or Cursor config that points at the server binary. Restart the host. The tools show up inside the agent.
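Here is that shape as a minimal sketch in TypeScript, assuming the official @modelcontextprotocol/sdk package and its McpServer helper. The server name, the tool, and the searchDatabase stub are placeholders for your own logic, not our production code.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Stand-in for your own data access; replace with a real query.
async function searchDatabase(query: string): Promise<Array<{ title: string; url: string }>> {
  return [{ title: `Result for "${query}"`, url: "https://example.com/post" }];
}

// Declare the server: a name and a version.
const server = new McpServer({ name: "content-db", version: "0.1.0" });

// Register a tool: name, description, argument schema, handler.
server.tool(
  "search_posts",
  "Search published blog posts by keyword. Returns title and URL for each match.",
  { query: z.string().describe("Keyword or phrase to search for") },
  async ({ query }) => {
    const rows = await searchDatabase(query);
    return { content: [{ type: "text", text: JSON.stringify(rows) }] };
  }
);

// Run over stdio so a desktop host can spawn this process directly.
await server.connect(new StdioServerTransport());
```

The host-side entry is the single JSON entry mentioned above. In Claude Desktop it lives under the mcpServers key in claude_desktop_config.json; the name and path here are placeholders.

```json
{
  "mcpServers": {
    "content-db": {
      "command": "node",
      "args": ["/absolute/path/to/content-db/build/index.js"]
    }
  }
}
```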
I built our first internal MCP server, the one that lets Claude query our content database, in about 90 minutes. It was 60 lines of TypeScript. The second one took 30 minutes. The protocol is small enough that the second time you write a server, you stop reading the docs entirely.
The hardest part is not the protocol. The hardest part is writing good tool descriptions. The agent decides whether to use your tool based on the description string alone. Vague descriptions get ignored. Overly broad descriptions get over-called on the wrong inputs. Treat the description as the prompt for a small classifier, because that is functionally what it is. We rewrite tool descriptions more often than we rewrite the tool implementations.
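A hypothetical before-and-after makes the difference concrete; neither string is from a real server of ours.

```typescript
// Too vague: the agent has no signal for when this tool applies, so it ignores it.
const vague = "Works with our content.";

// Scoped: says what the tool does, what it returns, and what it must not be used for.
const scoped =
  "Search published blog posts by keyword. Returns title, URL, and publish date. " +
  "Read-only; do not use for drafts or scheduled posts.";
```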
What changes for operators?
Three concrete shifts. First, the cost of giving an AI agent a new capability dropped from "build a custom integration" to "install a server." Anything you can wire to a CLI or an API can become an agent capability in an afternoon. We have MCP servers for our calendar, our customer database, our content pipeline, our ad accounts, our internal docs, and our deploy system. The agent uses all of them in the same session.
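To show what "wire it to a CLI" means in practice, here is a hedged sketch that wraps a hypothetical deploy script as a tool. The script path, environments, and tool name are all placeholders, not our real deploy system.

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const run = promisify(execFile);
const server = new McpServer({ name: "deploy", version: "0.1.0" });

// The handler just shells out to the CLI the team already trusts.
server.tool(
  "deploy_site",
  "Deploy the marketing site to staging or production and return the deploy log.",
  { env: z.enum(["staging", "production"]) },
  async ({ env }) => {
    const { stdout } = await run("./scripts/deploy.sh", [env]);
    return { content: [{ type: "text", text: stdout }] };
  }
);

await server.connect(new StdioServerTransport());
```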
Second, the moat of a SaaS product is no longer just its UI. If your product ships a great MCP server, customers can compose your tool with their other tools inside whatever agent they prefer. If you do not ship one, you are invisible to the agent layer. Notion, Linear, and Stripe figured this out fast. Many incumbents have not.
Third, internal tooling becomes leverage in a way it never was. The custom MCP servers we run at AI Agency are not products we sell. They are how our team operates faster than teams that don't have them. Every workflow we automate behind an MCP server is a workflow our agents can chain into longer sequences without us touching it. That is where the compounding lives.
Why MCP and why now?
The honest answer is timing. By late 2024 the underlying models (Claude 3.5 Sonnet at the time, GPT-4o, Gemini 1.5) were finally strong enough that giving them tools made agents reliable instead of theatrical. The bottleneck stopped being model intelligence and became integration plumbing. MCP arrived exactly when the field was ready to be unblocked. That is the same shape as HTTP arriving when client-server computing was ready, or USB arriving when peripherals had outgrown PS/2 and serial. Standards win when the substrate is mature.
Anthropic did the smart thing by giving it away. They could have made it Claude-only and tried to lock developers in. Instead they published the spec, open-sourced the SDKs, and actively encouraged competitors to adopt. By the time OpenAI announced support at DevDay 2025, the network effect had already flipped. Refusing to ship MCP would have been the costly choice for any agent vendor.
The deeper reason it stuck is that MCP shipped with batteries included. The reference SDKs in TypeScript and Python are clean, documented, and maintained. The Claude Desktop config format is two lines of JSON. The error messages are decent. None of this sounds remarkable, but every previous attempt at an agent-tool standard (ChatGPT plugins, OpenAPI manifests, function-call schemas) shipped without one or more of those, and each one died from friction. MCP just had less friction than the alternatives at the moment everyone was looking for one.
What's next for MCP?
Four threads to watch through the rest of 2026. Remote-server authentication is still the roughest edge. OAuth flows work but are clunky inside agent UIs. Expect that to get cleaner across host vendors as the user-experience gap becomes obvious. Sampling, where a server can ask the host's model to do a sub-call, is in the spec but underused. When it lands at scale, MCP servers themselves become agentic, and the line between "tool" and "agent" gets blurry in a useful way.
The registry problem (how a host discovers trustworthy servers) is still open. Today you copy a JSON snippet from a README into your config file and trust the author. That works at hobbyist scale and breaks at enterprise scale. Expect either Anthropic, GitHub, or a neutral foundation to ship a curated registry with signing, version pinning, and reputation by Q4.
The fourth thread is permission scoping. Right now an installed MCP server gets full access to whatever its tools can do. The granular consent UI inside agent hosts is still primitive. Watch for per-tool approval, scoped tokens, and audit trails to land across hosts as MCP moves into regulated industries.
If you operate at the intersection of AI and a real business, MCP is the protocol layer of the stack you are building on. Treat it like you treated HTTP in 1998 or REST in 2010. Learn it once, build on it for a decade. The teams who internalize that early get to compose tooling other teams cannot, and that compounding is what the next phase of operator advantage looks like.
Join the operators building on MCP
MCP is the protocol layer of the AI-native operator stack. We use it daily at AI Agency to wire our agents into the systems that actually run our business: the database that holds our content pipeline, the calendar that holds our cohort schedules, the dashboards that hold our numbers. Each of those used to be a tab a human kept open. Now they are tools an agent picks up when it needs them.
If you take one thing from this post, take this: the protocols that win are the ones that turn an N times M problem into N plus M. MCP did that for AI agents. The teams that internalize the shift early get to compose tooling that the rest of the field is still building from scratch.
If you want to follow along with what's working in production, join AI Masterminds for the community of operators building on top of it. And read /blog/category/ai-trends for our running coverage of the agent layer.
FAQ
Is MCP open source?
Yes. MCP is published as an open specification at modelcontextprotocol.io with reference implementations in TypeScript and Python under the MIT license. Anthropic released it in November 2024 specifically as an open standard so that any model provider, agent IDE, or tool vendor could adopt it without licensing friction. The protocol itself, the SDKs, and the official server registry are all maintained on GitHub. Community contributions are merged regularly, and OpenAI, Google, and Microsoft have all submitted protocol-level proposals since adopting it in their own products through 2025.
Can I write my own MCP server?
Yes, and it is the fastest way to understand the protocol. A minimal MCP server is roughly 40 lines of TypeScript or Python. You import the SDK, declare the tools your server exposes (name, description, JSON schema for arguments), implement a handler for each tool, and run the server over stdio or HTTP. Once it is running, point Claude Desktop, Cursor, or any MCP client at the server config and your custom tools appear inside the agent. We have shipped internal MCP servers for our own database, calendar, and content pipeline at AI Agency in under an hour each.
Does MCP require Anthropic models?
No. The protocol is model-agnostic. Anthropic created it but designed it so that any LLM can use it, and by April 2026 ChatGPT, Gemini, open-source models running through LangChain or LlamaIndex, and IDEs like Cursor and Continue all speak MCP natively. The same MCP server you write for Claude works without modification when called by any of these clients. That is the whole point. The protocol sits between models and tools, not inside any single model provider.
What is the difference between MCP and a tool use API?
Tool use APIs (the function calling features inside Claude, ChatGPT, or Gemini) are model-side. They define how a single model decides to call a function and how the function returns a result back into the model's context. MCP is the wire format outside the model. It standardizes how the agent host (Claude Desktop, Cursor, ChatGPT) talks to an external server that exposes tools, resources, and prompts. Tool use is what the model does. MCP is how the host fetches tools to give the model in the first place.
Which AI agents support MCP today?
By April 2026, MCP is supported across Claude (desktop, code, API), Cursor (since v0.45 in 2025), ChatGPT (announced at OpenAI DevDay 2025), Gemini in agent mode, GitHub Copilot agent mode, and most major open-source frameworks including LangChain, LlamaIndex, and Continue. The official registry at modelcontextprotocol.io lists thousands of community servers. If you are building an AI product in 2026 and your tooling layer cannot speak MCP, you are effectively walled off from the rest of the agent ecosystem.
Sources
- Introducing the Model Context Protocol · Anthropic
- Model Context Protocol official documentation · modelcontextprotocol.io
- Cursor v0.45 changelog: MCP support · Cursor
- OpenAI DevDay 2025 announcements · OpenAI

