MCP Is Winning the Agent Tool Protocol War

Model Context Protocol is becoming the USB-C of agent-tool integration — here's what that means for builders

11 min read
THE SIGNAL
  • Anthropic launched the Model Context Protocol (MCP) as an open specification in November 2024. Fifteen months later, the community registry lists over 2,000 MCP servers spanning databases, developer tools, SaaS APIs, and enterprise connectors.
  • OpenAI added native MCP support to the Agents SDK and ChatGPT desktop app in March 2025, marking the moment MCP crossed from Anthropic ecosystem tool to industry-wide default.
  • Google DeepMind confirmed MCP client support in Gemini and Vertex AI Agent Builder in late 2025, eliminating the last major holdout and cementing MCP as the only protocol with backing from all three frontier labs.
  • The MCP spec has evolved through three major revisions — adding Streamable HTTP transport, OAuth 2.1 authorization, and structured tool annotations — transforming it from a local-only developer protocol into a production-grade, remotely-deployable integration layer.
  • Agent frameworks including LangChain, CrewAI, AutoGen, and Vercel AI SDK now ship with MCP-first tool discovery, meaning any tool exposed as an MCP server is automatically available to agents built on these platforms without custom glue code.

What Happened

In November 2024, Anthropic open-sourced the Model Context Protocol — a JSON-RPC 2.0 based specification for connecting AI models to external tools and data sources. The pitch was simple: instead of every agent framework inventing its own tool-calling interface, MCP would provide a single, open standard. A universal adapter. The industry had seen this play before with container orchestration (Kubernetes), package management (npm), and hardware interfaces (USB). The question was whether any single protocol could win fast enough to prevent fragmentation.

It won fast. Within six months of launch, MCP had reference implementations in TypeScript and Python, a growing registry of community-built servers, and — critically — adoption from Anthropic’s direct competitors. OpenAI integrated MCP client support into the Agents SDK in March 2025. Google followed in late 2025. By early 2026, every major agent framework had standardized on MCP as the default tool integration layer. The protocol war that many anticipated between OpenAI’s function calling format, Google’s Vertex Extensions, and Anthropic’s MCP never materialized. MCP’s advantage was not technical superiority — it was that Anthropic gave it away as a fully open specification with an Apache 2.0 licensed SDK, while competitors had tied their tool interfaces to proprietary platforms.

The spec itself has matured considerably since launch. The initial release supported only stdio-based local communication between a host application and MCP servers running as child processes. By February 2026, the protocol supports three transport mechanisms (stdio, Server-Sent Events over HTTP, and the newer Streamable HTTP transport), includes OAuth 2.1 for remote server authentication, and has formalized tool annotations that let servers declare whether a tool is read-only or has side effects. This is no longer a toy for local developer workflows. It is production infrastructure.

INSIGHT

MCP’s adoption curve mirrors Kubernetes in 2016-2017. The technical details mattered less than the ecosystem dynamics: an open spec, backed by a major player willing to cede control, adopted by competitors who preferred a shared standard to building their own. The lesson is the same — in infrastructure protocol wars, open + good enough beats proprietary + technically superior every time.

The Protocol Architecture

Before diving into builder implications, it is worth understanding what MCP actually standardizes. The protocol defines three core primitives that an MCP server can expose to an AI agent:

Tools are the most widely used primitive. A tool is a function with a name, a JSON Schema description of its parameters, and optional annotations describing its behavior (read-only vs. destructive, idempotent vs. not). When an agent decides to use a tool, the MCP client sends a tools/call request to the server, which executes the function and returns the result. This is the mechanism that lets an agent query a database, create a GitHub issue, or send a Slack message.
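On the wire, that exchange is plain JSON-RPC 2.0. A minimal sketch of a tools/call round trip — the "query" tool name and SQL argument are hypothetical; the message shape follows the MCP spec:

```python
import json

# Client -> server: invoke a tool by name with schema-validated arguments
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query",
        "arguments": {"sql": "SELECT name FROM users LIMIT 5"},
    },
}

# Server -> client: result content blocks, plus an error flag
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": '[{"name": "Ada"}]'}],
        "isError": False,
    },
}

wire = json.dumps(request)
```

The `content` array is a list of typed blocks (text, images, embedded resources), which is how a single tool result can carry mixed media back to the model.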

Resources provide structured data access. Unlike tools, which are model-controlled (the agent decides when to invoke them), resources are application-controlled — the host application decides which resources to include in context. Think of resources as the protocol’s way of handling file contents, database schemas, API documentation, or any reference data the agent needs to reason about.

Prompts are reusable prompt templates that servers can expose. These are less commonly used than tools and resources, but they allow MCP servers to package domain-specific prompting strategies — for example, a database MCP server might expose a “query_optimization” prompt template that instructs the agent how to write efficient SQL for that specific database engine.

RISK

The MCP ecosystem currently has minimal server verification or supply chain security. Anyone can publish an MCP server, and many community servers have not been audited. A malicious MCP server has full access to whatever capabilities the host application grants it — file system reads, network requests, database writes. Treat MCP servers like npm packages: vet them before you install them, pin versions, and run untrusted servers in sandboxed environments. The spec now includes tool annotations for declaring side effects, but these are self-reported by the server and not enforced by the protocol.
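Because the hints are self-reported, a defensive host should treat their absence as "assume side effects." A sketch of a tool entry as it might appear in a tools/list result, with the annotation fields from the 2025-03-26 revision — the tool itself is hypothetical:

```python
# A tool entry with behavior annotations. These are declarations by the
# server, not guarantees enforced by the protocol.
tool_entry = {
    "name": "query",
    "description": "Execute a read-only SQL query",
    "inputSchema": {
        "type": "object",
        "properties": {"sql": {"type": "string"}},
        "required": ["sql"],
    },
    "annotations": {
        "readOnlyHint": True,      # claims it does not modify state
        "destructiveHint": False,  # claims no irreversible effects
        "idempotentHint": True,    # claims repeat calls have the same effect
        "openWorldHint": False,    # claims no interaction with external systems
    },
}

def requires_confirmation(tool: dict) -> bool:
    """Cautious host policy: anything not declared read-only needs approval."""
    hints = tool.get("annotations") or {}
    return not hints.get("readOnlyHint", False)
```

Gating tool invocations on `requires_confirmation` (or an equivalent policy) keeps a lying server from silently escalating from reads to writes.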

BUILDER BREAKDOWN

Building an MCP Server

The fastest path to production is the official TypeScript or Python SDK. An MCP server is a process that speaks JSON-RPC 2.0 over one of the supported transports and exposes tools, resources, or prompts (or any combination of the three).

TypeScript Example — A Database Query Server:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import Database from "better-sqlite3";

const server = new McpServer({
  name: "sqlite-query",
  version: "1.0.0",
});

const db = new Database("/path/to/your.db", { readonly: true });

// Expose a tool for running read-only SQL queries
server.tool(
  "query",
  "Execute a read-only SQL query against the database",
  {
    sql: z.string().describe("The SQL SELECT query to execute"),
  },
  async ({ sql }) => {
    // Safety: only allow SELECT statements
    if (!sql.trim().toUpperCase().startsWith("SELECT")) {
      return {
        content: [{ type: "text", text: "Error: Only SELECT queries allowed" }],
        isError: true,
      };
    }
    const rows = db.prepare(sql).all();
    return {
      content: [{ type: "text", text: JSON.stringify(rows, null, 2) }],
    };
  }
);

// Expose the database schema as a resource
server.resource(
  "schema",
  "db://schema",
  "The database schema for all tables",
  async () => {
    const tables = db
      .prepare("SELECT sql FROM sqlite_master WHERE type='table'")
      .all();
    return {
      contents: [{
        uri: "db://schema",
        mimeType: "text/plain",
        text: tables.map((t: any) => t.sql).join("\n\n"),
      }],
    };
  }
);

const transport = new StdioServerTransport();
await server.connect(transport);

Python Example — A Web API Connector:

import os

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("github-issues")

@mcp.tool()
async def list_issues(
    repo: str,
    state: str = "open",
    limit: int = 10,
) -> str:
    """List GitHub issues for a repository.

    Args:
        repo: Repository in owner/name format (e.g. 'anthropics/mcp')
        state: Filter by state — open, closed, or all
        limit: Maximum number of issues to return
    """
    async with httpx.AsyncClient() as client:
        resp = await client.get(
            f"https://api.github.com/repos/{repo}/issues",
            params={"state": state, "per_page": limit},
            headers={"Accept": "application/vnd.github.v3+json"},
        )
        resp.raise_for_status()
        issues = resp.json()

    lines = []
    for issue in issues:
        lines.append(f"#{issue['number']} [{issue['state']}] {issue['title']}")
    return "\n".join(lines) if lines else "No issues found."

@mcp.tool()
async def create_issue(
    repo: str,
    title: str,
    body: str = "",
) -> str:
    """Create a new GitHub issue.

    Args:
        repo: Repository in owner/name format
        title: Issue title
        body: Issue body in markdown
    """
    async with httpx.AsyncClient() as client:
        resp = await client.post(
            f"https://api.github.com/repos/{repo}/issues",
            json={"title": title, "body": body},
            headers={
                "Accept": "application/vnd.github.v3+json",
                "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            },
        )
        resp.raise_for_status()
        issue = resp.json()
    return f"Created issue #{issue['number']}: {issue['html_url']}"

if __name__ == "__main__":
    mcp.run(transport="stdio")

Transport Mechanisms

MCP supports three transports, and choosing the right one determines how your server deploys:

stdio is the original transport. The MCP client launches the server as a child process and communicates over standard input/output. Best for: local development tools, CLI integrations, desktop applications. Claude Desktop, Claude Code, and Cursor all use stdio for local MCP servers. Zero network configuration required.
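Wiring a stdio server into a host is a matter of configuration, not code. A minimal `claude_desktop_config.json` entry might look like this — the server name and path are placeholders matching the TypeScript example above:

```json
{
  "mcpServers": {
    "sqlite-query": {
      "command": "node",
      "args": ["/absolute/path/to/sqlite-query/build/index.js"]
    }
  }
}
```

The host launches the command as a child process and speaks JSON-RPC over its stdin/stdout; credentials can be passed via an `env` map in the same block rather than baked into the server.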

Streamable HTTP is the recommended transport for remote deployments (introduced in the 2025-03-26 spec revision). The client sends JSON-RPC requests via HTTP POST to a single endpoint. The server can respond with a single JSON response or upgrade to a Server-Sent Events stream for progress updates and multi-part results. This replaced the older SSE transport and is now the preferred production mechanism.

// Remote MCP server using Streamable HTTP transport (stateless mode)
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from
  "@modelcontextprotocol/sdk/server/streamableHttp.js";
import express from "express";

const app = express();
app.use(express.json());

const server = new McpServer({ name: "remote-api", version: "1.0.0" });

// ... register tools, resources, prompts ...

// sessionIdGenerator: undefined runs the transport stateless, so every
// POST is handled independently — convenient for horizontal scaling
const transport = new StreamableHTTPServerTransport({
  sessionIdGenerator: undefined,
});
await server.connect(transport);

app.post("/mcp", async (req, res) => {
  await transport.handleRequest(req, res, req.body);
});

app.listen(3001);

SSE (deprecated) was the original remote transport. It used one endpoint for client-to-server messages (POST) and another for server-to-client events (GET with SSE). Still supported for backward compatibility, but new servers should use Streamable HTTP.

Authentication for Remote Servers

The spec now includes an OAuth 2.1 authorization flow for remote MCP servers. When a client connects to a remote server that requires auth, the server responds with a 401 and an RFC 8414 authorization server metadata URL. The client then performs a standard OAuth flow — authorization code with PKCE — and attaches the resulting Bearer token to subsequent requests. This means your MCP server can integrate with existing identity providers (Auth0, Okta, Cognito) without inventing a custom auth scheme.
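The discovery and PKCE pieces of that flow are small enough to sketch. The well-known path below is the standard RFC 8414 location; the server URL is hypothetical, and a real client would fetch the metadata document to find the authorization and token endpoints:

```python
import base64
import hashlib
import secrets
from urllib.parse import urlsplit

def metadata_url(server_url: str) -> str:
    """RFC 8414 authorization server metadata location for a server's host."""
    parts = urlsplit(server_url)
    return f"{parts.scheme}://{parts.netloc}/.well-known/oauth-authorization-server"

def pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = pkce_pair()
# The client sends `challenge` with the authorization request and
# `verifier` with the token exchange; the Bearer token that comes back
# is attached to every subsequent MCP request.
```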

Key Tool Categories in the Ecosystem

The 2,000+ MCP servers in the ecosystem cluster into predictable categories:

  • Databases: PostgreSQL, MySQL, SQLite, MongoDB, Redis, Elasticsearch — each exposing query, schema inspection, and (optionally) write tools.
  • Developer tools: GitHub, GitLab, Jira, Linear, Sentry — issue management, code search, deployment status.
  • Web and search: Brave Search, Google Search, web scraping (Firecrawl, Browserbase), URL fetching.
  • Communication: Slack, Discord, email (via SMTP or Resend/SendGrid APIs), calendar management.
  • Cloud infrastructure: AWS, GCP, Cloudflare, Vercel, Kubernetes — resource management, log queries, deployment triggers.
  • Enterprise systems: Salesforce, HubSpot, Notion, Confluence, Google Workspace — CRM data, document access, knowledge bases.
  • File and storage: Local filesystem, S3, Google Drive, Dropbox — file read/write, search, and organization.
ECONOMIC LAYER

Platform Dynamics

MCP creates a two-sided marketplace dynamic. On one side, tool and API providers who ship MCP servers make their products instantly accessible to every AI agent. On the other side, agent frameworks that support MCP as a client get access to the entire tool ecosystem without bilateral integration work. This is the classic platform flywheel: more servers attract more clients, which attract more servers.

Winners:

  • Anthropic gains ecosystem influence without direct revenue. By controlling the spec (even as an open standard), Anthropic shapes the conventions that every agent builder follows. Claude’s tool-use capabilities were designed around MCP’s primitives, giving it a subtle home-field advantage in how tools are described and invoked.
  • SaaS companies that ship MCP servers early. If your product has an MCP server in the community registry, every Claude, ChatGPT, and Gemini user can connect to it by adding a single config block. This is a new distribution channel — tool discovery via agent, not via app store. Companies like Stripe, Sentry, and Cloudflare that published official MCP servers in 2025 are seeing organic agent-driven API usage they never marketed for.
  • Agent framework developers. LangChain, CrewAI, and similar frameworks become more valuable as the MCP ecosystem grows — they are the middleware layer that connects arbitrary agents to arbitrary tools.
  • Enterprise integration platforms. Companies building managed MCP server hosting and governance (server registries, access controls, audit logging) are filling a real gap. The protocol defines the wire format; it does not define the management plane.

Losers:

  • Proprietary tool-calling ecosystems. OpenAI’s custom GPT Actions, Google’s Vertex Extensions, and similar proprietary integration formats are being deprioritized by builders in favor of “build once with MCP, deploy everywhere.” Products that only support one vendor’s integration format are limiting their addressable market.
  • Traditional iPaaS platforms (Zapier, Make, Workato) face a strategic challenge. MCP allows agents to call APIs directly with structured, schema-aware tool interfaces — exactly the problem iPaaS platforms solve for human-in-the-loop workflows. The integration layer is moving from human-configured workflows to agent-discovered tools.
  • Tool providers that ignore MCP. The integration tax is real and growing. If a developer must write custom tool definitions to connect your API to an agent — instead of pointing at your MCP server — they will choose a competitor that publishes one. This is the same dynamic that punished SaaS companies without REST APIs in the 2010s.

The Open Question: Who Governs the Spec?

MCP is currently maintained by Anthropic with community input via GitHub. There is no independent standards body. This mirrors the early days of Docker (company-controlled spec) before the Open Container Initiative formed. As MCP adoption deepens and enterprise budgets depend on it, expect pressure for either a formal foundation or multi-vendor governance structure. The spec’s evolution — particularly around security, auth, and server verification — will increasingly have financial consequences for the ecosystem.

“MCP is not a tool-calling format. It is a network protocol for the agent economy — the TCP/IP layer that lets any agent talk to any tool. The companies that build MCP servers today are the ones that will have distribution when agents become the primary interface for software.”

The Integration Pattern Shift

The practical effect of MCP standardization is a shift in how agent-powered products are architected. Before MCP, building an agent that could interact with external systems required writing custom tool definitions for every integration — describing the function signature, parsing the response, handling auth, managing errors. Each agent framework had its own format for these definitions. Adding a new tool meant writing new code.

With MCP, the pattern inverts. Instead of the agent application defining tools, tools define themselves. An MCP server advertises its capabilities through the tools/list and resources/list endpoints. The agent client discovers available tools at runtime, reads their schemas, and invokes them through a standardized protocol. Adding a new integration to your agent means adding a line to your MCP server configuration, not writing new application code.
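The inversion is easy to see in miniature. The registry below is a stand-in for a real MCP client session — which would issue tools/list and tools/call over the wire — and the tool name is hypothetical; the point is that the agent learns what exists at runtime rather than compiling it in:

```python
# Toy model of MCP-style discovery: tools describe themselves, and the
# agent enumerates then invokes them through a uniform interface.
REGISTRY: dict[str, dict] = {}

def tool(name: str, description: str):
    def register(fn):
        REGISTRY[name] = {"description": description, "fn": fn}
        return fn
    return register

@tool("create_ticket", "File a ticket in the issue tracker")
def create_ticket(title: str) -> str:
    return f"created: {title}"

def list_tools() -> list[dict]:
    """Analogue of the tools/list request."""
    return [{"name": n, "description": t["description"]} for n, t in REGISTRY.items()]

def call_tool(name: str, arguments: dict):
    """Analogue of the tools/call request."""
    return REGISTRY[name]["fn"](**arguments)
```

Adding a new integration here means registering another entry — the calling code (`list_tools`, `call_tool`) never changes, which is exactly the decoupling the paragraph above describes.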

This is the architectural shift that matters. It decouples agent logic from tool implementation. Your agent does not need to know how to query PostgreSQL or create a Jira ticket — it needs to know how to speak MCP. The implementation details live in the MCP server, which can be maintained independently, versioned separately, and shared across every agent in your organization.

INSIGHT

The most underappreciated MCP feature is resource subscriptions. An MCP client can subscribe to resource changes and receive notifications when underlying data updates. This enables reactive agent architectures — agents that respond to database changes, file modifications, or API state transitions in real time, rather than polling. Most production implementations have not adopted this pattern yet, but it will define the next generation of agent workflows.
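The message shapes involved are simple — a subscribe request from the client, then server-pushed notifications (which carry no `id` because they expect no response). The `db://schema` URI matches the resource from the TypeScript example above; method names follow the MCP spec:

```python
import json

# Client -> server: watch a resource for changes
subscribe = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "resources/subscribe",
    "params": {"uri": "db://schema"},
}

# Server -> client, later: the underlying data changed
updated = {
    "jsonrpc": "2.0",
    "method": "notifications/resources/updated",
    "params": {"uri": "db://schema"},
}

def should_refresh(msg: dict, watched_uri: str) -> bool:
    """React to update notifications by re-reading, instead of polling."""
    return (
        msg.get("method") == "notifications/resources/updated"
        and msg.get("params", {}).get("uri") == watched_uri
    )
```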

WHAT I WOULD DO

Recommendations by Role

CTO: Make MCP the standard integration interface for your product’s API. If you offer a developer platform or SaaS product with an API, publish an official MCP server alongside your SDK. This is not optional for 2026 — it is table stakes. Internally, audit your agent-powered features and migrate any custom tool-calling implementations to MCP clients. The goal is a single integration protocol across your entire agent stack, reducing maintenance burden and enabling tool reuse across products.

Founder: If you are building in the agent tooling space, the MCP ecosystem is your distribution strategy. Ship an MCP server and get listed in the community registry. Every developer using Claude Code, Claude Desktop, Cursor, Windsurf, or any MCP-compatible agent framework becomes a potential user without you spending a dollar on marketing. If you are building agents (not tools), standardize on MCP-first tool discovery — it gives your agents access to the broadest set of integrations with the least engineering effort. Do not build proprietary tool interfaces. That path leads to integration maintenance debt that will slow your iteration speed.

Infra Lead: Deploy a centralized MCP gateway for your organization. Instead of every developer running local MCP servers with their own credentials, stand up shared remote MCP servers (using Streamable HTTP transport and OAuth 2.1) that enforce access controls, audit tool invocations, and manage secrets centrally. Treat MCP servers like microservices: containerize them, run them behind your service mesh, monitor their latency and error rates. The operational maturity of your MCP infrastructure will determine how reliably your agents perform in production. Start with your three highest-value integrations — likely your primary database, your issue tracker, and your internal knowledge base — and expand from there.

SOURCES & NOTES
  1. “Model Context Protocol Specification,” modelcontextprotocol.io/specification (2025-03-26 revision)
  2. “Introducing the Model Context Protocol,” Anthropic Blog, anthropic.com/news/model-context-protocol (November 2024)
  3. “OpenAI Agents SDK — MCP Support,” OpenAI Documentation, platform.openai.com/docs/agents (March 2025)
  4. MCP Community Servers Registry, github.com/modelcontextprotocol/servers (accessed February 2026)
  5. “MCP: The Protocol Powering Agent-Tool Integration,” Sequoia Capital AI Infrastructure Report, sequoiacap.com/article/mcp-agent-tool-protocol (January 2026)
  6. “Building Remote MCP Servers with OAuth 2.1,” Anthropic Developer Blog, docs.anthropic.com/en/docs/agents-and-tools/mcp (2025)
