
MCP Turns One: Four Releases That Transformed How AI Agents Connect

Model Context Protocol celebrates its first anniversary with four major spec releases - from basic stdio servers to OAuth 2.1, tasks, and server-side agentic loops. Here's the technical evolution that made MCP the industry standard.

Your AI agent needs to read Slack messages, query your Postgres database, and update Jira tickets. That’s three different APIs, three auth flows, three ways to handle errors.

Unless you’re using MCP.

One year ago today, Anthropic released the Model Context Protocol - a standardized way to connect AI agents to data sources and tools. What started as a focused technical specification evolved through four major releases into the de facto protocol for agentic systems.

The numbers tell part of the story: the official MCP registry grew from zero to nearly 2,000 entries. PulseMCP now tracks 6,490+ community servers. The Discord community hit 2,900+ members with 100+ joining weekly. OpenAI adopted it. Microsoft built it into Windows 11.

But what actually changed in the protocol itself? Let’s trace the technical evolution across four spec releases that transformed MCP from a simple stdio-based system into a production-ready infrastructure layer.

First anniversary celebration


The Foundation: 2024-11-05 Release

On November 5, 2024, Anthropic published the initial MCP specification. The design was deliberately minimal - solve one problem well.

Core Architecture

MCP defined three roles in a client-server model inspired by Language Server Protocol:

  • Hosts: LLM applications that need external capabilities (Claude Desktop, ChatGPT, etc.)
  • Clients: Protocol connectors embedded within host applications
  • Servers: External services exposing resources, tools, and prompts

The message layer used JSON-RPC 2.0 with two critical requirements: stateful connections and capability negotiation during initialization.

Here’s what capability negotiation looked like:

{
  "jsonrpc": "2.0",
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {
      "resources": {},
      "tools": {},
      "prompts": {}
    }
  }
}

Clients and servers discover mutually-supported features during connection setup. If a server doesn’t support prompts, that capability simply doesn’t activate. This negotiation pattern became critical for backwards compatibility in later releases.
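The negotiation above boils down to a set intersection: only features both sides advertised become active. A minimal sketch (the `negotiate` helper is hypothetical, not part of any MCP SDK):

```python
# Hypothetical sketch of capability negotiation, assuming capabilities are
# plain dicts keyed by feature name, as in the initialize message above.

def negotiate(client_caps: dict, server_caps: dict) -> set:
    """Return the set of features both sides advertised."""
    return set(client_caps) & set(server_caps)

client = {"resources": {}, "tools": {}, "prompts": {}}
server = {"resources": {}, "tools": {}}  # this server doesn't support prompts

active = negotiate(client, server)
# "prompts" simply doesn't activate - the server never advertised it
assert "prompts" not in active
```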

The Three Server Primitives

The initial spec defined exactly three things MCP servers could expose:

Resources: Static or dynamic context for agents to consume. Think “contents of my Notion workspace” or “recent Slack messages in #engineering”. Resources are read-only data sources.

Tools: Functions agents can execute. “Create a Jira ticket”, “Run this SQL query”, “Send an email”. Tools have side effects.

Prompts: Pre-built message templates with variable substitution. A prompt might be “Analyze the latest PR in repository {repo_name}” with parameters the user fills in.
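A toy in-memory registry makes the three primitives concrete - read-only resources, side-effecting tools, and parameterized prompts. All names and shapes here are illustrative, not the official SDK API:

```python
# Toy sketch of the three server primitives; URIs, tool names, and prompt
# templates are all made up for illustration.

registry = {
    "resources": {  # read-only context
        "slack://engineering/recent": lambda: "latest #engineering messages",
    },
    "tools": {      # functions with side effects
        "create_jira_ticket": lambda title: f"created ticket: {title}",
    },
    "prompts": {    # templates with variable substitution
        "analyze_pr": "Analyze the latest PR in repository {repo_name}",
    },
}

resource = registry["resources"]["slack://engineering/recent"]()
result = registry["tools"]["create_jira_ticket"]("Fix login bug")
prompt = registry["prompts"]["analyze_pr"].format(repo_name="example/app")
```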

Transport: stdio Only

The initial release supported one transport: stdio (standard input/output). MCP servers ran as local processes, communicating through stdin/stdout pipes.

This worked great for desktop applications like Claude Desktop where you’re launching local servers:

{
  "mcpServers": {
    "postgres": {
      "command": "node",
      "args": ["/path/to/postgres-mcp-server/index.js"],
      "env": {
        "DATABASE_URL": "postgres://localhost/mydb"
      }
    }
  }
}

Simple, secure (no network exposure), and perfect for the initial use case - connecting Claude Desktop to local tools.
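Under stdio transport, the wire protocol is just newline-delimited JSON-RPC over stdin/stdout. A minimal request handler might look like this (the `ping` method is a hypothetical example, not from the spec):

```python
import json

# Sketch of a stdio-transport handler: one JSON-RPC message per line in,
# one response per line out. The "ping" method is hypothetical.

def handle_line(line: str) -> str:
    req = json.loads(line)
    if req["method"] == "ping":
        result = {"ok": True}
    else:
        result = {"error": "method not found"}
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# A real server would loop over sys.stdin and print responses, flushed:
#   for line in sys.stdin: print(handle_line(line), flush=True)
resp = handle_line('{"jsonrpc": "2.0", "id": 1, "method": "ping"}')
```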

But stdio doesn’t work for remote servers or multi-tenant scenarios. That limitation drove the first major evolution.

What Was Missing

The initial spec deliberately punted on several hard problems:

  • Authentication: No standardized auth mechanism beyond environment variables
  • Remote servers: stdio requires local processes
  • Long-running operations: No way to track task status for operations taking minutes or hours
  • Batching: Each request was independent, no optimization for bulk operations
  • Tool metadata: Limited ability to describe tool behavior (read-only vs destructive, etc.)

These gaps weren’t oversights - they were conscious decisions to ship a minimal, working protocol fast. The community would surface which problems mattered most in production.


First Evolution: 2025-03-26 Release

Four months after launch, the March 2025 spec addressed the most critical production gaps. This release fundamentally changed MCP’s capabilities.

Streamable HTTP Transport

The biggest change: replacing stdio-only architecture with Streamable HTTP transport.

Before: Servers ran as local processes. Clients launched them via command-line.

After: Servers could run anywhere - on your infrastructure, in the cloud, at the edge. Clients connect via HTTP.

The transport uses HTTP POST for client-to-server messages and optional Server-Sent Events (SSE) for server-to-client streaming:

// Client connects to a remote MCP server
const transport = new StreamableHttpTransport({
  url: "https://api.example.com/mcp",
  headers: {
    "Authorization": `Bearer ${token}`
  }
});

// Server can stream responses back
transport.onMessage((message) => {
  if (message.type === "progress") {
    console.log(`Progress: ${message.progress}%`);
  }
});

This unlocked multi-tenant scenarios (each user connects to their own server instances), enterprise deployments (MCP servers behind firewalls), and edge computing (Cloudflare Workers-based servers).

Backwards compatibility was maintained - stdio still works. Clients advertise which transports they support during capability negotiation.

OAuth 2.1 Authorization Framework

The March release introduced a comprehensive OAuth 2.1-based authorization system with mandatory PKCE (Proof Key for Code Exchange).

MCP servers were explicitly categorized as OAuth Resource Servers, enabling standardized discovery:

# Servers expose OAuth metadata at well-known endpoint
GET https://api.example.com/.well-known/oauth-protected-resource

{
  "resource": "https://api.example.com/mcp",
  "authorization_servers": ["https://auth.example.com"],
  "scopes_supported": ["mcp:read", "mcp:write"]
}

This was huge for production deployments. Instead of ad-hoc auth mechanisms, teams got:

  • Standardized token acquisition and refresh
  • Automatic discovery of authorization servers
  • Built-in support for enterprise SSO
  • Fine-grained permission scoping

The spec required PKCE for all clients to prevent authorization code interception attacks.

JSON-RPC Batching

The March release added support for batching multiple requests in a single roundtrip:

[
  {"jsonrpc": "2.0", "id": 1, "method": "resources/read", "params": {...}},
  {"jsonrpc": "2.0", "id": 2, "method": "resources/read", "params": {...}},
  {"jsonrpc": "2.0", "id": 3, "method": "tools/call", "params": {...}}
]

For high-latency connections or operations that needed to fetch multiple resources at once, batching reduced overhead significantly.

(Spoiler: this feature was removed in the June release after production usage showed adoption too limited to justify the implementation complexity.)
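While it lasted, server-side batch handling amounted to dispatching each request and preserving response order. A sketch with hypothetical handlers:

```python
# Sketch of batch dispatch: run each request in order, collect responses.
# The handler functions and their payloads are made up for illustration.

HANDLERS = {
    "resources/read": lambda p: {"contents": f"data for {p['uri']}"},
    "tools/call": lambda p: {"output": f"ran {p['name']}"},
}

def dispatch_batch(batch: list) -> list:
    return [
        {"jsonrpc": "2.0", "id": req["id"],
         "result": HANDLERS[req["method"]](req["params"])}
        for req in batch
    ]

responses = dispatch_batch([
    {"jsonrpc": "2.0", "id": 1, "method": "resources/read",
     "params": {"uri": "db://users"}},
    {"jsonrpc": "2.0", "id": 2, "method": "tools/call",
     "params": {"name": "refresh_cache"}},
])
```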

Tool Annotations

Servers could now describe tool characteristics with structured metadata:

{
  "name": "delete_database",
  "description": "Permanently deletes a database",
  "annotations": {
    "destructive": true,
    "requiresConfirmation": true
  }
}

Clients could use these annotations to warn users before destructive operations or automatically confirm read-only operations.
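Client-side gating on these annotations is straightforward: check the metadata, prompt only when needed. A sketch using the field names from the example above (the `confirm` callback stands in for real UI):

```python
# Sketch of annotation-based gating: destructive tools require explicit
# confirmation; everything else runs without a prompt.

def should_execute(tool: dict, confirm) -> bool:
    ann = tool.get("annotations", {})
    if ann.get("destructive") or ann.get("requiresConfirmation"):
        return confirm(tool["name"])
    return True  # read-only tools proceed automatically

delete_db = {"name": "delete_database",
             "annotations": {"destructive": True, "requiresConfirmation": True}}
list_users = {"name": "list_users", "annotations": {}}

assert should_execute(list_users, confirm=lambda name: False)
assert not should_execute(delete_db, confirm=lambda name: False)
```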

Audio Content Type

Text and images were supported from day one. March added audio data as a first-class content type, enabling voice-based agent interactions.

Progress Notifications

For long-running operations, servers could now send progress updates:

{
  "jsonrpc": "2.0",
  "method": "notifications/progress",
  "params": {
    "progressToken": "backup-task-123",
    "progress": 0.45,
    "message": "Backing up table users (45%)"
  }
}

This was a step toward proper task management, but still required clients to track operation state. The full task abstraction came later.

Why These Changes Mattered

The March release transformed MCP from “desktop app protocol” to “enterprise-ready infrastructure”.

Remote servers enabled SaaS companies to offer MCP endpoints. OAuth enabled proper multi-tenant auth. Streamable HTTP enabled scaling beyond single-process stdio limitations.

Production deployments became viable.


Maturing the Protocol: 2025-06-18 Release

Three months later, the June spec refined the protocol based on production learnings. This release was about polish and security hardening.

Structured Tool Output

The biggest addition: tools could now return structured content with explicit type definitions.

Before: Tools returned text/image content. Clients had to parse structure out of unstructured responses.

After: Tools declare output schemas and return structured data:

{
  "name": "get_user_profile",
  "description": "Fetch user profile data",
  "outputSchema": {
    "type": "object",
    "properties": {
      "id": {"type": "string"},
      "email": {"type": "string"},
      "plan": {"type": "string", "enum": ["free", "pro", "enterprise"]},
      "usage": {
        "type": "object",
        "properties": {
          "requests": {"type": "number"},
          "tokens": {"type": "number"}
        }
      }
    }
  }
}

When the tool executes, it returns structured JSON that matches the schema. LLMs can parse this reliably without hallucinating field names or types.

This was critical for building reliable agent workflows. When your agent chains multiple tools together, you need strong guarantees about output format.
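To see why schemas help, here's a minimal sketch of validating a tool result against a declared outputSchema. A real client would use a full JSON Schema validator; this only covers the subset used in the example above:

```python
# Minimal sketch of output-schema validation - illustrative only, covering
# just the object/string/number/enum subset shown above.

TYPES = {"object": dict, "string": str, "number": (int, float)}

def matches(schema: dict, value) -> bool:
    if not isinstance(value, TYPES[schema["type"]]):
        return False
    if schema["type"] == "object":
        return all(
            key in value and matches(sub, value[key])
            for key, sub in schema.get("properties", {}).items()
        )
    if "enum" in schema:
        return value in schema["enum"]
    return True

schema = {"type": "object", "properties": {
    "id": {"type": "string"},
    "plan": {"type": "string", "enum": ["free", "pro", "enterprise"]},
}}
assert matches(schema, {"id": "u1", "plan": "pro"})
assert not matches(schema, {"id": "u1", "plan": "trial"})  # not in the enum
```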

Enhanced OAuth Security

The June release tightened OAuth security significantly:

RFC 8707 Resource Indicators: MCP clients must specify which resource they’re requesting tokens for. This prevents malicious servers from obtaining access tokens intended for different services.

POST /oauth/token
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code
&code=abc123
&resource=https://api.example.com/mcp
&client_id=...

Protected Resource Metadata: MCP servers expose detailed OAuth metadata describing authorization requirements. Clients can programmatically discover auth configuration instead of requiring manual setup.

Auth0 published a comprehensive guide on these changes: MCP Spec Updates: All About Auth.

Elicitation: Server-Initiated User Input

Servers gained the ability to request additional information from users during operations:

{
  "jsonrpc": "2.0",
  "method": "client/elicit",
  "params": {
    "prompt": "Which environment should this deploy to?",
    "options": ["staging", "production"],
    "allowCustom": false
  }
}

The client shows this prompt to the user, collects the response, and sends it back to the server. This enables interactive workflows where servers need human input mid-operation.

For example, a deployment tool might elicit environment selection. An expense tracking tool might ask which category to assign a transaction to.
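On the client side, handling an elicitation request means rendering the prompt, collecting an answer, and enforcing the allowed options. A sketch using the field names from the example above (the `ask` callback stands in for real UI):

```python
# Sketch of a client-side elicitation handler; field names follow the
# example above, and the ask callback simulates user input.

def handle_elicit(params: dict, ask) -> dict:
    answer = ask(params["prompt"], params["options"])
    if not params.get("allowCustom") and answer not in params["options"]:
        raise ValueError(f"answer must be one of {params['options']}")
    return {"response": answer}

request = {"prompt": "Which environment should this deploy to?",
           "options": ["staging", "production"], "allowCustom": False}

# A real client would render a picker; here we simulate the user's choice.
reply = handle_elicit(request, ask=lambda prompt, options: "staging")
assert reply == {"response": "staging"}
```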

Resource Links

Tool call results could now include links to related resources:

{
  "content": [
    {
      "type": "text",
      "text": "Created PR #42 in repository example/app"
    }
  ],
  "resourceLinks": [
    {
      "uri": "github://example/app/pulls/42",
      "mimeType": "application/vnd.github.pull"
    }
  ]
}

Clients can use these links to automatically fetch related context or present clickable references to users.

Protocol Version Headers

When using HTTP transport, the negotiated protocol version must now be specified in subsequent requests:

POST /mcp
MCP-Protocol-Version: 2025-06-18
Content-Type: application/json

This enables servers to support multiple protocol versions simultaneously and handle version-specific behavior correctly.
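Server-side, that means dispatching on the header before handling the request. A sketch (the supported-version set is illustrative):

```python
# Sketch of version dispatch on the MCP-Protocol-Version header; the
# supported set here is illustrative.

SUPPORTED_VERSIONS = {"2024-11-05", "2025-03-26", "2025-06-18"}

def negotiate_version(headers: dict) -> str:
    version = headers.get("MCP-Protocol-Version")
    if version not in SUPPORTED_VERSIONS:
        raise ValueError(f"unsupported protocol version: {version}")
    return version

assert negotiate_version({"MCP-Protocol-Version": "2025-06-18"}) == "2025-06-18"
```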

What Got Removed: Batching

After production usage analysis, the spec team removed JSON-RPC batching support. The June changelog cited implementation complexity versus limited real-world usage.

This was a good signal - the maintainers were willing to remove features that didn’t justify their cost. The protocol stayed lean.

Security Posture Changed

The June release significantly hardened MCP’s security model. Between OAuth refinements, resource indicators, and mandatory PKCE, enterprise security teams could now evaluate MCP against compliance requirements.

Microsoft published detailed guidance on securing MCP in Windows 11 deployments. Cloudflare wrote about authentication patterns for MCP servers on Workers.

The protocol was ready for production at scale.

Technical evolution


The Anniversary Release: 2025-11-25

Five months later, the November 25 spec arrived on MCP’s first birthday. This release added the most requested features from production teams.

Tasks: Proper Long-Running Operation Support

The biggest addition was formal task support. Finally, MCP had a standardized way to handle operations that take minutes or hours.

The Problem: Your agent kicks off a database backup that takes 20 minutes. How does it check status? How does it retrieve results? What if the user wants to cancel?

The Solution: Task abstraction with explicit lifecycle management:

// Server creates a task for long-running work
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "task_id": "backup-db-20251125-001",
    "status": "working",
    "progress": 0.0,
    "cancellable": true,
    "created": "2025-11-25T10:30:00Z"
  }
}

// Client queries task status
{
  "jsonrpc": "2.0",
  "method": "tasks/get",
  "params": {
    "task_id": "backup-db-20251125-001"
  }
}

// Response shows progress
{
  "task_id": "backup-db-20251125-001",
  "status": "working",
  "progress": 0.67,
  "message": "Backing up table orders (67%)"
}

Tasks have five states: working, input_required, completed, failed, and cancelled.

The input_required state is particularly clever - it integrates with the elicitation capability from June. A task can pause mid-execution, request user input, and resume once provided.

Clients can poll task status, cancel operations, and retrieve results up to a server-defined retention period. This unlocks scenarios like:

  • Deploying applications (15+ minute operations)
  • Running data pipelines (hours-long batch jobs)
  • Processing large file uploads (progress tracking essential)
  • Multi-step workflows with user confirmation gates
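Client-side, the task lifecycle reduces to polling until a terminal state. A sketch against the tasks/get shape shown above (the `fake_server` generator stands in for real status responses):

```python
import itertools

# Sketch of client-side task polling; fake_server simulates a server that
# reports "working" once, then "completed".

TERMINAL = {"completed", "failed", "cancelled"}

def poll_until_done(get_status, on_progress, max_polls=100):
    for _ in range(max_polls):
        task = get_status()
        on_progress(task)
        if task["status"] in TERMINAL:
            return task
    raise TimeoutError("task did not finish")

fake_server = itertools.chain(
    [{"task_id": "backup-db-20251125-001", "status": "working", "progress": 0.67}],
    itertools.repeat({"task_id": "backup-db-20251125-001",
                      "status": "completed", "progress": 1.0}),
)

seen = []
final = poll_until_done(lambda: next(fake_server), seen.append)
assert final["status"] == "completed"
```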

Sampling with Tools: Server-Side Agentic Loops

This feature fundamentally changes how complex workflows work in MCP.

Before: Clients orchestrate everything. To “analyze codebase and create security report”, the client must:

  1. Call LLM to decide which files to analyze
  2. Call filesystem MCP server to read files
  3. Call LLM to identify security issues
  4. Call Jira MCP server to create tickets
  5. Repeat for every issue found

The client sees every intermediate step and orchestrates every LLM call.

After: Servers can run their own agentic loops using the sampling capability:

from mcp import Server

# Illustrative sketch - the exact SDK surface may differ
server = Server("security-analyzer")

@server.sampling_handler
async def analyze_security(request):
    # Server makes LLM calls with tool access
    files = await server.sample_llm(
        messages=[{
            "role": "user",
            "content": "Find all API endpoints in this codebase"
        }],
        tools=["filesystem_read", "ast_parser"]
    )

    # Server orchestrates multi-step analysis
    issues = await server.sample_llm(
        messages=[{
            "role": "user",
            "content": f"Analyze these endpoints for security issues: {files}"
        }],
        tools=["security_scanner", "dependency_checker"]
    )

    # Server creates tickets for each issue
    tickets = []
    for issue in issues:
        ticket = await server.sample_llm(
            messages=[{
                "role": "user",
                "content": f"Create Jira ticket for: {issue}"
            }],
            tools=["jira_create_issue"]
        )
        tickets.append(ticket)

    return {
        "issues_found": len(issues),
        "tickets_created": tickets
    }

The client just sees the final result - “analyzed codebase, found 7 issues, created 7 tickets.” All intermediate LLM calls and tool orchestration happen server-side.

This enables:

  • Complex domain logic: Servers encode multi-step expertise (security analysis, code review, data pipeline orchestration)
  • Reduced latency: Server-side loops avoid multiple client-server roundtrips
  • Better encapsulation: Clients don’t need to understand domain-specific workflows
  • Cost optimization: Servers can use different models for different steps (cheap models for routing, expensive models for final analysis)

Authorization Enhancements

Three Specification Enhancement Proposals (SEPs) shipped authorization improvements:

SEP-991: URL-Based Client Registration

MCP clients can register themselves using OAuth Client ID Metadata Documents:

GET https://myclient.example.com/.well-known/oauth-client-configuration

{
  "client_id": "https://myclient.example.com",
  "client_name": "MyMCP Client",
  "redirect_uris": ["https://myclient.example.com/oauth/callback"],
  "grant_types": ["authorization_code"],
  "scope": "mcp:read mcp:write"
}

Authorization servers automatically discover client metadata instead of requiring manual registration. This enables dynamic client discovery in large-scale deployments.

SEP-1046: OAuth Client Credentials

Machine-to-machine scenarios now have first-class support. Backend services can authenticate without user interaction:

POST /oauth/token
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials
&client_id=backend-service
&client_secret=...
&scope=mcp:read

This was critical for automated workflows - CI/CD pipelines, scheduled jobs, backend agents operating without human involvement.
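Building that token request is just form encoding. A sketch with placeholder credentials (a real client would POST this body to the authorization server's token endpoint):

```python
from urllib.parse import urlencode

# Sketch of a client_credentials token request body, matching the example
# above; client_id, secret, and scope are placeholders.

def token_request_body(client_id: str, client_secret: str, scope: str) -> str:
    return urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    })

body = token_request_body("backend-service", "example-secret", "mcp:read")
assert "grant_type=client_credentials" in body
```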

SEP-990: Cross-App Access

Enterprise SSO integration is standardized. One authentication grants access to all MCP servers in an organization’s domain.

Combined, these changes make enterprise MCP deployments practical. IT teams can centrally manage authorization, enforce policies, and audit access across their entire MCP fleet.

Developer Experience Improvements

The anniversary release included several quality-of-life improvements that make MCP development less painful:

  • SEP-986: Standardized tool naming conventions (consistent patterns across servers)
  • SEP-1319: Decoupled request payloads (easier to extend without breaking changes)
  • SEP-1699: Improved connection management (gracefully handle disconnects and reconnects)
  • SEP-1309: Enhanced SDK version management (backwards compatibility guarantees)

These are the unglamorous fixes that save hours of debugging. When you’re integrating 15 different MCP servers, having consistent naming conventions matters. When your server disconnects mid-operation, automatic reconnection with state recovery matters.

What This Release Enables

The November release completed MCP’s evolution from “protocol for simple integrations” to “infrastructure for complex agentic systems.”

With tasks + sampling, you can now build:

  • Multi-hour data processing pipelines with user checkpoints
  • Complex domain workflows that hide implementation details from clients
  • Server-side agentic reasoning with tool access
  • Production deployments with enterprise auth and audit requirements

The protocol is feature-complete for the current generation of agent applications.


Key Ecosystem Players

While the protocol matured through four releases, the ecosystem exploded to nearly 2,000 official registry entries and 6,490+ community servers. Here’s who built the infrastructure:

Platform Adoption: OpenAI integrated MCP across ChatGPT desktop, Agents SDK, and Responses API. Microsoft built it into Windows 11, Copilot Studio, and Azure AI Foundry. Anthropic’s Claude Desktop remains the reference implementation.

Infrastructure & Gateway: Cloudflare positioned Workers as the deployment layer (10 companies, including Block, PayPal, and Sentry, showcased at MCP Demo Day). Obot AI launched the first enterprise MCP gateway ($35M seed funding) for centralized management.

Authentication: Auth0, Stytch, Scalekit, WorkOS, and Pomerium shipped OAuth 2.1 implementations. Aaron Parecki (@aaronpk) from Okta shaped the auth specs.

Discovery & Tooling: Smithery (Henry Mao @Calclavia) provides npm-like discovery with 4,000+ servers. PulseMCP (Tadas Antanavicius @tadasayy) tracks ecosystem growth with weekly newsletters. MCP-UI and GitMCP (Ido Salomon @idosal1) brought interactive interfaces and GitHub integration.

Enterprise Builders: Supabase (Pedro Rodrigues @rodriguespn23), Block/Square (Angie Jones @techgirl1908), Postman (Josh Dzielak @joshed_io) deployed production MCP servers at scale.

The Numbers: 6.4k GitHub stars, 303 contributors, 2,900+ Discord members, 6.6M+ monthly Python SDK downloads, 17 SEPs processed in Q4 2025 alone.

Growing community

The Transport Evolution Nobody Talks About

One of the most important technical progressions happened gradually across releases: the shift from local stdio to remote HTTP transports.

2024-11-05: stdio only. Servers are local processes. Simple, secure, perfect for desktop apps.

2025-03-26: Streamable HTTP added. Servers can now be remote. SSE enables server-to-client streaming. stdio still works (backwards compatibility).

2025-06-18: Protocol version headers required for HTTP. Better multi-version support.

2025-11-25: HTTP transport is now the default pattern for production deployments.

This progression was crucial. MCP started as a desktop protocol and evolved into a distributed systems protocol without breaking existing implementations.

The MCP transport documentation now describes three patterns:

  1. stdio: Local processes, zero network overhead, perfect for desktop tools
  2. Streamable HTTP: Remote servers, multi-tenant, enterprise deployments
  3. Custom transports: WebSockets, gRPC, or domain-specific protocols (via extensions)

Teams choose the transport that matches their deployment model. A code editor might use stdio for local language servers. A SaaS platform uses Streamable HTTP for multi-tenant MCP endpoints.

The Observability Gap

Here’s what production teams discovered: as MCP deployments scale, visibility disappears.

Your agent connects to 15 MCP servers. Makes hundreds of tool calls per session. Some succeed, some fail. Token costs add up. Response times vary.

How do you debug when something goes wrong? Which server failed? Why? How much did that operation cost? Is this the third time this week that specific tool timed out?

Most teams are flying blind. They see final agent output but have no visibility into the MCP layer.

This is exactly the problem we’re solving at Agnost. Track every MCP interaction across your entire fleet:

  • Server call frequency and patterns
  • Success rates per tool
  • Latency distributions (p50, p95, p99)
  • Token costs broken down by server and operation
  • Error patterns and failure modes
  • Trace individual agent sessions across multiple MCP calls

When an agent misbehaves, trace exactly which MCP calls led to the problem. When costs spike, identify which servers are responsible. When latency degrades, pinpoint the bottleneck.

Analytics visualization

As MCP becomes infrastructure, observability becomes essential. You can’t optimize what you can’t measure.

Try Agnost’s MCP analytics - 5 minute setup, complete visibility into your agent’s MCP interactions.


What’s Next: The Year Two Roadmap

Based on the anniversary blog post from the MCP team, here’s what’s coming:

Standardized Observability

Built-in health checks, metrics exports, and distributed tracing support. The protocol needs observability primitives that all implementations can adopt.

This aligns perfectly with Agnost’s mission - as the spec standardizes observability, teams need tools that implement it.

Server Composition Patterns

How do you build MCP servers that delegate to other MCP servers? How do you create meta-servers that aggregate capabilities from multiple sources?

The community is experimenting with composition, but there’s no standardized approach. Expect SEPs addressing this in 2026.

Enhanced Security Models

Production needs fine-grained access controls (user X accesses tools A and B, not C), audit logging standards, compliance frameworks (SOC 2, GDPR, HIPAA), and secret management integration.

OAuth 2.1 provides the foundation, but enterprises need more sophisticated security primitives.

Multi-Agent Coordination

As architectures move from single agents to multi-agent systems, MCP needs coordination primitives - capability discovery between agents, task negotiation, avoiding duplicate work.

These are open research questions, but MCP is positioned to standardize solutions as they emerge.


The Bottom Line

One year, four spec releases, and MCP transformed from “interesting protocol” to “infrastructure layer for agentic systems.”

2024-11-05: Basic stdio protocol with resources, tools, prompts

2025-03-26: Remote servers via Streamable HTTP, OAuth 2.1 auth

2025-06-18: Structured outputs, hardened security, elicitation

2025-11-25: Tasks, server-side sampling, enterprise auth

The progression was deliberate - ship minimal viable protocol, let production teams surface pain points, address them systematically without breaking backwards compatibility.

This is why MCP won against proprietary alternatives. Open protocol, community-driven development, real production feedback shaping evolution, and willingness to remove features that don’t justify complexity (batching removal in June).

If you’re building AI agents, MCP is now the default choice. The protocol is mature, the ecosystem is rich, the tooling is production-ready, and major platforms (OpenAI, Microsoft, Anthropic) are committed.

And if you’re already running MCP in production, add observability before you scale further. Give Agnost a try - five minutes to set up, complete visibility into your MCP interactions, and finally know which servers deliver value and which ones burn budget.

Here’s to year two of MCP. The foundation is solid. Now we build.


Meta Description: MCP celebrates one year with four major spec releases - from stdio servers to OAuth 2.1, tasks, and sampling. Technical deep-dive into protocol evolution.

Estimated Read Time: 12 minutes

Tags: MCP, Model Context Protocol, AI Agents, Agent Analytics, Developer Tools, Protocol Evolution

Featured Image Suggestion: Timeline visualization showing four MCP releases (2024-11-05, 2025-03-26, 2025-06-18, 2025-11-25) with key features at each milestone

