MCP and AGENTS.md Find a New Home: Inside the Agentic AI Foundation Launch

Anthropic donates Model Context Protocol, OpenAI contributes AGENTS.md, and Block brings goose to the newly formed Agentic AI Foundation under Linux Foundation mentorship. Here's what this massive governance shift means for developers building the next wave of AI agents.

If you’ve been building with the Model Context Protocol or following the AI agent ecosystem, yesterday’s announcement was kind of a big deal.

On December 9, 2025, the Linux Foundation launched the Agentic AI Foundation (AAIF) - a neutral, vendor-independent home for the standards and tools powering AI agents. The foundation kicked off with three foundational projects: Anthropic’s Model Context Protocol (MCP), OpenAI’s AGENTS.md, and Block’s goose framework.

Eight tech giants signed on as platinum members: Amazon Web Services, Anthropic, Block, Bloomberg, Cloudflare, Google, Microsoft, and OpenAI. Not a typical Tuesday.

But here’s what makes this interesting for developers: these aren’t just symbolic donations. This is a fundamental shift in how critical AI infrastructure gets built, governed, and evolved.

Let’s break down what actually happened, why it matters, and what changes (or doesn’t) for anyone building with these tools.

What Is the Agentic AI Foundation, Really?

Think of AAIF as a governance layer - similar to how the Linux Foundation oversees Kubernetes, Node.js, or PyTorch. It’s not building new technology. Instead, it provides neutral infrastructure for projects that are becoming too important to live under a single company’s control.

The foundation operates under what the Linux Foundation calls a “directed fund.” That’s fancy language for: member organizations fund it, a governing board makes strategic decisions, but individual projects maintain technical autonomy.

The Core Mission

AAIF exists to solve a specific problem: as AI agents become production infrastructure, developers need assurance that the protocols they build on won’t suddenly change direction based on one company’s roadmap.

From the official announcement:

“The AAIF provides a neutral, open foundation to ensure this critical capability evolves transparently, collaboratively, and in ways that advance the adoption of leading open source AI projects.”

Translation: build with confidence. These standards aren’t going anywhere.

Who’s Behind This?

The foundation launches with three tiers of membership:

Platinum Members (strategic-level influence):

  • Amazon Web Services
  • Anthropic
  • Block
  • Bloomberg
  • Cloudflare
  • Google
  • Microsoft
  • OpenAI

Gold Members include Arcade.dev, Docker, Hugging Face, IBM, JetBrains, and others.

Silver Members bring the community breadth - startups, tool builders, and companies actively shipping agent infrastructure.

Notice the lineup? Anthropic and OpenAI - historically fierce competitors - are co-founding members. That tells you something about how critical neutral governance has become for this layer of the stack.


The Three Founding Projects

AAIF launches with three projects, each solving a distinct problem in the agent ecosystem. Let’s dig into what they do and why they matter.

1. Model Context Protocol (MCP)

If you’ve been anywhere near AI development in the past year, you’ve probably heard of MCP. It’s the protocol that standardizes how AI models connect to external tools, data sources, and services.

The Problem It Solves:

Before MCP, every integration was custom. Want your AI agent to read Slack messages? Build a Slack connector. Query Postgres? Build a database connector. Update Jira? Another connector. Three different APIs, three auth flows, three error-handling patterns.

MCP provides a universal protocol. Build once, use everywhere.

How It Works:

MCP follows a client-server architecture inspired by the Language Server Protocol (the thing that powers autocomplete in VS Code):

  • Servers: Lightweight programs that expose data or capabilities via the MCP standard
  • Clients: Components in AI applications that connect to servers
  • Protocol: JSON-RPC 2.0 messages over stdio or HTTP

Here’s a simple example - connecting Claude Desktop to a filesystem server:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/Documents"]
    }
  }
}
```

Now your AI can read, search, and reference files in that directory.
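Under the hood, every one of those interactions is a JSON-RPC 2.0 message. Here's a sketch of what a tool invocation looks like on the wire - the `tools/call` method name comes from the MCP spec, but the tool name and arguments here are purely illustrative:

```python
import json

# Build a JSON-RPC 2.0 request asking an MCP server to invoke a tool.
# "tools/call" is defined by the MCP specification; "read_file" and its
# arguments are hypothetical and depend on what the server exposes.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "read_file",
        "arguments": {"path": "notes/todo.md"},
    },
}

# Over the stdio transport, this gets serialized and written to the
# server process's stdin as a single line of JSON.
wire_message = json.dumps(request)
print(wire_message)
```

The response comes back the same way: a JSON-RPC result (or error) keyed to the same `id`.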

The Adoption Numbers:

MCP hit some wild milestones in its first year:

  • 10,000+ published MCP servers across the ecosystem
  • 97 million monthly SDK downloads (yes, million)
  • Adopted by OpenAI (ChatGPT desktop app, Apps SDK), Microsoft (built into Windows 11), and dozens of other platforms

In March 2025, OpenAI officially adopted MCP across its products. That was a signal - this wasn’t just an Anthropic thing anymore.

What’s Changing:

Anthropic is donating MCP to AAIF. The protocol’s governance becomes community-driven under neutral oversight. According to Anthropic’s announcement:

“The project’s maintainers will continue to prioritize community input and transparent decision-making.”

For developers: your MCP servers don’t break. The spec evolution continues. But now it happens through a multi-stakeholder process instead of one company calling the shots.

2. AGENTS.md (from OpenAI)

This one’s beautifully simple. AGENTS.md is a convention - like a README for AI agents.

The Problem It Solves:

AI coding agents (like GitHub Copilot, Cursor, Windsurf, etc.) need context to work effectively in your repository. What’s the build command? Which files are auto-generated and should be ignored? What coding conventions do you follow?

Without standardized guidance, every agent improvises. Sometimes that works. Sometimes it… doesn’t.

How It Works:

AGENTS.md is literally a markdown file you drop in your project root:

```markdown
# AGENTS.md

## Project Overview
This is a Next.js application with a Python FastAPI backend.

## Build Commands
- Frontend: `npm run build`
- Backend: `cd backend && uvicorn main:app`

## Important Notes
- Don't modify files in `/generated` - they're auto-created
- Use Prettier for all TypeScript/JavaScript
- Backend uses Ruff for Python formatting

## Testing
Run tests with `npm test` (frontend) and `pytest` (backend)
```

AI agents read this file first, gaining project-specific context before suggesting changes or running commands.

Adoption:

AGENTS.md launched in August 2025. Within months:

  • 60,000+ open source projects adopted it
  • VS Code integrated support
  • Major AI coding tools added first-class AGENTS.md parsing

It solved a real friction point: onboarding AI agents to your codebase.

What’s Changing:

OpenAI is contributing AGENTS.md to AAIF, ensuring long-term community stewardship. From OpenAI’s announcement:

“Donating AGENTS.md to the Linux Foundation ensures that as the format evolves, it serves the entire AI ecosystem - not just one platform.”

For developers: keep writing AGENTS.md files. The format stays simple. But now it evolves through community consensus rather than unilateral changes.

3. goose (from Block)

Block’s contribution is the wild card - a full AI agent framework, not just a protocol or convention.

The Problem It Solves:

Building production-grade AI agents is hard. You need LLM orchestration, tool integration, state management, error handling, and security sandboxing. Most teams rebuild these primitives from scratch.

goose provides a batteries-included agent runtime with a twist: it’s local-first and MCP-native.

How It Works:

goose combines three layers:

  1. Language model orchestration: Works with any LLM (OpenAI, Anthropic, open models)
  2. Extensible tools: Pre-built integrations for file systems, shells, databases, APIs
  3. MCP-native: Discovers and connects to MCP servers automatically

Here’s the developer experience:

```bash
# Install goose
pip install goose-ai

# Run with MCP discovery
goose --discover-mcp
```

goose scans for available MCP servers, auto-configures connections, and makes those capabilities available to the agent runtime.
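Conceptually, the runtime's job is a loop: the model picks a tool, the framework executes it, and the result feeds back into the next turn. This isn't goose's actual internals - just a minimal sketch of the dispatch pattern an MCP-native runtime implements, with a toy registry standing in for discovered server capabilities:

```python
from typing import Callable

# Toy registry standing in for tools discovered from MCP servers.
TOOLS: dict[str, Callable[..., str]] = {
    "shell": lambda cmd: f"ran: {cmd}",
    "read_file": lambda path: f"contents of {path}",
}

def dispatch(tool_name: str, **kwargs) -> str:
    """Execute one tool call the way an agent runtime would:
    look the tool up by name, invoke it, and return the result
    for the model's next turn."""
    tool = TOOLS.get(tool_name)
    if tool is None:
        return f"error: unknown tool {tool_name!r}"
    return tool(**kwargs)

result = dispatch("shell", cmd="npm test")
```

The real runtime adds the hard parts around this loop - sandboxing, error recovery, and streaming results back to the model - but the shape is the same.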

Why Local-First Matters:

Unlike cloud-based agent platforms, goose runs entirely on your machine. Your code, data, and API keys never leave your environment. For security-conscious teams or regulated industries, this is non-negotiable.

What’s Changing:

Block is transitioning goose to community governance under AAIF. According to Block’s announcement:

“Donating goose to the Linux Foundation gives Block access to community stress tests while positioning it as a working example of AAIF’s vision.”

goose becomes a reference implementation - proof that MCP, AGENTS.md, and agent frameworks can compose into production systems.


Why Linux Foundation Mentorship Matters

You might be wondering: why the Linux Foundation specifically? What does “mentorship” actually mean here?

The Linux Foundation has a proven track record shepherding critical infrastructure projects:

  • Kubernetes: Cloud-native orchestration standard
  • Node.js: JavaScript runtime powering millions of applications
  • PyTorch: Dominant machine learning framework

These started as corporate projects (Google, Joyent, Facebook). They became industry standards under neutral governance.

What LF Brings to the Table

1. Neutral Infrastructure

No single vendor controls the roadmap. Technical decisions happen through transparent governance processes, not closed-door strategy meetings.

2. Legal Protection

Trademark management, IP policies, contributor agreements - the unsexy but critical stuff that lets thousands of developers contribute without legal ambiguity.

3. Funding Mechanisms

Member dues fund maintenance, security audits, documentation, community programs. Projects don’t depend on one company’s budget priorities.

4. Community Building

Conference organization, certification programs, developer outreach - building ecosystems, not just code.

For AI agent infrastructure, this matters a lot. These protocols are becoming load-bearing. Neutral governance reduces existential risk for anyone building on them.


What This Means for Developers

Okay, the governance stuff is important, but let’s talk about what actually changes in your day-to-day work.

If You’re Building MCP Servers

Nothing breaks. Your existing servers continue working. The protocol spec doesn’t change overnight.

Future evolution happens collaboratively. New features and spec changes now go through a multi-stakeholder process. If you’ve been contributing to MCP discussions, your input matters even more.

Expect faster cross-platform adoption. With eight major companies aligned on governance, implementing MCP support becomes less risky for platforms that were waiting for stability.

If You’re Using AGENTS.md

Keep writing them. The format stays simple and backward-compatible.

Expect broader tool support. More AI coding tools will add first-class AGENTS.md parsing now that it’s vendor-neutral.

Consider contributing to format evolution. Want to propose new sections or conventions? There’s now a clear path to influence the standard.

If You’re Exploring goose

Try the local-first workflow. If you’ve been hesitant about cloud-based agent platforms, goose offers a compelling alternative.

Expect rapid iteration. Community governance doesn’t mean slower development - it often accelerates once contribution barriers drop.

Watch for enterprise adoption. Block's donation of goose signals confidence in community-driven development. Expect more companies to build on top of it.

General Implications

1. Bet on Interoperability

The AAIF thesis is clear: the future of AI agents is multi-model, multi-tool, multi-platform. Build with that assumption.

If your architecture locks you into one LLM provider or proprietary agent runtime, you’re swimming against the current.

2. Reduce Vendor Lock-In Risk

Projects under AAIF governance can’t be unilaterally discontinued or made proprietary. That’s massive for production systems.

You can confidently build multi-year roadmaps on MCP, AGENTS.md, or goose without worrying about a corporate pivot making your stack obsolete.

3. Contribute Early

These projects are moving to community governance. Early contributors shape conventions, influence roadmaps, and build reputation in an emerging ecosystem.

The Agnost AI community, for instance, has been exploring creative MCP implementations - that kind of experimentation becomes even more valuable in a neutral-governance environment.

4. Expect Consolidation Around Standards

Right now, there are dozens of competing agent frameworks and integration protocols. AAIF’s formation signals that the standardization phase has begun.

Projects that align with MCP, AGENTS.md, and similar standards will have tailwinds. Proprietary alternatives will face headwinds.


The Broader Context: Why Now?

This announcement didn’t happen in a vacuum. Let’s zoom out and look at what’s been building in the AI agent ecosystem over the past year.

The Agent Explosion

2025 has been the year AI agents went from research curiosity to production reality:

  • Code assistants: GitHub Copilot, Cursor, Windsurf, Cline - AI agents writing, refactoring, and debugging code
  • Customer support: Intercom, Zendesk, Ada - agents handling tier-1 support
  • Data analysis: Agents querying databases, generating reports, running A/B tests
  • DevOps: Agents monitoring infrastructure, triaging incidents, deploying fixes

These aren’t demos. They’re handling real workloads at scale.

The Integration Problem

As adoption grew, a pattern emerged: every platform was building the same integration layer differently.

Want your agent to access Notion? GitHub? Salesforce? Each platform had its own connector architecture, auth flow, and API abstraction.

Developers were stuck maintaining N × M integrations (N platforms × M services). Unsustainable.

MCP solved this by providing a universal adapter. But a protocol is only valuable if the whole ecosystem adopts it - and single-vendor ownership was a blocker for some companies.
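The arithmetic behind that claim is worth spelling out: a shared protocol turns a multiplicative cost into an additive one, because each platform builds one client and each service one server. A quick illustration, with made-up counts:

```python
platforms = 5   # agent hosts: desktop apps, IDEs, chat clients, ...
services = 20   # integrations: Slack, Postgres, Jira, ...

# Without a shared protocol: every platform builds every connector.
custom_integrations = platforms * services   # N x M

# With a universal protocol like MCP: one client per platform,
# one server per service.
standardized = platforms + services          # N + M

print(custom_integrations, standardized)
```

At these (hypothetical) counts that's 100 bespoke connectors versus 25 protocol implementations - and the gap widens as either side of the ecosystem grows.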

The Trust Gap

Here’s the uncomfortable question that dogged MCP’s growth: What happens if Anthropic pivots?

Anthropic could theoretically:

  • Make the protocol proprietary
  • Deprecate versions aggressively
  • Optimize for Claude at the expense of other LLMs

They showed no signs of doing this. But the possibility created hesitation, especially among enterprises making multi-year commitments.

Donating to AAIF eliminates that risk. The protocol can’t be un-open-sourced or abruptly discontinued.

The Standardization Window

There’s a brief window in every technology wave where standards get set. Miss it, and you’re stuck with fragmentation (see: messaging protocols, IoT standards, blockchain interoperability).

The AI agent ecosystem is in that window right now. AAIF’s formation is a bet that coordinated standardization beats fragmented competition.

As TechCrunch reported:

“The main goal is to have enough adoption in the world that it’s the de facto standard. We’re all better off if we have an open integration center where you can build something once as a developer and use it across any client.” - David Soria Parra, MCP co-creator


What Doesn’t Change

Amidst all the governance shifts, let’s be clear about what stays the same:

MCP servers you’ve built still work. No breaking changes, no migration required.

The development pace doesn’t slow down. Neutral governance doesn’t mean bureaucracy. AAIF is designed to “move at the speed of AI.”

Technical decisions stay with maintainers. The governing board handles budget and membership. Technical direction remains with project maintainers and steering committees.

Contribution processes stay open. You don’t need to be a member company to contribute code, report issues, or participate in discussions.

Licenses don’t change. MCP remains open source under its existing license (MIT). Same for goose (Apache 2.0) and AGENTS.md (open format).


How to Get Involved

If you’re excited about where this is heading, here’s how to engage:

1. Build with the Standards

The best way to support AAIF’s mission is to use the tools in production:

  • Build MCP servers for services you integrate with
  • Add AGENTS.md files to your repositories
  • Try goose for local agent workflows

Real-world usage surfaces gaps, edge cases, and feature needs that guide evolution.

2. Contribute to the Projects

All three founding projects accept community contributions:

Documentation improvements, bug reports, feature proposals - it all matters.

3. Join the AAIF Community

The foundation is actively building its contributor base. Visit AAIF.io to:

  • Track new projects joining the foundation
  • Follow governance discussions
  • Explore membership options (if you represent an organization)

4. Share Your Experience

Building something interesting with MCP, AGENTS.md, or goose? Write about it. The community is hungry for real-world examples, especially:

  • Integration patterns that worked (or didn’t)
  • Performance considerations at scale
  • Security practices for production deployments

Communities like Agnost AI are actively exploring these tools - your experience helps others avoid pitfalls and discover possibilities.


The Bigger Picture

Zooming all the way out: this announcement is about more than three projects finding neutral homes.

It’s a signal about how the AI industry is maturing.

A year ago, AI companies were racing to ship features as fast as possible. Proprietary advantages, closed ecosystems, and winner-take-all dynamics dominated.

Now? The smartest companies realize the foundation layer needs to be open and interoperable. Lock-in at the protocol level hurts everyone, including the platforms.

AAIF represents a bet that collaborative infrastructure beats proprietary fragmentation - at least for the plumbing that powers AI agents.

The LLMs can compete. The applications can compete. But the integration layer? Better for everyone if it’s neutral and open.

What Success Looks Like

Five years from now, if AAIF succeeds, developers won’t think twice about agent interoperability. You’ll write an MCP server once and use it across ChatGPT, Claude, Gemini, and whatever comes next. You’ll drop an AGENTS.md file in your repo and every coding assistant will understand your project instantly.

The complexity disappears into standardized infrastructure - just like HTTP, SQL, or REST APIs did before.

That’s the vision. Yesterday’s announcement was the first real step toward it.


Common Questions and Concerns

“Does this mean Anthropic is giving up on MCP?”

No. Anthropic remains heavily invested - they’re a platinum AAIF member and core MCP maintainers. This is about governance, not abandonment.

Think of it like Google donating Kubernetes to the Cloud Native Computing Foundation. Google still uses and maintains Kubernetes, but now the ecosystem trusts it more because no single vendor controls it.

“Will MCP development slow down under foundation governance?”

History says no. Projects like Kubernetes, Node.js, and VS Code’s Language Server Protocol accelerated after moving to neutral foundations because contribution barriers dropped.

More companies will contribute when they’re confident their work won’t be overridden by one vendor’s roadmap.

“What happens to existing MCP servers during this transition?”

Nothing. They keep working. The spec isn’t changing retroactively. Future evolution will maintain backward compatibility - that’s one of AAIF’s core principles.

“Can smaller companies or individuals participate in AAIF?”

Yes. While platinum/gold/silver membership tiers exist for companies making financial contributions, technical participation is open to everyone.

You don’t need to be a member to contribute code, file issues, participate in working groups, or influence technical direction.

“Is this just a big company cartel?”

Fair concern, but the structure matters. AAIF operates under transparent governance with:

  • Public decision-making processes
  • Open technical discussions
  • Project autonomy from the governing board

Compare this to proprietary consortiums where decisions happen behind closed doors and participation requires massive membership fees.

AAIF’s model prioritizes adoption and community health over corporate deal-making.


Conclusion

On December 9, 2025, the AI agent ecosystem got its Linux Foundation moment.

MCP, AGENTS.md, and goose moving to the Agentic AI Foundation isn’t just a governance shift - it’s a statement about how critical infrastructure should evolve. Transparently. Collaboratively. With neutral stewardship that outlasts any single company’s strategy.

For developers, this means:

  • Lower integration risk: Build on protocols that won’t disappear or pivot unexpectedly
  • Better interoperability: Standards that work across platforms, not just one vendor’s stack
  • Stronger community: Contribution pathways that don’t require corporate affiliation

The agent era is here. The infrastructure is being laid. And for the first time, that infrastructure is being built in the open - with governance that ensures it serves the whole ecosystem, not just the companies who happened to create it.

If you’re building AI agents, now’s the time to engage with these standards. Not because you have to, but because the tailwinds are undeniable.
