The MCP Revolution: How Model Context Protocol Became the USB-C of AI
Learn how Model Context Protocol (MCP) became the universal standard for connecting AI models to tools and data, reshaping the entire AI ecosystem.

The Integration Problem Nobody Was Talking About
If you support three LLM providers and ten tools, you are maintaining thirty bespoke integrations. Add a fourth provider? Ten more. This M-times-N problem was quietly strangling the AI ecosystem: every company building AI applications was spending extraordinary engineering time not on intelligence, not on user experience, but on plumbing.
The industry needed what USB did for peripherals, what HTTP did for the web, what LSP did for code editors: a single, open protocol that any model could use to talk to any tool. In November 2024, Anthropic proposed exactly that. They called it the Model Context Protocol, or MCP, and within a year it went from a blog post to the connective tissue of the entire AI industry.
This is the story of how that happened, why it matters, and what it means for everyone building with AI.
What Is the Model Context Protocol?
At its core, MCP is a standardized, open protocol that defines how AI applications communicate with external data sources and tools. Think of it as a universal adapter layer: instead of every AI model needing custom code to interact with every service, MCP provides a single interface that both sides can implement once.
The analogy to USB-C is not superficial. Before USB, every device had its own proprietary connector. Before MCP, every LLM had its own proprietary way of calling tools. MCP does not replace the models themselves, just as USB does not replace the devices. It standardizes the connection.
MCP builds on JSON-RPC 2.0, a lightweight remote procedure call protocol, and defines a clear client-server architecture. The protocol is transport-agnostic: it can run over standard I/O (stdin/stdout) for local processes, over HTTP with Server-Sent Events (SSE) for remote servers, or over the newer Streamable HTTP transport introduced in the 2025 spec revisions.
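Concretely, every MCP message is a JSON-RPC 2.0 envelope: a method, optional params, and an id that ties a response to its request. The spec's ping utility is the smallest possible example (annotated sketch; message bodies are illustrative):

```jsonc
// Client -> server: a request
{"jsonrpc": "2.0", "id": 1, "method": "ping"}

// Server -> client: the response, matched by id
{"jsonrpc": "2.0", "id": 1, "result": {}}
```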
The Three Primitives
MCP organizes what servers can expose into three core primitives:
Tools are the most commonly used primitive. A tool is an executable function that the AI model can invoke: think "run a SQL query," "create a GitHub issue," or "search the web." Tools are model-controlled: the LLM decides when and how to call them based on the user's request. Each tool has a name, a description (which helps the model understand when to use it), and a JSON Schema defining its input parameters.
Resources are data that a server can expose for the client to read. Unlike tools, resources are typically application-controlled or user-controlled: the host application decides which resources to include in the context window. A resource might be the contents of a file, a database record, or live data from an API. Resources have URIs and can be static or dynamic, and servers can notify clients when resources change.
Prompts are reusable prompt templates that a server can offer. These are user-controlled, surfacing as options the user can select (like slash commands). A prompt might be a template for "summarize this code review" or "generate a migration plan for this database schema," complete with arguments the user can fill in.
This three-way split is a deliberate design choice. It separates what the model can do (tools), what context the model has (resources), and what interaction patterns are available to the user (prompts). The separation matters because it enables fine-grained permission control and makes it clear who initiates each type of interaction.
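To make the tool primitive concrete, here is roughly what a single entry looks like in a server's tools/list response, following the spec's name / description / inputSchema shape. The tool itself is hypothetical:

```json
{
  "name": "create_issue",
  "description": "Create a new issue in a GitHub repository",
  "inputSchema": {
    "type": "object",
    "properties": {
      "repo": { "type": "string", "description": "Repository in owner/name form" },
      "title": { "type": "string", "description": "Issue title" }
    },
    "required": ["repo", "title"]
  }
}
```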
The Architecture: Hosts, Clients, and Servers
MCP defines three distinct roles in its architecture, and understanding the boundaries between them is essential for building with the protocol.
MCP Hosts
The host is the AI-powered application that the user interacts with directly. Claude Desktop, an IDE with AI features, or your custom-built AI assistant: these are all hosts. The host is responsible for managing the user experience, enforcing security policies, and coordinating between the AI model and one or more MCP clients. Critically, the host controls which MCP capabilities are actually exposed to the model. Even if a connected server offers fifty tools, the host can choose to surface only five.
MCP Clients
Each MCP client maintains a one-to-one connection with a specific MCP server. The client handles the protocol-level communication: capability negotiation, message framing, request routing. In many implementations, the host application contains multiple MCP clients, each connected to a different server. The client is an internal component; users rarely interact with it directly.
MCP Servers
MCP servers are where the actual capabilities live. A server wraps some external system (a database, an API, a file system, a SaaS product) and exposes it through the MCP protocol. Servers are lightweight programs that can be written in any language with a JSON-RPC implementation. The official SDKs support TypeScript, Python, Java, Kotlin, C#, and Swift, with community SDKs covering Go, Rust, Ruby, and others.
The mental model looks like this:
```
User <-> [Host Application]
               |
   [MCP Client A] <--> [MCP Server: GitHub]
   [MCP Client B] <--> [MCP Server: PostgreSQL]
   [MCP Client C] <--> [MCP Server: Slack]
```

Each connection is independent. If the GitHub server crashes, the PostgreSQL and Slack connections are unaffected. This architecture mirrors the LSP (Language Server Protocol) model that transformed code editors, and that is not a coincidence: MCP was explicitly inspired by LSP's success.
The Connection Lifecycle
When a client connects to a server, the first thing that happens is capability negotiation. The client sends an initialize request with its protocol version and capabilities. The server responds with its own capabilities, including which primitives it supports (tools, resources, prompts) and any optional features. Once both sides agree, the client sends an initialized notification, and the connection is live.
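On the wire, the handshake is three JSON-RPC messages. An annotated sketch of the shape (exact fields depend on the spec revision):

```jsonc
// 1. Client -> server
{"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {
  "protocolVersion": "2025-06-18",
  "capabilities": {},
  "clientInfo": {"name": "example-host", "version": "0.1.0"}
}}

// 2. Server -> client: advertises which primitives it supports
{"jsonrpc": "2.0", "id": 1, "result": {
  "protocolVersion": "2025-06-18",
  "capabilities": {"tools": {"listChanged": true}, "resources": {}},
  "serverInfo": {"name": "weather-server", "version": "0.1.0"}
}}

// 3. Client -> server: no id, so this is a notification
{"jsonrpc": "2.0", "method": "notifications/initialized"}
```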
This negotiation step is important. It means the protocol can evolve without breaking existing implementations. A server that only supports the 2024-11-05 spec can still work with a client that supports the current 2025-11-25 revision; they simply negotiate down to the common feature set. The spec has moved fast: 2025-06-18 added Elicitation (multi-turn human-in-the-loop requests initiated by the server) and structured tool output (structuredContent with declared output schemas, so the host gets typed results instead of opaque strings); 2025-11-25 upgraded the default JSON Schema dialect to 2020-12 and decoupled payloads from RPC methods.
How MCP Differs from Function Calling
If you have worked with OpenAI's function calling, Anthropic's tool use, or Google's function declarations, you might wonder: how is MCP different? After all, these APIs already let models call tools.
The difference is architectural, not functional. Function calling and tool use are features of specific model APIs. They define how you describe tools to a particular model and how that model formats its request to call a tool. But they do not standardize what happens on the other side of that call. They do not define how to discover tools, how to manage connections, how to handle authentication, or how tools communicate state changes back to the model.
Here is a concrete comparison:
| Aspect | Function Calling (API-level) | MCP |
|---|---|---|
| Scope | Single model API | Cross-model protocol |
| Tool discovery | Manual, per request | Dynamic, via negotiation |
| Connection management | None (stateless) | Persistent, bidirectional |
| Transport | HTTP API call | stdio, SSE, Streamable HTTP |
| State updates | Polling or none | Server-initiated notifications |
| Ecosystem | Vendor-locked | Open, vendor-neutral |
Think of it this way: function calling tells the model how to ask for a tool to be used. MCP defines the entire infrastructure for tool connectivity: discovery, invocation, lifecycle management, and ecosystem standardization.
In practice, MCP and function calling work together. When an MCP-connected model decides to use a tool, the model's function calling mechanism generates the tool call. The MCP client then routes that call to the appropriate MCP server, which executes it and returns the result. MCP is the transport and discovery layer; function calling is the model-level interface. See how reasoning models like DeepSeek-R1 leverage tool use for a deeper look at how models decide when and how to invoke external tools.
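Here is what that routing looks like at the protocol level, using a hypothetical weather tool. The model's function-calling output is what triggers the client to emit the first message:

```jsonc
// MCP client -> server: invoke the tool the model selected
{"jsonrpc": "2.0", "id": 7, "method": "tools/call", "params": {
  "name": "get_forecast",
  "arguments": {"city": "Berlin", "days": 3}
}}

// Server -> client: result content is handed back to the model
{"jsonrpc": "2.0", "id": 7, "result": {
  "content": [{"type": "text", "text": "Berlin, next 3 days: ..."}],
  "isError": false
}}
```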
The Adoption Curve: From Anthropic to Industry Standard
The speed of MCP's adoption is unusual. Understanding the timeline reveals how much pent-up demand existed for exactly this kind of standard.
Phase 1: Anthropic's Launch (November 2024)
Anthropic released MCP as an open-source specification on November 25, 2024, alongside a set of reference implementations: SDKs for TypeScript and Python, pre-built MCP servers for Google Drive, Slack, GitHub, Git, Postgres, and Puppeteer, and native support in Claude Desktop. The protocol was published under the MIT license with the specification itself openly available.
The initial reception was cautiously enthusiastic. Developers appreciated the LSP-inspired design but questioned whether other model providers would adopt a protocol created by a competitor.
Phase 2: The Ecosystem Ignites (Early 2025)
Those doubts evaporated quickly. In the first months of 2025, the MCP ecosystem exploded. The community-maintained MCP server registry grew from a handful to hundreds of servers covering everything from AWS services to Notion to Spotify. Development tool companies like Zed, Replit, Codeium, and Sourcegraph integrated MCP support. Block (formerly Square) and Apollo adopted MCP for their internal AI tooling.
The TypeScript and Python SDKs matured rapidly, and community SDKs appeared for Java, C#, Go, Rust, and other languages. An authorization framework based on OAuth 2.1 was added to the spec, addressing one of the early concerns about security for remote MCP servers.
Phase 3: The Giants Adopt (March-May 2025)
The pivotal moment came in March 2025 when OpenAI announced MCP support across its products, including the Agents SDK, ChatGPT Desktop, and the Responses API. Sam Altman publicly endorsed the protocol, stating that OpenAI would work with Anthropic to evolve MCP and support it broadly. This was not a grudging acknowledgment — it was a full embrace.
Google followed closely. By mid-2025, Google DeepMind announced MCP support in Gemini and integrated it into Android and its developer tools. Microsoft, which had initially pushed its own approach, added MCP support to Copilot Studio, Azure AI, and Windows. The floodgates were open.
Phase 4: The Agentic AI Foundation (December 2025)
In December 2025, Anthropic donated MCP to the newly formed Agentic AI Foundation (AAIF), a directed fund within the Linux Foundation co-founded with Block and OpenAI, with Google, Microsoft, AWS, and Cloudflare joining as supporting members. This transferred governance from a single company to a neutral foundation with a vendor-balanced board. Notably, Google's A2A protocol joined the same foundation (after IBM's earlier ACP merged into A2A in August 2025), putting the two complementary standards under shared stewardship.
The tradeoff is concrete. Anthropic gave up the ability to steer the roadmap unilaterally; in exchange, competitors lost their main reason to fork. Neutrality is no longer a promise, it is structurally enforced by the governance body.
The Numbers Tell the Story
By March 2026, the official MCP SDKs were drawing 97 million monthly downloads across npm and PyPI. The public-server count had grown from roughly 500 at the end of 2025 to 10,000–12,000 across GitHub, Smithery, and the other registries in a single quarter. Over 300 distinct clients now speak the protocol. Every major AI model provider (Anthropic, OpenAI, Google, Microsoft, Amazon, Meta) either adopted MCP or announced support. The ecosystem went from a handful of reference servers to a thriving marketplace inside 16 months.
Practical MCP: What Servers Look Like Today
To make this concrete, let us walk through what the MCP ecosystem looks like in practice. MCP servers fall into several broad categories.
Developer Tools
The heaviest early adoption came from developer-focused tools. MCP servers exist for:
- Version control: GitHub, GitLab, and Git operations (read files, create PRs, manage issues)
- Databases: PostgreSQL, MySQL, SQLite, MongoDB (query, inspect schemas, manage data)
- Cloud infrastructure: AWS, GCP, Azure (manage resources, read logs, deploy services)
- Code search and navigation: Sourcegraph, local filesystem access
- CI/CD: Build and deployment pipeline management
For the coding tools built on MCP, see Vibe Coding and the AI Development Stack, which covers how MCP-powered tools are reshaping development workflows.
Productivity and SaaS
MCP servers for business tools were quick to follow:
- Communication: Slack, email (read messages, send messages, search)
- Project management: Jira, Linear, Notion (create and manage tickets, pages)
- Documentation: Confluence, Google Docs (read, search, create)
- CRM: Salesforce, HubSpot (query records, update data)
Data and Knowledge
- Web: Brave Search, Fetch/scraping tools, Puppeteer for browser automation
- Files: Local filesystem, Google Drive, Dropbox
- Memory and knowledge graphs: Persistent memory systems that give AI long-term recall
Specialized Domains
- Finance: Market data, trading APIs
- Infrastructure monitoring: Sentry, Datadog, Prometheus
- Design: Figma file access and manipulation
Anatomy of a Simple MCP Server
To understand how MCP servers are built, consider the conceptual structure of a minimal server:
Server "weather-server"
Tool: "get_forecast"
Description: "Get weather forecast for a city"
Input schema: { city: string, days: integer (1-7) }
Handler: calls weather API, returns formatted forecast
Resource: "weather://current/{city}"
Description: "Current weather conditions"
Handler: returns real-time weather data as text
Prompt: "weather_briefing"
Description: "Generate a morning weather briefing"
Arguments: [city, units]
Template: "Give a friendly morning weather briefing for {city}..."The server registers its tools, resources, and prompts. When a client connects, it discovers these capabilities through the negotiation phase. When the LLM decides to call get_forecast, the MCP client sends a tools/call request to the server, which executes the handler and returns the result. The entire exchange uses JSON-RPC 2.0 messages over whatever transport the server supports.
The key insight is that the server author does not need to know anything about which AI model will be calling these tools. They implement the MCP interface once. Any MCP-compatible host (Claude, ChatGPT, Gemini, a custom application) can connect and use it.
Building with MCP: The Developer Experience
For developers building AI applications, MCP fundamentally changes the integration story. Instead of writing custom integration code for every external service, you configure your application to connect to MCP servers.
Configuration-Driven Integration
Most MCP hosts use a JSON configuration file to define which servers to connect to. A typical configuration might look like:
```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<token>"
      }
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": {
        "DATABASE_URL": "postgresql://localhost/mydb"
      }
    }
  }
}
```

That is it. No SDK-specific integration code. No model-specific tool definitions. The MCP servers handle the protocol communication, and the host application discovers capabilities at runtime.
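Hosts hide the plumbing behind that configuration, but the same connection can be made programmatically. A sketch using the official Python SDK's client API (import paths per recent SDK versions):

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch the server as a child process, exactly as a host
    # would from its JSON configuration.
    params = StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-github"],
        env={"GITHUB_PERSONAL_ACCESS_TOKEN": "<token>"},
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()          # capability negotiation
            tools = await session.list_tools()  # dynamic discovery
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```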
Building Custom Servers
When you need to connect an AI model to a proprietary system (your company's internal API, a custom database, a domain-specific service) you build a custom MCP server. The official SDKs make this straightforward.
The process follows a consistent pattern regardless of language:
- Create a server instance with a name and version
- Define tools with their input schemas and handler functions
- Optionally define resources and prompts
- Connect the server to a transport (stdio for local, HTTP for remote)
The handler functions contain your actual business logic: querying your API, transforming data, performing operations. MCP handles everything else: discovery, serialization, error handling, transport.
Remote MCP Servers
The initial MCP spec focused on local servers running as child processes (communicating via stdio). This works well for desktop applications and development tools but does not scale for production deployments. The spec has since evolved to support remote servers over HTTP, including:
- Streamable HTTP transport: A modern, efficient transport that supports both request-response and streaming patterns
- OAuth 2.1 authorization: A standardized authentication flow so remote MCP servers can securely identify users and manage permissions
- Server-Sent Events: For servers that need to push updates to clients
Remote MCP servers unlock enterprise use cases: centrally managed tool servers that entire organizations can share, SaaS products that expose MCP endpoints alongside their REST APIs, and marketplace-style ecosystems where anyone can publish and consume MCP services.
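With FastMCP, moving a server from local to remote is mostly a transport switch. A sketch, assuming recent SDK versions where the transport literal is "streamable-http" and host/port are accepted as settings:

```python
from mcp.server.fastmcp import FastMCP

# Same tool/resource/prompt definitions as the weather server above.
mcp = FastMCP("weather-server", host="0.0.0.0", port=8000)

# Serve MCP over HTTP instead of a child-process stdio pipe.
mcp.run(transport="streamable-http")
```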
MCP and AI Agents
MCP is not just about letting a chatbot call a tool. Its real significance becomes clear in the context of AI agents: autonomous systems that can plan, reason, and execute multi-step workflows.
Agents need to interact with the world: reading data, making decisions, taking actions, observing results, and iterating. Before MCP, every agent framework had to implement its own tool integration layer. LangChain had its tools, AutoGPT had its plugins, CrewAI had its tools, all incompatible, all requiring separate development effort from tool providers.
MCP provides a universal tool layer that any agent framework can use. An agent built with any framework can connect to any MCP server. This means:
- Tool authors write once: Build an MCP server, and it works with every agent framework
- Agent developers focus on intelligence: Spend time on planning and reasoning, not plumbing
- Users get composability: Mix and match tools from different providers without compatibility worries
MCP is the backbone enabling AI Agents in Production: it provides the standardized connectivity layer that makes production agent deployments practical and maintainable.
The protocol's support for bidirectional communication is particularly important for agents. MCP servers can send notifications to clients (for example, when a resource changes or a long-running operation completes), and the protocol supports sampling, where a server can request that the host's AI model perform an inference. This enables patterns like "agent calls tool, tool needs clarification, tool asks model, model responds, tool completes" without requiring the agent to orchestrate every step.
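Sampling inverts the usual direction: mid-tool-call, the server asks the host's model for an inference. A hedged sketch with the Python SDK, where a FastMCP tool handler receives a Context; the create_message signature may vary by SDK version, and fetch_page is a local helper, not part of the SDK:

```python
import urllib.request

from mcp.server.fastmcp import Context, FastMCP
from mcp.types import SamplingMessage, TextContent

mcp = FastMCP("research-server")

def fetch_page(url: str) -> str:
    """Naive page fetch; stands in for real retrieval logic."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")

@mcp.tool()
async def summarize_page(url: str, ctx: Context) -> str:
    """Fetch a page, then ask the host's model to summarize it."""
    text = fetch_page(url)
    result = await ctx.session.create_message(  # sampling request to the host
        messages=[SamplingMessage(
            role="user",
            content=TextContent(type="text", text=f"Summarize:\n{text}"),
        )],
        max_tokens=300,
    )
    return result.content.text  # produced by the host's model, not the server
```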
MCP vs. Google's Agent-to-Agent (A2A) Protocol
Shortly after MCP gained momentum, Google introduced the Agent-to-Agent (A2A) protocol. Some initial coverage framed these as competitors, but they actually address different problems and are designed to be complementary.
MCP standardizes how an AI model (or agent) connects to tools and data sources. It is a vertical integration protocol: model-to-tool.
A2A standardizes how agents communicate with each other. It is a horizontal integration protocol: agent-to-agent.
Consider a complex enterprise workflow: a customer support agent receives a ticket, determines it requires a code change, and hands it off to a coding agent, which implements the fix and passes it to a deployment agent. A2A defines how those three agents discover, negotiate, and communicate with each other. MCP defines how each individual agent connects to the tools it needs (the support agent to the ticketing system, the coding agent to the repository, the deployment agent to the CI/CD pipeline).
| Aspect | MCP | A2A |
|---|---|---|
| Primary relationship | Model-to-tool | Agent-to-agent |
| Core problem | Tool integration | Multi-agent orchestration |
| Communication pattern | Client-server | Peer-to-peer |
| Key primitive | Tools, Resources, Prompts | Agent Cards, Tasks, Messages |
| Discovery | Capability negotiation | Agent Cards (JSON metadata) |
In a mature AI ecosystem, you would expect both protocols to be widely used, often within the same system. An orchestrating agent uses A2A to coordinate with specialist agents, each of which uses MCP to access its tools. They are layers in the stack, not competitors.
That said, MCP has a significant head start in adoption and ecosystem maturity. A2A will need to demonstrate similar cross-vendor adoption to achieve the same level of industry standardization.
The Security Reality
MCP's rapid adoption has outpaced its operational maturity, and the security track record in early 2026 is sobering. Practitioners deploying MCP in production need to understand the threat model.
Tool poisoning is the structural risk. A malicious or compromised server can return content crafted to alter the agent's behavior on subsequent turns. Invariant Labs demonstrated this in April 2025 by smuggling instructions through tool descriptions and resource contents. Because the host treats server output as trusted context, prompt injection through the tool channel bypasses the usual user-input sanitization.
Real incidents have landed. In June 2025, Asana pulled its MCP integration for roughly two weeks after a cross-tenant data bleed surfaced through shared server state. The mcp-remote package (around 500,000 downloads) was hit by a CVSS 9.6 remote code execution vulnerability. Over 30 CVEs were filed against MCP servers and clients in the first two months of 2026 alone. In April 2026, Ox Security disclosed a design-level flaw putting an estimated 200,000 servers at takeover risk; Anthropic disputed the framing, but the exposure numbers are what they are.
The practitioner implications are straightforward. Treat every MCP server like an untrusted third-party dependency: sandbox it, cap its capabilities, and audit what it can return. Prefer signed and pinned server versions over npx -y shortcuts. For remote servers, the OAuth 2.1 flow is a floor, not a ceiling; add tenant isolation, resource scoping, and output validation on top. And assume tool outputs are adversarial by default, especially outputs derived from external web content or user-provided data.
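In configuration terms, the pinning advice is a one-line change; the version shown is a placeholder, not a real release:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github@<pinned-version>"]
    }
  }
}
```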
Production Friction Beyond Security
Security is the sharpest edge, but the quieter production costs matter too. A few that show up repeatedly once teams move past demos:
- Tool-name collisions. Wire up five servers and you will hit two tools named search, query, or get. The spec does not enforce namespacing, and hosts resolve collisions inconsistently. Expect to rename tools at the host level or wrap servers behind a prefix layer.
- Remote-server latency tax. Local stdio servers add microseconds. Remote MCP servers over HTTP add 50–300ms per tool call, and agentic workflows make 10+ calls per task. Compare this honestly against a direct REST API call inside your trusted network before defaulting to MCP for high-QPS paths.
- Stateful servers and agent restarts. MCP servers holding per-session state (open DB connections, scratch files, auth sessions) silently break when the host restarts mid-run. The spec treats sessions as ephemeral, so persistence is your problem.
- Schema drift. Servers update their tool descriptions and input schemas over time. Agents that cached the old schema at plan time silently malform calls at execute time. Pin server versions in production.
- Observability gaps. MCP gives you request-response frames, but not the full operational picture: which server is slow, which tool fails under which inputs, which agent retries inflate cost. You still have to bolt on traces, metrics, and structured logs.
None of these invalidate MCP. They are the difference between "connected to a server in Claude Desktop" and "running 30 tool calls per task, 24/7, across hundreds of users." Budget for them.
The Code Execution Pattern
One optimization has become load-bearing enough to call out separately. In late 2025, Anthropic published a pattern for "code execution with MCP" that addresses two pathologies of the naive tool-calling approach: tool-definition bloat (loading hundreds of schemas into context on every turn) and intermediate-result redundancy (ferrying large payloads through the model when the model does not actually need to see them).
The pattern flips the interaction. Instead of exposing tools as direct function calls, the host presents them as importable modules in a sandboxed code environment. The agent writes code that imports, composes, and filters tool calls inside that environment, looping over data where needed. Only the final, relevant output flows back through the model's context.
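A sketch of what agent-authored code inside such a sandbox might look like. The tools package and every function on it are hypothetical generated wrappers around MCP servers, not a real API:

```python
# Hypothetical generated wrappers around two MCP servers.
from tools import github, slack

# Pull a large result set without routing it through the model's context.
issues = github.list_issues(repo="acme/api", state="open")

# Filter and aggregate in code; the model never sees the raw rows.
stale = [i for i in issues if i.days_since_update > 30]

# Only the small, final summary re-enters the model's context.
slack.post_message(channel="#eng", text=f"{len(stale)} issues stale for 30+ days")
```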
Anthropic reported a 98.7% reduction in token usage on realistic agentic workloads (150,000 tokens down to 2,000) by moving filtering and control flow out of the model and into code. The pattern also enables progressive disclosure of tool schemas (the agent discovers only what it needs), data-level privacy (sensitive rows can be filtered before ever touching context), and persistence of reusable orchestrations as "skills" across runs.
The tradeoff is infrastructure: you need a secure sandbox with resource limits and good observability. For agents with small tool counts and small payloads, it is overkill. For agents connecting to 30+ MCP servers or processing large structured data, it is becoming the default.
What Comes Next
MCP is still a young protocol, and several frontiers are actively being developed.
Richer Authorization and Multi-Tenancy
As MCP moves from developer tools to enterprise production systems, authorization becomes critical. The OAuth 2.1 integration is a start, but organizations need fine-grained access control: which users can invoke which tools, what data scoping applies, how audit trails work. Expect the authorization story to mature significantly.
Registry and Discovery Standards
Registries have emerged, though no single one has become canonical. Smithery is currently the dominant hub with around 7,000 listed servers, a CLI, and hosted execution. Glama, PulseMCP, and mcp.run cover overlapping ground with different specialties. The ecosystem still lacks the equivalent of npm's verified-publisher trust layer and uniform capability search, and that gap is the next piece of plumbing the community needs to nail down.
Stateful and Long-Running Operations
The current MCP spec is well-suited for request-response interactions but less so for operations that take minutes or hours. Batch data processing, model training pipelines, complex deployment workflows all need standardized patterns for progress tracking, cancellation, and resumption. The 2026 roadmap calls out resumable streams as a priority addition, along with richer progress semantics layered on top of the existing notification channel.
Edge and On-Device MCP
As AI models increasingly run on-device (phones, laptops, embedded systems), MCP servers running locally become important for privacy-sensitive use cases. The stdio transport already supports this, but optimizations for resource-constrained environments and tighter OS integration are natural next steps.
Protocol Evolution and Versioning
The Linux Foundation governance structure provides a stable framework for evolving the protocol. Version negotiation is already built into the spec, which means the protocol can add new primitives, transports, and features without breaking existing implementations. This forward-compatibility is perhaps MCP's most important technical property: it means the ecosystem can grow without fragmentation.
What MCP Actually Changes for Practitioners
MCP does not change what your AI system can do. It changes what the integration work costs over a multi-year horizon. That is the whole point, and it is worth being precise about it.
- Tools become portable. Build your integration once against MCP and it survives the next model swap. The alternative (N vendor-specific tool schemas) is the scaffolding tax you pay today.
- Production systems get fewer surfaces to maintain. One protocol to version and monitor beats N brittle adapters, especially when every SaaS you integrate ships its own MCP server.
- Agent architectures become composable. Mix and match components without a compatibility matrix. This is the part that actually enables multi-agent systems that are not research demos.
The M-times-N problem is not solved, it is reduced. The long tail of custom tooling, auth edge cases, and proprietary systems still requires bespoke work. But the baseline has moved: "build a tool integration" used to be weeks of per-provider plumbing, and is now hours of wrapping an existing API as an MCP server. That shift is the actual story.
Key Takeaways
- MCP solves the M-times-N integration problem by providing a single, open protocol for connecting AI models to tools and data sources, regardless of which model or tool is involved.
- The architecture is deliberately layered: hosts manage user experience, clients manage connections, servers expose capabilities. This separation enables independent evolution of each component.
- MCP complements, not replaces, function calling: function calling is how models express tool use intent; MCP is the infrastructure that connects that intent to actual tools.
- Adoption was historically fast: from Anthropic's November 2024 launch to support from OpenAI, Google, Microsoft, and Linux Foundation governance, all within roughly a year.
- Three primitives cover the design space: Tools (model-controlled actions), Resources (application-controlled data), and Prompts (user-controlled templates) provide a clean separation of concerns.
- MCP and A2A are complementary, not competing: MCP handles model-to-tool integration; A2A handles agent-to-agent communication. Production systems will use both.
- The ecosystem is real and growing: thousands of MCP servers exist across developer tools, productivity apps, data sources, and specialized domains.
- Linux Foundation governance ensures neutrality: no single vendor controls the protocol, which is structurally essential for a universal standard.
- The shift to open standards signals industry maturity: competitive differentiation is moving from integration lock-in to model quality and user experience, which benefits everyone.
- MCP is safe to commit to, but not safe to trust blindly. The protocol has the adoption and governance to be a durable foundation. The servers running on top of it do not yet have the operational track record to be treated as anything other than untrusted third-party code.