MCP Is the USB-C for AI Agents

sravan.sarraju · March 28, 2026 · 4 min read
ai · mcp · agents · engineering · distributed-systems · llm

Before MCP, every AI integration was a bespoke handshake.

You wanted your LLM to read a database? Write a custom tool. Query an API? Another custom tool. Call an internal service? Yet another wrapper, with its own auth, its own schema, its own retry logic. The surface area exploded fast. And every agent framework — LangChain, AutoGen, CrewAI, whatever came next — reinvented the same plumbing.

Anthropic's Model Context Protocol is the attempt to stop that.

What it actually is

MCP is a client-server protocol. The LLM (or agent runtime) is the client. Your tools, data sources, and services are servers. The protocol defines how they talk: what a tool looks like, how the model requests it, how results come back.

Three primitives:

  • Tools — functions the model can call (think: search_database, create_ticket, run_query)
  • Resources — data the model can read (files, database rows, API responses)
  • Prompts — reusable prompt templates the server can expose

That's it. The model doesn't need to know whether it's talking to a local Python script or a remote microservice. The protocol is the contract.
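Concretely, here's a minimal server sketch using the official Python SDK's FastMCP helper. The tool, resource, and prompt below are made-up examples for illustration; only the decorators and the run() call come from the SDK:

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("demo")

    @mcp.tool()
    def search_database(query: str) -> str:
        """A function the model can call."""
        return f"results for {query!r}"  # stand-in for a real DB lookup

    @mcp.resource("db://tables/{name}")
    def read_table(name: str) -> str:
        """Data the model can read, addressed by URI."""
        return f"rows from {name}"  # stand-in for real rows

    @mcp.prompt()
    def triage(ticket: str) -> str:
        """A reusable prompt template the server exposes."""
        return f"Summarize and prioritize this ticket:\n{ticket}"

    if __name__ == "__main__":
        mcp.run()  # speaks MCP over stdio by default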

Why this matters more than it sounds

The real problem with agent tooling wasn't writing the tool. It was the combinatorial explosion of integrations.

If you have 10 agent frameworks and 50 data sources, you're writing 10 × 50 = 500 integration adapters. MCP collapses that to 10 + 50 = 60: each framework implements the client side once, each data source implements the server side once. You get composability for free.

This is what USB did for hardware. Before USB, every peripheral had its own connector, its own driver model, its own negotiation protocol. After USB, you plug it in and it works. MCP is trying to do that for the AI tool layer.

The architecture in practice

An MCP server is just an HTTP service (or a local process speaking over stdio) that exposes a standard schema. When an agent starts up, it performs an initialize handshake, then discovers what tools are available by calling tools/list. When it wants to invoke one, it calls tools/call with a JSON-RPC payload (all MCP messages are JSON-RPC 2.0). The server executes and returns a result.

Agent Runtime
     │
     ├── MCP Client
     │       │
     │       ├── tools/list  ──────► MCP Server A (your internal DB)
     │       ├── tools/call  ──────► MCP Server B (GitHub API)
     │       └── resources/read ──► MCP Server C (file system)

The model sees a flat list of available tools. It doesn't know or care about the topology underneath. That's the point.
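From the client side, that whole flow is a few lines. A sketch with the Python SDK, assuming the server from earlier is saved as server.py:

    import asyncio
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    params = StdioServerParameters(command="python", args=["server.py"])

    async def main() -> None:
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()          # protocol handshake
                tools = await session.list_tools()  # maps to tools/list
                print([t.name for t in tools.tools])
                result = await session.call_tool(   # maps to tools/call
                    "search_database", {"query": "open tickets"}
                )
                print(result.content)

    asyncio.run(main())

list_tools() and call_tool() correspond one-to-one to the tools/list and tools/call methods in the diagram above.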

What's still rough

Discovery is local. Right now, MCP servers need to be configured manually — there's no registry, no service mesh, no auto-discovery. You're essentially maintaining a list of server endpoints. For a single developer running a local agent, that's fine. For a fleet of autonomous agents in production, you need to build that layer yourself.
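In practice that list lives in a host config file. Claude Desktop's format is the best-known example; the server name and command here are placeholders:

    {
      "mcpServers": {
        "internal-db": {
          "command": "python",
          "args": ["server.py"]
        }
      }
    }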

Auth is underspecified. The protocol has hooks for auth but leaves the implementation to you. In practice this means every MCP server you write has to solve auth independently. OAuth flows, API key management, token refresh — none of that is standardized yet.

Stateful sessions are awkward. MCP is largely stateless per-call. If your tool needs multi-turn context (like a browser session or a database transaction), you have to manage that state server-side and thread a session ID through. Doable, but not elegant.
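The usual workaround is to mint a session ID in one tool call and require it in every subsequent one. A sketch of that pattern, with made-up tool names and a naive in-memory store:

    import uuid
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("sessions-demo")
    _sessions: dict[str, list[str]] = {}  # session_id -> accumulated state

    @mcp.tool()
    def open_session() -> str:
        """Create server-side state; the model holds onto the returned ID."""
        sid = str(uuid.uuid4())
        _sessions[sid] = []
        return sid

    @mcp.tool()
    def run_in_session(session_id: str, command: str) -> str:
        """Thread the session ID through every call to reuse state."""
        history = _sessions.setdefault(session_id, [])
        history.append(command)
        return f"ran {command!r} ({len(history)} calls this session)"

    if __name__ == "__main__":
        mcp.run()

In production you'd add expiry and persistence, but the ID-threading shape stays the same.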

The deeper shift

What MCP signals isn't just a new protocol. It's a bet that the agent layer is going to be as important as the model layer — and that it needs its own infrastructure primitives.

Right now, most teams are still in the "duct tape" phase: custom tools, custom schemas, custom everything. MCP is the first serious attempt to say: here's a standard. Build to it once, and your tools work everywhere.

Whether it wins depends on adoption. The SDK ecosystem is growing fast (Python, TypeScript, Java SDKs already exist). Major tools are shipping MCP servers. The momentum is real.

The question isn't whether agent tooling gets standardized. It will. The question is whether MCP is the standard that sticks — or whether it's the first draft that the eventual winner learns from.