MCP
Model Context Protocol
An open protocol that lets AI applications connect to external data sources and tools in a standardized way.
MCP (Model Context Protocol) is an open protocol introduced by Anthropic in late 2024 that quickly became an industry standard. The one-line pitch: USB-C for AI. Before MCP, every AI app had to integrate with every data source separately — one integration for Claude, another for Cursor, another for ChatGPT. MCP ends that chaos.
Three-part architecture: Host (the LLM-containing app like Claude Desktop, Cursor, VS Code), Client (a protocol client running inside the Host), and Server (a process exposing data or tools — e.g. Postgres MCP server, Filesystem MCP server, GitHub MCP server). Servers expose tools (functions), resources (readable data), and prompts (canned prompt templates).
Runs on JSON-RPC 2.0; local servers connect over stdio, remote ones over HTTP. Write an MCP server once, and every MCP-aware app gets to use it.
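On the wire, a tool invocation is a single JSON-RPC request. A sketch of what the Client sends (the id is arbitrary; the tool name and arguments belong to the example server defined further down):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_order",
    "arguments": { "order_id": "A-1042" }
  }
}

The server's reply carries the same id and a result payload containing the tool's content.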
Remember life before USB-C? Every device had its own cable, its own charger, ten adapters in your bag. USB-C: one port, one cable, everything works.
MCP is doing the same for AI. Yesterday you wrote custom code to let Claude read Slack; you wrote different code for ChatGPT. Today: install a Slack MCP server once, and every MCP-aware AI client (Claude Desktop, Cursor, etc.) can use it.
Say you want to give Claude Desktop access to your company's Postgres database. You install the Postgres MCP server, add the connection string to claude_desktop_config.json, and you're done. Now ask Claude "show me the top 10 customers by orders last month" — Claude writes SQL, the MCP server runs it, Claude reads the result and responds. Plug the same server into Cursor tomorrow and it works there too. Zero new code.
// ~/Library/Application Support/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://user:pass@localhost:5432/avva"
      ]
    },
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/me/projects"
      ]
    }
  }
}

Building your own server is just as straightforward. Here's a minimal TypeScript server that exposes one tool:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import pg from "pg";

// Postgres pool for the query below. DATABASE_URL is an assumption here;
// any async query interface would do.
const db = new pg.Pool({ connectionString: process.env.DATABASE_URL });

const server = new McpServer({
  name: "avva-orders",
  version: "1.0.0",
});

// Tool: look up an order by ID
server.tool(
  "get_order",
  "Returns order details for a given order ID",
  { order_id: z.string() },
  async ({ order_id }) => {
    // order_id is passed as a bind parameter, never interpolated into the SQL
    const { rows } = await db.query("SELECT * FROM orders WHERE id = $1", [order_id]);
    return {
      content: [{ type: "text", text: JSON.stringify(rows[0] ?? null) }],
    };
  },
);
await server.connect(new StdioServerTransport());
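Tools are only one of the three primitives. The same server can also expose resources (readable data) and prompts (templates). A rough sketch, where the resource URI, names, and text are invented for illustration; these registrations would sit alongside server.tool above:

// Resource: readable data the client can pull into the model's context
server.resource("orders-schema", "schema://orders", async (uri) => ({
  contents: [{ uri: uri.href, text: "orders(id, customer_id, total, created_at)" }],
}));

// Prompt: a canned template the user can invoke from the client's UI
server.prompt("summarize_order", { order_id: z.string() }, async ({ order_id }) => ({
  messages: [
    {
      role: "user",
      content: { type: "text", text: `Summarize order ${order_id} for a support agent.` },
    },
  ],
}));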
Use MCP when:
- Multiple AI apps need access to the same data source
- Exposing internal databases, filesystems, or APIs to AI safely
- Building a tool meant to be shared (like an npm package, but for AI)
- Stateless function calling isn't enough — you need persistent connections or sessions
Skip MCP when:
- One-off scripts — a direct API call is simpler
- Only using one LLM app that already has its own native tool system
- Latency-critical paths — protocol adds a small overhead
Common mistakes:
Thinking MCP is Claude-only
It's an open standard. Cursor, VS Code, Continue.dev, Zed and many others support it. Even OpenAI added compatibility.
Underestimating server security
An MCP server runs with whatever permissions you give it. Grant it write access to your DB and a single LLM-triggered call could run a DELETE. Use read-only users and least-privilege accounts.
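One concrete mitigation, sketched under the assumption that the server exposes a raw-SQL tool and reuses the db pool from the example above: run every query inside a read-only transaction, on top of (not instead of) a read-only database role.

// Sketch: a raw-SQL tool wrapped in a read-only transaction. This is
// defense in depth; least-privilege credentials still matter.
server.tool(
  "query",
  "Run a read-only SQL query against the orders database",
  { sql: z.string() },
  async ({ sql }) => {
    const client = await db.connect();
    try {
      await client.query("BEGIN TRANSACTION READ ONLY");
      const { rows } = await client.query(sql);
      return { content: [{ type: "text", text: JSON.stringify(rows) }] };
    } finally {
      await client.query("ROLLBACK").catch(() => {});
      client.release();
    }
  },
);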
Mixing up Client and Server
The Host (e.g. Claude Desktop) embeds a Client; the Server runs as a separate process. Local servers connect via stdio, remote ones over HTTP.
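To make the split concrete, here is a minimal sketch of the Client side using the official TypeScript SDK. It spawns the avva-orders server from the example above as a local child process and talks to it over stdio; the client name, file path, and order ID are made up for illustration.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the local server as a child process and speak JSON-RPC over stdio,
// which is what a Host like Claude Desktop does under the hood.
const client = new Client({ name: "demo-client", version: "1.0.0" }, { capabilities: {} });
await client.connect(
  new StdioClientTransport({ command: "node", args: ["./build/avva-orders.js"] }),
);

// Discover the server's tools, then call one.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name)); // ["get_order"]

const result = await client.callTool({
  name: "get_order",
  arguments: { order_id: "A-1042" },
});
console.log(result);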