Build Your Own MCP Server
Write an MCP server from scratch that adds capabilities to your own AI assistant. A local task-manager scenario, examples in TypeScript and Python, and it works with any MCP-compatible host.
Why write your own MCP server?
MCP (Model Context Protocol) is fast becoming the standard way to give an AI assistant the ability to touch the outside world. The catalog of ready servers (filesystem, GitHub, Postgres, dozens more) covers most common needs, but you'll write your own when:
- Personal assistant: you want an assistant that talks to your notes, your habits, your library, your data.
- Internal systems: opening a corporate CRM, ERP, or product API to your assistant.
- Open-source tooling: packaging a useful integration as MCP for the community.
- Control layer: wrapping a SaaS API as a controlled tool instead of handing it raw.
This guide walks you through writing one end to end, independent of any specific host. Any MCP-compatible assistant — Claude Desktop, Claude Code, Cline, Continue, future hosts — can connect to your server; the protocol is one standard.
What an MCP server provides
A server exposes three kinds of things:
- Tools: functions the assistant can call (e.g. add_task).
- Resources: data sources the assistant can read.
- Prompts: prompt templates the server suggests.
In most practical scenarios, tools are enough. This guide is tool-focused.
The wire format is JSON-RPC 2.0, carried over stdio by default. The SDK hides that detail; you work with a high-level API.
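For intuition, this is roughly what a tool invocation looks like on the wire (a hand-written illustration; the SDK constructs and parses these messages for you, and the id and argument values here are arbitrary):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "add_task",
    "arguments": { "title": "write the report", "due": "2025-01-15" }
  }
}
```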
Scenario: a personal task manager
You want to tell your assistant "add 'write the report' for tomorrow at 10", "show what's due this week", "mark X as done". You want the data on disk, in a file you control, not in some third-party cloud. Local data, your control, conversational interface through any AI assistant.
Tools:
- list_tasks(status?, due_before?) — list tasks.
- add_task(title, due?, tags?) — add a task.
- complete_task(id) — mark a task complete.
- search_tasks(query) — search by title or tag.
Storage: ~/.tasks.json, fully local.
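What ends up on disk is plain JSON, one object per task. A sample file (illustrative values; the shape matches the Task type used in both implementations below):

```json
[
  {
    "id": "a1b2c3d4",
    "title": "write the report",
    "due": "2025-01-15",
    "tags": ["work"],
    "done": false,
    "created": "2025-01-10T09:00:00.000Z"
  }
]
```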
Same server in TypeScript first, then Python.
TypeScript implementation
Setup
mkdir tasks-mcp && cd tasks-mcp
npm init -y
npm install @modelcontextprotocol/sdk
npm install -D typescript tsx @types/node
npx tsc --init

package.json:
{
"type": "module",
"scripts": {
"start": "tsx src/index.ts",
"build": "tsc"
}
}

Server code
src/index.ts:
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
CallToolRequestSchema,
ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import { readFile, writeFile } from "node:fs/promises";
import { join } from "node:path";
import { homedir } from "node:os";
import { randomUUID } from "node:crypto";
const STORE = join(homedir(), ".tasks.json");
type Task = {
id: string;
title: string;
due?: string;
tags?: string[];
done: boolean;
created: string;
};
async function loadTasks(): Promise<Task[]> {
try {
const raw = await readFile(STORE, "utf8");
return JSON.parse(raw);
} catch {
return [];
}
}
async function saveTasks(tasks: Task[]) {
await writeFile(STORE, JSON.stringify(tasks, null, 2), "utf8");
}
const server = new Server(
{ name: "tasks-mcp", version: "0.1.0" },
{ capabilities: { tools: {} } },
);
// 1. Declare which tools we expose
server.setRequestHandler(ListToolsRequestSchema, async () => ({
tools: [
{
name: "list_tasks",
description:
"Returns the to-do list. Optional status and due-before filters.",
inputSchema: {
type: "object",
properties: {
status: { type: "string", enum: ["open", "done", "all"] },
due_before: { type: "string", description: "ISO date" },
},
},
},
{
name: "add_task",
description: "Add a new task. due is optional ISO date; tags is an array of strings.",
inputSchema: {
type: "object",
properties: {
title: { type: "string", description: "Task title" },
due: { type: "string", description: "Due date, ISO format" },
tags: {
type: "array",
items: { type: "string" },
description: "Array of tags",
},
},
required: ["title"],
},
},
{
name: "complete_task",
description: "Mark a task as done.",
inputSchema: {
type: "object",
properties: { id: { type: "string", description: "Task ID" } },
required: ["id"],
},
},
{
name: "search_tasks",
description: "Search tasks by title or tag.",
inputSchema: {
type: "object",
properties: { query: { type: "string" } },
required: ["query"],
},
},
],
}));
// 2. Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (req) => {
const { name, arguments: args } = req.params as { name: string; arguments: any };
const tasks = await loadTasks();
switch (name) {
case "list_tasks": {
const status = args?.status ?? "open";
const dueBefore = args?.due_before ? new Date(args.due_before) : null;
const filtered = tasks.filter((t) => {
if (status === "open" && t.done) return false;
if (status === "done" && !t.done) return false;
if (dueBefore && t.due && new Date(t.due) >= dueBefore) return false;
return true;
});
return {
content: [{ type: "text", text: JSON.stringify(filtered, null, 2) }],
};
}
case "add_task": {
const task: Task = {
id: randomUUID().slice(0, 8),
title: args.title,
due: args.due,
tags: args.tags,
done: false,
created: new Date().toISOString(),
};
tasks.push(task);
await saveTasks(tasks);
return {
content: [{ type: "text", text: `Added: ${task.id} — ${task.title}` }],
};
}
case "complete_task": {
const t = tasks.find((x) => x.id === args.id);
if (!t) {
return {
content: [{ type: "text", text: `Task not found: ${args.id}` }],
isError: true,
};
}
t.done = true;
await saveTasks(tasks);
return {
content: [{ type: "text", text: `Done: ${t.title}` }],
};
}
case "search_tasks": {
const q = (args.query as string).toLowerCase();
const hits = tasks.filter(
(t) =>
t.title.toLowerCase().includes(q) ||
(t.tags ?? []).some((tag) => tag.toLowerCase().includes(q)),
);
return {
content: [{ type: "text", text: JSON.stringify(hits, null, 2) }],
};
}
default:
throw new Error(`Unknown tool: ${name}`);
}
});
// 3. Connect over stdio
const transport = new StdioServerTransport();
await server.connect(transport);

Three core parts:
- ListToolsRequestSchema declares the available tools and their parameter schemas.
- CallToolRequestSchema handles each tool call.
- StdioServerTransport connects the server to the protocol.
Python implementation
pip install mcp

tasks_mcp/server.py:
import json
import os
import uuid
from datetime import datetime
from pathlib import Path
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent
STORE = Path.home() / ".tasks.json"
def load_tasks() -> list[dict]:
if not STORE.exists():
return []
return json.loads(STORE.read_text("utf-8"))
def save_tasks(tasks: list[dict]) -> None:
STORE.write_text(json.dumps(tasks, indent=2, ensure_ascii=False), "utf-8")
server = Server("tasks-mcp")
@server.list_tools()
async def list_tools() -> list[Tool]:
return [
Tool(
name="list_tasks",
description="Returns the to-do list.",
inputSchema={
"type": "object",
"properties": {
"status": {"type": "string", "enum": ["open", "done", "all"]},
"due_before": {"type": "string"},
},
},
),
Tool(
name="add_task",
description="Adds a new task.",
inputSchema={
"type": "object",
"properties": {
"title": {"type": "string"},
"due": {"type": "string"},
"tags": {"type": "array", "items": {"type": "string"}},
},
"required": ["title"],
},
),
Tool(
name="complete_task",
description="Marks a task as done.",
inputSchema={
"type": "object",
"properties": {"id": {"type": "string"}},
"required": ["id"],
},
),
Tool(
name="search_tasks",
description="Searches tasks by title or tag.",
inputSchema={
"type": "object",
"properties": {"query": {"type": "string"}},
"required": ["query"],
},
),
]
@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
tasks = load_tasks()
if name == "list_tasks":
status = arguments.get("status", "open")
due_before = arguments.get("due_before")
out = []
for t in tasks:
if status == "open" and t["done"]:
continue
if status == "done" and not t["done"]:
continue
if due_before and t.get("due") and t["due"] >= due_before:
continue
out.append(t)
return [TextContent(type="text", text=json.dumps(out, indent=2, ensure_ascii=False))]
if name == "add_task":
task = {
"id": uuid.uuid4().hex[:8],
"title": arguments["title"],
"due": arguments.get("due"),
"tags": arguments.get("tags"),
"done": False,
"created": datetime.utcnow().isoformat(),
}
tasks.append(task)
save_tasks(tasks)
return [TextContent(type="text", text=f"Added: {task['id']} — {task['title']}")]
if name == "complete_task":
for t in tasks:
if t["id"] == arguments["id"]:
t["done"] = True
save_tasks(tasks)
return [TextContent(type="text", text=f"Done: {t['title']}")]
return [TextContent(type="text", text=f"Task not found: {arguments['id']}")]
if name == "search_tasks":
q = arguments["query"].lower()
hits = [
t for t in tasks
if q in t["title"].lower() or any(q in tag.lower() for tag in (t.get("tags") or []))
]
return [TextContent(type="text", text=json.dumps(hits, indent=2, ensure_ascii=False))]
raise ValueError(f"Unknown tool: {name}")
async def main():
async with stdio_server() as (read, write):
await server.run(read, write, server.create_initialization_options())
if __name__ == "__main__":
import asyncio
asyncio.run(main())

Run:
python -m tasks_mcp.server

Testing: MCP Inspector
The easiest way to exercise the server before hooking it into a host is the official MCP Inspector. Browser UI, manual tool calls, JSON schema view, error log.
npx @modelcontextprotocol/inspector node dist/index.js
# or for Python
npx @modelcontextprotocol/inspector python -m tasks_mcp.server

Verify the tool list, add a few tasks, list them, and intentionally pass a bad parameter to see how errors flow. Skipping this step means debugging silent failures inside a host.
Connecting to an AI assistant
Because MCP is standard, the same server works with any MCP-compatible host. The exact config file path varies; the structure is the same.
Typical config:
{
"mcpServers": {
"tasks": {
"command": "node",
"args": ["/full/path/to/tasks-mcp/dist/index.js"]
}
}
}

Python variant:
{
"mcpServers": {
"tasks": {
"command": "python",
"args": ["-m", "tasks_mcp.server"]
}
}
}

Add an env block for environment variables. Keep secrets in .env and reference them via ${VAR} where the host supports expansion.
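For example, an entry with an env block might look like the following (TASKS_API_KEY is a hypothetical variable for illustration; the task server in this guide reads ~/.tasks.json directly and needs no secrets, and ${VAR} expansion support varies by host):

```json
{
  "mcpServers": {
    "tasks": {
      "command": "node",
      "args": ["/full/path/to/tasks-mcp/dist/index.js"],
      "env": {
        "TASKS_API_KEY": "${TASKS_API_KEY}"
      }
    }
  }
}
```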
Per host:
- Claude Desktop — ~/Library/Application Support/Claude/claude_desktop_config.json (macOS), %APPDATA%\Claude\claude_desktop_config.json (Windows).
- Claude Code — .mcp.json in the project root, or ~/.claude/mcp.json for user-wide scope.
- Cline / Continue / other hosts — their own UI for adding MCP servers, or a similar JSON file.
- Your own assistant — community client SDKs let you wire MCP servers into LLM API calls (Anthropic, OpenAI, Google).
After the host restarts, the server is connected. "Add 'write the report' for tomorrow at 10" makes the assistant call add_task, which writes to disk and returns a summary.
Common pitfalls
Vague tool descriptions
The assistant decides when to call a tool from its description. "Add a task" is weaker than "Add a new task. due must be ISO date format; tags is an array of strings." Add per-parameter description too. This small detail is the difference between "the model uses this tool at the right moment" and "the model never calls it".
Over-broad permissions
Even local-file servers should follow least privilege; with real systems it's critical. Read-only tokens, scoped IAM roles, an API gateway restricting paths — whichever applies. A model mistake shouldn't damage live data.
Swallowed errors
When fetch / httpx or a disk op throws, surface a meaningful message. Instead of a generic 500: "Task not found: ID 12345". The assistant reads it and explains it cleanly. Silent failures are the worst UX.
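One pattern for this is a thin wrapper that converts any thrown error into a readable result with isError set, so the model sees the message instead of a dead connection. A sketch (safeCall and ToolResult are illustrative names, not SDK API; the result shape matches what the CallToolRequestSchema handler returns):

```typescript
type ToolResult = { content: { type: "text"; text: string }[]; isError?: boolean };

// Run a tool handler; on failure, return the error text as content
// with isError: true instead of letting the exception escape.
async function safeCall(fn: () => Promise<ToolResult>): Promise<ToolResult> {
  try {
    return await fn();
  } catch (err) {
    const msg = err instanceof Error ? err.message : String(err);
    return { content: [{ type: "text", text: `Error: ${msg}` }], isError: true };
  }
}
```

Wrapping each case of the tool-call switch in safeCall keeps one failing tool from taking the whole request down.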
Forgotten required
Missing required lets the model omit a parameter; the server crashes later. Keep schemas tight: complete required lists, explicit types on optional fields.
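On top of a tight schema, cheap runtime guards catch hosts that don't validate. A sketch (requireString is a hypothetical helper, not part of the SDK):

```typescript
// Fail fast with a readable message if a required string argument
// is missing or empty, instead of crashing deeper in the handler.
function requireString(args: Record<string, unknown>, key: string): string {
  const value = args[key];
  if (typeof value !== "string" || value.length === 0) {
    throw new Error(`Missing or invalid required parameter: ${key}`);
  }
  return value;
}
```

In the add_task handler, `const title = requireString(args, "title")` would then replace the bare `args.title`.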
Wrong transport
MCP supports HTTP/SSE besides stdio, but most hosts expect stdio. Start with stdio; switch only when a remote server is genuinely needed.
Concurrency safety
The assistant may call several tools in one message. Make the server thread- and async-safe. With a database, prefer a connection per call from a pool. Our example does simple sequential JSON writes; under real concurrency, file locking or write serialization would matter.
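A minimal way to serialize the JSON-file writes, assuming a single server process: chain every save onto a shared promise so two tool calls never interleave their writes. The serialize helper below is a sketch, not SDK API:

```typescript
// Shared tail of the write queue; each new operation waits for it.
let chain: Promise<unknown> = Promise.resolve();

// Queue an async operation so it runs only after all previously
// queued operations have settled, preserving submission order.
function serialize<T>(op: () => Promise<T>): Promise<T> {
  const next = chain.then(op, op);
  chain = next.catch(() => undefined); // keep the queue alive after a failure
  return next;
}
```

Wrapping saveTasks calls as `await serialize(() => saveTasks(tasks))` would make concurrent add_task and complete_task calls safe against interleaved writes (though not against lost updates, which would need re-reading the file inside the queued operation).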
Publishing and sharing
Once it works:
- npm/PyPI: add a bin field to package.json (TypeScript) so npx tasks-mcp runs.
- README: include setup and example configs.
- awesome-mcp-servers: open a PR to the community catalog.
- Private registry: for sensitive logic, publish to an internal npm/PyPI.
Common scenario ideas
The same skeleton applied to different data sources gives many useful servers:
- Personal notes / wiki: read and search local Markdown files.
- Smart home: turn lights on/off, query temperature, summarize cameras.
- Music library: recent plays, add to playlist.
- Expense ledger: add expense, total by category.
- Digital archive: book/movie/game tracking.
- Business-specific: CRM, product analytics, content calendar, ticket system.
Anything you wish your assistant could do is a candidate for an MCP server.
Continue reading
- MCP — the underlying concept.
- MCP Server Catalog — check before writing your own.
- Claude Skills Guide — how skills and MCP servers complement each other.
- Function Calling — the tool-calling behavior MCP rides on.