Building Your Own MCP Server — Guiding LLMs and Custom Workflows

You’ve had your espresso macchiato in Part 2 — now it’s time for something stronger. Grab a flat white, because we’re rolling up our sleeves. This one’s all about building your own Model Context Protocol (MCP) server that can guide large language models (LLMs) and automate developer workflows your way.

By the end, you’ll have a working mental (and code) model of how to make LLMs understand your world — your framework, your architecture, your rules.

If you’re new here, MCP (Model Context Protocol) is the open standard that lets AI assistants connect to your dev tools and data sources through structured, context-rich interactions.

Why Build Your Own MCP Server?

In Part 2, we saw how MCP powers VS Code, Copilot, and ChatGPT-style tools. But those are general-purpose. If you’re running your own dev platform, workflow engine, or framework, you need something more specific — something that speaks your dialect.

When to build your own:

  • You have custom workflows (deployment pipelines, review systems, domain logic).
  • You want LLMs to follow your conventions, not generic ones.
  • You need a plugin system so multiple agents or services can share the same context.
  • You want to create AI copilots that understand your framework’s inner workings.

In short:

Building your own MCP server gives you a programmable interface for AI collaboration. You’re not just calling an API anymore — you’re defining the rules of your world.

Planning Your Server

Before writing code, let’s plan the architecture.

1. Define your protocol shape

At its core, MCP is a structured JSON-RPC 2.0 message exchange over stdio or HTTP transports. You’ll want to define three types of capabilities:

  • Tools: Executable functions (“createRoute”, “analyzeError”, “deployBuild”).
  • Resources: Data sources (“projectConfig”, “buildLogs”, “testResults”).
  • Prompts: Reusable templates (“codeReview”, “bugReport”, “documentation”).
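
On the wire, invoking one of these capabilities is a plain JSON-RPC request. Calling a tool, for example, uses the `tools/call` method; the tool name and arguments below are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "analyzeError",
    "arguments": { "errorMessage": "Database connection failed" }
  }
}
```

Resources and prompts follow the same pattern with `resources/read` and `prompts/get`.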

2. Capability design

Each MCP capability should provide focused, well-defined functionality:

  • Tools: Accept specific inputs (zod schemas), perform actions, return structured results.
  • Resources: Expose data URIs that clients can read (config://, logs://, docs://).
  • Prompts: Template reusable interactions with clear parameter requirements.

3. Discoverability

Design for capability discovery:

  • MCP clients can automatically discover available tools, resources, and prompts.
  • Use descriptive names, clear descriptions, and proper schema validation.
  • Group related capabilities into logical modules or plugins.

Think of it like a well-designed REST API — but for AI interactions.

Step-by-Step: Building Your Own MCP Server

Let’s walk through the process.

Step 1 — Project Setup

You can use Node.js with TypeScript for a familiar ecosystem:

mkdir my-mcp-server
cd my-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk express zod typescript ts-node @types/node @types/express
npx tsc --init

Basic folder structure:

/src
  /tools
    analyzeError.ts
    createRoute.ts
  /resources
    projectConfig.ts
  /prompts
    codeReview.ts
  server.ts

Step 2 — Create the MCP Server

Let’s scaffold a basic server:

// src/server.ts
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js';
import express from 'express';
import { z } from 'zod';

// Import tool registration functions (defined in Step 3)
import { registerAnalyzeErrorTool } from './tools/analyzeError.js';
import { registerCreateRouteTool } from './tools/createRoute.js';

const server = new McpServer({
  name: "my-framework-mcp",
  version: "1.0.0"
});

// Register tools (we'll add these next)
registerAnalyzeErrorTool(server);
registerCreateRouteTool(server);

// Set up Express and HTTP transport
const app = express();
app.use(express.json());

app.post('/mcp', async (req, res) => {
  // Note: This example reinitializes the transport for simplicity,
  // but production should reuse the transport instance for efficiency.
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined,
    enableJsonResponse: true
  });

  res.on('close', () => {
    transport.close();
  });

  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

const port = parseInt(process.env.PORT || '4000');
app.listen(port, () => {
  console.log(`🚀 MCP Server running on http://localhost:${port}/mcp`);
});

// Alternative: stdio transport for local integrations
// import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
// const transport = new StdioServerTransport();
// await server.connect(transport);

Step 3 — Defining Commands and Context

Each tool follows the same SDK pattern: register a name, a schema, and an async handler that receives validated input and returns a structured result.

Example: analyzeError

// src/tools/analyzeError.ts
import { z } from 'zod';
import type { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';

export function registerAnalyzeErrorTool(server: McpServer) {
  server.registerTool(
    'analyze-error',
    {
      title: 'Analyze Error',
      description: 'Analyze error messages from application logs',
      inputSchema: {
        errorMessage: z.string().describe('The error message to analyze'),
        projectPath: z.string().optional().describe('Path to project directory')
      },
      outputSchema: {
        status: z.enum(['success', 'error']),
        analysis: z.string().optional(),
        nextSteps: z.array(z.string()).optional(),
        message: z.string().optional()
      }
    },
    async ({ errorMessage, projectPath }) => {
      try {
        // Simulate log analysis
        const mockLogs = ['Config missing in app.json', 'Database connection failed'];
        const recentLog = mockLogs.find(l => l.toLowerCase().includes(errorMessage.toLowerCase()));
        
        if (!recentLog) {
          const output = { status: 'error', message: 'Error not found in logs' };
          return {
            content: [{ type: 'text', text: JSON.stringify(output) }],
            structuredContent: output
          };
        }

        const output = {
          status: 'success',
          analysis: `The error likely occurs due to missing config in ${recentLog}.`,
          nextSteps: ["Check config.json", "Re-run build with --verbose"]
        };
        
        return {
          content: [{ type: 'text', text: JSON.stringify(output) }],
          structuredContent: output
        };
      } catch (error) {
        const output = { status: 'error', message: error instanceof Error ? error.message : String(error) };
        return {
          content: [{ type: 'text', text: JSON.stringify(output) }],
          structuredContent: output
        };
      }
    }
  );
}

Example: createRoute

// src/tools/createRoute.ts
import { promises as fs } from 'fs';
import { z } from 'zod';
import type { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';

export function registerCreateRouteTool(server: McpServer) {
  server.registerTool(
    'create-route',
    {
      title: 'Create Route',
      description: 'Create a new route file with specified configuration',
      inputSchema: {
        path: z.string().describe('Route path (e.g., "users", "api/auth")'),
        auth: z.enum(['public', 'private', 'admin']).describe('Authentication requirement'),
        projectRoot: z.string().describe('Project root directory')
      },
      outputSchema: {
        status: z.enum(['success', 'error']),
        created: z.string().optional(),
        message: z.string().optional()
      }
    },
    async ({ path, auth, projectRoot }) => {
      try {
        const filePath = `${projectRoot}/routes/${path}.ts`;
        const template = `export const route = { 
  path: "${path}", 
  auth: "${auth}",
  handler: async (req, res) => {
    // Route implementation here
    res.json({ message: 'Hello from ${path}' });
  }
};`;

        await fs.writeFile(filePath, template);
        
        const output = { status: 'success', created: filePath };
        return {
          content: [{ type: 'text', text: JSON.stringify(output) }],
          structuredContent: output
        };
      } catch (error) {
        const output = { status: 'error', message: error instanceof Error ? error.message : String(error) };
        return {
          content: [{ type: 'text', text: JSON.stringify(output) }],
          structuredContent: output
        };
      }
    }
  );
}

Step 4 — Connecting to MCP Clients

Now for the fun part: connecting your MCP server to AI clients that can communicate with language models.

Your MCP server never talks to a language model itself. It exposes tools and capabilities; hosts like Claude Desktop or VS Code handle the model communication.

Here’s how to connect to your server using the MCP Inspector for testing:

# Install MCP Inspector for testing
npx @modelcontextprotocol/inspector

# Connect to your HTTP server:
# Open the local URL the Inspector prints in your terminal,
# then enter your server URL: http://localhost:4000/mcp

For production, clients like Claude Desktop can launch your server via config. Note that this config runs the server as a child process over stdio, so use the stdio transport variant shown at the end of Step 2:

// Claude Desktop config (claude_desktop_config.json; its exact location varies by OS)
{
  "mcpServers": {
    "my-framework-mcp": {
      "command": "node",
      "args": ["path/to/your/server.js"]
    }
  }
}

The client handles the LLM communication – your server just provides structured tools and context that the AI can reason about and use.

Tool Discovery Example

Request:

{ "jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {} }

Response:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      { "name": "analyze-error", "description": "Analyze error messages" },
      { "name": "create-route", "description": "Create a new route file" }
    ]
  }
}

Step 5 — Dynamic Tool Registration

You can dynamically add tools and resources to keep your server extensible.

// src/plugins/myFrameworkPlugin.ts
import type { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';

// registerAnalyzeCodeTool, registerMigrationTool, and loadProjectConfig are
// placeholders for your framework's own helpers
export function registerFrameworkTools(server: McpServer) {
  // Register multiple related tools
  registerCreateRouteTool(server);
  registerAnalyzeCodeTool(server);
  registerMigrationTool(server);
  
  // Register framework-specific resources
  server.registerResource(
    'project-config',
    'config://project.json',
    {
      title: 'Project Configuration',
      description: 'Current project configuration and settings',
      mimeType: 'application/json'
    },
    async (uri) => {
      const config = await loadProjectConfig();
      return {
        contents: [{
          uri: uri.href,
          text: JSON.stringify(config, null, 2)
        }]
      };
    }
  );
}

// Register plugin tools in server.ts
import { registerFrameworkTools } from './plugins/myFrameworkPlugin.js';

registerFrameworkTools(server); // already registers create-route via the plugin
registerAnalyzeErrorTool(server);

// Add a code review prompt example
server.registerPrompt(
  'code-review',
  {
    title: 'Code Review Assistant',
    description: 'Generate code review feedback for files',
    argsSchema: {
      filePath: z.string().describe('Path to the file to review'),
      language: z.string().optional().describe('Programming language')
    }
  },
  async ({ filePath, language }) => ({
    messages: [{
      role: 'user',
      content: {
        type: 'text',
        text: `Please review this ${language || 'code'} file for best practices, potential bugs, and improvements: ${filePath}`
      }
    }]
  })
);

Now your MCP server becomes a framework-aware AI gateway with modular capabilities.

MCP also supports exposing resources and prompts alongside tools. For example, you can register a resource like `workspace:/README.md` to make docs accessible to LLM hosts.

Best Practices

When you start scaling this setup, here’s what matters most:

Security

  • Never expose your full workspace context to remote models.
  • Filter context before sending (no secrets, tokens, credentials).
  • Implement your own access policies — MCP doesn’t yet define role-based access control (RBAC).

Error Handling

  • Use consistent response schemas: `{ status, message, data }`.
  • Return meaningful diagnostics — not just stack traces.
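
A consistent envelope can be enforced with a small wrapper. Here is a minimal sketch, assuming the `{ status, message, data }` shape described above; `ToolEnvelope` and `withEnvelope` are illustrative names, not part of the MCP SDK:

```typescript
// Every tool result lands in one of two shapes: { status, data } or
// { status, message }. (Illustrative helper, not part of the MCP SDK.)
type ToolEnvelope<T> =
  | { status: 'success'; data: T }
  | { status: 'error'; message: string };

// Wrap any async tool body so failures become structured errors,
// not raw stack traces leaking to the client.
async function withEnvelope<T>(fn: () => Promise<T>): Promise<ToolEnvelope<T>> {
  try {
    return { status: 'success', data: await fn() };
  } catch (e) {
    return { status: 'error', message: e instanceof Error ? e.message : String(e) };
  }
}

// Example: a failing tool body produces a structured error envelope.
withEnvelope(async () => { throw new Error('config missing'); })
  .then((result) => console.log(result));
```

Every tool handler can then return the same discriminated union, which keeps client-side handling uniform.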

Modularity

  • Treat each command like a microservice.
  • Keep plugins independent and versioned.

Testing

  • Mock LLM responses with fixtures.
  • Write integration tests for commands and context assembly.
  • Use snapshot tests to validate JSON schemas for tools and resources.

Observability

  • Log command invocations and responses.
  • Measure latency and token usage if you’re calling remote LLMs.
  • Consider structured logging (e.g., pino or winston) for traceable JSON logs.
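
Structured logging doesn’t require a library to get started. A dependency-free sketch of one-JSON-object-per-line logging (field names are illustrative; pino or winston would replace this in production):

```typescript
// Emit one JSON object per line so logs stay machine-parseable and traceable.
function logInvocation(tool: string, durationMs: number, status: 'success' | 'error'): string {
  const entry = {
    ts: new Date().toISOString(),
    event: 'tool_invocation',
    tool,
    durationMs,
    status,
  };
  const line = JSON.stringify(entry);
  console.log(line);
  return line;
}

logInvocation('analyze-error', 12, 'success');
```

Call this at the start and end of each tool handler, and latency and failure rates fall out of a simple log query.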

Tool Client Example

Here’s a minimal example of connecting to your MCP server as a client using the SDK:

import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js';

const client = new Client({ name: 'example-client', version: '1.0.0' });
const transport = new StreamableHTTPClientTransport(new URL('http://localhost:4000/mcp'));
await client.connect(transport);

const tools = await client.listTools();
console.log(tools);

Real-World Use Cases

Your MCP server can be the bridge between AI and your ecosystem.

  1. Custom IDE Extensions
    Your VS Code or JetBrains plugin can call your MCP server for framework-specific tasks (e.g., “generate a new component with tests”).
  2. AI-Powered Documentation
    Auto-generate developer docs from project state — the MCP server gathers context, and the LLM writes.
  3. Workflow Automation
    Integrate with build systems or CI pipelines. LLMs can triage failed builds or suggest fixes based on logs.
  4. Domain-Specific Assistants
    Teach the model to follow your company’s internal frameworks, naming conventions, or compliance requirements.

Scaling and Evolving Your MCP Server

  • Scale horizontally: Use message queues or event buses for high-volume workloads.
  • Cache intelligently: Store recent contexts or completions.
  • Version your schema: Breaking changes should be predictable.
  • Automate docs: Generate OpenAPI-style specs for your MCP endpoints.
  • Iterate fast: Treat your MCP commands like evolving APIs — keep them small and composable.
  • Containerize your server (Docker) and scale via worker threads or clusters for parallel processing.

Wrapping Up (and a Hint of Part 4)

You now have the blueprint for your own MCP-powered ecosystem — one where LLMs don’t just autocomplete code but understand your intent, enforce your standards, and extend your workflows.

Try building your first MCP tool today — start with a simple ‘hello-world’ tool and test it using MCP Inspector at localhost.

And you did it all over a flat white. ☕

But we’re not done yet.

Next time, we’ll dive into Part 4: The Future of Context-Aware AI Development — how MCP, LLMs, and autonomous agents will shape the next generation of collaborative coding. Think less “AI assistant,” more “AI teammate.”

MCP is evolving quickly — always check modelcontextprotocol.io/spec for the latest updates and SDK versions.
