MCP in the Age of LLMs — How VS Code, AI, and Dev Tools Use Context

If Part 1 was the “why,” this one’s the “how.”

We’re diving into how the Model Context Protocol (MCP) is reshaping the way large language models (LLMs) and developer tools talk to each other — particularly in environments like VS Code, GitHub Copilot, and other AI-powered IDEs.

This is where context stops being an abstract idea and starts powering real, day-to-day workflows for developers.

So, grab your Espresso Macchiato, add some sugar, and let’s go.

The Rise of LLMs and Context-Driven Development

Let’s start with the obvious: AI models like GPT-4, Claude, Gemini, and others are insanely capable — but also limited.

They’re brilliant pattern matchers, but without context, they’re like a super-intelligent intern with short-term memory loss. Ask an LLM to “fix the bug in this function” without showing it the file or explaining the project, and it’ll do its best guesswork. Ask again tomorrow, and it’ll probably give you a different answer.

That’s because LLMs don’t actually know your project — they just know language. They’re trained on code and documentation, but not on your workspace, your configuration, or your intent.

That’s where context-driven development comes in. Modern dev tools are learning to act as translators between you and the model — collecting everything the model needs to make smart decisions:

  • What file you’re editing
  • Which function your cursor is inside
  • What errors are in your terminal
  • Which branch or environment you’re in
  • Even who’s on your team and how your CI/CD pipeline works

All that information becomes context — and it’s the lifeblood of intelligent development experiences.

But there’s a problem: until recently, every tool implemented its own context-passing mechanism. Enter MCP, the universal language for context exchange.

How MCP Powers Modern IDEs

Think of your IDE (say, VS Code) as a central nervous system. Every extension — Git, linter, formatter, Copilot, debugger — is a nerve ending sending signals to the brain.

In the past, each extension or plugin had to define its own way to exchange context: custom APIs, shared state, or message-passing hacks. This made AI integration brittle, inconsistent, and hard to scale.

MCP changes that by providing a shared protocol for context-aware communication.

The Core Idea

Instead of every plugin doing its own thing, IDEs and tools can use MCP to standardize context exchange:

  • The IDE (client) knows what’s happening in the workspace (active file, selection, open tabs, git diff).
  • The LLM (reached through MCP servers) receives that context, understands what’s relevant, and sends back intelligent responses — code suggestions, explanations, or actions.
  • Both sides speak the same protocol, so the system stays modular and extensible.

Essentially, MCP lets an IDE and a model collaborate like teammates who share the same whiteboard.
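To make the whiteboard metaphor concrete, here is a minimal TypeScript sketch of the workspace context a client might gather. The `EditorContext` shape and its field names are illustrative assumptions, not part of the MCP spec:

```typescript
// Illustrative only: one possible shape for the context an IDE-side MCP
// client collects before calling a tool. Field names are assumptions.
interface EditorContext {
  filePath: string;
  language: string;
  selectionRange: { start: number; end: number };
  openTabs: string[];
}

function gatherContext(): EditorContext {
  // A real host would query the editor API; here we hard-code a sample.
  return {
    filePath: "/src/auth/middleware.ts",
    language: "typescript",
    selectionRange: { start: 120, end: 180 },
    openTabs: ["/src/auth/middleware.ts", "/src/index.ts"],
  };
}
```

The point is not the exact fields, but that both sides agree on a structured shape instead of pasting raw text back and forth.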

MCP: Current State and Future Potential

Let’s look at where MCP exists today and the exciting possibilities it enables for tomorrow’s development tools.

VS Code & Copilot: MCP Support Today

Anthropic introduced MCP in late 2024 as an open standard. As of now, VS Code (from the 1.102 release) and GitHub Copilot support MCP, so you can wire up local or remote MCP servers and use their tools directly in chat/agent flows.

What this enables right now:

  • The IDE can request structured context (“What’s in the active editor?” “What tests failed?”).
  • AI tools can return structured results (code suggestions, explanations, refactors, diagnostics).
  • Both sides communicate via standardized JSON‑RPC 2.0 messages (requests, notifications, and responses).

Here’s a simplified view of how this could look architecturally:

IDE Host ───┬──> MCP Client (manages connections)
            ├──> MCP Server: Filesystem Access
            ├──> MCP Server: Git Integration
            └──> MCP Server: AI Code Analysis

MCP roles in a nutshell:

  • Host: the AI application (e.g., VS Code + Copilot, Claude Desktop) that initiates and manages the session.
  • Client: the connector inside the host that handles transport and capability discovery.
  • Server: your service exposing tools, resources, and prompts.

Communication is JSON‑RPC 2.0 (requests, notifications, responses) over stdio or streamable transports.
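Over stdio, that framing is just newline-delimited JSON: one JSON-RPC message per line. A tiny sketch (types simplified; real transports also handle notifications, batching, and errors):

```typescript
// Simplified sketch of JSON-RPC 2.0 framing over a stdio-style transport:
// one JSON object per line. The type below is a pared-down assumption.
type JsonRpcRequest = {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
};

function encodeMessage(msg: JsonRpcRequest): string {
  // Serialize to a single line terminated by "\n"
  return JSON.stringify(msg) + "\n";
}

function decodeMessages(buffer: string): JsonRpcRequest[] {
  // Split the incoming byte stream on newlines and parse each complete line
  return buffer
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line));
}

const wire = encodeMessage({ jsonrpc: "2.0", id: 1, method: "tools/list" });
// decodeMessages(wire)[0].method === "tools/list"
```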

Security heads‑up: MCP servers can execute arbitrary code. VS Code prompts for trust the first time you start a server—only use trusted servers and review configurations.

Each MCP server exposes tools and resources through the standardized protocol, enabling modular and interoperable AI-powered development experiences.

Example:

When you ask Copilot Chat: “Can you refactor the authentication middleware to use JWTs?”

VS Code’s MCP layer:

  1. Gathers the open file, cursor location, and relevant code.
  2. Wraps it in structured MCP context (e.g., `{ filePath, language, selectionRange, symbols }`).
  3. Sends it to the MCP server (the AI-powered tool).
  4. Receives a refactored code suggestion and applies it back into the editor.

That’s the difference between a “smart autocomplete” and a collaborative coding partner.

Early MCP Adoption and AI Ecosystem Potential

Anthropic’s MCP is designed to enable standardized integration between AI applications and external systems. While still new, the protocol addresses real needs in the AI tooling ecosystem.

Claude Desktop is one of the first AI applications to support MCP directly, allowing users to connect to MCP servers for enhanced capabilities. The pattern that MCP enables:

  • Servers declare what they can do (tools, resources, prompts).
  • AI applications receive structured context and capabilities.
  • Standardized JSON-RPC 2.0 communication ensures interoperability.

Quick resource example (discovery → read):

Request:

{ "jsonrpc": "2.0", "id": 1, "method": "resources/list", "params": {} }

Response:

{ "jsonrpc": "2.0", "id": 1, "result": { "resources": [{ "uri": "workspace:/README.md", "name": "Project README" }] } }

Fetch content:

{ "jsonrpc": "2.0", "id": 2, "method": "resources/read", "params": { "uri": "workspace:/README.md" } }

As the ecosystem matures, these capabilities could extend into editors, terminals, CI/CD pipelines, and documentation systems — all powered by the same standardized protocol.

Future Potential: Multi-Agent AI Systems

While not yet implemented, MCP’s design makes it a natural fit for multi-agent AI systems — the kind of system GitHub Copilot could evolve into.

Imagine if each AI capability could act as an MCP server or client:

  • A documentation agent that exposes docs as MCP resources
  • A testing agent that provides test execution as MCP tools
  • A code analysis agent that offers insights via MCP prompts

MCP’s standardized protocol could make such agents truly interoperable, sharing context through consistent JSON-RPC 2.0 messages instead of custom integrations. This represents the kind of ecosystem MCP is designed to enable.

How LLMs Leverage MCP for Smarter Development

So how does MCP actually make LLMs “smarter”? Let’s walk through a concrete example.

Say you’re using a Copilot-like agent in VS Code, and you type: “Optimize this loop to use async I/O.”

Before calling a tool, the host typically discovers capabilities:

Request:

{ "jsonrpc": "2.0", "id": 10, "method": "tools/list", "params": {} }

Response:

{
  "jsonrpc": "2.0",
  "id": 10,
  "result": {
    "tools": [{
      "name": "refactorCode",
      "description": "Refactor selected code",
      "inputSchema": { 
        "type": "object", 
        "properties": { 
          "filePath": { "type": "string" }, 
          "selectedCode": { "type": "string" } 
        }, 
        "required": ["selectedCode"] 
      }
    }]
  }
}
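A host can use that `inputSchema` to reject malformed calls before they ever reach the server. Here is a sketch that checks only required keys and top-level property names; a real client would use a full JSON Schema validator:

```typescript
// Sketch: validating tools/call arguments against the inputSchema advertised
// by tools/list. Only "required" and property names are enforced here.
type ToolSchema = {
  type: "object";
  properties: Record<string, { type: string }>;
  required?: string[];
};

function validateArgs(schema: ToolSchema, args: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const key of schema.required ?? []) {
    if (!(key in args)) errors.push(`missing required argument: ${key}`);
  }
  for (const key of Object.keys(args)) {
    if (!(key in schema.properties)) errors.push(`unknown argument: ${key}`);
  }
  return errors;
}

// Mirrors the refactorCode schema from the example above
const refactorSchema: ToolSchema = {
  type: "object",
  properties: {
    filePath: { type: "string" },
    language: { type: "string" },
    cursor: { type: "number" },
    selectedCode: { type: "string" },
  },
  required: ["selectedCode"],
};

// validateArgs(refactorSchema, { filePath: "/src/io.ts" })
//   -> ["missing required argument: selectedCode"]
```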

Behind the scenes:

  1. The IDE knows which file you’re in and your current selection.
  2. It packages that into an MCP tool call request:
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "refactorCode",
    "arguments": {
      "filePath": "/src/utils/io.ts",
      "language": "typescript",
      "cursor": 138,
      "selectedCode": "for (let i = 0; i < files.length; i++) { ... }"
    }
  }
}

  3. The MCP server (the AI-powered tool) processes this request.
  4. The tool analyzes the code snippet along with context like imports, type hints, and project conventions.
  5. It returns a structured MCP response:
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [{
      "type": "text", 
      "text": "Refactored to use Promise.all for better async performance:\n\n```typescript\nawait Promise.all(files.map(async f => process(f)))\n```"
    }]
  }
}

  6. The IDE applies the change — or shows a preview.

The magic isn’t in the LLM itself — it’s in the shared understanding of context that MCP provides. Without that, the LLM wouldn’t know where it’s editing or why the code matters.

Anatomy of an MCP-Powered LLM Workflow

Let’s break this down systematically — this is the blueprint for every modern MCP + AI integration.

  1. Context Gathering (Client Side)
  • The IDE, CLI, or app collects relevant state.
  • Examples: open files, git status, environment, logs, user commands.
  • It formats this into MCP context.
  2. Request Formation
  • The context and intent are packaged into a JSON-RPC 2.0 request:
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "analyzeError",
    "arguments": {
      "errorMessage": "TypeError: Cannot read property 'length' of undefined",
      "filePath": "/src/utils/parser.ts",
      "lineNumber": 42
    }
  }
}

  3. MCP Transport
  • The message is sent to the MCP endpoint — local or remote.
  4. Model Processing
  • The LLM or agent uses that context to perform reasoning.
  • It can also request more context (e.g., “show me dependencies”).
  5. Response + Action
  • The model returns structured results, not plain text — code diffs, insights, or next steps.
  6. Execution + Feedback
  • The IDE applies the changes or presents them for review.
  • Feedback is looped back into the system, enriching the context for future steps.

That’s the context loop — and it’s where MCP shines.
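The loop can be simulated in a few lines. Everything here is a stand-in (the `callModel` function fakes the LLM step), but it shows how feedback from one turn enriches the context for the next:

```typescript
// Toy simulation of the context loop: gather -> call -> apply + feedback.
// All names are illustrative, not part of any real MCP API.
type Context = { file: string; error?: string; history: string[] };

function gather(file: string, error?: string): Context {
  return { file, error, history: [] };
}

function callModel(ctx: Context): string {
  // Stand-in for the LLM/tool call: return a structured "action" string
  return ctx.error ? `fix:${ctx.file}` : `review:${ctx.file}`;
}

function applyAndFeedback(ctx: Context, action: string): Context {
  // Execution + feedback: the result enriches context for the next turn
  return { ...ctx, error: undefined, history: [...ctx.history, action] };
}

let ctx = gather("/src/parser.ts", "TypeError at line 42");
ctx = applyAndFeedback(ctx, callModel(ctx));
// ctx.history now records the fix action for future requests
```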

Integrating MCP with Your Dev Environment

Alright, let’s talk implementation. If you’re building an MCP server to provide tools and resources to AI applications, here’s how you’d do it with the official TypeScript SDK:

import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';

const server = new Server(
  {
    name: 'code-analysis-server',
    version: '0.1.0',
  },
  {
    capabilities: {
      tools: {},
    },
  }
);

// Define available tools (handlers are keyed by request schema, not method strings)
server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [
      {
        name: 'summarizeCode',
        description: 'Analyze and summarize code snippets',
        inputSchema: {
          type: 'object',
          properties: {
            filePath: { type: 'string' },
            codeSnippet: { type: 'string' }
          },
          required: ['codeSnippet']
        }
      }
    ]
  };
});

// Handle tool execution
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;

  if (name === 'summarizeCode') {
    const analysis = analyzeCodeSnippet(String(args?.codeSnippet ?? ''));
    return {
      content: [
        {
          type: 'text',
          text: `This code defines a ${analysis.purpose} with ${analysis.complexity} complexity.`
        }
      ]
    };
  }

  throw new Error(`Unknown tool: ${name}`);
});

// Placeholder analysis — swap in real logic here
function analyzeCodeSnippet(code: string) {
  return {
    purpose: code.includes('function') ? 'function' : 'code block',
    complexity: code.length > 200 ? 'high' : 'low',
  };
}

// Start server with stdio transport (for local usage)
const transport = new StdioServerTransport();
await server.connect(transport);

For remote scenarios, the spec defines a streamable HTTP transport — pick stdio for local host integrations and streamable HTTP for networked servers.

AI applications like Claude Desktop can then connect to your MCP server and use your tools through the standardized JSON-RPC 2.0 protocol. The MCP framework handles all the message formatting, capability negotiation, and error handling automatically.
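To try a server like this in VS Code, it can be registered in a workspace `.vscode/mcp.json` file. At the time of writing the format looks roughly like this (the server name and script path are placeholders):

```json
{
  "servers": {
    "code-analysis": {
      "type": "stdio",
      "command": "node",
      "args": ["./dist/server.js"]
    }
  }
}
```

VS Code will then prompt for trust before starting the server, per the security note above.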

Best Practices

If you’re planning to integrate or build with MCP, here are a few pro tips:

  1. Start Small:
    Begin with one command (“explain”, “refactor”, “analyze”) before scaling.
  2. Define Context Schemas:
    Decide what metadata matters most — file path, project type, user role, etc. Standardize it.
  3. Use Typed Interfaces:
    MCP works best when messages are validated. Use TypeScript interfaces or JSON schemas.
  4. Leverage Capability Discovery:
    MCP supports capability negotiation — agents can declare what they can handle.
  5. Build for Extensibility:
    Treat MCP servers like plugins — other agents or clients should be able to reuse them.

Challenges and Opportunities

Every new paradigm brings trade-offs. MCP is no exception.

Security and Privacy

Context can be sensitive. When an IDE shares state with an LLM, it might include:

  • Proprietary code
  • API keys
  • User data

MCP mitigates this by enabling selective context sharing — clients decide what context is sent and when. Still, it’s crucial to:

  • Sanitize outgoing context.
  • Mask credentials.
  • Run MCP servers locally whenever possible.

In the near future, expect MCP clients to have built-in “context firewalls” — letting you whitelist or block certain data types.
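A context firewall can start as something very simple. This sketch masks values that look like credentials before context leaves the machine; the patterns are illustrative, not exhaustive:

```typescript
// Sketch of client-side context sanitization: redact values that look
// like credentials before sending context anywhere. Patterns are examples.
const SECRET_PATTERNS = [
  /(api[_-]?key\s*[:=]\s*)["']?[\w-]{8,}["']?/gi,
  /(bearer\s+)[\w.-]{8,}/gi,
];

function sanitizeContext(text: string): string {
  let out = text;
  for (const pattern of SECRET_PATTERNS) {
    out = out.replace(pattern, "$1[REDACTED]");
  }
  return out;
}

sanitizeContext('const API_KEY = "sk_live_abcdef123456";');
// -> 'const API_KEY = [REDACTED];'
```

In practice you would layer this with allowlists per data type, so the client never forwards files or variables the user has not approved.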

Extensibility vs. Complexity

With great power comes… a lot of interfaces.

Defining context schemas that work across multiple tools is tricky. You want flexibility without chaos. The solution? Community-driven schemas.

Anthropic’s MCP specification encourages shared vocabularies for common contexts — code, document, task, and user interactions. As the ecosystem grows, expect open-source libraries to emerge around standardized schemas (and you can contribute your own).

Performance and Latency

Context-heavy requests can be big — and LLM calls aren’t cheap. To keep things snappy:

  • Cache results where possible.
  • Use incremental updates (diffs instead of full files).
  • Keep long-term memory local (e.g., embeddings or vector stores).

MCP doesn’t force a specific transport, so you can optimize around your needs — HTTP, WebSocket, or even local IPC.
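The caching advice can be sketched in a few lines: key results by a hash of the exact context sent, so repeated requests over unchanged files skip the model call entirely. All names here are illustrative:

```typescript
// Sketch of context-keyed caching: hash the outgoing context and reuse
// prior results when nothing has changed. The "model call" is a stand-in.
import { createHash } from "node:crypto";

const cache = new Map<string, string>();
let modelCalls = 0;

function analyze(context: string): string {
  const key = createHash("sha256").update(context).digest("hex");
  const hit = cache.get(key);
  if (hit !== undefined) return hit; // unchanged context: no model round-trip

  modelCalls++;
  const result = `summary(${context.length} chars)`; // stand-in for the LLM call
  cache.set(key, result);
  return result;
}

analyze("function parse() { ... }");
analyze("function parse() { ... }"); // cache hit: modelCalls stays at 1
```

The same idea extends naturally to diffs: hash per file or per region, and only re-send the regions whose hashes changed.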

The Big Opportunity: Context as a Service

Here’s the exciting part — MCP turns “context” into a first-class system service.

Imagine:

  • A shared MCP layer that feeds every AI in your company.
  • A design tool that tells your LLM how the UI is built.
  • A deployment service that informs your AI about current build states.
  • A documentation assistant that understands your project’s architecture automatically.

That’s where the ecosystem is heading: from “AI that guesses” to “AI that knows.”

For Framework Providers: Teaching LLMs *Your Way*

If you maintain a framework, library, or dev tool, MCP isn’t just for IDEs—it’s your chance to teach language models how to use your technology the right way. Instead of hoping LLMs “figure out” your conventions, you can make them explicit, codifying best practices, project structure, and workflows that matter to your users.

What you can make explicit with MCP

  • How to scaffold a new project (“Use `mycli init` instead of copying files”)
  • How to run tests, builds, or deploys (“Run `mycli test` with the right env”)
  • How to structure files and folders (“Put components in `/src/components`”)
  • Custom CLI commands, code generators, or templates
  • Project-specific linting or formatting rules
  • Integration with external services (APIs, DBs, CI/CD)

What this unlocks

  • LLMs that generate code and commands *tailored to your framework*—not generic guesses
  • Consistent onboarding for new users (“The model always uses our patterns!”)
  • Fewer support tickets about “wrong” usage or structure
  • Agents and assistants that can automate complex flows (setup, migration, debugging)

A tiny example: “Make the model speak your framework”

Suppose you want LLMs to always use your CLI for starting a dev server, not the default `npm run dev`. You can encode this in the tools your MCP server exposes (sketched here as illustrative YAML):

commands:
  startDev:
    description: Start the development server
    usage: "mycli dev"
    context:
      projectType: "my-framework"
      requiredFiles: ["my.config.js"]

When an agent asks “How do I start the app?” the MCP server responds with your custom command, not a guess.

Example MCP tool call request:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "startDev",
    "arguments": {
      "projectType": "my-framework",
      "cwd": "/Users/alex/my-app"
    }
  }
}

Example MCP server response:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "This project uses My Framework. Start the dev server with `mycli dev`."
      }
    ]
  }
}

Implementation pattern (quick start)

  1. Define your framework’s “best practices” as MCP commands and context fields.
  2. Expose an MCP server (or plugin) that responds with these conventions.
  3. Document your MCP schema so agents and IDEs can discover capabilities.
  4. (Optional) Add logic to inspect the project and adapt responses (e.g., check config files, infer structure).
  5. Encourage users to install your MCP provider for smarter LLM experiences.

By making your conventions machine-readable, you empower every LLM-based tool to be a better teacher, code generator, and assistant for your users.

This is the foundation for the next part of the series, where we’ll show how to build and integrate your own MCP server—putting you in control of the AI-powered developer experience.

A Glimpse Ahead: Building Your Own MCP Server

Now that we’ve explored MCP’s potential for the modern dev landscape — from AI assistants to future IDE integrations — the next step is taking control of it yourself.

In Part 3, we’ll build our own MCP server from scratch:

  • Define commands and contexts.
  • Integrate with an LLM.
  • Teach your AI assistant to understand your framework, your APIs, and your way of working.

It’s like teaching your tools to think the way you do.

So grab your TypeScript editor, dust off your mental model of context, and get ready — because next time, we’re building the thing that makes your LLM yours.
