
Building AI Coding Assistants with Claude and MCP: A Complete Guide


Exploring AI Coding Assistants for Developers

The emergence of AI coding assistants has revolutionized software development workflows. Anthropic’s Claude, combined with the Model Context Protocol (MCP), provides a powerful foundation for building intelligent coding assistants that understand context, maintain conversation history, and integrate seamlessly with development environments. This comprehensive guide walks you through building production-ready AI coding assistants leveraging these cutting-edge technologies.

Understanding Model Context Protocol (MCP)

Model Context Protocol is an open standard developed by Anthropic that enables AI models to securely access external data sources and tools. Unlike traditional API integrations, MCP provides a standardized way for AI assistants to interact with your development environment, version control systems, databases, and other tools while maintaining security and context awareness.

MCP operates on a client-server architecture (illustrated in the sketch after this list) where:

  • MCP Servers expose specific capabilities (reading files, executing commands, accessing APIs)
  • MCP Clients (like Claude Desktop or your custom application) connect to these servers
  • Protocol Layer handles secure communication and context management
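
To make these roles concrete, here is a simplified sketch of the JSON-RPC 2.0 exchange that takes place when a client invokes a server tool. The exact fields are defined by the MCP specification; the read_file tool and its arguments below are purely illustrative.

# Simplified, illustrative MCP tool invocation (JSON-RPC 2.0), shown as Python dicts.
# The "read_file" tool and its arguments are hypothetical, not part of the protocol.

# Client -> Server: request that the server run one of its tools
tool_call_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "read_file",
        "arguments": {"path": "src/app.py"}
    }
}

# Server -> Client: the tool's output, returned as typed content blocks
tool_call_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "print('hello world')"}]
    }
}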

Architecture Overview

A robust AI coding assistant built with Claude and MCP typically consists of the following layers (sketched in code after this list):

  • Claude API integration for natural language understanding and code generation
  • MCP servers for tool access (filesystem, Git, package managers)
  • Context management layer for maintaining conversation state
  • Security layer for sandboxed code execution
  • Integration layer for CI/CD pipelines and development tools
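
The rest of this guide builds each of these layers in turn. As a rough orientation only, the pieces map onto a structure like the hypothetical skeleton below; none of these names come from the Claude or MCP SDKs.

# Hypothetical skeleton of the layers listed above -- orientation only.
from dataclasses import dataclass, field

@dataclass
class AssistantStack:
    claude_model: str = "claude-3-5-sonnet-20241022"           # Claude API integration
    mcp_servers: list = field(default_factory=list)            # tool access (filesystem, Git, ...)
    conversation_history: list = field(default_factory=list)   # context management layer
    sandboxed_execution: bool = True                            # security layer
    cicd_integrations: list = field(default_factory=list)      # CI/CD and dev-tool integrations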

Setting Up Your Development Environment

Before building your AI coding assistant, ensure you have the necessary prerequisites installed:

# Install Node.js and npm (required for MCP)
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt-get install -y nodejs

# Install Python 3.10+ for Claude SDK
sudo apt-get update
sudo apt-get install -y python3.10 python3-pip

# Install Docker for containerized deployments
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Install the MCP TypeScript SDK (optional; only needed for Node-based MCP servers)
npm install -g @modelcontextprotocol/sdk

# Install the Python SDKs used in this guide: the Claude API client and MCP
pip3 install anthropic mcp
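
Before going further, it is worth confirming that the Anthropic SDK can reach the API. A minimal sanity check, assuming ANTHROPIC_API_KEY is already exported in your shell:

# sanity_check.py -- verifies the API key and the anthropic package install
import os
import anthropic

client = anthropic.Anthropic(api_key=os.getenv("ANTHROPIC_API_KEY"))

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=64,
    messages=[{"role": "user", "content": "Reply with the single word: ready"}]
)

# The response content is a list of blocks; the first one is text here
print(message.content[0].text)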

Building Your First MCP Server

Let’s create an MCP server that provides filesystem access and Git operations for your coding assistant:

import asyncio
import json
from typing import Any
from mcp.server import Server, NotificationOptions
from mcp.server.models import InitializationOptions
import mcp.server.stdio
import mcp.types as types
import subprocess
import os

server = Server("coding-assistant-mcp")

@server.list_tools()
async def handle_list_tools() -> list[types.Tool]:
    """List available tools for the coding assistant."""
    return [
        types.Tool(
            name="read_file",
            description="Read contents of a file from the project directory",
            inputSchema={
                "type": "object",
                "properties": {
                    "path": {
                        "type": "string",
                        "description": "Relative path to the file"
                    }
                },
                "required": ["path"]
            }
        ),
        types.Tool(
            name="write_file",
            description="Write or update a file in the project directory",
            inputSchema={
                "type": "object",
                "properties": {
                    "path": {"type": "string"},
                    "content": {"type": "string"}
                },
                "required": ["path", "content"]
            }
        ),
        types.Tool(
            name="git_status",
            description="Get Git repository status",
            inputSchema={"type": "object", "properties": {}}
        ),
        types.Tool(
            name="run_tests",
            description="Execute project tests",
            inputSchema={
                "type": "object",
                "properties": {
                    "test_path": {"type": "string"}
                }
            }
        )
    ]

@server.call_tool()
async def handle_call_tool(
    name: str, arguments: dict[str, Any]
) -> list[types.TextContent]:
    """Handle tool execution requests."""
    
    if name == "read_file":
        try:
            with open(arguments["path"], "r") as f:
                content = f.read()
            return [types.TextContent(type="text", text=content)]
        except Exception as e:
            return [types.TextContent(type="text", text=f"Error: {str(e)}")]
    
    elif name == "write_file":
        try:
            # Only create parent directories when the path actually has one
            parent_dir = os.path.dirname(arguments["path"])
            if parent_dir:
                os.makedirs(parent_dir, exist_ok=True)
            with open(arguments["path"], "w") as f:
                f.write(arguments["content"])
            return [types.TextContent(type="text", text="File written successfully")]
        except Exception as e:
            return [types.TextContent(type="text", text=f"Error: {str(e)}")]
    
    elif name == "git_status":
        result = subprocess.run(
            ["git", "status", "--short"],
            capture_output=True,
            text=True
        )
        return [types.TextContent(type="text", text=result.stdout)]
    
    elif name == "run_tests":
        test_path = arguments.get("test_path", "tests/")
        result = subprocess.run(
            ["pytest", test_path, "-v"],
            capture_output=True,
            text=True
        )
        return [types.TextContent(
            type="text",
            text=f"Exit Code: {result.returncode}\n\n{result.stdout}\n{result.stderr}"
        )]
    
    raise ValueError(f"Unknown tool: {name}")

async def main():
    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
        await server.run(
            read_stream,
            write_stream,
            InitializationOptions(
                server_name="coding-assistant-mcp",
                server_version="1.0.0",
                capabilities=server.get_capabilities(
                    notification_options=NotificationOptions(),
                    experimental_capabilities={}
                )
            )
        )

if __name__ == "__main__":
    asyncio.run(main())

Integrating Claude API with MCP

Now let’s build the client application that connects Claude with our MCP server:

import anthropic
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
import os
from contextlib import AsyncExitStack

class CodingAssistant:
    def __init__(self, api_key: str, mcp_server_script: str):
        self.client = anthropic.Anthropic(api_key=api_key)
        self.mcp_server_script = mcp_server_script
        self.conversation_history = []
        
    async def initialize_mcp(self):
        """Initialize the MCP connection over stdio."""
        server_params = StdioServerParameters(
            command="python",
            args=[self.mcp_server_script],
            env=None
        )

        # stdio_client and ClientSession are async context managers, so keep
        # them open for the assistant's lifetime with an AsyncExitStack.
        self.exit_stack = AsyncExitStack()
        read_stream, write_stream = await self.exit_stack.enter_async_context(
            stdio_client(server_params)
        )
        self.mcp_session = await self.exit_stack.enter_async_context(
            ClientSession(read_stream, write_stream)
        )

        await self.mcp_session.initialize()
        self.available_tools = (await self.mcp_session.list_tools()).tools

    async def close(self):
        """Tear down the MCP session and its transport."""
        await self.exit_stack.aclose()
        
    def format_tools_for_claude(self):
        """Convert MCP tools to Claude API format."""
        return [{
            "name": tool.name,
            "description": tool.description,
            "input_schema": tool.inputSchema
        } for tool in self.available_tools]
    
    async def execute_tool(self, tool_name: str, tool_input: dict):
        """Execute MCP tool and return result."""
        result = await self.mcp_session.call_tool(tool_name, tool_input)
        return result.content[0].text if result.content else ""
    
    async def chat(self, user_message: str):
        """Process user message with Claude and execute tools as needed."""
        self.conversation_history.append({
            "role": "user",
            "content": user_message
        })
        
        tools = self.format_tools_for_claude()
        
        while True:
            response = self.client.messages.create(
                model="claude-3-5-sonnet-20241022",
                max_tokens=4096,
                tools=tools,
                messages=self.conversation_history
            )
            
            if response.stop_reason == "end_turn":
                final_response = next(
                    (block.text for block in response.content 
                     if hasattr(block, "text")),
                    None
                )
                self.conversation_history.append({
                    "role": "assistant",
                    "content": response.content
                })
                return final_response
            
            elif response.stop_reason == "tool_use":
                self.conversation_history.append({
                    "role": "assistant",
                    "content": response.content
                })
                
                tool_results = []
                for block in response.content:
                    if block.type == "tool_use":
                        result = await self.execute_tool(
                            block.name,
                            block.input
                        )
                        tool_results.append({
                            "type": "tool_result",
                            "tool_use_id": block.id,
                            "content": result
                        })
                
                self.conversation_history.append({
                    "role": "user",
                    "content": tool_results
                })

async def main():
    assistant = CodingAssistant(
        api_key=os.getenv("ANTHROPIC_API_KEY"),
        mcp_server_script="mcp_server.py"
    )
    
    await assistant.initialize_mcp()
    
    print("AI Coding Assistant Ready!")
    print("Type 'exit' to quit\n")
    
    while True:
        user_input = input("You: ")
        if user_input.lower() == "exit":
            break

        response = await assistant.chat(user_input)
        print(f"\nAssistant: {response}\n")

    # Close the MCP session and transport cleanly on exit
    await assistant.close()

if __name__ == "__main__":
    asyncio.run(main())

Containerizing Your AI Coding Assistant

For production deployments, containerization ensures consistency and portability:

FROM python:3.11-slim

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    nodejs \
    npm \
    && rm -rf /var/lib/apt/lists/*

# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Install MCP SDK
RUN npm install -g @modelcontextprotocol/sdk

# Copy application code
COPY mcp_server.py .
COPY coding_assistant.py .
COPY config.yaml .

# Create workspace directory
RUN mkdir -p /workspace
WORKDIR /workspace

# Set environment variables
ENV PYTHONUNBUFFERED=1

CMD ["python", "/app/coding_assistant.py"]
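
The Dockerfile copies a requirements.txt that is not shown elsewhere in this guide. A minimal version consistent with the Python code used here might look like the following; pin exact versions for production builds:

# requirements.txt (illustrative -- pin versions for production)
anthropic
mcp
redis
prometheus-client
pyyaml
pytest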

With the image defined, a Compose file wires the assistant to its supporting services:

# docker-compose.yml
version: '3.8'

services:
  coding-assistant:
    build: .
    container_name: ai-coding-assistant
    environment:
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
    volumes:
      - ./workspace:/workspace
      - ~/.gitconfig:/root/.gitconfig:ro
    stdin_open: true
    tty: true
    networks:
      - assistant-network

  redis:
    image: redis:7-alpine
    container_name: assistant-cache
    volumes:
      - redis-data:/data
    networks:
      - assistant-network

volumes:
  redis-data:

networks:
  assistant-network:
    driver: bridge
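
With both files in place, docker compose up --build builds the image and starts the assistant alongside the Redis cache on the shared bridge network; the mounted ./workspace directory is where the assistant reads and writes project files.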

Configuration and Best Practices

Create a configuration file to manage your assistant’s behavior:

# config.yaml
assistant:
  name: "DevOps AI Assistant"
  model: "claude-3-5-sonnet-20241022"
  max_tokens: 4096
  temperature: 0.7

mcp:
  servers:
    - name: "filesystem"
      command: "python"
      args: ["mcp_server.py"]
      
    - name: "git"
      command: "python"
      args: ["mcp_git_server.py"]

security:
  allowed_paths:
    - "/workspace"
    - "/app/projects"
  
  blocked_commands:
    - "rm -rf"
    - "dd if="
    - "mkfs"
  
  max_file_size: 10485760  # 10MB

logging:
  level: "INFO"
  file: "/var/log/coding-assistant.log"
  rotation: "daily"

rate_limiting:
  requests_per_minute: 50
  tokens_per_hour: 100000
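
The configuration only helps if the assistant enforces it. Below is a minimal sketch of loading config.yaml and applying the security settings before a tool executes; the helper names are illustrative and not part of the MCP or Claude SDKs:

# config_guard.py -- illustrative enforcement of the security section above
import os
import yaml

with open("config.yaml") as f:
    CONFIG = yaml.safe_load(f)

SECURITY = CONFIG["security"]

def path_allowed(path: str) -> bool:
    """Allow file access only under the configured allowed_paths."""
    real = os.path.realpath(path)
    return any(real.startswith(os.path.realpath(p)) for p in SECURITY["allowed_paths"])

def command_allowed(command: str) -> bool:
    """Reject commands containing any blocked pattern."""
    return not any(blocked in command for blocked in SECURITY["blocked_commands"])

def file_size_allowed(path: str) -> bool:
    """Enforce the configured max_file_size before reading a file."""
    return os.path.getsize(path) <= SECURITY["max_file_size"]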

Troubleshooting Common Issues

MCP Connection Failures

If your MCP server fails to connect, verify the server is running and accessible:

# Test MCP server independently
python mcp_server.py &

# Check if process is running
ps aux | grep mcp_server

# Test with MCP inspector
npx @modelcontextprotocol/inspector python mcp_server.py

Claude API Rate Limits

Implement exponential backoff for rate limit handling:

import time
import random
from anthropic import RateLimitError

def call_claude_with_retry(func, max_retries=3):
    for attempt in range(max_retries):
        try:
            return func()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            wait_time = (2 ** attempt) + random.random()
            time.sleep(wait_time)

Tool Execution Timeout

Add timeout handling for long-running operations:

import asyncio

async def execute_with_timeout(coro, timeout=30):
    try:
        return await asyncio.wait_for(coro, timeout=timeout)
    except asyncio.TimeoutError:
        return "Operation timed out after {timeout} seconds"

Advanced Features and Optimizations

Context Caching

Implement caching to reduce API calls and improve response times:

import redis
import hashlib
import json

class ContextCache:
    def __init__(self, redis_url="redis://localhost:6379"):
        self.redis = redis.from_url(redis_url)
        
    def get_cached_response(self, prompt: str, context: dict):
        cache_key = hashlib.sha256(
            f"{prompt}{json.dumps(context, sort_keys=True)}".encode()
        ).hexdigest()
        
        cached = self.redis.get(cache_key)
        return json.loads(cached) if cached else None
    
    def cache_response(self, prompt: str, context: dict, response: str, ttl=3600):
        cache_key = hashlib.sha256(
            f"{prompt}{json.dumps(context, sort_keys=True)}".encode()
        ).hexdigest()
        
        self.redis.setex(cache_key, ttl, json.dumps(response))
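
In practice, you would consult the cache before calling Claude and populate it afterwards. A brief usage sketch, assuming the CodingAssistant and ContextCache classes defined above; the context dict is whatever lightweight state you key responses on (for example, the active repository and file):

# Illustrative wiring of ContextCache around a chat call
cache = ContextCache()

async def cached_chat(assistant, prompt: str, context: dict):
    cached = cache.get_cached_response(prompt, context)
    if cached is not None:
        return cached                      # served from Redis, no API call

    response = await assistant.chat(prompt)
    cache.cache_response(prompt, context, response, ttl=3600)
    return response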

Multi-Repository Support

Enable your assistant to work across multiple repositories:

# Initialize multi-repo workspace
mkdir -p ~/ai-assistant-workspace
cd ~/ai-assistant-workspace

# Clone repositories
git clone https://github.com/your-org/backend.git
git clone https://github.com/your-org/frontend.git
git clone https://github.com/your-org/infrastructure.git

# Start the assistant with workspace context (pass the API key through and keep stdin/tty open)
docker run -it -e ANTHROPIC_API_KEY -v $(pwd):/workspace ai-coding-assistant

Monitoring and Observability

Implement comprehensive logging and metrics collection:

import logging
from prometheus_client import Counter, Histogram, start_http_server

# Metrics
request_counter = Counter('assistant_requests_total', 'Total requests')
tool_execution_time = Histogram('tool_execution_seconds', 'Tool execution time')
api_calls = Counter('claude_api_calls_total', 'Total Claude API calls')

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler('/var/log/assistant.log'),
        logging.StreamHandler()
    ]
)

logger = logging.getLogger(__name__)

# Start metrics server
start_http_server(8000)
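
These metrics only become useful once they are wired into the request path. One possible way to instrument tool calls, assuming the execute_tool method from the CodingAssistant class above:

# Illustrative instrumentation of the assistant's tool calls
async def instrumented_execute_tool(assistant, tool_name: str, tool_input: dict):
    request_counter.inc()                  # count every tool request
    with tool_execution_time.time():       # record wall-clock execution time
        result = await assistant.execute_tool(tool_name, tool_input)
    logger.info("Tool %s completed", tool_name)
    return result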

Conclusion

Building AI coding assistants with Claude and MCP opens up powerful possibilities for automating development workflows. The combination of Claude’s advanced language understanding and MCP’s standardized tool integration provides a robust foundation for creating intelligent, context-aware assistants.

Key takeaways:

  • MCP provides a secure, standardized way to extend AI capabilities with custom tools
  • Claude’s tool use capabilities enable sophisticated multi-step reasoning and execution
  • Proper containerization and configuration management are essential for production deployments
  • Implementing caching, rate limiting, and monitoring ensures reliable operation at scale
  • Security considerations around file access and command execution must be carefully designed

As you build and deploy your AI coding assistant, continue iterating based on user feedback and monitoring metrics. The MCP ecosystem is rapidly evolving, with new servers and capabilities being added regularly. Stay engaged with the community at github.com/modelcontextprotocol to leverage the latest developments.

Start building your AI coding assistant today and transform your development workflow with intelligent automation powered by Claude and MCP.

Have Queries? Join https://launchpass.com/collabnix

Collabnix Team The Collabnix Team is a diverse collective of Docker, Kubernetes, and IoT experts united by a passion for cloud-native technologies. With backgrounds spanning across DevOps, platform engineering, cloud architecture, and container orchestration, our contributors bring together decades of combined experience from various industries and technical domains.