Comprehensive guide to integrating Model Context Protocol (MCP) servers with Azcore for extended agent capabilities.
## 🔌 What is MCP?

Model Context Protocol (MCP) is an open protocol that standardizes how AI applications connect to external data sources and tools. It provides a unified interface for:

- **Tool Integration**: Exposing functions and APIs as tools
- **Resource Access**: Providing access to files, databases, and APIs
- **Prompt Templates**: Sharing reusable prompts
- **Sampling**: Requesting AI completions
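To make the first bullet concrete, here is a small, stdlib-only sketch of what "exposing functions as tools" means: a hypothetical decorator records each function's name, docstring, and parameter types as the kind of tool description an MCP server reports to clients. Nothing here is real MCP SDK API; it is purely illustrative.

```python
import inspect
import typing

# Hypothetical registry: maps tool names to MCP-style tool descriptions.
TOOL_REGISTRY: dict = {}

def tool(fn):
    """Register fn and derive a minimal JSON-Schema-like description."""
    hints = typing.get_type_hints(fn)
    hints.pop("return", None)
    TOOL_REGISTRY[fn.__name__] = {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "inputSchema": {
            "type": "object",
            "properties": {p: {"type": t.__name__} for p, t in hints.items()},
        },
    }
    return fn

@tool
def search_papers(query: str) -> str:
    """Search for academic papers."""
    return f"results for {query}"

print(list(TOOL_REGISTRY))  # ['search_papers']
print(TOOL_REGISTRY["search_papers"]["inputSchema"]["properties"])
```

A real MCP server does essentially this bookkeeping for you and serves the resulting descriptions to any connected client.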
### MCP in Azcore

Azcore's MCP integration lets you connect to one or more MCP servers and use their tools directly:

```python
from azcore.agents import MCPTeamBuilder
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

# Connect to an MCP server and use its tools
mcp_team = (MCPTeamBuilder("research_team")
    .with_llm(ChatOpenAI(model="gpt-4o-mini"))
    .with_mcp_server("python", ["path/to/mcp_server.py"])
    .with_prompt("You are a research assistant with access to MCP tools")
    .build())

# Use the team with MCP tools
result = mcp_team({"messages": [HumanMessage(content="Search for AI papers")]})
```
## 🎯 Why Use MCP?

### Traditional Approach vs MCP

#### Traditional Tool Integration

```python
import os

from langchain_core.tools import tool

# ❌ Manual tool implementation for each service
# (GitHubAPI, NPMAPI, and format_results are stand-ins for custom code)
@tool
def search_github(query: str) -> str:
    """Search GitHub repositories."""
    # Custom implementation
    api = GitHubAPI(token=os.getenv("GITHUB_TOKEN"))
    results = api.search(query)
    return format_results(results)

@tool
def search_npm(query: str) -> str:
    """Search NPM packages."""
    # Another custom implementation
    api = NPMAPI()
    results = api.search(query)
    return format_results(results)

# ...and so on for every service
```
#### MCP Approach

```python
# ✅ Connect to MCP servers - tools are discovered automatically
mcp_team = (MCPTeamBuilder("dev_team")
    .with_llm(llm)
    # GitHub MCP server
    .with_mcp_server("npx", ["-y", "@modelcontextprotocol/server-github"])
    # NPM MCP server
    .with_mcp_server("npx", ["-y", "@modelcontextprotocol/server-npm"])
    # Filesystem MCP server
    .with_mcp_server("npx", ["-y", "@modelcontextprotocol/server-filesystem"])
    .build())

# All tools from all servers are automatically available
```
### Benefits

- **Standardization**: One protocol for all integrations
- **Reusability**: Use existing MCP servers instead of writing custom integrations
- **Separation of Concerns**: Tool logic stays separate from agent logic
- **Community Ecosystem**: Growing library of ready-made MCP servers
- **Flexibility**: Easy to add or remove tool sources
- **Maintenance**: Update tools without changing agent code
## 🏗️ Architecture Overview

### MCP Integration Architecture

```
┌─────────────────────────────────────────────────────┐
│                 Azcore Application                  │
│                                                     │
│  ┌───────────────────────────────────────────────┐  │
│  │          MCPTeamBuilder (Fluent API)          │  │
│  └──────────────────────┬────────────────────────┘  │
│                         │                           │
│  ┌──────────────────────▼────────────────────────┐  │
│  │             MultiServerMCPClient              │  │
│  │         (from langchain-mcp-adapters)         │  │
│  └─────────┬─────────────────────────┬───────────┘  │
│            │                         │              │
└────────────┼─────────────────────────┼──────────────┘
             │                         │
             │ STDIO                   │ SSE (HTTP)
             │                         │
   ┌─────────▼───────┐       ┌─────────▼───────┐
   │  MCP Server 1   │       │  MCP Server 2   │
   │     (Local)     │       │    (Remote)     │
   │                 │       │                 │
   │  - Tool A       │       │  - Tool C       │
   │  - Tool B       │       │  - Tool D       │
   └─────────────────┘       └─────────────────┘
```
### Component Layers

- **Application Layer**: Your Azcore agents and workflows
- **Integration Layer**: MCPTeamBuilder and tool management
- **Protocol Layer**: MCP client and transport mechanisms
- **Server Layer**: MCP servers exposing tools and resources
## ✨ Key Features

### 1. Multiple Transport Types

Connect to MCP servers via different transport mechanisms:

```python
# STDIO: local process communication
.with_mcp_server("python", ["server.py"])

# SSE: remote HTTP-based communication
.with_mcp_server(url="http://localhost:8000/sse", transport="sse")
```
### 2. Automatic Tool Discovery

Tools are automatically discovered from MCP servers:

```python
mcp_team = (MCPTeamBuilder("team")
    .with_llm(llm)
    .with_mcp_server("python", ["server.py"])
    .build())

# Get discovered tools
tool_names = mcp_team.get_tool_names()
print(f"Available tools: {tool_names}")
# Output: Available tools: ['search_files', 'read_file', 'write_file']
```
### 3. Multi-Server Support

Connect to multiple MCP servers simultaneously:

```python
mcp_team = (MCPTeamBuilder("multi_server_team")
    .with_llm(llm)
    .with_mcp_server("python", ["github_server.py"])    # GitHub tools
    .with_mcp_server("python", ["database_server.py"])  # Database tools
    .with_mcp_server("python", ["api_server.py"])       # API tools
    .build())

# The agent has access to tools from all three servers
```
### 4. Reinforcement Learning Integration

Optimize tool selection with RL:

```python
from azcore.rl import RLManager
from azcore.rl.rewards import HeuristicRewardCalculator

# Set up RL
rl_manager = RLManager(
    tool_names=["mcp_search", "mcp_read", "mcp_analyze"],
    q_table_path="rl_data/mcp_q_table.pkl"
)
reward_calc = HeuristicRewardCalculator()

# Build an MCP team with RL
mcp_team = (MCPTeamBuilder("smart_team")
    .with_llm(llm)
    .with_mcp_server("python", ["server.py"])
    .with_rl(rl_manager, reward_calc)  # Enable RL optimization
    .build())
```
### 5. Graceful Error Handling

Handle connection failures gracefully:

```python
mcp_team = (MCPTeamBuilder("resilient_team")
    .with_llm(llm)
    .with_mcp_server("python", ["server1.py"])
    .with_mcp_server("python", ["server2.py"])
    .skip_failed_servers(True)           # Continue even if some servers fail
    .test_connection_before_build(True)  # Test before building
    .build())
```
### 6. Async/Sync Support

Works seamlessly in both async and sync contexts:

```python
# Synchronous usage
result = mcp_team({"messages": [HumanMessage(content="Query")]})

# Asynchronous usage
result = await mcp_team.ainvoke({"messages": [HumanMessage(content="Query")]})
```
## 🚀 Transport Mechanisms

### STDIO Transport (Local)

For local MCP server processes:

```python
mcp_team = (MCPTeamBuilder("local_team")
    .with_llm(llm)
    .with_mcp_server(
        command="python",
        args=["path/to/server.py"],
        env={"API_KEY": "xxx"},  # Optional environment variables
        timeout=10               # Connection timeout (seconds)
    )
    .build())
```

**Use Cases:**

- Local file system access
- Local database connections
- Development and testing
- Secure, isolated environments
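Under the hood, the STDIO transport carries newline-delimited JSON-RPC 2.0 messages over the server process's stdin and stdout. The stdlib-only sketch below builds one illustrative `tools/list` round trip entirely in memory; the message shapes are simplified and the tool names are made up:

```python
import json

def encode(message: dict) -> bytes:
    """Serialize one JSON-RPC message as a single line, as sent over stdio."""
    return (json.dumps(message) + "\n").encode("utf-8")

# Client side: ask the server which tools it offers.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
wire = encode(request)

# Server side: read the line, dispatch, and write a response line back.
decoded = json.loads(wire.decode("utf-8"))
response = {
    "jsonrpc": "2.0",
    "id": decoded["id"],
    "result": {"tools": [{"name": "read_file"}, {"name": "write_file"}]},
}

print([t["name"] for t in response["result"]["tools"]])  # ['read_file', 'write_file']
```

In practice the MCP client (here, `MultiServerMCPClient` via langchain-mcp-adapters) handles this framing for you; the sketch only shows why STDIO servers are just ordinary subprocesses.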
### SSE Transport (Remote)

For remote HTTP-based MCP servers:

```python
mcp_team = (MCPTeamBuilder("remote_team")
    .with_llm(llm)
    .with_mcp_server(
        url="http://mcp-server.example.com/sse",
        transport="sse",
        timeout=30,          # Connection timeout
        sse_read_timeout=60  # Read timeout for streaming
    )
    .build())
```

**Use Cases:**

- Cloud-hosted MCP servers
- Microservice architectures
- Shared tool services
- Production deployments
## 🔍 Tool Discovery

### Automatic Discovery Process

```
1. Connect to MCP Server(s)
            ↓
2. Query Available Tools
            ↓
3. Convert to LangChain Tools
            ↓
4. Register with Agent
            ↓
5. Tools Available for Use
```
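Steps 3 and 4 above can be sketched in plain Python: each discovered tool description is wrapped in a local proxy function that forwards calls back to the server. `call_server` below is a hypothetical stand-in for the real client round trip, not an actual Azcore or MCP API:

```python
def call_server(tool_name: str, arguments: dict) -> str:
    # Placeholder for the JSON-RPC "tools/call" request a real client sends.
    return f"{tool_name} called with {arguments}"

def make_proxy(tool_spec: dict):
    """Wrap one discovered tool description as a plain local function."""
    def proxy(**kwargs):
        return call_server(tool_spec["name"], kwargs)
    proxy.__name__ = tool_spec["name"]
    proxy.__doc__ = tool_spec.get("description", "")
    return proxy

# Tool descriptions as a server might report them (names are made up).
discovered = [
    {"name": "search_files", "description": "Search the filesystem"},
    {"name": "read_file", "description": "Read one file"},
]

# Step 4: register one proxy per discovered tool.
registry = {spec["name"]: make_proxy(spec) for spec in discovered}

print(sorted(registry))  # ['read_file', 'search_files']
print(registry["read_file"](path="notes.txt"))
```

langchain-mcp-adapters performs the equivalent conversion to real LangChain tool objects, preserving each tool's schema so the LLM can choose among them.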
### Tool Discovery Example

```python
# Create an MCP team
mcp_team = (MCPTeamBuilder("discovery_team")
    .with_llm(llm)
    .with_mcp_server("python", ["file_server.py"])
    .build())

# Query discovered tools
all_tools = mcp_team.get_tool_names()
mcp_tools = mcp_team.get_mcp_tool_names()
server_count = mcp_team.get_mcp_server_count()

print(f"Total tools: {len(all_tools)}")
print(f"MCP tools: {len(mcp_tools)}")
print(f"Connected servers: {server_count}")
print(f"Tool names: {mcp_tools}")
```
### Manual Tool Fetching

```python
import asyncio

# Configure the builder without building the team
team_builder = (MCPTeamBuilder("preview")
    .with_llm(llm)
    .with_mcp_server("python", ["server.py"]))

# Fetch tools asynchronously
tools = asyncio.run(team_builder.fetch_mcp_tools())
print(f"Available MCP tools: {[t.name for t in tools]}")

# Or synchronously
tools = team_builder.get_mcp_tools()
```
## 💡 Use Cases

### 1. File System Operations

```python
# MCP server for file operations
filesystem_team = (MCPTeamBuilder("filesystem_team")
    .with_llm(llm)
    .with_mcp_server("npx", ["-y", "@modelcontextprotocol/server-filesystem"])
    .with_prompt("You are a file management assistant")
    .build())

# Use for file operations
result = filesystem_team({
    "messages": [HumanMessage(content="List all Python files in /project")]
})
```
### 2. Database Access

```python
# MCP server for database operations
db_team = (MCPTeamBuilder("database_team")
    .with_llm(llm)
    .with_mcp_server("python", ["database_mcp_server.py"])
    .with_prompt("You are a database analyst")
    .build())

# Query the database
result = db_team({
    "messages": [HumanMessage(content="Show me user statistics")]
})
```
### 3. API Integration

```python
# Multiple API MCP servers
api_team = (MCPTeamBuilder("api_team")
    .with_llm(llm)
    .with_mcp_server("npx", ["-y", "@modelcontextprotocol/server-github"])
    .with_mcp_server("python", ["slack_mcp_server.py"])
    .with_mcp_server("python", ["jira_mcp_server.py"])
    .with_prompt("You are a project management assistant")
    .build())

# Unified access to multiple services
result = api_team({
    "messages": [HumanMessage(content="Create a GitHub issue and notify team on Slack")]
})
```
### 4. Research and Analysis

```python
# Research tools via MCP
research_team = (MCPTeamBuilder("research_team")
    .with_llm(llm)
    .with_mcp_server("python", ["arxiv_mcp_server.py"])
    .with_mcp_server("python", ["wikipedia_mcp_server.py"])
    .with_mcp_server("python", ["news_mcp_server.py"])
    .with_prompt("You are a research assistant with access to multiple sources")
    .build())

# Comprehensive research
result = research_team({
    "messages": [HumanMessage(content="Research recent advances in quantum computing")]
})
```
### 5. Development Workflows

```python
# Dev tools via MCP
dev_team = (MCPTeamBuilder("dev_team")
    .with_llm(llm)
    .with_mcp_server("npx", ["-y", "@modelcontextprotocol/server-filesystem"])
    .with_mcp_server("npx", ["-y", "@modelcontextprotocol/server-github"])
    .with_mcp_server("python", ["code_analysis_server.py"])
    .with_prompt("You are a software development assistant")
    .build())

# Full dev workflow
result = dev_team({
    "messages": [HumanMessage(content="Analyze code quality and create PR with fixes")]
})
```
## 🚀 Getting Started

### Quick Start

```python
from azcore.agents import MCPTeamBuilder
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

# 1. Create the LLM
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# 2. Build the MCP team
mcp_team = (MCPTeamBuilder("quick_start_team")
    .with_llm(llm)
    .with_mcp_server("python", ["path/to/mcp_server.py"])
    .with_prompt("You are a helpful assistant with MCP tools")
    .build())

# 3. Use the team
result = mcp_team({
    "messages": [HumanMessage(content="Your query here")]
})
print(result["messages"][-1].content)
```
### Installation

```bash
# Install Azcore with MCP support
pip install azcore[mcp]

# Or install the pieces separately
pip install azcore
pip install langchain-mcp-adapters
```
### Prerequisites

- Python 3.12+
- LangChain
- langchain-mcp-adapters
- asyncio support
## 📊 Comparison with Other Integration Methods

| Feature | MCP Integration | Custom Tools | API Wrappers |
|---|---|---|---|
| Standardization | ✅ Open protocol | ❌ Custom each time | ⚠️ Per-API |
| Reusability | ✅ Use existing servers | ❌ Write from scratch | ⚠️ Limited |
| Tool Discovery | ✅ Automatic | ❌ Manual registration | ❌ Manual |
| Multi-Source | ✅ Easy multi-server | ⚠️ Requires merging | ⚠️ Separate integrations |
| Maintenance | ✅ Update server only | ❌ Update all agents | ⚠️ Update wrappers |
| Community | ✅ Growing ecosystem | ❌ Build everything | ⚠️ Per-API communities |
| Development Speed | ✅ Fast | ❌ Slow | ⚠️ Medium |
| Flexibility | ✅ High | ✅ High | ⚠️ Medium |
## 🎓 Best Practices

### 1. Use MCP for External Integrations

```python
# ✅ Use MCP for external tools
mcp_team = (MCPTeamBuilder("external_team")
    .with_llm(llm)
    .with_mcp_server("python", ["github_server.py"])    # External API
    .with_mcp_server("python", ["database_server.py"])  # External DB
    .build())
```
### 2. Combine MCP with Custom Tools

```python
from langchain_core.tools import tool

# Custom business-logic tool
@tool
def calculate_profit(revenue: float, cost: float) -> float:
    """Calculate profit margin as a percentage of revenue."""
    return (revenue - cost) / revenue * 100

# MCP for external data
mcp_team = (MCPTeamBuilder("hybrid_team")
    .with_llm(llm)
    .with_tools([calculate_profit])                       # Custom tool
    .with_mcp_server("python", ["sales_data_server.py"])  # MCP for data
    .build())
```
### 3. Enable Connection Testing

```python
# ✅ Test connections before building
mcp_team = (MCPTeamBuilder("production_team")
    .with_llm(llm)
    .with_mcp_server(url="http://mcp-server.example.com/sse", transport="sse")
    .test_connection_before_build(True)  # Test first
    .skip_failed_servers(True)           # Graceful degradation
    .build())
```
### 4. Use Environment Variables for Configuration

```python
import os

# ✅ Configure via environment variables
mcp_team = (MCPTeamBuilder("configurable_team")
    .with_llm(llm)
    .with_mcp_server(
        "python",
        ["server.py"],
        env={
            "API_KEY": os.getenv("API_KEY"),
            "DB_URL": os.getenv("DATABASE_URL")
        }
    )
    .build())
```
### 5. Monitor MCP Tool Usage

```python
import logging

# ✅ Log tool availability
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

mcp_team = (MCPTeamBuilder("monitored_team")
    .with_llm(llm)
    .with_mcp_server("python", ["server.py"])
    .build())

# Log available tools
logger.info(f"MCP tools: {mcp_team.get_mcp_tool_names()}")
logger.info(f"Server count: {mcp_team.get_mcp_server_count()}")
```
## 🎯 Summary

MCP Integration in Azcore provides:

- **Standardized Protocol**: Connect to any MCP server
- **Multiple Transports**: STDIO for local servers, SSE for remote ones
- **Automatic Discovery**: Tools are auto-discovered from servers
- **Multi-Server Support**: Connect to multiple servers simultaneously
- **RL Optimization**: Intelligent tool selection
- **Production-Ready**: Error handling, connection testing, graceful degradation
- **Flexible API**: Fluent builder pattern
- **Community Ecosystem**: Reuse existing MCP servers

MCP makes it easy to extend your agents with external tools and data sources without writing custom integration code for each service.