Introducing the Model Context Protocol
https://www.anthropic.com/news/model-context-protocol
Today, we're open-sourcing the Model Context Protocol (MCP), a new standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments. Its aim is to help frontier models produce better, more relevant responses.
As AI assistants gain mainstream adoption, the industry has invested heavily in model capabilities, achieving rapid advances in reasoning and quality. Yet even the most sophisticated models are constrained by their isolation from data—trapped behind information silos and legacy systems. Every new data source requires its own custom implementation, making truly connected systems difficult to scale.
MCP addresses this challenge. It provides a universal, open standard for connecting AI systems with data sources, replacing fragmented integrations with a single protocol. The result is a simpler, more reliable way to give AI systems access to the data they need.
Model Context Protocol
The Model Context Protocol is an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools. The architecture is straightforward: developers can either expose their data through MCP servers or build AI applications (MCP clients) that connect to these servers.
Today, we're introducing three major components of the Model Context Protocol for developers:
- The Model Context Protocol specification and SDKs
- Local MCP server support in the Claude Desktop apps
- An open-source repository of MCP servers
Claude 3.5 Sonnet is adept at quickly building MCP server implementations, making it easy for organizations and individuals to rapidly connect their most important datasets with a range of AI-powered tools. To help developers start exploring, we’re sharing pre-built MCP servers for popular enterprise systems like Google Drive, Slack, GitHub, Git, Postgres, and Puppeteer.
Early adopters like Block and Apollo have integrated MCP into their systems, while development tools companies including Zed, Replit, Codeium, and Sourcegraph are working with MCP to enhance their platforms—enabling AI agents to better retrieve relevant information to further understand the context around a coding task and produce more nuanced and functional code with fewer attempts.
"At Block, open source is more than a development model—it’s the foundation of our work and a commitment to creating technology that drives meaningful change and serves as a public good for all,” said Dhanji R. Prasanna, Chief Technology Officer at Block. “Open technologies like the Model Context Protocol are the bridges that connect AI to real-world applications, ensuring innovation is accessible, transparent, and rooted in collaboration. We are excited to partner on a protocol and use it to build agentic systems, which remove the burden of the mechanical so people can focus on the creative.”
Instead of maintaining separate connectors for each data source, developers can now build against a standard protocol. As the ecosystem matures, AI systems will maintain context as they move between different tools and datasets, replacing today's fragmented integrations with a more sustainable architecture.
Getting started
Developers can start building and testing MCP connectors today. All Claude.ai plans support connecting MCP servers to the Claude Desktop app.
Claude for Work customers can begin testing MCP servers locally, connecting Claude to internal systems and datasets. We'll soon provide developer toolkits for deploying remote production MCP servers that can serve your entire Claude for Work organization.
To start building:
- Install pre-built MCP servers through the Claude Desktop app (see the example configuration just after this list)
- Follow our quickstart guide to build your first MCP server
- Contribute to our open-source repositories of connectors and implementations
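For the first step, Claude Desktop reads local server definitions from its claude_desktop_config.json file (on macOS under ~/Library/Application Support/Claude/). Below is a minimal sketch that wires in the official Filesystem server via npx; the directory path is a placeholder you would replace with your own:

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/allowed/directory"
      ]
    }
  }
}

After saving the file and restarting Claude Desktop, the server's tools should appear in the chat interface.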
An open community
We’re committed to building MCP as a collaborative, open-source project and ecosystem, and we’re eager to hear your feedback. Whether you’re an AI tool developer, an enterprise looking to leverage existing data, or an early adopter exploring the frontier, we invite you to build the future of context-aware AI together.
Specification
https://model-context-protocol.github.io/specification/
https://modelcontextprotocol.io/introduction
MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.
Why MCP?
MCP helps you build agents and complex workflows on top of LLMs. LLMs frequently need to integrate with data and tools, and MCP provides:
- A growing list of pre-built integrations that your LLM can directly plug into
- The flexibility to switch between LLM providers and vendors
- Best practices for securing your data within your infrastructure
General architecture
At its core, MCP follows a client-server architecture where a host application can connect to multiple servers (a minimal code sketch follows the component list below):
[Architecture diagram: a host with an MCP client (Claude, IDEs, tools) speaks the MCP protocol to MCP Servers A, B, and C running on your computer. Servers A and B access local Data Sources A and B, while Server C reaches Remote Service C over the internet via web APIs.]
- MCP Hosts: Programs like Claude Desktop, IDEs, or AI tools that want to access data through MCP
- MCP Clients: Protocol clients that maintain 1:1 connections with servers
- MCP Servers: Lightweight programs that each expose specific capabilities through the standardized Model Context Protocol
- Local Data Sources: Your computer’s files, databases, and services that MCP servers can securely access
- Remote Services: External systems available over the internet (e.g., through APIs) that MCP servers can connect to
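To make these roles concrete, here is a minimal sketch of an MCP server written with the official python-sdk's FastMCP helper (shown in more detail later on this page). The server name and tool are illustrative, not part of the spec:

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Demo")

@mcp.tool()
def read_note(name: str) -> str:
    """Illustrative local data source: read a note from disk."""
    with open(f"notes/{name}.txt") as f:
        return f.read()

if __name__ == "__main__":
    # A host's MCP client typically launches this process and talks to it
    # over stdio; FastMCP's run() uses the stdio transport by default.
    mcp.run()

In the terms above, Claude Desktop or an IDE is the MCP Host, the protocol client inside it holds the 1:1 connection, and this process is the MCP Server exposing a local data source.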
Video tutorial
https://www.bilibili.com/video/BV13ZAPehEPm/?spm_id_from=333.337.search-card.all.click&vd_source=57e261300f39bf692de396b55bf8c41b
MCP server directories
https://blog.csdn.net/xinghen1993/article/details/145934417
Many sites now list MCP Servers, MCP Clients, and related resources. A few recommendations:
1) The official docs include examples you can try: Example Servers - Model Context Protocol
2) pulsemcp: https://www.pulsemcp.com/. This platform indexes a large number of MCP Servers and also introduces MCP Clients.
3) smithery: https://smithery.ai/. This platform indexes many MCP Servers and provides copy-ready configuration commands for a range of MCP Clients.
4) These GitHub repositories also curate useful MCP tools: awesome-mcp-servers/README-zh.md at main · punkpeye/awesome-mcp-servers · GitHub
Here are a few mentioned in the official docs:
Filesystem: secure file operations with configurable access controls, keeping file access safe and well-scoped.
Git: tools to read, search, and manipulate Git repositories, helping developers manage code versions.
GitHub: repository management and file operations, with GitHub API integration for convenient interaction with the platform.
GitLab: GitLab API integration supporting project management and team collaboration.
Brave Search: web and local search via Brave's Search API.
Fetch: web content fetching and conversion optimized for LLM use, making it easy for models to ingest web pages.
Puppeteer: browser automation and web scraping.
Memory: a knowledge-graph-based persistent memory system for storing and retrieving information.
EverArt: AI image generation using a variety of models.
Sequential Thinking: dynamic problem solving through thought sequences, helping models work through complex problems.
Jira: integrates with the Jira project management tool to operate on and query tasks, issue tracking, and workflows. Team members can pull Jira project information (task progress, issue status, and so on) into LLM interactions through the MCP server, improving collaboration.
SonarQube: code quality management. With SonarQube integration you can analyze project code for vulnerabilities, code smells, complexity, and other issues, helping teams raise code quality, follow coding standards, and reduce maintenance cost.
ESLint: syntax checking and style enforcement for JavaScript and TypeScript. With ESLint integration, code can be checked against a given style guide during development, catching syntax errors and potential problems early and improving maintainability and readability.
Tableau: with Tableau integration you can use its data visualization capabilities to present data as charts and graphs. Users can interact with Tableau through an MCP server to create and manage visualization reports for analysis and insight.
https://smithery.ai/server/@apappascs/mcp-servers-hub
https://modelcontextprotocol.io/examples
https://github.com/modelcontextprotocol/servers/tree/main
https://github.com/punkpeye/awesome-mcp-servers
LangChain integration
https://github.com/langchain-ai/langchain-mcp-adapters
The library also allows you to connect to multiple MCP servers and load tools from them:
# math_server.py
...

# weather_server.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Weather")

@mcp.tool()
async def get_weather(location: str) -> str:
    """Get weather for location."""
    return "It's always sunny in New York"

if __name__ == "__main__":
    mcp.run(transport="sse")

Run the weather server:
python weather_server.py
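The math_server.py elided above is not included in this excerpt; here is a plausible sketch, consistent with the stdio configuration the client below uses (the specific tools are illustrative):

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

if __name__ == "__main__":
    # The "math" entry in the client config below launches this script over stdio
    mcp.run(transport="stdio")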
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o")

async with MultiServerMCPClient(
    {
        "math": {
            "command": "python",
            # Make sure to update to the full absolute path to your math_server.py file
            "args": ["/path/to/math_server.py"],
            "transport": "stdio",
        },
        "weather": {
            # Make sure you start your weather server on port 8000
            "url": "http://localhost:8000/sse",
            "transport": "sse",
        },
    }
) as client:
    agent = create_react_agent(model, client.get_tools())
    math_response = await agent.ainvoke({"messages": "what's (3 + 5) x 12?"})
    weather_response = await agent.ainvoke({"messages": "what is the weather in nyc?"})
python-sdk
https://github.com/modelcontextprotocol/python-sdk?tab=readme-ov-file#sqlite-explorer
from mcp.server.fastmcp import FastMCP
import sqlite3

mcp = FastMCP("SQLite Explorer")

@mcp.resource("schema://main")
def get_schema() -> str:
    """Provide the database schema as a resource"""
    conn = sqlite3.connect("database.db")
    schema = conn.execute(
        "SELECT sql FROM sqlite_master WHERE type='table'"
    ).fetchall()
    return "\n".join(sql[0] for sql in schema if sql[0])

@mcp.tool()
def query_data(sql: str) -> str:
    """Execute SQL queries safely"""
    conn = sqlite3.connect("database.db")
    try:
        result = conn.execute(sql).fetchall()
        return "\n".join(str(row) for row in result)
    except Exception as e:
        return f"Error: {str(e)}"
from mcp import ClientSession, StdioServerParameters, types
from mcp.client.stdio import stdio_client

# Create server parameters for stdio connection
server_params = StdioServerParameters(
    command="python",  # Executable
    args=["example_server.py"],  # Optional command line arguments
    env=None,  # Optional environment variables
)

# Optional: create a sampling callback
async def handle_sampling_message(
    message: types.CreateMessageRequestParams,
) -> types.CreateMessageResult:
    return types.CreateMessageResult(
        role="assistant",
        content=types.TextContent(
            type="text",
            text="Hello, world! from model",
        ),
        model="gpt-3.5-turbo",
        stopReason="endTurn",
    )

async def run():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(
            read, write, sampling_callback=handle_sampling_message
        ) as session:
            # Initialize the connection
            await session.initialize()

            # List available prompts
            prompts = await session.list_prompts()

            # Get a prompt
            prompt = await session.get_prompt("example-prompt", arguments={"arg1": "value"})

            # List available resources
            resources = await session.list_resources()

            # List available tools
            tools = await session.list_tools()

            # Read a resource
            content, mime_type = await session.read_resource("file://some/path")

            # Call a tool
            result = await session.call_tool("tool-name", arguments={"arg1": "value"})

if __name__ == "__main__":
    import asyncio
    asyncio.run(run())
Dify integration
https://blog.csdn.net/roamingcode/article/details/145901328
https://glama.ai/mcp/servers/2yy6kotoxb
https://github.com/AI-FE/dify-mcp-server
