MCP is becoming the standard way to give AI agents access to external tools and data sources. Here's what it is, how it works, and how to use it in your agent stack.
What MCP actually is
MCP is an open protocol developed by Anthropic that standardises how AI models connect to external tools and data sources. Before MCP, every AI integration was bespoke: if you wanted your agent to query a database, you'd write custom code to pass data in and out of the model. If you wanted it to search the web, you'd build a custom tool calling pattern. Every tool was a one-off integration.
MCP replaces that with a standard interface. An MCP server is a small service that exposes a set of tools to an AI model in a consistent format. The model can discover what tools are available, call them, and receive results — without the developer writing custom glue code for each integration.
Think of MCP servers as plugins for AI agents. The same way a browser extension adds a capability to a browser, an MCP server adds a capability to an agent.
The three components
MCP Hosts
The host is the AI environment that connects to MCP servers. Claude Code is an MCP host. Claude Desktop is an MCP host. Any AI application built to the MCP specification can act as a host. The host manages the connections to servers and passes tool calls back and forth.
MCP Servers
The server is where a specific capability lives. There are MCP servers for:
- Web search
- File system access
- Database queries (PostgreSQL, SQLite)
- Business apps (Slack, GitHub, Google Drive, Salesforce)
- Custom APIs and internal tools you build yourself
Each server exposes a list of tools — named functions with defined inputs and outputs — that the AI model can call.
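Concretely, each tool in a server's listing is described by a name, a description, and a JSON Schema for its inputs. The sketch below shows what a hypothetical web_search tool description might look like; the name and fields of this particular tool are illustrative, not taken from a real server:

```typescript
// Illustrative shape of one tool entry as exposed by an MCP server:
// a name, a human-readable description, and a JSON Schema for inputs.
const webSearchTool = {
  name: "web_search",
  description: "Search the web and return the top results",
  inputSchema: {
    type: "object",
    properties: { query: { type: "string" } },
    required: ["query"]
  }
};
```

The model uses the description and input schema to decide when and how to call the tool, so both are worth writing carefully.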
MCP Clients
The client is the component inside the host that handles communication with servers. In practice, you rarely interact with this directly — it's handled by the host application.
How a tool call works in practice
When an AI agent needs to use a capability provided by an MCP server, the flow looks like this:
1. Agent decides it needs to use a tool (e.g. "I need to search the web for this")
2. Agent sends a tool call request to the MCP client: `{ tool: "web_search", params: { query: "..." } }`
3. MCP client routes the request to the correct server
4. MCP server executes the tool and returns the result: `{ result: "Search results here..." }`
5. Agent receives the result and continues reasoning
From the agent's perspective, it just calls a tool and gets a result. The MCP layer handles discovery, routing, and execution.
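Under the hood, MCP messages travel as JSON-RPC 2.0: the tool call goes out as a `tools/call` request and the result comes back as a matching response. The payloads below are an illustrative sketch of those two messages (the tool name and values are examples, not output from a real server):

```typescript
// Sketch of a JSON-RPC 2.0 tools/call request the client sends to a server.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: { name: "web_search", arguments: { query: "latest MCP spec" } }
};

// Sketch of the matching response; the id ties it back to the request.
const response = {
  jsonrpc: "2.0",
  id: 1,
  result: { content: [{ type: "text", text: "Search results here..." }] }
};
```

Because the envelope is standard JSON-RPC, any host or server that speaks the protocol can interoperate regardless of language.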
Building your own MCP server
For most standard integrations — web search, GitHub, Google Drive — there are already community-built MCP servers you can use. The interesting case for AI automation developers is building custom MCP servers for internal tools and client-specific data sources.
A minimal MCP server in Node.js looks like this:
```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "my-custom-server",
  version: "1.0.0"
});

// Tool inputs are declared with Zod; the SDK converts the schema
// to JSON Schema for tool discovery.
server.tool(
  "get_customer_data",
  { customer_id: z.string() },
  async ({ customer_id }) => {
    // fetchFromCRM is your own CRM client (a placeholder here).
    const data = await fetchFromCRM(customer_id);
    return { content: [{ type: "text", text: JSON.stringify(data) }] };
  }
);

// Connect over stdio so an MCP host can launch and talk to this server.
const transport = new StdioServerTransport();
await server.connect(transport);
```
This server exposes one tool — get_customer_data — that takes a customer ID and returns CRM data. Once running, any MCP-compatible agent can discover and call this tool.
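To make the server available to a host, you register it in the host's configuration. For Claude Desktop, a minimal entry in `claude_desktop_config.json` might look like this (the server name and file path are illustrative):

```json
{
  "mcpServers": {
    "my-custom-server": {
      "command": "node",
      "args": ["server.js"]
    }
  }
}
```

The host launches the command and communicates with it over stdio, so the server needs no network setup for local use.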
MCP in a multi-agent architecture
In a multi-agent system, MCP servers are typically attached to specialist agents, not the orchestration layer. This keeps the architecture clean:
| Agent | Role | MCP servers attached |
|---|---|---|
| Interface agent (Flora) | Fast intake, delegates tasks | None — stays lightweight |
| Orchestration agent (Tony) | Plans and sequences subtasks | None — stays focused on coordination |
| Research agent | Web search, data retrieval | Web search MCP, internal data MCP |
| CRM agent | Customer data access | Salesforce MCP, CRM database MCP |
| Comms agent | Email, Slack, calendar | Gmail MCP, Slack MCP, Calendar MCP |
This separation means you can add a new capability by creating a new specialist agent with the relevant MCP server attached — without modifying your orchestration logic.
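In code, this assignment can be kept as a simple registry. The sketch below is hypothetical (agent and server names follow the table above but are illustrative): a map from specialist agents to their MCP servers, plus a lookup that tells the orchestrator which agent owns a capability.

```typescript
// Hypothetical registry: which MCP servers each specialist agent carries.
// Interface and orchestration agents deliberately carry none.
const agentServers: Record<string, string[]> = {
  flora: [],
  tony: [],
  research: ["web-search-mcp", "internal-data-mcp"],
  crm: ["salesforce-mcp", "crm-db-mcp"],
  comms: ["gmail-mcp", "slack-mcp", "calendar-mcp"]
};

// Find the specialist agent that owns a given MCP server.
function agentFor(serverName: string): string | undefined {
  return Object.keys(agentServers).find(agent =>
    agentServers[agent].includes(serverName)
  );
}
```

Adding a capability then means adding one entry to the registry, with no change to how the orchestrator delegates.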
MCP vs webhooks vs direct API calls
When deciding how to connect your agents to external services, you have three main options:
| Method | Best for | Tradeoffs |
|---|---|---|
| MCP server | Tools an agent will call repeatedly; capabilities you want to reuse across agents | Requires an MCP-compatible host; slight setup overhead |
| Webhook | Triggering workflows from external events; N8N integration | One-directional; great for "fire and receive result" patterns |
| Direct API call | One-off integrations; simple data retrieval | No standardisation; each integration is custom code |
For most agent capabilities you're building into a production system, MCP is the right choice. It's more maintainable than custom API calls and more flexible than webhooks.
Getting started
The Anthropic MCP documentation is the best starting point. The official SDK supports both TypeScript and Python. For common integrations, check the community MCP server registry — there are hundreds of pre-built servers covering most business tools you're likely to need.
Once you have MCP servers running, they plug directly into Claude Code, Claude Desktop, and any agent platform built on the MCP standard — including Agentic Vessel's specialist agent layer.
Agentic Vessel supports agent integration via MCP, webhook, HTTP, and A2A. See how to connect your agents.
Build your AI agent workflows today.
Join developers already automating complex tasks with Agentic Vessel.
Register as a Developer