A Clear Intro to MCP (Model Context Protocol) with Code Examples

MCP is a way to democratize access to tools for AI agents. In this article, we cover the fundamental components of MCP, how they work together, and a code example of how MCP works in practice.

As the race to move AI agents from prototype to production heats up, the need for a standardized way for agents to call tools across different providers is pressing. This transition to a standardized approach to agent tool calling is similar to what we saw with REST APIs. Before they existed, developers had to deal with a mess of proprietary protocols just to pull data from different services. REST brought order to chaos, enabling systems to talk to each other in a consistent way. MCP (Model Context Protocol) aims to do what its name suggests: provide context to AI models in a standard way. Without it, we’re headed towards tool-calling mayhem, where multiple incompatible versions of “standardized” tool calls crop up simply because there’s no shared way for agents to organize, share, and invoke tools. MCP gives us a shared language and democratizes tool calling.

One thing I’m personally excited about is how tool-calling standards like MCP can actually make AI systems safer. With easier access to well-tested tools, more companies can avoid reinventing the wheel, which reduces security risks and minimizes the chance of malicious code. As AI systems start scaling in 2025, these are valid concerns.

As I dove into MCP, I noticed a huge gap in the documentation. There’s plenty of high-level “what does it do” content, but when you actually want to understand how it works, the resources start to fall short, especially for those who aren’t developers by trade. It’s either high-level explainers or the source code itself.

In this piece, I’m going to break MCP down for a broader audience, making the concepts and functionality clear and digestible. If you’re able, follow along in the coding section; if not, everything is well explained in natural language above the code snippets.

An Analogy to Understand MCP: The Restaurant

Let’s imagine the concept of MCP as a restaurant where we have:

The Host = The restaurant building (the environment where the agent runs)

The Server = The kitchen (where tools live)

The Client = The waiter (who sends tool requests)

The Agent = The customer (who decides what tool to use)

The Tools = The recipes (the code that gets executed)

The Components of MCP

Host

This is where the agent operates. In our analogy, it’s the restaurant building; in MCP, it’s wherever your agents or LLMs actually run. If you’re running Ollama locally, your machine is the host. If you’re using Claude or GPT, then Anthropic or OpenAI are the hosts.

Client

This is the environment that sends tool call requests from the agent. Think of it as the waiter who takes your order and delivers it to the kitchen. In practical terms, it’s the application or interface where your agent runs. The client passes tool call requests to the Server using MCP.

Server

This is the kitchen where recipes, or tools, are housed. It centralizes tools so agents can access them easily. Servers can be local (spun up by users) or remote (hosted by companies offering tools). Tools on a server are typically either grouped by function or integration. For instance, all Slack-related tools can be on a “Slack server,” or all messaging tools can be grouped together on a “messaging server”. That decision is based on architectural and developer preferences.

Agent

The “brains” of the operation. Powered by an LLM, it decides which tools to call to complete a task. When it determines a tool is needed, it initiates a request to the server. The agent doesn’t need to natively understand MCP, because it learns how to use each tool through the metadata associated with it. This metadata tells the agent the protocol for calling the tool and the execution method. It is important to note, though, that the platform or agent framework needs to support MCP so that tool calls are handled automatically. Otherwise, it is up to the developer to write the translation logic: parsing the metadata from the schema, forming tool call requests in MCP format, mapping the requests to the correct function, executing the code, and returning the result to the agent in MCP-compliant format.
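To make that concrete, below is a rough sketch of the metadata a server might expose for a single tool. The field names (name, description, inputSchema) follow the MCP specification’s tool listing format; the schema shown here is a simplified, illustrative version of a web search tool.

# A rough sketch of the metadata a server exposes for one tool.
# Field names follow MCP's tool listing; the schema is simplified.
web_search_tool_metadata = {
    "name": "brave_web_search",
    "description": "Performs a web search using the Brave Search API",
    "inputSchema": {  # JSON Schema describing valid inputs
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "The search query"},
            "count": {"type": "integer", "description": "Number of results to return"},
        },
        "required": ["query"],
    },
}

Everything the agent knows about the tool comes from this blob: the name to call it by, the description the LLM reads when planning, and the schema its arguments must satisfy.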

Tools

These are the functions, such as calling APIs or custom code, that “do the work”. Tools live on servers and can be:

  • Custom tools you create and host on a local server.
  • Premade tools hosted by others on a remote server.
  • Premade code created by others but hosted by you on a local server.

How the components fit together

  1. Server Registers Tools
    Each tool is defined with a name, description, input/output schemas, and a function handler (the code that runs), then registered to the server. This usually involves calling a method or API to tell the server “hey, here’s a new tool and this is how you use it” (see the server sketch after this list).
  2. Server Exposes Metadata
    When the server starts or an agent connects, it exposes the tool metadata (schemas, descriptions) via MCP.
  3. Agent Discovers Tools
    The agent queries the server (using MCP) to see what tools are available. It understands how to use each tool from the tool metadata. This typically happens on startup or when tools are added.
  4. Agent Plans Tool Use
    When the agent determines a tool is needed (based on user input or task context), it forms a tool call request in a standardized MCP JSON format which includes the tool name, input parameters that match the tool’s input schema, and any other required metadata. The client acts as the transport layer and sends the MCP-formatted request to the server over a supported transport such as stdio or HTTP.
  5. Translation Layer Executes
    The translation layer takes the agent’s standardized tool call (via MCP), maps the request to the corresponding function on the server, executes the function, formats the result back into MCP, and sends it back to the agent. A framework that abstracts MCP for you does all of this without the developer needing to write the translation layer logic (which sounds like a headache).
Image by Sandi Besen
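Here is a minimal server-side sketch of steps 1 and 2. It uses the official mcp Python SDK’s FastMCP helper; the server name and tool are hypothetical stand-ins for whatever integration you’d actually expose.

# A minimal sketch of steps 1 and 2 using the official `mcp` Python SDK.
# The server name and tool below are hypothetical examples.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("messaging-server")

@mcp.tool()  # registers the function as a tool and derives its metadata
def send_message(channel: str, text: str) -> str:
    """Send a message to a channel."""  # becomes the tool's description
    # the real integration code (e.g. a Slack API call) would go here
    return f"Sent to {channel}: {text}"

if __name__ == "__main__":
    # expose the registered tools and their schemas over stdio
    # so any MCP client can discover and call them
    mcp.run(transport="stdio")

The SDK builds the input schema from the type hints and uses the docstring as the description, so the agent-facing metadata in step 2 comes for free.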

Code Example of a ReAct Agent Using the MCP Brave Search Server

In order to understand what MCP looks like when applied, let’s use the BeeAI framework from IBM, which natively supports MCP and handles the translation logic for us.

If you plan on running this code, you will need to:

  1. Clone the beeai framework repo to gain access to the helper classes used in this code 
  2. Create a free Brave developer account and get your API key. There are free subscriptions available (credit card required). 
  3. Create an OpenAI developer account and create an API Key
  4. Add your Brave API key and OpenAI key to the .env file at the python folder level of the repo.
  5. Ensure you have npm installed and your PATH set correctly (the Brave Search server is spawned as an npm package; see the sketch below).
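That npm requirement exists because the Brave Search MCP server ships as an npm package that the framework spawns as a subprocess and talks to over stdio. A sketch of what that looks like with the mcp SDK’s StdioServerParameters, assuming the official @modelcontextprotocol/server-brave-search package:

import os
from mcp import StdioServerParameters

# Hypothetical setup: launch the Brave Search MCP server via npx
# and pass the API key through its environment.
brave_server_params = StdioServerParameters(
    command="npx",
    args=["-y", "@modelcontextprotocol/server-brave-search"],
    env={"BRAVE_API_KEY": os.environ.get("BRAVE_API_KEY", "")},
)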

Sample .env file

BRAVE_API_KEY=""
BEEAI_LOG_LEVEL=INFO
OPENAI_API_KEY=""

Sample mcp_agent.ipynb

1. Import the necessary libraries

import asyncio
import logging
import os
import sys
import traceback
from typing import Any
from beeai_framework.agents.react.runners.default.prompts import SystemPromptTemplate
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from beeai_framework import Tool
from beeai_framework.agents.react.agent import ReActAgent
from beeai_framework.agents.types import AgentExecutionConfig
from beeai_framework.backend.chat import ChatModel, ChatModelParameters
from beeai_framework.emitter.emitter import Emitter, EventMeta
from beeai_framework.errors import FrameworkError
from beeai_framework.logger import Logger
from beeai_framework.memory.token_memory import TokenMemory
from beeai_framework.tools.mcp_tools import MCPTool
from pathlib import Path
from beeai_framework.adapters.openai.backend.chat import OpenAIChatModel
from beeai_framework.backend.message import SystemMessage

2. Load the environment variables and set the system path (if needed)

import os
from dotenv import load_dotenv

# Absolute path to your .env file
# (sometimes the system has trouble locating the .env file)
env_path = ""  # fill in, e.g. ".../beeai-framework/python/.env"

# Load it
load_dotenv(dotenv_path=env_path)

# Path to the repo's python folder, so its helper modules can be imported
path = ""  # fill in, e.g. ".../beeai-framework/python"

# Append it to sys.path
sys.path.append(path)

3. Configure the logger

# Configure logging - using DEBUG instead of trace
logger = Logger("app", level=logging.DEBUG)

4. Load helper functions like process_agent_events and observer, and create an instance of ConsoleReader

  • process_agent_events: Handles agent events and logs messages to the console based on the event type (e.g., error, retry, update). It ensures meaningful output for each event to help track agent activity.
  • observer: Listens for all events from an emitter and routes them to process_agent_events for processing and display.
  • ConsoleReader: Manages console input/output, allowing user interaction and formatted message display with color-coded roles.
# load the console reader
from examples.helpers.io import ConsoleReader

# this is a helper that makes the assistant chat easier to read
reader = ConsoleReader()

def process_agent_events(data: dict[str, Any], event: EventMeta) -> None:
  """Process agent events and log appropriately"""

  if event.name == "error":
      reader.write("Agent 🤖 : ", data["error"])
  elif event.name == "retry":
      reader.write("Agent 🤖 : ", "retrying the action...")
  elif event.name == "update":
      reader.write("Agent 🤖 : ", data["update"])
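And here is a sketch of the observer helper described above. It assumes beeai_framework’s Emitter accepts a wildcard pattern in on, which is how the framework’s own examples subscribe to every event:

def observer(emitter: Emitter) -> None:
  # route every event ("*.*" matches all event names)
  # to process_agent_events for logging
  emitter.on("*.*", process_agent_events)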