Hands on MCP: Extending LLMs with Real-Time Data

In 2025, the pace at which new concepts and features appear in the field of generative AI makes it feel like decades have passed since the launch of the first version of ChatGPT. However, it's been less than three years.

In late 2024, Anthropic, the creators of Claude, released the Model Context Protocol (MCP) as open source. Although it initially attracted little attention, today it comes up in almost every article, tool, or feature related to generative AI.

What is MCP?

MCP (Model Context Protocol) is an open-source specification developed by Anthropic.

According to its official documentation, MCP acts like a “USB-C port for AI”: a unified interface that lets language models connect to external tools in a standardized way.

The real advantage of MCP lies in standardization.

Technically, the application hosting the LLM acts as an MCP client and connects to one or more MCP servers to access external data or functionality.
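
Under the hood, this client-server exchange is plain JSON-RPC 2.0. As a rough sketch (the exact wire format is defined by the MCP specification), a client can ask a server which tools it offers with a request like:

{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

The server replies with a list of tool descriptors (name, description, and a JSON Schema for the arguments), which the client then surfaces to the model.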

Standard Components of the MCP Protocol

The Model Context Protocol (MCP) is based on three fundamental components that allow servers to expose capabilities to clients (a short code sketch of all three follows the list):

  • Resources: Structured data that provides additional context to the model, such as file contents, database records, or API responses. Resources are application-controlled and can be selected explicitly by the user or automatically by the client.

  • Prompts: Reusable templates that define common interactions with the model. They make workflows easy to standardize and share, and can accept dynamic arguments, include resource context, and guide specific processes.

  • Tools: Executable functions the model can invoke to perform actions, such as interacting with external systems, performing calculations, or running commands. Tools are model-controlled and can be discovered and invoked automatically.
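
To make these concrete, here is a minimal sketch using the FastMCP class from the official mcp Python SDK; the server name, the version resource, and the example functions are purely illustrative:

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo")

# Resource: application-controlled context, addressed by a URI
@mcp.resource("config://app-version")
def app_version() -> str:
    return "1.0.0"

# Prompt: a reusable interaction template
@mcp.prompt()
def summarize(text: str) -> str:
    return f"Summarize the following text in one paragraph:\n\n{text}"

# Tool: an executable function the model can invoke
@mcp.tool()
def add(a: int, b: int) -> int:
    return a + b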

“MCP provides a modular structure that clearly organizes the model’s input, knowledge, and ability to act.”

Why is it relevant?

Thanks to its modular structure, MCP allows:

  • Separation of concerns: each block has a specific purpose.
  • Greater contextual clarity: the model better understands its goals.
  • More advanced agents: enables memory, tools, and intermediate reasoning.
  • Scalability: allows building modular and sustainable architectures.

You're not just writing a prompt. You're designing a protocol.

Practical Example

If you ask a model what your computer’s current CPU usage is, it probably won’t know. It has no access to your operating system, since its main function is to generate text, not interact with hardware.

That’s where MCP comes in: it allows the model to access external information through tools defined by the server.
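
When the model decides it needs that information, the client sends the server a tools/call request. A rough sketch, using the get_system_stats tool we will define below:

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": { "name": "get_system_stats", "arguments": {} }
}

The server executes the function and returns the result, which the client hands back to the model as fresh context.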

How to Build an MCP Server in 4 Steps

You can create an MCP server using Python and the FastMCP class from the official mcp SDK.

1. Create server.py

from typing import Dict
import psutil
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("sys-monitor")

@mcp.tool()
def get_system_stats() -> Dict[str, float]:
    """Return current CPU and memory usage of the machine."""
    # FastMCP uses the docstring above as the tool description shown to the model
    cpu_percent = psutil.cpu_percent(interval=1)  # samples CPU load over one second
    virtual_mem = psutil.virtual_memory()
    return {
        "cpu_percent": cpu_percent,
        "memory_total_gb": round(virtual_mem.total / (1024 ** 3), 2),
        "memory_used_gb": round(virtual_mem.used / (1024 ** 3), 2),
        "memory_percent": virtual_mem.percent,
    }

if __name__ == "__main__":
    mcp.run()

2. Create requirements.txt

psutil>=5.9.0
mcp>=1.2.0

Note: the import mcp.server.fastmcp used in server.py ships with the official mcp package (version 1.2 or later), not with the standalone fastmcp distribution, so mcp is the dependency to pin.

3. Create the Dockerfile

FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# No EXPOSE needed: the server communicates over stdio, not a network port
CMD ["python", "server.py"]

4. Build the Docker Image (Optional)

Note: depending on your Docker setup, psutil inside the container may report the container’s view of CPU and memory rather than the host’s. For educational purposes, it’s still valid.

docker build -t cpusage .
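
Before wiring the server into a desktop client, you can sanity-check it from a script. The sketch below assumes the client helpers shipped with the official mcp Python SDK; test_client.py is just an illustrative name:

# test_client.py
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch server.py as a subprocess and talk to it over stdio
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Tools:", [tool.name for tool in tools.tools])
            result = await session.call_tool("get_system_stats", {})
            print("Result:", result.content)

if __name__ == "__main__":
    asyncio.run(main())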

Connecting the MCP Server to a Model

This example uses Claude Desktop, but the server also works with Amazon Q CLI or any other client that supports MCP.

Follow the official guide for integration.

If your claude_desktop_config.json file is empty, add the following:

{
  "mcpServers": {
    "cpusage": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "cpusage"
      ]
    }
  }
}

To run the server directly with Python:

{
  "mcpServers": {
    "cpusage": {
      "command": "python",
      "args": ["path/to/server.py"]
    }
  }
}

You can also replace "python" with uv or another command if using a virtual environment.
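
For example, a sketch of the same entry launched through uv; the project path is a placeholder for wherever your server actually lives:

{
  "mcpServers": {
    "cpusage": {
      "command": "uv",
      "args": ["--directory", "/absolute/path/to/project", "run", "server.py"]
    }
  }
}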

If other servers are already defined, simply add the corresponding block.
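
For instance, a combined file with a second, hypothetical server might look like this:

{
  "mcpServers": {
    "cpusage": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "cpusage"]
    },
    "another-server": {
      "command": "python",
      "args": ["path/to/another_server.py"]
    }
  }
}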

Restart Claude Desktop. Done!

Demo

A demo video is available here.

How I'm Using It

I’m using MCP in AWS-based projects that integrate models with databases and custom tools. This structure allows me to:

  • Define available tools
  • Provide persistent and updated context
  • Set clear operational boundaries

Additionally, with a bit of creativity, it’s easily portable to other platforms.

Conclusion

MCP is not just a different way to structure prompts: it’s a way to design the cognitive architecture of your agents.

If you're working with GenAI in 2025 and haven't explored MCP yet, now is the time.