A Code Implementation for Building a Context-Aware AI Assistant in Google Colab Using LangChain, LangGraph, Gemini Pro, and Model Context Protocol (MCP) Principles with Tool Integration Support

In this hands-on tutorial, we bring the core principles of the Model Context Protocol (MCP) to life by implementing a lightweight, context-aware AI assistant using LangChain, LangGraph, and Google’s Gemini language model. While full MCP integration typically involves dedicated servers and communication protocols, this simplified version demonstrates how the same ideas (context retrieval, tool invocation, and dynamic interaction) can be recreated in a single notebook using a modular agent architecture. The assistant can respond to natural language queries and selectively route them to external tools (like a custom knowledge base), mimicking how MCP clients interact with context providers in real-world setups.
!pip install langchain langchain-google-genai langgraph python-dotenv
!pip install google-generativeai
First, we install essential libraries. The first command installs LangChain, LangGraph, the Google Generative AI LangChain wrapper, and environment variable support via python-dotenv. The second command installs Google’s official generative AI client, which enables interaction with Gemini models.
import os
os.environ["GEMINI_API_KEY"] = "Your API Key"
Here, we set your Gemini API key as an environment variable so the rest of the notebook can read it with os.getenv rather than referencing the key string directly. Replace “Your API Key” with your actual key from Google AI Studio, and avoid sharing or committing the notebook while the key is in place.
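If you prefer not to paste the key into a cell at all, a minimal alternative (using Python’s standard getpass module) prompts for it at runtime instead, so the key never appears in the notebook’s source:

from getpass import getpass
import os

# Prompt for the key interactively instead of hardcoding it in a cell
os.environ["GEMINI_API_KEY"] = getpass("Enter your Gemini API key: ")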
from langchain.tools import BaseTool
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.messages import HumanMessage, AIMessage
from langgraph.prebuilt import create_react_agent
import os
model = ChatGoogleGenerativeAI(
    model="gemini-2.0-flash-lite",
    temperature=0.7,
    google_api_key=os.getenv("GEMINI_API_KEY")
)
class SimpleKnowledgeBaseTool(BaseTool):
    name: str = "simple_knowledge_base"
    description: str = "Retrieves basic information about AI concepts."

    def _run(self, query: str):
        knowledge = {
            "MCP": "Model Context Protocol (MCP) is an open standard by Anthropic designed to connect AI assistants with external data sources, enabling real-time, context-rich interactions.",
            "RAG": "Retrieval-Augmented Generation (RAG) enhances LLM responses by dynamically retrieving relevant external documents."
        }
        return knowledge.get(query, "I don't have information on that topic.")

    async def _arun(self, query: str):
        return self._run(query)

kb_tool = SimpleKnowledgeBaseTool()
tools = [kb_tool]
graph = create_react_agent(model, tools)
In this block, we initialize the Gemini language model (gemini-2.0-flash-lite) using LangChain’s ChatGoogleGenerativeAI, with the API key securely loaded from environment variables. We then define a custom tool named SimpleKnowledgeBaseTool that simulates an external knowledge source by returning predefined answers to queries about AI concepts like “MCP” and “RAG.” This tool acts as a basic context provider, similar to how an MCP server would operate. Finally, we use LangGraph’s create_react_agent to build a ReAct-style agent that can reason through prompts and decide dynamically when to call tools, mirroring MCP’s principle of tool-aware, context-rich interaction.
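Before wiring up the interactive loop, it is worth sanity-checking both pieces with a one-shot call. The snippet below is a minimal sketch (the sample question is our own addition, not from the original tutorial): the compiled graph returned by create_react_agent accepts a dictionary with a “messages” list and returns the updated message history.

# Test the knowledge-base tool directly, then ask the agent a question
print(kb_tool.run("MCP"))
result = graph.invoke({"messages": [HumanMessage(content="What is MCP?")]})
print(result["messages"][-1].content)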
import nest_asyncio
import asyncio
nest_asyncio.apply()

async def chat_with_agent():
    inputs = {"messages": []}
    print("MCP-Like Assistant ready! Type 'exit' to quit.")
    # The source snippet is truncated at this point; the loop below is a
    # minimal reconstruction of the chat flow it sets up.
    while True:
        user_input = input("\nYou: ")
        if user_input.strip().lower() == "exit":
            break
        inputs["messages"].append(HumanMessage(content=user_input))
        result = await graph.ainvoke(inputs)
        print("\nAgent:", result["messages"][-1].content)
        inputs["messages"] = result["messages"]

asyncio.run(chat_with_agent())