Step-by-Step Guide to Build a Customizable Multi-Tool AI Agent with LangGraph and Claude for Dynamic Agent Creation

May 25, 2025 - 03:30

In this comprehensive tutorial, we walk through building a powerful multi-tool AI agent with LangGraph and Claude, capable of handling diverse tasks including mathematical computations, web searches, weather inquiries, text analysis, and real-time information retrieval. The tutorial begins with dependency installation to keep setup effortless, even for beginners. It then introduces structured implementations of specialized tools: a safe calculator, a web-search utility built on DuckDuckGo, a mock weather information provider, a detailed text analyzer, and a time-fetching function. Finally, it shows how these tools are wired into an agent architecture built with LangGraph, illustrating practical usage through interactive examples and clear explanations so that both beginners and advanced developers can rapidly deploy custom multi-functional AI agents.

import subprocess
import sys


def install_packages():
    packages = [
        "langgraph",
        "langchain",
        "langchain-anthropic",
        "langchain-community",
        "requests",
        "python-dotenv",
        "duckduckgo-search"
    ]
   
    for package in packages:
        try:
            subprocess.check_call([sys.executable, "-m", "pip", "install", package, "-q"])
            print(f"✓ Installed {package}")
        except subprocess.CalledProcessError:
            print(f"✗ Failed to install {package}")


print("Installing required packages...")
install_packages()
print("Installation complete!\n")

We automate the installation of the essential Python packages required for building a LangGraph-based multi-tool AI agent. The helper runs pip through subprocess in quiet mode and reports whether each package, from the LangChain and LangGraph components to the web-search and environment-handling utilities, installed successfully. This streamlines environment preparation and keeps the notebook portable and beginner-friendly.

import os
import json
import math
import requests
from typing import Dict, List, Any, Annotated, TypedDict
from datetime import datetime
import operator


from langchain_core.messages import BaseMessage, HumanMessage, AIMessage, ToolMessage
from langchain_core.tools import tool
from langchain_anthropic import ChatAnthropic
from langgraph.graph import StateGraph, START, END
from langgraph.prebuilt import ToolNode
from langgraph.checkpoint.memory import MemorySaver
from duckduckgo_search import DDGS

We import all the necessary libraries and modules for constructing the multi-tool AI agent. Python standard libraries such as os, json, math, and datetime provide general-purpose functionality, while external libraries like requests (for HTTP calls) and duckduckgo_search (for web search) add networking capabilities. The LangChain and LangGraph ecosystems bring in message types, tool decorators, state-graph components, and checkpointing utilities, and ChatAnthropic enables integration with the Claude model for conversational intelligence. These imports form the foundational building blocks for defining tools, agent workflows, and interactions.

os.environ["ANTHROPIC_API_KEY"] = "Use Your API Key Here"


ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY")

We set and retrieve the Anthropic API key required to authenticate and interact with Claude models. The os.environ line assigns your API key (replace the placeholder with a valid key; in a real project, prefer loading it from a .env file or your shell environment rather than hardcoding it in the script), while os.getenv retrieves it for later use in model initialization.
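As a small sketch of the same idea, the lookup can be wrapped in a fail-fast accessor so a missing key surfaces immediately instead of during the first model call. The helper name here is hypothetical, not part of the tutorial's code:

```python
import os


def get_api_key() -> str:
    # Hypothetical helper: read the key from the environment and fail fast if absent.
    key = os.getenv("ANTHROPIC_API_KEY", "")
    if not key:
        raise RuntimeError("ANTHROPIC_API_KEY is not set")
    return key


os.environ["ANTHROPIC_API_KEY"] = "demo-key"  # placeholder for illustration only
print(get_api_key())
```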


class AgentState(TypedDict):
    messages: Annotated[List[BaseMessage], operator.add]


@tool
def calculator(expression: str) -> str:
    """
    Perform mathematical calculations. Supports basic arithmetic, trigonometry, and more.
   
    Args:
        expression: Mathematical expression as a string (e.g., "2 + 3 * 4", "sin(3.14159/2)")
   
    Returns:
        Result of the calculation as a string
    """
    try:
        allowed_names = {
            'abs': abs, 'round': round, 'min': min, 'max': max,
            'sum': sum, 'pow': pow, 'sqrt': math.sqrt,
            'sin': math.sin, 'cos': math.cos, 'tan': math.tan,
            'log': math.log, 'log10': math.log10, 'exp': math.exp,
            'pi': math.pi, 'e': math.e
        }
       
        expression = expression.replace('^', '**')  # accept caret as exponentiation
       
        result = eval(expression, {"__builtins__": {}}, allowed_names)
        return f"Result: {result}"
    except Exception as e:
        return f"Error in calculation: {str(e)}"

We define the agent’s internal state and implement a robust calculator tool. The AgentState class uses TypedDict to structure agent memory, specifically tracking messages exchanged during the conversation. The calculator function, decorated with @tool to register it as an AI-usable utility, securely evaluates mathematical expressions. It allows for safe computation by limiting available functions to a predefined set from the math module and replacing common syntax like ^ with Python’s exponentiation operator. This ensures the tool can handle simple arithmetic and advanced functions like trigonometry or logarithms while preventing unsafe code execution.
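The restricted-eval pattern can be exercised on its own, outside the @tool decorator. The safe_eval function below is an illustrative stand-in for the calculator body, showing that whitelisted math works while attempts to reach builtins fail:

```python
import math


def safe_eval(expression: str) -> str:
    # Restrict eval to a whitelist of math names; block builtins entirely.
    allowed = {
        'abs': abs, 'sqrt': math.sqrt, 'sin': math.sin, 'cos': math.cos,
        'pi': math.pi, 'e': math.e,
    }
    expression = expression.replace('^', '**')  # accept caret as exponentiation
    try:
        return f"Result: {eval(expression, {'__builtins__': {}}, allowed)}"
    except Exception as e:
        return f"Error in calculation: {e}"


print(safe_eval("2 + 3 * 4"))         # Result: 14
print(safe_eval("2^10"))              # Result: 1024
print(safe_eval("__import__('os')"))  # blocked: NameError surfaces as an error string
```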

@tool
def web_search(query: str, num_results: int = 3) -> str:
    """
    Search the web for information using DuckDuckGo.
   
    Args:
        query: Search query string
        num_results: Number of results to return (default: 3, max: 10)
   
    Returns:
        Search results as formatted string
    """
    try:
        num_results = min(max(num_results, 1), 10)  # clamp to the 1-10 range
       
        with DDGS() as ddgs:
            results = list(ddgs.text(query, max_results=num_results))
       
        if not results:
            return f"No search results found for: {query}"
       
        formatted_results = f"Search results for '{query}':\n\n"
        for i, result in enumerate(results, 1):
            formatted_results += f"{i}. **{result['title']}**\n"
            formatted_results += f"   {result['body']}\n"
            formatted_results += f"   Source: {result['href']}\n\n"
       
        return formatted_results
    except Exception as e:
        return f"Error performing web search: {str(e)}"

We define a web_search tool that enables the agent to fetch real-time information from the internet using the DuckDuckGo Search API via the duckduckgo_search Python package. The tool accepts a search query and an optional num_results parameter, ensuring that the number of results returned is between 1 and 10. It opens a DuckDuckGo search session, retrieves the results, and formats them neatly for user-friendly display. If no results are found or an error occurs, the function handles it gracefully by returning an informative message. This tool equips the agent with real-time search capabilities, enhancing responsiveness and utility.
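The formatting logic can be tested in isolation with stubbed results, without touching the network. The helper below mirrors the tool's output format over a list of dicts shaped like those returned by ddgs.text():

```python
def format_results(query, results):
    # results is a list of dicts with 'title', 'body', and 'href' keys,
    # matching the shape returned by ddgs.text().
    if not results:
        return f"No search results found for: {query}"
    out = f"Search results for '{query}':\n\n"
    for i, r in enumerate(results, 1):
        out += f"{i}. **{r['title']}**\n   {r['body']}\n   Source: {r['href']}\n\n"
    return out


stub = [{"title": "LangGraph", "body": "Graph-based agent framework.",
         "href": "https://example.com"}]
print(format_results("langgraph", stub))
```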

@tool
def weather_info(city: str) -> str:
    """
    Get current weather information for a city using OpenWeatherMap API.
    Note: This is a mock implementation for demo purposes.
   
    Args:
        city: Name of the city
   
    Returns:
        Weather information as a string
    """
    mock_weather = {
        "new york": {"temp": 22, "condition": "Partly Cloudy", "humidity": 65},
        "london": {"temp": 15, "condition": "Rainy", "humidity": 80},
        "tokyo": {"temp": 28, "condition": "Sunny", "humidity": 70},
        "paris": {"temp": 18, "condition": "Overcast", "humidity": 75}
    }
   
    city_lower = city.lower()
    if city_lower in mock_weather:
        weather = mock_weather[city_lower]
        return f"Weather in {city}:\n" \
               f"Temperature: {weather['temp']}°C\n" \
               f"Condition: {weather['condition']}\n" \
               f"Humidity: {weather['humidity']}%"
    else:
        return f"Weather data not available for {city}. (This is a demo with limited cities: New York, London, Tokyo, Paris)"

We define a weather_info tool that simulates retrieving current weather data for a given city. While it does not connect to a live weather API, it uses a predefined dictionary of mock data for major cities like New York, London, Tokyo, and Paris. Upon receiving a city name, the function normalizes it to lowercase and checks for its presence in the mock dataset. It returns temperature, weather condition, and humidity in a readable format if found. Otherwise, it notifies the user that weather data is unavailable. This tool serves as a placeholder and can later be upgraded to fetch live data from an actual weather API.
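When upgrading to live data, the mock lookup would be replaced with an HTTP call. As a hedged sketch, the helper below only builds the request URL for OpenWeatherMap's current-weather endpoint (the endpoint and parameter names follow that API's documented conventions; pair it with requests.get in practice):

```python
from urllib.parse import urlencode


def build_weather_url(city: str, api_key: str) -> str:
    # Constructs the OpenWeatherMap current-weather request URL;
    # units=metric asks for Celsius, matching the mock data above.
    base = "https://api.openweathermap.org/data/2.5/weather"
    return f"{base}?{urlencode({'q': city, 'appid': api_key, 'units': 'metric'})}"


print(build_weather_url("Tokyo", "YOUR_KEY"))
```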

@tool
def text_analyzer(text: str) -> str:
    """
    Analyze text and provide statistics like word count, character count, etc.
   
    Args:
        text: Text to analyze
   
    Returns:
        Text analysis results
    """
    if not text.strip():
        return "Please provide text to analyze."
   
    words = text.split()
    # Split on sentence-ending punctuation in a single pass; splitting three
    # times and concatenating the lists would count every sentence multiple times.
    import re
    sentences = [s.strip() for s in re.split(r'[.!?]+', text) if s.strip()]
   
    analysis = f"Text Analysis Results:\n"
    analysis += f"• Characters (with spaces): {len(text)}\n"
    analysis += f"• Characters (without spaces): {len(text.replace(' ', ''))}\n"
    analysis += f"• Words: {len(words)}\n"
    analysis += f"• Sentences: {len(sentences)}\n"
    analysis += f"• Average words per sentence: {len(words) / max(len(sentences), 1):.1f}\n"
    analysis += f"• Most common word: {max(set(words), key=words.count) if words else 'N/A'}"
   
    return analysis

The text_analyzer tool provides a detailed statistical analysis of a given text input. It calculates metrics such as character count (with and without spaces), word count, sentence count, and average words per sentence, and it identifies the most frequently occurring word. The tool handles empty input gracefully by prompting the user to provide valid text. It splits the text into words and sentences and uses Python's built-in functions to extract these insights, making it a handy utility for language analysis or content-quality checks in the AI agent's toolkit.
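One detail worth noting: the max(set(words), key=words.count) idiom rescans the word list for every distinct word, which is quadratic on long inputs. A collections.Counter sketch of the same "most common word" step runs in linear time:

```python
from collections import Counter


def most_common_word(text: str) -> str:
    # Counter tallies each word in one pass, versus the O(n^2)
    # max(set(words), key=words.count) pattern used in the tool.
    words = text.lower().split()
    return Counter(words).most_common(1)[0][0] if words else "N/A"


print(most_common_word("the cat and the dog"))  # the
```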

@tool
def current_time() -> str:
    """
    Get the current date and time.
   
    Returns:
        Current date and time as a formatted string
    """
    now = datetime.now()
    return f"Current date and time: {now.strftime('%Y-%m-%d %H:%M:%S')}"

The current_time tool provides a straightforward way to retrieve the current system date and time in a human-readable format. Using Python’s datetime module, it captures the present moment and formats it as YYYY-MM-DD HH:MM:SS. This utility is particularly useful for time-stamping responses or answering user queries about the current date and time within the AI agent’s interaction flow.
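Note that datetime.now() alone returns naive local time, which can be misleading for users in other regions. A timezone-aware variant (an optional refinement, not part of the tutorial's code) is a one-line change:

```python
from datetime import datetime, timezone


def current_time_utc() -> str:
    # Attach an explicit timezone so the timestamp is unambiguous.
    now = datetime.now(timezone.utc)
    return f"Current date and time: {now.strftime('%Y-%m-%d %H:%M:%S %Z')}"


print(current_time_utc())
```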

tools = [calculator, web_search, weather_info, text_analyzer, current_time]


def create_llm():
    if ANTHROPIC_API_KEY:
        return ChatAnthropic(
            model="claude-3-haiku-20240307",  
            temperature=0.1,
            max_tokens=1024
        )
    else:
        class MockLLM:
            def invoke(self, messages):
                last_message = messages[-1].content if messages else ""
               
                if any(word in last_message.lower() for word in ['calculate', 'math', '+', '-', '*', '/', 'sqrt', 'sin', 'cos']):
                    import re
                    numbers = re.findall(r'[\d\+\-\*/\.\(\)\s\w]+', last_message)
                    expr = numbers[0] if numbers else "2+2"
                    return AIMessage(content="I'll help you with that calculation.",
                                   tool_calls=[{"name": "calculator", "args": {"expression": expr.strip()}, "id": "calc1"}])
                elif any(word in last_message.lower() for word in ['search', 'find', 'look up', 'information about']):
                    query = last_message.replace('search for', '').replace('find', '').replace('look up', '').strip()
                    if not query or len(query) < 3:
                        query = "python programming"
                    return AIMessage(content="I'll search for that information.",
                                   tool_calls=[{"name": "web_search", "args": {"query": query}, "id": "search1"}])
                elif any(word in last_message.lower() for word in ['weather', 'temperature']):
                    city = "New York"
                    words = last_message.lower().split()
                    for i, word in enumerate(words):
                        if word == 'in' and i + 1 < len(words):
                            city = words[i + 1].title()
                            break
                    return AIMessage(content="I'll get the weather information.",
                                   tool_calls=[{"name": "weather_info", "args": {"city": city}, "id": "weather1"}])
                elif any(word in last_message.lower() for word in ['time', 'date']):
                    return AIMessage(content="I'll get the current time.",
                                   tool_calls=[{"name": "current_time", "args": {}, "id": "time1"}])
                elif any(word in last_message.lower() for word in ['analyze', 'analysis']):
                    text = last_message.replace('analyze this text:', '').replace('analyze', '').strip()
                    if not text:
                        text = "Sample text for analysis"
                    return AIMessage(content="I'll analyze that text for you.",
                                   tool_calls=[{"name": "text_analyzer", "args": {"text": text}, "id": "analyze1"}])
                else:
                    return AIMessage(content="Hello! I'm a multi-tool agent powered by Claude. I can help with:\n• Mathematical calculations\n• Web searches\n• Weather information\n• Text analysis\n• Current time/date\n\nWhat would you like me to help you with?")
           
            def bind_tools(self, tools):
                return self
       
        print("⚠  Note: Using mock LLM for demo. Add your ANTHROPIC_API_KEY for full functionality.")
        return MockLLM()


llm = create_llm()
llm_with_tools = llm.bind_tools(tools)

We initialize the language model that powers the AI agent. If a valid Anthropic API key is available, it uses the Claude 3 Haiku model for high-quality responses. Without an API key, a MockLLM is defined to simulate basic tool-routing behavior based on keyword matching, allowing the agent to function offline with limited capabilities. The bind_tools method links the defined tools to the model, enabling it to invoke them as needed.
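The mock's keyword routing can be distilled into a standalone function. The tool names mirror those defined above; the 'chat' fallback label is illustrative:

```python
def route(message: str) -> str:
    # First keyword family that matches wins, in the same priority
    # order as the MockLLM branches above.
    text = message.lower()
    if any(w in text for w in ('calculate', 'sqrt', '+', '*')):
        return 'calculator'
    if any(w in text for w in ('search', 'find', 'look up')):
        return 'web_search'
    if any(w in text for w in ('weather', 'temperature')):
        return 'weather_info'
    if any(w in text for w in ('time', 'date')):
        return 'current_time'
    if 'analy' in text:
        return 'text_analyzer'
    return 'chat'


print(route("What's the weather like in Tokyo?"))  # weather_info
```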

def agent_node(state: AgentState) -> Dict[str, Any]:
    """Main agent node that processes messages and decides on tool usage."""
    messages = state["messages"]
    response = llm_with_tools.invoke(messages)
    return {"messages": [response]}


def should_continue(state: AgentState) -> str:
    """Determine whether to continue with tool calls or end."""
    last_message = state["messages"][-1]
    if hasattr(last_message, 'tool_calls') and last_message.tool_calls:
        return "tools"
    return END

We define the agent’s core decision-making logic. The agent_node function handles incoming messages, invokes the language model (with tools), and returns the model’s response. The should_continue function then evaluates whether the model’s response includes tool calls. If so, it routes control to the tool execution node; otherwise, it directs the flow to end the interaction. These functions enable dynamic and conditional transitions within the agent’s workflow.
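The agent-to-tools-to-agent cycle these functions create can be simulated without any dependencies. Below is a dict-based sketch in which "__end__" stands in for LangGraph's END sentinel and the fake agent requests one tool call before answering:

```python
def fake_agent(state):
    # Pretend LLM: requests a tool on the first turn, then answers.
    if state.get("tool_done"):
        state["messages"].append({"role": "ai", "content": "It is 2025-05-25 03:30:00."})
    else:
        state["messages"].append({"role": "ai", "tool_calls": [{"name": "current_time"}]})
    return state


def fake_tools(state):
    state["messages"].append({"role": "tool", "content": "2025-05-25 03:30:00"})
    state["tool_done"] = True
    return state


def should_continue(state):
    # Same test as the real graph: pending tool_calls -> run tools.
    return "tools" if state["messages"][-1].get("tool_calls") else "__end__"


state = {"messages": []}
node = "agent"
while True:
    state = fake_agent(state) if node == "agent" else fake_tools(state)
    if node == "agent" and should_continue(state) == "__end__":
        break
    node = "tools" if node == "agent" else "agent"  # tools always loop back

print(state["messages"][-1]["content"])
```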

def create_agent_graph():
    tool_node = ToolNode(tools)
   
    workflow = StateGraph(AgentState)
   
    workflow.add_node("agent", agent_node)
    workflow.add_node("tools", tool_node)
   
    workflow.add_edge(START, "agent")
    workflow.add_conditional_edges("agent", should_continue, {"tools": "tools", END: END})
    workflow.add_edge("tools", "agent")
   
    memory = MemorySaver()
   
    app = workflow.compile(checkpointer=memory)
   
    return app


print("Creating LangGraph Multi-Tool Agent...")
agent = create_agent_graph()
print("✓ Agent created successfully!\n")

We construct the LangGraph-powered workflow that defines the AI agent’s operational structure. It initializes a ToolNode to handle tool executions and uses a StateGraph to organize the flow between agent decisions and tool usage. Nodes and edges are added to manage transitions: starting with the agent, conditionally routing to tools, and looping back as needed. A MemorySaver is integrated for persistent state tracking across turns. The graph is compiled into an executable application (app), enabling a structured, memory-aware multi-tool agent ready for deployment.

def test_agent():
    """Test the agent with various queries."""
    config = {"configurable": {"thread_id": "test-thread"}}
   
    test_queries = [
        "What's 15 * 7 + 23?",
        "Search for information about Python programming",
        "What's the weather like in Tokyo?",
        "What time is it?",
        "Analyze this text: 'LangGraph is an amazing framework for building AI agents.'"
    ]
   
    print("Testing the agent with sample queries...\n")
    for query in test_queries:
        print(f"Query: {query}")
        try:
            response = agent.invoke(
                {"messages": [HumanMessage(content=query)]},
                config
            )
            print(f"Response: {response['messages'][-1].content}\n")
        except Exception as e:
            print(f"Error: {e}\n")


test_agent()