Prebuilt

create_react_agent

from langgraph.prebuilt import create_react_agent

Creates a graph that works with a chat model that utilizes tool calling.

Parameters:

  • model (LanguageModelLike) –

    The LangChain chat model that supports tool calling.

  • tools (Union[ToolExecutor, Sequence[BaseTool]]) –

    A list of tools or a ToolExecutor instance.

  • messages_modifier (Optional[Union[SystemMessage, str, Callable, Runnable]], default: None ) –

    An optional messages modifier. This applies to messages BEFORE they are passed into the LLM. It can take a few different forms:

      - SystemMessage: added to the beginning of the list of messages.
      - str: converted to a SystemMessage and added to the beginning of the list of messages.
      - Callable: a function that takes in a list of messages; its output is then passed to the language model.
      - Runnable: a runnable that takes in a list of messages; its output is then passed to the language model.

  • checkpointer (Optional[BaseCheckpointSaver], default: None ) –

    An optional checkpoint saver object. This is useful for persisting the state of the graph (e.g., as chat memory).

  • interrupt_before (Optional[Sequence[str]], default: None ) –

    An optional list of node names to interrupt before. Should be one of the following: "agent", "tools". This is useful if you want to add a user confirmation or other interrupt before taking an action.

  • interrupt_after (Optional[Sequence[str]], default: None ) –

    An optional list of node names to interrupt after. Should be one of the following: "agent", "tools". This is useful if you want to return directly or run additional processing on an output.

  • debug (bool, default: False ) –

    A flag indicating whether to enable debug mode.

Returns:

  • CompiledGraph

    A compiled LangChain runnable that can be used for chat interactions.

Examples:

Use with a simple tool:

>>> from datetime import datetime
>>> from langchain_core.tools import tool
>>> from langchain_openai import ChatOpenAI
>>> from langgraph.prebuilt import create_react_agent
>>>
>>> @tool
... def check_weather(location: str, at_time: datetime | None = None) -> str:
...     '''Return the weather forecast for the specified location.'''
...     return f"It's always sunny in {location}"
>>>
>>> tools = [check_weather]
>>> model = ChatOpenAI(model="gpt-4o")
>>> graph = create_react_agent(model, tools=tools)
>>> inputs = {"messages": [("user", "what is the weather in sf")]}
>>> for s in graph.stream(inputs, stream_mode="values"):
...     message = s["messages"][-1]
...     if isinstance(message, tuple):
...         print(message)
...     else:
...         message.pretty_print()
('user', 'what is the weather in sf')
================================== Ai Message ==================================
Tool Calls:
check_weather (call_LUzFvKJRuaWQPeXvBOzwhQOu)
Call ID: call_LUzFvKJRuaWQPeXvBOzwhQOu
Args:
    location: San Francisco
================================= Tool Message =================================
Name: check_weather
It's always sunny in San Francisco
================================== Ai Message ==================================
The weather in San Francisco is sunny.

Add a system prompt for the LLM:

>>> system_prompt = "You are a helpful bot named Fred."
>>> graph = create_react_agent(model, tools, messages_modifier=system_prompt)
>>> inputs = {"messages": [("user", "What's your name? And what's the weather in SF?")]}
>>> for s in graph.stream(inputs, stream_mode="values"):
...     message = s["messages"][-1]
...     if isinstance(message, tuple):
...         print(message)
...     else:
...         message.pretty_print()
('user', "What's your name? And what's the weather in SF?")
================================== Ai Message ==================================
Hi, my name is Fred. Let me check the weather in San Francisco for you.
Tool Calls:
check_weather (call_lqhj4O0hXYkW9eknB4S41EXk)
Call ID: call_lqhj4O0hXYkW9eknB4S41EXk
Args:
    location: San Francisco
================================= Tool Message =================================
Name: check_weather
It's always sunny in San Francisco
================================== Ai Message ==================================
The weather in San Francisco is currently sunny. If you need any more details or have other questions, feel free to ask!
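
A SystemMessage works the same way as the plain string above; a minimal variation on the previous example (same model and tools):

>>> from langchain_core.messages import SystemMessage
>>> graph = create_react_agent(
...     model, tools, messages_modifier=SystemMessage(content="You are a helpful bot named Fred.")
... )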

Add a more complex prompt for the LLM:

>>> from langchain_core.prompts import ChatPromptTemplate
>>> prompt = ChatPromptTemplate.from_messages([
...     ("system", "You are a helpful bot named Fred."),
...     ("placeholder", "{messages}"),
...     ("user", "Remember, always be polite!"),
... ])
>>> def modify_messages(messages: list):
...     # You can do more complex modifications here
...     return prompt.invoke({"messages": messages})
>>>
>>> graph = create_react_agent(model, tools, messages_modifier=modify_messages)
>>> inputs = {"messages": [("user", "What's your name? And what's the weather in SF?")]}
>>> for s in graph.stream(inputs, stream_mode="values"):
...     message = s["messages"][-1]
...     if isinstance(message, tuple):
...         print(message)
...     else:
...         message.pretty_print()
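
The messages_modifier can also be any Runnable that accepts the list of messages; for instance, a hypothetical history trimmer wrapped in a RunnableLambda:

>>> from langchain_core.runnables import RunnableLambda
>>> trim_history = RunnableLambda(lambda messages: messages[-10:])  # keep only the 10 most recent messages
>>> graph = create_react_agent(model, tools, messages_modifier=trim_history)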

Add "chat memory" to the graph:

>>> from langgraph.checkpoint import MemorySaver
>>> graph = create_react_agent(model, tools, checkpointer=MemorySaver())
>>> config = {"configurable": {"thread_id": "thread-1"}}
>>> def print_stream(graph, inputs, config):
...     for s in graph.stream(inputs, config, stream_mode="values"):
...         message = s["messages"][-1]
...         if isinstance(message, tuple):
...             print(message)
...         else:
...             message.pretty_print()
>>> inputs = {"messages": [("user", "What's the weather in SF?")]}
>>> print_stream(graph, inputs, config)
>>> inputs2 = {"messages": [("user", "Cool, so then should i go biking today?")]}
>>> print_stream(graph, inputs2, config)
('user', "What's the weather in SF?")
================================== Ai Message ==================================
Tool Calls:
check_weather (call_ChndaktJxpr6EMPEB5JfOFYc)
Call ID: call_ChndaktJxpr6EMPEB5JfOFYc
Args:
    location: San Francisco
================================= Tool Message =================================
Name: check_weather
It's always sunny in San Francisco
================================== Ai Message ==================================
The weather in San Francisco is sunny. Enjoy your day!
================================ Human Message =================================
Cool, so then should i go biking today?
================================== Ai Message ==================================
Since the weather in San Francisco is sunny, it sounds like a great day for biking! Enjoy your ride!

Add an interrupt to let the user confirm before taking an action:

>>> graph = create_react_agent(
...     model, tools, interrupt_before=["tools"], checkpointer=MemorySaver()
... )
>>> config = {"configurable": {"thread_id": "thread-1"}}
>>> def print_stream(graph, inputs, config):
...     for s in graph.stream(inputs, config, stream_mode="values"):
...         message = s["messages"][-1]
...         if isinstance(message, tuple):
...             print(message)
...         else:
...             message.pretty_print()

>>> inputs = {"messages": [("user", "What's the weather in SF?")]}
>>> print_stream(graph, inputs, config)
>>> snapshot = graph.get_state(config)
>>> print("Next step: ", snapshot.next)
>>> print_stream(graph, None, config)

Add a timeout for a given step:

>>> import time
>>> @tool
... def check_weather(location: str, at_time: datetime | None = None) -> str:
...     '''Return the weather forecast for the specified location.'''
...     time.sleep(2)
...     return f"It's always sunny in {location}"
>>>
>>> tools = [check_weather]
>>> graph = create_react_agent(model, tools)
>>> graph.step_timeout = 1 # Seconds
>>> for s in graph.stream({"messages": [("user", "what is the weather in sf")]}):
...     print(s)
TimeoutError: Timed out at step 2
Source code in libs/langgraph/langgraph/prebuilt/chat_agent_executor.py
def create_react_agent(
    model: LanguageModelLike,
    tools: Union[ToolExecutor, Sequence[BaseTool]],
    messages_modifier: Optional[Union[SystemMessage, str, Callable, Runnable]] = None,
    checkpointer: Optional[BaseCheckpointSaver] = None,
    interrupt_before: Optional[Sequence[str]] = None,
    interrupt_after: Optional[Sequence[str]] = None,
    debug: bool = False,
) -> CompiledGraph:
    """Creates a graph that works with a chat model that utilizes tool calling.

    Args:
        model: The `LangChain` chat model that supports tool calling.
        tools: A list of tools or a ToolExecutor instance.
        messages_modifier: An optional
            messages modifier. This applies to messages BEFORE they are passed into the LLM.
            Can take a few different forms:
            - SystemMessage: this is added to the beginning of the list of messages.
            - str: This is converted to a SystemMessage and added to the beginning of the list of messages.
            - Callable: This function should take in a list of messages and the output is then passed to the language model.
            - Runnable: This runnable should take in a list of messages and the output is then passed to the language model.
        checkpointer: An optional checkpoint saver object. This is useful for persisting
            the state of the graph (e.g., as chat memory).
        interrupt_before: An optional list of node names to interrupt before.
            Should be one of the following: "agent", "tools".
            This is useful if you want to add a user confirmation or other interrupt before taking an action.
        interrupt_after: An optional list of node names to interrupt after.
            Should be one of the following: "agent", "tools".
            This is useful if you want to return directly or run additional processing on an output.
        debug: A flag indicating whether to enable debug mode.

    Returns:
        A compiled LangChain runnable that can be used for chat interactions.

    Examples:
        Use with a simple tool:

        ```pycon
        >>> from datetime import datetime
        >>> from langchain_core.tools import tool
        >>> from langchain_openai import ChatOpenAI
        >>> from langgraph.prebuilt import create_react_agent
        >>>
        >>> @tool
        ... def check_weather(location: str, at_time: datetime | None = None) -> str:
        ...     '''Return the weather forecast for the specified location.'''
        ...     return f"It's always sunny in {location}"
        >>>
        >>> tools = [check_weather]
        >>> model = ChatOpenAI(model="gpt-4o")
        >>> graph = create_react_agent(model, tools=tools)
        >>> inputs = {"messages": [("user", "what is the weather in sf")]}
        >>> for s in graph.stream(inputs, stream_mode="values"):
        ...     message = s["messages"][-1]
        ...     if isinstance(message, tuple):
        ...         print(message)
        ...     else:
        ...         message.pretty_print()
        ('user', 'what is the weather in sf')
        ================================== Ai Message ==================================
        Tool Calls:
        check_weather (call_LUzFvKJRuaWQPeXvBOzwhQOu)
        Call ID: call_LUzFvKJRuaWQPeXvBOzwhQOu
        Args:
            location: San Francisco
        ================================= Tool Message =================================
        Name: check_weather
        It's always sunny in San Francisco
        ================================== Ai Message ==================================
        The weather in San Francisco is sunny.
        ```
        Add a system prompt for the LLM:

        ```pycon
        >>> system_prompt = "You are a helpful bot named Fred."
        >>> graph = create_react_agent(model, tools, messages_modifier=system_prompt)
        >>> inputs = {"messages": [("user", "What's your name? And what's the weather in SF?")]}
        >>> for s in graph.stream(inputs, stream_mode="values"):
        ...     message = s["messages"][-1]
        ...     if isinstance(message, tuple):
        ...         print(message)
        ...     else:
        ...         message.pretty_print()
        ('user', "What's your name? And what's the weather in SF?")
        ================================== Ai Message ==================================
        Hi, my name is Fred. Let me check the weather in San Francisco for you.
        Tool Calls:
        check_weather (call_lqhj4O0hXYkW9eknB4S41EXk)
        Call ID: call_lqhj4O0hXYkW9eknB4S41EXk
        Args:
            location: San Francisco
        ================================= Tool Message =================================
        Name: check_weather
        It's always sunny in San Francisco
        ================================== Ai Message ==================================
        The weather in San Francisco is currently sunny. If you need any more details or have other questions, feel free to ask!
        ```

        Add a more complex prompt for the LLM:

        ```pycon
        >>> from langchain_core.prompts import ChatPromptTemplate
        >>> prompt = ChatPromptTemplate.from_messages([
        ...     ("system", "You are a helpful bot named Fred."),
        ...     ("placeholder", "{messages}"),
        ...     ("user", "Remember, always be polite!"),
        ... ])
        >>> def modify_messages(messages: list):
        ...     # You can do more complex modifications here
        ...     return prompt.invoke({"messages": messages})
        >>>
        >>> graph = create_react_agent(model, tools, messages_modifier=modify_messages)
        >>> inputs = {"messages": [("user", "What's your name? And what's the weather in SF?")]}
        >>> for s in graph.stream(inputs, stream_mode="values"):
        ...     message = s["messages"][-1]
        ...     if isinstance(message, tuple):
        ...         print(message)
        ...     else:
        ...         message.pretty_print()
        ```

        Add "chat memory" to the graph:

        ```pycon
        >>> from langgraph.checkpoint import MemorySaver
        >>> graph = create_react_agent(model, tools, checkpointer=MemorySaver())
        >>> config = {"configurable": {"thread_id": "thread-1"}}
        >>> def print_stream(graph, inputs, config):
        ...     for s in graph.stream(inputs, config, stream_mode="values"):
        ...         message = s["messages"][-1]
        ...         if isinstance(message, tuple):
        ...             print(message)
        ...         else:
        ...             message.pretty_print()
        >>> inputs = {"messages": [("user", "What's the weather in SF?")]}
        >>> print_stream(graph, inputs, config)
        >>> inputs2 = {"messages": [("user", "Cool, so then should i go biking today?")]}
        >>> print_stream(graph, inputs2, config)
        ('user', "What's the weather in SF?")
        ================================== Ai Message ==================================
        Tool Calls:
        check_weather (call_ChndaktJxpr6EMPEB5JfOFYc)
        Call ID: call_ChndaktJxpr6EMPEB5JfOFYc
        Args:
            location: San Francisco
        ================================= Tool Message =================================
        Name: check_weather
        It's always sunny in San Francisco
        ================================== Ai Message ==================================
        The weather in San Francisco is sunny. Enjoy your day!
        ================================ Human Message =================================
        Cool, so then should i go biking today?
        ================================== Ai Message ==================================
        Since the weather in San Francisco is sunny, it sounds like a great day for biking! Enjoy your ride!
        ```

        Add an interrupt to let the user confirm before taking an action:

        ```pycon
        >>> graph = create_react_agent(
        ...     model, tools, interrupt_before=["tools"], checkpointer=MemorySaver()
        ... )
        >>> config = {"configurable": {"thread_id": "thread-1"}}
        >>> def print_stream(graph, inputs, config):
        ...     for s in graph.stream(inputs, config, stream_mode="values"):
        ...         message = s["messages"][-1]
        ...         if isinstance(message, tuple):
        ...             print(message)
        ...         else:
        ...             message.pretty_print()

        >>> inputs = {"messages": [("user", "What's the weather in SF?")]}
        >>> print_stream(graph, inputs, config)
        >>> snapshot = graph.get_state(config)
        >>> print("Next step: ", snapshot.next)
        >>> print_stream(graph, None, config)
        ```

        Add a timeout for a given step:

        ```pycon
        >>> import time
        >>> @tool
        ... def check_weather(location: str, at_time: datetime | None = None) -> str:
        ...     '''Return the weather forecast for the specified location.'''
        ...     time.sleep(2)
        ...     return f"It's always sunny in {location}"
        >>>
        >>> tools = [check_weather]
        >>> graph = create_react_agent(model, tools)
        >>> graph.step_timeout = 1 # Seconds
        >>> for s in graph.stream({"messages": [("user", "what is the weather in sf")]}):
        ...     print(s)
        TimeoutError: Timed out at step 2
        ```
    """

    if isinstance(tools, ToolExecutor):
        tool_classes = tools.tools
    else:
        tool_classes = tools
    model = model.bind_tools(tool_classes)

    # Define the function that determines whether to continue or not
    def should_continue(state: AgentState):
        messages = state["messages"]
        last_message = messages[-1]
        # If there is no function call, then we finish
        if not last_message.tool_calls:
            return "end"
        # Otherwise if there is, we continue
        else:
            return "continue"

    # Add the message modifier, if exists
    if messages_modifier is None:
        model_runnable = model
    elif isinstance(messages_modifier, str):
        _system_message: BaseMessage = SystemMessage(content=messages_modifier)
        model_runnable = (lambda messages: [_system_message] + messages) | model
    elif isinstance(messages_modifier, SystemMessage):
        model_runnable = (lambda messages: [messages_modifier] + messages) | model
    elif isinstance(messages_modifier, (Callable, Runnable)):
        model_runnable = messages_modifier | model
    else:
        raise ValueError(
            f"Got unexpected type for `messages_modifier`: {type(messages_modifier)}"
        )

    # Define the function that calls the model
    def call_model(
        state: AgentState,
        config: RunnableConfig,
    ):
        messages = state["messages"]
        response = model_runnable.invoke(messages, config)
        if state["is_last_step"] and response.tool_calls:
            return {
                "messages": [
                    AIMessage(
                        id=response.id,
                        content="Sorry, need more steps to process this request.",
                    )
                ]
            }
        # We return a list, because this will get added to the existing list
        return {"messages": [response]}

    async def acall_model(state: AgentState, config: RunnableConfig):
        messages = state["messages"]
        response = await model_runnable.ainvoke(messages, config)
        if state["is_last_step"] and response.tool_calls:
            return {
                "messages": [
                    AIMessage(
                        id=response.id,
                        content="Sorry, need more steps to process this request.",
                    )
                ]
            }
        # We return a list, because this will get added to the existing list
        return {"messages": [response]}

    # Define a new graph
    workflow = StateGraph(AgentState)

    # Define the two nodes we will cycle between
    workflow.add_node("agent", RunnableLambda(call_model, acall_model))
    workflow.add_node("tools", ToolNode(tools))

    # Set the entrypoint as `agent`
    # This means that this node is the first one called
    workflow.set_entry_point("agent")

    # We now add a conditional edge
    workflow.add_conditional_edges(
        # First, we define the start node. We use `agent`.
        # This means these are the edges taken after the `agent` node is called.
        "agent",
        # Next, we pass in the function that will determine which node is called next.
        should_continue,
        # Finally we pass in a mapping.
        # The keys are strings, and the values are other nodes.
        # END is a special node marking that the graph should finish.
        # What will happen is we will call `should_continue`, and then the output of that
        # will be matched against the keys in this mapping.
        # Based on which one it matches, that node will then be called.
        {
            # If `tools`, then we call the tool node.
            "continue": "tools",
            # Otherwise we finish.
            "end": END,
        },
    )

    # We now add a normal edge from `tools` to `agent`.
    # This means that after `tools` is called, `agent` node is called next.
    workflow.add_edge("tools", "agent")

    # Finally, we compile it!
    # This compiles it into a LangChain Runnable,
    # meaning you can use it as you would any other runnable
    return workflow.compile(
        checkpointer=checkpointer,
        interrupt_before=interrupt_before,
        interrupt_after=interrupt_after,
        debug=debug,
    )

ToolNode

from langgraph.prebuilt import ToolNode

Bases: RunnableCallable

A node that runs the tools requested in the last AIMessage. It can be used either in StateGraph with a "messages" key or in MessageGraph. If multiple tool calls are requested, they will be run in parallel. The output will be a list of ToolMessages, one for each tool call.

The ToolNode is roughly analogous to:

tools_by_name = {tool.name: tool for tool in tools}
def tool_node(state: dict):
    result = []
    for tool_call in state["messages"][-1].tool_calls:
        tool = tools_by_name[tool_call["name"]]
        observation = tool.invoke(tool_call["args"])
        result.append(ToolMessage(content=observation, tool_call_id=tool_call["id"]))
    return {"messages": result}
Important
  • The state MUST contain a list of messages.
  • The last message MUST be an AIMessage.
  • The AIMessage MUST have tool_calls populated.
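
For illustration, here is a minimal sketch of calling a ToolNode directly with a state that satisfies the requirements above (the get_time tool and the call_1 id are made up for this example):

>>> from langchain_core.messages import AIMessage
>>> from langchain_core.tools import tool
>>> from langgraph.prebuilt import ToolNode
>>>
>>> @tool
... def get_time(timezone: str) -> str:
...     '''Return the current time in the given timezone.'''
...     return f"12:00 in {timezone}"
>>>
>>> tool_node = ToolNode([get_time])
>>> message = AIMessage(
...     content="",
...     tool_calls=[{"name": "get_time", "args": {"timezone": "UTC"}, "id": "call_1"}],
... )
>>> result = tool_node.invoke({"messages": [message]})
>>> print(result["messages"][-1].content)
12:00 in UTC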
Source code in libs/langgraph/langgraph/prebuilt/tool_node.py
class ToolNode(RunnableCallable):
    """A node that runs the tools requested in the last AIMessage. It can be used
    either in StateGraph with a "messages" key or in MessageGraph. If multiple
    tool calls are requested, they will be run in parallel. The output will be
    a list of ToolMessages, one for each tool call.

    The `ToolNode` is roughly analogous to:

    ```python
    tools_by_name = {tool.name: tool for tool in tools}
    def tool_node(state: dict):
        result = []
        for tool_call in state["messages"][-1].tool_calls:
            tool = tools_by_name[tool_call["name"]]
            observation = tool.invoke(tool_call["args"])
            result.append(ToolMessage(content=observation, tool_call_id=tool_call["id"]))
        return {"messages": result}
    ```

    Important:
        - The state MUST contain a list of messages.
        - The last message MUST be an `AIMessage`.
        - The `AIMessage` MUST have `tool_calls` populated.
    """

    def __init__(
        self,
        tools: Sequence[Union[BaseTool, Callable]],
        *,
        name: str = "tools",
        tags: Optional[list[str]] = None,
    ) -> None:
        super().__init__(self._func, self._afunc, name=name, tags=tags, trace=False)
        self.tools_by_name: Dict[str, BaseTool] = {}
        for tool_ in tools:
            if not isinstance(tool_, BaseTool):
                tool_ = create_tool(tool_)
            self.tools_by_name[tool_.name] = tool_

    def _func(
        self, input: Union[list[AnyMessage], dict[str, Any]], config: RunnableConfig
    ) -> Any:
        if isinstance(input, list):
            output_type = "list"
            message: AnyMessage = input[-1]
        elif messages := input.get("messages", []):
            output_type = "dict"
            message = messages[-1]
        else:
            raise ValueError("No message found in input")

        if not isinstance(message, AIMessage):
            raise ValueError("Last message is not an AIMessage")

        def run_one(call: ToolCall):
            output = self.tools_by_name[call["name"]].invoke(call["args"], config)
            return ToolMessage(
                content=str_output(output), name=call["name"], tool_call_id=call["id"]
            )

        with get_executor_for_config(config) as executor:
            outputs = [*executor.map(run_one, message.tool_calls)]
            if output_type == "list":
                return outputs
            else:
                return {"messages": outputs}

    async def _afunc(
        self, input: Union[list[AnyMessage], dict[str, Any]], config: RunnableConfig
    ) -> Any:
        if isinstance(input, list):
            output_type = "list"
            message: AnyMessage = input[-1]
        elif messages := input.get("messages", []):
            output_type = "dict"
            message = messages[-1]
        else:
            raise ValueError("No message found in input")

        if not isinstance(message, AIMessage):
            raise ValueError("Last message is not an AIMessage")

        async def run_one(call: ToolCall):
            output = await self.tools_by_name[call["name"]].ainvoke(
                call["args"], config
            )
            return ToolMessage(
                content=str_output(output), name=call["name"], tool_call_id=call["id"]
            )

        outputs = await asyncio.gather(*(run_one(call) for call in message.tool_calls))
        if output_type == "list":
            return outputs
        else:
            return {"messages": outputs}

ToolExecutor

from langgraph.prebuilt import ToolExecutor

Bases: RunnableCallable

Executes a tool invocation.

Parameters:

  • tools (Sequence[BaseTool]) –

    A sequence of tools that can be invoked.

  • invalid_tool_msg_template (str, default: INVALID_TOOL_MSG_TEMPLATE ) –

    The template for the error message when an invalid tool is requested. Defaults to INVALID_TOOL_MSG_TEMPLATE.

Examples:

```pycon
>>> from langchain_core.tools import tool
>>> from langgraph.prebuilt.tool_executor import ToolExecutor, ToolInvocation
...
...
>>> @tool
... def search(query: str) -> str:
...     """Search engine."""
...     return f"Searching for: {query}"
...
...
>>> tools = [search]
>>> executor = ToolExecutor(tools)
...
>>> invocation = ToolInvocation(tool="search", tool_input="What is the capital of France?")
>>> result = executor.invoke(invocation)
>>> print(result)
"Searching for: What is the capital of France?"
```

```pycon
>>> invocation = ToolInvocation(
...     tool="nonexistent", tool_input="What is the capital of France?"
... )
>>> result = executor.invoke(invocation)
>>> print(result)
"nonexistent is not a valid tool, try one of [search]."
```
Source code in libs/langgraph/langgraph/prebuilt/tool_executor.py
class ToolExecutor(RunnableCallable):
    """Executes a tool invocation.

    Args:
        tools (Sequence[BaseTool]): A sequence of tools that can be invoked.
        invalid_tool_msg_template (str, optional): The template for the error message
            when an invalid tool is requested. Defaults to INVALID_TOOL_MSG_TEMPLATE.

    Examples:

        ```pycon
        >>> from langchain_core.tools import tool
        >>> from langgraph.prebuilt.tool_executor import ToolExecutor, ToolInvocation
        ...
        ...
        >>> @tool
        ... def search(query: str) -> str:
        ...     \"\"\"Search engine.\"\"\"
        ...     return f"Searching for: {query}"
        ...
        ...
        >>> tools = [search]
        >>> executor = ToolExecutor(tools)
        ...
        >>> invocation = ToolInvocation(tool="search", tool_input="What is the capital of France?")
        >>> result = executor.invoke(invocation)
        >>> print(result)
        "Searching for: What is the capital of France?"
        ```

        ```pycon
        >>> invocation = ToolInvocation(
        ...     tool="nonexistent", tool_input="What is the capital of France?"
        ... )
        >>> result = executor.invoke(invocation)
        >>> print(result)
        "nonexistent is not a valid tool, try one of [search]."
        ```
    """

    def __init__(
        self,
        tools: Sequence[Union[BaseTool, Callable]],
        *,
        invalid_tool_msg_template: str = INVALID_TOOL_MSG_TEMPLATE,
    ) -> None:
        super().__init__(self._execute, afunc=self._aexecute, trace=False)
        tools_ = [
            tool if isinstance(tool, BaseTool) else create_tool(tool) for tool in tools
        ]
        self.tools = tools_
        self.tool_map = {t.name: t for t in tools_}
        self.invalid_tool_msg_template = invalid_tool_msg_template

    def _execute(
        self, tool_invocation: ToolInvocationInterface, config: RunnableConfig
    ) -> Any:
        if tool_invocation.tool not in self.tool_map:
            return self.invalid_tool_msg_template.format(
                requested_tool_name=tool_invocation.tool,
                available_tool_names_str=", ".join([t.name for t in self.tools]),
            )
        else:
            tool = self.tool_map[tool_invocation.tool]
            output = tool.invoke(tool_invocation.tool_input, config)
            return output

    async def _aexecute(
        self, tool_invocation: ToolInvocationInterface, config: RunnableConfig
    ) -> Any:
        if tool_invocation.tool not in self.tool_map:
            return self.invalid_tool_msg_template.format(
                requested_tool_name=tool_invocation.tool,
                available_tool_names_str=", ".join([t.name for t in self.tools]),
            )
        else:
            tool = self.tool_map[tool_invocation.tool]
            output = await tool.ainvoke(tool_invocation.tool_input, config)
            return output

ToolInvocation

from langgraph.prebuilt import ToolInvocation

Bases: Serializable

Information about how to invoke a tool.

Attributes:

  • tool (str) –

    The name of the Tool to execute.

  • tool_input (Union[str, dict]) –

    The input to pass in to the Tool.

Examples:

    invocation = ToolInvocation(
        tool="search",
        tool_input="What is the capital of France?"
    )
Source code in libs/langgraph/langgraph/prebuilt/tool_executor.py
class ToolInvocation(Serializable):
    """Information about how to invoke a tool.

    Attributes:
        tool (str): The name of the Tool to execute.
        tool_input (Union[str, dict]): The input to pass in to the Tool.

    Examples:

            invocation = ToolInvocation(
                tool="search",
                tool_input="What is the capital of France?"
            )
    """

    tool: str
    tool_input: Union[str, dict]

tools_condition

from langgraph.prebuilt import tools_condition

Use in the conditional_edge to route to the ToolNode if the last message has tool calls. Otherwise, route to the end.

Parameters:

  • state (Union[list[AnyMessage], dict[str, Any]]) –

    The state to check for tool calls. Must have a list of messages (MessageGraph) or have the "messages" key (StateGraph).

Returns:

  • Literal['tools', '__end__']

    The next node to route to.

Examples:

Create a custom ReAct-style agent with tools.

>>> from langchain_anthropic import ChatAnthropic
>>> from langchain_core.tools import tool
>>>
>>> from langgraph.graph import MessageGraph
>>> from langgraph.prebuilt import ToolNode, tools_condition
>>>
>>> @tool
... def divide(a: float, b: float) -> float:
...     """Return a / b."""
...     return a / b
>>>
>>> llm = ChatAnthropic(model="claude-3-haiku-20240307")
>>> tools = [divide]
>>>
>>> graph_builder = MessageGraph()
>>> graph_builder.add_node("tools", ToolNode(tools))
>>> graph_builder.add_node("chatbot", llm.bind_tools(tools))
>>> graph_builder.add_edge("tools", "chatbot")
>>> graph_builder.add_conditional_edges(
...     "chatbot", tools_condition
... )
>>> graph_builder.set_entry_point("chatbot")
>>> graph = graph_builder.compile()
>>> graph.invoke([("user", "What's 329993 divided by 13662?")])

Source code in libs/langgraph/langgraph/prebuilt/tool_node.py
def tools_condition(
    state: Union[list[AnyMessage], dict[str, Any]],
) -> Literal["tools", "__end__"]:
    """Use in the conditional_edge to route to the ToolNode if the last message

    has tool calls. Otherwise, route to the end.

    Args:
        state (Union[list[AnyMessage], dict[str, Any]]): The state to check for
            tool calls. Must have a list of messages (MessageGraph) or have the
            "messages" key (StateGraph).

    Returns:
        The next node to route to.


    Examples:
        Create a custom ReAct-style agent with tools.
        ```pycon
        >>> from langchain_anthropic import ChatAnthropic
        >>> from langchain_core.tools import tool
        >>>
        >>> from langgraph.graph import MessageGraph
        >>> from langgraph.prebuilt import ToolNode, tools_condition
        >>>
        >>> @tool
        ... def divide(a: float, b: float) -> float:
        ...     \"\"\"Return a / b.\"\"\"
        ...     return a / b
        >>>
        >>> llm = ChatAnthropic(model="claude-3-haiku-20240307")
        >>> tools = [divide]
        >>>
        >>> graph_builder = MessageGraph()
        >>> graph_builder.add_node("tools", ToolNode(tools))
        >>> graph_builder.add_node("chatbot", llm.bind_tools(tools))
        >>> graph_builder.add_edge("tools", "chatbot")
        >>> graph_builder.add_conditional_edges(
        ...     "chatbot", tools_condition
        ... )
        >>> graph_builder.set_entry_point("chatbot")
        >>> graph = graph_builder.compile()
        >>> graph.invoke([("user", "What's 329993 divided by 13662?")])
        ```
    """
    if isinstance(state, list):
        ai_message = state[-1]
    elif messages := state.get("messages", []):
        ai_message = messages[-1]
    else:
        raise ValueError(f"No messages found in input state to tool_edge: {state}")
    if hasattr(ai_message, "tool_calls") and len(ai_message.tool_calls) > 0:
        return "tools"
    return "__end__"

ValidationNode

from langgraph.prebuilt import ValidationNode

Bases: RunnableCallable

A node that validates all tools requests from the last AIMessage.

It can be used either in StateGraph with a "messages" key or in MessageGraph.

Note

This node does not actually run the tools, it only validates the tool calls, which is useful for extraction and other use cases where you need to generate structured output that conforms to a complex schema without losing the original messages and tool IDs (for use in multi-turn conversations).

Parameters:

  • schemas (Sequence[Union[BaseTool, Type[BaseModel], Callable]]) –

    A list of schemas to validate the tool calls with. These can be any of the following:

      - A pydantic BaseModel class
      - A BaseTool instance (the args_schema will be used)
      - A function (a schema will be created from the function signature)

  • format_error (Optional[Callable[[BaseException, ToolCall, Type[BaseModel]], str]], default: None ) –

    A function that takes an exception, a ToolCall, and a schema and returns a formatted error string. By default, it returns the exception repr and a message to respond after fixing validation errors. A custom formatter is sketched after the example below.

  • name (str, default: 'validation' ) –

    The name of the node.

  • tags (Optional[list[str]], default: None ) –

    A list of tags to add to the node.

Returns:

  • Union[Dict[str, List[ToolMessage]], Sequence[ToolMessage]]

    A list of ToolMessages with the validated content or error messages.

Examples:

Example usage for re-prompting the model to generate a valid response:

>>> from typing import Literal
...
>>> from langchain_anthropic import ChatAnthropic
>>> from langchain_core.pydantic_v1 import BaseModel, validator
...
>>> from langgraph.graph import END, START, MessageGraph
>>> from langgraph.prebuilt import ValidationNode
...
...
>>> class SelectNumber(BaseModel):
...     a: int
...
...     @validator("a")
...     def a_must_be_meaningful(cls, v):
...         if v != 37:
...             raise ValueError("Only 37 is allowed")
...         return v
...
...
>>> builder = MessageGraph()
>>> llm = ChatAnthropic(model="claude-3-haiku-20240307").bind_tools([SelectNumber])
>>> builder.add_node("model", llm)
>>> builder.add_node("validation", ValidationNode([SelectNumber]))
>>> builder.add_edge(START, "model")
...
...
>>> def should_validate(state: list) -> Literal["validation", "__end__"]:
...     if state[-1].tool_calls:
...         return "validation"
...     return END
...
...
>>> builder.add_conditional_edges("model", should_validate)
...
...
>>> def should_reprompt(state: list) -> Literal["model", "__end__"]:
...     for msg in state[::-1]:
...         # None of the tool calls were errors
...         if msg.type == "ai":
...             return END
...         if msg.additional_kwargs.get("is_error"):
...             return "model"
...     return END
...
...
>>> builder.add_conditional_edges("validation", should_reprompt)
...
...
>>> graph = builder.compile()
>>> res = graph.invoke(("user", "Select a number, any number"))
>>> # Show the retry logic
>>> for msg in res:
...     msg.pretty_print()
================================ Human Message =================================
Select a number, any number
================================== Ai Message ==================================
[{'id': 'toolu_01JSjT9Pq8hGmTgmMPc6KnvM', 'input': {'a': 42}, 'name': 'SelectNumber', 'type': 'tool_use'}]
Tool Calls:
SelectNumber (toolu_01JSjT9Pq8hGmTgmMPc6KnvM)
Call ID: toolu_01JSjT9Pq8hGmTgmMPc6KnvM
Args:
    a: 42
================================= Tool Message =================================
Name: SelectNumber
ValidationError(model='SelectNumber', errors=[{'loc': ('a',), 'msg': 'Only 37 is allowed', 'type': 'value_error'}])
Respond after fixing all validation errors.
================================== Ai Message ==================================
[{'id': 'toolu_01PkxSVxNxc5wqwCPW1FiSmV', 'input': {'a': 37}, 'name': 'SelectNumber', 'type': 'tool_use'}]
Tool Calls:
SelectNumber (toolu_01PkxSVxNxc5wqwCPW1FiSmV)
Call ID: toolu_01PkxSVxNxc5wqwCPW1FiSmV
Args:
    a: 37
================================= Tool Message =================================
Name: SelectNumber
{"a": 37}
Source code in libs/langgraph/langgraph/prebuilt/tool_validator.py
class ValidationNode(RunnableCallable):
    """A node that validates all tools requests from the last AIMessage.

    It can be used either in StateGraph with a "messages" key or in MessageGraph.

    !!! note

        This node does not actually **run** the tools, it only validates the tool calls,
        which is useful for extraction and other use cases where you need to generate
        structured output that conforms to a complex schema without losing the original
        messages and tool IDs (for use in multi-turn conversations).

    Args:
        schemas: A list of schemas to validate the tool calls with. These can be
            any of the following:
            - A pydantic BaseModel class
            - A BaseTool instance (the args_schema will be used)
            - A function (a schema will be created from the function signature)
        format_error: A function that takes an exception, a ToolCall, and a schema
            and returns a formatted error string. By default, it returns the
            exception repr and a message to respond after fixing validation errors.
        name: The name of the node.
        tags: A list of tags to add to the node.

    Returns:
        (Union[Dict[str, List[ToolMessage]], Sequence[ToolMessage]]): A list of ToolMessages with the validated content or error messages.

    Examples:
        Example usage for re-prompting the model to generate a valid response:
        >>> from typing import Literal
        ...
        >>> from langchain_anthropic import ChatAnthropic
        >>> from langchain_core.pydantic_v1 import BaseModel, validator
        ...
        >>> from langgraph.graph import END, START, MessageGraph
        >>> from langgraph.prebuilt import ValidationNode
        ...
        ...
        >>> class SelectNumber(BaseModel):
        ...     a: int
        ...
        ...     @validator("a")
        ...     def a_must_be_meaningful(cls, v):
        ...         if v != 37:
        ...             raise ValueError("Only 37 is allowed")
        ...         return v
        ...
        ...
        >>> builder = MessageGraph()
        >>> llm = ChatAnthropic(model="claude-3-haiku-20240307").bind_tools([SelectNumber])
        >>> builder.add_node("model", llm)
        >>> builder.add_node("validation", ValidationNode([SelectNumber]))
        >>> builder.add_edge(START, "model")
        ...
        ...
        >>> def should_validate(state: list) -> Literal["validation", "__end__"]:
        ...     if state[-1].tool_calls:
        ...         return "validation"
        ...     return END
        ...
        ...
        >>> builder.add_conditional_edges("model", should_validate)
        ...
        ...
        >>> def should_reprompt(state: list) -> Literal["model", "__end__"]:
        ...     for msg in state[::-1]:
        ...         # None of the tool calls were errors
        ...         if msg.type == "ai":
        ...             return END
        ...         if msg.additional_kwargs.get("is_error"):
        ...             return "model"
        ...     return END
        ...
        ...
        >>> builder.add_conditional_edges("validation", should_reprompt)
        ...
        ...
        >>> graph = builder.compile()
        >>> res = graph.invoke(("user", "Select a number, any number"))
        >>> # Show the retry logic
        >>> for msg in res:
        ...     msg.pretty_print()
        ================================ Human Message =================================
        Select a number, any number
        ================================== Ai Message ==================================
        [{'id': 'toolu_01JSjT9Pq8hGmTgmMPc6KnvM', 'input': {'a': 42}, 'name': 'SelectNumber', 'type': 'tool_use'}]
        Tool Calls:
        SelectNumber (toolu_01JSjT9Pq8hGmTgmMPc6KnvM)
        Call ID: toolu_01JSjT9Pq8hGmTgmMPc6KnvM
        Args:
            a: 42
        ================================= Tool Message =================================
        Name: SelectNumber
        ValidationError(model='SelectNumber', errors=[{'loc': ('a',), 'msg': 'Only 37 is allowed', 'type': 'value_error'}])
        Respond after fixing all validation errors.
        ================================== Ai Message ==================================
        [{'id': 'toolu_01PkxSVxNxc5wqwCPW1FiSmV', 'input': {'a': 37}, 'name': 'SelectNumber', 'type': 'tool_use'}]
        Tool Calls:
        SelectNumber (toolu_01PkxSVxNxc5wqwCPW1FiSmV)
        Call ID: toolu_01PkxSVxNxc5wqwCPW1FiSmV
        Args:
            a: 37
        ================================= Tool Message =================================
        Name: SelectNumber
        {"a": 37}

    """

    def __init__(
        self,
        schemas: Sequence[Union[BaseTool, Type[BaseModel], Callable]],
        *,
        format_error: Optional[
            Callable[[BaseException, ToolCall, Type[BaseModel]], str]
        ] = None,
        name: str = "validation",
        tags: Optional[list[str]] = None,
    ) -> None:
        super().__init__(self._func, None, name=name, tags=tags, trace=False)
        self._format_error = format_error or _default_format_error
        self.schemas_by_name: Dict[str, Type[BaseModel]] = {}
        for schema in schemas:
            if isinstance(schema, BaseTool):
                if schema.args_schema is None:
                    raise ValueError(
                        f"Tool {schema.name} does not have an args_schema defined."
                    )
                self.schemas_by_name[schema.name] = schema.args_schema
            elif isinstance(schema, type) and issubclass(
                schema, (BaseModel, BaseModelV2)
            ):
                self.schemas_by_name[schema.__name__] = cast(Type[BaseModel], schema)
            elif callable(schema):
                base_model = create_schema_from_function("Validation", schema)
                self.schemas_by_name[schema.__name__] = base_model
            else:
                raise ValueError(
                    f"Unsupported input to ValidationNode. Expected BaseModel, tool or function. Got: {type(schema)}."
                )

    def _get_message(
        self, input: Union[list[AnyMessage], dict[str, Any]]
    ) -> Tuple[str, AIMessage]:
        """Extract the last AIMessage from the input."""
        if isinstance(input, list):
            output_type = "list"
            messages: list = input
        elif messages := input.get("messages", []):
            output_type = "dict"
        else:
            raise ValueError("No message found in input")
        message: AnyMessage = messages[-1]
        if not isinstance(message, AIMessage):
            raise ValueError("Last message is not an AIMessage")
        return output_type, message

    def _func(
        self, input: Union[list[AnyMessage], dict[str, Any]], config: RunnableConfig
    ) -> Any:
        """Validate and run tool calls synchronously."""
        output_type, message = self._get_message(input)

        def run_one(call: ToolCall):
            schema = self.schemas_by_name[call["name"]]
            try:
                output = schema.validate(call["args"])
                return ToolMessage(
                    content=output.json(),
                    name=call["name"],
                    tool_call_id=cast(str, call["id"]),
                )
            except (ValidationError, ValidationErrorV2) as e:
                return ToolMessage(
                    content=self._format_error(e, call, schema),
                    name=call["name"],
                    tool_call_id=cast(str, call["id"]),
                    additional_kwargs={"is_error": True},
                )

        with get_executor_for_config(config) as executor:
            outputs = [*executor.map(run_one, message.tool_calls)]
            if output_type == "list":
                return outputs
            else:
                return {"messages": outputs}
