Agents

Classes:

  • AgentState

    The state of the agent.

Functions:

  • create_react_agent

    Creates an agent graph that calls tools in a loop until a stopping condition is met.

AgentState

Bases: TypedDict

The state of the agent.
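
The required keys, per the state_schema parameter below, are messages and remaining_steps. A rough sketch of the shape (illustrative; the actual class manages remaining_steps internally rather than as a plain int):

from typing import Annotated, Sequence
from typing_extensions import TypedDict

from langchain_core.messages import BaseMessage
from langgraph.graph.message import add_messages

class AgentState(TypedDict):
    # Conversation history; updates are merged via the add_messages reducer
    messages: Annotated[Sequence[BaseMessage], add_messages]
    # Step budget used to stop the loop before the recursion limit is hit
    remaining_steps: int  # modeled as a plain int for illustration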

create_react_agent

create_react_agent(
    model: Union[str, LanguageModelLike],
    tools: Union[
        Sequence[Union[BaseTool, Callable]], ToolNode
    ],
    *,
    prompt: Optional[Prompt] = None,
    response_format: Optional[
        Union[
            StructuredResponseSchema,
            tuple[str, StructuredResponseSchema],
        ]
    ] = None,
    pre_model_hook: Optional[RunnableLike] = None,
    state_schema: Optional[StateSchemaType] = None,
    config_schema: Optional[Type[Any]] = None,
    checkpointer: Optional[Checkpointer] = None,
    store: Optional[BaseStore] = None,
    interrupt_before: Optional[list[str]] = None,
    interrupt_after: Optional[list[str]] = None,
    debug: bool = False,
    version: Literal["v1", "v2"] = "v1",
    name: Optional[str] = None
) -> CompiledGraph

Creates an agent graph that calls tools in a loop until a stopping condition is met.

For more details on using create_react_agent, see the Agents documentation.

Parameters:

  • model (Union[str, LanguageModelLike]) –

    The LangChain chat model that supports tool calling.

  • tools (Union[Sequence[Union[BaseTool, Callable]], ToolNode]) –

    A list of tools or a ToolNode instance. If an empty list is provided, the agent will consist of a single LLM node without tool calling.

  • prompt (Optional[Prompt], default: None ) –

    An optional prompt for the LLM. Can take a few different forms:

    • str: This is converted to a SystemMessage and added to the beginning of the list of messages in state["messages"].
    • SystemMessage: This is added to the beginning of the list of messages in state["messages"].
    • Callable: This function should take in the full graph state; its output is then passed to the language model.
    • Runnable: This runnable should take in the full graph state; its output is then passed to the language model.
  • response_format (Optional[Union[StructuredResponseSchema, tuple[str, StructuredResponseSchema]]], default: None ) –

    An optional schema for the final agent output.

    If provided, output will be formatted to match the given schema and returned in the 'structured_response' state key. If not provided, structured_response will not be present in the output state. Can be passed in as:

    - an OpenAI function/tool schema,
    - a JSON Schema,
    - a TypedDict class,
    - a Pydantic class, or
    - a tuple (prompt, schema), where schema is one of the above. The prompt will be used together with the model to generate the structured response.
    

    Important

    response_format requires the model to support .with_structured_output

    Note

    The graph will make a separate call to the LLM to generate the structured response after the agent loop is finished. This is not the only strategy to get structured responses, see more options in this guide.
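
    For example, a minimal sketch using a Pydantic class as the schema (the WeatherReport fields are illustrative, and check_weather is the tool defined in the Example section below):

from pydantic import BaseModel

from langgraph.prebuilt import create_react_agent

class WeatherReport(BaseModel):  # illustrative schema
    location: str
    forecast: str

agent = create_react_agent(
    "anthropic:claude-3-7-sonnet-latest",
    tools=[check_weather],  # assumed: the tool from the Example below
    response_format=WeatherReport,
)
result = agent.invoke(
    {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
)
result["structured_response"]  # a WeatherReport instance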

  • pre_model_hook (Optional[RunnableLike], default: None ) –

    An optional node to add before the agent node (i.e., the node that calls the LLM). Useful for managing long message histories (e.g., message trimming, summarization, etc.). The pre-model hook must be a callable or a runnable that takes in the current graph state and returns a state update of the form:

    # At least one of `messages` or `llm_input_messages` MUST be provided
    {
        # If provided, will UPDATE the `messages` in the state
        "messages": [RemoveMessage(id=REMOVE_ALL_MESSAGES), ...],
        # If provided, will be used as the input to the LLM,
        # and will NOT UPDATE `messages` in the state
        "llm_input_messages": [...],
        # Any other state keys that need to be propagated
        ...
    }
    

    Important

    At least one of messages or llm_input_messages MUST be provided and will be used as an input to the agent node. The rest of the keys will be added to the graph state.

    Warning

    If you are returning messages in the pre-model hook, you should OVERWRITE the messages key by doing the following:

{
    "messages": [RemoveMessage(id=REMOVE_ALL_MESSAGES), *new_messages],
    ...
}
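
    For instance, a minimal trimming hook (the cap of 20 messages is arbitrary; returning llm_input_messages leaves state["messages"] untouched):

from langgraph.prebuilt import create_react_agent

def keep_recent(state):
    # Pass only the most recent messages to the LLM; because this returns
    # `llm_input_messages`, the stored `messages` key is NOT updated
    return {"llm_input_messages": state["messages"][-20:]}

agent = create_react_agent(
    "anthropic:claude-3-7-sonnet-latest",
    tools=[check_weather],  # assumed: the tool from the Example below
    pre_model_hook=keep_recent,
)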
    
  • state_schema (Optional[StateSchemaType], default: None ) –

    An optional state schema that defines graph state. Must have messages and remaining_steps keys. Defaults to AgentState that defines those two keys.
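
    A sketch of a custom schema extending the default (the import path and the extra user_name key are assumptions for illustration):

from langgraph.prebuilt import create_react_agent
from langgraph.prebuilt.chat_agent_executor import AgentState  # assumed import path

class CustomState(AgentState):
    # messages and remaining_steps are inherited from AgentState
    user_name: str  # illustrative extra key

agent = create_react_agent(
    "anthropic:claude-3-7-sonnet-latest",
    tools=[check_weather],  # assumed: the tool from the Example below
    state_schema=CustomState,
)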

  • config_schema (Optional[Type[Any]], default: None ) –

    An optional schema for configuration. Use this to expose configurable parameters via agent.config_specs.

  • checkpointer (Optional[Checkpointer], default: None ) –

    An optional checkpoint saver object. This is used for persisting the state of the graph (e.g., as chat memory) for a single thread (e.g., a single conversation).

  • store (Optional[BaseStore], default: None ) –

    An optional store object. This is used for persisting data across multiple threads (e.g., multiple conversations / users).

  • interrupt_before (Optional[list[str]], default: None ) –

    An optional list of node names to interrupt before. Should be one of the following: "agent", "tools". This is useful if you want to add a user confirmation or other interrupt before taking an action.

  • interrupt_after (Optional[list[str]], default: None ) –

    An optional list of node names to interrupt after. Should be one of the following: "agent", "tools". This is useful if you want to return directly or run additional processing on an output.

  • debug (bool, default: False ) –

    A flag indicating whether to enable debug mode.

  • version (Literal['v1', 'v2'], default: 'v1' ) –

    Determines the version of the graph to create. Can be one of:

    • "v1": The tool node processes a single message. All tool calls in the message are executed in parallel within the tool node.
    • "v2": The tool node processes a tool call. Tool calls are distributed across multiple instances of the tool node using the Send API.
  • name (Optional[str], default: None ) –

    An optional name for the CompiledStateGraph. This name will be automatically used when adding ReAct agent graph to another graph as a subgraph node - particularly useful for building multi-agent systems.

Returns:

  • CompiledGraph

    A compiled LangChain runnable that can be used for chat interactions.

The "agent" node calls the language model with the messages list (after applying the prompt). If the resulting AIMessage contains tool_calls, the graph will then call the "tools". The "tools" node executes the tools (1 tool per tool_call) and adds the responses to the messages list as ToolMessage objects. The agent node then calls the language model again. The process repeats until no more tool_calls are present in the response. The agent then returns the full list of messages as a dictionary containing the key "messages".

    sequenceDiagram
        participant U as User
        participant A as LLM
        participant T as Tools
        U->>A: Initial input
        Note over A: Prompt + LLM
        loop while tool_calls present
            A->>T: Execute tools
            T-->>A: A ToolMessage for each tool_call
        end
        A->>U: Return final state
Example
from langgraph.prebuilt import create_react_agent

def check_weather(location: str) -> str:
    '''Return the weather forecast for the specified location.'''
    return f"It's always sunny in {location}"

graph = create_react_agent(
    "anthropic:claude-3-7-sonnet-latest",
    tools=[check_weather],
    prompt="You are a helpful assistant",
)
inputs = {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
for chunk in graph.stream(inputs, stream_mode="updates"):
    print(chunk)
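
To persist the conversation across invocations (see the checkpointer parameter above), a minimal sketch using the in-memory saver (the thread id value is arbitrary):

from langgraph.checkpoint.memory import MemorySaver

graph = create_react_agent(
    "anthropic:claude-3-7-sonnet-latest",
    tools=[check_weather],
    checkpointer=MemorySaver(),  # persists graph state per thread
)
config = {"configurable": {"thread_id": "1"}}
graph.invoke({"messages": [{"role": "user", "content": "what is the weather in sf"}]}, config)
# A second call with the same thread_id continues the same conversation
graph.invoke({"messages": [{"role": "user", "content": "how about tomorrow?"}]}, config)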

ToolNode

Bases: RunnableCallable

A node that runs the tools called in the last AIMessage.

It can be used in a StateGraph with a "messages" state key (or a custom key passed via ToolNode's messages_key). If multiple tool calls are requested, they will be run in parallel. The output will be a list of ToolMessages, one for each tool call.

Tool calls can also be passed directly as a list of ToolCall dicts.

Parameters:

  • tools (Sequence[Union[BaseTool, Callable]]) –

    A sequence of tools that can be invoked by the ToolNode.

  • name (str, default: 'tools' ) –

    The name of the ToolNode in the graph. Defaults to "tools".

  • tags (Optional[list[str]], default: None ) –

    Optional tags to associate with the node. Defaults to None.

  • handle_tool_errors (Union[bool, str, Callable[..., str], tuple[type[Exception], ...]], default: True ) –

    How to handle tool errors raised by tools inside the node. Defaults to True. Must be one of the following (see the sketch after this parameter list):

    • True: all errors will be caught and a ToolMessage with a default error message (TOOL_CALL_ERROR_TEMPLATE) will be returned.
    • str: all errors will be caught and a ToolMessage with the string value of 'handle_tool_errors' will be returned.
    • tuple[type[Exception], ...]: exceptions in the tuple will be caught and a ToolMessage with a default error message (TOOL_CALL_ERROR_TEMPLATE) will be returned.
    • Callable[..., str]: exceptions matching the callable's signature will be caught, and a ToolMessage with the string returned by the 'handle_tool_errors' callable will be returned.
    • False: none of the errors raised by the tools will be caught
  • messages_key (str, default: 'messages' ) –

    The state key in the input that contains the list of messages. The same key will be used for the output from the ToolNode. Defaults to "messages".
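
A sketch of the handle_tool_errors variants described above (the divide tool and the error-message strings are illustrative):

from langchain_core.tools import tool
from langgraph.prebuilt import ToolNode

@tool
def divide(a: float, b: float) -> float:
    """Return a / b."""
    return a / b

# str: catch every error and return this fixed message to the model
node = ToolNode([divide], handle_tool_errors="Please fix your mistakes and try again.")

# tuple: catch only ZeroDivisionError; anything else propagates
node = ToolNode([divide], handle_tool_errors=(ZeroDivisionError,))

# Callable: catch exceptions matching the signature and format the reply
def handle_zero_division(e: ZeroDivisionError) -> str:
    return f"Cannot divide by zero: {e}"

node = ToolNode([divide], handle_tool_errors=handle_zero_division)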

The ToolNode is roughly analogous to:

from langchain_core.messages import ToolMessage

tools_by_name = {tool.name: tool for tool in tools}

def tool_node(state: dict):
    result = []
    for tool_call in state["messages"][-1].tool_calls:
        tool = tools_by_name[tool_call["name"]]
        observation = tool.invoke(tool_call["args"])
        result.append(ToolMessage(content=observation, tool_call_id=tool_call["id"]))
    return {"messages": result}

Tool calls can also be passed directly to a ToolNode. This can be useful when using the Send API, e.g., in a conditional edge:

from typing import List

from langgraph.types import Send

def example_conditional_edge(state: dict) -> List[Send]:
    tool_calls = state["messages"][-1].tool_calls
    # If tools rely on state or store variables (whose values are not generated
    # directly by a model), you can inject them into the tool calls.
    tool_calls = [
        tool_node.inject_tool_args(call, state, store)
        for call in tool_calls
    ]
    return [Send("tools", [tool_call]) for tool_call in tool_calls]
Important
  • The input state can be one of the following:
    • A dict with a messages key containing a list of messages.
    • A list of messages.
    • A list of tool calls.
  • If operating on a message list, the last message must be an AIMessage with tool_calls populated.

Methods:

  • inject_tool_args

    Injects the state and store into the tool call.

inject_tool_args

inject_tool_args(
    tool_call: ToolCall,
    input: Union[
        list[AnyMessage], dict[str, Any], BaseModel
    ],
    store: Optional[BaseStore],
) -> ToolCall

Injects the state and store into the tool call.

Tool arguments with types annotated as InjectedState and InjectedStore are ignored in tool schemas for generation purposes. This method injects them into tool calls for tool invocation.

Parameters:

  • tool_call (ToolCall) –

    The tool call to inject the state and store into.

  • input (Union[list[AnyMessage], dict[str, Any], BaseModel]) –

    The graph state (or message list) from which to pull the injected values.

  • store (Optional[BaseStore]) –

    The store to inject into arguments annotated with InjectedStore, if any.

Returns:

  • ToolCall ( ToolCall ) –

    The tool call with injected state and store.

Classes:

  • InjectedState

    Annotation for a Tool arg that is meant to be populated with the graph state.

  • InjectedStore

    Annotation for a Tool arg that is meant to be populated with LangGraph store.

Functions:

  • tools_condition

    Use in the conditional_edge to route to the ToolNode if the last message has tool calls. Otherwise, route to the end.

InjectedState

Bases: InjectedToolArg

Annotation for a Tool arg that is meant to be populated with the graph state.

Any Tool argument annotated with InjectedState will be hidden from a tool-calling model, so that the model doesn't attempt to generate the argument. If using ToolNode, the appropriate graph state field will be automatically injected into the model-generated tool args.

Parameters:

  • field (Optional[str], default: None ) –

    The key from state to insert. If None, the entire state is expected to be passed in.

Example
from typing import List
from typing_extensions import Annotated, TypedDict

from langchain_core.messages import BaseMessage, AIMessage
from langchain_core.tools import tool

from langgraph.prebuilt import InjectedState, ToolNode


class AgentState(TypedDict):
    messages: List[BaseMessage]
    foo: str

@tool
def state_tool(x: int, state: Annotated[dict, InjectedState]) -> str:
    '''Do something with state.'''
    if len(state["messages"]) > 2:
        return state["foo"] + str(x)
    else:
        return "not enough messages"

@tool
def foo_tool(x: int, foo: Annotated[str, InjectedState("foo")]) -> str:
    '''Do something else with state.'''
    return foo + str(x + 1)

node = ToolNode([state_tool, foo_tool])

tool_call1 = {"name": "state_tool", "args": {"x": 1}, "id": "1", "type": "tool_call"}
tool_call2 = {"name": "foo_tool", "args": {"x": 1}, "id": "2", "type": "tool_call"}
state = {
    "messages": [AIMessage("", tool_calls=[tool_call1, tool_call2])],
    "foo": "bar",
}
node.invoke(state)
[
    ToolMessage(content='not enough messages', name='state_tool', tool_call_id='1'),
    ToolMessage(content='bar2', name='foo_tool', tool_call_id='2')
]

InjectedStore

Bases: InjectedToolArg

Annotation for a Tool arg that is meant to be populated with LangGraph store.

Any Tool argument annotated with InjectedStore will be hidden from a tool-calling model, so that the model doesn't attempt to generate the argument. If using ToolNode, the appropriate store field will be automatically injected into the model-generated tool args. Note: if a graph is compiled with a store object, the store will be automatically propagated to the tools with InjectedStore args when using ToolNode.

Warning

InjectedStore annotation requires langchain-core >= 0.3.8

Example
from typing import Any
from typing_extensions import Annotated

from langchain_core.messages import AIMessage
from langchain_core.tools import tool

from langgraph.store.memory import InMemoryStore
from langgraph.prebuilt import InjectedStore, ToolNode

store = InMemoryStore()
store.put(("values",), "foo", {"bar": 2})

@tool
def store_tool(x: int, my_store: Annotated[Any, InjectedStore()]) -> str:
    '''Do something with store.'''
    stored_value = my_store.get(("values",), "foo").value["bar"]
    return str(stored_value + x)

node = ToolNode([store_tool])

tool_call = {"name": "store_tool", "args": {"x": 1}, "id": "1", "type": "tool_call"}
state = {
    "messages": [AIMessage("", tool_calls=[tool_call])],
}

node.invoke(state, store=store)
{
    "messages": [
        ToolMessage(content='3', name='store_tool', tool_call_id='1'),
    ]
}

tools_condition

tools_condition(
    state: Union[
        list[AnyMessage], dict[str, Any], BaseModel
    ],
    messages_key: str = "messages",
) -> Literal["tools", "__end__"]

Use in the conditional_edge to route to the ToolNode if the last message has tool calls. Otherwise, route to the end.

Parameters:

  • state (Union[list[AnyMessage], dict[str, Any], BaseModel]) –

    The state to check for tool calls. Must have a list of messages (MessageGraph) or have the "messages" key (StateGraph).

Returns:

  • Literal['tools', '__end__']

    The next node to route to.

Examples:

Create a custom ReAct-style agent with tools.

>>> from langchain_anthropic import ChatAnthropic
>>> from langchain_core.tools import tool
...
>>> from langgraph.graph import StateGraph
>>> from langgraph.prebuilt import ToolNode, tools_condition
>>> from langgraph.graph.message import add_messages
...
>>> from typing import Annotated
>>> from typing_extensions import TypedDict
...
>>> @tool
... def divide(a: float, b: float) -> float:
...     """Return a / b."""
...     return a / b
...
>>> llm = ChatAnthropic(model="claude-3-haiku-20240307")
>>> tools = [divide]
...
>>> class State(TypedDict):
...     messages: Annotated[list, add_messages]
>>>
>>> graph_builder = StateGraph(State)
>>> graph_builder.add_node("tools", ToolNode(tools))
>>> graph_builder.add_node("chatbot", lambda state: {"messages":llm.bind_tools(tools).invoke(state['messages'])})
>>> graph_builder.add_edge("tools", "chatbot")
>>> graph_builder.add_conditional_edges(
...     "chatbot", tools_condition
... )
>>> graph_builder.set_entry_point("chatbot")
>>> graph = graph_builder.compile()
>>> graph.invoke({"messages": {"role": "user", "content": "What's 329993 divided by 13662?"}})

ValidationNode

Bases: RunnableCallable

A node that validates all tool calls requested in the last AIMessage.

It can be used either in StateGraph with a "messages" key or in MessageGraph.

Note

This node does not actually run the tools, it only validates the tool calls, which is useful for extraction and other use cases where you need to generate structured output that conforms to a complex schema without losing the original messages and tool IDs (for use in multi-turn conversations).

Parameters:

  • schemas (Sequence[Union[BaseTool, Type[BaseModel], Callable]]) –

    A list of schemas to validate the tool calls with. These can be any of the following:

    • A pydantic BaseModel class
    • A BaseTool instance (the args_schema will be used)
    • A function (a schema will be created from the function signature)

  • format_error (Optional[Callable[[BaseException, ToolCall, Type[BaseModel]], str]], default: None ) –

    A function that takes an exception, a ToolCall, and a schema and returns a formatted error string. By default, it returns the exception repr and a message to respond after fixing validation errors.

  • name (str, default: 'validation' ) –

    The name of the node.

  • tags (Optional[list[str]], default: None ) –

    A list of tags to add to the node.

Example
Example usage for re-prompting the model to generate a valid response:
from typing import Literal, Annotated
from typing_extensions import TypedDict

from langchain_anthropic import ChatAnthropic
from pydantic import BaseModel, field_validator

from langgraph.graph import END, START, StateGraph
from langgraph.prebuilt import ValidationNode
from langgraph.graph.message import add_messages

class SelectNumber(BaseModel):
    a: int

    @field_validator("a")
    def a_must_be_meaningful(cls, v):
        if v != 37:
            raise ValueError("Only 37 is allowed")
        return v

builder = StateGraph(Annotated[list, add_messages])
llm = ChatAnthropic(model="claude-3-5-haiku-latest").bind_tools([SelectNumber])
builder.add_node("model", llm)
builder.add_node("validation", ValidationNode([SelectNumber]))
builder.add_edge(START, "model")

def should_validate(state: list) -> Literal["validation", "__end__"]:
    if state[-1].tool_calls:
        return "validation"
    return END

builder.add_conditional_edges("model", should_validate)

def should_reprompt(state: list) -> Literal["model", "__end__"]:
    for msg in state[::-1]:
        # Reached the AI message without seeing an error: no tool calls failed validation
        if msg.type == "ai":
            return END
        if msg.additional_kwargs.get("is_error"):
            return "model"
    return END

builder.add_conditional_edges("validation", should_reprompt)

graph = builder.compile()
res = graph.invoke(("user", "Select a number, any number"))
# Show the retry logic
for msg in res:
    msg.pretty_print()

Classes:

  • HumanInterruptConfig

    Configuration that defines what actions are allowed for a human interrupt.

  • ActionRequest

    Represents a request for human action within the graph execution.

  • HumanInterrupt

    Represents an interrupt triggered by the graph that requires human intervention.

  • HumanResponse

    The response provided by a human to an interrupt, which is returned when graph execution resumes.

HumanInterruptConfig

Bases: TypedDict

Configuration that defines what actions are allowed for a human interrupt.

This controls the available interaction options when the graph is paused for human input.

Attributes:

  • allow_ignore (bool) –

    Whether the human can choose to ignore/skip the current step

  • allow_respond (bool) –

    Whether the human can provide a text response/feedback

  • allow_edit (bool) –

    Whether the human can edit the provided content/state

  • allow_accept (bool) –

    Whether the human can accept/approve the current state

ActionRequest

Bases: TypedDict

Represents a request for human action within the graph execution.

Contains the action type and any associated arguments needed for the action.

Attributes:

  • action (str) –

    The type or name of action being requested (e.g., "Approve XYZ action")

  • args (dict) –

    Key-value pairs of arguments needed for the action

HumanInterrupt

Bases: TypedDict

Represents an interrupt triggered by the graph that requires human intervention.

This is passed to the interrupt function when execution is paused for human input.

Attributes:

  • action_request (ActionRequest) –

    The specific action being requested from the human

  • config (HumanInterruptConfig) –

    Configuration defining what actions are allowed

  • description (Optional[str]) –

    Optional detailed description of what input is needed

Example
# Extract a tool call from the state and create an interrupt request
request = HumanInterrupt(
    action_request=ActionRequest(
        action="run_command",  # The action being requested
        args={"command": "ls", "args": ["-l"]}  # Arguments for the action
    ),
    config=HumanInterruptConfig(
        allow_ignore=True,    # Allow skipping this step
        allow_respond=True,   # Allow text feedback
        allow_edit=False,     # Don't allow editing
        allow_accept=True     # Allow direct acceptance
    ),
    description="Please review the command before execution"
)
# Send the interrupt request and get the response
response = interrupt([request])[0]

HumanResponse

Bases: TypedDict

The response provided by a human to an interrupt, which is returned when graph execution resumes.

Attributes:

  • type (Literal['accept', 'ignore', 'response', 'edit']) –

    The type of response:

    • "accept": Approves the current state without changes
    • "ignore": Skips/ignores the current step
    • "response": Provides text feedback or instructions
    • "edit": Modifies the current state/content

  • arg (Union[None, str, ActionRequest]) –

    The response payload:

    • None: For ignore/accept actions
    • str: For text responses
    • ActionRequest: For edit actions with updated content
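
Example
A sketch of resuming a paused graph with such a response (graph and config are assumed from earlier setup; the resume payload mirrors this TypedDict):

from langgraph.types import Command

# Resume the paused graph; the list mirrors the single request
# sent via interrupt([request]) in the HumanInterrupt example above
graph.invoke(Command(resume=[{"type": "accept", "arg": None}]), config=config)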
