Prebuilt¶
create_react_agent(model: LanguageModelLike, tools: Union[ToolExecutor, Sequence[BaseTool], ToolNode], *, state_schema: Optional[StateSchemaType] = None, messages_modifier: Optional[MessagesModifier] = None, state_modifier: Optional[StateModifier] = None, checkpointer: Optional[Checkpointer] = None, store: Optional[BaseStore] = None, interrupt_before: Optional[list[str]] = None, interrupt_after: Optional[list[str]] = None, debug: bool = False) -> CompiledGraph
¶
Creates a graph that works with a chat model that utilizes tool calling.
Parameters:
- model (LanguageModelLike) – The LangChain chat model that supports tool calling.
- tools (Union[ToolExecutor, Sequence[BaseTool], ToolNode]) – A list of tools, a ToolExecutor, or a ToolNode instance.
- state_schema (Optional[StateSchemaType], default: None) – An optional state schema that defines graph state. Must have messages and is_last_step keys. Defaults to AgentState, which defines those two keys.
- messages_modifier (Optional[MessagesModifier], default: None) – An optional messages modifier. This applies to messages BEFORE they are passed into the LLM. Can take a few different forms:
    - SystemMessage: this is added to the beginning of the list of messages.
    - str: this is converted to a SystemMessage and added to the beginning of the list of messages.
    - Callable: this function should take in a list of messages and its output is then passed to the language model.
    - Runnable: this runnable should take in a list of messages and its output is then passed to the language model.
  Warning: the messages_modifier parameter is deprecated as of version 0.1.9 and will be removed in 0.2.0. Use state_modifier instead.
- state_modifier (Optional[StateModifier], default: None) – An optional state modifier. This takes the full graph state BEFORE the LLM is called and prepares the input to the LLM. Can take a few different forms:
    - SystemMessage: this is added to the beginning of the list of messages in state["messages"].
    - str: this is converted to a SystemMessage and added to the beginning of the list of messages in state["messages"].
    - Callable: this function should take in the full graph state and its output is then passed to the language model.
    - Runnable: this runnable should take in the full graph state and its output is then passed to the language model.
- checkpointer (Optional[Checkpointer], default: None) – An optional checkpoint saver object. This is used for persisting the state of the graph (e.g., as chat memory) for a single thread (e.g., a single conversation).
- store (Optional[BaseStore], default: None) – An optional store object. This is used for persisting data across multiple threads (e.g., multiple conversations / users).
- interrupt_before (Optional[list[str]], default: None) – An optional list of node names to interrupt before. Should be one of the following: "agent", "tools". This is useful if you want to add a user confirmation or other interrupt before taking an action.
- interrupt_after (Optional[list[str]], default: None) – An optional list of node names to interrupt after. Should be one of the following: "agent", "tools". This is useful if you want to return directly or run additional processing on an output.
- debug (bool, default: False) – A flag indicating whether to enable debug mode.
Returns:
- CompiledGraph – A compiled LangChain runnable that can be used for chat interactions.
The resulting graph looks like this:
stateDiagram-v2
[*] --> Start
Start --> Agent
Agent --> Tools : continue
Tools --> Agent
Agent --> End : end
End --> [*]
classDef startClass fill:#ffdfba;
classDef endClass fill:#baffc9;
classDef otherClass fill:#fad7de;
class Start startClass
class End endClass
class Agent,Tools otherClass
The "agent" node calls the language model with the messages list (after applying the messages modifier).
If the resulting AIMessage contains tool_calls
, the graph will then call the "tools".
The "tools" node executes the tools (1 tool per tool_call
) and adds the responses to the messages list
as ToolMessage
objects. The agent node then calls the language model again.
The process repeats until no more tool_calls
are present in the response.
The agent then returns the full list of messages as a dictionary containing the key "messages".
sequenceDiagram
participant U as User
participant A as Agent (LLM)
participant T as Tools
U->>A: Initial input
Note over A: Messages modifier + LLM
loop while tool_calls present
A->>T: Execute tools
T-->>A: ToolMessage for each tool_call
end
A->>U: Return final state
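Conceptually, the compiled graph behaves like the following loop. This is a minimal sketch of the idea only, not the actual implementation (the real graph adds state management, modifiers, interrupts, checkpointing, and streaming on top):
from langchain_core.messages import ToolMessage

def run_agent(model, tools, messages):
    # model: a chat model that supports tool calling; tools: LangChain tools
    tools_by_name = {tool.name: tool for tool in tools}
    model_with_tools = model.bind_tools(tools)
    while True:
        ai_message = model_with_tools.invoke(messages)  # the "agent" node
        messages.append(ai_message)
        if not ai_message.tool_calls:  # no tool calls -> end
            return {"messages": messages}
        for tool_call in ai_message.tool_calls:  # the "tools" node
            tool = tools_by_name[tool_call["name"]]
            observation = tool.invoke(tool_call["args"])
            messages.append(ToolMessage(content=str(observation), tool_call_id=tool_call["id"]))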
Examples:
Use with a simple tool:
>>> from datetime import datetime
>>> from langchain_openai import ChatOpenAI
>>> from langgraph.prebuilt import create_react_agent
...
>>> def check_weather(location: str, at_time: datetime | None = None) -> str:
...     '''Return the weather forecast for the specified location.'''
...     return f"It's always sunny in {location}"
>>>
>>> tools = [check_weather]
>>> model = ChatOpenAI(model="gpt-4o")
>>> graph = create_react_agent(model, tools=tools)
>>> inputs = {"messages": [("user", "what is the weather in sf")]}
>>> for s in graph.stream(inputs, stream_mode="values"):
... message = s["messages"][-1]
... if isinstance(message, tuple):
... print(message)
... else:
... message.pretty_print()
('user', 'what is the weather in sf')
================================== Ai Message ==================================
Tool Calls:
check_weather (call_LUzFvKJRuaWQPeXvBOzwhQOu)
Call ID: call_LUzFvKJRuaWQPeXvBOzwhQOu
Args:
location: San Francisco
================================= Tool Message =================================
Name: check_weather
It's always sunny in San Francisco
================================== Ai Message ==================================
The weather in San Francisco is sunny.
>>> system_prompt = "You are a helpful bot named Fred."
>>> graph = create_react_agent(model, tools, state_modifier=system_prompt)
>>> inputs = {"messages": [("user", "What's your name? And what's the weather in SF?")]}
>>> for s in graph.stream(inputs, stream_mode="values"):
... message = s["messages"][-1]
... if isinstance(message, tuple):
... print(message)
... else:
... message.pretty_print()
('user', "What's your name? And what's the weather in SF?")
================================== Ai Message ==================================
Hi, my name is Fred. Let me check the weather in San Francisco for you.
Tool Calls:
check_weather (call_lqhj4O0hXYkW9eknB4S41EXk)
Call ID: call_lqhj4O0hXYkW9eknB4S41EXk
Args:
location: San Francisco
================================= Tool Message =================================
Name: check_weather
It's always sunny in San Francisco
================================== Ai Message ==================================
The weather in San Francisco is currently sunny. If you need any more details or have other questions, feel free to ask!
Add a more complex prompt for the LLM:
>>> from langchain_core.prompts import ChatPromptTemplate
>>> prompt = ChatPromptTemplate.from_messages([
... ("system", "You are a helpful bot named Fred."),
... ("placeholder", "{messages}"),
... ("user", "Remember, always be polite!"),
... ])
>>> from langgraph.prebuilt.chat_agent_executor import AgentState
>>> def format_for_model(state: AgentState):
... # You can do more complex modifications here
... return prompt.invoke({"messages": state["messages"]})
>>>
>>> graph = create_react_agent(model, tools, state_modifier=format_for_model)
>>> inputs = {"messages": [("user", "What's your name? And what's the weather in SF?")]}
>>> for s in graph.stream(inputs, stream_mode="values"):
... message = s["messages"][-1]
... if isinstance(message, tuple):
... print(message)
... else:
... message.pretty_print()
Add complex prompt with custom graph state:
>>> from typing import Annotated, TypedDict
>>> from langchain_core.messages import BaseMessage
>>> from langgraph.graph.message import add_messages
>>> prompt = ChatPromptTemplate.from_messages(
... [
... ("system", "Today is {today}"),
... ("placeholder", "{messages}"),
... ]
... )
>>>
>>> class CustomState(TypedDict):
... today: str
... messages: Annotated[list[BaseMessage], add_messages]
... is_last_step: str
>>>
>>> graph = create_react_agent(model, tools, state_schema=CustomState, state_modifier=prompt)
>>> inputs = {"messages": [("user", "What's today's date? And what's the weather in SF?")], "today": "July 16, 2004"}
>>> for s in graph.stream(inputs, stream_mode="values"):
... message = s["messages"][-1]
... if isinstance(message, tuple):
... print(message)
... else:
... message.pretty_print()
Add thread-level "chat memory" to the graph:
>>> from langgraph.checkpoint.memory import MemorySaver
>>> graph = create_react_agent(model, tools, checkpointer=MemorySaver())
>>> config = {"configurable": {"thread_id": "thread-1"}}
>>> def print_stream(graph, inputs, config):
... for s in graph.stream(inputs, config, stream_mode="values"):
... message = s["messages"][-1]
... if isinstance(message, tuple):
... print(message)
... else:
... message.pretty_print()
>>> inputs = {"messages": [("user", "What's the weather in SF?")]}
>>> print_stream(graph, inputs, config)
>>> inputs2 = {"messages": [("user", "Cool, so then should i go biking today?")]}
>>> print_stream(graph, inputs2, config)
('user', "What's the weather in SF?")
================================== Ai Message ==================================
Tool Calls:
check_weather (call_ChndaktJxpr6EMPEB5JfOFYc)
Call ID: call_ChndaktJxpr6EMPEB5JfOFYc
Args:
location: San Francisco
================================= Tool Message =================================
Name: check_weather
It's always sunny in San Francisco
================================== Ai Message ==================================
The weather in San Francisco is sunny. Enjoy your day!
================================ Human Message =================================
Cool, so then should i go biking today?
================================== Ai Message ==================================
Since the weather in San Francisco is sunny, it sounds like a great day for biking! Enjoy your ride!
Add an interrupt to let the user confirm before taking an action:
>>> graph = create_react_agent(
...     model, tools, interrupt_before=["tools"], checkpointer=MemorySaver()
... )
>>> config = {"configurable": {"thread_id": "thread-1"}}
>>> inputs = {"messages": [("user", "What's the weather in SF?")]}
>>> print_stream(graph, inputs, config)
>>> snapshot = graph.get_state(config)
>>> print("Next step: ", snapshot.next)
>>> print_stream(graph, None, config)
Add cross-thread memory to the graph:
>>> from typing_extensions import Annotated
>>> from langchain_core.runnables import RunnableConfig
>>> from langgraph.prebuilt import InjectedStore
>>> from langgraph.store.base import BaseStore
>>> def save_memory(memory: str, *, config: RunnableConfig, store: Annotated[BaseStore, InjectedStore()]) -> str:
... '''Save the given memory for the current user.'''
... # This is a **tool** the model can use to save memories to storage
... user_id = config.get("configurable", {}).get("user_id")
... namespace = ("memories", user_id)
... store.put(namespace, f"memory_{len(store.search(namespace))}", {"data": memory})
... return f"Saved memory: {memory}"
>>> def prepare_model_inputs(state: AgentState, config: RunnableConfig, store: BaseStore):
... # Retrieve user memories and add them to the system message
... # This function is called **every time** the model is prompted. It converts the state to a prompt
... user_id = config.get("configurable", {}).get("user_id")
... namespace = ("memories", user_id)
... memories = [m.value["data"] for m in store.search(namespace)]
... system_msg = f"User memories: {', '.join(memories)}"
... return [{"role": "system", "content": system_msg)] + state["messages"]
>>> from langgraph.checkpoint.memory import MemorySaver
>>> from langgraph.store.memory import InMemoryStore
>>> store = InMemoryStore()
>>> graph = create_react_agent(model, [save_memory], state_modifier=prepare_model_inputs, store=store, checkpointer=MemorySaver())
>>> config = {"configurable": {"thread_id": "thread-1", "user_id": "1"}}
>>> inputs = {"messages": [("user", "Hey I'm Will, how's it going?")]}
>>> print_stream(graph, inputs, config)
('user', "Hey I'm Will, how's it going?")
================================== Ai Message ==================================
Hello Will! It's nice to meet you. I'm doing well, thank you for asking. How are you doing today?
>>> inputs2 = {"messages": [("user", "I like to bike")]}
>>> print_stream(graph, inputs2, config)
================================ Human Message =================================
I like to bike
================================== Ai Message ==================================
That's great to hear, Will! Biking is an excellent hobby and form of exercise. It's a fun way to stay active and explore your surroundings. Do you have any favorite biking routes or trails you enjoy? Or perhaps you're into a specific type of biking, like mountain biking or road cycling?
>>> config = {"configurable": {"thread_id": "thread-2", "user_id": "1"}}
>>> inputs3 = {"messages": [("user", "Hi there! Remember me?")]}
>>> print_stream(graph, inputs3, config)
================================ Human Message =================================
Hi there! Remember me?
================================== Ai Message ==================================
User memories:
Hello! Of course, I remember you, Will! You mentioned earlier that you like to bike. It's great to hear from you again. How have you been? Have you been on any interesting bike rides lately?
Add a timeout for a given step:
>>> import time
...
>>> def check_weather(location: str, at_time: datetime | None = None) -> str:
...     '''Return the weather forecast for the specified location.'''
...     time.sleep(2)
...     return f"It's always sunny in {location}"
>>>
>>> tools = [check_weather]
>>> graph = create_react_agent(model, tools)
>>> graph.step_timeout = 1 # Seconds
>>> for s in graph.stream({"messages": [("user", "what is the weather in sf")]}):
... print(s)
TimeoutError: Timed out at step 2
Source code in libs/langgraph/langgraph/prebuilt/chat_agent_executor.py
ToolNode¶
Bases: RunnableCallable
A node that runs the tools called in the last AIMessage.
It can be used in a StateGraph with a "messages" state key (or a custom key passed via ToolNode's messages_key). If multiple tool calls are requested, they will be run in parallel. The output will be a list of ToolMessages, one for each tool call.
Parameters:
- tools (Sequence[Union[BaseTool, Callable]]) – A sequence of tools that can be invoked by the ToolNode.
- name (str, default: 'tools') – The name of the ToolNode in the graph. Defaults to "tools".
- tags (Optional[list[str]], default: None) – Optional tags to associate with the node. Defaults to None.
- handle_tool_errors (Union[bool, str, Callable[..., str], tuple[type[Exception], ...]], default: True) – How to handle tool errors raised by tools inside the node (see the sketch after this list). Defaults to True. Must be one of the following:
    - True: all errors will be caught and a ToolMessage with a default error message (TOOL_CALL_ERROR_TEMPLATE) will be returned.
    - str: all errors will be caught and a ToolMessage with the string value of handle_tool_errors will be returned.
    - tuple[type[Exception], ...]: exceptions in the tuple will be caught and a ToolMessage with a default error message (TOOL_CALL_ERROR_TEMPLATE) will be returned.
    - Callable[..., str]: exceptions matching the callable's signature will be caught and a ToolMessage with the string value of the result of the handle_tool_errors callable will be returned.
    - False: none of the errors raised by the tools will be caught.
- messages_key (str, default: 'messages') – The state key in the input that contains the list of messages. The same key will be used for the output from the ToolNode. Defaults to "messages".
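For example, error handling can be customized with a callable; a minimal sketch (the failing flaky_search tool and the wording of the error reply are hypothetical):
from langchain_core.messages import AIMessage
from langchain_core.tools import tool
from langgraph.prebuilt import ToolNode

@tool
def flaky_search(query: str) -> str:
    '''Search for a query.'''
    # Hypothetical tool that always fails, to demonstrate error handling.
    raise ValueError(f"search backend unavailable for {query!r}")

def explain_error(e: ValueError) -> str:
    # The callable's signature determines which exception types are caught.
    return f"flaky_search failed ({e}). Try different arguments or answer without it."

node = ToolNode([flaky_search], handle_tool_errors=explain_error)
tool_call = {"name": "flaky_search", "args": {"query": "weather"}, "id": "1", "type": "tool_call"}
result = node.invoke({"messages": [AIMessage("", tool_calls=[tool_call])]})
# result["messages"][0] is a ToolMessage carrying the string from explain_error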
The ToolNode is roughly analogous to:
tools_by_name = {tool.name: tool for tool in tools}
def tool_node(state: dict):
result = []
for tool_call in state["messages"][-1].tool_calls:
tool = tools_by_name[tool_call["name"]]
observation = tool.invoke(tool_call["args"])
result.append(ToolMessage(content=observation, tool_call_id=tool_call["id"]))
return {"messages": result}
Important
- The state MUST contain a list of messages.
- The last message MUST be an AIMessage.
- The AIMessage MUST have tool_calls populated.
A minimal invocation that satisfies these requirements is shown below.
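A minimal standalone invocation satisfying all three requirements (the get_weather tool is hypothetical):
from langchain_core.messages import AIMessage
from langchain_core.tools import tool
from langgraph.prebuilt import ToolNode

@tool
def get_weather(location: str) -> str:
    '''Get the weather at a location.'''
    return f"It's always sunny in {location}"

node = ToolNode([get_weather])
tool_call = {"name": "get_weather", "args": {"location": "sf"}, "id": "1", "type": "tool_call"}
state = {"messages": [AIMessage("", tool_calls=[tool_call])]}
node.invoke(state)  # -> {"messages": [ToolMessage(content="It's always sunny in sf", ...)]}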
Source code in libs/langgraph/langgraph/prebuilt/tool_node.py
137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 |
|
InjectedState¶
Bases: InjectedToolArg
Annotation for a Tool arg that is meant to be populated with the graph state.
Any Tool argument annotated with InjectedState will be hidden from a tool-calling model, so that the model doesn't attempt to generate the argument. If using ToolNode, the appropriate graph state field will be automatically injected into the model-generated tool args.
Parameters:
- field (Optional[str], default: None) – The key from state to insert. If None, the entire state is expected to be passed in.
Example
from typing import List
from typing_extensions import Annotated, TypedDict
from langchain_core.messages import BaseMessage, AIMessage
from langchain_core.tools import tool
from langgraph.prebuilt import InjectedState, ToolNode
class AgentState(TypedDict):
messages: List[BaseMessage]
foo: str
@tool
def state_tool(x: int, state: Annotated[dict, InjectedState]) -> str:
'''Do something with state.'''
if len(state["messages"]) > 2:
return state["foo"] + str(x)
else:
return "not enough messages"
@tool
def foo_tool(x: int, foo: Annotated[str, InjectedState("foo")]) -> str:
'''Do something else with state.'''
return foo + str(x + 1)
node = ToolNode([state_tool, foo_tool])
tool_call1 = {"name": "state_tool", "args": {"x": 1}, "id": "1", "type": "tool_call"}
tool_call2 = {"name": "foo_tool", "args": {"x": 1}, "id": "2", "type": "tool_call"}
state = {
"messages": [AIMessage("", tool_calls=[tool_call1, tool_call2])],
"foo": "bar",
}
node.invoke(state)
Source code in libs/langgraph/langgraph/prebuilt/tool_node.py
InjectedStore¶
Bases: InjectedToolArg
Annotation for a Tool arg that is meant to be populated with LangGraph store.
Any Tool argument annotated with InjectedStore will be hidden from a tool-calling model, so that the model doesn't attempt to generate the argument. If using ToolNode, the appropriate store field will be automatically injected into the model-generated tool args. Note: if a graph is compiled with a store object, the store will be automatically propagated to the tools with InjectedStore args when using ToolNode.
Warning
The InjectedStore annotation requires langchain-core >= 0.3.8.
Example
from typing import Any
from typing_extensions import Annotated
from langchain_core.messages import AIMessage
from langchain_core.tools import tool
from langgraph.store.memory import InMemoryStore
from langgraph.prebuilt import InjectedStore, ToolNode
store = InMemoryStore()
store.put(("values",), "foo", {"bar": 2})
@tool
def store_tool(x: int, my_store: Annotated[Any, InjectedStore()]) -> str:
    '''Do something with store.'''
    stored_value = my_store.get(("values",), "foo").value["bar"]
    return str(stored_value + x)
node = ToolNode([store_tool])
tool_call = {"name": "store_tool", "args": {"x": 1}, "id": "1", "type": "tool_call"}
state = {
"messages": [AIMessage("", tool_calls=[tool_call])],
}
node.invoke(state, store=store)
Source code in libs/langgraph/langgraph/prebuilt/tool_node.py
tools_condition(state: Union[list[AnyMessage], dict[str, Any], BaseModel], messages_key: str = 'messages') -> Literal['tools', '__end__']
¶
Use in the conditional_edge to route to the ToolNode if the last message
has tool calls. Otherwise, route to the end.
Parameters:
- state (Union[list[AnyMessage], dict[str, Any], BaseModel]) – The state to check for tool calls. Must have a list of messages (MessageGraph) or have the "messages" key (StateGraph).
Returns:
- Literal['tools', '__end__'] – The next node to route to.
Examples:
Create a custom ReAct-style agent with tools.
>>> from langchain_anthropic import ChatAnthropic
>>> from langchain_core.tools import tool
...
>>> from langgraph.graph import StateGraph
>>> from langgraph.prebuilt import ToolNode, tools_condition
>>> from langgraph.graph.message import add_messages
...
>>> from typing import TypedDict, Annotated
...
>>> @tool
... def divide(a: float, b: float) -> float:
...     """Return a / b."""
...     return a / b
...
>>> llm = ChatAnthropic(model="claude-3-haiku-20240307")
>>> tools = [divide]
...
>>> class State(TypedDict):
... messages: Annotated[list, add_messages]
>>>
>>> graph_builder = StateGraph(State)
>>> graph_builder.add_node("tools", ToolNode(tools))
>>> graph_builder.add_node("chatbot", lambda state: {"messages":llm.bind_tools(tools).invoke(state['messages'])})
>>> graph_builder.add_edge("tools", "chatbot")
>>> graph_builder.add_conditional_edges(
... "chatbot", tools_condition
... )
>>> graph_builder.set_entry_point("chatbot")
>>> graph = graph_builder.compile()
>>> graph.invoke({"messages": {"role": "user", "content": "What's 329993 divided by 13662?"}})
Source code in libs/langgraph/langgraph/prebuilt/tool_node.py
This module provides a ValidationNode class that can be used to validate tool calls in a LangGraph graph. It applies a pydantic schema to the tool_calls in the model's outputs and returns a ToolMessage with the validated content. If the schema is not valid, it returns a ToolMessage with the error message. The ValidationNode can be used in a StateGraph with a "messages" key or in a MessageGraph. If multiple tool calls are requested, they will be run in parallel.
ValidationNode¶
Bases: RunnableCallable
A node that validates all tools requests from the last AIMessage.
It can be used either in StateGraph with a "messages" key or in MessageGraph.
Note
This node does not actually run the tools, it only validates the tool calls, which is useful for extraction and other use cases where you need to generate structured output that conforms to a complex schema without losing the original messages and tool IDs (for use in multi-turn conversations).
Parameters:
- schemas (Sequence[Union[BaseTool, Type[BaseModel], Callable]]) – A list of schemas to validate the tool calls with. These can be any of the following:
    - A pydantic BaseModel class
    - A BaseTool instance (the args_schema will be used)
    - A function (a schema will be created from the function signature)
- format_error (Optional[Callable[[BaseException, ToolCall, Type[BaseModel]], str]], default: None) – A function that takes an exception, a ToolCall, and a schema and returns a formatted error string. By default, it returns the exception repr and a message to respond after fixing validation errors. (See the sketch following the example below.)
- name (str, default: 'validation') – The name of the node.
- tags (Optional[list[str]], default: None) – A list of tags to add to the node.
Returns:
- Union[Dict[str, List[ToolMessage]], Sequence[ToolMessage]] – A list of ToolMessages with the validated content or error messages.
Examples:
Example usage for re-prompting the model to generate a valid response:
>>> from typing import Literal, Annotated, TypedDict
...
>>> from langchain_anthropic import ChatAnthropic
>>> from pydantic import BaseModel, validator
...
>>> from langgraph.graph import END, START, StateGraph
>>> from langgraph.prebuilt import ValidationNode
>>> from langgraph.graph.message import add_messages
...
...
>>> class SelectNumber(BaseModel):
... a: int
...
... @validator("a")
... def a_must_be_meaningful(cls, v):
... if v != 37:
... raise ValueError("Only 37 is allowed")
... return v
...
...
>>> class State(TypedDict):
... messages: Annotated[list, add_messages]
...
>>> builder = StateGraph(State)
>>> llm = ChatAnthropic(model="claude-3-haiku-20240307").bind_tools([SelectNumber])
>>> builder.add_node("model", llm)
>>> builder.add_node("validation", ValidationNode([SelectNumber]))
>>> builder.add_edge(START, "model")
...
...
>>> def should_validate(state: State) -> Literal["validation", "__end__"]:
...     if state["messages"][-1].tool_calls:
...         return "validation"
...     return END
...
...
>>> builder.add_conditional_edges("model", should_validate)
...
...
>>> def should_reprompt(state: State) -> Literal["model", "__end__"]:
...     for msg in state["messages"][::-1]:
...         # None of the tool calls were errors
...         if msg.type == "ai":
...             return END
...         if msg.additional_kwargs.get("is_error"):
...             return "model"
...     return END
...
...
>>> builder.add_conditional_edges("validation", should_reprompt)
...
...
>>> graph = builder.compile()
>>> res = graph.invoke({"messages": [("user", "Select a number, any number")]})
>>> # Show the retry logic
>>> for msg in res["messages"]:
...     msg.pretty_print()
================================ Human Message =================================
Select a number, any number
================================== Ai Message ==================================
[{'id': 'toolu_01JSjT9Pq8hGmTgmMPc6KnvM', 'input': {'a': 42}, 'name': 'SelectNumber', 'type': 'tool_use'}]
Tool Calls:
SelectNumber (toolu_01JSjT9Pq8hGmTgmMPc6KnvM)
Call ID: toolu_01JSjT9Pq8hGmTgmMPc6KnvM
Args:
a: 42
================================= Tool Message =================================
Name: SelectNumber
ValidationError(model='SelectNumber', errors=[{'loc': ('a',), 'msg': 'Only 37 is allowed', 'type': 'value_error'}])
Respond after fixing all validation errors.
================================== Ai Message ==================================
[{'id': 'toolu_01PkxSVxNxc5wqwCPW1FiSmV', 'input': {'a': 37}, 'name': 'SelectNumber', 'type': 'tool_use'}]
Tool Calls:
SelectNumber (toolu_01PkxSVxNxc5wqwCPW1FiSmV)
Call ID: toolu_01PkxSVxNxc5wqwCPW1FiSmV
Args:
a: 37
================================= Tool Message =================================
Name: SelectNumber
{"a": 37}
Source code in libs/langgraph/langgraph/prebuilt/tool_validator.py