Quick Start¶
In this comprehensive quick start, we will build a support chatbot in LangGraph that can:
- Answer common questions by searching the web
- Maintain conversation state across calls
- Route complex queries to a human for review
- Use custom state to control its behavior
- Rewind and explore alternative conversation paths
We'll start with a basic chatbot and progressively add more sophisticated capabilities, introducing key LangGraph concepts along the way.
Setup¶
First, install the required packages:
%%capture --no-stderr
%pip install -U langgraph langsmith
# Used for this tutorial; not a requirement for LangGraph
%pip install -U langchain_anthropic
Next, set your API keys:
import getpass
import os
def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")
_set_env("ANTHROPIC_API_KEY")
Part 1: Build a Basic Chatbot¶
We'll first create a simple chatbot using LangGraph. This chatbot will respond directly to user messages. Though simple, it will illustrate the core concepts of building with LangGraph. By the end of this section, you will have built a rudimentary chatbot.
Start by creating a StateGraph. A StateGraph object defines the structure of our chatbot as a "state machine". We'll add nodes to represent the LLM and functions our chatbot can call, and edges to specify how the bot should transition between these functions.
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
class State(TypedDict):
    # Messages have the type "list". The `add_messages` function
    # in the annotation defines how this state key should be updated
    # (in this case, it appends messages to the list, rather than overwriting them)
    messages: Annotated[list, add_messages]
graph_builder = StateGraph(State)
Note
The first thing you do when you define a graph is define the State of the graph. The State consists of the schema of the graph as well as reducer functions which specify how to apply updates to the state. In our example, State is a TypedDict with a single key: messages. The messages key is annotated with the add_messages reducer function, which tells LangGraph to append new messages to the existing list, rather than overwriting it. State keys without an annotation will be overwritten by each update, storing the most recent value. Check out this conceptual guide to learn more about state, reducers, and other low-level concepts.
So now our graph knows two things:
- Every node we define will receive the current State as input and return a value that updates that state.
- messages will be appended to the current list, rather than directly overwritten. This is communicated via the prebuilt add_messages function in the Annotated syntax (sketched below).
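To make the reducer concrete, here is a minimal sketch, separate from the graph above, that calls add_messages directly. The message tuples are coerced into message objects by the reducer; exact coercion rules may vary slightly by version:
from langgraph.graph.message import add_messages

existing = [("user", "Hi there!")]
new = [("assistant", "Hello! How can I help?")]

# The reducer appends the update to the existing list rather than replacing it.
merged = add_messages(existing, new)
print(len(merged))  # 2 -- appended, not overwritten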
Next, add a "chatbot
" node. Nodes represent units of work. They are typically regular python functions.
from langchain_anthropic import ChatAnthropic
llm = ChatAnthropic(model="claude-3-haiku-20240307")
def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}
# The first argument is the unique node name
# The second argument is the function or object that will be called whenever
# the node is used.
graph_builder.add_node("chatbot", chatbot)
Notice how the chatbot node function takes the current State as input and returns a dictionary containing an updated messages list under the key "messages". This is the basic pattern for all LangGraph node functions.
The add_messages function in our State will append the LLM's response messages to whatever messages are already in the state.
Next, add an entry point. This tells our graph where to start its work each time we run it.
graph_builder.add_edge(START, "chatbot")
Similarly, set a finish point. This instructs the graph "any time this node is run, you can exit."
graph_builder.add_edge("chatbot", END)
Finally, we'll want to be able to run our graph. To do so, call "compile()" on the graph builder. This creates a "CompiledGraph" we can invoke on our state.
graph = graph_builder.compile()
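Before visualizing or wiring up the chat loop below, you can sanity-check the compiled graph with a single call; a minimal sketch, where invoke runs the graph once to completion and returns the final state:
# One-off invocation of the compiled graph (sketch).
result = graph.invoke({"messages": [("user", "Hello!")]})
print(result["messages"][-1].content)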
You can visualize the graph using the get_graph method and one of the "draw" methods, like draw_ascii or draw_png. The draw methods each require additional dependencies.
from IPython.display import Image, display
try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass
Now let's run the chatbot!
Tip: You can exit the chat loop at any time by typing "quit", "exit", or "q".
while True:
    user_input = input("User: ")
    if user_input.lower() in ["quit", "exit", "q"]:
        print("Goodbye!")
        break
    for event in graph.stream({"messages": [("user", user_input)]}):
        for value in event.values():
            print("Assistant:", value["messages"][-1].content)
User: what's langgraph all about?
Assistant: LangGraph is a new open-source deep learning framework that focuses on enabling efficient training and deployment of large language models. Some key things to know about LangGraph: 1. Efficient Training: LangGraph is designed to accelerate the training of large language models by leveraging advanced optimization techniques and parallelization strategies. 2. Modular Architecture: LangGraph has a modular architecture that allows for easy customization and extension of language models, making it flexible for a variety of NLP tasks. 3. Hardware Acceleration: The framework is optimized for both CPU and GPU hardware, allowing for efficient model deployment on a wide range of devices. 4. Scalability: LangGraph is designed to handle large-scale language models with billions of parameters, enabling the development of state-of-the-art NLP applications. 5. Open-Source: LangGraph is an open-source project, allowing developers and researchers to collaborate, contribute, and build upon the framework. 6. Performance: The goal of LangGraph is to provide superior performance and efficiency compared to existing deep learning frameworks, particularly for training and deploying large language models. Overall, LangGraph is a promising new deep learning framework that aims to address the challenges of building and deploying advanced natural language processing models at scale. It is an active area of research and development, with the potential to drive further advancements in the field of language AI.
User: hm that doesn't seem right...
Assistant: I'm sorry, I don't have enough context to determine what doesn't seem right. Could you please provide more details about what you're referring to? That would help me better understand and respond appropriately.
User: q
Goodbye!
Congratulations! You've built your first chatbot using LangGraph. This bot can engage in basic conversation by taking user input and generating responses using an LLM. You can inspect a LangSmith Trace for the call above at the provided link.
However, you may have noticed that the bot's knowledge is limited to what's in its training data. In the next part, we'll add a web search tool to expand the bot's knowledge and make it more capable.
Below is the full code for this section for your reference:
Full Code
from typing import Annotated

from langchain_anthropic import ChatAnthropic
from typing_extensions import TypedDict

from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages


class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)

llm = ChatAnthropic(model="claude-3-haiku-20240307")


def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}


# The first argument is the unique node name
# The second argument is the function or object that will be called whenever
# the node is used.
graph_builder.add_node("chatbot", chatbot)
graph_builder.set_entry_point("chatbot")
graph_builder.set_finish_point("chatbot")
graph = graph_builder.compile()
Part 2: Enhancing the Chatbot with Tools¶
To handle queries our chatbot can't answer "from memory", we'll integrate a web search tool. Our bot can use this tool to find relevant information and provide better responses.
Requirements¶
Before we start, make sure you have the necessary packages installed and API keys set up:
First, install the requirements to use the Tavily Search Engine, and set your TAVILY_API_KEY.
%%capture --no-stderr
%pip install -U tavily-python
%pip install -U langchain_community
_set_env("TAVILY_API_KEY")
Next, define the tool:
from langchain_community.tools.tavily_search import TavilySearchResults
tool = TavilySearchResults(max_results=2)
tools = [tool]
tool.invoke("What's a 'node' in LangGraph?")
[{'url': 'https://medium.com/@cplog/introduction-to-langgraph-a-beginners-guide-14f9be027141', 'content': 'Nodes: Nodes are the building blocks of your LangGraph. Each node represents a function or a computation step. You define nodes to perform specific tasks, such as processing input, making ...'}, {'url': 'https://js.langchain.com/docs/langgraph', 'content': "Assuming you have done the above Quick Start, you can build off it like:\nHere, we manually define the first tool call that we will make.\nNotice that it does that same thing as agent would have done (adds the agentOutcome key).\n LangGraph\n🦜🕸️LangGraph.js\n⚡ Building language agents as graphs ⚡\nOverview\u200b\nLangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain.js.\n Therefore, we will use an object with one key (messages) with the value as an object: { value: Function, default?: () => any }\nThe default key must be a factory that returns the default value for that attribute.\n Streaming Node Output\u200b\nOne of the benefits of using LangGraph is that it is easy to stream output as it's produced by each node.\n What this means is that only one of the downstream edges will be taken, and which one that is depends on the results of the start node.\n"}]
The results are page summaries our chat bot can use to answer questions.
Next, we'll start defining our graph. The following is all the same as in Part 1, except we have added bind_tools on our LLM. This lets the LLM know the correct JSON format to use if it wants to use our search engine.
from typing import Annotated
from langchain_anthropic import ChatAnthropic
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
class State(TypedDict):
    messages: Annotated[list, add_messages]
graph_builder = StateGraph(State)
llm = ChatAnthropic(model="claude-3-haiku-20240307")
# Modification: tell the LLM which tools it can call
llm_with_tools = llm.bind_tools(tools)
def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}
graph_builder.add_node("chatbot", chatbot)
Next we need to create a function to actually run the tools if they are called. We'll do this by adding the tools to a new node.
Below, implement a BasicToolNode that checks the most recent message in the state and calls tools if the message contains tool_calls. It relies on the LLM's tool_calling support, which is available in Anthropic, OpenAI, Google Gemini, and a number of other LLM providers.
We will later replace this with LangGraph's prebuilt ToolNode to speed things up, but building it ourselves first is instructive.
import json
from langchain_core.messages import ToolMessage
class BasicToolNode:
    """A node that runs the tools requested in the last AIMessage."""

    def __init__(self, tools: list) -> None:
        self.tools_by_name = {tool.name: tool for tool in tools}

    def __call__(self, inputs: dict):
        if messages := inputs.get("messages", []):
            message = messages[-1]
        else:
            raise ValueError("No message found in input")
        outputs = []
        for tool_call in message.tool_calls:
            tool_result = self.tools_by_name[tool_call["name"]].invoke(
                tool_call["args"]
            )
            outputs.append(
                ToolMessage(
                    content=json.dumps(tool_result),
                    name=tool_call["name"],
                    tool_call_id=tool_call["id"],
                )
            )
        return {"messages": outputs}
tool_node = BasicToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
With the tool node added, we can define the conditional_edges.
Recall that edges route the control flow from one node to the next. Conditional edges usually contain "if" statements to route to different nodes depending on the current graph state. These functions receive the current graph state and return a string or list of strings indicating which node(s) to call next.
Below, define a router function called route_tools that checks for tool_calls in the chatbot's output. Provide this function to the graph by calling add_conditional_edges, which tells the graph to check this function whenever the chatbot node completes, to determine where to go next.
The condition will route to tools if tool calls are present and "__end__" if not.
Later, we will replace this with the prebuilt tools_condition to be more concise, but implementing it ourselves first makes things more clear.
from typing import Literal
def route_tools(
    state: State,
) -> Literal["tools", "__end__"]:
    """
    Use in the conditional_edge to route to the ToolNode if the last message
    has tool calls. Otherwise, route to the end.
    """
    if isinstance(state, list):
        ai_message = state[-1]
    elif messages := state.get("messages", []):
        ai_message = messages[-1]
    else:
        raise ValueError(f"No messages found in input state to tool_edge: {state}")
    if hasattr(ai_message, "tool_calls") and len(ai_message.tool_calls) > 0:
        return "tools"
    return "__end__"
# The `route_tools` function returns "tools" if the chatbot asks to use a tool, and "__end__" if
# it is fine directly responding. This conditional routing defines the main agent loop.
graph_builder.add_conditional_edges(
    "chatbot",
    route_tools,
    # The following dictionary lets you tell the graph to interpret the condition's outputs as a specific node.
    # It defaults to the identity function, but if you
    # want to use a node named something other than "tools",
    # you can update the value of the dictionary,
    # e.g., "tools": "my_tools"
    {"tools": "tools", "__end__": "__end__"},
)
# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
graph = graph_builder.compile()
Notice that conditional edges start from a single node. This tells the graph "any time the 'chatbot' node runs, either go to 'tools' if it calls a tool, or end the loop if it responds directly."
Like the prebuilt tools_condition, our function returns the "__end__" string if no tool calls are made. When the graph transitions to __end__, it has no more tasks to complete and ceases execution. Because the condition can return __end__, we don't need to explicitly set a finish_point this time. Our graph already has a way to finish!
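For reference, the prebuilt tools_condition mentioned above wires up the same routing in one line, as the Full Code at the end of this part does:
# Equivalent routing with the prebuilt condition (shown for reference;
# don't add both branches to the same graph builder).
from langgraph.prebuilt import tools_condition

graph_builder.add_conditional_edges("chatbot", tools_condition)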
Let's visualize the graph we've built. Running the following requires some additional dependencies that are unimportant for this tutorial.
from IPython.display import Image, display
try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass
Now we can ask the bot questions outside its training data.
from langchain_core.messages import BaseMessage
while True:
    user_input = input("User: ")
    if user_input.lower() in ["quit", "exit", "q"]:
        print("Goodbye!")
        break
    for event in graph.stream({"messages": [("user", user_input)]}):
        for value in event.values():
            if isinstance(value["messages"][-1], BaseMessage):
                print("Assistant:", value["messages"][-1].content)
User: what's langgraph all about?
Assistant: [{'id': 'toolu_01L1TABSBXsHPsebWiMPNqf1', 'input': {'query': 'langgraph'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}] Assistant: [{"url": "https://langchain-ai.github.io/langgraph/", "content": "LangGraph is framework agnostic (each node is a regular python function). It extends the core Runnable API (shared interface for streaming, async, and batch calls) to make it easy to: Seamless state management across multiple turns of conversation or tool usage. The ability to flexibly route between nodes based on dynamic criteria."}, {"url": "https://blog.langchain.dev/langgraph-multi-agent-workflows/", "content": "As a part of the launch, we highlighted two simple runtimes: one that is the equivalent of the AgentExecutor in langchain, and a second that was a version of that aimed at message passing and chat models.\n It's important to note that these three examples are only a few of the possible examples we could highlight - there are almost assuredly other examples out there and we look forward to seeing what the community comes up with!\n LangGraph: Multi-Agent Workflows\nLinks\nLast week we highlighted LangGraph - a new package (available in both Python and JS) to better enable creation of LLM workflows containing cycles, which are a critical component of most agent runtimes. \"\nAnother key difference between Autogen and LangGraph is that LangGraph is fully integrated into the LangChain ecosystem, meaning you take fully advantage of all the LangChain integrations and LangSmith observability.\n As part of this launch, we're also excited to highlight a few applications built on top of LangGraph that utilize the concept of multiple agents.\n"}] Assistant: Based on the search results, LangGraph is a framework-agnostic Python and JavaScript library that extends the core Runnable API from the LangChain project to enable the creation of more complex workflows involving multiple agents or components. Some key things about LangGraph: - It makes it easier to manage state across multiple turns of conversation or tool usage, and to dynamically route between different nodes/components based on criteria. - It is integrated with the LangChain ecosystem, allowing you to take advantage of LangChain integrations and observability features. - It enables the creation of multi-agent workflows, where different components or agents can be chained together in more flexible and complex ways than the standard LangChain AgentExecutor. - The core idea is to provide a more powerful and flexible framework for building LLM-powered applications and workflows, beyond what is possible with just the core LangChain tools. Overall, LangGraph seems to be a useful addition to the LangChain toolkit, focused on enabling more advanced, multi-agent style applications and workflows powered by large language models.
User: neat!
Assistant: I'm afraid I don't have enough context to provide a substantive response to "neat!". As an AI assistant, I'm designed to have conversations and provide information to users, but I need more details or a specific question from you in order to give a helpful reply. Could you please rephrase your request or provide some additional context? I'd be happy to assist further once I understand what you're looking for.
User: what?
Assistant: I'm afraid I don't have enough context to provide a meaningful response to "what?". Could you please rephrase your request or provide more details about what you are asking? I'd be happy to try to assist you further once I have a clearer understanding of your query.
User: q
Goodbye!
Congrats! You've created a conversational agent in LangGraph that can use a search engine to retrieve updated information when needed. Now it can handle a wider range of user queries. To inspect all the steps your agent just took, check out this LangSmith trace.
Our chatbot still can't remember past interactions on its own, limiting its ability to have coherent, multi-turn conversations. In the next part, we'll add memory to address this.
The full code for the graph we've created in this section is reproduced below, replacing our BasicToolNode with the prebuilt ToolNode, and our route_tools condition with the prebuilt tools_condition.
Full Code
from typing import Annotated

from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict

from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition


class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)

tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-haiku-20240307")
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}


graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")
graph = graph_builder.compile()
Part 3: Adding Memory to the Chatbot¶
Our chatbot can now use tools to answer user questions, but it doesn't remember the context of previous interactions. This limits its ability to have coherent, multi-turn conversations.
LangGraph solves this problem through persistent checkpointing. If you provide a checkpointer when compiling the graph and a thread_id when calling your graph, LangGraph automatically saves the state after each step. When you invoke the graph again using the same thread_id, the graph loads its saved state, allowing the chatbot to pick up where it left off.
We will see later that checkpointing is much more powerful than simple chat memory - it lets you save and resume complex state at any time for error recovery, human-in-the-loop workflows, time travel interactions, and more. But before we get too ahead of ourselves, let's add checkpointing to enable multi-turn conversations.
To get started, create a MemorySaver checkpointer.
from langgraph.checkpoint.memory import MemorySaver
memory = MemorySaver()
Notice we're using an in-memory checkpointer. This is convenient for our tutorial (it saves everything in memory). In a production application, you would likely change this to use SqliteSaver or PostgresSaver and connect to your own DB.
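As a rough sketch of what that swap might look like (an assumption, not a drop-in recipe: depending on your langgraph version this may require a separate checkpoint package such as langgraph-checkpoint-sqlite, and from_conn_string may behave as a context manager rather than a plain constructor):
# Hypothetical swap to a SQLite-backed checkpointer so state survives restarts.
from langgraph.checkpoint.sqlite import SqliteSaver

memory = SqliteSaver.from_conn_string("checkpoints.sqlite")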
Next define the graph. Now that you've already built your own BasicToolNode, we'll replace it with LangGraph's prebuilt ToolNode and tools_condition, since these do some nice things like parallel API execution. Apart from that, the following is all copied from Part 2.
from typing import Annotated
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
class State(TypedDict):
    messages: Annotated[list, add_messages]
graph_builder = StateGraph(State)
tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-haiku-20240307")
llm_with_tools = llm.bind_tools(tools)
def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}
graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
Finally, compile the graph with the provided checkpointer.
graph = graph_builder.compile(checkpointer=memory)
Notice the connectivity of the graph hasn't changed since Part 2. All we are doing is checkpointing the State as the graph works through each node.
from IPython.display import Image, display
try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass
Now you can interact with your bot! First, pick a thread to use as the key for this conversation.
config = {"configurable": {"thread_id": "1"}}
Next, call your chat bot.
user_input = "Hi there! My name is Will."
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [("user", user_input)]}, config, stream_mode="values"
)
for event in events:
    event["messages"][-1].pretty_print()
================================ Human Message ================================= Hi there! My name is Will. ================================== Ai Message ================================== It's nice to meet you, Will! I'm an AI assistant created by Anthropic. I'm here to help you with any questions or tasks you may have. Please let me know how I can assist you today.
Note: The config was provided as the second positional argument when calling our graph. It importantly is not nested within the graph inputs ({'messages': []}).
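The same holds for invoke; a minimal sketch:
# The config rides alongside the input, not inside it.
result = graph.invoke({"messages": [("user", "Hi again!")]}, config)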
Let's ask a followup: see if it remembers your name.
user_input = "Remember my name?"
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [("user", user_input)]}, config, stream_mode="values"
)
for event in events:
    event["messages"][-1].pretty_print()
================================ Human Message ================================= Remember my name? ================================== Ai Message ================================== Of course, your name is Will. It's nice to meet you again!
Notice that we aren't using an external list for memory: it's all handled by the checkpointer! You can inspect the full execution in this LangSmith trace to see what's going on.
Don't believe me? Try this using a different config.
# The only difference is we change the `thread_id` here to "2" instead of "1"
events = graph.stream(
    {"messages": [("user", user_input)]},
    {"configurable": {"thread_id": "2"}},
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()
================================ Human Message ================================= Remember my name? ================================== Ai Message ================================== I'm afraid I don't actually have the capability to remember your name. As an AI assistant, I don't have a persistent memory of our previous conversations or interactions. I respond based on the current context provided to me. Could you please restate your name or provide more information so I can try to assist you?
Notice that the only change we've made is to modify the thread_id in the config. See this call's LangSmith trace for comparison.
By now, we have made a few checkpoints across two different threads. But what goes into a checkpoint? To inspect a graph's state for a given config at any time, call get_state(config).
snapshot = graph.get_state(config)
snapshot
StateSnapshot(values={'messages': [HumanMessage(content='Hi there! My name is Will.', id='aad97d7f-8845-4f9e-b723-2af3b7c97590'), AIMessage(content="It's nice to meet you, Will! I'm an AI assistant created by Anthropic. I'm here to help you with any questions or tasks you may have. Please let me know how I can assist you today.", response_metadata={'id': 'msg_01VCz7Y5jVmMZXibBtnECyvJ', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 375, 'output_tokens': 49}}, id='run-66cf1695-5ba8-4fd8-a79d-ded9ee3c3b33-0'), HumanMessage(content='Remember my name?', id='ac1e9971-dbee-4622-9e63-5015dee05c20'), AIMessage(content="Of course, your name is Will. It's nice to meet you again!", response_metadata={'id': 'msg_01RsJ6GaQth7r9soxbF7TSpQ', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 431, 'output_tokens': 19}}, id='run-890149d3-214f-44e8-9717-57ec4ef68224-0')]}, next=(), config={'configurable': {'thread_id': '1', 'thread_ts': '2024-05-06T22:23:20.430350+00:00'}}, parent_config=None)
snapshot.next # (since the graph ended this turn, `next` is empty. If you fetch a state from within a graph invocation, next tells which node will execute next)
()
The snapshot above contains the current state values, corresponding config, and the next node to process. In our case, the graph has reached an __end__ state, so next is empty.
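You can fetch the snapshot for any thread by passing its config. For example, the second conversation we started above keeps its own independent history:
# Each thread_id has its own checkpointed state.
snapshot_2 = graph.get_state({"configurable": {"thread_id": "2"}})
print(len(snapshot_2.values["messages"]))  # only that thread's messages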
Congratulations! Your chatbot can now maintain conversation state across sessions thanks to LangGraph's checkpointing system. This opens up exciting possibilities for more natural, contextual interactions. LangGraph's checkpointing even handles arbitrarily complex graph states, which is much more expressive and powerful than simple chat memory.
In the next part, we'll introduce human oversight to our bot to handle situations where it may need guidance or verification before proceeding.
Check out the code snippet below to review our graph from this section.
Full Code
from typing import Annotated

from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition


class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)

tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-haiku-20240307")
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}


graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")
memory = MemorySaver()
graph = graph_builder.compile(checkpointer=memory)
Part 4: Human-in-the-loop¶
Agents can be unreliable and may need human input to successfully accomplish tasks. Similarly, for some actions, you may want to require human approval before running to ensure that everything is running as intended.
LangGraph supports human-in-the-loop workflows in a number of ways. In this section, we will use LangGraph's interrupt_before functionality to always break before the tools node.
First, start from our existing code. The following is copied from Part 3.
from typing import Annotated
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from typing_extensions import TypedDict
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
memory = MemorySaver()
class State(TypedDict):
    messages: Annotated[list, add_messages]
graph_builder = StateGraph(State)
tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-haiku-20240307")
llm_with_tools = llm.bind_tools(tools)
def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}
graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
Now, compile the graph, specifying to interrupt_before the tools node.
graph = graph_builder.compile(
    checkpointer=memory,
    # This is new!
    interrupt_before=["tools"],
    # Note: can also interrupt __after__ tools, if desired.
    # interrupt_after=["tools"]
)
user_input = "I'm learning LangGraph. Could you do some research on it for me?"
config = {"configurable": {"thread_id": "1"}}
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [("user", user_input)]}, config, stream_mode="values"
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
================================ Human Message ================================= I'm learning LangGraph. Could you do some research on it for me? ================================== Ai Message ================================== [{'text': "Okay, let's look up some information on LangGraph:", 'type': 'text'}, {'id': 'toolu_01XoHVKTRbipJokQorfifzvh', 'input': {'query': 'LangGraph'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}] Tool Calls: tavily_search_results_json (toolu_01XoHVKTRbipJokQorfifzvh) Call ID: toolu_01XoHVKTRbipJokQorfifzvh Args: query: LangGraph
Let's inspect the graph state to confirm it worked.
snapshot = graph.get_state(config)
snapshot.next
('tools',)
Notice that unlike last time, the "next" node is set to 'tools'. We've interrupted here! Let's check the tool invocation.
existing_message = snapshot.values["messages"][-1]
existing_message.tool_calls
[{'name': 'tavily_search_results_json', 'args': {'query': 'LangGraph'}, 'id': 'toolu_01XoHVKTRbipJokQorfifzvh', 'type': 'tool_call'}]
This query seems reasonable. Nothing to filter here. The simplest thing the human can do is just let the graph continue executing. Let's do that below.
Next, continue the graph! Passing in None will just let the graph continue where it left off, without adding anything new to the state.
# `None` will append nothing new to the current state, letting it resume as if it had never been interrupted
events = graph.stream(None, config, stream_mode="values")
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
================================= Tool Message ================================= Name: tavily_search_results_json [{"url": "https://langchain-ai.github.io/langgraph/tutorials/", "content": "LangGraph is a framework for building language agents as graphs. Learn how to use LangGraph to create chatbots, code assistants, planning agents, reflection agents, and more with these notebooks."}, {"url": "https://github.com/langchain-ai/langgraph", "content": "LangGraph is a library for creating stateful, multi-actor applications with LLMs, using cycles, controllability, and persistence. Learn how to use LangGraph with examples, integration with LangChain, and streaming support."}] ================================== Ai Message ================================== Based on the search results, LangGraph seems to be a framework for building language-based AI agents and applications using language models. It provides a modular, graph-based approach for creating chatbots, code assistants, planning agents, and other language-centric applications. Some key things I learned about LangGraph: - It is designed to make it easier to build stateful, multi-actor applications using large language models (LLMs). - It provides features like cycles, controllability, and persistence to help manage the complexity of these types of applications. - LangGraph can be integrated with the LangChain library, which provides additional tools for building LLM-powered applications. - The framework includes examples and tutorials to help get started with using LangGraph. Overall, LangGraph seems like a promising approach for building more advanced, graph-based language applications on top of large language models. Let me know if you need any other details on LangGraph and how it works!
Review this call's LangSmith trace to see the exact work that was done in the above call. Notice that the state is loaded in the first step so that your chatbot can continue where it left off.
Congrats! You've used an interrupt to add human-in-the-loop execution to your chatbot, allowing for human oversight and intervention when needed. This opens up the potential UIs you can create with your AI systems. Since we have already added a checkpointer, the graph can be paused indefinitely and resumed at any time as if nothing had happened.
Next, we'll explore how to further customize the bot's behavior using custom state updates.
Below is a copy of the code you used in this section. The only difference between this and the previous parts is the addition of the interrupt_before argument.
Full Code
from typing import Annotated

from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition


class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)

tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-haiku-20240307")
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}


graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")
memory = MemorySaver()
graph = graph_builder.compile(
    checkpointer=memory,
    # This is new!
    interrupt_before=["tools"],
    # Note: can also interrupt __after__ actions, if desired.
    # interrupt_after=["tools"]
)
Part 5: Manually Updating the State¶
In the previous section, we showed how to interrupt a graph so that a human could inspect its actions. This lets the human read the state, but if they want to change their agent's course, they'll need to have write access.
Thankfully, LangGraph lets you manually update state! Updating the state lets you control the agent's trajectory by modifying its actions (even modifying the past!). This capability is particularly useful when you want to correct the agent's mistakes, explore alternative paths, or guide the agent towards a specific goal.
We'll show how to update a checkpointed state below. As before, first, define your graph. We'll reuse the exact same graph as before.
from typing import Annotated
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from typing_extensions import TypedDict
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
class State(TypedDict):
    messages: Annotated[list, add_messages]
graph_builder = StateGraph(State)
tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-haiku-20240307")
llm_with_tools = llm.bind_tools(tools)
def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}
graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
memory = MemorySaver()
graph = graph_builder.compile(
    checkpointer=memory,
    # This is new!
    interrupt_before=["tools"],
    # Note: can also interrupt **after** actions, if desired.
    # interrupt_after=["tools"]
)
user_input = "I'm learning LangGraph. Could you do some research on it for me?"
config = {"configurable": {"thread_id": "1"}}
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream({"messages": [("user", user_input)]}, config)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
snapshot = graph.get_state(config)
existing_message = snapshot.values["messages"][-1]
existing_message.pretty_print()
================================== Ai Message ==================================
[{'id': 'toolu_01DTyDpJ1kKdNps5yxv3AGJd', 'input': {'query': 'LangGraph'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]
Tool Calls:
tavily_search_results_json (toolu_01DTyDpJ1kKdNps5yxv3AGJd)
Call ID: toolu_01DTyDpJ1kKdNps5yxv3AGJd
Args:
query: LangGraph
So far, all of this is an exact repeat of the previous section. The LLM just requested to use the search engine tool and our graph was interrupted. If we proceed as before, the tool will be called to search the web.
But what if the user wants to intercede? What if we think the chat bot doesn't need to use the tool?
Let's directly provide the correct response!
from langchain_core.messages import AIMessage, ToolMessage
answer = (
    "LangGraph is a library for building stateful, multi-actor applications with LLMs."
)
new_messages = [
    # The LLM API expects some ToolMessage to match its tool call. We'll satisfy that here.
    ToolMessage(content=answer, tool_call_id=existing_message.tool_calls[0]["id"]),
    # And then directly "put words in the LLM's mouth" by populating its response.
    AIMessage(content=answer),
]
new_messages[-1].pretty_print()
graph.update_state(
    # Which state to update
    config,
    # The updated values to provide. The messages in our `State` are "append-only", meaning this will be appended
    # to the existing state. We will review how to update existing messages in the next section!
    {"messages": new_messages},
)
print("\n\nLast 2 messages;")
print(graph.get_state(config).values["messages"][-2:])
================================== Ai Message ==================================
LangGraph is a library for building stateful, multi-actor applications with LLMs.
Last 2 messages;
[ToolMessage(content='LangGraph is a library for building stateful, multi-actor applications with LLMs.', id='14589ef1-15db-4a75-82a6-d57c40a216d0', tool_call_id='toolu_01DTyDpJ1kKdNps5yxv3AGJd'), AIMessage(content='LangGraph is a library for building stateful, multi-actor applications with LLMs.', id='1c657bfb-7690-44c7-a26d-d0d22453013d')]
Now the graph is complete, since we've provided the final response message! Since state updates simulate a graph step, they even generate corresponding traces. Inspect the LangSmith trace of the update_state call above to see what's going on.
Notice that our new messages are appended to the messages already in the state. Remember how we defined the State type?
class State(TypedDict):
    messages: Annotated[list, add_messages]
We annotated messages with the prebuilt add_messages function. This instructs the graph to always append values to the existing list, rather than overwriting the list directly. The same logic is applied here, so the messages we passed to update_state were appended in the same way!
The update_state function operates as if it were one of the nodes in your graph! By default, the update operation uses the node that was last executed, but you can manually specify it below. Let's add an update and tell the graph to treat it as if it came from the "chatbot".
graph.update_state(
    config,
    {"messages": [AIMessage(content="I'm an AI expert!")]},
    # Which node for this function to act as. It will automatically continue
    # processing as if this node just ran.
    as_node="chatbot",
)
{'configurable': {'thread_id': '1', 'thread_ts': '2024-05-06T22:27:57.350721+00:00'}}
Check out the LangSmith trace for this update call at the provided link. Notice from the trace that the graph continues into the tools_condition edge. We just told the graph to treat the update as_node="chatbot". If we follow the diagram below and start from the chatbot node, we naturally end up in the tools_condition edge and then __end__, since our updated message lacks tool calls.
from IPython.display import Image, display
try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass
Inspect the current state as before to confirm the checkpoint reflects our manual updates.
snapshot = graph.get_state(config)
print(snapshot.values["messages"][-3:])
print(snapshot.next)
[ToolMessage(content='LangGraph is a library for building stateful, multi-actor applications with LLMs.', id='14589ef1-15db-4a75-82a6-d57c40a216d0', tool_call_id='toolu_01DTyDpJ1kKdNps5yxv3AGJd'), AIMessage(content='LangGraph is a library for building stateful, multi-actor applications with LLMs.', id='1c657bfb-7690-44c7-a26d-d0d22453013d'), AIMessage(content="I'm an AI expert!", id='acd668e3-ba31-42c0-843c-00d0994d5885')] ()
Notice that we've continued to add AI messages to the state. Since we are acting as the chatbot and responding with an AIMessage that doesn't contain tool_calls, the graph knows that it has entered a finished state (next is empty).
What if you want to overwrite existing messages?¶
The add_messages function we used to annotate our graph's State above controls how updates are made to the messages key. This function looks at any message IDs in the new messages list. If the ID matches a message in the existing state, add_messages overwrites the existing message with the new content.
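Here is that replacement behavior in isolation; a minimal sketch that calls the reducer directly with made-up IDs:
from langchain_core.messages import AIMessage
from langgraph.graph.message import add_messages

existing = [AIMessage(content="original", id="msg-1")]
# A matching ID causes replacement rather than appending.
merged = add_messages(existing, [AIMessage(content="edited", id="msg-1")])
print(len(merged), merged[0].content)  # 1 edited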
As an example, let's update the tool invocation to make sure we get good results from our search engine! First, start a new thread:
user_input = "I'm learning LangGraph. Could you do some research on it for me?"
config = {"configurable": {"thread_id": "2"}} # we'll use thread_id = 2 here
events = graph.stream(
    {"messages": [("user", user_input)]}, config, stream_mode="values"
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
================================ Human Message ================================= I'm learning LangGraph. Could you do some research on it for me? ================================== Ai Message ================================== [{'id': 'toolu_013MvjoDHnv476ZGzyPFZhrR', 'input': {'query': 'LangGraph'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}] Tool Calls: tavily_search_results_json (toolu_013MvjoDHnv476ZGzyPFZhrR) Call ID: toolu_013MvjoDHnv476ZGzyPFZhrR Args: query: LangGraph
Next, let's update the tool invocation for our agent. Maybe we want to search for human-in-the-loop workflows in particular.
from langchain_core.messages import AIMessage
snapshot = graph.get_state(config)
existing_message = snapshot.values["messages"][-1]
print("Original")
print("Message ID", existing_message.id)
print(existing_message.tool_calls[0])
new_tool_call = existing_message.tool_calls[0].copy()
new_tool_call["args"]["query"] = "LangGraph human-in-the-loop workflow"
new_message = AIMessage(
    content=existing_message.content,
    tool_calls=[new_tool_call],
    # Important! The ID is how LangGraph knows to REPLACE the message in the state rather than APPEND this message
    id=existing_message.id,
)
print("Updated")
print(new_message.tool_calls[0])
print("Message ID", new_message.id)
graph.update_state(config, {"messages": [new_message]})
print("\n\nTool calls")
graph.get_state(config).values["messages"][-1].tool_calls
Original Message ID run-59283969-1076-45fe-bee8-ebfccab163c3-0 {'name': 'tavily_search_results_json', 'args': {'query': 'LangGraph'}, 'id': 'toolu_013MvjoDHnv476ZGzyPFZhrR'} Updated {'name': 'tavily_search_results_json', 'args': {'query': 'LangGraph human-in-the-loop workflow'}, 'id': 'toolu_013MvjoDHnv476ZGzyPFZhrR'} Message ID run-59283969-1076-45fe-bee8-ebfccab163c3-0 Tool calls
[{'name': 'tavily_search_results_json', 'args': {'query': 'LangGraph human-in-the-loop workflow'}, 'id': 'toolu_013MvjoDHnv476ZGzyPFZhrR'}]
Notice that we've modified the AI's tool invocation to search for "LangGraph human-in-the-loop workflow" instead of the simple "LangGraph".
Check out the LangSmith trace to see the state update call - you can see our new message has successfully updated the previous AI message.
Resume the graph by streaming with an input of None and the existing config.
events = graph.stream(None, config, stream_mode="values")
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
================================= Tool Message ================================= Name: tavily_search_results_json [{"url": "https://langchain-ai.github.io/langgraph/how-tos/human-in-the-loop/", "content": "Human-in-the-loop\u00b6 When creating LangGraph agents, it is often nice to add a human in the loop component. This can be helpful when giving them access to tools. ... from langgraph.graph import MessageGraph, END # Define a new graph workflow = MessageGraph # Define the two nodes we will cycle between workflow. add_node (\"agent\", call_model) ..."}, {"url": "https://langchain-ai.github.io/langgraph/how-tos/chat_agent_executor_with_function_calling/human-in-the-loop/", "content": "Human-in-the-loop. In this example we will build a ReAct Agent that has a human in the loop. We will use the human to approve specific actions. This examples builds off the base chat executor. It is highly recommended you learn about that executor before going through this notebook. You can find documentation for that example here."}] ================================== Ai Message ================================== Based on the search results, LangGraph appears to be a framework for building AI agents that can interact with humans in a conversational way. The key points I gathered are: - LangGraph allows for "human-in-the-loop" workflows, where a human can be involved in approving or reviewing actions taken by the AI agent. - This can be useful for giving the AI agent access to various tools and capabilities, with the human able to provide oversight and guidance. - The framework includes components like "MessageGraph" for defining the conversational flow between the agent and human. Overall, LangGraph seems to be a way to create conversational AI agents that can leverage human input and guidance, rather than operating in a fully autonomous way. Let me know if you need any clarification or have additional questions!
Check out the trace to see the tool call and later LLM response. Notice that now the graph queries the search engine using our updated query term - we were able to manually override the LLM's search here!
All of this is reflected in the graph's checkpointed memory, meaning if we continue the conversation, it will recall all the modified state.
events = graph.stream(
    {"messages": [("user", "Remember what I'm learning about?")]},
    config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
================================ Human Message ================================= Remember what I'm learning about? ================================== Ai Message ================================== Ah yes, now I remember - you mentioned earlier that you are learning about LangGraph. LangGraph is the framework I researched in my previous response, which is for building conversational AI agents that can incorporate human input and oversight. So based on our earlier discussion, it seems you are currently learning about and exploring the LangGraph system for creating human-in-the-loop AI agents. Please let me know if I have the right understanding now.
Congratulations! You've used interrupt_before and update_state to manually modify the state as a part of a human-in-the-loop workflow. Interruptions and state modifications let you control how the agent behaves. Combined with persistent checkpointing, it means you can pause an action and resume at any point. Your user doesn't have to be available when the graph interrupts!
The graph code for this section is identical to previous ones. The key snippets to remember are to add .compile(..., interrupt_before=[...]) (or interrupt_after) if you want to explicitly pause the graph whenever it reaches a node. Then you can use update_state to modify the checkpoint and control how the graph should proceed.
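Putting those pieces together, the whole pattern looks roughly like this, where patched_message is a placeholder for whatever message you construct:
# Recap sketch of the interrupt-then-update pattern from this section.
graph = graph_builder.compile(checkpointer=memory, interrupt_before=["tools"])

# ...stream until the graph pauses before "tools", then patch the state:
graph.update_state(config, {"messages": [patched_message]})  # hypothetical message

# Resume from the checkpoint; `None` adds nothing new to the state.
events = graph.stream(None, config, stream_mode="values")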
Part 6: Customizing State¶
So far, we've relied on a simple state (it's just a list of messages!). You can go far with this simple state, but if you want to define complex behavior without relying on the message list, you can add additional fields to the state. In this section, we will extend our chat bot with a new node to illustrate this.
In the examples above, we involved a human deterministically: the graph always interrupted whenever a tool was invoked. Suppose we wanted our chat bot to have the choice of relying on a human.
One way to do this is to create a passthrough "human" node, before which the graph will always stop. We will only execute this node if the LLM invokes a "human" tool. For our convenience, we will include an "ask_human" flag in our graph state that we will flip if the LLM calls this tool.
Below, define this new graph with an updated State:
from typing import Annotated
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from typing_extensions import TypedDict
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
class State(TypedDict):
    messages: Annotated[list, add_messages]
    # This flag is new
    ask_human: bool
Next, define a schema to show the model, letting it decide whether to request assistance.
from langchain_core.pydantic_v1 import BaseModel
class RequestAssistance(BaseModel):
    """Escalate the conversation to an expert. Use this if you are unable to assist directly or if the user requires support beyond your permissions.

    To use this function, relay the user's 'request' so the expert can provide the right guidance.
    """

    request: str
Next, define the chatbot node. The primary modification here is to flip the ask_human flag if we see that the chatbot has invoked the RequestAssistance tool.
tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-haiku-20240307")
# We can bind the llm to a tool definition, a pydantic model, or a json schema
llm_with_tools = llm.bind_tools(tools + [RequestAssistance])
def chatbot(state: State):
    response = llm_with_tools.invoke(state["messages"])
    ask_human = False
    if (
        response.tool_calls
        and response.tool_calls[0]["name"] == RequestAssistance.__name__
    ):
        ask_human = True
    return {"messages": [response], "ask_human": ask_human}
Next, create the graph builder and add the chatbot and tools nodes to the graph, same as before.
graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_node("tools", ToolNode(tools=[tool]))
Next, create the "human" node
. This node
function is mostly a placeholder in our graph that will trigger an interrupt. If the human does not manually update the state during the interrupt
, it inserts a tool message so the LLM knows the user was requested but didn't respond. This node also unsets the ask_human
flag so the graph knows not to revisit the node unless further requests are made.
from langchain_core.messages import AIMessage, ToolMessage
def create_response(response: str, ai_message: AIMessage):
    return ToolMessage(
        content=response,
        tool_call_id=ai_message.tool_calls[0]["id"],
    )


def human_node(state: State):
    new_messages = []
    if not isinstance(state["messages"][-1], ToolMessage):
        # Typically, the user will have updated the state during the interrupt.
        # If they choose not to, we will include a placeholder ToolMessage to
        # let the LLM continue.
        new_messages.append(
            create_response("No response from human.", state["messages"][-1])
        )
    return {
        # Append the new messages
        "messages": new_messages,
        # Unset the flag
        "ask_human": False,
    }
graph_builder.add_node("human", human_node)
Next, define the conditional logic. The select_next_node function will route to the human node if the flag is set. Otherwise, it lets the prebuilt tools_condition function choose the next node.
Recall that the tools_condition function simply checks to see if the chatbot has responded with any tool_calls in its response message. If so, it routes to the tools node. Otherwise, it ends the graph.
def select_next_node(state: State):
    if state["ask_human"]:
        return "human"
    # Otherwise, we can route as before
    return tools_condition(state)


graph_builder.add_conditional_edges(
    "chatbot",
    select_next_node,
    {"human": "human", "tools": "tools", "__end__": "__end__"},
)
Finally, add the simple directed edges and compile the graph. These edges instruct the graph to always flow from node a to node b whenever a finishes executing.
# The rest is the same
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge("human", "chatbot")
graph_builder.add_edge(START, "chatbot")
memory = MemorySaver()
graph = graph_builder.compile(
checkpointer=memory,
# We interrupt before 'human' here instead.
interrupt_before=["human"],
)
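As an aside, interrupt_before accepts any list of node names, so a hypothetical variant of this graph could also pause before tool execution for manual approval:

# Hypothetical variant (not used below): also pause before running tools.
cautious_graph = graph_builder.compile(
    checkpointer=memory,
    interrupt_before=["human", "tools"],
)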
If you have the visualization dependencies installed, you can see the graph structure below:
from IPython.display import Image, display
try:
display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
# This requires some extra dependencies and is optional
pass
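If you'd rather skip the mermaid dependencies, the graph object can also render as ASCII (a hedged alternative; print_ascii requires the grandalf package):

try:
    graph.get_graph().print_ascii()
except Exception:
    # Optional; requires `pip install grandalf`
    pass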
The chat bot can either request help from a human (chatbot->select->human), invoke the search engine tool (chatbot->select->tools), or respond directly (chatbot->select->end). Once an action or request has been made, the graph will transition back to the chatbot node to continue operations.
Let's see this graph in action. We'll request expert assistance to illustrate our graph.
user_input = "I need some expert guidance for building this AI agent. Could you request assistance for me?"
config = {"configurable": {"thread_id": "1"}}
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
{"messages": [("user", user_input)]}, config, stream_mode="values"
)
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
================================ Human Message ================================= I need some expert guidance for building this AI agent. Could you request assistance for me? ================================== Ai Message ================================== [{'id': 'toolu_017XaQuVsoAyfXeTfDyv55Pc', 'input': {'request': 'I need some expert guidance for building this AI agent.'}, 'name': 'RequestAssistance', 'type': 'tool_use'}] Tool Calls: RequestAssistance (toolu_017XaQuVsoAyfXeTfDyv55Pc) Call ID: toolu_017XaQuVsoAyfXeTfDyv55Pc Args: request: I need some expert guidance for building this AI agent.
Notice: the LLM has invoked the "RequestAssistance" tool we provided it, and the interrupt has been set. Let's inspect the graph state to confirm.
snapshot = graph.get_state(config)
snapshot.next
('human',)
The graph state is indeed interrupted before the 'human' node. We can act as the "expert" in this scenario and manually update the state by adding a new ToolMessage with our input.
Next, respond to the chatbot's request by:
- Creating a ToolMessage with our response. This will be passed back to the chatbot.
- Calling update_state to manually update the graph state.
ai_message = snapshot.values["messages"][-1]
human_response = (
"We, the experts are here to help! We'd recommend you check out LangGraph to build your agent."
" It's much more reliable and extensible than simple autonomous agents."
)
tool_message = create_response(human_response, ai_message)
graph.update_state(config, {"messages": [tool_message]})
{'configurable': {'thread_id': '1', 'thread_ts': '2024-05-06T22:31:39.973392+00:00'}}
You can inspect the state to confirm our response was added.
graph.get_state(config).values["messages"]
[HumanMessage(content='I need some expert guidance for building this AI agent. Could you request assistance for me?', id='ab75eb9d-cce7-4e44-8de7-b0b375a86972'), AIMessage(content=[{'id': 'toolu_017XaQuVsoAyfXeTfDyv55Pc', 'input': {'request': 'I need some expert guidance for building this AI agent.'}, 'name': 'RequestAssistance', 'type': 'tool_use'}], response_metadata={'id': 'msg_0199PiK6kmVAbeo1qmephKDq', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 486, 'output_tokens': 63}}, id='run-ff07f108-5055-4343-8910-2fa40ead3fb9-0', tool_calls=[{'name': 'RequestAssistance', 'args': {'request': 'I need some expert guidance for building this AI agent.'}, 'id': 'toolu_017XaQuVsoAyfXeTfDyv55Pc'}]), ToolMessage(content="We, the experts are here to help! We'd recommend you check out LangGraph to build your agent. It's much more reliable and extensible than simple autonomous agents.", id='19f2eb9f-a742-46aa-9047-60909c30e64a', tool_call_id='toolu_017XaQuVsoAyfXeTfDyv55Pc')]
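As a hedged aside (not needed for this example): update_state also accepts an as_node argument, which applies the update as if the named node had produced it, so routing downstream of the write behaves as though that node just ran:

# Left commented out: we already applied the update above.
# graph.update_state(config, {"messages": [tool_message]}, as_node="human")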
Next, resume the graph by invoking it with None as the input.
events = graph.stream(None, config, stream_mode="values")
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
================================= Tool Message ================================= We, the experts are here to help! We'd recommend you check out LangGraph to build your agent. It's much more reliable and extensible than simple autonomous agents. ================================== Ai Message ================================== It looks like the experts have provided some guidance on how to build your AI agent. They suggested checking out LangGraph, which they say is more reliable and extensible than simple autonomous agents. Please let me know if you need any other assistance - I'm happy to help coordinate with the expert team further.
Notice that the chat bot has incorporated the updated state in its final response. Since everything was checkpointed, the "expert" human in the loop could perform the update at any time without impacting the graph's execution.
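Equivalently (a hedged aside), you can resume with invoke instead of stream if you don't need intermediate events:

# Left commented out: this thread has already run to completion above.
# final_state = graph.invoke(None, config)
# final_state["messages"][-1].pretty_print()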
Congratulations! You've now added an additional node to your assistant graph to let the chat bot decide for itself whether or not it needs to interrupt execution. You did so by updating the graph State with a new ask_human field and modifying the interruption logic when compiling the graph. This lets you dynamically include a human in the loop while maintaining full memory every time you execute the graph.
We're almost done with the tutorial, but there is one more concept we'd like to review before finishing that connects checkpointing and state updates.
This section's code is reproduced below for your reference.
Full Code
from typing import Annotated

from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import AIMessage, ToolMessage
from langchain_core.pydantic_v1 import BaseModel
from typing_extensions import TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition


class State(TypedDict):
    messages: Annotated[list, add_messages]
    # This flag is new
    ask_human: bool


class RequestAssistance(BaseModel):
    """Escalate the conversation to an expert. Use this if you are unable to assist directly or if the user requires support beyond your permissions.

    To use this function, relay the user's 'request' so the expert can provide the right guidance.
    """

    request: str


tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-haiku-20240307")
# We can bind the llm to a tool definition, a pydantic model, or a json schema
llm_with_tools = llm.bind_tools(tools + [RequestAssistance])


def chatbot(state: State):
    response = llm_with_tools.invoke(state["messages"])
    ask_human = False
    if (
        response.tool_calls
        and response.tool_calls[0]["name"] == RequestAssistance.__name__
    ):
        ask_human = True
    return {"messages": [response], "ask_human": ask_human}


graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_node("tools", ToolNode(tools=[tool]))


def create_response(response: str, ai_message: AIMessage):
    return ToolMessage(
        content=response,
        tool_call_id=ai_message.tool_calls[0]["id"],
    )


def human_node(state: State):
    new_messages = []
    if not isinstance(state["messages"][-1], ToolMessage):
        # Typically, the user will have updated the state during the interrupt.
        # If they choose not to, we will include a placeholder ToolMessage to
        # let the LLM continue.
        new_messages.append(
            create_response("No response from human.", state["messages"][-1])
        )
    return {
        # Append the new messages
        "messages": new_messages,
        # Unset the flag
        "ask_human": False,
    }


graph_builder.add_node("human", human_node)


def select_next_node(state: State):
    if state["ask_human"]:
        return "human"
    # Otherwise, we can route as before
    return tools_condition(state)


graph_builder.add_conditional_edges(
    "chatbot",
    select_next_node,
    {"human": "human", "tools": "tools", "__end__": "__end__"},
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge("human", "chatbot")
graph_builder.set_entry_point("chatbot")
memory = MemorySaver()
graph = graph_builder.compile(
    checkpointer=memory,
    interrupt_before=["human"],
)
Part 7: Time Travel¶
In a typical chat bot workflow, the user interacts with the bot one or more times to accomplish a task. In the previous sections, we saw how to add memory and a human-in-the-loop so we can checkpoint our graph state and manually override it to control future responses.
But what if you want to let your user start from a previous response and "branch off" to explore a separate outcome? Or what if you want users to be able to "rewind" your assistant's work to fix some mistakes or try a different strategy (common in applications like autonomous software engineers)?
You can create both of these experiences and more using LangGraph's built-in "time travel" functionality.
In this section, you will "rewind" your graph by fetching a checkpoint using the graph's get_state_history method. You can then resume execution at this previous point in time.
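As a minimal sketch of the API we'll use (assuming the StateSnapshot fields .values and .next), you can already list every checkpoint saved for a thread, newest first:

# Walk the checkpoint history for thread "1" from the previous section.
for snapshot in graph.get_state_history({"configurable": {"thread_id": "1"}}):
    print(len(snapshot.values["messages"]), "messages; next:", snapshot.next)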
First, recall our chatbot graph. We don't need to make any changes from before:
from typing import Annotated, Literal
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import AIMessage, ToolMessage
from langchain_core.pydantic_v1 import BaseModel
from typing_extensions import TypedDict
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
class State(TypedDict):
messages: Annotated[list, add_messages]
# This flag is new
ask_human: bool
class RequestAssistance(BaseModel):
"""Escalate the conversation to an expert. Use this if you are unable to assist directly or if the user requires support beyond your permissions.
To use this function, relay the user's 'request' so the expert can provide the right guidance.
"""
request: str
tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-haiku-20240307")
# We can bind the llm to a tool definition, a pydantic model, or a json schema
llm_with_tools = llm.bind_tools(tools + [RequestAssistance])
def chatbot(state: State):
response = llm_with_tools.invoke(state["messages"])
ask_human = False
if (
response.tool_calls
and response.tool_calls[0]["name"] == RequestAssistance.__name__
):
ask_human = True
return {"messages": [response], "ask_human": ask_human}
graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_node("tools", ToolNode(tools=[tool]))
def create_response(response: str, ai_message: AIMessage):
return ToolMessage(
content=response,
tool_call_id=ai_message.tool_calls[0]["id"],
)
def human_node(state: State):
new_messages = []
if not isinstance(state["messages"][-1], ToolMessage):
# Typically, the user will have updated the state during the interrupt.
# If they choose not to, we will include a placeholder ToolMessage to
# let the LLM continue.
new_messages.append(
create_response("No response from human.", state["messages"][-1])
)
return {
# Append the new messages
"messages": new_messages,
# Unset the flag
"ask_human": False,
}
graph_builder.add_node("human", human_node)
def select_next_node(state: State) -> Literal["human", "tools", "__end__"]:
if state["ask_human"]:
return "human"
# Otherwise, we can route as before
return tools_condition(state)
graph_builder.add_conditional_edges(
"chatbot",
select_next_node,
{"human": "human", "tools": "tools", "__end__": "__end__"},
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge("human", "chatbot")
graph_builder.add_edge(START, "chatbot")
memory = MemorySaver()
graph = graph_builder.compile(
checkpointer=memory,
interrupt_before=["human"],
)
from IPython.display import Image, display
try:
display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
# This requires some extra dependencies and is optional
pass
Let's have our graph take a couple steps. Every step will be checkpointed in its state history:
config = {"configurable": {"thread_id": "1"}}
events = graph.stream(
{
"messages": [
("user", "I'm learning LangGraph. Could you do some research on it for me?")
]
},
config,
stream_mode="values",
)
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
================================ Human Message ================================= I'm learning LangGraph. Could you do some research on it for me? ================================== Ai Message ================================== [{'text': "Okay, let me look into LangGraph for you. Here's what I found:", 'type': 'text'}, {'id': 'toolu_011AQ2FT4RupVka2LVMV3Gci', 'input': {'query': 'LangGraph'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}] Tool Calls: tavily_search_results_json (toolu_011AQ2FT4RupVka2LVMV3Gci) Call ID: toolu_011AQ2FT4RupVka2LVMV3Gci Args: query: LangGraph ================================= Tool Message ================================= Name: tavily_search_results_json [{"url": "https://langchain-ai.github.io/langgraph/", "content": "LangGraph is framework agnostic (each node is a regular python function). It extends the core Runnable API (shared interface for streaming, async, and batch calls) to make it easy to: Seamless state management across multiple turns of conversation or tool usage. The ability to flexibly route between nodes based on dynamic criteria."}, {"url": "https://blog.langchain.dev/langgraph-multi-agent-workflows/", "content": "As a part of the launch, we highlighted two simple runtimes: one that is the equivalent of the AgentExecutor in langchain, and a second that was a version of that aimed at message passing and chat models.\n It's important to note that these three examples are only a few of the possible examples we could highlight - there are almost assuredly other examples out there and we look forward to seeing what the community comes up with!\n LangGraph: Multi-Agent Workflows\nLinks\nLast week we highlighted LangGraph - a new package (available in both Python and JS) to better enable creation of LLM workflows containing cycles, which are a critical component of most agent runtimes. \"\nAnother key difference between Autogen and LangGraph is that LangGraph is fully integrated into the LangChain ecosystem, meaning you take fully advantage of all the LangChain integrations and LangSmith observability.\n As part of this launch, we're also excited to highlight a few applications built on top of LangGraph that utilize the concept of multiple agents.\n"}] ================================== Ai Message ================================== Based on the search results, here's what I've learned about LangGraph: - LangGraph is a framework-agnostic tool that extends the Runnable API to make it easier to manage state and routing between different nodes or agents in a conversational workflow. - It's part of the LangChain ecosystem, so it integrates with other LangChain tools and observability features. - LangGraph enables the creation of multi-agent workflows, where you can have different "nodes" or agents that can communicate and pass information to each other. - This allows for more complex conversational flows and the ability to chain together different capabilities, tools, or models. - The key benefits seem to be around state management, flexible routing between agents, and the ability to create more sophisticated and dynamic conversational workflows. Let me know if you need any clarification or have additional questions! I'm happy to do more research on LangGraph if you need further details.
events = graph.stream(
{
"messages": [
("user", "Ya that's helpful. Maybe I'll build an autonomous agent with it!")
]
},
config,
stream_mode="values",
)
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
================================ Human Message ================================= Ya that's helpful. Maybe I'll build an autonomous agent with it! ================================== Ai Message ================================== [{'text': "That's great that you're interested in building an autonomous agent using LangGraph! Here are a few additional thoughts on how you could approach that:", 'type': 'text'}, {'id': 'toolu_01L3V9FhZG5Qx9jqRGfWGtS2', 'input': {'query': 'building autonomous agents with langgraph'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}] Tool Calls: tavily_search_results_json (toolu_01L3V9FhZG5Qx9jqRGfWGtS2) Call ID: toolu_01L3V9FhZG5Qx9jqRGfWGtS2 Args: query: building autonomous agents with langgraph ================================= Tool Message ================================= Name: tavily_search_results_json [{"url": "https://github.com/langchain-ai/langgraphjs", "content": "LangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain.js.It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner. It is inspired by Pregel and Apache Beam.The current interface exposed is one inspired by ..."}, {"url": "https://github.com/langchain-ai/langgraph", "content": "LangGraph is a library for building stateful, multi-actor applications with LLMs. It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner. It is inspired by Pregel and Apache Beam.The current interface exposed is one inspired by NetworkX.. The main use is for adding cycles to your LLM ..."}] ================================== Ai Message ================================== The key things to keep in mind: 1. LangGraph is designed to help coordinate multiple "agents" or "actors" that can pass information back and forth. This allows you to build more complex, multi-step workflows. 2. You'll likely want to define different nodes or agents that handle specific tasks or capabilities. LangGraph makes it easy to route between these agents based on the state of the conversation. 3. Make sure to leverage the LangChain ecosystem - things like prompts, memory, agents, tools etc. LangGraph integrates with these to give you a powerful set of building blocks. 4. Pay close attention to state management - LangGraph helps you manage state across multiple interactions, which is crucial for an autonomous agent. 5. Consider how you'll handle things like user intent, context, and goal-driven behavior. LangGraph gives you the flexibility to implement these kinds of complex behaviors. Let me know if you have any other specific questions as you start prototyping your autonomous agent! I'm happy to provide more guidance.
Now that we've had the agent take a couple of steps, we can replay the full state history to see everything that occurred.
to_replay = None