How to view and update past graph state¶
Once you start checkpointing your graphs, you can easily get or update the state of the agent at any point in time. This permits a few things:
- You can surface a state during an interrupt to a user to let them accept an action.
- You can rewind the graph to reproduce or avoid issues.
- You can modify the state to embed your agent into a larger system, or to let the user better control its actions.
The key methods used for this functionality are:
- get_state: fetch the values from the target config
- update_state: apply the given values to the target state
Note: this requires passing in a checkpointer.
Below is a quick example.
Setup¶
First, we need to install the required packages.
%%capture --no-stderr
%pip install --quiet -U langgraph langchain_anthropic
Next, we need to set the API key for Anthropic (the LLM we will use).
import getpass
import os
def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("ANTHROPIC_API_KEY")
Optionally, we can set the API key for LangSmith tracing, which will give us best-in-class observability.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
_set_env("LANGCHAIN_API_KEY")
Build the agent¶
We can now build the agent. We will build a relatively simple ReAct-style agent that does tool calling. We will use Anthropic's models and a fake tool (just for demo purposes).
# Set up the tool
from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool
from langgraph.graph import MessagesState, START
from langgraph.prebuilt import ToolNode
from langgraph.graph import END, StateGraph
from langgraph.checkpoint.memory import MemorySaver
@tool
def search(query: str):
    """Call to surf the web."""
    # This is a placeholder for the actual implementation
    # Don't let the LLM know this though 😊
    return [
        "It's sunny in San Francisco, but you better look out if you're a Gemini 😈."
    ]
tools = [search]
tool_node = ToolNode(tools)
# Set up the model
model = ChatAnthropic(model="claude-3-5-sonnet-20240620")
model = model.bind_tools(tools)
# Define nodes and conditional edges
# Define the function that determines whether to continue or not
def should_continue(state):
    messages = state["messages"]
    last_message = messages[-1]
    # If there is no tool call, then we finish
    if not last_message.tool_calls:
        return "end"
    # Otherwise if there is, we continue
    else:
        return "continue"
# Define the function that calls the model
def call_model(state):
    messages = state["messages"]
    response = model.invoke(messages)
    # We return a list, because this will get added to the existing list
    return {"messages": [response]}
# Define a new graph
workflow = StateGraph(MessagesState)
# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("action", tool_node)
# Set the entrypoint as `agent`
# This means that this node is the first one called
workflow.add_edge(START, "agent")
# We now add a conditional edge
workflow.add_conditional_edges(
    # First, we define the start node. We use `agent`.
    # This means these are the edges taken after the `agent` node is called.
    "agent",
    # Next, we pass in the function that will determine which node is called next.
    should_continue,
    # Finally we pass in a mapping.
    # The keys are strings, and the values are other nodes.
    # END is a special node marking that the graph should finish.
    # What will happen is we will call `should_continue`, and then the output of that
    # will be matched against the keys in this mapping.
    # Based on which one it matches, that node will then be called.
    {
        # If `should_continue` returns "continue", we call the tool node.
        "continue": "action",
        # Otherwise we finish.
        "end": END,
    },
)
# We now add a normal edge from `action` to `agent`.
# This means that after `action` is called, the `agent` node is called next.
workflow.add_edge("action", "agent")
# Set up memory
memory = MemorySaver()
# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable.
# Note that we pass in the checkpointer, so the state is saved after each step.
app = workflow.compile(checkpointer=memory)
Interacting with the Agent¶
We can now interact with the agent. Let's ask it for the weather in SF.
from langchain_core.messages import HumanMessage
config = {"configurable": {"thread_id": "1"}}
input_message = HumanMessage(content="Use the search tool to look up the weather in SF")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()
================================ Human Message =================================

Use the search tool to look up the weather in SF
================================== Ai Message ==================================

[{'text': "Certainly! I'll use the search tool to look up the weather in San Francisco for you. Let me do that right away.", 'type': 'text'}, {'id': 'toolu_01Bpq6yiKqk9moPuGYKdLr8r', 'input': {'query': 'weather in San Francisco'}, 'name': 'search', 'type': 'tool_use'}]
Tool Calls:
  search (toolu_01Bpq6yiKqk9moPuGYKdLr8r)
 Call ID: toolu_01Bpq6yiKqk9moPuGYKdLr8r
  Args:
    query: weather in San Francisco
================================= Tool Message =================================
Name: search

["It's sunny in San Francisco, but you better look out if you're a Gemini \ud83d\ude08."]
================================== Ai Message ==================================

Based on the search results, I can provide you with information about the weather in San Francisco:

The current weather in San Francisco is sunny. This is great news for residents and visitors who want to enjoy outdoor activities or explore the city.

However, there's an interesting and somewhat humorous addition to the weather report. It mentions, "but you better look out if you're a Gemini 😈." This appears to be a playful reference to astrology, suggesting that Geminis might have some challenges despite the good weather. Of course, this is not a scientific weather prediction and is likely just a fun addition to the report.

To summarize:
1. The weather in San Francisco is currently sunny.
2. It's a good day for outdoor activities.
3. There's a playful astrological warning for Geminis, but this shouldn't be taken seriously in terms of actual weather conditions.

Is there anything else you'd like to know about the weather in San Francisco or any other location?
Checking history¶
Let's browse the history of this thread, from start to finish.
all_states = []
for state in app.get_state_history(config):
    print(state)
    all_states.append(state)
    print("--")
StateSnapshot(values={'messages': []}, next=('__start__',), config={'configurable': {'thread_id': '1', 'thread_ts': '1ef355ac-b80d-6e18-bfff-c903ebc4bdfd'}}, metadata={'source': 'input', 'step': -1, 'writes': {'messages': [HumanMessage(content='Use the search tool to look up the weather in SF')]}}, created_at='2024-06-28T14:29:14.932371+00:00', parent_config=None)
--
StateSnapshot(values={'messages': [HumanMessage(content='Use the search tool to look up the weather in SF', id='9558b3e7-fa30-4e8b-9587-d58ae3491577')]}, next=('agent',), config={'configurable': {'thread_id': '1', 'thread_ts': '1ef355ac-b810-60dc-8000-a9c67d8cc5e0'}}, metadata={'source': 'loop', 'step': 0, 'writes': None}, created_at='2024-06-28T14:29:14.933257+00:00', parent_config=None)
--
StateSnapshot(values={'messages': [HumanMessage(content='Use the search tool to look up the weather in SF', id='9558b3e7-fa30-4e8b-9587-d58ae3491577'), AIMessage(content=[{'text': "Certainly! I'll use the search tool to look up the weather in San Francisco for you. Let me do that right away.", 'type': 'text'}, {'id': 'toolu_01Bpq6yiKqk9moPuGYKdLr8r', 'input': {'query': 'weather in San Francisco'}, 'name': 'search', 'type': 'tool_use'}], response_metadata={'id': 'msg_011ae64fY2jEcfS8kgrt4Fn9', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 363, 'output_tokens': 81}}, id='run-cfef25ca-d1be-4e79-8798-3bb9a7002287-0', tool_calls=[{'name': 'search', 'args': {'query': 'weather in San Francisco'}, 'id': 'toolu_01Bpq6yiKqk9moPuGYKdLr8r'}], usage_metadata={'input_tokens': 363, 'output_tokens': 81, 'total_tokens': 444})]}, next=('action',), config={'configurable': {'thread_id': '1', 'thread_ts': '1ef355ac-c6b1-6028-8001-82bd095f9a87'}}, metadata={'source': 'loop', 'step': 1, 'writes': {'agent': {'messages': [AIMessage(content=[{'text': "Certainly! I'll use the search tool to look up the weather in San Francisco for you. Let me do that right away.", 'type': 'text'}, {'id': 'toolu_01Bpq6yiKqk9moPuGYKdLr8r', 'input': {'query': 'weather in San Francisco'}, 'name': 'search', 'type': 'tool_use'}], response_metadata={'id': 'msg_011ae64fY2jEcfS8kgrt4Fn9', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 363, 'output_tokens': 81}}, id='run-cfef25ca-d1be-4e79-8798-3bb9a7002287-0', tool_calls=[{'name': 'search', 'args': {'query': 'weather in San Francisco'}, 'id': 'toolu_01Bpq6yiKqk9moPuGYKdLr8r'}], usage_metadata={'input_tokens': 363, 'output_tokens': 81, 'total_tokens': 444})]}}}, created_at='2024-06-28T14:29:16.467180+00:00', parent_config=None)
--
StateSnapshot(values={'messages': [HumanMessage(content='Use the search tool to look up the weather in SF', id='9558b3e7-fa30-4e8b-9587-d58ae3491577'), AIMessage(content=[{'text': "Certainly! I'll use the search tool to look up the weather in San Francisco for you. Let me do that right away.", 'type': 'text'}, {'id': 'toolu_01Bpq6yiKqk9moPuGYKdLr8r', 'input': {'query': 'weather in San Francisco'}, 'name': 'search', 'type': 'tool_use'}], response_metadata={'id': 'msg_011ae64fY2jEcfS8kgrt4Fn9', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 363, 'output_tokens': 81}}, id='run-cfef25ca-d1be-4e79-8798-3bb9a7002287-0', tool_calls=[{'name': 'search', 'args': {'query': 'weather in San Francisco'}, 'id': 'toolu_01Bpq6yiKqk9moPuGYKdLr8r'}], usage_metadata={'input_tokens': 363, 'output_tokens': 81, 'total_tokens': 444}), ToolMessage(content='["It\'s sunny in San Francisco, but you better look out if you\'re a Gemini \\ud83d\\ude08."]', name='search', id='b5aa87cb-335a-4ee0-8809-381e33d0f02e', tool_call_id='toolu_01Bpq6yiKqk9moPuGYKdLr8r')]}, next=('agent',), config={'configurable': {'thread_id': '1', 'thread_ts': '1ef355ac-c6ba-63a8-8002-48d076c3c4b7'}}, metadata={'source': 'loop', 'step': 2, 'writes': {'action': {'messages': [ToolMessage(content='["It\'s sunny in San Francisco, but you better look out if you\'re a Gemini \\ud83d\\ude08."]', name='search', id='b5aa87cb-335a-4ee0-8809-381e33d0f02e', tool_call_id='toolu_01Bpq6yiKqk9moPuGYKdLr8r')]}}}, created_at='2024-06-28T14:29:16.470958+00:00', parent_config=None)
--
StateSnapshot(values={'messages': [HumanMessage(content='Use the search tool to look up the weather in SF', id='9558b3e7-fa30-4e8b-9587-d58ae3491577'), AIMessage(content=[{'text': "Certainly! I'll use the search tool to look up the weather in San Francisco for you. Let me do that right away.", 'type': 'text'}, {'id': 'toolu_01Bpq6yiKqk9moPuGYKdLr8r', 'input': {'query': 'weather in San Francisco'}, 'name': 'search', 'type': 'tool_use'}], response_metadata={'id': 'msg_011ae64fY2jEcfS8kgrt4Fn9', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 363, 'output_tokens': 81}}, id='run-cfef25ca-d1be-4e79-8798-3bb9a7002287-0', tool_calls=[{'name': 'search', 'args': {'query': 'weather in San Francisco'}, 'id': 'toolu_01Bpq6yiKqk9moPuGYKdLr8r'}], usage_metadata={'input_tokens': 363, 'output_tokens': 81, 'total_tokens': 444}), ToolMessage(content='["It\'s sunny in San Francisco, but you better look out if you\'re a Gemini \\ud83d\\ude08."]', name='search', id='b5aa87cb-335a-4ee0-8809-381e33d0f02e', tool_call_id='toolu_01Bpq6yiKqk9moPuGYKdLr8r'), AIMessage(content='Based on the search results, I can provide you with information about the weather in San Francisco:\n\nThe current weather in San Francisco is sunny. This is great news for residents and visitors who want to enjoy outdoor activities or explore the city.\n\nHowever, there\'s an interesting and somewhat humorous addition to the weather report. It mentions, "but you better look out if you\'re a Gemini 😈." This appears to be a playful reference to astrology, suggesting that Geminis might have some challenges despite the good weather. Of course, this is not a scientific weather prediction and is likely just a fun addition to the report.\n\nTo summarize:\n1. The weather in San Francisco is currently sunny.\n2. It\'s a good day for outdoor activities.\n3. There\'s a playful astrological warning for Geminis, but this shouldn\'t be taken seriously in terms of actual weather conditions.\n\nIs there anything else you\'d like to know about the weather in San Francisco or any other location?', response_metadata={'id': 'msg_01NWeLrkQRLiGsVsxnepzq3p', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 486, 'output_tokens': 217}}, id='run-09f1b7d5-50ec-4f00-a31b-c4dec858b312-0', usage_metadata={'input_tokens': 486, 'output_tokens': 217, 'total_tokens': 703})]}, next=(), config={'configurable': {'thread_id': '1', 'thread_ts': '1ef355ac-f0f5-6942-8003-b79974988738'}}, metadata={'source': 'loop', 'step': 3, 'writes': {'agent': {'messages': [AIMessage(content='Based on the search results, I can provide you with information about the weather in San Francisco:\n\nThe current weather in San Francisco is sunny. This is great news for residents and visitors who want to enjoy outdoor activities or explore the city.\n\nHowever, there\'s an interesting and somewhat humorous addition to the weather report. It mentions, "but you better look out if you\'re a Gemini 😈." This appears to be a playful reference to astrology, suggesting that Geminis might have some challenges despite the good weather. Of course, this is not a scientific weather prediction and is likely just a fun addition to the report.\n\nTo summarize:\n1. The weather in San Francisco is currently sunny.\n2. It\'s a good day for outdoor activities.\n3. There\'s a playful astrological warning for Geminis, but this shouldn\'t be taken seriously in terms of actual weather conditions.\n\nIs there anything else you\'d like to know about the weather in San Francisco or any other location?', response_metadata={'id': 'msg_01NWeLrkQRLiGsVsxnepzq3p', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 486, 'output_tokens': 217}}, id='run-09f1b7d5-50ec-4f00-a31b-c4dec858b312-0', usage_metadata={'input_tokens': 486, 'output_tokens': 217, 'total_tokens': 703})]}}}, created_at='2024-06-28T14:29:20.899258+00:00', parent_config=None)
--
Replay a state¶
We can go back to any of these states and restart the agent from there! Let's go back to right before the tool call gets executed.
to_replay = all_states[2]
to_replay.values
{'messages': [HumanMessage(content='Use the search tool to look up the weather in SF', id='9558b3e7-fa30-4e8b-9587-d58ae3491577'), AIMessage(content=[{'text': "Certainly! I'll use the search tool to look up the weather in San Francisco for you. Let me do that right away.", 'type': 'text'}, {'id': 'toolu_01Bpq6yiKqk9moPuGYKdLr8r', 'input': {'query': 'weather in San Francisco'}, 'name': 'search', 'type': 'tool_use'}], response_metadata={'id': 'msg_011ae64fY2jEcfS8kgrt4Fn9', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 363, 'output_tokens': 81}}, id='run-cfef25ca-d1be-4e79-8798-3bb9a7002287-0', tool_calls=[{'name': 'search', 'args': {'query': 'weather in San Francisco'}, 'id': 'toolu_01Bpq6yiKqk9moPuGYKdLr8r'}], usage_metadata={'input_tokens': 363, 'output_tokens': 81, 'total_tokens': 444})]}
to_replay.next
('action',)
To replay from this point, we just need to pass its config back to the agent. Notice that it resumes from right where it left off: making a tool call.
for event in app.stream(None, to_replay.config):
    for v in event.values():
        print(v)
{'messages': [ToolMessage(content='["It\'s sunny in San Francisco, but you better look out if you\'re a Gemini \\ud83d\\ude08."]', name='search', tool_call_id='toolu_01Bpq6yiKqk9moPuGYKdLr8r')]} {'messages': [AIMessage(content='Based on the search results, I can provide you with information about the weather in San Francisco:\n\nThe current weather in San Francisco is sunny. This is great news for residents and visitors who want to enjoy outdoor activities or explore the city.\n\nHowever, there\'s an interesting and somewhat humorous addition to the weather report. It mentions, "but you better look out if you\'re a Gemini 😈." This appears to be a playful reference to astrology, suggesting that Geminis might have some challenges despite the good weather. Of course, this is not a scientific weather prediction and is likely just a fun addition to the weather report.\n\nTo summarize:\n1. The weather in San Francisco is currently sunny.\n2. It\'s a good day for outdoor activities.\n3. There\'s a playful astrological reference for Geminis, but this shouldn\'t be taken as actual weather information.\n\nIs there anything else you\'d like to know about the weather in San Francisco or any other location?', response_metadata={'id': 'msg_01S4uzzxbGwvsk1vfoLJAD7Z', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 486, 'output_tokens': 214}}, id='run-570bd10f-4007-4b33-8c29-44b9a19b6978-0', usage_metadata={'input_tokens': 486, 'output_tokens': 214, 'total_tokens': 700})]}
Branch off a past state¶
Using LangGraph's checkpointing, you can do more than just replay past states. You can branch off previous locations to let the agent explore alternate trajectories or to let a user "version control" changes in a workflow.
Let's show how to do this by editing the state at a particular point in time. We'll update the state to change the input to the tool.
# Let's now get the last message in the state
# This is the one with the tool calls that we want to update
last_message = to_replay.values["messages"][-1]
# Let's now update the args for that tool call
last_message.tool_calls[0]["args"] = {"query": "current weather in SF"}
branch_config = app.update_state(
    to_replay.config,
    {"messages": [last_message]},
)
We can then invoke with this new `branch_config` to resume running from here with the changed state. We can see from the log that the tool was called with different input.
for event in app.stream(None, branch_config):
    for v in event.values():
        print(v)
{'messages': [ToolMessage(content='["It\'s sunny in San Francisco, but you better look out if you\'re a Gemini \\ud83d\\ude08."]', name='search', tool_call_id='toolu_01Bpq6yiKqk9moPuGYKdLr8r')]} {'messages': [AIMessage(content="Based on the search results, I can provide you with information about the current weather in San Francisco (SF):\n\nThe weather in San Francisco is currently sunny. This means it's a clear day with plenty of sunshine, which is great for outdoor activities or simply enjoying the city's beautiful views.\n\nIt's worth noting that San Francisco's weather can be quite variable, even within the city itself, due to its unique geography and microclimates. While it's sunny now, it's always a good idea to be prepared for potential changes, as the city is known for its foggy conditions, especially in certain areas and during specific times of the day.\n\nThe search result also includes a playful reference to astrology, mentioning Geminis. However, this is likely just a humorous addition and not related to the actual weather conditions.\n\nIs there any specific information about the weather in San Francisco that you'd like to know more about, such as temperature, wind conditions, or forecast for the coming days?", response_metadata={'id': 'msg_01AoFmmzZxbMLuu3npVXJKG7', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 486, 'output_tokens': 211}}, id='run-ccb9e786-c23e-4017-bf29-8405f17cec9f-0', usage_metadata={'input_tokens': 486, 'output_tokens': 211, 'total_tokens': 697})]}
Alternatively, we could update the state to not even call a tool!
from langchain_core.messages import AIMessage
# Let's now get the last message in the state
# This is the one with the tool calls that we want to update
last_message = to_replay.values["messages"][-1]
# Let's now get the ID for the last message, and create a new message with that ID.
new_message = AIMessage(content="its warm!", id=last_message.id)
branch_config = app.update_state(
    to_replay.config,
    {"messages": [new_message]},
)
branch_state = app.get_state(branch_config)
branch_state.values
{'messages': [HumanMessage(content='Use the search tool to look up the weather in SF', id='9558b3e7-fa30-4e8b-9587-d58ae3491577'), AIMessage(content='its warm!', id='run-cfef25ca-d1be-4e79-8798-3bb9a7002287-0')]}
branch_state.next
()
You can see the snapshot was updated and now correctly reflects that there is no next step.