How to share state between threads
By default, state in a graph is scoped to a single thread. LangGraph also allows you to specify a "scope" for a given key/value pair so that it persists across threads. This is useful for storing information that should be shared between conversations: for instance, you may want to store a user's preferences expressed in one thread, and then use that information in another thread.
In this notebook we will go through an example of how to construct and use such a graph.
Setup
First, let's install the required packages and set our API keys:
%%capture --no-stderr
%pip install -U langchain_openai langgraph
import getpass
import os
def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")
_set_env("OPENAI_API_KEY")
Create graph
In this example we will create a graph that lets us store information about a user's preferences. We will do so by defining a state key scoped to a user_id, and allowing the model to populate this field as it sees fit (by giving the model a tool to save information about the user).
Typing shared state keys: shared state channels (keys) MUST be dictionaries (see the info channel in the AgentState example below).
from langgraph.graph.graph import START, END
from langgraph.graph.message import MessagesState
from langgraph.graph.state import StateGraph
from langgraph.store.memory import MemoryStore
from langgraph.managed.shared_value import SharedValue
from typing import TypedDict, Annotated, Any
import uuid
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
class AgentState(MessagesState):
    # We use an info key to track information.
    # It is scoped to a user_id, so it holds information specific to each user.
    info: Annotated[dict, SharedValue.on("user_id")]
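To build intuition for what scoping a key to "user_id" means, here is a plain-Python sketch of a value store keyed by user rather than by thread. `UserScopedStore` is a hypothetical name used only for illustration; it is not part of LangGraph.

```python
class UserScopedStore:
    """Toy illustration: values live under a user_id, not a thread_id."""

    def __init__(self):
        self._data = {}  # user_id -> dict of saved values

    def get(self, user_id: str) -> dict:
        # Every user starts with an empty dict
        return self._data.get(user_id, {})

    def update(self, user_id: str, values: dict) -> None:
        self._data.setdefault(user_id, {}).update(values)


store = UserScopedStore()
store.update("1", {"m1": {"fact": "likes pizza", "topic": "Food"}})

# Any thread for user "1" sees the same info...
assert store.get("1") == {"m1": {"fact": "likes pizza", "topic": "Food"}}
# ...while user "2" starts empty.
assert store.get("2") == {}
```

This mirrors the behavior we will observe at the end of the notebook: two threads with the same user_id share memories, while a new user_id starts fresh.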
# We will give this as a tool to the agent
# This will let the agent call this tool to save a fact
class Info(TypedDict):
    """This tool should be called when you want to save a new fact about the user.

    Attributes:
        fact (str): A fact about the user.
        topic (str): The topic the fact is about, e.g. Food, Location, Movies, etc.
    """

    fact: str
    topic: str
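When we later call `bind_tools([Info])`, LangChain derives a function-calling tool definition from this class. The snippet below is a hand-built approximation of the parameter schema it would produce from the annotations; the exact schema LangChain emits may differ in detail.

```python
from typing import TypedDict


class Info(TypedDict):
    fact: str
    topic: str


# Approximate the JSON-schema parameters derived from the annotations.
# Both fields are plain strings, and both are required.
params = {
    "type": "object",
    "properties": {name: {"type": "string"} for name in Info.__annotations__},
    "required": list(Info.__annotations__),
}

assert params["required"] == ["fact", "topic"]
```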
# This is the prompt we give the agent
# We will pass known info into the prompt
# We will tell it to use the Info tool to save more
prompt = """You are a helpful assistant.
Here is what you know about the user:
<info>
{info}
</info>
Help out the user. If the user tells you any information about themselves, save the information using the `Info` tool.
This means that if the user provides any sort of fact about themselves, be it an opinion, a preference, or anything else, SAVE IT!
"""
# We give the model access to the Info tool
model = ChatOpenAI().bind_tools([Info])
# Our first node - this will call the model
def call_model(state):
    # We get all facts and assemble them into a string
    facts = [d["fact"] for d in state["info"].values()]
    info = "\n".join(facts)
    # Format the system prompt with the known info
    system_msg = prompt.format(info=info)
    # Call the model
    response = model.invoke([{"role": "system", "content": system_msg}] + state["messages"])
    return {"messages": [response]}
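To see the fact-assembly step in isolation, here is a standalone sketch of how the info dict (keyed by UUID, as update_memory will save it) becomes part of the system message. The sample data and the shortened template are illustrative, not from a real run.

```python
# A shortened stand-in for the real prompt above, for illustration only
prompt_template = (
    "You are a helpful assistant.\n"
    "Here is what you know about the user:\n"
    "<info>\n{info}\n</info>"
)

# Memories as update_memory stores them: UUID key -> fact/topic dict
info = {
    "a-uuid": {"fact": "I like pepperoni pizza", "topic": "Food"},
    "b-uuid": {"fact": "I just moved to SF", "topic": "Location"},
}

# Pull out just the fact strings and join them for the prompt
facts = [d["fact"] for d in info.values()]
system_msg = prompt_template.format(info="\n".join(facts))

assert "I like pepperoni pizza\nI just moved to SF" in system_msg
```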
# Routing function to decide what to do next
# If no tool calls, then we end
# If tool calls, then we update memory
def route(state):
    if len(state["messages"][-1].tool_calls) == 0:
        return END
    else:
        return "update_memory"
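The routing logic can be checked on its own with a stub message object. The sketch below uses a local sentinel and a renamed copy of the function so it runs without langgraph installed (and without shadowing the real END).

```python
from types import SimpleNamespace

END_SENTINEL = "__end__"  # stand-in for langgraph's END, for this sketch only


def route_sketch(state):
    # No tool calls on the last message -> the conversation turn is done
    if len(state["messages"][-1].tool_calls) == 0:
        return END_SENTINEL
    # Otherwise the model asked to save facts -> go update memory
    return "update_memory"


no_tools = {"messages": [SimpleNamespace(tool_calls=[])]}
with_tools = {"messages": [SimpleNamespace(tool_calls=[{"name": "Info"}])]}

assert route_sketch(no_tools) == END_SENTINEL
assert route_sketch(with_tools) == "update_memory"
```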
# This function is responsible for updating the memory
def update_memory(state):
    tool_calls = []
    memories = {}
    # Each tool call is a new memory to save
    for tc in state["messages"][-1].tool_calls:
        # We append ToolMessages (to pass back to the LLM)
        # This is needed because OpenAI requires each tool call be followed by a ToolMessage
        tool_calls.append({"role": "tool", "content": "Saved!", "tool_call_id": tc["id"]})
        # We create a new memory from this tool call
        memories[str(uuid.uuid4())] = {"fact": tc["args"]["fact"], "topic": tc["args"]["topic"]}
    # Return the messages and memories to update the state with
    return {"messages": tool_calls, "info": memories}
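To see the shape of the state update this node returns, we can exercise a standalone copy of the logic on a stub tool call (no langgraph needed; the stub id and args are made up for illustration).

```python
import uuid
from types import SimpleNamespace


def update_memory_sketch(state):
    tool_calls, memories = [], {}
    for tc in state["messages"][-1].tool_calls:
        # One ToolMessage per tool call, as OpenAI requires
        tool_calls.append({"role": "tool", "content": "Saved!", "tool_call_id": tc["id"]})
        # One new memory per tool call, keyed by a fresh UUID
        memories[str(uuid.uuid4())] = {"fact": tc["args"]["fact"], "topic": tc["args"]["topic"]}
    return {"messages": tool_calls, "info": memories}


stub_call = {"id": "call_123", "args": {"fact": "I like pepperoni pizza", "topic": "Food"}}
state = {"messages": [SimpleNamespace(tool_calls=[stub_call])]}
update = update_memory_sketch(state)

# One ToolMessage echoing the tool_call_id, one memory under a random UUID
assert update["messages"][0]["tool_call_id"] == "call_123"
assert list(update["info"].values()) == [{"fact": "I like pepperoni pizza", "topic": "Food"}]
```

Note that the "messages" part updates the current thread's conversation, while the "info" part is merged into the user-scoped shared state.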
# This is the in-memory checkpointer we will use.
# We need it because we want to enable threads (conversations).
memory = MemorySaver()
# This is the in-memory key-value store.
# It is needed to save the memories shared across threads.
kv = MemoryStore()
# Construct this relatively simple graph
graph = StateGraph(AgentState)
graph.add_node(call_model)
graph.add_node(update_memory)
graph.add_edge("update_memory", END)
graph.add_edge(START, "call_model")
graph.add_conditional_edges("call_model", route)
graph = graph.compile(checkpointer=memory, store=kv)
Run graph on one thread
We can now run the graph on one thread and give it some information:
config = {"configurable": {"thread_id": "1", "user_id": "1"}}
# First, let's just say hi to the AI
for update in graph.stream({"messages": [{"role": "user", "content": "hi"}]}, config, stream_mode="updates"):
    print(update)
# Let's continue the conversation (by passing the same config) and tell the AI we like pepperoni pizza
for update in graph.stream({"messages": [{"role": "user", "content": "i like pepperoni pizza"}]}, config, stream_mode="updates"):
    print(update)
# Let's continue the conversation even further (by passing the same config) and tell the AI we live in SF
for update in graph.stream({"messages": [{"role": "user", "content": "i also just moved to SF"}]}, config, stream_mode="updates"):
    print(update)
{'call_model': {'messages': [AIMessage(content='Hello! How can I assist you today?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 171, 'total_tokens': 181}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-fbbb73a4-7c94-4db1-8761-44ea2fe9feaf-0', usage_metadata={'input_tokens': 171, 'output_tokens': 10, 'total_tokens': 181})]}}
{'call_model': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_zMUXZfhOCFYvZg5TwXyBzw16', 'function': {'arguments': '{"fact":"I like pepperoni pizza","topic":"Food"}', 'name': 'Info'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 21, 'prompt_tokens': 193, 'total_tokens': 214}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-7297f9fb-1d3e-480e-b125-ab269f648158-0', tool_calls=[{'name': 'Info', 'args': {'fact': 'I like pepperoni pizza', 'topic': 'Food'}, 'id': 'call_zMUXZfhOCFYvZg5TwXyBzw16', 'type': 'tool_call'}], usage_metadata={'input_tokens': 193, 'output_tokens': 21, 'total_tokens': 214})]}}
{'update_memory': {'messages': [{'role': 'tool', 'content': 'Saved!', 'tool_call_id': 'call_zMUXZfhOCFYvZg5TwXyBzw16'}]}}
{'call_model': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_GjshujJAeqoTuuBeHCD5YTPQ', 'function': {'arguments': '{"fact":"I just moved to SF","topic":"Location"}', 'name': 'Info'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 21, 'prompt_tokens': 239, 'total_tokens': 260}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-4abea1d6-7ccb-49b4-b805-0e04ebb542e3-0', tool_calls=[{'name': 'Info', 'args': {'fact': 'I just moved to SF', 'topic': 'Location'}, 'id': 'call_GjshujJAeqoTuuBeHCD5YTPQ', 'type': 'tool_call'}], usage_metadata={'input_tokens': 239, 'output_tokens': 21, 'total_tokens': 260})]}}
{'update_memory': {'messages': [{'role': 'tool', 'content': 'Saved!', 'tool_call_id': 'call_GjshujJAeqoTuuBeHCD5YTPQ'}]}}
Run graph on a different thread
We can now run the graph on a different thread and see that it remembers facts about the user (specifically that the user likes pepperoni pizza and lives in SF):
config = {"configurable": {"thread_id": "2", "user_id": "1"}}
for update in graph.stream({"messages": [{"role": "user", "content": "where and what should i eat for dinner? Can you list some restaurants?"}]}, config, stream_mode="updates"):
    print(update)
{'call_model': {'messages': [AIMessage(content="Sure! Since you just moved to San Francisco, how about trying some popular local spots? Here are a few restaurant recommendations in SF:\n\n1. Tony's Pizza Napoletana - Known for their delicious pepperoni pizza!\n2. The Slanted Door - A popular Vietnamese restaurant in the city.\n3. Zuni Cafe - A classic American restaurant with a great ambiance.\n4. Tartine Bakery - Perfect for a casual dinner with amazing baked goods.\n5. State Bird Provisions - A unique dining experience with small plates and a lively atmosphere.\n\nFeel free to explore these options and enjoy your dinner! If you need more recommendations or information about a specific cuisine, let me know!", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 138, 'prompt_tokens': 197, 'total_tokens': 335}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-de8ad08c-0810-4bb5-b2e8-d3dc89522f8e-0', usage_metadata={'input_tokens': 197, 'output_tokens': 138, 'total_tokens': 335})]}}
Perfect! The AI recommended restaurants in SF and included a pizza restaurant at the top of its list.
Notice that the messages in this new thread do NOT contain the messages from the previous thread, since we did not make them a shared value scoped to the user_id. However, the info saved in the previous thread is available here because we passed the same user_id in this new thread.
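The scoping rule can be stated concretely in terms of the config dicts: two runs share memories exactly when their configs carry the same user_id, regardless of thread_id. The helper below is a hypothetical illustration of that rule, not a LangGraph API.

```python
# The three configs used in this notebook
thread_1 = {"configurable": {"thread_id": "1", "user_id": "1"}}
thread_2 = {"configurable": {"thread_id": "2", "user_id": "1"}}
thread_3 = {"configurable": {"thread_id": "3", "user_id": "2"}}


def shares_memories(a: dict, b: dict) -> bool:
    # Memories are keyed by user_id, so that is the only field that matters
    return a["configurable"]["user_id"] == b["configurable"]["user_id"]


assert shares_memories(thread_1, thread_2)      # same user, different threads
assert not shares_memories(thread_1, thread_3)  # different users
```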
Let's now run the graph for another user to verify that the first user's preferences are self-contained:
config = {"configurable": {"thread_id": "3", "user_id": "2"}}
for update in graph.stream({"messages": [{"role": "user", "content": "where and what should i eat for dinner? Can you list some restaurants?"}]}, config, stream_mode="updates"):
    print(update)
{'call_model': {'messages': [AIMessage(content='I can definitely help you with that! To provide you with personalized restaurant recommendations, could you please let me know your location or any specific preferences you have for dinner?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 34, 'prompt_tokens': 185, 'total_tokens': 219}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5a483acf-1289-4d7f-b707-97760a8c3620-0', usage_metadata={'input_tokens': 185, 'output_tokens': 34, 'total_tokens': 219})]}}
Perfect! The graph has none of the first user's preferences, so it has to ask this user for their location and dietary preferences.