Memory¶
LangGraph supports two types of memory essential for building conversational agents:
- Short-term memory: Tracks the ongoing conversation by maintaining message history within a session.
- Long-term memory: Stores user-specific or application-level data across sessions.
This guide demonstrates how to use both memory types with agents in LangGraph. For a deeper understanding of memory concepts, refer to the LangGraph memory documentation.
Terminology
In LangGraph:
- Short-term memory is also referred to as thread-level memory.
- Long-term memory is also called cross-thread memory.
A thread represents a sequence of related runs grouped by the same `thread_id`.
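For example, two runs that share a `thread_id` continue one conversation, while a different `thread_id` starts a fresh one. A minimal sketch (assuming an agent created with a checkpointer, as shown in the next section):

# Runs that share a thread_id belong to the same thread and share message history.
agent.invoke({"messages": "hi, I'm Alice"}, {"configurable": {"thread_id": "a"}})
agent.invoke({"messages": "what's my name?"}, {"configurable": {"thread_id": "a"}})  # sees "Alice"

# A different thread_id starts a separate history.
agent.invoke({"messages": "what's my name?"}, {"configurable": {"thread_id": "b"}})  # no prior context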
Short-term memory¶
Short-term memory enables agents to track multi-turn conversations. To use it, you must:
- Provide a `checkpointer` when creating the agent. The `checkpointer` enables persistence of the agent's state.
- Supply a `thread_id` in the config when running the agent. The `thread_id` is a unique identifier for the conversation session.
API Reference: create_react_agent | InMemorySaver
from langgraph.prebuilt import create_react_agent
from langgraph.checkpoint.memory import InMemorySaver

checkpointer = InMemorySaver()  # (1)!


def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"


agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[get_weather],
    checkpointer=checkpointer  # (2)!
)

# Run the agent
config = {
    "configurable": {
        "thread_id": "1"  # (3)!
    }
}

sf_response = agent.invoke(
    {"messages": "what is the weather in sf"},
    config
)

# Continue the conversation using the same thread_id
ny_response = agent.invoke(
    {"messages": "what about new york?"},
    config  # (4)!
)
1. The `InMemorySaver` is a checkpointer that stores the agent's state in memory. In a production setting, you would typically use a database or other persistent storage. Please review the checkpointer documentation for more options. If you're deploying with LangGraph Platform, the platform will provide a production-ready checkpointer for you.
2. The `checkpointer` is passed to the agent. This enables the agent to persist its state across invocations.
3. A unique `thread_id` is provided in the config. This ID identifies the conversation session. The value is controlled by the user and can be any string.
4. The agent continues the conversation using the same `thread_id`, which allows it to infer that the user is asking specifically about the weather in New York.
When the agent is invoked the second time with the same `thread_id`, the original message history from the first conversation is automatically included, allowing the agent to infer that the user is asking specifically about the weather in New York.
LangGraph Platform provides a production-ready checkpointer
If you're using LangGraph Platform, during deployment your checkpointer will be automatically configured to use a production-ready database.
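To check this, you can print the final message of each response; the second answer builds on the first turn's context (a minimal sketch using the example above; exact replies vary by run):

print(sf_response["messages"][-1].content)
# e.g. "It's always sunny in sf!"

print(ny_response["messages"][-1].content)
# The agent resolves "new york" from the prior turn, so it fetches
# New York's weather without asking what the question refers to.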
Message history summarization¶
Long conversations can exceed the LLM's context window. To handle this, you can summarize older messages by specifying a `pre_model_hook`, such as the prebuilt `SummarizationNode`:
API Reference: ChatAnthropic | count_tokens_approximately | create_react_agent | InMemorySaver
from langchain_anthropic import ChatAnthropic
from langmem.short_term import SummarizationNode
from langchain_core.messages.utils import count_tokens_approximately
from langgraph.prebuilt import create_react_agent
from langgraph.prebuilt.chat_agent_executor import AgentState
from langgraph.checkpoint.memory import InMemorySaver
from typing import Any

model = ChatAnthropic(model="claude-3-7-sonnet-latest")

summarization_node = SummarizationNode(
    token_counter=count_tokens_approximately,
    model=model,
    max_tokens=384,
    max_summary_tokens=128,
    output_messages_key="llm_input_messages",
)


class State(AgentState):
    # NOTE: we're adding this key to keep track of previous summary information
    # to make sure we're not summarizing on every LLM call
    context: dict[str, Any]  # (1)!


checkpointer = InMemorySaver()  # (2)!


def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"


agent = create_react_agent(
    model=model,
    tools=[get_weather],
    pre_model_hook=summarization_node,  # (3)!
    state_schema=State,  # (4)!
    checkpointer=checkpointer,  # (5)!
)
1. The `context` key is added to the agent's state. It holds bookkeeping information for the summarization node and tracks the last summary so the agent doesn't re-summarize on every LLM call, which would be inefficient.
2. The `InMemorySaver` is a checkpointer that stores the agent's state in memory. In a production setting, you would typically use a database or other persistent storage. Please review the checkpointer documentation for more options. If you're deploying with LangGraph Platform, the platform will provide a production-ready checkpointer for you.
3. The `pre_model_hook` is set to the `SummarizationNode`. This node summarizes the message history before sending it to the LLM, automatically handling the summarization process and updating the agent's state with the new summary. You can replace it with a custom implementation if you prefer. Please see the create_react_agent API reference for more details.
4. The `state_schema` is set to the `State` class, which is the custom state containing the extra `context` key.
5. The `checkpointer` is passed to the agent. This enables the agent to persist its state across invocations.
To learn more about using `pre_model_hook` for managing message history, see this how-to guide.
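To see summarization in action, run the conversation past the token budget and then inspect the running summary kept under the `context` key. A hedged sketch (the `"running_summary"` entry and its `.summary` attribute follow langmem's documented state layout; actual summaries vary):

config = {"configurable": {"thread_id": "1"}}

agent.invoke({"messages": "hi, my name is Bob"}, config)
agent.invoke({"messages": "write a short poem about cats"}, config)
response = agent.invoke({"messages": "what's my name?"}, config)

# Once the accumulated history exceeds max_tokens, older messages are
# folded into a summary that persists in the context dict between turns.
summary = response["context"].get("running_summary")
if summary:
    print(summary.summary)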
Long-term memory¶
Use long-term memory to store user-specific or application-specific data across conversations. This is useful for applications like chatbots, where you want to remember user preferences or other information.
To use long-term memory, you need to:
- Configure a store to persist data across invocations.
- Use the `get_store` function to access the store from within tools or prompts.
Reading¶
from langchain_core.runnables import RunnableConfig
from langgraph.config import get_store
from langgraph.prebuilt import create_react_agent
from langgraph.store.memory import InMemoryStore

store = InMemoryStore()  # (1)!

store.put(  # (2)!
    ("users",),  # (3)!
    "user_123",  # (4)!
    {
        "name": "John Smith",
        "language": "English",
    }  # (5)!
)


def get_user_info(config: RunnableConfig) -> str:
    """Look up user info."""
    # Same as that provided to `create_react_agent`
    store = get_store()  # (6)!
    user_id = config.get("configurable", {}).get("user_id")
    user_info = store.get(("users",), user_id)  # (7)!
    return str(user_info.value) if user_info else "Unknown user"


agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[get_user_info],
    store=store  # (8)!
)

# Run the agent
agent.invoke(
    {"messages": "look up user information"},
    config={"configurable": {"user_id": "user_123"}}
)
1. The `InMemoryStore` is a store that keeps data in memory. In a production setting, you would typically use a database or other persistent storage. Please review the store documentation for more options. If you're deploying with LangGraph Platform, the platform will provide a production-ready store for you.
2. For this example, we write some sample data to the store using the `put` method. Please see the BaseStore.put API reference for more details.
3. The first argument is the namespace. This is used to group related data together. In this case, we are using the `users` namespace to group user data.
4. A key within the namespace. This example uses a user ID for the key.
5. The data that we want to store for the given user.
6. The `get_store` function is used to access the store. You can call it from anywhere in your code, including tools and prompts. It returns the store that was passed to the agent when it was created.
7. The `get` method is used to retrieve data from the store. The first argument is the namespace, and the second argument is the key. This returns an `Item` object, which contains the value and metadata about the value.
8. The `store` is passed to the agent. This enables the agent to access the store when running tools. You can also use the `get_store` function to access the store from anywhere in your code.
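If you need more than one record, the store can also list items under a namespace prefix via the `search` method. A small sketch (the extra `user_456` entry is illustrative):

# Add a second user so there is something to list (illustrative data).
store.put(("users",), "user_456", {"name": "Jane Doe", "language": "French"})

# search() returns the items under a namespace prefix; each result has
# .key and .value, so you can scan every stored user.
for item in store.search(("users",)):
    print(item.key, item.value)
# user_123 {'name': 'John Smith', 'language': 'English'}
# user_456 {'name': 'Jane Doe', 'language': 'French'}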
Writing¶
from typing import TypedDict

from langchain_core.runnables import RunnableConfig
from langgraph.config import get_store
from langgraph.prebuilt import create_react_agent
from langgraph.store.memory import InMemoryStore

store = InMemoryStore()  # (1)!


class UserInfo(TypedDict):  # (2)!
    name: str


def save_user_info(user_info: UserInfo, config: RunnableConfig) -> str:  # (3)!
    """Save user info."""
    # Same as that provided to `create_react_agent`
    store = get_store()  # (4)!
    user_id = config.get("configurable", {}).get("user_id")
    store.put(("users",), user_id, user_info)  # (5)!
    return "Successfully saved user info."


agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[save_user_info],
    store=store
)

# Run the agent
agent.invoke(
    {"messages": "My name is John Smith"},
    config={"configurable": {"user_id": "user_123"}}  # (6)!
)

# You can access the store directly to get the value
store.get(("users",), "user_123").value
1. The `InMemoryStore` is a store that keeps data in memory. In a production setting, you would typically use a database or other persistent storage. Please review the store documentation for more options. If you're deploying with LangGraph Platform, the platform will provide a production-ready store for you.
2. The `UserInfo` class is a `TypedDict` that defines the structure of the user information. The LLM uses it to format its tool call according to the schema.
3. The `save_user_info` function is a tool that allows the agent to update user information. This could be useful for a chat application where the user wants to update their profile.
4. The `get_store` function is used to access the store. You can call it from anywhere in your code, including tools and prompts. It returns the store that was passed to the agent when it was created.
5. The `put` method is used to store data in the store. The first argument is the namespace, the second is the key, and the third is the value; here it saves the user information under the user's ID.
6. The `user_id` is passed in the config. It identifies the user whose information is being updated.
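Namespaces are tuples, so they can be nested to scope data more finely. A hedged sketch of one possible layout (the `"preferences"` and `"facts"` segments are illustrative, not a required convention):

# Multi-segment namespaces can model per-user sub-collections.
store.put(("users", "user_123", "preferences"), "ui", {"theme": "dark"})
store.put(("users", "user_123", "facts"), "pet", {"dog": "Fido"})

# Retrieval uses the same namespace tuple plus the key.
print(store.get(("users", "user_123", "preferences"), "ui").value)
# {'theme': 'dark'}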
Prebuilt memory tools¶
LangMem is a LangChain-maintained library that offers tools for managing long-term memories in your agent. See the LangMem documentation for usage examples.
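As a rough sketch of what this looks like, LangMem's create_manage_memory_tool and create_search_memory_tool helpers plug directly into an agent's tool list (check the LangMem documentation for current signatures and options):

from langmem import create_manage_memory_tool, create_search_memory_tool
from langgraph.prebuilt import create_react_agent
from langgraph.store.memory import InMemoryStore

store = InMemoryStore()

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[
        # Lets the model create, update, and delete long-term memories
        create_manage_memory_tool(namespace=("memories",)),
        # Lets the model search previously stored memories
        create_search_memory_tool(namespace=("memories",)),
    ],
    store=store,
)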