# LangMem
LangMem helps agents learn and adapt from their interactions over time.
It provides tooling to extract important information from conversations, optimize agent behavior through prompt refinement, and maintain long-term memory.
It offers both functional primitives you can use with any storage system and native integration with LangGraph's storage layer.
This lets your agents continuously improve, personalize their responses, and maintain consistent behavior across sessions.
## Key features
- 🧩 Core memory API that works with any storage system
- 🧠 Memory management tools that agents can use to record and search information during active conversations "in the hot path"
- ⚙️ Background memory manager that automatically extracts, consolidates, and updates agent knowledge
- ⚡ Native integration with LangGraph's Long-term Memory Store, available by default in all LangGraph Platform deployments
## Installation
Configure your environment with an API key for your favorite LLM provider:
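A minimal setup, assuming installation from PyPI; the example below uses an Anthropic model, so that provider's key is shown:

```bash
pip install -U langmem

# The example agent uses an Anthropic model, so set that provider's key:
export ANTHROPIC_API_KEY="sk-..."  # replace with your own key
```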
## Creating an Agent
Here's how to create an agent that actively manages its own long-term memory in just a few lines:
```python
# Import core components (1)
from langgraph.prebuilt import create_react_agent
from langgraph.store.memory import InMemoryStore
from langmem import create_manage_memory_tool, create_search_memory_tool

# Set up storage (2)
store = InMemoryStore(
    index={
        "dims": 1536,
        "embed": "openai:text-embedding-3-small",
    }
)

# Create an agent with memory capabilities (3)
agent = create_react_agent(
    "anthropic:claude-3-5-sonnet-latest",
    tools=[
        # Memory tools use LangGraph's BaseStore for persistence (4)
        create_manage_memory_tool(namespace=("memories",)),
        create_search_memory_tool(namespace=("memories",)),
    ],
    store=store,
)
```
- The memory tools work in any LangGraph app. Here we use `create_react_agent` to run an LLM with tools, but you can add these tools to your existing agents or build custom memory systems without agents.
- `InMemoryStore` keeps memories in process memory; they'll be lost on restart. For production, use `AsyncPostgresStore` or a similar DB-backed store to persist memories across server restarts (see the sketch after this list).
- The memory tools (`create_manage_memory_tool` and `create_search_memory_tool`) let you control what gets stored. The agent extracts key information from conversations, maintains memory consistency, and knows when to search past interactions. See Memory Tools for configuration options.
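A production setup might look like the following. This is a minimal sketch, not part of the original example: it assumes the `langgraph-checkpoint-postgres` package is installed and that the hypothetical `DB_URI` points at a reachable Postgres instance.

```python
# Sketch only: swap InMemoryStore for a DB-backed store in production.
from langgraph.store.postgres import AsyncPostgresStore

DB_URI = "postgresql://user:pass@localhost:5432/langmem"  # hypothetical connection string

async def run_with_persistent_store():
    # from_conn_string returns an async context manager that owns the connection pool
    async with AsyncPostgresStore.from_conn_string(
        DB_URI,
        index={"dims": 1536, "embed": "openai:text-embedding-3-small"},
    ) as store:
        await store.setup()  # create the required tables on first run
        # Build the same agent as above with this store; memories now
        # survive server restarts.
        ...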
Then use the agent:
```python
# Store a new memory (1)
agent.invoke(
    {"messages": [{"role": "user", "content": "Remember that I prefer dark mode."}]}
)

# Retrieve the stored memory (2)
response = agent.invoke(
    {"messages": [{"role": "user", "content": "What are my lighting preferences?"}]}
)
print(response["messages"][-1].content)
# Output: "You've told me that you prefer dark mode."
```
- The agent gets to decide what and when to store the memory. No special commands needed; just chat normally and the agent uses `create_manage_memory_tool` to store relevant details.
- The agent maintains context between chats. When you ask about previous interactions, the LLM can invoke `create_search_memory_tool` to search for memories with similar content. See Memory Tools to customize memory storage and retrieval, and see the hot path quickstart for a more complete example of how to include memories without the agent having to explicitly search.
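You can also inspect what the agent stored by querying the store directly. A small sketch using `BaseStore.search` over the `("memories",)` namespace, continuing the example above (the query string is illustrative):

```python
# Semantic search over everything the agent saved in the "memories" namespace.
items = store.search(("memories",), query="display preferences", limit=5)
for item in items:
    print(item.key, item.value)  # item.value holds the saved memory content
```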
The agent can now store important information from conversations, search its memory when relevant, and persist knowledge across conversations.
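Namespaces can also be templated to keep each user's memories separate. A hedged sketch of that pattern, continuing the example above and assuming LangMem resolves the `{user_id}` placeholder from `config["configurable"]` at runtime:

```python
# Sketch: per-user memory isolation via a templated namespace. Assumes the
# "{user_id}" placeholder is filled from config["configurable"] on each run.
personal_agent = create_react_agent(
    "anthropic:claude-3-5-sonnet-latest",
    tools=[
        create_manage_memory_tool(namespace=("memories", "{user_id}")),
        create_search_memory_tool(namespace=("memories", "{user_id}")),
    ],
    store=store,
)

personal_agent.invoke(
    {"messages": [{"role": "user", "content": "I prefer dark mode."}]},
    config={"configurable": {"user_id": "user-123"}},  # scopes writes to this user
)
```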
## Next Steps
For more examples and detailed documentation:
- Hot Path Quickstart - Learn how to let your LangGraph agent manage its own memory "in the hot path"
- Background Quickstart - Learn how to use a memory manager "in the background"
- Core Concepts - Learn key ideas
- API Reference - Full function documentation
- Build RSI