Persistence¶
Many AI applications need memory to share context across multiple interactions. In LangGraph, memory is provided for any StateGraph through Checkpointers.
When creating any LangGraph workflow, you can set it up to persist its state by doing the following:
- Create a checkpointer, such as the MemorySaver
- Pass it when compiling the graph: compile({ checkpointer: myCheckpointer })
Example:
import { MemorySaver } from "@langchain/langgraph";
const workflow = new StateGraph({
channels: graphState,
});
// ... Add nodes and edges
// Initialize any compatible CheckPointSaver
const memory = new MemorySaver();
const persistentGraph = workflow.compile({ checkpointer: memory });
This works for StateGraph and all its subclasses, such as MessageGraph.
Below is an example.
Note
In this how-to, we will create our agent from scratch to be transparent (but verbose). You can accomplish similar functionality using the prebuilt createReactAgent (API doc) constructor. This may be more appropriate if you are used to LangChain's AgentExecutor class.
Setup¶
This guide will use OpenAI's GPT-4o model. We will optionally set our API key for LangSmith tracing, which will give us best-in-class observability.
// process.env.OPENAI_API_KEY = "sk_...";
// Optional, add tracing in LangSmith
// process.env.LANGCHAIN_API_KEY = "ls__...";
process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "true";
process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_PROJECT = "Persistence: LangGraphJS";
Define the state¶
The state is the interface for all of the nodes in our graph.
import { BaseMessage } from "@langchain/core/messages";
import { StateGraphArgs } from "@langchain/langgraph";
interface IState {
messages: BaseMessage[];
}
// This defines the agent state
const graphState: StateGraphArgs<IState>["channels"] = {
messages: {
value: (x: BaseMessage[], y: BaseMessage[]) => x.concat(y),
default: () => [],
},
};
Set up the tools¶
We will first define the tools we want to use. For this simple example, we will use a placeholder search tool that always returns the same result.
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";
const searchTool = new DynamicStructuredTool({
name: "search",
description:
"Use to surf the web, fetch current information, check the weather, and retrieve other information.",
schema: z.object({
query: z.string().describe("The query to use in your search."),
}),
func: async ({}: { query: string }) => {
// This is a placeholder for the actual implementation
return "Cold, with a low of 13 ℃";
},
});
await searchTool.invoke({ query: "What's the weather like?" });
const tools = [searchTool];
We can now wrap these tools in a simple ToolNode. This object will actually run the tools (functions) whenever they are invoked by our LLM.
import { ToolNode } from "@langchain/langgraph/prebuilt";
const toolNode = new ToolNode<{ messages: BaseMessage[] }>(tools);
Set up the model¶
Now we will load the chat model.
- It should work with messages. We will represent all agent state in the form of messages, so it needs to be able to work well with them.
- It should work with tool calling, meaning it can return function arguments in its response.
Note
These model requirements are not general requirements for using LangGraph - they are just requirements for this one example.
import { ChatOpenAI } from "@langchain/openai";
const model = new ChatOpenAI({ model: "gpt-4o" });
After we've done this, we should make sure the model knows that it has these tools available to call. We can do this by calling bindTools.
const boundModel = model.bindTools(tools);
Define the graph¶
We can now put it all together. We will run it first without a checkpointer:
import { END, START, StateGraph } from "@langchain/langgraph";
import { AIMessage } from "@langchain/core/messages";
import { RunnableConfig } from "@langchain/core/runnables";
const routeMessage = (state: IState) => {
const { messages } = state;
const lastMessage = messages[messages.length - 1] as AIMessage;
// If no tools are called, we can finish (respond to the user)
if (!lastMessage.tool_calls?.length) {
return END;
}
// Otherwise if there is, we continue and call the tools
return "tools";
};
const callModel = async (
state: IState,
config?: RunnableConfig,
) => {
const { messages } = state;
const response = await boundModel.invoke(messages, config);
return { messages: [response] };
};
const workflow = new StateGraph<IState>({
channels: graphState,
})
.addNode("agent", callModel)
.addNode("tools", toolNode)
.addEdge(START, "agent")
.addConditionalEdges("agent", routeMessage)
.addEdge("tools", "agent");
const graph = workflow.compile();
let inputs = { messages: [["user", "Hi I'm Yu, nice to meet you."]] };
for await (
const { messages } of await graph.stream(inputs, {
streamMode: "values",
})
) {
let msg = messages[messages?.length - 1];
if (msg?.content) {
console.log(msg.content);
} else if (msg?.tool_calls?.length > 0) {
console.log(msg.tool_calls);
} else {
console.log(msg);
}
console.log("-----\n");
}
[ 'user', "Hi I'm Yu, nice to meet you." ]
-----
Skipping write for channel branch:agent:routeMessage:undefined which has no readers
Nice to meet you, Yu! How can I assist you today?
-----
inputs = { messages: [["user", "Remember my name?"]] };
for await (
const { messages } of await graph.stream(inputs, {
streamMode: "values",
})
) {
let msg = messages[messages?.length - 1];
if (msg?.content) {
console.log(msg.content);
} else if (msg?.tool_calls?.length > 0) {
console.log(msg.tool_calls);
} else {
console.log(msg);
}
console.log("-----\n");
}
[ 'user', 'Remember my name?' ]
-----
I cannot remember personalized details, including names, from previous interactions. However, I'd be happy to help you with any inquiries you have! How can I assist you today?
-----
Add Memory¶
Let's try it again with a checkpointer. We will use the MemorySaver, which will "save" checkpoints in-memory.
import { MemorySaver } from "@langchain/langgraph";
// Here we only save in-memory
const memory = new MemorySaver();
const persistentGraph = workflow.compile({ checkpointer: memory });
let config = { configurable: { thread_id: "conversation-num-1" } };
inputs = { messages: [["user", "Hi I'm Jo, nice to meet you."]] };
for await (
const { messages } of await persistentGraph.stream(inputs, {
...config,
streamMode: "values",
})
) {
let msg = messages[messages?.length - 1];
if (msg?.content) {
console.log(msg.content);
} else if (msg?.tool_calls?.length > 0) {
console.log(msg.tool_calls);
} else {
console.log(msg);
}
console.log("-----\n");
}
[ 'user', "Hi I'm Jo, nice to meet you." ]
-----
Hi Jo, nice to meet you too! How can I assist you today?
-----
inputs = { messages: [["user", "Remember my name?"]] };
for await (
const { messages } of await persistentGraph.stream(inputs, {
...config,
streamMode: "values",
})
) {
let msg = messages[messages?.length - 1];
if (msg?.content) {
console.log(msg.content);
} else if (msg?.tool_calls?.length > 0) {
console.log(msg.tool_calls);
} else {
console.log(msg);
}
console.log("-----\n");
}
[ 'user', 'Remember my name?' ]
-----
Yes, your name is Jo. How can I assist you today?
-----
New Conversational Thread¶
If we want to start a new conversation, we can pass in a different thread_id. Poof! All the memories are gone (just kidding, they'll always live in that other thread)!
config = { configurable: { thread_id: "conversation-2" } };
{ configurable: { thread_id: 'conversation-2' } }
inputs = { messages: [["user", "you forgot?"]] };
for await (
const { messages } of await persistentGraph.stream(inputs, {
...config,
streamMode: "values",
})
) {
let msg = messages[messages?.length - 1];
if (msg?.content) {
console.log(msg.content);
} else if (msg?.tool_calls?.length > 0) {
console.log(msg.tool_calls);
} else {
console.log(msg);
}
console.log("-----\n");
}
[ 'user', 'you forgot?' ]
-----
Could you please provide more context or clarify what you're referring to? Let me know how I can assist you further!
-----