Node caching is useful when you want to avoid repeating work, such as operations that are expensive in time or cost. LangGraph lets you attach individualized caching policies to the nodes of a graph.
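Conceptually, a node cache is memoization keyed on the node's serialized input, with a time-to-live (TTL) attached to each entry. The sketch below is a standalone illustration of that idea, not LangGraph's internals; `withCache` and its parameters are hypothetical names used only for this example.

```typescript
type CacheEntry<T> = { value: T; expiresAt: number };

// Wrap a function so repeated calls with the same input reuse a cached
// result until the entry's TTL expires. The `now` parameter exists so the
// clock can be stubbed in tests.
function withCache<I, O>(
  fn: (input: I) => O,
  ttlSeconds: number,
  now: () => number = () => Date.now()
): (input: I) => O {
  const store = new Map<string, CacheEntry<O>>();
  return (input: I): O => {
    // Default-style key: serialize the whole input.
    const key = JSON.stringify(input);
    const hit = store.get(key);
    if (hit && hit.expiresAt > now()) {
      return hit.value; // fresh entry: skip the expensive call
    }
    const value = fn(input); // the expensive work runs only on a miss
    store.set(key, { value, expiresAt: now() + ttlSeconds * 1000 });
    return value;
  };
}
```

LangGraph applies the same pattern per node, with the serialization step made configurable via a key function, as shown below.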
To configure a cache policy, pass the `cachePolicy` parameter to the `addNode` method. In the following example, we specify a cache policy with a time to live (TTL) of 120 seconds and the default key serialization function. Then, to enable node-level caching for a graph, set the `cache` argument when compiling the graph. The example below uses `InMemoryCache` to set up a graph with an in-memory cache.
```typescript
import { StateGraph, Annotation, START } from "@langchain/langgraph";
import { InMemoryCache } from "@langchain/langgraph-checkpoint";

const StateAnnotation = Annotation.Root({
  items: Annotation<string[]>({
    default: () => [],
    reducer: (acc, item) => [...acc, ...item],
  }),
});

const cache = new InMemoryCache();

const graph = new StateGraph(StateAnnotation)
  .addNode(
    "node",
    async () => {
      // Simulate an expensive operation
      await new Promise((resolve) => setTimeout(resolve, 3000));
      return { items: ["Hello, how are you?"] };
    },
    { cachePolicy: { ttl: 120 } }
  )
  .addEdge(START, "node")
  .compile({ cache });
```
The initial run will take 3 seconds since the cache is empty. Subsequent runs with the same input hit the cache and return immediately, until the TTL expires.
You can also pass a custom key serialization function to the `cachePolicy` parameter. This can be used to exclude certain fields from serialization, such as message IDs, which may be randomly generated on each run.
```typescript
import { StateGraph, MessagesAnnotation, START } from "@langchain/langgraph";
import { InMemoryCache } from "@langchain/langgraph-checkpoint";
import { BaseMessage } from "@langchain/core/messages";

const cache = new InMemoryCache();

const graph = new StateGraph(MessagesAnnotation)
  .addNode(
    "node",
    async () => {
      await new Promise((resolve) => setTimeout(resolve, 3000));
      return { messages: [{ type: "ai", content: "Hello, how are you?" }] };
    },
    {
      cachePolicy: {
        ttl: 120,
        keyFunc([{ messages }]: [{ messages: BaseMessage[] }]) {
          // Cache based on the content and relative position of the messages
          return JSON.stringify(messages.map((m, idx) => [idx, m.content]));
        },
      },
    }
  )
  .addEdge(START, "node")
  .compile({ cache });
```
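To see why a key function like this matters, consider two runs that carry the same message content but freshly generated IDs. The standalone sketch below (plain TypeScript, independent of LangGraph; the `Msg` type and `contentKey` helper are illustrative names) shows that serializing the full objects would miss the cache, while keying on `(index, content)` pairs hits it.

```typescript
type Msg = { id: string; type: string; content: string };

// Key on the content and relative position of each message, ignoring IDs.
const contentKey = (messages: Msg[]): string =>
  JSON.stringify(messages.map((m, idx) => [idx, m.content]));

// Two runs with identical content but different random IDs.
const runA: Msg[] = [{ id: "a1b2", type: "human", content: "Hello!" }];
const runB: Msg[] = [{ id: "9z8y", type: "human", content: "Hello!" }];

// Full serialization differs because of the IDs...
console.log(JSON.stringify(runA) === JSON.stringify(runB)); // false
// ...but the content-based key is identical, so the cache entry is reused.
console.log(contentKey(runA) === contentKey(runB)); // true
```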
```typescript
// First run will take 3 seconds
console.time("First run");
await graph.invoke({ messages: [{ type: "human", content: "Hello!" }] });
console.timeEnd("First run");

// Second run will be cached and yield immediately
console.time("Second run");
await graph.invoke({ messages: [{ type: "human", content: "Hello!" }] });
console.timeEnd("Second run");
```