How to add semantic search to your agent's memory
This guide shows how to enable semantic search in your agent's memory store, letting you search for items in the store by semantic similarity.
First, install this guide's required dependencies.
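For example, using npm (these are the packages imported in the snippets below):

npm install @langchain/langgraph @langchain/openai @langchain/core zod uuid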
Next, we need to set the API key for OpenAI, which provides both the embeddings and the LLM we will use:
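export OPENAI_API_KEY=your-api-key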
Optionally, we can set an API key for LangSmith tracing, which will give us best-in-class observability.
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_CALLBACKS_BACKGROUND="true"
export LANGCHAIN_API_KEY=your-api-key
Next, create the store with an index configuration. By default, stores are configured without semantic/vector search. You can opt in to indexing items when creating the store by providing an IndexConfig to the store's constructor. If your store class does not implement this interface, or if you do not pass in an index configuration, semantic search is disabled, and all index arguments passed to put will have no effect. Below is an example.
import { OpenAIEmbeddings } from "@langchain/openai";
import { InMemoryStore } from "@langchain/langgraph";
const embeddings = new OpenAIEmbeddings({
model: "text-embedding-3-small",
});
const store = new InMemoryStore({
index: {
embeddings,
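// 1536 matches the output dimensionality of text-embedding-3-small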
dims: 1536,
}
});
Now let's store some memories:
// Store some memories
await store.put(["user_123", "memories"], "1", { text: "I love pizza" });
await store.put(["user_123", "memories"], "2", { text: "I prefer Italian food" });
await store.put(["user_123", "memories"], "3", { text: "I don't like spicy food" });
// The next two puts reuse key "3", each overwriting the previous value
await store.put(["user_123", "memories"], "3", { text: "I am studying econometrics" });
await store.put(["user_123", "memories"], "3", { text: "I am a plumber" });
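Keys are unique within a namespace, so the two puts that reused "3" overwrote the earlier values. store.get fetches a single item by key and confirms what remains:

const item = await store.get(["user_123", "memories"], "3");
console.log(item?.value.text); // "I am a plumber"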
Search memories using natural language:
// Find memories about food preferences
const memories = await store.search(["user_123", "memories"], {
query: "I like food?",
limit: 5,
});
for (const memory of memories) {
console.log(`Memory: ${memory.value.text} (similarity: ${memory.score})`);
}
Memory: I prefer Italian food (similarity: 0.4657744498860293)
Memory: I love pizza (similarity: 0.3743831559964955)
Memory: I am a plumber (similarity: 0.17918150007138176)
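Search also accepts a filter option that does exact matching against stored values, applied alongside the similarity ranking. A minimal sketch, using a hypothetical kind field that we could have stored with each memory:

// filter narrows candidates by exact match before ranking by similarity
const preferences = await store.search(["user_123", "memories"], {
  query: "I like food?",
  filter: { kind: "preference" }, // hypothetical field, not stored above
  limit: 5,
});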
Using in your agent
Add semantic search to any node by injecting the store:
import { ChatOpenAI } from "@langchain/openai";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { MessagesAnnotation, LangGraphRunnableConfig } from "@langchain/langgraph";
import { tool } from "@langchain/core/tools";
import { getContextVariable } from "@langchain/core/context";
import { z } from "zod";
import { v4 as uuidv4 } from "uuid";
const addMemories = async (state: typeof MessagesAnnotation.State, config: LangGraphRunnableConfig) => {
const store = config.store;
if (!store) {
  throw new Error("No store provided to state modifier.");
}
// Search based on user's last message
const items = await store.search(
["user_123", "memories"],
{
// Assume it's not a complex message
query: state.messages[state.messages.length - 1].content as string,
limit: 2
}
);
const memories = items.length
? `## Memories of user\n${items.map(item => item.value.text).join("\n")}`
: "";
// Add retrieved memories to system message
return [
{ role: "system", content: `You are a helpful assistant.\n${memories}` },
...state.messages
];
};
const upsertMemoryTool = tool(async (
input,
config: LangGraphRunnableConfig
): Promise<string> => {
const store = config.store;
if (!store) {
throw new Error("No store provided to tool.");
}
const memoryId = getContextVariable("memoryId") || uuidv4();
await store.put(
["user_123", "memories"],
memoryId,
{ text: input.content }
);
return `Stored memory ${memoryId}`;
}, {
name: "upsert_memory",
schema: z.object({
content: z.string(),
}),
description: "Upsert a memory in the database.",
});
const agent = createReactAgent({
llm: new ChatOpenAI({ model: "gpt-4o-mini" }),
tools: [upsertMemoryTool],
stateModifier: addMemories,
store: store,
});
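createReactAgent wires the store in for you via the store option; if you build a graph by hand, you can inject the same store when compiling it. A minimal sketch, reusing the store from above (the node name and body here are illustrative):

import { StateGraph } from "@langchain/langgraph";

const builder = new StateGraph(MessagesAnnotation)
  .addNode("respond", async (state, config: LangGraphRunnableConfig) => {
    // Retrieve memories exactly as addMemories does above
    const items = await config.store?.search(["user_123", "memories"], {
      query: state.messages[state.messages.length - 1].content as string,
      limit: 2,
    });
    // ...call a model using the retrieved items...
    return {};
  })
  .addEdge("__start__", "respond");

// Passing the store at compile time injects it into every node's config
const customGraph = builder.compile({ store });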
If we run the agent, we can see that it recalls the memory we added about liking Italian food.
const stream = await agent.stream({
messages: [{
role: "user",
content: "I'm hungry",
}],
}, {
streamMode: "messages",
});
for await (const [message, _metadata] of stream) {
console.log(message.content);
}
Advanced Usage
Multi-vector indexing
Store and search different aspects of memories separately to improve recall, or omit certain fields from being indexed. Fields not listed in fields are still stored and returned with search results; they simply aren't embedded for similarity matching.
import { InMemoryStore } from "@langchain/langgraph";
// Configure store to embed both memory content and emotional context
const multiVectorStore = new InMemoryStore({
index: {
embeddings: embeddings,
dims: 1536,
fields: ["memory", "emotional_context"],
},
});
// Store memories with different content/emotion pairs
await multiVectorStore.put(["user_123", "memories"], "mem1", {
memory: "Had pizza with friends at Mario's",
emotional_context: "felt happy and connected",
this_isnt_indexed: "I prefer ravioli though",
});
await multiVectorStore.put(["user_123", "memories"], "mem2", {
memory: "Ate alone at home",
emotional_context: "felt a bit lonely",
this_isnt_indexed: "I like pie",
});
// Search focusing on emotional state - matches mem2
const results = await multiVectorStore.search(["user_123", "memories"], {
query: "times they felt isolated",
limit: 1,
});
console.log("Expect mem 2");
for (const r of results) {
console.log(`Item: ${r.key}; Score(${r.score})`);
console.log(`Memory: ${r.value.memory}`);
console.log(`Emotion: ${r.value.emotional_context}`);
}
Expect mem 2
Item: mem2; Score(0.5895009051069847)
Memory: Ate alone at home
Emotion: felt a bit lonely
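The same store answers queries about the events themselves through the memory field; a query like the following should rank mem1 highest (illustrative; exact scores depend on the embedding model):

// Search focusing on the event itself - should match mem1
const eventResults = await multiVectorStore.search(["user_123", "memories"], {
  query: "having dinner with friends",
  limit: 1,
});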
Override fields at storage time
You can override which fields to embed when storing a specific memory by passing the list of fields to embed as the final index argument to put, regardless of the store's default configuration.
import { InMemoryStore } from "@langchain/langgraph";
const overrideStore = new InMemoryStore({
index: {
embeddings: embeddings,
dims: 1536,
// Default to embed memory field
fields: ["memory"],
}
});
// Store one memory with default indexing
await overrideStore.put(["user_123", "memories"], "mem1", {
memory: "I love spicy food",
context: "At a Thai restaurant",
});
// Store another memory, overriding which fields to embed:
// the final argument tells the store to embed only the context field
await overrideStore.put(["user_123", "memories"], "mem2", {
  memory: "I love spicy food",
  context: "At a Thai restaurant",
}, ["context"]);
// Search about food - matches mem1 (using default field)
console.log("Expect mem1");
const results2 = await overrideStore.search(["user_123", "memories"], {
query: "what food do they like",
limit: 1,
});
for (const r of results2) {
console.log(`Item: ${r.key}; Score(${r.score})`);
console.log(`Memory: ${r.value.memory}`);
}
// Search about restaurant atmosphere - matches mem2 (using overridden field)
console.log("Expect mem2");
const results3 = await overrideStore.search(["user_123", "memories"], {
query: "restaurant environment",
limit: 1,
});
for (const r of results3) {
console.log(`Item: ${r.key}; Score(${r.score})`);
console.log(`Memory: ${r.value.memory}`);
}