How to add semantic search to your LangGraph deployment
This guide explains how to add semantic search to your LangGraph deployment's cross-thread store, so that your agent can search for memories and other documents by semantic similarity.
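Configuration happens in your deployment's `langgraph.json` via the store's `index` block. A minimal sketch, assuming OpenAI's `text-embedding-3-small` model (`dims` must match the model's embedding size, and `"fields": ["$"]` embeds each stored item in full; you can list specific JSON paths instead):

```json
{
  "store": {
    "index": {
      "embed": "openai:text-embedding-3-small",
      "dims": 1536,
      "fields": ["$"]
    }
  }
}
```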
Once configured, you can use semantic search in your LangGraph nodes. The store requires a namespace tuple to organize memories:
```python
from langgraph.store.base import BaseStore


# `State` is your graph's state schema, defined elsewhere.
def search_memory(state: State, *, store: BaseStore):
    # Search the store using semantic similarity.
    # The namespace tuple helps organize different types of memories,
    # e.g., ("user_facts", "preferences") or ("conversation", "summaries").
    results = store.search(
        ("memory", "facts"),  # Organize memories by type
        query="your search query",
        limit=3,  # Number of results to return
    )
    return results
```
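Search can only return memories that were written first. As a sketch using the standard `BaseStore.put` call (the key and value here are illustrative):

```python
from langgraph.store.base import BaseStore


def save_memory(state: State, *, store: BaseStore):
    # Write a memory into the same namespace the search node queries.
    # Indexed fields of the value dict are embedded automatically.
    store.put(
        ("memory", "facts"),
        "fact-1",  # Illustrative key; use a stable ID in practice
        {"text": "User prefers concise answers."},
    )
    return {}  # No state updates to report
```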
If you prefer a custom embedding model, you can point the configuration at your own function instead of a provider string. The deployment imports the function from the path you specify; it must be async, accept a list of strings, and return a list of embeddings:
```python
# path/to/embedding_function.py
from openai import AsyncOpenAI

client = AsyncOpenAI()


async def aembed_texts(texts: list[str]) -> list[list[float]]:
    """Custom embedding function that must:

    1. Be async
    2. Accept a list of strings
    3. Return a list of float arrays (embeddings)
    """
    response = await client.embeddings.create(
        model="text-embedding-3-small",
        input=texts,
    )
    return [e.embedding for e in response.data]
```
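To point the deployment at this function, the `embed` field in the same `index` block can reference a file path and function name instead of a provider string (a sketch reusing the path above):

```json
{
  "store": {
    "index": {
      "embed": "path/to/embedding_function.py:aembed_texts",
      "dims": 1536
    }
  }
}
```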
You can also query the store from outside your graph using the LangGraph SDK. The SDK's store operations are async, so wrap the call in a coroutine:
```python
from langgraph_sdk import get_client


async def search_store():
    client = get_client()
    results = await client.store.search_items(
        ("memory", "facts"),
        query="your search query",
        limit=3,  # Number of results to return
    )
    return results


# Use in an async context
results = await search_store()
```
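The trailing `await` works only inside a running event loop, such as a notebook or another coroutine. From a plain script, drive the coroutine with `asyncio.run`:

```python
import asyncio

results = asyncio.run(search_store())
```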