Storage

Base classes and types for persistent key-value stores.

Stores provide long-term memory that persists across threads and conversations. Supports hierarchical namespaces, key-value storage, and optional vector search.

Core types
  • BaseStore: Store interface with sync/async operations
  • Item: Stored key-value pairs with metadata
  • Op: Get/Put/Search/List operations

NamespacePath = tuple[Union[str, Literal['*']], ...] module-attribute

A tuple representing a namespace path that can include wildcards.

Examples
("users",)  # Exact users namespace
("documents", "*")  # Any sub-namespace under documents
("cache", "*", "v1")  # Any cache category with v1 version

NamespaceMatchType = Literal['prefix', 'suffix'] module-attribute

Specifies how to match namespace paths.

Values

"prefix": Match from the start of the namespace "suffix": Match from the end of the namespace

Item

Represents a stored item with metadata.

Parameters:

  • value (dict[str, Any]) –

    The stored data as a dictionary. Keys are filterable.

  • key (str) –

    Unique identifier within the namespace.

  • namespace (tuple[str, ...]) –

    Hierarchical path defining the collection in which this document resides. Represented as a tuple of strings, allowing for nested categorization. For example: ("documents", "user123")

  • created_at (datetime) –

    Timestamp of item creation.

  • updated_at (datetime) –

    Timestamp of last update.
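
Example

Reading the attributes of a stored item (a minimal sketch, assuming `store` is an already-configured store instance and the item exists):

item = store.get(("users", "profiles"), "user123")
if item is not None:
    print(item.namespace)   # ("users", "profiles")
    print(item.key)         # "user123"
    print(item.value)       # the stored dictionary
    print(item.created_at)  # datetime of creation
    print(item.updated_at)  # datetime of last update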

SearchItem

Bases: Item

Represents an item returned from a search operation with additional metadata.

__init__(namespace: tuple[str, ...], key: str, value: dict[str, Any], created_at: datetime, updated_at: datetime, score: Optional[float] = None) -> None

Initialize a result item.

Parameters:

  • namespace (tuple[str, ...]) –

    Hierarchical path to the item.

  • key (str) –

    Unique identifier within the namespace.

  • value (dict[str, Any]) –

    The stored value.

  • created_at (datetime) –

    When the item was first created.

  • updated_at (datetime) –

    When the item was last updated.

  • score (Optional[float], default: None ) –

    Relevance/similarity score if from a ranked operation.

GetOp

Bases: NamedTuple

Operation to retrieve a specific item by its namespace and key.

This operation allows precise retrieval of stored items using their full path (namespace) and unique identifier (key) combination.

Examples

Basic item retrieval:

GetOp(namespace=("users", "profiles"), key="user123")
GetOp(namespace=("cache", "embeddings"), key="doc456")

namespace: tuple[str, ...] instance-attribute

Hierarchical path that uniquely identifies the item's location.

Examples
("users",)  # Root level users namespace
("users", "profiles")  # Profiles within users namespace

key: str instance-attribute

Unique identifier for the item within its specific namespace.

Examples
"user123"  # For a user profile
"doc456"  # For a document

SearchOp

Bases: NamedTuple

Operation to search for items within a specified namespace hierarchy.

This operation supports both structured filtering and natural language search within a given namespace prefix. It provides pagination through limit and offset parameters.

Note

Natural language search support depends on your store implementation.

Examples

Search with filters and pagination:

SearchOp(
    namespace_prefix=("documents",),
    filter={"type": "report", "status": "active"},
    limit=5,
    offset=10
)

Natural language search:

SearchOp(
    namespace_prefix=("users", "content"),
    query="technical documentation about APIs",
    limit=20
)

namespace_prefix: tuple[str, ...] instance-attribute

Hierarchical path prefix defining the search scope.

Examples
()  # Search entire store
("documents",)  # Search all documents
("users", "content")  # Search within user content

filter: Optional[dict[str, Any]] = None class-attribute instance-attribute

Key-value pairs for filtering results based on exact matches or comparison operators.

The filter supports both exact matches and operator-based comparisons.

Supported Operators
  • $eq: Equal to (same as direct value comparison)
  • $ne: Not equal to
  • $gt: Greater than
  • $gte: Greater than or equal to
  • $lt: Less than
  • $lte: Less than or equal to
Examples

Simple exact match:

{"status": "active"}

Comparison operators:

{"score": {"$gt": 4.99}}  # Score greater than 4.99

Multiple conditions:

{
    "score": {"$gte": 3.0},
    "color": "red"
}

limit: int = 10 class-attribute instance-attribute

Maximum number of items to return in the search results.

offset: int = 0 class-attribute instance-attribute

Number of matching items to skip for pagination.

query: Optional[str] = None class-attribute instance-attribute

Natural language search query for semantic search capabilities.

Examples
  • "technical documentation about REST APIs"
  • "machine learning papers from 2023"

MatchCondition

Bases: NamedTuple

Represents a pattern for matching namespaces in the store.

This class combines a match type (prefix or suffix) with a namespace path pattern that can include wildcards to flexibly match different namespace hierarchies.

Examples

Prefix matching:

MatchCondition(match_type="prefix", path=("users", "profiles"))

Suffix matching with wildcard:

MatchCondition(match_type="suffix", path=("cache", "*"))

Simple suffix matching:

MatchCondition(match_type="suffix", path=("v1",))

match_type: NamespaceMatchType instance-attribute

Type of namespace matching to perform.

path: NamespacePath instance-attribute

Namespace path pattern that can include wildcards.

ListNamespacesOp

Bases: NamedTuple

Operation to list and filter namespaces in the store.

This operation allows exploring the organization of data, finding specific collections, and navigating the namespace hierarchy.

Examples

List all namespaces under the "documents" path:

ListNamespacesOp(
    match_conditions=(MatchCondition(match_type="prefix", path=("documents",)),),
    max_depth=2
)

List all namespaces that end with "v1":

ListNamespacesOp(
    match_conditions=(MatchCondition(match_type="suffix", path=("v1",)),),
    limit=50
)

match_conditions: Optional[tuple[MatchCondition, ...]] = None class-attribute instance-attribute

Optional conditions for filtering namespaces.

Examples

All user namespaces:

(MatchCondition(match_type="prefix", path=("users",)),)

All namespaces that start with "docs" and end with "draft":

(
    MatchCondition(match_type="prefix", path=("docs",)),
    MatchCondition(match_type="suffix", path=("draft",))
) 

max_depth: Optional[int] = None class-attribute instance-attribute

Maximum depth of namespace hierarchy to return.

Note

Namespaces deeper than this level will be truncated.

limit: int = 100 class-attribute instance-attribute

Maximum number of namespaces to return.

offset: int = 0 class-attribute instance-attribute

Number of namespaces to skip for pagination.

PutOp

Bases: NamedTuple

Operation to store, update, or delete an item in the store.

This class represents a single operation to modify the store's contents, whether adding new items, updating existing ones, or removing them.

namespace: tuple[str, ...] instance-attribute

Hierarchical path that identifies the location of the item.

The namespace acts as a folder-like structure to organize items. Each element in the tuple represents one level in the hierarchy.

Examples

Root level documents

("documents",)

User-specific documents

("documents", "user123")

Nested cache structure

("cache", "embeddings", "v1")

key: str instance-attribute

Unique identifier for the item within its namespace.

The key must be unique within the specific namespace to avoid conflicts. Together with the namespace, it forms a complete path to the item.

Example

If namespace is ("documents", "user123") and key is "report1", the full path would effectively be "documents/user123/report1"

value: Optional[dict[str, Any]] instance-attribute

The data to store, or None to mark the item for deletion.

The value must be a dictionary with string keys and JSON-serializable values. Setting this to None signals that the item should be deleted.

Example

{ "field1": "string value", "field2": 123, "nested": {"can": "contain", "any": "serializable data"} }

index: Optional[Union[Literal[False], list[str]]] = None class-attribute instance-attribute

Controls how the item's fields are indexed for search operations.

The item remains accessible through direct get() operations regardless of indexing. When indexed, fields can be searched using natural language queries through vector similarity search (if supported by the store implementation).

Path Syntax
  • Simple field access: "field"
  • Nested fields: "parent.child.grandchild"
  • Array indexing:
    • Specific index: "array[0]"
    • Last element: "array[-1]"
    • All elements (each individually): "array[*]"
Examples
  • None - Use store defaults (whole item)
  • list[str] - List of fields to index
[
    "metadata.title",                    # Nested field access
    "context[*].content",                # Index content from all context as separate vectors
    "authors[0].name",                   # First author's name
    "revisions[-1].changes",             # Most recent revision's changes
    "sections[*].paragraphs[*].text",    # All text from all paragraphs in all sections
    "metadata.tags[*]",                  # All tags in metadata
]
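
Constructing put and delete operations directly (a minimal sketch; the field names in the value are illustrative, and the effect of index depends on the store's index configuration):

from langgraph.store.base import PutOp

# Create or update an item, indexing only selected fields.
put_op = PutOp(
    namespace=("documents", "user123"),
    key="report1",
    value={"metadata": {"title": "Q1 report"}, "sections": [{"text": "..."}]},
    index=["metadata.title", "sections[*].text"],
)

# A PutOp with value=None marks the same item for deletion.
delete_op = PutOp(namespace=("documents", "user123"), key="report1", value=None)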

InvalidNamespaceError

Bases: ValueError

Provided namespace is invalid.

IndexConfig

Bases: TypedDict

Configuration for indexing documents for semantic search in the store.

If not provided to the store, the store will not support vector search. In that case, all index arguments to put() and aput() operations will be ignored.

dims: int instance-attribute

Number of dimensions in the embedding vectors.

Common embedding models have the following dimensions
  • openai:text-embedding-3-large: 3072
  • openai:text-embedding-3-small: 1536
  • openai:text-embedding-ada-002: 1536
  • cohere:embed-english-v3.0: 1024
  • cohere:embed-english-light-v3.0: 384
  • cohere:embed-multilingual-v3.0: 1024
  • cohere:embed-multilingual-light-v3.0: 384

embed: Union[Embeddings, EmbeddingsFunc, AEmbeddingsFunc] instance-attribute

Optional function to generate embeddings from text.

Can be specified in three ways
  1. A LangChain Embeddings instance
  2. A synchronous embedding function (EmbeddingsFunc)
  3. An asynchronous embedding function (AEmbeddingsFunc)
Examples

Using LangChain's initialization with InMemoryStore:

from langchain.embeddings import init_embeddings
from langgraph.store.memory import InMemoryStore

store = InMemoryStore(
    index={
        "dims": 1536,
        "embed": init_embeddings("openai:text-embedding-3-small")
    }
)

Using a custom embedding function with InMemoryStore:

from openai import OpenAI
from langgraph.store.memory import InMemoryStore

client = OpenAI()

def embed_texts(texts: list[str]) -> list[list[float]]:
    response = client.embeddings.create(
        model="text-embedding-3-small",
        input=texts
    )
    return [e.embedding for e in response.data]

store = InMemoryStore(
    index={
        "dims": 1536,
        "embed": embed_texts
    }
)

Using an asynchronous embedding function with InMemoryStore:

from openai import AsyncOpenAI
from langgraph.store.memory import InMemoryStore

client = AsyncOpenAI()

async def aembed_texts(texts: list[str]) -> list[list[float]]:
    response = await client.embeddings.create(
        model="text-embedding-3-small",
        input=texts
    )
    return [e.embedding for e in response.data]

store = InMemoryStore(
    index={
        "dims": 1536,
        "embed": aembed_texts
    }
)

fields: Optional[list[str]] instance-attribute

Fields to extract text from for embedding generation.

Controls which parts of stored items are embedded for semantic search. Follows JSON path syntax:

- ["$"]: Embeds the entire JSON object as one vector  (default)
- ["field1", "field2"]: Embeds specific top-level fields
- ["parent.child"]: Embeds nested fields using dot notation
- ["array[*].field"]: Embeds field from each array element separately
Note

You can always override this behavior when storing an item using the index parameter in the put or aput operations.

Examples
# Embed entire document (default)
fields=["$"]

# Embed specific fields
fields=["text", "summary"]

# Embed nested fields
fields=["metadata.title", "content.body"]

# Embed from arrays
fields=["messages[*].content"]  # Each message content separately
fields=["context[0].text"]      # First context item's text
Note
  • Fields missing from a document are skipped
  • Array notation creates separate embeddings for each element
  • Complex nested paths are supported (e.g., "a.b[*].c.d")

BaseStore

Bases: ABC

Abstract base class for persistent key-value stores.

Stores enable persistence and memory that can be shared across threads, scoped to user IDs, assistant IDs, or other arbitrary namespaces. Some implementations may support semantic search capabilities through an optional index configuration.

Note

Semantic search capabilities vary by implementation and are typically disabled by default. Stores that support this feature can be configured by providing an index configuration at creation time. Without this configuration, semantic search is disabled and any index arguments to storage operations will have no effect.

batch(ops: Iterable[Op]) -> list[Result] abstractmethod

Execute multiple operations synchronously in a single batch.

Parameters:

  • ops (Iterable[Op]) –

    An iterable of operations to execute.

Returns:

  • list[Result]

    A list of results, where each result corresponds to an operation in the input.

  • list[Result]

    The order of results matches the order of input operations.

abatch(ops: Iterable[Op]) -> list[Result] abstractmethod async

Execute multiple operations asynchronously in a single batch.

Parameters:

  • ops (Iterable[Op]) –

    An iterable of operations to execute.

Returns:

  • list[Result]

    A list of results, where each result corresponds to an operation in the input.

  • list[Result]

    The order of results matches the order of input operations.
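
Examples

Batching heterogeneous operations (a minimal sketch; it assumes `store` is a concrete BaseStore implementation such as InMemoryStore, and the usual langgraph.store.base import path for the operation types):

from langgraph.store.base import GetOp, PutOp, SearchOp

ops = [
    PutOp(namespace=("docs",), key="doc1", value={"text": "Python tutorial"}),
    GetOp(namespace=("docs",), key="doc1"),
    SearchOp(namespace_prefix=("docs",), limit=5),
]

results = store.batch(ops)
# Results come back in the same order as the input operations: in this sketch
# the put yields no value, the get yields an Item (or None), and the search
# yields a list of SearchItem objects. abatch mirrors this with `await`.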

get(namespace: tuple[str, ...], key: str) -> Optional[Item]

Retrieve a single item.

Parameters:

  • namespace (tuple[str, ...]) –

    Hierarchical path for the item.

  • key (str) –

    Unique identifier within the namespace.

Returns:

  • Optional[Item]

    The retrieved item or None if not found.

search(namespace_prefix: tuple[str, ...], /, *, query: Optional[str] = None, filter: Optional[dict[str, Any]] = None, limit: int = 10, offset: int = 0) -> list[SearchItem]

Search for items within a namespace prefix.

Parameters:

  • namespace_prefix (tuple[str, ...]) –

    Hierarchical path prefix to search within.

  • query (Optional[str], default: None ) –

    Optional query for natural language search.

  • filter (Optional[dict[str, Any]], default: None ) –

    Key-value pairs to filter results.

  • limit (int, default: 10 ) –

    Maximum number of items to return.

  • offset (int, default: 0 ) –

    Number of items to skip before returning results.

Returns:

  • list[SearchItem]

    List of items matching the search criteria.

Examples

Basic filtering:

# Search for documents with specific metadata
results = store.search(
    ("docs",),
    filter={"type": "article", "status": "published"}
)

Natural language search (requires vector store implementation):

# Initialize store with embedding configuration
store = YourStore( # e.g., InMemoryStore, AsyncPostgresStore
    index={
        "dims": 1536,  # embedding dimensions
        "embed": your_embedding_function,  # function to create embeddings
        "fields": ["text"]  # fields to embed. Defaults to ["$"]
    }
)

# Search for semantically similar documents
results = store.search(
    ("docs",),
    query="machine learning applications in healthcare",
    filter={"type": "research_paper"},
    limit=5
)

Note: Natural language search support depends on your store implementation and requires proper embedding configuration.

put(namespace: tuple[str, ...], key: str, value: dict[str, Any], index: Optional[Union[Literal[False], list[str]]] = None) -> None

Store or update an item in the store.

Parameters:

  • namespace (tuple[str, ...]) –

    Hierarchical path for the item, represented as a tuple of strings. Example: ("documents", "user123")

  • key (str) –

    Unique identifier within the namespace. Together with namespace forms the complete path to the item.

  • value (dict[str, Any]) –

    Dictionary containing the item's data. Must contain string keys and JSON-serializable values.

  • index (Optional[Union[Literal[False], list[str]]], default: None ) –

    Controls how the item's fields are indexed for search:

    • None (default): Use fields you configured when creating the store (if any). If you do not initialize the store with indexing capabilities, the index parameter will be ignored.
    • False: Disable indexing for this item
    • list[str]: List of field paths to index, supporting:
      • Nested fields: "metadata.title"
      • Array access: "chapters[*].content" (each indexed separately)
      • Specific indices: "authors[0].name"
Note

Indexing support depends on your store implementation. If you do not initialize the store with indexing capabilities, the index parameter will be ignored.

Examples

Store item. Indexing depends on how you configure the store.

store.put(("docs",), "report", {"memory": "Will likes ai"})

Do not index item for semantic search. Still accessible through get() and search() operations but won't have a vector representation.

store.put(("docs",), "report", {"memory": "Will likes ai"}, index=False)

Index specific fields for search.

store.put(("docs",), "report", {"memory": "Will likes ai"}, index=["memory"])

delete(namespace: tuple[str, ...], key: str) -> None

Delete an item.

Parameters:

  • namespace (tuple[str, ...]) –

    Hierarchical path for the item.

  • key (str) –

    Unique identifier within the namespace.
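
Example

Removing a previously stored item (assuming `store` is a configured store instance and the item exists):

store.delete(("docs",), "report")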

list_namespaces(*, prefix: Optional[NamespacePath] = None, suffix: Optional[NamespacePath] = None, max_depth: Optional[int] = None, limit: int = 100, offset: int = 0) -> list[tuple[str, ...]]

List and filter namespaces in the store.

Used to explore the organization of data, find specific collections, or navigate the namespace hierarchy.

Parameters:

  • prefix (Optional[Tuple[str, ...]], default: None ) –

    Filter namespaces that start with this path.

  • suffix (Optional[Tuple[str, ...]], default: None ) –

    Filter namespaces that end with this path.

  • max_depth (Optional[int], default: None ) –

    Return namespaces up to this depth in the hierarchy. Namespaces deeper than this level will be truncated.

  • limit (int, default: 100 ) –

    Maximum number of namespaces to return (default 100).

  • offset (int, default: 0 ) –

    Number of namespaces to skip for pagination (default 0).

Returns:

  • list[tuple[str, ...]]

    List[Tuple[str, ...]]: A list of namespace tuples that match the criteria.

  • list[tuple[str, ...]]

    Each tuple represents a full namespace path up to max_depth.

Examples

Setting max_depth=3 with existing namespaces:

# Example if you have the following namespaces:
# ("a", "b", "c")
# ("a", "b", "d", "e")
# ("a", "b", "d", "i")
# ("a", "b", "f")
# ("a", "c", "f")
store.list_namespaces(prefix=("a", "b"), max_depth=3)
# [("a", "b", "c"), ("a", "b", "d"), ("a", "b", "f")]

aget(namespace: tuple[str, ...], key: str) -> Optional[Item] async

Asynchronously retrieve a single item.

Parameters:

  • namespace (tuple[str, ...]) –

    Hierarchical path for the item.

  • key (str) –

    Unique identifier within the namespace.

Returns:

  • Optional[Item]

    The retrieved item or None if not found.
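
Example

Asynchronous retrieval (a minimal sketch, assuming `store` is an async-capable implementation such as AsyncPostgresStore and the item was stored earlier):

item = await store.aget(("users", "123"), "prefs")
theme = item.value["theme"] if item is not None else None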

asearch(namespace_prefix: tuple[str, ...], /, *, query: Optional[str] = None, filter: Optional[dict[str, Any]] = None, limit: int = 10, offset: int = 0) -> list[SearchItem] async

Asynchronously search for items within a namespace prefix.

Parameters:

  • namespace_prefix (tuple[str, ...]) –

    Hierarchical path prefix to search within.

  • query (Optional[str], default: None ) –

    Optional query for natural language search.

  • filter (Optional[dict[str, Any]], default: None ) –

    Key-value pairs to filter results.

  • limit (int, default: 10 ) –

    Maximum number of items to return.

  • offset (int, default: 0 ) –

    Number of items to skip before returning results.

Returns:

  • list[SearchItem]

    List of items matching the search criteria.

Examples

Basic filtering:

# Search for documents with specific metadata
results = await store.asearch(
    ("docs",),
    filter={"type": "article", "status": "published"}
)

Natural language search (requires vector store implementation):

# Initialize store with embedding configuration
store = YourStore( # e.g., InMemoryStore, AsyncPostgresStore
    index={
        "dims": 1536,  # embedding dimensions
        "embed": your_embedding_function,  # function to create embeddings
        "fields": ["text"]  # fields to embed
    }
)

# Search for semantically similar documents
results = await store.asearch(
    ("docs",),
    query="machine learning applications in healthcare",
    filter={"type": "research_paper"},
    limit=5
)

Note: Natural language search support depends on your store implementation and requires proper embedding configuration.

aput(namespace: tuple[str, ...], key: str, value: dict[str, Any], index: Optional[Union[Literal[False], list[str]]] = None) -> None async

Asynchronously store or update an item in the store.

Parameters:

  • namespace (tuple[str, ...]) –

    Hierarchical path for the item, represented as a tuple of strings. Example: ("documents", "user123")

  • key (str) –

    Unique identifier within the namespace. Together with namespace forms the complete path to the item.

  • value (dict[str, Any]) –

    Dictionary containing the item's data. Must contain string keys and JSON-serializable values.

  • index (Optional[Union[Literal[False], list[str]]], default: None ) –

    Controls how the item's fields are indexed for search:

    • None (default): Use fields you configured when creating the store (if any). If you do not initialize the store with indexing capabilities, the index parameter will be ignored.
    • False: Disable indexing for this item
    • list[str]: List of field paths to index, supporting:
      • Nested fields: "metadata.title"
      • Array access: "chapters[*].content" (each indexed separately)
      • Specific indices: "authors[0].name"
Note

Indexing support depends on your store implementation. If you do not initialize the store with indexing capabilities, the index parameter will be ignored.

Examples

Store item. Indexing depends on how you configure the store.

await store.aput(("docs",), "report", {"memory": "Will likes ai"})

Do not index item for semantic search. Still accessible through get() and search() operations but won't have a vector representation.

await store.aput(("docs",), "report", {"memory": "Will likes ai"}, index=False)

Index specific fields for search (if store configured to index items):

await store.aput(
    ("docs",),
    "report",
    {
        "memory": "Will likes ai",
        "context": [{"content": "..."}, {"content": "..."}]
    },
    index=["memory", "context[*].content"]
)

adelete(namespace: tuple[str, ...], key: str) -> None async

Asynchronously delete an item.

Parameters:

  • namespace (tuple[str, ...]) –

    Hierarchical path for the item.

  • key (str) –

    Unique identifier within the namespace.

alist_namespaces(*, prefix: Optional[NamespacePath] = None, suffix: Optional[NamespacePath] = None, max_depth: Optional[int] = None, limit: int = 100, offset: int = 0) -> list[tuple[str, ...]] async

List and filter namespaces in the store asynchronously.

Used to explore the organization of data, find specific collections, or navigate the namespace hierarchy.

Parameters:

  • prefix (Optional[Tuple[str, ...]], default: None ) –

    Filter namespaces that start with this path.

  • suffix (Optional[Tuple[str, ...]], default: None ) –

    Filter namespaces that end with this path.

  • max_depth (Optional[int], default: None ) –

    Return namespaces up to this depth in the hierarchy. Namespaces deeper than this level will be truncated to this depth.

  • limit (int, default: 100 ) –

    Maximum number of namespaces to return (default 100).

  • offset (int, default: 0 ) –

    Number of namespaces to skip for pagination (default 0).

Returns:

  • list[tuple[str, ...]]

    List[Tuple[str, ...]]: A list of namespace tuples that match the criteria.

  • list[tuple[str, ...]]

    Each tuple represents a full namespace path up to max_depth.

Examples

Setting max_depth=3 with existing namespaces:

# Given the following namespaces:
# ("a", "b", "c")
# ("a", "b", "d", "e")
# ("a", "b", "d", "i")
# ("a", "b", "f")
# ("a", "c", "f")

await store.alist_namespaces(prefix=("a", "b"), max_depth=3)
# Returns: [("a", "b", "c"), ("a", "b", "d"), ("a", "b", "f")]

ensure_embeddings(embed: Union[Embeddings, EmbeddingsFunc, AEmbeddingsFunc, None]) -> Embeddings

Ensure that an embedding function conforms to LangChain's Embeddings interface.

This function wraps arbitrary embedding functions to make them compatible with LangChain's Embeddings interface. It handles both synchronous and asynchronous functions.

Parameters:

  • embed (Union[Embeddings, EmbeddingsFunc, AEmbeddingsFunc, None]) –

    Either an existing Embeddings instance, or a function that converts text to embeddings. If the function is async, it will be used for both sync and async operations.

Returns:

  • Embeddings

    An Embeddings instance that wraps the provided function(s).

Examples

Wrap a synchronous embedding function:

def my_embed_fn(texts):
    return [[0.1, 0.2] for _ in texts]

embeddings = ensure_embeddings(my_embed_fn)
result = embeddings.embed_query("hello")  # Returns [0.1, 0.2]

Wrap an asynchronous embedding function:

async def my_async_fn(texts):
    return [[0.1, 0.2] for _ in texts]

embeddings = ensure_embeddings(my_async_fn)
result = await embeddings.aembed_query("hello")  # Returns [0.1, 0.2]

get_text_at_path(obj: Any, path: Union[str, list[str]]) -> list[str]

Extract text from an object using a path expression or pre-tokenized path.

Parameters:

  • obj (Any) –

    The object to extract text from

  • path (Union[str, list[str]]) –

    Either a path string or pre-tokenized path list.

Path types handled

  • Simple paths: "field1.field2"
  • Array indexing: "[0]", "[*]", "[-1]"
  • Wildcards: "*"
  • Multi-field selection: "{field1,field2}"
  • Nested paths in multi-field: "{field1,nested.field2}"
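
Examples

Extracting text with path expressions (an illustrative sketch; the import location is assumed and the comments show the expected extraction):

from langgraph.store.base import get_text_at_path  # assumed import location

doc = {
    "metadata": {"title": "Q1 report"},
    "sections": [{"text": "Intro"}, {"text": "Results"}],
}

print(get_text_at_path(doc, "metadata.title"))    # expected: ["Q1 report"]
print(get_text_at_path(doc, "sections[*].text"))  # expected: ["Intro", "Results"]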

tokenize_path(path: str) -> list[str]

Tokenize a path into components.

Types handled

  • Simple paths: "field1.field2"
  • Array indexing: "[0]", "[*]", "[-1]"
  • Wildcards: "*"
  • Multi-field selection: "{field1,field2}"
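
Example

Tokenizing a path before walking an object (an illustrative sketch; the import location is assumed):

from langgraph.store.base import tokenize_path  # assumed import location

parts = tokenize_path("metadata.sections[*].title")
# `parts` holds the field names and the array wildcard as separate components,
# in the order they appear in the path expression.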

AsyncPostgresStore

Bases: AsyncBatchedBaseStore, BasePostgresStore[Conn]

Asynchronous Postgres-backed store with optional vector search using pgvector.

Examples

Basic setup and key-value storage:

from langgraph.store.postgres import AsyncPostgresStore

async with AsyncPostgresStore.from_conn_string(
    "postgresql://user:pass@localhost:5432/dbname"
) as store:
    await store.setup()

    # Store and retrieve data
    await store.aput(("users", "123"), "prefs", {"theme": "dark"})
    item = await store.aget(("users", "123"), "prefs")

Vector search using LangChain embeddings:

from langchain.embeddings import init_embeddings
from langgraph.store.postgres import AsyncPostgresStore

async with AsyncPostgresStore.from_conn_string(
    "postgresql://user:pass@localhost:5432/dbname",
    index={
        "dims": 1536,
        "embed": init_embeddings("openai:text-embedding-3-small"),
        "fields": ["text"]  # specify which fields to embed. Default is the whole serialized value
    }
) as store:
    await store.setup()  # Do this once to run migrations

    # Store documents
    await store.aput(("docs",), "doc1", {"text": "Python tutorial"})
    await store.aput(("docs",), "doc2", {"text": "TypeScript guide"})
    # Don't index the following
    await store.aput(("docs",), "doc3", {"text": "Other guide"}, index=False)

    # Search by similarity
    results = await store.asearch(("docs",), query="python programming")

Using connection pooling for better performance:

from langgraph.store.postgres import AsyncPostgresStore, PoolConfig

async with AsyncPostgresStore.from_conn_string(
    "postgresql://user:pass@localhost:5432/dbname",
    pool_config=PoolConfig(
        min_size=5,
        max_size=20
    )
) as store:
    await store.setup()
    # Use store with connection pooling...

Warning

Make sure to:

  1. Call setup() before first use to create the necessary tables and indexes.
  2. Have the pgvector extension available to use vector search.
  3. Use Python 3.10+ for async functionality.

Note

Semantic search is disabled by default. You can enable it by providing an index configuration when creating the store. Without this configuration, all index arguments passed to put or aput will have no effect.

from_conn_string(conn_string: str, *, pipeline: bool = False, pool_config: Optional[PoolConfig] = None, index: Optional[PostgresIndexConfig] = None) -> AsyncIterator[AsyncPostgresStore] async classmethod

Create a new AsyncPostgresStore instance from a connection string.

Parameters:

  • conn_string (str) –

    The Postgres connection info string.

  • pipeline (bool, default: False ) –

    Whether to use AsyncPipeline (only for single connections)

  • pool_config (Optional[PoolConfig], default: None ) –

    Configuration for the connection pool. If provided, will create a connection pool and use it instead of a single connection. This overrides the pipeline argument.

  • index (Optional[PostgresIndexConfig], default: None ) –

    The embedding config.

Returns:

  • AsyncPostgresStore ( AsyncIterator[AsyncPostgresStore] ) –

    A new AsyncPostgresStore instance.

setup() -> None async

Set up the store database asynchronously.

This method creates the necessary tables in the Postgres database if they don't already exist and runs database migrations. It MUST be called directly by the user the first time the store is used.

PostgresStore

Bases: BaseStore, BasePostgresStore[Conn]

Postgres-backed store with optional vector search using pgvector.

Examples

Basic setup and key-value storage:

from langgraph.store.postgres import PostgresStore

store = PostgresStore(
    connection_string="postgresql://user:pass@localhost:5432/dbname"
)
store.setup()

# Store and retrieve data
store.put(("users", "123"), "prefs", {"theme": "dark"})
item = store.get(("users", "123"), "prefs")

Vector search using LangChain embeddings:

from langchain.embeddings import init_embeddings
from langgraph.store.postgres import PostgresStore

store = PostgresStore(
    connection_string="postgresql://user:pass@localhost:5432/dbname",
    index={
        "dims": 1536,
        "embed": init_embeddings("openai:text-embedding-3-small"),
        "fields": ["text"]  # specify which fields to embed. Default is the whole serialized value
    }
)
store.setup() # Do this once to run migrations

# Store documents
store.put(("docs",), "doc1", {"text": "Python tutorial"})
store.put(("docs",), "doc2", {"text": "TypeScript guide"})
store.put(("docs",), "doc2", {"text": "Other guide"}, index=False) # don't index

# Search by similarity
results = store.search(("docs",), query="python programming")

Note

Semantic search is disabled by default. You can enable it by providing an index configuration when creating the store. Without this configuration, all index arguments passed to put or aput will have no effect.

Warning

Make sure to call setup() before first use to create necessary tables and indexes. The pgvector extension must be available to use vector search.

get(namespace: tuple[str, ...], key: str) -> Optional[Item]

Retrieve a single item.

Parameters:

  • namespace (tuple[str, ...]) –

    Hierarchical path for the item.

  • key (str) –

    Unique identifier within the namespace.

Returns:

  • Optional[Item]

    The retrieved item or None if not found.

search(namespace_prefix: tuple[str, ...], /, *, query: Optional[str] = None, filter: Optional[dict[str, Any]] = None, limit: int = 10, offset: int = 0) -> list[SearchItem]

Search for items within a namespace prefix.

Parameters:

  • namespace_prefix (tuple[str, ...]) –

    Hierarchical path prefix to search within.

  • query (Optional[str], default: None ) –

    Optional query for natural language search.

  • filter (Optional[dict[str, Any]], default: None ) –

    Key-value pairs to filter results.

  • limit (int, default: 10 ) –

    Maximum number of items to return.

  • offset (int, default: 0 ) –

    Number of items to skip before returning results.

Returns:

  • list[SearchItem]

    List of items matching the search criteria.

Examples

Basic filtering:

# Search for documents with specific metadata
results = store.search(
    ("docs",),
    filter={"type": "article", "status": "published"}
)

Natural language search (requires vector store implementation):

# Initialize store with embedding configuration
store = YourStore( # e.g., InMemoryStore, AsyncPostgresStore
    index={
        "dims": 1536,  # embedding dimensions
        "embed": your_embedding_function,  # function to create embeddings
        "fields": ["text"]  # fields to embed. Defaults to ["$"]
    }
)

# Search for semantically similar documents
results = store.search(
    ("docs",),
    query="machine learning applications in healthcare",
    filter={"type": "research_paper"},
    limit=5
)

Note: Natural language search support depends on your store implementation and requires proper embedding configuration.

put(namespace: tuple[str, ...], key: str, value: dict[str, Any], index: Optional[Union[Literal[False], list[str]]] = None) -> None

Store or update an item in the store.

Parameters:

  • namespace (tuple[str, ...]) –

    Hierarchical path for the item, represented as a tuple of strings. Example: ("documents", "user123")

  • key (str) –

    Unique identifier within the namespace. Together with namespace forms the complete path to the item.

  • value (dict[str, Any]) –

    Dictionary containing the item's data. Must contain string keys and JSON-serializable values.

  • index (Optional[Union[Literal[False], list[str]]], default: None ) –

    Controls how the item's fields are indexed for search:

    • None (default): Use fields you configured when creating the store (if any). If you do not initialize the store with indexing capabilities, the index parameter will be ignored.
    • False: Disable indexing for this item
    • list[str]: List of field paths to index, supporting:
      • Nested fields: "metadata.title"
      • Array access: "chapters[*].content" (each indexed separately)
      • Specific indices: "authors[0].name"
Note

Indexing support depends on your store implementation. If you do not initialize the store with indexing capabilities, the index parameter will be ignored.

Examples

Store item. Indexing depends on how you configure the store.

store.put(("docs",), "report", {"memory": "Will likes ai"})

Do not index item for semantic search. Still accessible through get() and search() operations but won't have a vector representation.

store.put(("docs",), "report", {"memory": "Will likes ai"}, index=False)

Index specific fields for search.

store.put(("docs",), "report", {"memory": "Will likes ai"}, index=["memory"])

delete(namespace: tuple[str, ...], key: str) -> None

Delete an item.

Parameters:

  • namespace (tuple[str, ...]) –

    Hierarchical path for the item.

  • key (str) –

    Unique identifier within the namespace.

list_namespaces(*, prefix: Optional[NamespacePath] = None, suffix: Optional[NamespacePath] = None, max_depth: Optional[int] = None, limit: int = 100, offset: int = 0) -> list[tuple[str, ...]]

List and filter namespaces in the store.

Used to explore the organization of data, find specific collections, or navigate the namespace hierarchy.

Parameters:

  • prefix (Optional[Tuple[str, ...]], default: None ) –

    Filter namespaces that start with this path.

  • suffix (Optional[Tuple[str, ...]], default: None ) –

    Filter namespaces that end with this path.

  • max_depth (Optional[int], default: None ) –

    Return namespaces up to this depth in the hierarchy. Namespaces deeper than this level will be truncated.

  • limit (int, default: 100 ) –

    Maximum number of namespaces to return (default 100).

  • offset (int, default: 0 ) –

    Number of namespaces to skip for pagination (default 0).

Returns:

  • list[tuple[str, ...]]

    List[Tuple[str, ...]]: A list of namespace tuples that match the criteria.

  • list[tuple[str, ...]]

    Each tuple represents a full namespace path up to max_depth.

Examples

Setting max_depth=3 with existing namespaces:

# Example if you have the following namespaces:
# ("a", "b", "c")
# ("a", "b", "d", "e")
# ("a", "b", "d", "i")
# ("a", "b", "f")
# ("a", "c", "f")
store.list_namespaces(prefix=("a", "b"), max_depth=3)
# [("a", "b", "c"), ("a", "b", "d"), ("a", "b", "f")]

aget(namespace: tuple[str, ...], key: str) -> Optional[Item] async

Asynchronously retrieve a single item.

Parameters:

  • namespace (tuple[str, ...]) –

    Hierarchical path for the item.

  • key (str) –

    Unique identifier within the namespace.

Returns:

  • Optional[Item]

    The retrieved item or None if not found.

asearch(namespace_prefix: tuple[str, ...], /, *, query: Optional[str] = None, filter: Optional[dict[str, Any]] = None, limit: int = 10, offset: int = 0) -> list[SearchItem] async

Asynchronously search for items within a namespace prefix.

Parameters:

  • namespace_prefix (tuple[str, ...]) –

    Hierarchical path prefix to search within.

  • query (Optional[str], default: None ) –

    Optional query for natural language search.

  • filter (Optional[dict[str, Any]], default: None ) –

    Key-value pairs to filter results.

  • limit (int, default: 10 ) –

    Maximum number of items to return.

  • offset (int, default: 0 ) –

    Number of items to skip before returning results.

Returns:

  • list[SearchItem]

    List of items matching the search criteria.

Examples

Basic filtering:

# Search for documents with specific metadata
results = await store.asearch(
    ("docs",),
    filter={"type": "article", "status": "published"}
)

Natural language search (requires vector store implementation):

# Initialize store with embedding configuration
store = YourStore( # e.g., InMemoryStore, AsyncPostgresStore
    index={
        "dims": 1536,  # embedding dimensions
        "embed": your_embedding_function,  # function to create embeddings
        "fields": ["text"]  # fields to embed
    }
)

# Search for semantically similar documents
results = await store.asearch(
    ("docs",),
    query="machine learning applications in healthcare",
    filter={"type": "research_paper"},
    limit=5
)

Note: Natural language search support depends on your store implementation and requires proper embedding configuration.

aput(namespace: tuple[str, ...], key: str, value: dict[str, Any], index: Optional[Union[Literal[False], list[str]]] = None) -> None async

Asynchronously store or update an item in the store.

Parameters:

  • namespace (tuple[str, ...]) –

    Hierarchical path for the item, represented as a tuple of strings. Example: ("documents", "user123")

  • key (str) –

    Unique identifier within the namespace. Together with namespace forms the complete path to the item.

  • value (dict[str, Any]) –

    Dictionary containing the item's data. Must contain string keys and JSON-serializable values.

  • index (Optional[Union[Literal[False], list[str]]], default: None ) –

    Controls how the item's fields are indexed for search:

    • None (default): Use fields you configured when creating the store (if any). If you do not initialize the store with indexing capabilities, the index parameter will be ignored.
    • False: Disable indexing for this item
    • list[str]: List of field paths to index, supporting:
      • Nested fields: "metadata.title"
      • Array access: "chapters[*].content" (each indexed separately)
      • Specific indices: "authors[0].name"
Note

Indexing support depends on your store implementation. If you do not initialize the store with indexing capabilities, the index parameter will be ignored.

Examples

Store item. Indexing depends on how you configure the store.

await store.aput(("docs",), "report", {"memory": "Will likes ai"})

Do not index item for semantic search. Still accessible through get() and search() operations but won't have a vector representation.

await store.aput(("docs",), "report", {"memory": "Will likes ai"}, index=False)

Index specific fields for search (if store configured to index items):

await store.aput(
    ("docs",),
    "report",
    {
        "memory": "Will likes ai",
        "context": [{"content": "..."}, {"content": "..."}]
    },
    index=["memory", "context[*].content"]
)

adelete(namespace: tuple[str, ...], key: str) -> None async

Asynchronously delete an item.

Parameters:

  • namespace (tuple[str, ...]) –

    Hierarchical path for the item.

  • key (str) –

    Unique identifier within the namespace.

alist_namespaces(*, prefix: Optional[NamespacePath] = None, suffix: Optional[NamespacePath] = None, max_depth: Optional[int] = None, limit: int = 100, offset: int = 0) -> list[tuple[str, ...]] async

List and filter namespaces in the store asynchronously.

Used to explore the organization of data, find specific collections, or navigate the namespace hierarchy.

Parameters:

  • prefix (Optional[Tuple[str, ...]], default: None ) –

    Filter namespaces that start with this path.

  • suffix (Optional[Tuple[str, ...]], default: None ) –

    Filter namespaces that end with this path.

  • max_depth (Optional[int], default: None ) –

    Return namespaces up to this depth in the hierarchy. Namespaces deeper than this level will be truncated to this depth.

  • limit (int, default: 100 ) –

    Maximum number of namespaces to return (default 100).

  • offset (int, default: 0 ) –

    Number of namespaces to skip for pagination (default 0).

Returns:

  • list[tuple[str, ...]]

    List[Tuple[str, ...]]: A list of namespace tuples that match the criteria.

  • list[tuple[str, ...]]

    Each tuple represents a full namespace path up to max_depth.

Examples

Setting max_depth=3 with existing namespaces:

# Given the following namespaces:
# ("a", "b", "c")
# ("a", "b", "d", "e")
# ("a", "b", "d", "i")
# ("a", "b", "f")
# ("a", "c", "f")

await store.alist_namespaces(prefix=("a", "b"), max_depth=3)
# Returns: [("a", "b", "c"), ("a", "b", "d"), ("a", "b", "f")]

from_conn_string(conn_string: str, *, pipeline: bool = False, pool_config: Optional[PoolConfig] = None, index: Optional[PostgresIndexConfig] = None) -> Iterator[PostgresStore] classmethod

Create a new PostgresStore instance from a connection string.

Parameters:

  • conn_string (str) –

    The Postgres connection info string.

  • pipeline (bool, default: False ) –

    Whether to use Pipeline.

  • pool_config (Optional[PoolConfig], default: None ) –

    Configuration for the connection pool. If provided, will create a connection pool and use it instead of a single connection. This overrides the pipeline argument.

  • index (Optional[PostgresIndexConfig], default: None ) –

    The index configuration for the store.

Returns:

  • PostgresStore ( Iterator[PostgresStore] ) –

    A new PostgresStore instance.

setup() -> None

Set up the store database.

This method creates the necessary tables in the Postgres database if they don't already exist and runs database migrations. It MUST be called directly by the user the first time the store is used.
