
Pregel

Bases: PregelProtocol

Pregel manages the runtime behavior for LangGraph applications.

Overview

Pregel combines actors and channels into a single application. Actors read data from channels and write data to channels. Pregel organizes the execution of the application into multiple steps, following the Pregel Algorithm/Bulk Synchronous Parallel model.

Each step consists of three phases:

  • Plan: Determine which actors to execute in this step. For example, in the first step, select the actors that subscribe to the special input channels; in subsequent steps, select the actors that subscribe to channels updated in the previous step.
  • Execution: Execute all selected actors in parallel, until all complete, or one fails, or a timeout is reached. During this phase, channel updates are invisible to actors until the next step.
  • Update: Update the channels with the values written by the actors in this step.

Repeat until no actors are selected for execution, or a maximum number of steps is reached.
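The loop can be sketched schematically as follows. This is illustrative pseudocode only, not the actual implementation; the node attributes and helper names (subscribed_channels, run) are assumptions made for the sketch:

# Illustrative pseudocode for the Pregel/BSP loop (not the real implementation).
def run_pregel(nodes, channels, input_channels, max_steps):
    updated = set(input_channels)  # channels written in the previous step
    for _ in range(max_steps):
        # Plan: select actors subscribed to channels updated last step
        selected = [n for n in nodes if n.subscribed_channels & updated]
        if not selected:
            break
        # Execution: run all selected actors; writes are buffered, so
        # updates stay invisible to other actors until the next step
        pending = [n.run(channels) for n in selected]
        # Update: apply the buffered writes, making them visible next step
        updated = set()
        for writes in pending:
            for channel, value in writes:
                channels[channel].update([value])
                updated.add(channel)
    return channels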

Actors

An actor is a PregelNode: it subscribes to channels, reads data from them, and writes data to them, playing the role of an actor in the Pregel algorithm. PregelNodes implement LangChain's Runnable interface.

Channels

Channels are used to communicate between actors (PregelNodes). Each channel has a value type, an update type, and an update function, which takes a sequence of updates and modifies the stored value. Channels can be used to send data from one chain to another, or to send data from a chain to itself in a future step. A short sketch of these update semantics follows the list below. LangGraph provides a number of built-in channels:

Basic channels: LastValue and Topic
  • LastValue: The default channel, stores the last value sent to the channel, useful for input and output values, or for sending data from one step to the next
  • Topic: A configurable PubSub Topic, useful for sending multiple values between actors, or for accumulating output. Can be configured to deduplicate values, and/or to accumulate values over the course of multiple steps.
Advanced channels: Context and BinaryOperatorAggregate
  • Context: exposes the value of a context manager, managing its lifecycle. Useful for accessing external resources that require setup and/or teardown, e.g. client = Context(httpx.Client)
  • BinaryOperatorAggregate: stores a persistent value, updated by applying a binary operator to the current value and each update sent to the channel. Useful for computing aggregates over multiple steps, e.g. total = BinaryOperatorAggregate(int, operator.add)
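The sketch below illustrates the update semantics informally. These are standalone reimplementations of the reduction logic for illustration, not the channel classes themselves:

import operator
from functools import reduce

def last_value_update(current, updates):
    # LastValue keeps only the most recent write in a step
    return updates[-1] if updates else current

def binary_operator_update(current, updates, op=operator.add):
    # BinaryOperatorAggregate folds each update into the stored value
    return reduce(op, updates, current)

print(last_value_update("old", ["a", "b"]))   # 'b'
print(binary_operator_update(10, [1, 2, 3]))  # 16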

Examples

Most users will interact with Pregel via a StateGraph (Graph API) or via an entrypoint (Functional API).

However, for advanced use cases, Pregel can be used directly. If you're not sure whether you need to use Pregel directly, then the answer is probably no – you should use the Graph API or Functional API instead. These are higher-level interfaces that will compile down to Pregel under the hood.

Here are some examples to give you a sense of how it works:

Single node application
from langgraph.channels import EphemeralValue
from langgraph.pregel import Pregel, Channel

node1 = (
    Channel.subscribe_to("a")
    | (lambda x: x + x)
    | Channel.write_to("b")
)

app = Pregel(
    nodes={"node1": node1},
    channels={
        "a": EphemeralValue(str),
        "b": EphemeralValue(str),
    },
    input_channels=["a"],
    output_channels=["b"],
)

app.invoke({"a": "foo"})
{'b': 'foofoo'}
Using multiple nodes and multiple output channels
from langgraph.channels import LastValue, EphemeralValue
from langgraph.pregel import Pregel, Channel

node1 = (
    Channel.subscribe_to("a")
    | (lambda x: x + x)
    | Channel.write_to("b")
)

node2 = (
    Channel.subscribe_to("b")
    | (lambda x: x + x)
    | Channel.write_to("c")
)


app = Pregel(
    nodes={"node1": node1, "node2": node2},
    channels={
        "a": EphemeralValue(str),
        "b": LastValue(str),
        "c": EphemeralValue(str),
    },
    input_channels=["a"],
    output_channels=["b", "c"],
)

app.invoke({"a": "foo"})
{'b': 'foofoo', 'c': 'foofoofoofoo'}
Using a Topic channel
from langgraph.channels import LastValue, EphemeralValue, Topic
from langgraph.pregel import Pregel, Channel

node1 = (
    Channel.subscribe_to("a")
    | (lambda x: x + x)
    | {
        "b": Channel.write_to("b"),
        "c": Channel.write_to("c")
    }
)

node2 = (
    Channel.subscribe_to("b")
    | (lambda x: x + x)
    | {
        "c": Channel.write_to("c"),
    }
)


app = Pregel(
    nodes={"node1": node1, "node2": node2},
    channels={
        "a": EphemeralValue(str),
        "b": EphemeralValue(str),
        "c": Topic(str, accumulate=True),
    },
    input_channels=["a"],
    output_channels=["c"],
)

app.invoke({"a": "foo"})
{'c': ['foofoo', 'foofoofoofoo']}
Using a BinaryOperatorAggregate channel
from langgraph.channels import EphemeralValue, BinaryOperatorAggregate
from langgraph.pregel import Pregel, Channel


node1 = (
    Channel.subscribe_to("a")
    | (lambda x: x + x)
    | {
        "b": Channel.write_to("b"),
        "c": Channel.write_to("c")
    }
)

node2 = (
    Channel.subscribe_to("b")
    | (lambda x: x + x)
    | {
        "c": Channel.write_to("c"),
    }
)


def reducer(current, update):
    if current:
        return current + " | " + update
    else:
        return update

app = Pregel(
    nodes={"node1": node1, "node2": node2},
    channels={
        "a": EphemeralValue(str),
        "b": EphemeralValue(str),
        "c": BinaryOperatorAggregate(str, operator=reducer),
    },
    input_channels=["a"],
    output_channels=["c"]
)

app.invoke({"a": "foo"})
{'c': 'foofoo | foofoofoofoo'}
Introducing a cycle

This example demonstrates how to introduce a cycle in the graph, by having a chain write to a channel it subscribes to. Execution continues until the node returns None: the skip_none=True write entry drops that write, so the channel is not updated and no node is selected for the next step.

from langgraph.channels import EphemeralValue
from langgraph.pregel import Pregel, Channel, ChannelWrite, ChannelWriteEntry

example_node = (
    Channel.subscribe_to("value")
    | (lambda x: x + x if len(x) < 10 else None)
    | ChannelWrite(writes=[ChannelWriteEntry(channel="value", skip_none=True)])
)

app = Pregel(
    nodes={"example_node": example_node},
    channels={
        "value": EphemeralValue(str),
    },
    input_channels=["value"],
    output_channels=["value"]
)

app.invoke({"value": "a"})
{'value': 'aaaaaaaaaaaaaaaa'}

input_schema property

input_schema: type[BaseModel]

The type of input this Runnable accepts specified as a pydantic model.

output_schema property

output_schema: type[BaseModel]

The type of output this Runnable produces specified as a pydantic model.
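For example, on the single-node app defined in the Examples section above, both schemas can be inspected directly (the exact field layout of the generated models may vary by version):

.. code-block:: python

print(app.input_schema.model_json_schema())
print(app.output_schema.model_json_schema())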

retry_policy class-attribute instance-attribute

retry_policy: Optional[Sequence[RetryPolicy]] = None

Retry policies to use when running tasks. Set to None to disable.

stream_mode class-attribute instance-attribute

stream_mode: StreamMode = stream_mode

Mode to stream output, defaults to 'values'.

stream_eager class-attribute instance-attribute

stream_eager: bool = stream_eager

Whether to force emitting stream events eagerly. Automatically turned on for stream_mode "messages" and "custom".

stream_channels class-attribute instance-attribute

stream_channels: Optional[Union[str, Sequence[str]]] = (
    stream_channels
)

Channels to stream. Defaults to all channels not in reserved channels.

step_timeout class-attribute instance-attribute

step_timeout: Optional[float] = step_timeout

Maximum time to wait for a step to complete, in seconds. Defaults to None.

debug instance-attribute

debug: bool = debug if debug is not None else get_debug()

Whether to print debug information during execution. Defaults to False.

checkpointer class-attribute instance-attribute

checkpointer: Checkpointer = checkpointer

Checkpointer used to save and load graph state. Defaults to None.

store class-attribute instance-attribute

store: Optional[BaseStore] = store

Memory store to use for SharedValues. Defaults to None.

get_name

get_name(
    suffix: Optional[str] = None,
    *,
    name: Optional[str] = None
) -> str

Get the name of the Runnable.

get_prompts

get_prompts(
    config: Optional[RunnableConfig] = None,
) -> list[BasePromptTemplate]

Return a list of prompts used by this Runnable.

__or__

__or__(
    other: Union[
        Runnable[Any, Other],
        Callable[[Any], Other],
        Callable[[Iterator[Any]], Iterator[Other]],
        Mapping[
            str,
            Union[
                Runnable[Any, Other],
                Callable[[Any], Other],
                Any,
            ],
        ],
    ],
) -> RunnableSerializable[Input, Other]

Compose this Runnable with another object to create a RunnableSequence.

__ror__

__ror__(
    other: Union[
        Runnable[Other, Any],
        Callable[[Other], Any],
        Callable[[Iterator[Other]], Iterator[Any]],
        Mapping[
            str,
            Union[
                Runnable[Other, Any],
                Callable[[Other], Any],
                Any,
            ],
        ],
    ],
) -> RunnableSerializable[Other, Output]

Compose this Runnable with another object to create a RunnableSequence.

pipe

pipe(
    *others: Union[
        Runnable[Any, Other], Callable[[Any], Other]
    ],
    name: Optional[str] = None
) -> RunnableSerializable[Input, Other]

Compose this Runnable with Runnable-like objects to make a RunnableSequence.

Equivalent to RunnableSequence(self, *others) or self | others[0] | ...

Example

.. code-block:: python

from langchain_core.runnables import RunnableLambda

def add_one(x: int) -> int:
    return x + 1

def mul_two(x: int) -> int:
    return x * 2

runnable_1 = RunnableLambda(add_one)
runnable_2 = RunnableLambda(mul_two)
sequence = runnable_1.pipe(runnable_2)
# Or equivalently:
# sequence = runnable_1 | runnable_2
# sequence = RunnableSequence(first=runnable_1, last=runnable_2)
sequence.invoke(1)
await sequence.ainvoke(1)
# -> 4

sequence.batch([1, 2, 3])
await sequence.abatch([1, 2, 3])
# -> [4, 6, 8]

pick

pick(
    keys: Union[str, list[str]],
) -> RunnableSerializable[Any, Any]

Pick keys from the output dict of this Runnable.

Pick single key

.. code-block:: python

import json

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
chain = RunnableMap(str=as_str, json=as_json)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3]}

json_only_chain = chain.pick("json")
json_only_chain.invoke("[1, 2, 3]")
# -> [1, 2, 3]
Pick list of keys

.. code-block:: python

from typing import Any

import json

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
def as_bytes(x: Any) -> bytes:
    return bytes(x, "utf-8")

chain = RunnableMap(
    str=as_str,
    json=as_json,
    bytes=RunnableLambda(as_bytes)
)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3], "bytes": b"[1, 2, 3]"}

json_and_bytes_chain = chain.pick(["json", "bytes"])
json_and_bytes_chain.invoke("[1, 2, 3]")
# -> {"json": [1, 2, 3], "bytes": b"[1, 2, 3]"}

assign

assign(
    **kwargs: Union[
        Runnable[dict[str, Any], Any],
        Callable[[dict[str, Any]], Any],
        Mapping[
            str,
            Union[
                Runnable[dict[str, Any], Any],
                Callable[[dict[str, Any]], Any],
            ],
        ],
    ],
) -> RunnableSerializable[Any, Any]

Assigns new fields to the dict output of this Runnable.

Returns a new Runnable.

.. code-block:: python

from langchain_community.llms.fake import FakeStreamingListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import SystemMessagePromptTemplate
from langchain_core.runnables import Runnable
from operator import itemgetter

prompt = (
    SystemMessagePromptTemplate.from_template("You are a nice assistant.")
    + "{question}"
)
llm = FakeStreamingListLLM(responses=["foo-lish"])

chain: Runnable = prompt | llm | {"str": StrOutputParser()}

chain_with_assign = chain.assign(hello=itemgetter("str") | llm)

print(chain_with_assign.input_schema.model_json_schema())
# {'title': 'PromptInput', 'type': 'object', 'properties':
# {'question': {'title': 'Question', 'type': 'string'}}}
print(chain_with_assign.output_schema.model_json_schema())
# {'title': 'RunnableSequenceOutput', 'type': 'object', 'properties':
# {'str': {'title': 'Str', 'type': 'string'},
#  'hello': {'title': 'Hello', 'type': 'string'}}}

batch

batch(
    inputs: list[Input],
    config: Optional[
        Union[RunnableConfig, list[RunnableConfig]]
    ] = None,
    *,
    return_exceptions: bool = False,
    **kwargs: Optional[Any]
) -> list[Output]

Default implementation runs invoke in parallel using a thread pool executor.

The default implementation of batch works well for IO bound runnables.

Subclasses should override this method if they can batch more efficiently; e.g., if the underlying Runnable uses an API which supports a batch mode.
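For example, reusing the single-node app from the Examples section above:

.. code-block:: python

results = app.batch([{"a": "foo"}, {"a": "bar"}])
print(results)  # [{'b': 'foofoo'}, {'b': 'barbar'}]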

batch_as_completed

batch_as_completed(
    inputs: Sequence[Input],
    config: Optional[
        Union[RunnableConfig, Sequence[RunnableConfig]]
    ] = None,
    *,
    return_exceptions: bool = False,
    **kwargs: Optional[Any]
) -> Iterator[tuple[int, Union[Output, Exception]]]

Run invoke in parallel on a list of inputs.

Yields results as they complete.
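A minimal sketch, again using the single-node app from the Examples section above; results arrive as (index, output) tuples in completion order, which may differ from the input order:

.. code-block:: python

for i, output in app.batch_as_completed([{"a": "foo"}, {"a": "bar"}]):
    print(i, output)  # e.g. (0, {'b': 'foofoo'}) and (1, {'b': 'barbar'}), in either order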

abatch async

abatch(
    inputs: list[Input],
    config: Optional[
        Union[RunnableConfig, list[RunnableConfig]]
    ] = None,
    *,
    return_exceptions: bool = False,
    **kwargs: Optional[Any]
) -> list[Output]

Default implementation runs ainvoke in parallel using asyncio.gather.

The default implementation of abatch works well for IO bound runnables.

Subclasses should override this method if they can batch more efficiently; e.g., if the underlying Runnable uses an API which supports a batch mode.

Parameters:

  • inputs (list[Input]) –

    A list of inputs to the Runnable.

  • config (Optional[Union[RunnableConfig, list[RunnableConfig]]], default: None ) –

    A config to use when invoking the Runnable. The config supports standard keys like 'tags', 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

  • return_exceptions (bool, default: False ) –

    Whether to return exceptions instead of raising them. Defaults to False.

  • kwargs (Optional[Any], default: {} ) –

    Additional keyword arguments to pass to the Runnable.

Returns:

  • list[Output]

    A list of outputs from the Runnable.
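A minimal async sketch, using the single-node app from the Examples section above:

.. code-block:: python

import asyncio

async def main():
    results = await app.abatch([{"a": "foo"}, {"a": "bar"}])
    print(results)  # [{'b': 'foofoo'}, {'b': 'barbar'}]

asyncio.run(main())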

abatch_as_completed async

abatch_as_completed(
    inputs: Sequence[Input],
    config: Optional[
        Union[RunnableConfig, Sequence[RunnableConfig]]
    ] = None,
    *,
    return_exceptions: bool = False,
    **kwargs: Optional[Any]
) -> AsyncIterator[tuple[int, Union[Output, Exception]]]

Run ainvoke in parallel on a list of inputs.

Yields results as they complete.

Parameters:

  • inputs (Sequence[Input]) –

    A list of inputs to the Runnable.

  • config (Optional[Union[RunnableConfig, Sequence[RunnableConfig]]], default: None ) –

    A config to use when invoking the Runnable. The config supports standard keys like 'tags', 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

  • return_exceptions (bool, default: False ) –

    Whether to return exceptions instead of raising them. Defaults to False.

  • kwargs (Optional[Any], default: {} ) –

    Additional keyword arguments to pass to the Runnable.

Yields:

  • AsyncIterator[tuple[int, Union[Output, Exception]]]

    A tuple of the index of the input and the output from the Runnable.

astream_log async

astream_log(
    input: Any,
    config: Optional[RunnableConfig] = None,
    *,
    diff: bool = True,
    with_streamed_output_list: bool = True,
    include_names: Optional[Sequence[str]] = None,
    include_types: Optional[Sequence[str]] = None,
    include_tags: Optional[Sequence[str]] = None,
    exclude_names: Optional[Sequence[str]] = None,
    exclude_types: Optional[Sequence[str]] = None,
    exclude_tags: Optional[Sequence[str]] = None,
    **kwargs: Any
) -> Union[
    AsyncIterator[RunLogPatch], AsyncIterator[RunLog]
]

Stream all output from a Runnable, as reported to the callback system.

This includes all inner runs of LLMs, Retrievers, Tools, etc.

Output is streamed as Log objects, which include a list of Jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run.

The Jsonpatch ops can be applied in order to construct state.
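A minimal sketch, using the single-node app from the Examples section above; with diff=True (the default), each item is a RunLogPatch whose ops list holds the Jsonpatch operations:

.. code-block:: python

import asyncio

async def main():
    async for patch in app.astream_log({"a": "foo"}):
        for op in patch.ops:
            print(op["op"], op["path"])

asyncio.run(main())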

Parameters:

  • input (Any) –

    The input to the Runnable.

  • config (Optional[RunnableConfig], default: None ) –

    The config to use for the Runnable.

  • diff (bool, default: True ) –

    Whether to yield diffs between each step or the current state.

  • with_streamed_output_list (bool, default: True ) –

    Whether to yield the streamed_output list.

  • include_names (Optional[Sequence[str]], default: None ) –

    Only include logs with these names.

  • include_types (Optional[Sequence[str]], default: None ) –

    Only include logs with these types.

  • include_tags (Optional[Sequence[str]], default: None ) –

    Only include logs with these tags.

  • exclude_names (Optional[Sequence[str]], default: None ) –

    Exclude logs with these names.

  • exclude_types (Optional[Sequence[str]], default: None ) –

    Exclude logs with these types.

  • exclude_tags (Optional[Sequence[str]], default: None ) –

    Exclude logs with these tags.

  • kwargs (Any, default: {} ) –

    Additional keyword arguments to pass to the Runnable.

Yields:

  • Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]

    A RunLogPatch or RunLog object.

astream_events async

astream_events(
    input: Any,
    config: Optional[RunnableConfig] = None,
    *,
    version: Literal["v1", "v2"] = "v2",
    include_names: Optional[Sequence[str]] = None,
    include_types: Optional[Sequence[str]] = None,
    include_tags: Optional[Sequence[str]] = None,
    exclude_names: Optional[Sequence[str]] = None,
    exclude_types: Optional[Sequence[str]] = None,
    exclude_tags: Optional[Sequence[str]] = None,
    **kwargs: Any
) -> AsyncIterator[StreamEvent]

Generate a stream of events.

Use to create an iterator over StreamEvents that provide real-time information about the progress of the Runnable, including StreamEvents from intermediate results.

A StreamEvent is a dictionary with the following schema:

  • event: str - Event names are of the format: on_[runnable_type]_(start|stream|end).
  • name: str - The name of the Runnable that generated the event.
  • run_id: str - randomly generated ID associated with the given execution of the Runnable that emitted the event. A child Runnable that gets invoked as part of the execution of a parent Runnable is assigned its own unique ID.
  • parent_ids: list[str] - The IDs of the parent runnables that generated the event. The root Runnable will have an empty list. The order of the parent IDs is from the root to the immediate parent. Only available for v2 version of the API. The v1 version of the API will return an empty list.
  • tags: Optional[list[str]] - The tags of the Runnable that generated the event.
  • metadata: Optional[dict[str, Any]] - The metadata of the Runnable that generated the event.
  • data: dict[str, Any]

Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table.

ATTENTION This reference table is for the V2 version of the schema.

| event                | name             | chunk                           | input                                         | output                                          |
|----------------------|------------------|---------------------------------|-----------------------------------------------|-------------------------------------------------|
| on_chat_model_start  | [model name]     |                                 | {"messages": [[SystemMessage, HumanMessage]]} |                                                 |
| on_chat_model_stream | [model name]     | AIMessageChunk(content="hello") |                                               |                                                 |
| on_chat_model_end    | [model name]     |                                 | {"messages": [[SystemMessage, HumanMessage]]} | AIMessageChunk(content="hello world")           |
| on_llm_start         | [model name]     |                                 | {'input': 'hello'}                            |                                                 |
| on_llm_stream        | [model name]     | 'Hello'                         |                                               |                                                 |
| on_llm_end           | [model name]     |                                 | 'Hello human!'                                |                                                 |
| on_chain_start       | format_docs      |                                 |                                               |                                                 |
| on_chain_stream      | format_docs      | "hello world!, goodbye world!"  |                                               |                                                 |
| on_chain_end         | format_docs      |                                 | [Document(...)]                               | "hello world!, goodbye world!"                  |
| on_tool_start        | some_tool        |                                 | {"x": 1, "y": "2"}                            |                                                 |
| on_tool_end          | some_tool        |                                 |                                               | {"x": 1, "y": "2"}                              |
| on_retriever_start   | [retriever name] |                                 | {"query": "hello"}                            |                                                 |
| on_retriever_end     | [retriever name] |                                 | {"query": "hello"}                            | [Document(...), ..]                             |
| on_prompt_start      | [template_name]  |                                 | {"question": "hello"}                         |                                                 |
| on_prompt_end        | [template_name]  |                                 | {"question": "hello"}                         | ChatPromptValue(messages: [SystemMessage, ...]) |

In addition to the standard events, users can also dispatch custom events (see example below).

Custom events will only be surfaced in the v2 version of the API!

A custom event has the following format:

| Attribute | Type | Description                                                                                               |
|-----------|------|-----------------------------------------------------------------------------------------------------------|
| name      | str  | A user defined name for the event.                                                                        |
| data      | Any  | The data associated with the event. This can be anything, though we suggest making it JSON serializable.  |

Here are declarations associated with the standard events shown above:

format_docs:

.. code-block:: python

def format_docs(docs: list[Document]) -> str:
    '''Format the docs.'''
    return ", ".join([doc.page_content for doc in docs])

format_docs = RunnableLambda(format_docs)

some_tool:

.. code-block:: python

@tool
def some_tool(x: int, y: str) -> dict:
    '''Some_tool.'''
    return {"x": x, "y": y}

prompt:

.. code-block:: python

template = ChatPromptTemplate.from_messages(
    [("system", "You are Cat Agent 007"), ("human", "{question}")]
).with_config({"run_name": "my_template", "tags": ["my_template"]})

Example:

.. code-block:: python

from langchain_core.runnables import RunnableLambda

async def reverse(s: str) -> str:
    return s[::-1]

chain = RunnableLambda(func=reverse)

events = [
    event async for event in chain.astream_events("hello", version="v2")
]

# will produce the following events (run_id and parent_ids
# have been omitted for brevity):
[
    {
        "data": {"input": "hello"},
        "event": "on_chain_start",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"chunk": "olleh"},
        "event": "on_chain_stream",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"output": "olleh"},
        "event": "on_chain_end",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
]

Example: Dispatch Custom Event

.. code-block:: python

from langchain_core.callbacks.manager import (
    adispatch_custom_event,
)
from langchain_core.runnables import RunnableLambda, RunnableConfig
import asyncio


async def slow_thing(some_input: str, config: RunnableConfig) -> str:
    """Do something that takes a long time."""
    await asyncio.sleep(1) # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 1 of 3"},
        config=config # Must be included for python < 3.10
    )
    await asyncio.sleep(1) # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 2 of 3"},
        config=config # Must be included for python < 3.10
    )
    await asyncio.sleep(1) # Placeholder for some slow operation
    return "Done"

slow_thing = RunnableLambda(slow_thing)

async for event in slow_thing.astream_events("some_input", version="v2"):
    print(event)

Parameters:

  • input (Any) –

    The input to the Runnable.

  • config (Optional[RunnableConfig], default: None ) –

    The config to use for the Runnable.

  • version (Literal['v1', 'v2'], default: 'v2' ) –

    The version of the schema to use, either 'v2' or 'v1'. Users should use 'v2'. 'v1' is for backwards compatibility and will be deprecated in 0.4.0. No default will be assigned until the API is stabilized. Custom events will only be surfaced in 'v2'.

  • include_names (Optional[Sequence[str]], default: None ) –

    Only include events from runnables with matching names.

  • include_types (Optional[Sequence[str]], default: None ) –

    Only include events from runnables with matching types.

  • include_tags (Optional[Sequence[str]], default: None ) –

    Only include events from runnables with matching tags.

  • exclude_names (Optional[Sequence[str]], default: None ) –

    Exclude events from runnables with matching names.

  • exclude_types (Optional[Sequence[str]], default: None ) –

    Exclude events from runnables with matching types.

  • exclude_tags (Optional[Sequence[str]], default: None ) –

    Exclude events from runnables with matching tags.

  • kwargs (Any, default: {} ) –

    Additional keyword arguments to pass to the Runnable. These will be passed to astream_log as this implementation of astream_events is built on top of astream_log.

Yields:

  • AsyncIterator[StreamEvent]

    An async stream of StreamEvents.

Raises:

  • NotImplementedError

    If the version is not v1 or v2.

transform

transform(
    input: Iterator[Input],
    config: Optional[RunnableConfig] = None,
    **kwargs: Optional[Any]
) -> Iterator[Output]

Default implementation of transform, which buffers input and calls stream.

Subclasses should override this method if they can start producing output while input is still being generated.
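A minimal sketch using RunnableGenerator, which can start yielding output per input chunk:

.. code-block:: python

from langchain_core.runnables import RunnableGenerator

def double_chunks(chunks):
    # Double each chunk as it arrives from the input iterator
    for chunk in chunks:
        yield chunk * 2

runnable = RunnableGenerator(double_chunks)
for out in runnable.transform(iter(["ab", "cd"])):
    print(out)  # "abab", then "cdcd"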

Parameters:

  • input (Iterator[Input]) –

    An iterator of inputs to the Runnable.

  • config (Optional[RunnableConfig], default: None ) –

    The config to use for the Runnable. Defaults to None.

  • kwargs (Optional[Any], default: {} ) –

    Additional keyword arguments to pass to the Runnable.

Yields:

  • Output

    The output of the Runnable.

atransform async

atransform(
    input: AsyncIterator[Input],
    config: Optional[RunnableConfig] = None,
    **kwargs: Optional[Any]
) -> AsyncIterator[Output]

Default implementation of atransform, which buffers input and calls astream.

Subclasses should override this method if they can start producing output while input is still being generated.

Parameters:

  • input (AsyncIterator[Input]) –

    An async iterator of inputs to the Runnable.

  • config (Optional[RunnableConfig], default: None ) –

    The config to use for the Runnable. Defaults to None.

  • kwargs (Optional[Any], default: {} ) –

    Additional keyword arguments to pass to the Runnable.

Yields:

  • AsyncIterator[Output]

    The output of the Runnable.

bind

bind(**kwargs: Any) -> Runnable[Input, Output]

Bind arguments to a Runnable, returning a new Runnable.

Useful when a Runnable in a chain requires an argument that is not in the output of the previous Runnable or included in the user input.

Parameters:

  • kwargs (Any, default: {} ) –

    The arguments to bind to the Runnable.

Returns:

  • Runnable[Input, Output]

    A new Runnable with the arguments bound.

Example:

.. code-block:: python

from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser

llm = ChatOllama(model='llama2')

# Without bind.
chain = (
    llm
    | StrOutputParser()
)

chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two three four five.'

# With bind.
chain = (
    llm.bind(stop=["three"])
    | StrOutputParser()
)

chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two'

with_listeners

with_listeners(
    *,
    on_start: Optional[
        Union[
            Callable[[Run], None],
            Callable[[Run, RunnableConfig], None],
        ]
    ] = None,
    on_end: Optional[
        Union[
            Callable[[Run], None],
            Callable[[Run, RunnableConfig], None],
        ]
    ] = None,
    on_error: Optional[
        Union[
            Callable[[Run], None],
            Callable[[Run, RunnableConfig], None],
        ]
    ] = None
) -> Runnable[Input, Output]

Bind lifecycle listeners to a Runnable, returning a new Runnable.

on_start: Called before the Runnable starts running, with the Run object.
on_end: Called after the Runnable finishes running, with the Run object.
on_error: Called if the Runnable throws an error, with the Run object.

The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.

Parameters:

  • on_start (Optional[Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]], default: None ) –

    Called before the Runnable starts running. Defaults to None.

  • on_end (Optional[Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]], default: None ) –

    Called after the Runnable finishes running. Defaults to None.

  • on_error (Optional[Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]], default: None ) –

    Called if the Runnable throws an error. Defaults to None.

Returns:

  • Runnable[Input, Output]

    A new Runnable with the listeners bound.

Example:

.. code-block:: python

from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run

import time

def test_runnable(time_to_sleep: int):
    time.sleep(time_to_sleep)

def fn_start(run_obj: Run):
    print("start_time:", run_obj.start_time)

def fn_end(run_obj: Run):
    print("end_time:", run_obj.end_time)

chain = RunnableLambda(test_runnable).with_listeners(
    on_start=fn_start,
    on_end=fn_end
)
chain.invoke(2)

with_alisteners

with_alisteners(
    *,
    on_start: Optional[AsyncListener] = None,
    on_end: Optional[AsyncListener] = None,
    on_error: Optional[AsyncListener] = None
) -> Runnable[Input, Output]

Bind async lifecycle listeners to a Runnable, returning a new Runnable.

on_start: Asynchronously called before the Runnable starts running.
on_end: Asynchronously called after the Runnable finishes running.
on_error: Asynchronously called if the Runnable throws an error.

The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.

Parameters:

  • on_start (Optional[AsyncListener], default: None ) –

    Asynchronously called before the Runnable starts running. Defaults to None.

  • on_end (Optional[AsyncListener], default: None ) –

    Asynchronously called after the Runnable finishes running. Defaults to None.

  • on_error (Optional[AsyncListener], default: None ) –

    Asynchronously called if the Runnable throws an error. Defaults to None.

Returns:

  • Runnable[Input, Output]

    A new Runnable with the listeners bound.

Example:

.. code-block:: python

from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run
from datetime import datetime, timezone
import time
import asyncio

def format_t(timestamp: float) -> str:
    return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat()

async def test_runnable(time_to_sleep: int):
    print(f"Runnable[{time_to_sleep}s]: starts at {format_t(time.time())}")
    await asyncio.sleep(time_to_sleep)
    print(f"Runnable[{time_to_sleep}s]: ends at {format_t(time.time())}")

async def fn_start(run_obj: Run):
    print(f"on start callback starts at {format_t(time.time())}")
    await asyncio.sleep(3)
    print(f"on start callback ends at {format_t(time.time())}")

async def fn_end(run_obj: Run):
    print(f"on end callback starts at {format_t(time.time())}")
    await asyncio.sleep(2)
    print(f"on end callback ends at {format_t(time.time())}")

runnable = RunnableLambda(test_runnable).with_alisteners(
    on_start=fn_start,
    on_end=fn_end
)
async def concurrent_runs():
    await asyncio.gather(runnable.ainvoke(2), runnable.ainvoke(3))

asyncio.run(concurrent_runs())
Result:
on start callback starts at 2025-03-01T07:05:22.875378+00:00
on start callback starts at 2025-03-01T07:05:22.875495+00:00
on start callback ends at 2025-03-01T07:05:25.878862+00:00
on start callback ends at 2025-03-01T07:05:25.878947+00:00
Runnable[2s]: starts at 2025-03-01T07:05:25.879392+00:00
Runnable[3s]: starts at 2025-03-01T07:05:25.879804+00:00
Runnable[2s]: ends at 2025-03-01T07:05:27.881998+00:00
on end callback starts at 2025-03-01T07:05:27.882360+00:00
Runnable[3s]: ends at 2025-03-01T07:05:28.881737+00:00
on end callback starts at 2025-03-01T07:05:28.882428+00:00
on end callback ends at 2025-03-01T07:05:29.883893+00:00
on end callback ends at 2025-03-01T07:05:30.884831+00:00

with_types

with_types(
    *,
    input_type: Optional[type[Input]] = None,
    output_type: Optional[type[Output]] = None
) -> Runnable[Input, Output]

Bind input and output types to a Runnable, returning a new Runnable.

Parameters:

  • input_type (Optional[type[Input]], default: None ) –

    The input type to bind to the Runnable. Defaults to None.

  • output_type (Optional[type[Output]], default: None ) –

    The output type to bind to the Runnable. Defaults to None.

Returns:

  • Runnable[Input, Output]

    A new Runnable with the types bound.
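Example (a minimal sketch; the exact shape of the generated schema may vary by version):

.. code-block:: python

from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda x: str(x)).with_types(
    input_type=int,
    output_type=str,
)
# The bound types now drive the inferred input/output schemas.
print(runnable.input_schema.model_json_schema())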

with_retry

with_retry(
    *,
    retry_if_exception_type: tuple[
        type[BaseException], ...
    ] = (Exception,),
    wait_exponential_jitter: bool = True,
    exponential_jitter_params: Optional[
        ExponentialJitterParams
    ] = None,
    stop_after_attempt: int = 3
) -> Runnable[Input, Output]

Create a new Runnable that retries the original Runnable on exceptions.

Parameters:

  • retry_if_exception_type (tuple[type[BaseException], ...], default: (Exception,) ) –

    A tuple of exception types to retry on. Defaults to (Exception,).

  • wait_exponential_jitter (bool, default: True ) –

    Whether to add jitter to the wait time between retries. Defaults to True.

  • stop_after_attempt (int, default: 3 ) –

    The maximum number of attempts to make before giving up. Defaults to 3.

  • exponential_jitter_params (Optional[ExponentialJitterParams], default: None ) –

    Parameters for tenacity.wait_exponential_jitter. Namely: initial, max, exp_base, and jitter (all float values).

Returns:

  • Runnable[Input, Output]

    A new Runnable that retries the original Runnable on exceptions.

Example:

.. code-block:: python

from langchain_core.runnables import RunnableLambda

count = 0


def _lambda(x: int) -> None:
    global count
    count = count + 1
    if x == 1:
        raise ValueError("x is 1")


runnable = RunnableLambda(_lambda)
try:
    runnable.with_retry(
        stop_after_attempt=2,
        retry_if_exception_type=(ValueError,),
    ).invoke(1)
except ValueError:
    pass

assert count == 2

map

map() -> Runnable[list[Input], list[Output]]

Return a new Runnable that maps a list of inputs to a list of outputs.

Calls invoke() with each input.

Returns:

  • Runnable[list[Input], list[Output]]

    A new Runnable that maps a list of inputs to a list of outputs.

Example:

.. code-block:: python

from langchain_core.runnables import RunnableLambda

def _lambda(x: int) -> int:
    return x + 1

runnable = RunnableLambda(_lambda)
print(runnable.map().invoke([1, 2, 3]))  # [2, 3, 4]

with_fallbacks

with_fallbacks(
    fallbacks: Sequence[Runnable[Input, Output]],
    *,
    exceptions_to_handle: tuple[
        type[BaseException], ...
    ] = (Exception,),
    exception_key: Optional[str] = None
) -> RunnableWithFallbacks[Input, Output]

Add fallbacks to a Runnable, returning a new Runnable.

The new Runnable will try the original Runnable, and then each fallback in order, upon failures.

Parameters:

  • fallbacks (Sequence[Runnable[Input, Output]]) –

    A sequence of runnables to try if the original Runnable fails.

  • exceptions_to_handle (tuple[type[BaseException], ...], default: (Exception,) ) –

    A tuple of exception types to handle. Defaults to (Exception,).

  • exception_key (Optional[str], default: None ) –

    If string is specified then handled exceptions will be passed to fallbacks as part of the input under the specified key. If None, exceptions will not be passed to fallbacks. If used, the base Runnable and its fallbacks must accept a dictionary as input. Defaults to None.

Returns:

  • RunnableWithFallbacks[Input, Output]

    A new Runnable that will try the original Runnable, and then each fallback in order, upon failures.

Example:

.. code-block:: python

from typing import Iterator

from langchain_core.runnables import RunnableGenerator


def _generate_immediate_error(input: Iterator) -> Iterator[str]:
    raise ValueError()
    yield ""


def _generate(input: Iterator) -> Iterator[str]:
    yield from "foo bar"


runnable = RunnableGenerator(_generate_immediate_error).with_fallbacks(
    [RunnableGenerator(_generate)]
)
print(''.join(runnable.stream({})))  # foo bar


as_tool

as_tool(
    args_schema: Optional[type[BaseModel]] = None,
    *,
    name: Optional[str] = None,
    description: Optional[str] = None,
    arg_types: Optional[dict[str, type]] = None
) -> BaseTool

Create a BaseTool from a Runnable.

as_tool will instantiate a BaseTool with a name, description, and args_schema from a Runnable. Where possible, schemas are inferred from runnable.get_input_schema. Alternatively (e.g., if the Runnable takes a dict as input and the specific dict keys are not typed), the schema can be specified directly with args_schema. You can also pass arg_types to just specify the required arguments and their types.

Parameters:

  • args_schema (Optional[type[BaseModel]], default: None ) –

    The schema for the tool. Defaults to None.

  • name (Optional[str], default: None ) –

    The name of the tool. Defaults to None.

  • description (Optional[str], default: None ) –

    The description of the tool. Defaults to None.

  • arg_types (Optional[dict[str, type]], default: None ) –

    A dictionary of argument names to types. Defaults to None.

Returns:

  • BaseTool

    A BaseTool instance.

Typed dict input:

.. code-block:: python

from typing_extensions import TypedDict
from langchain_core.runnables import RunnableLambda

class Args(TypedDict):
    a: int
    b: list[int]

def f(x: Args) -> str:
    return str(x["a"] * max(x["b"]))

runnable = RunnableLambda(f)
as_tool = runnable.as_tool()
as_tool.invoke({"a": 3, "b": [1, 2]})

dict input, specifying schema via args_schema:

.. code-block:: python

from typing import Any
from pydantic import BaseModel, Field
from langchain_core.runnables import RunnableLambda

def f(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))

class FSchema(BaseModel):
    """Apply a function to an integer and list of integers."""

    a: int = Field(..., description="Integer")
    b: list[int] = Field(..., description="List of ints")

runnable = RunnableLambda(f)
as_tool = runnable.as_tool(FSchema)
as_tool.invoke({"a": 3, "b": [1, 2]})

dict input, specifying schema via arg_types:

.. code-block:: python

from typing import Any
from langchain_core.runnables import RunnableLambda

def f(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))

runnable = RunnableLambda(f)
as_tool = runnable.as_tool(arg_types={"a": int, "b": list[int]})
as_tool.invoke({"a": 3, "b": [1, 2]})

String input:

.. code-block:: python

from langchain_core.runnables import RunnableLambda

def f(x: str) -> str:
    return x + "a"

def g(x: str) -> str:
    return x + "z"

runnable = RunnableLambda(f) | g
as_tool = runnable.as_tool()
as_tool.invoke("b")

.. versionadded:: 0.2.14

get_state

get_state(
    config: RunnableConfig, *, subgraphs: bool = False
) -> StateSnapshot

Get the current state of the graph.
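A minimal sketch, assuming a graph compiled with a checkpointer that has already been invoked on the given thread (the thread_id value is illustrative):

.. code-block:: python

config = {"configurable": {"thread_id": "thread-1"}}
snapshot = graph.get_state(config)
print(snapshot.values)  # current channel values for this thread
print(snapshot.next)    # nodes that would run in the next step, if any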

aget_state async

aget_state(
    config: RunnableConfig, *, subgraphs: bool = False
) -> StateSnapshot

Get the current state of the graph.

bulk_update_state

bulk_update_state(
    config: RunnableConfig,
    supersteps: Sequence[Sequence[StateUpdate]],
) -> RunnableConfig

Apply updates to the graph state in bulk. Requires a checkpointer to be set.

Parameters:

  • config (RunnableConfig) –

    The config to apply the updates to.

  • supersteps (Sequence[Sequence[StateUpdate]]) –

    A list of supersteps, each including a list of updates to apply sequentially to a graph state. Each update is a tuple of the form (values, as_node).

Raises:

  • ValueError

    If no checkpointer is set or no updates are provided.

  • InvalidUpdateError

    If an invalid update is provided.

Returns:

  • RunnableConfig –

    The updated config.
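A minimal sketch, assuming a graph compiled with a checkpointer and the State schema from the stream examples below (the thread_id and node names are illustrative):

.. code-block:: python

config = {"configurable": {"thread_id": "thread-1"}}
graph.bulk_update_state(
    config,
    supersteps=[
        [({"alist": ["first"]}, "a")],   # superstep 1: update as if from node "a"
        [({"alist": ["second"]}, "b")],  # superstep 2: update as if from node "b"
    ],
)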

abulk_update_state async

abulk_update_state(
    config: RunnableConfig,
    supersteps: Sequence[Sequence[StateUpdate]],
) -> RunnableConfig

Apply updates to the graph state in bulk. Requires a checkpointer to be set.

Parameters:

  • config (RunnableConfig) –

    The config to apply the updates to.

  • supersteps (Sequence[Sequence[StateUpdate]]) –

    A list of supersteps, each including a list of updates to apply sequentially to a graph state. Each update is a tuple of the form (values, as_node).

Raises:

  • ValueError

    If no checkpointer is set or no updates are provided.

  • InvalidUpdateError

    If an invalid update is provided.

Returns:

  • RunnableConfig –

    The updated config.

update_state

update_state(
    config: RunnableConfig,
    values: Optional[Union[dict[str, Any], Any]],
    as_node: Optional[str] = None,
) -> RunnableConfig

Update the state of the graph with the given values, as if they came from node as_node. If as_node is not provided, it will be set to the last node that updated the state, if not ambiguous.
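A minimal sketch, assuming a graph compiled with a checkpointer and the State schema from the stream examples below (the thread_id and node name are illustrative):

.. code-block:: python

config = {"configurable": {"thread_id": "thread-1"}}
graph.update_state(config, {"alist": ["manually added"]}, as_node="a")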

aupdate_state async

aupdate_state(
    config: RunnableConfig,
    values: dict[str, Any] | Any,
    as_node: Optional[str] = None,
) -> RunnableConfig

Update the state of the graph asynchronously with the given values, as if they came from node as_node. If as_node is not provided, it will be set to the last node that updated the state, if not ambiguous.

stream

stream(
    input: Union[dict[str, Any], Any],
    config: Optional[RunnableConfig] = None,
    *,
    stream_mode: Optional[
        Union[StreamMode, list[StreamMode]]
    ] = None,
    output_keys: Optional[Union[str, Sequence[str]]] = None,
    interrupt_before: Optional[
        Union[All, Sequence[str]]
    ] = None,
    interrupt_after: Optional[
        Union[All, Sequence[str]]
    ] = None,
    checkpoint_during: Optional[bool] = None,
    debug: Optional[bool] = None,
    subgraphs: bool = False
) -> Iterator[Union[dict[str, Any], Any]]

Stream graph steps for a single input.

Parameters:

  • input (Union[dict[str, Any], Any]) –

    The input to the graph.

  • config (Optional[RunnableConfig], default: None ) –

    The configuration to use for the run.

  • stream_mode (Optional[Union[StreamMode, list[StreamMode]]], default: None ) –

    The mode to stream output, defaults to self.stream_mode. Options are:

    • "values": Emit all values in the state after each step. When used with functional API, values are emitted once at the end of the workflow.
    • "updates": Emit only the node or task names and updates returned by the nodes or tasks after each step. If multiple updates are made in the same step (e.g. multiple nodes are run) then those updates are emitted separately.
    • "custom": Emit custom data from inside nodes or tasks using StreamWriter.
    • "messages": Emit LLM messages token-by-token together with metadata for any LLM invocations inside nodes or tasks.
    • "debug": Emit debug events with as much information as possible for each step.
  • output_keys (Optional[Union[str, Sequence[str]]], default: None ) –

    The keys to stream, defaults to all non-context channels.

  • interrupt_before (Optional[Union[All, Sequence[str]]], default: None ) –

    Nodes to interrupt before, defaults to all nodes in the graph.

  • interrupt_after (Optional[Union[All, Sequence[str]]], default: None ) –

    Nodes to interrupt after, defaults to all nodes in the graph.

  • checkpoint_during (Optional[bool], default: None ) –

    Whether to checkpoint intermediate steps, defaults to True. If False, only the final checkpoint is saved.

  • debug (Optional[bool], default: None ) –

    Whether to print debug information during execution, defaults to False.

  • subgraphs (bool, default: False ) –

    Whether to stream subgraphs, defaults to False.

Yields:

  • Union[dict[str, Any], Any]

    The output of each step in the graph. The output shape depends on the stream_mode.

Examples:

Using different stream modes with a graph:

>>> import operator
>>> from typing_extensions import Annotated, TypedDict
>>> from langgraph.graph import StateGraph, START
...
>>> class State(TypedDict):
...     alist: Annotated[list, operator.add]
...     another_list: Annotated[list, operator.add]
...
>>> builder = StateGraph(State)
>>> builder.add_node("a", lambda _state: {"another_list": ["hi"]})
>>> builder.add_node("b", lambda _state: {"alist": ["there"]})
>>> builder.add_edge("a", "b")
>>> builder.add_edge(START, "a")
>>> graph = builder.compile()
With stream_mode="values":

>>> for event in graph.stream({"alist": ['Ex for stream_mode="values"']}, stream_mode="values"):
...     print(event)
{'alist': ['Ex for stream_mode="values"'], 'another_list': []}
{'alist': ['Ex for stream_mode="values"'], 'another_list': ['hi']}
{'alist': ['Ex for stream_mode="values"', 'there'], 'another_list': ['hi']}
With stream_mode="updates":

>>> for event in graph.stream({"alist": ['Ex for stream_mode="updates"']}, stream_mode="updates"):
...     print(event)
{'a': {'another_list': ['hi']}}
{'b': {'alist': ['there']}}
With stream_mode="debug":

>>> for event in graph.stream({"alist": ['Ex for stream_mode="debug"']}, stream_mode="debug"):
...     print(event)
{'type': 'task', 'timestamp': '2024-06-23T...+00:00', 'step': 1, 'payload': {'id': '...', 'name': 'a', 'input': {'alist': ['Ex for stream_mode="debug"'], 'another_list': []}, 'triggers': ['start:a']}}
{'type': 'task_result', 'timestamp': '2024-06-23T...+00:00', 'step': 1, 'payload': {'id': '...', 'name': 'a', 'result': [('another_list', ['hi'])]}}
{'type': 'task', 'timestamp': '2024-06-23T...+00:00', 'step': 2, 'payload': {'id': '...', 'name': 'b', 'input': {'alist': ['Ex for stream_mode="debug"'], 'another_list': ['hi']}, 'triggers': ['a']}}
{'type': 'task_result', 'timestamp': '2024-06-23T...+00:00', 'step': 2, 'payload': {'id': '...', 'name': 'b', 'result': [('alist', ['there'])]}}

With stream_mode="custom":

>>> from langgraph.types import StreamWriter
...
>>> def node_a(state: State, writer: StreamWriter):
...     writer({"custom_data": "foo"})
...     return {"alist": ["hi"]}
...
>>> builder = StateGraph(State)
>>> builder.add_node("a", node_a)
>>> builder.add_edge(START, "a")
>>> graph = builder.compile()
...
>>> for event in graph.stream({"alist": ['Ex for stream_mode="custom"']}, stream_mode="custom"):
...     print(event)
{'custom_data': 'foo'}

With stream_mode="messages":

>>> from typing_extensions import Annotated, TypedDict
>>> from langgraph.graph import StateGraph, START
>>> from langchain_openai import ChatOpenAI
...
>>> llm = ChatOpenAI(model="gpt-4o-mini")
...
>>> class State(TypedDict):
...     question: str
...     answer: str
...
>>> def node_a(state: State):
...     response = llm.invoke(state["question"])
...     return {"answer": response.content}
...
>>> builder = StateGraph(State)
>>> builder.add_node("a", node_a)
>>> builder.add_edge(START, "a")
>>> graph = builder.compile()

>>> for event in graph.stream({"question": "What is the capital of France?"}, stream_mode="messages"):
...     print(event)
(AIMessageChunk(content='The', additional_kwargs={}, response_metadata={}, id='...'), {'langgraph_step': 1, 'langgraph_node': 'a', 'langgraph_triggers': ['start:a'], 'langgraph_path': ('__pregel_pull', 'a'), 'langgraph_checkpoint_ns': '...', 'checkpoint_ns': '...', 'ls_provider': 'openai', 'ls_model_name': 'gpt-4o-mini', 'ls_model_type': 'chat', 'ls_temperature': 0.7})
(AIMessageChunk(content=' capital', additional_kwargs={}, response_metadata={}, id='...'), {'langgraph_step': 1, 'langgraph_node': 'a', 'langgraph_triggers': ['start:a'], ...})
(AIMessageChunk(content=' of', additional_kwargs={}, response_metadata={}, id='...'), {...})
(AIMessageChunk(content=' France', additional_kwargs={}, response_metadata={}, id='...'), {...})
(AIMessageChunk(content=' is', additional_kwargs={}, response_metadata={}, id='...'), {...})
(AIMessageChunk(content=' Paris', additional_kwargs={}, response_metadata={}, id='...'), {...})

astream async

astream(
    input: Union[dict[str, Any], Any],
    config: Optional[RunnableConfig] = None,
    *,
    stream_mode: Optional[
        Union[StreamMode, list[StreamMode]]
    ] = None,
    output_keys: Optional[Union[str, Sequence[str]]] = None,
    interrupt_before: Optional[
        Union[All, Sequence[str]]
    ] = None,
    interrupt_after: Optional[
        Union[All, Sequence[str]]
    ] = None,
    checkpoint_during: Optional[bool] = None,
    debug: Optional[bool] = None,
    subgraphs: bool = False
) -> AsyncIterator[Union[dict[str, Any], Any]]

Stream graph steps for a single input.

Parameters:

  • input (Union[dict[str, Any], Any]) –

    The input to the graph.

  • config (Optional[RunnableConfig], default: None ) –

    The configuration to use for the run.

  • stream_mode (Optional[Union[StreamMode, list[StreamMode]]], default: None ) –

    The mode to stream output, defaults to self.stream_mode. Options are:

    • "values": Emit all values in the state after each step. When used with functional API, values are emitted once at the end of the workflow.
    • "updates": Emit only the node or task names and updates returned by the nodes or tasks after each step. If multiple updates are made in the same step (e.g. multiple nodes are run) then those updates are emitted separately.
    • "custom": Emit custom data from inside nodes or tasks using StreamWriter.
    • "messages": Emit LLM messages token-by-token together with metadata for any LLM invocations inside nodes or tasks.
    • "debug": Emit debug events with as much information as possible for each step.
  • output_keys (Optional[Union[str, Sequence[str]]], default: None ) –

    The keys to stream, defaults to all non-context channels.

  • interrupt_before (Optional[Union[All, Sequence[str]]], default: None ) –

    Nodes to interrupt before, defaults to all nodes in the graph.

  • interrupt_after (Optional[Union[All, Sequence[str]]], default: None ) –

    Nodes to interrupt after, defaults to all nodes in the graph.

  • checkpoint_during (Optional[bool], default: None ) –

    Whether to checkpoint intermediate steps, defaults to True. If False, only the final checkpoint is saved.

  • debug (Optional[bool], default: None ) –

    Whether to print debug information during execution, defaults to False.

  • subgraphs (bool, default: False ) –

    Whether to stream subgraphs, defaults to False.

Yields:

  • AsyncIterator[Union[dict[str, Any], Any]]

    The output of each step in the graph. The output shape depends on the stream_mode.

Examples:

Using different stream modes with a graph:

>>> import operator
>>> from typing_extensions import Annotated, TypedDict
>>> from langgraph.graph import StateGraph, START
...
>>> class State(TypedDict):
...     alist: Annotated[list, operator.add]
...     another_list: Annotated[list, operator.add]
...
>>> builder = StateGraph(State)
>>> builder.add_node("a", lambda _state: {"another_list": ["hi"]})
>>> builder.add_node("b", lambda _state: {"alist": ["there"]})
>>> builder.add_edge("a", "b")
>>> builder.add_edge(START, "a")
>>> graph = builder.compile()
With stream_mode="values":

>>> async for event in graph.astream({"alist": ['Ex for stream_mode="values"']}, stream_mode="values"):
...     print(event)
{'alist': ['Ex for stream_mode="values"'], 'another_list': []}
{'alist': ['Ex for stream_mode="values"'], 'another_list': ['hi']}
{'alist': ['Ex for stream_mode="values"', 'there'], 'another_list': ['hi']}
With stream_mode="updates":

>>> async for event in graph.astream({"alist": ['Ex for stream_mode="updates"']}, stream_mode="updates"):
...     print(event)
{'a': {'another_list': ['hi']}}
{'b': {'alist': ['there']}}
With stream_mode="debug":

>>> async for event in graph.astream({"alist": ['Ex for stream_mode="debug"']}, stream_mode="debug"):
...     print(event)
{'type': 'task', 'timestamp': '2024-06-23T...+00:00', 'step': 1, 'payload': {'id': '...', 'name': 'a', 'input': {'alist': ['Ex for stream_mode="debug"'], 'another_list': []}, 'triggers': ['start:a']}}
{'type': 'task_result', 'timestamp': '2024-06-23T...+00:00', 'step': 1, 'payload': {'id': '...', 'name': 'a', 'result': [('another_list', ['hi'])]}}
{'type': 'task', 'timestamp': '2024-06-23T...+00:00', 'step': 2, 'payload': {'id': '...', 'name': 'b', 'input': {'alist': ['Ex for stream_mode="debug"'], 'another_list': ['hi']}, 'triggers': ['a']}}
{'type': 'task_result', 'timestamp': '2024-06-23T...+00:00', 'step': 2, 'payload': {'id': '...', 'name': 'b', 'result': [('alist', ['there'])]}}

With stream_mode="custom":

>>> from langgraph.types import StreamWriter
...
>>> async def node_a(state: State, writer: StreamWriter):
...     writer({"custom_data": "foo"})
...     return {"alist": ["hi"]}
...
>>> builder = StateGraph(State)
>>> builder.add_node("a", node_a)
>>> builder.add_edge(START, "a")
>>> graph = builder.compile()
...
>>> async for event in graph.astream({"alist": ['Ex for stream_mode="custom"']}, stream_mode="custom"):
...     print(event)
{'custom_data': 'foo'}

With stream_mode="messages":

>>> from typing_extensions import Annotated, TypedDict
>>> from langgraph.graph import StateGraph, START
>>> from langchain_openai import ChatOpenAI
...
>>> llm = ChatOpenAI(model="gpt-4o-mini")
...
>>> class State(TypedDict):
...     question: str
...     answer: str
...
>>> async def node_a(state: State):
...     response = await llm.ainvoke(state["question"])
...     return {"answer": response.content}
...
>>> builder = StateGraph(State)
>>> builder.add_node("a", node_a)
>>> builder.add_edge(START, "a")
>>> graph = builder.compile()

>>> async for event in graph.astream({"question": "What is the capital of France?"}, stream_mode="messages"):
...     print(event)
(AIMessageChunk(content='The', additional_kwargs={}, response_metadata={}, id='...'), {'langgraph_step': 1, 'langgraph_node': 'a', 'langgraph_triggers': ['start:a'], 'langgraph_path': ('__pregel_pull', 'a'), 'langgraph_checkpoint_ns': '...', 'checkpoint_ns': '...', 'ls_provider': 'openai', 'ls_model_name': 'gpt-4o-mini', 'ls_model_type': 'chat', 'ls_temperature': 0.7})
(AIMessageChunk(content=' capital', additional_kwargs={}, response_metadata={}, id='...'), {'langgraph_step': 1, 'langgraph_node': 'a', 'langgraph_triggers': ['start:a'], ...})
(AIMessageChunk(content=' of', additional_kwargs={}, response_metadata={}, id='...'), {...})
(AIMessageChunk(content=' France', additional_kwargs={}, response_metadata={}, id='...'), {...})
(AIMessageChunk(content=' is', additional_kwargs={}, response_metadata={}, id='...'), {...})
(AIMessageChunk(content=' Paris', additional_kwargs={}, response_metadata={}, id='...'), {...})

invoke

invoke(
    input: Union[dict[str, Any], Any],
    config: Optional[RunnableConfig] = None,
    *,
    stream_mode: StreamMode = "values",
    output_keys: Optional[Union[str, Sequence[str]]] = None,
    interrupt_before: Optional[
        Union[All, Sequence[str]]
    ] = None,
    interrupt_after: Optional[
        Union[All, Sequence[str]]
    ] = None,
    checkpoint_during: Optional[bool] = None,
    debug: Optional[bool] = None,
    **kwargs: Any
) -> Union[dict[str, Any], Any]

Run the graph with a single input and config.

Parameters:

  • input (Union[dict[str, Any], Any]) –

    The input data for the graph. It can be a dictionary or any other type.

  • config (Optional[RunnableConfig], default: None ) –

    Optional. The configuration for the graph run.

  • stream_mode (StreamMode, default: 'values' ) –

The stream mode for the graph run. Default is "values".

  • output_keys (Optional[Union[str, Sequence[str]]], default: None ) –

    Optional. The output keys to retrieve from the graph run.

  • interrupt_before (Optional[Union[All, Sequence[str]]], default: None ) –

    Optional. The nodes to interrupt the graph run before.

  • interrupt_after (Optional[Union[All, Sequence[str]]], default: None ) –

    Optional. The nodes to interrupt the graph run after.

  • checkpoint_during (Optional[bool], default: None ) –

    Whether to checkpoint intermediate steps, defaults to True. If False, only the final checkpoint is saved.

  • debug (Optional[bool], default: None ) –

    Optional. Enable debug mode for the graph run.

  • **kwargs (Any, default: {} ) –

    Additional keyword arguments to pass to the graph run.

Returns:

  • Union[dict[str, Any], Any]

    The output of the graph run. If stream_mode is "values", it returns the latest output; otherwise, it returns a list of output chunks.
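
Example:

A minimal sketch, reusing the two-node graph compiled in the astream examples above (printed values are illustrative):

>>> result = graph.invoke({"alist": ["Ex for invoke"]})
>>> print(result)
{'alist': ['Ex for invoke', 'there'], 'another_list': ['hi']}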

ainvoke async

ainvoke(
    input: Union[dict[str, Any], Any],
    config: Optional[RunnableConfig] = None,
    *,
    stream_mode: StreamMode = "values",
    output_keys: Optional[Union[str, Sequence[str]]] = None,
    interrupt_before: Optional[
        Union[All, Sequence[str]]
    ] = None,
    interrupt_after: Optional[
        Union[All, Sequence[str]]
    ] = None,
    checkpoint_during: Optional[bool] = None,
    debug: Optional[bool] = None,
    **kwargs: Any
) -> Union[dict[str, Any], Any]

Asynchronously invoke the graph on a single input.

Parameters:

  • input (Union[dict[str, Any], Any]) –

    The input data for the computation. It can be a dictionary or any other type.

  • config (Optional[RunnableConfig], default: None ) –

    Optional. The configuration for the computation.

  • stream_mode (StreamMode, default: 'values' ) –

    Optional. The stream mode for the computation. Default is "values".

  • output_keys (Optional[Union[str, Sequence[str]]], default: None ) –

    Optional. The output keys to include in the result. Default is None.

  • interrupt_before (Optional[Union[All, Sequence[str]]], default: None ) –

    Optional. The nodes to interrupt before. Default is None.

  • interrupt_after (Optional[Union[All, Sequence[str]]], default: None ) –

    Optional. The nodes to interrupt after. Default is None.

  • checkpoint_during (Optional[bool], default: None ) –

    Whether to checkpoint intermediate steps, defaults to True. If False, only the final checkpoint is saved.

  • debug (Optional[bool], default: None ) –

    Optional. Whether to enable debug mode. Default is None.

  • **kwargs (Any, default: {} ) –

    Additional keyword arguments.

Returns:

  • Union[dict[str, Any], Any]

    The result of the computation. If stream_mode is "values", it returns the latest value; otherwise, it returns a list of output chunks.
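
Example:

A minimal sketch, using the same two-node graph as in the astream examples above (printed values are illustrative):

>>> result = await graph.ainvoke({"alist": ["Ex for ainvoke"]})
>>> print(result)
{'alist': ['Ex for ainvoke', 'there'], 'another_list': ['hi']}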

PregelNode

Bases: Runnable

A node in a Pregel graph. This won't be invoked as a runnable by the graph itself, but instead acts as a container for the components necessary to make a PregelExecutableTask for a node.

name instance-attribute

name: Optional[str]

The name of the Runnable. Used for debugging and tracing.

InputType property

InputType: type[Input]

The type of input this Runnable accepts specified as a type annotation.

OutputType property

OutputType: type[Output]

The type of output this Runnable produces specified as a type annotation.

input_schema property

input_schema: type[BaseModel]

The type of input this Runnable accepts specified as a pydantic model.

output_schema property

output_schema: type[BaseModel]

The type of output this Runnable produces specified as a pydantic model.

config_specs property

config_specs: list[ConfigurableFieldSpec]

List configurable fields for this Runnable.

channels instance-attribute

channels: Union[list[str], Mapping[str, str]] = channels

The channels that will be passed as input to bound. If a list, the node will be invoked with the first of these channels that isn't empty. If a dict, the keys are the names of the channels, and the values are the keys to use in the input to bound.

triggers instance-attribute

triggers: list[str] = list(triggers)

If any of these channels is written to, this node will be triggered in the next step.

mapper instance-attribute

mapper: Optional[Callable[[Any], Any]] = mapper

A function to transform the input before passing it to bound.

writers instance-attribute

writers: list[Runnable] = writers or []

A list of writers that will be executed after bound, responsible for taking the output of bound and writing it to the appropriate channels.

bound instance-attribute

bound: Runnable[Any, Any] = (
    bound if bound is not None else DEFAULT_BOUND
)

The main logic of the node. This will be invoked with the input from channels.

retry_policy instance-attribute

retry_policy: Optional[Sequence[RetryPolicy]]

The retry policies to use when invoking the node.

tags instance-attribute

tags: Optional[Sequence[str]] = tags

Tags to attach to the node for tracing.

metadata instance-attribute

metadata: Optional[Mapping[str, Any]] = metadata

Metadata to attach to the node for tracing.

subgraphs instance-attribute

subgraphs: Sequence[PregelProtocol]

Subgraphs used by the node.

flat_writers cached property

flat_writers: list[Runnable]

Get writers with optimizations applied. Dedupes consecutive ChannelWrites.

node cached property

node: Optional[Runnable[Any, Any]]

Get a runnable that combines bound and writers.

input_cache_key cached property

input_cache_key: INPUT_CACHE_KEY_TYPE

Get a cache key for the input to the node. This is used to avoid calculating the same input multiple times.

get_name

get_name(
    suffix: Optional[str] = None,
    *,
    name: Optional[str] = None
) -> str

Get the name of the Runnable.

get_input_schema

get_input_schema(
    config: Optional[RunnableConfig] = None,
) -> type[BaseModel]

Get a pydantic model that can be used to validate input to the Runnable.

Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the Runnable is invoked with.

This method allows you to get an input schema for a specific configuration.

Parameters:

  • config (Optional[RunnableConfig], default: None ) –

    A config to use when generating the schema.

Returns:

  • type[BaseModel]

    A pydantic model that can be used to validate input.
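
Example:

A minimal sketch, mirroring the get_input_jsonschema example below:

.. code-block:: python

    from langchain_core.runnables import RunnableLambda

    def add_one(x: int) -> int:
        return x + 1

    runnable = RunnableLambda(add_one)

    # Returns a pydantic model class describing the expected input.
    print(runnable.get_input_schema())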

get_input_jsonschema

get_input_jsonschema(
    config: Optional[RunnableConfig] = None,
) -> dict[str, Any]

Get a JSON schema that represents the input to the Runnable.

Parameters:

  • config (Optional[RunnableConfig], default: None ) –

    A config to use when generating the schema.

Returns:

  • dict[str, Any]

    A JSON schema that represents the input to the Runnable.

Example:

.. code-block:: python

    from langchain_core.runnables import RunnableLambda

    def add_one(x: int) -> int:
        return x + 1

    runnable = RunnableLambda(add_one)

    print(runnable.get_input_jsonschema())

.. versionadded:: 0.3.0

get_output_schema

get_output_schema(
    config: Optional[RunnableConfig] = None,
) -> type[BaseModel]

Get a pydantic model that can be used to validate the output of the Runnable.

Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the Runnable is invoked with.

This method allows you to get an output schema for a specific configuration.

Parameters:

  • config (Optional[RunnableConfig], default: None ) –

    A config to use when generating the schema.

Returns:

  • type[BaseModel]

    A pydantic model that can be used to validate output.
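
Example:

A minimal sketch, mirroring the get_output_jsonschema example below:

.. code-block:: python

    from langchain_core.runnables import RunnableLambda

    def add_one(x: int) -> int:
        return x + 1

    runnable = RunnableLambda(add_one)

    # Returns a pydantic model class describing the produced output.
    print(runnable.get_output_schema())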

get_output_jsonschema

get_output_jsonschema(
    config: Optional[RunnableConfig] = None,
) -> dict[str, Any]

Get a JSON schema that represents the output of the Runnable.

Parameters:

  • config (Optional[RunnableConfig], default: None ) –

    A config to use when generating the schema.

Returns:

  • dict[str, Any]

    A JSON schema that represents the output of the Runnable.

Example:

.. code-block:: python

    from langchain_core.runnables import RunnableLambda

    def add_one(x: int) -> int:
        return x + 1

    runnable = RunnableLambda(add_one)

    print(runnable.get_output_jsonschema())

.. versionadded:: 0.3.0

config_schema

config_schema(
    *, include: Optional[Sequence[str]] = None
) -> type[BaseModel]

The type of config this Runnable accepts specified as a pydantic model.

To mark a field as configurable, see the configurable_fields and configurable_alternatives methods.

Parameters:

  • include (Optional[Sequence[str]], default: None ) –

    A list of fields to include in the config schema.

Returns:

  • type[BaseModel]

    A pydantic model that can be used to validate config.
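
Example:

A minimal sketch; the include filter limits which top-level config fields appear in the schema ("tags" and "metadata" are standard RunnableConfig keys):

.. code-block:: python

    from langchain_core.runnables import RunnableLambda

    runnable = RunnableLambda(lambda x: x)

    # A pydantic model covering only the requested config fields.
    print(runnable.config_schema(include=["tags", "metadata"]))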

get_config_jsonschema

get_config_jsonschema(
    *, include: Optional[Sequence[str]] = None
) -> dict[str, Any]

Get a JSON schema that represents the config of the Runnable.

Parameters:

  • include (Optional[Sequence[str]], default: None ) –

    A list of fields to include in the config schema.

Returns:

  • dict[str, Any]

    A JSON schema that represents the config of the Runnable.

.. versionadded:: 0.3.0

get_graph

get_graph(config: Optional[RunnableConfig] = None) -> Graph

Return a graph representation of this Runnable.

get_prompts

get_prompts(
    config: Optional[RunnableConfig] = None,
) -> list[BasePromptTemplate]

Return a list of prompts used by this Runnable.

pick

pick(
    keys: Union[str, list[str]],
) -> RunnableSerializable[Any, Any]

Pick keys from the output dict of this Runnable.

Pick single key

.. code-block:: python

import json

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
chain = RunnableMap(str=as_str, json=as_json)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3]}

json_only_chain = chain.pick("json")
json_only_chain.invoke("[1, 2, 3]")
# -> [1, 2, 3]
Pick list of keys

.. code-block:: python

from typing import Any

import json

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
def as_bytes(x: Any) -> bytes:
    return bytes(x, "utf-8")

chain = RunnableMap(
    str=as_str,
    json=as_json,
    bytes=RunnableLambda(as_bytes)
)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3], "bytes": b"[1, 2, 3]"}

json_and_bytes_chain = chain.pick(["json", "bytes"])
json_and_bytes_chain.invoke("[1, 2, 3]")
# -> {"json": [1, 2, 3], "bytes": b"[1, 2, 3]"}

assign

assign(
    **kwargs: Union[
        Runnable[dict[str, Any], Any],
        Callable[[dict[str, Any]], Any],
        Mapping[
            str,
            Union[
                Runnable[dict[str, Any], Any],
                Callable[[dict[str, Any]], Any],
            ],
        ],
    ],
) -> RunnableSerializable[Any, Any]

Assigns new fields to the dict output of this Runnable.

Returns a new Runnable.

.. code-block:: python

from langchain_community.llms.fake import FakeStreamingListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import SystemMessagePromptTemplate
from langchain_core.runnables import Runnable
from operator import itemgetter

prompt = (
    SystemMessagePromptTemplate.from_template("You are a nice assistant.")
    + "{question}"
)
llm = FakeStreamingListLLM(responses=["foo-lish"])

chain: Runnable = prompt | llm | {"str": StrOutputParser()}

chain_with_assign = chain.assign(hello=itemgetter("str") | llm)

print(chain_with_assign.input_schema.model_json_schema())
# {'title': 'PromptInput', 'type': 'object', 'properties':
# {'question': {'title': 'Question', 'type': 'string'}}}
print(chain_with_assign.output_schema.model_json_schema())
# {'title': 'RunnableSequenceOutput', 'type': 'object', 'properties':
# {'str': {'title': 'Str', 'type': 'string'},
#  'hello': {'title': 'Hello', 'type': 'string'}}}

batch

batch(
    inputs: list[Input],
    config: Optional[
        Union[RunnableConfig, list[RunnableConfig]]
    ] = None,
    *,
    return_exceptions: bool = False,
    **kwargs: Optional[Any]
) -> list[Output]

Default implementation runs invoke in parallel using a thread pool executor.

The default implementation of batch works well for IO bound runnables.

Subclasses should override this method if they can batch more efficiently; e.g., if the underlying Runnable uses an API which supports a batch mode.
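
A minimal sketch (output comment is illustrative):

.. code-block:: python

    from langchain_core.runnables import RunnableLambda

    runnable = RunnableLambda(lambda x: x + 1)

    # Each input is passed to invoke, with calls running in a thread pool.
    print(runnable.batch([1, 2, 3]))  # [2, 3, 4]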

batch_as_completed

batch_as_completed(
    inputs: Sequence[Input],
    config: Optional[
        Union[RunnableConfig, Sequence[RunnableConfig]]
    ] = None,
    *,
    return_exceptions: bool = False,
    **kwargs: Optional[Any]
) -> Iterator[tuple[int, Union[Output, Exception]]]

Run invoke in parallel on a list of inputs.

Yields results as they complete.
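
A minimal sketch; results are (index, output) pairs, yielded in completion order:

.. code-block:: python

    from langchain_core.runnables import RunnableLambda

    runnable = RunnableLambda(lambda x: x * 2)

    for index, output in runnable.batch_as_completed([1, 2, 3]):
        print(index, output)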

abatch async

abatch(
    inputs: list[Input],
    config: Optional[
        Union[RunnableConfig, list[RunnableConfig]]
    ] = None,
    *,
    return_exceptions: bool = False,
    **kwargs: Optional[Any]
) -> list[Output]

Default implementation runs ainvoke in parallel using asyncio.gather.

The default implementation of abatch works well for IO bound runnables.

Subclasses should override this method if they can batch more efficiently; e.g., if the underlying Runnable uses an API which supports a batch mode.

Parameters:

  • inputs (list[Input]) –

    A list of inputs to the Runnable.

  • config (Optional[Union[RunnableConfig, list[RunnableConfig]]], default: None ) –

    A config to use when invoking the Runnable. The config supports standard keys like 'tags', 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

  • return_exceptions (bool, default: False ) –

    Whether to return exceptions instead of raising them. Defaults to False.

  • kwargs (Optional[Any], default: {} ) –

    Additional keyword arguments to pass to the Runnable.

Returns:

  • list[Output]

    A list of outputs from the Runnable.
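
A minimal sketch (output comment is illustrative):

.. code-block:: python

    import asyncio

    from langchain_core.runnables import RunnableLambda

    runnable = RunnableLambda(lambda x: x + 1)

    async def main() -> None:
        # Inputs are awaited concurrently via asyncio.gather.
        print(await runnable.abatch([1, 2, 3]))  # [2, 3, 4]

    asyncio.run(main())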

abatch_as_completed async

abatch_as_completed(
    inputs: Sequence[Input],
    config: Optional[
        Union[RunnableConfig, Sequence[RunnableConfig]]
    ] = None,
    *,
    return_exceptions: bool = False,
    **kwargs: Optional[Any]
) -> AsyncIterator[tuple[int, Union[Output, Exception]]]

Run ainvoke in parallel on a list of inputs.

Yields results as they complete.

Parameters:

  • inputs (Sequence[Input]) –

    A list of inputs to the Runnable.

  • config (Optional[Union[RunnableConfig, Sequence[RunnableConfig]]], default: None ) –

A config to use when invoking the Runnable. The config supports standard keys like 'tags', 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

  • return_exceptions (bool, default: False ) –

    Whether to return exceptions instead of raising them. Defaults to False.

  • kwargs (Optional[Any], default: {} ) –

    Additional keyword arguments to pass to the Runnable.

Yields:

  • AsyncIterator[tuple[int, Union[Output, Exception]]]

    A tuple of the index of the input and the output from the Runnable.
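
A minimal sketch; (index, output) pairs are yielded as each input finishes:

.. code-block:: python

    import asyncio

    from langchain_core.runnables import RunnableLambda

    runnable = RunnableLambda(lambda x: x * 2)

    async def main() -> None:
        async for index, output in runnable.abatch_as_completed([1, 2, 3]):
            print(index, output)

    asyncio.run(main())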

astream_log async

astream_log(
    input: Any,
    config: Optional[RunnableConfig] = None,
    *,
    diff: bool = True,
    with_streamed_output_list: bool = True,
    include_names: Optional[Sequence[str]] = None,
    include_types: Optional[Sequence[str]] = None,
    include_tags: Optional[Sequence[str]] = None,
    exclude_names: Optional[Sequence[str]] = None,
    exclude_types: Optional[Sequence[str]] = None,
    exclude_tags: Optional[Sequence[str]] = None,
    **kwargs: Any
) -> Union[
    AsyncIterator[RunLogPatch], AsyncIterator[RunLog]
]

Stream all output from a Runnable, as reported to the callback system.

This includes all inner runs of LLMs, Retrievers, Tools, etc.

Output is streamed as Log objects, which include a list of Jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run.

The Jsonpatch ops can be applied in order to construct state.

Parameters:

  • input (Any) –

    The input to the Runnable.

  • config (Optional[RunnableConfig], default: None ) –

    The config to use for the Runnable.

  • diff (bool, default: True ) –

    Whether to yield diffs between each step or the current state.

  • with_streamed_output_list (bool, default: True ) –

    Whether to yield the streamed_output list.

  • include_names (Optional[Sequence[str]], default: None ) –

    Only include logs with these names.

  • include_types (Optional[Sequence[str]], default: None ) –

    Only include logs with these types.

  • include_tags (Optional[Sequence[str]], default: None ) –

    Only include logs with these tags.

  • exclude_names (Optional[Sequence[str]], default: None ) –

    Exclude logs with these names.

  • exclude_types (Optional[Sequence[str]], default: None ) –

    Exclude logs with these types.

  • exclude_tags (Optional[Sequence[str]], default: None ) –

    Exclude logs with these tags.

  • kwargs (Any, default: {} ) –

    Additional keyword arguments to pass to the Runnable.

Yields:

  • Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]

    A RunLogPatch or RunLog object.
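
A minimal sketch; with diff=True (the default), RunLogPatch objects are yielded, while diff=False yields cumulative RunLog states:

.. code-block:: python

    import asyncio

    from langchain_core.runnables import RunnableLambda

    runnable = RunnableLambda(lambda s: s[::-1])

    async def main() -> None:
        async for patch in runnable.astream_log("hello"):
            print(patch)

    asyncio.run(main())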

astream_events async

astream_events(
    input: Any,
    config: Optional[RunnableConfig] = None,
    *,
    version: Literal["v1", "v2"] = "v2",
    include_names: Optional[Sequence[str]] = None,
    include_types: Optional[Sequence[str]] = None,
    include_tags: Optional[Sequence[str]] = None,
    exclude_names: Optional[Sequence[str]] = None,
    exclude_types: Optional[Sequence[str]] = None,
    exclude_tags: Optional[Sequence[str]] = None,
    **kwargs: Any
) -> AsyncIterator[StreamEvent]

Generate a stream of events.

Use to create an iterator over StreamEvents that provide real-time information about the progress of the Runnable, including StreamEvents from intermediate results.

A StreamEvent is a dictionary with the following schema:

  • event: str - Event names are of the format: on_[runnable_type]_(start|stream|end).
  • name: str - The name of the Runnable that generated the event.
  • run_id: str - randomly generated ID associated with the given execution of the Runnable that emitted the event. A child Runnable that gets invoked as part of the execution of a parent Runnable is assigned its own unique ID.
  • parent_ids: list[str] - The IDs of the parent runnables that generated the event. The root Runnable will have an empty list. The order of the parent IDs is from the root to the immediate parent. Only available for v2 version of the API. The v1 version of the API will return an empty list.
  • tags: Optional[list[str]] - The tags of the Runnable that generated the event.
  • metadata: Optional[dict[str, Any]] - The metadata of the Runnable that generated the event.
  • data: dict[str, Any]

Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table.

ATTENTION: This reference table is for the V2 version of the schema.

event | name | chunk | input | output
--- | --- | --- | --- | ---
on_chat_model_start | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} |
on_chat_model_stream | [model name] | AIMessageChunk(content="hello") | |
on_chat_model_end | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} | AIMessageChunk(content="hello world")
on_llm_start | [model name] | | {'input': 'hello'} |
on_llm_stream | [model name] | 'Hello' | |
on_llm_end | [model name] | | 'Hello human!' |
on_chain_start | format_docs | | |
on_chain_stream | format_docs | "hello world!, goodbye world!" | |
on_chain_end | format_docs | | [Document(...)] | "hello world!, goodbye world!"
on_tool_start | some_tool | | {"x": 1, "y": "2"} |
on_tool_end | some_tool | | | {"x": 1, "y": "2"}
on_retriever_start | [retriever name] | | {"query": "hello"} |
on_retriever_end | [retriever name] | | {"query": "hello"} | [Document(...), ..]
on_prompt_start | [template_name] | | {"question": "hello"} |
on_prompt_end | [template_name] | | {"question": "hello"} | ChatPromptValue(messages: [SystemMessage, ...])

In addition to the standard events, users can also dispatch custom events (see example below).

Custom events will only be surfaced in the v2 version of the API!

A custom event has the following format:

Attribute | Type | Description
--- | --- | ---
name | str | A user defined name for the event.
data | Any | The data associated with the event. This can be anything, though we suggest making it JSON serializable.

Here are declarations associated with the standard events shown above:

format_docs:

.. code-block:: python

from langchain_core.documents import Document
from langchain_core.runnables import RunnableLambda

def format_docs(docs: list[Document]) -> str:
    '''Format the docs.'''
    return ", ".join([doc.page_content for doc in docs])

format_docs = RunnableLambda(format_docs)

some_tool:

.. code-block:: python

from langchain_core.tools import tool

@tool
def some_tool(x: int, y: str) -> dict:
    '''Some_tool.'''
    return {"x": x, "y": y}

prompt:

.. code-block:: python

from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_messages(
    [("system", "You are Cat Agent 007"), ("human", "{question}")]
).with_config({"run_name": "my_template", "tags": ["my_template"]})

Example:

.. code-block:: python

from langchain_core.runnables import RunnableLambda

async def reverse(s: str) -> str:
    return s[::-1]

chain = RunnableLambda(func=reverse)

events = [
    event async for event in chain.astream_events("hello", version="v2")
]

# will produce the following events (run_id and parent_ids
# have been omitted for brevity):
[
    {
        "data": {"input": "hello"},
        "event": "on_chain_start",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"chunk": "olleh"},
        "event": "on_chain_stream",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"output": "olleh"},
        "event": "on_chain_end",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
]

Example: Dispatch Custom Event

.. code-block:: python

from langchain_core.callbacks.manager import (
    adispatch_custom_event,
)
from langchain_core.runnables import RunnableLambda, RunnableConfig
import asyncio


async def slow_thing(some_input: str, config: RunnableConfig) -> str:
    """Do something that takes a long time."""
    await asyncio.sleep(1) # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 1 of 3"},
        config=config # Must be included for python < 3.10
    )
    await asyncio.sleep(1) # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 2 of 3"},
        config=config # Must be included for python < 3.10
    )
    await asyncio.sleep(1) # Placeholder for some slow operation
    return "Done"

slow_thing = RunnableLambda(slow_thing)

async for event in slow_thing.astream_events("some_input", version="v2"):
    print(event)

Parameters:

  • input (Any) –

    The input to the Runnable.

  • config (Optional[RunnableConfig], default: None ) –

    The config to use for the Runnable.

  • version (Literal['v1', 'v2'], default: 'v2' ) –

The version of the schema to use, either v2 or v1. Users should use v2. v1 is for backwards compatibility and will be deprecated in 0.4.0. No default will be assigned until the API is stabilized. Custom events will only be surfaced in v2.

  • include_names (Optional[Sequence[str]], default: None ) –

    Only include events from runnables with matching names.

  • include_types (Optional[Sequence[str]], default: None ) –

    Only include events from runnables with matching types.

  • include_tags (Optional[Sequence[str]], default: None ) –

    Only include events from runnables with matching tags.

  • exclude_names (Optional[Sequence[str]], default: None ) –

    Exclude events from runnables with matching names.

  • exclude_types (Optional[Sequence[str]], default: None ) –

    Exclude events from runnables with matching types.

  • exclude_tags (Optional[Sequence[str]], default: None ) –

    Exclude events from runnables with matching tags.

  • kwargs (Any, default: {} ) –

    Additional keyword arguments to pass to the Runnable. These will be passed to astream_log as this implementation of astream_events is built on top of astream_log.

Yields:

  • AsyncIterator[StreamEvent]

    An async stream of StreamEvents.

Raises:

  • NotImplementedError

    If the version is not v1 or v2.

transform

transform(
    input: Iterator[Input],
    config: Optional[RunnableConfig] = None,
    **kwargs: Optional[Any]
) -> Iterator[Output]

Default implementation of transform, which buffers input and calls stream.

Subclasses should override this method if they can start producing output while input is still being generated.

Parameters:

  • input (Iterator[Input]) –

    An iterator of inputs to the Runnable.

  • config (Optional[RunnableConfig], default: None ) –

    The config to use for the Runnable. Defaults to None.

  • kwargs (Optional[Any], default: {} ) –

    Additional keyword arguments to pass to the Runnable.

Yields:

  • Output

    The output of the Runnable.
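
A minimal sketch; the string chunks are buffered and concatenated before the wrapped function runs:

.. code-block:: python

    from langchain_core.runnables import RunnableLambda

    runnable = RunnableLambda(lambda s: s.upper())

    for chunk in runnable.transform(iter(["hello", " world"])):
        print(chunk)  # HELLO WORLD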

atransform async

atransform(
    input: AsyncIterator[Input],
    config: Optional[RunnableConfig] = None,
    **kwargs: Optional[Any]
) -> AsyncIterator[Output]

Default implementation of atransform, which buffers input and calls astream.

Subclasses should override this method if they can start producing output while input is still being generated.

Parameters:

  • input (AsyncIterator[Input]) –

    An async iterator of inputs to the Runnable.

  • config (Optional[RunnableConfig], default: None ) –

    The config to use for the Runnable. Defaults to None.

  • kwargs (Optional[Any], default: {} ) –

    Additional keyword arguments to pass to the Runnable.

Yields:

  • AsyncIterator[Output]

    The output of the Runnable.
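
An async counterpart to the transform sketch above:

.. code-block:: python

    import asyncio

    from langchain_core.runnables import RunnableLambda

    runnable = RunnableLambda(lambda s: s.upper())

    async def inputs():
        yield "hello"
        yield " world"

    async def main() -> None:
        # Input chunks are buffered, then the result is streamed.
        async for chunk in runnable.atransform(inputs()):
            print(chunk)  # HELLO WORLD

    asyncio.run(main())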

bind

bind(**kwargs: Any) -> Runnable[Input, Output]

Bind arguments to a Runnable, returning a new Runnable.

Useful when a Runnable in a chain requires an argument that is not in the output of the previous Runnable or included in the user input.

Parameters:

  • kwargs (Any, default: {} ) –

    The arguments to bind to the Runnable.

Returns:

  • Runnable[Input, Output]

    A new Runnable with the arguments bound.

Example:

.. code-block:: python

from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser

llm = ChatOllama(model='llama2')

# Without bind.
chain = (
    llm
    | StrOutputParser()
)

chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two three four five.'

# With bind.
chain = (
    llm.bind(stop=["three"])
    | StrOutputParser()
)

chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two'

with_config

with_config(
    config: Optional[RunnableConfig] = None, **kwargs: Any
) -> Runnable[Input, Output]

Bind config to a Runnable, returning a new Runnable.

Parameters:

  • config (Optional[RunnableConfig], default: None ) –

    The config to bind to the Runnable.

  • kwargs (Any, default: {} ) –

    Additional keyword arguments to pass to the Runnable.

Returns:

  • Runnable[Input, Output]

    A new Runnable with the config bound.
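
Example:

A minimal sketch; run_name and tags are standard RunnableConfig keys:

.. code-block:: python

    from langchain_core.runnables import RunnableLambda

    runnable = RunnableLambda(lambda x: x + 1)

    # Every invocation of the returned Runnable carries this config.
    configured = runnable.with_config({"run_name": "add_one", "tags": ["math"]})
    print(configured.invoke(1))  # 2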

with_listeners

with_listeners(
    *,
    on_start: Optional[
        Union[
            Callable[[Run], None],
            Callable[[Run, RunnableConfig], None],
        ]
    ] = None,
    on_end: Optional[
        Union[
            Callable[[Run], None],
            Callable[[Run, RunnableConfig], None],
        ]
    ] = None,
    on_error: Optional[
        Union[
            Callable[[Run], None],
            Callable[[Run, RunnableConfig], None],
        ]
    ] = None
) -> Runnable[Input, Output]

Bind lifecycle listeners to a Runnable, returning a new Runnable.

on_start: Called before the Runnable starts running, with the Run object.
on_end: Called after the Runnable finishes running, with the Run object.
on_error: Called if the Runnable throws an error, with the Run object.

The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.

Parameters:

  • on_start (Optional[Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]], default: None ) –

    Called before the Runnable starts running. Defaults to None.

  • on_end (Optional[Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]], default: None ) –

    Called after the Runnable finishes running. Defaults to None.

  • on_error (Optional[Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]], default: None ) –

    Called if the Runnable throws an error. Defaults to None.

Returns:

  • Runnable[Input, Output]

    A new Runnable with the listeners bound.

Example:

.. code-block:: python

from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run

import time

def test_runnable(time_to_sleep: int):
    time.sleep(time_to_sleep)

def fn_start(run_obj: Run):
    print("start_time:", run_obj.start_time)

def fn_end(run_obj: Run):
    print("end_time:", run_obj.end_time)

chain = RunnableLambda(test_runnable).with_listeners(
    on_start=fn_start,
    on_end=fn_end
)
chain.invoke(2)

with_alisteners

with_alisteners(
    *,
    on_start: Optional[AsyncListener] = None,
    on_end: Optional[AsyncListener] = None,
    on_error: Optional[AsyncListener] = None
) -> Runnable[Input, Output]

Bind async lifecycle listeners to a Runnable, returning a new Runnable.

on_start: Asynchronously called before the Runnable starts running.
on_end: Asynchronously called after the Runnable finishes running.
on_error: Asynchronously called if the Runnable throws an error.

The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.

Parameters:

  • on_start (Optional[AsyncListener], default: None ) –

    Asynchronously called before the Runnable starts running. Defaults to None.

  • on_end (Optional[AsyncListener], default: None ) –

    Asynchronously called after the Runnable finishes running. Defaults to None.

  • on_error (Optional[AsyncListener], default: None ) –

    Asynchronously called if the Runnable throws an error. Defaults to None.

Returns:

  • Runnable[Input, Output]

    A new Runnable with the listeners bound.

Example:

.. code-block:: python

from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run
from datetime import datetime, timezone
import time
import asyncio

def format_t(timestamp: float) -> str:
    return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat()

async def test_runnable(time_to_sleep: int):
    print(f"Runnable[{time_to_sleep}s]: starts at {format_t(time.time())}")
    await asyncio.sleep(time_to_sleep)
    print(f"Runnable[{time_to_sleep}s]: ends at {format_t(time.time())}")

async def fn_start(run_obj: Run):
    print(f"on start callback starts at {format_t(time.time())}")
    await asyncio.sleep(3)
    print(f"on start callback ends at {format_t(time.time())}")

async def fn_end(run_obj: Run):
    print(f"on end callback starts at {format_t(time.time())}")
    await asyncio.sleep(2)
    print(f"on end callback ends at {format_t(time.time())}")

runnable = RunnableLambda(test_runnable).with_alisteners(
    on_start=fn_start,
    on_end=fn_end
)
async def concurrent_runs():
    await asyncio.gather(runnable.ainvoke(2), runnable.ainvoke(3))

asyncio.run(concurrent_runs())
Result:
on start callback starts at 2025-03-01T07:05:22.875378+00:00
on start callback starts at 2025-03-01T07:05:22.875495+00:00
on start callback ends at 2025-03-01T07:05:25.878862+00:00
on start callback ends at 2025-03-01T07:05:25.878947+00:00
Runnable[2s]: starts at 2025-03-01T07:05:25.879392+00:00
Runnable[3s]: starts at 2025-03-01T07:05:25.879804+00:00
Runnable[2s]: ends at 2025-03-01T07:05:27.881998+00:00
on end callback starts at 2025-03-01T07:05:27.882360+00:00
Runnable[3s]: ends at 2025-03-01T07:05:28.881737+00:00
on end callback starts at 2025-03-01T07:05:28.882428+00:00
on end callback ends at 2025-03-01T07:05:29.883893+00:00
on end callback ends at 2025-03-01T07:05:30.884831+00:00

with_types

with_types(
    *,
    input_type: Optional[type[Input]] = None,
    output_type: Optional[type[Output]] = None
) -> Runnable[Input, Output]

Bind input and output types to a Runnable, returning a new Runnable.

Parameters:

  • input_type (Optional[type[Input]], default: None ) –

    The input type to bind to the Runnable. Defaults to None.

  • output_type (Optional[type[Output]], default: None ) –

    The output type to bind to the Runnable. Defaults to None.

Returns:

  • Runnable[Input, Output]

    A new Runnable with the types bound.
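
Example:

A minimal sketch; the declared types affect schema inference, not runtime behavior:

.. code-block:: python

    from langchain_core.runnables import RunnableLambda

    runnable = RunnableLambda(lambda x: str(x))

    typed = runnable.with_types(input_type=int, output_type=str)
    print(typed.invoke(1))  # '1'
    print(typed.get_input_jsonschema())  # schema now reflects the declared int input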

with_retry

with_retry(
    *,
    retry_if_exception_type: tuple[
        type[BaseException], ...
    ] = (Exception,),
    wait_exponential_jitter: bool = True,
    exponential_jitter_params: Optional[
        ExponentialJitterParams
    ] = None,
    stop_after_attempt: int = 3
) -> Runnable[Input, Output]

Create a new Runnable that retries the original Runnable on exceptions.

Parameters:

  • retry_if_exception_type (tuple[type[BaseException], ...], default: (Exception,) ) –

    A tuple of exception types to retry on. Defaults to (Exception,).

  • wait_exponential_jitter (bool, default: True ) –

    Whether to add jitter to the wait time between retries. Defaults to True.

  • stop_after_attempt (int, default: 3 ) –

    The maximum number of attempts to make before giving up. Defaults to 3.

  • exponential_jitter_params (Optional[ExponentialJitterParams], default: None ) –

    Parameters for tenacity.wait_exponential_jitter. Namely: initial, max, exp_base, and jitter (all float values).

Returns:

  • Runnable[Input, Output]

    A new Runnable that retries the original Runnable on exceptions.

Example:

.. code-block:: python

from langchain_core.runnables import RunnableLambda

count = 0


def _lambda(x: int) -> None:
    global count
    count = count + 1
    if x == 1:
        raise ValueError("x is 1")
    else:
        pass


runnable = RunnableLambda(_lambda)
try:
    runnable.with_retry(
        stop_after_attempt=2,
        retry_if_exception_type=(ValueError,),
    ).invoke(1)
except ValueError:
    pass

assert count == 2

map

map() -> Runnable[list[Input], list[Output]]

Return a new Runnable that maps a list of inputs to a list of outputs.

Calls invoke() with each input.

Returns:

  • Runnable[list[Input], list[Output]]

    A new Runnable that maps a list of inputs to a list of outputs.

Example:

.. code-block:: python

        from langchain_core.runnables import RunnableLambda

        def _lambda(x: int) -> int:
            return x + 1

        runnable = RunnableLambda(_lambda)
        print(runnable.map().invoke([1, 2, 3])) # [2, 3, 4]

with_fallbacks

with_fallbacks(
    fallbacks: Sequence[Runnable[Input, Output]],
    *,
    exceptions_to_handle: tuple[
        type[BaseException], ...
    ] = (Exception,),
    exception_key: Optional[str] = None
) -> RunnableWithFallbacks[Input, Output]

Add fallbacks to a Runnable, returning a new Runnable.

The new Runnable will try the original Runnable, and then each fallback in order, upon failures.

Parameters:

  • fallbacks (Sequence[Runnable[Input, Output]]) –

    A sequence of runnables to try if the original Runnable fails.

  • exceptions_to_handle (tuple[type[BaseException], ...], default: (Exception,) ) –

    A tuple of exception types to handle. Defaults to (Exception,).

  • exception_key (Optional[str], default: None ) –

    If string is specified then handled exceptions will be passed to fallbacks as part of the input under the specified key. If None, exceptions will not be passed to fallbacks. If used, the base Runnable and its fallbacks must accept a dictionary as input. Defaults to None.

Returns:

  • RunnableWithFallbacks[Input, Output]

    A new Runnable that will try the original Runnable, and then each

  • RunnableWithFallbacks[Input, Output]

    fallback in order, upon failures.

Example:

.. code-block:: python

    from typing import Iterator

    from langchain_core.runnables import RunnableGenerator


    def _generate_immediate_error(input: Iterator) -> Iterator[str]:
        raise ValueError()
        yield ""


    def _generate(input: Iterator) -> Iterator[str]:
        yield from "foo bar"


    runnable = RunnableGenerator(_generate_immediate_error).with_fallbacks(
        [RunnableGenerator(_generate)]
    )
    print("".join(runnable.stream({})))  # foo bar


as_tool

as_tool(
    args_schema: Optional[type[BaseModel]] = None,
    *,
    name: Optional[str] = None,
    description: Optional[str] = None,
    arg_types: Optional[dict[str, type]] = None
) -> BaseTool

Create a BaseTool from a Runnable.

as_tool will instantiate a BaseTool with a name, description, and args_schema from a Runnable. Where possible, schemas are inferred from runnable.get_input_schema. Alternatively (e.g., if the Runnable takes a dict as input and the specific dict keys are not typed), the schema can be specified directly with args_schema. You can also pass arg_types to just specify the required arguments and their types.

Parameters:

  • args_schema (Optional[type[BaseModel]], default: None ) –

    The schema for the tool. Defaults to None.

  • name (Optional[str], default: None ) –

    The name of the tool. Defaults to None.

  • description (Optional[str], default: None ) –

    The description of the tool. Defaults to None.

  • arg_types (Optional[dict[str, type]], default: None ) –

    A dictionary of argument names to types. Defaults to None.

Returns:

  • BaseTool

    A BaseTool instance.

Typed dict input:

.. code-block:: python

from typing_extensions import TypedDict
from langchain_core.runnables import RunnableLambda

class Args(TypedDict):
    a: int
    b: list[int]

def f(x: Args) -> str:
    return str(x["a"] * max(x["b"]))

runnable = RunnableLambda(f)
as_tool = runnable.as_tool()
as_tool.invoke({"a": 3, "b": [1, 2]})

dict input, specifying schema via args_schema:

.. code-block:: python

from typing import Any
from pydantic import BaseModel, Field
from langchain_core.runnables import RunnableLambda

def f(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))

class FSchema(BaseModel):
    """Apply a function to an integer and list of integers."""

    a: int = Field(..., description="Integer")
    b: list[int] = Field(..., description="List of ints")

runnable = RunnableLambda(f)
as_tool = runnable.as_tool(FSchema)
as_tool.invoke({"a": 3, "b": [1, 2]})

dict input, specifying schema via arg_types:

.. code-block:: python

from typing import Any
from langchain_core.runnables import RunnableLambda

def f(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))

runnable = RunnableLambda(f)
as_tool = runnable.as_tool(arg_types={"a": int, "b": list[int]})
as_tool.invoke({"a": 3, "b": [1, 2]})

String input:

.. code-block:: python

from langchain_core.runnables import RunnableLambda

def f(x: str) -> str:
    return x + "a"

def g(x: str) -> str:
    return x + "z"

runnable = RunnableLambda(f) | g
as_tool = runnable.as_tool()
as_tool.invoke("b")

.. versionadded:: 0.2.14
