Note
If you pass extra keys to the subgraph node (i.e., in addition to the shared keys), they will be ignored by the subgraph node. Similarly, if you return extra keys from the subgraph, they will be ignored by the parent graph.
Caution
If you are using LangGraph.js on the frontend, make sure you are not exposing any private keys! For chat models, this means you need to use something like WebLLM that can run client-side without authentication.
Set up LangSmith for LangGraph development
Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph — read more about how to get started here.
Tip
The resolutions or pnpm.overrides fields for yarn or pnpm must be set in the root package.json file. Also note that we specify exact versions for resolutions.
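For example, here is a minimal sketch of a root package.json that forces a single resolved version (the package name and version are illustrative):

```json
{
  "resolutions": {
    "@langchain/core": "0.3.23"
  },
  "pnpm": {
    "overrides": {
      "@langchain/core": "0.3.23"
    }
  }
}
```

Yarn reads the resolutions field, while pnpm reads pnpm.overrides; only the field for your package manager is needed.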
Prerequisites
This guide assumes familiarity with the following:
This functionality also requires @langchain/langgraph>=0.2.29.
Set up LangSmith for LangGraph development
Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph — read more about how to get started here.
Important
You might have noticed that we add an ends field as an extra param to the node where we use Command. This is necessary for graph compilation and validation, and tells LangGraph that nodeA can navigate to nodeB and nodeC.
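Here is a minimal sketch of that pattern (the node and channel names are hypothetical):

```typescript
import { Annotation, Command, StateGraph, START } from "@langchain/langgraph";

const StateAnnotation = Annotation.Root({
  foo: Annotation<string>,
});

// nodeA returns a Command, so its possible destinations must be declared
// via the `ends` option when the node is added to the graph.
const nodeA = async (_state: typeof StateAnnotation.State) => {
  return new Command({
    update: { foo: "a" },
    goto: Math.random() > 0.5 ? "nodeB" : "nodeC",
  });
};

const graph = new StateGraph(StateAnnotation)
  .addNode("nodeA", nodeA, { ends: ["nodeB", "nodeC"] })
  .addNode("nodeB", async () => ({ foo: "b" }))
  .addNode("nodeC", async () => ({ foo: "c" }))
  .addEdge(START, "nodeA")
  .compile();
```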
Prerequisites
This guide assumes familiarity with the following:
Set up LangSmith for LangGraph development
Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph — read more about how to get started here.
Compatibility
This section requires @langchain/langgraph>=0.2.20. For help upgrading, see this guide.
Note
If you are using a version of @langchain/core < 0.2.3, when calling chat models or LLMs you need to call await model.stream() within your nodes to get token-by-token streaming events, and aggregate final outputs if needed to update the graph state. In later versions of @langchain/core, this occurs automatically, and you can call await model.invoke().
For more on how to upgrade @langchain/core, check out the instructions here.
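Here is a minimal sketch of that older pattern, assuming an OpenAI chat model (any streaming-capable model works the same way):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { type BaseMessage, AIMessageChunk } from "@langchain/core/messages";
import { concat } from "@langchain/core/utils/stream";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// Older pattern (@langchain/core < 0.2.3): stream token-by-token inside the
// node, then aggregate the chunks into one message before updating state.
const callModel = async (state: { messages: BaseMessage[] }) => {
  const stream = await model.stream(state.messages);
  let final: AIMessageChunk | undefined;
  for await (const chunk of stream) {
    final = final === undefined ? chunk : concat(final, chunk);
  }
  return { messages: [final] };
};
```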
Streaming Support
Token streaming is supported by many, but not all chat models. Check to see if your LLM integration supports token streaming here (doc). Note that some integrations may support general token streaming but lack support for streaming tool calls.
Note
In this how-to, we will create our agent from scratch to be transparent (but verbose). You can accomplish similar functionality using the createReactAgent({ llm, tools }) (API doc) constructor. This may be more appropriate if you are used to LangChain's AgentExecutor class.
Note
These model requirements are not general requirements for using LangGraph - they are just requirements for this one example.
Compatibility
This section requires @langchain/langgraph>=0.2.20. For help upgrading, see this guide.
Important
If you want to use tools that return Command instances and update graph state, you can either use the prebuilt createReactAgent / ToolNode components, or implement your own tool-executing node that identifies Command objects returned by your tools and returns a mixed array of traditional state updates and Commands.
See this section for an example.
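For illustration, here is a minimal sketch of such a custom tool-executing node, assuming a hypothetical set_user_name tool and a userName state channel:

```typescript
import { z } from "zod";
import { tool } from "@langchain/core/tools";
import { ToolMessage, type AIMessage } from "@langchain/core/messages";
import { Command } from "@langchain/langgraph";

// Hypothetical tool that updates graph state by returning a Command.
const setUserName = tool(
  async ({ name }) => new Command({ update: { userName: name } }),
  {
    name: "set_user_name",
    description: "Record the user's name in graph state.",
    schema: z.object({ name: z.string() }),
  }
);

const toolsByName: Record<string, typeof setUserName> = {
  [setUserName.name]: setUserName,
};

// Custom tool-executing node: run each tool call from the last AI message and
// return a mixed array of Command instances and ordinary state updates.
const toolExecutor = async (state: { messages: AIMessage[] }) => {
  const lastMessage = state.messages[state.messages.length - 1];
  const outputs: (Command | { messages: ToolMessage[] })[] = [];
  for (const toolCall of lastMessage.tool_calls ?? []) {
    const result = await toolsByName[toolCall.name].invoke(toolCall.args);
    if (result instanceof Command) {
      outputs.push(result);
    } else {
      outputs.push({
        messages: [
          new ToolMessage({
            content: JSON.stringify(result),
            tool_call_id: toolCall.id!,
          }),
        ],
      });
    }
  }
  return outputs;
};
```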
Compatibility
This guide requires @langchain/langgraph>=0.2.33 and @langchain/core@0.3.23. For help upgrading, see this guide.
Set up LangSmith for LangGraph development
Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph — read more about how to get started here.
Note
In this how-to, we will create our agent from scratch to be transparent (but verbose). You can accomplish similar functionality using the createReactAgent({ llm, tools, checkpointSaver }) (API doc) constructor. This may be more appropriate if you are used to LangChain's AgentExecutor class.
Note
These model requirements are not general requirements for using LangGraph - they are just requirements for this one example.
Note
Support for the Store API that is used in this guide was added in LangGraph.js v0.2.10.
Set up LangSmith for LangGraph development
Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph — read more about how to get started here.
Note
If you're using LangGraph Cloud or LangGraph Studio, you don't need to pass store when compiling the graph, since it's done automatically.
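For local development, here is a minimal sketch of passing the store (and a checkpointer) explicitly at compile time, with hypothetical state, namespace, and key names:

```typescript
import {
  Annotation,
  InMemoryStore,
  MemorySaver,
  StateGraph,
  START,
  type LangGraphRunnableConfig,
} from "@langchain/langgraph";

const StateAnnotation = Annotation.Root({
  lastSeen: Annotation<string>,
});

const store = new InMemoryStore();
const checkpointer = new MemorySaver();

const graph = new StateGraph(StateAnnotation)
  .addNode("remember", async (state, config: LangGraphRunnableConfig) => {
    // Nodes access long-term memory through config.store.
    await config.store?.put(["memories", "user-1"], "last_seen", {
      value: state.lastSeen,
    });
    return {};
  })
  .addEdge(START, "remember")
  // When running outside LangGraph Cloud/Studio, pass the store here.
  .compile({ checkpointer, store });
```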
Set up LangSmith for LangGraph development
Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph — read more about how to get started here.
Note
We're wrapping the grandchildGraph invocation in a separate function (callGrandchildGraph) that transforms the input state before calling the grandchild graph and then transforms the output of the grandchild graph back into child graph state. If you just pass grandchildGraph directly to .addNode without the transformations, LangGraph will raise an error because there are no shared state channels (keys) between the child and grandchild states.
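Here is a minimal sketch of that wrapper pattern, with hypothetical childKey and grandchildKey channels:

```typescript
import { Annotation, StateGraph, START } from "@langchain/langgraph";

// Hypothetical channel names: the child graph only has `childKey`,
// the grandchild graph only has `grandchildKey`.
const GrandchildState = Annotation.Root({
  grandchildKey: Annotation<string>,
});
const ChildState = Annotation.Root({
  childKey: Annotation<string>,
});

const grandchildGraph = new StateGraph(GrandchildState)
  .addNode("grandchildNode", async (state) => ({
    grandchildKey: state.grandchildKey + ", how are you?",
  }))
  .addEdge(START, "grandchildNode")
  .compile();

// Wrapper node: map child state -> grandchild input, invoke the grandchild
// graph, then map its output back onto the child state.
const callGrandchildGraph = async (state: typeof ChildState.State) => {
  const output = await grandchildGraph.invoke({ grandchildKey: state.childKey });
  return { childKey: output.grandchildKey };
};

const childGraph = new StateGraph(ChildState)
  .addNode("callGrandchildGraph", callGrandchildGraph)
  .addEdge(START, "callGrandchildGraph")
  .compile();
```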
Note
We're wrapping the childGraph invocation in a separate function (callChildGraph) that transforms the input state before calling the child graph and then transforms the output of the child graph back into parent graph state. If you just pass childGraph directly to .addNode without the transformations, LangGraph will raise an error because there are no shared state channels (keys) between the parent and child states.
Compatibility
This guide requires @langchain/core>=0.2.19, and if you are using LangSmith, langsmith>=0.1.39. For help upgrading, see this guide.
Set up LangSmith for LangGraph development
Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph — read more about how to get started here.
Prerequisites
This guide assumes familiarity with the following:
This functionality also requires @langchain/langgraph>=0.2.29.
Set up LangSmith for LangGraph development
Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph — read more about how to get started here.
Compatibility
This functionality was added in @langchain/langgraph>=0.2.53.
It also requires async_hooks support, which is available in many popular JavaScript environments (such as Node.js, Deno, and Cloudflare Workers), but not all of them (mainly web browsers). If you are deploying to an environment where this is not supported, see the closures section below.
Compatibility
This guide requires @langchain/langgraph>=0.0.28, @langchain/anthropic>=0.2.6, and @langchain/core>=0.2.17. For help upgrading, see this guide.
Note
These model requirements are not general requirements for using LangGraph - they are just requirements for this one example.
Compatibility
The stateModifier parameter was added in @langchain/langgraph>=0.2.27. If you are on an older version, you will need to use the deprecated messageModifier parameter. For help upgrading, see this guide.
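A minimal sketch of passing stateModifier to the prebuilt agent (the model name and prompt are illustrative):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { createReactAgent } from "@langchain/langgraph/prebuilt";

// On @langchain/langgraph>=0.2.27, use stateModifier (here a system prompt
// string); on older versions, the equivalent parameter is messageModifier.
const agent = createReactAgent({
  llm: new ChatOpenAI({ model: "gpt-4o-mini" }),
  tools: [],
  stateModifier: "You are a helpful assistant. Answer concisely.",
});
```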
Set up LangSmith for LangGraph development
Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph — read more about how to get started here.
Note
You cannot invoke more than one subgraph inside the same node if you have checkpointing enabled for the subgraphs. See this page for more information.
Set up LangSmith for LangGraph development
Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph — read more about how to get started here.
Prebuilt Agent
Please note that here we will use a prebuilt agent. One of the big benefits of LangGraph is that you can easily create your own agent architectures. So while it's fine to start here to build an agent quickly, we would strongly recommend learning how to build your own agent so that you can take full advantage of LangGraph. Read this guide to learn how to create your own ReAct agent from scratch.
Set up LangSmith for LangGraph development
Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph — read more about how to get started here.
Note
You shouldn't provide a checkpointer when compiling a subgraph. Instead, you must define a **single** checkpointer that you pass to parentGraph.compile(), and LangGraph will automatically propagate the checkpointer to the child subgraphs. If you pass a checkpointer to subgraph.compile(), it will simply be ignored. This also applies when you add a node that invokes the subgraph explicitly.
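A minimal sketch of that setup, with a hypothetical shared foo channel:

```typescript
import { Annotation, MemorySaver, StateGraph, START } from "@langchain/langgraph";

const StateAnnotation = Annotation.Root({
  foo: Annotation<string>,
});

// Compile the subgraph WITHOUT a checkpointer...
const subgraph = new StateGraph(StateAnnotation)
  .addNode("subgraphNode", async (state) => ({ foo: state.foo + " (from subgraph)" }))
  .addEdge(START, "subgraphNode")
  .compile();

// ...and pass a single checkpointer only when compiling the parent graph;
// LangGraph propagates it to child subgraphs automatically.
const checkpointer = new MemorySaver();

const parentGraph = new StateGraph(StateAnnotation)
  .addNode("subgraph", subgraph)
  .addEdge(START, "subgraph")
  .compile({ checkpointer });
```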
Note: multi-conversation memory
If you need memory that is shared across multiple conversations or users (cross-thread persistence), check out this how-to guide.
Note
In this how-to, we will create our agent from scratch to be transparent (but verbose). You can accomplish similar functionality using the createReactAgent({ llm, tools, checkpointSaver }) (API doc) constructor. This may be more appropriate if you are used to LangChain's AgentExecutor class.
Note
These model requirements are not general requirements for using LangGraph - they are just requirements for this one example.