How-to Guides

Here you’ll find answers to “How do I...?” types of questions. These guides are goal-oriented and concrete; they're meant to help you complete a specific task. For conceptual explanations see the Conceptual guide. For end-to-end walk-throughs see Tutorials. For comprehensive descriptions of every class and function see the API Reference.

LangGraph

Controllability

LangGraph offers a high level of control over the execution of your graph.

These how-to guides show how to achieve that controllability.

Persistence

LangGraph Persistence makes it easy to persist state across graph runs (thread-level persistence) and across threads (cross-thread persistence). These how-to guides show how to add persistence to your graph.

Memory

LangGraph makes it easy to manage conversation memory in your graph. These how-to guides show how to implement different strategies for that.

Human-in-the-loop

Human-in-the-loop functionality allows you to involve humans in the decision-making process of your graph. These how-to guides show how to implement human-in-the-loop workflows in your graph.

Key workflows:

  • How to wait for user input: A basic example that shows how to implement a human-in-the-loop workflow in your graph using the interrupt function.
  • How to review tool calls: Incorporate human-in-the-loop for reviewing/editing/accepting tool call requests before they are executed using the interrupt function.

Other methods:

Time Travel

Time travel allows you to replay past actions in your LangGraph application to explore alternative paths and debug issues. These how-to guides show how to use time travel in your graph.

Streaming

Streaming is crucial for enhancing the responsiveness of applications built on LLMs. By displaying output progressively, even before a complete response is ready, streaming significantly improves user experience (UX), particularly when dealing with the latency of LLMs.

Tool calling

Tool calling is a type of chat model API that accepts tool schemas, along with messages, as input and returns invocations of those tools as part of the output message.

These how-to guides show common patterns for tool calling with LangGraph:

Subgraphs

Subgraphs allow you to reuse an existing graph from another graph. These how-to guides show how to use subgraphs:

Multi-agent

Multi-agent systems are useful to break down complex LLM applications into multiple agents, each responsible for a different part of the application. These how-to guides show how to implement multi-agent systems in LangGraph:

See the multi-agent tutorials for implementations of other multi-agent architectures.

State Management

Other

Prebuilt ReAct Agent

The LangGraph prebuilt ReAct agent is a pre-built implementation of a tool-calling agent.

One of the big benefits of LangGraph is that you can easily create your own agent architectures. So while it's fine to start here to build an agent quickly, we would strongly recommend learning how to build your own agent so that you can take full advantage of LangGraph.

These guides show how to use the prebuilt ReAct agent:

LangGraph Platform

This section includes how-to guides for LangGraph Platform.

LangGraph Platform is a commercial solution for deploying agentic applications in production, built on the open-source LangGraph framework.

The LangGraph Platform offers a few different deployment options described in the deployment options guide.

Tip

  • LangGraph is an MIT-licensed open-source library, which we are committed to maintaining and growing for the community.
  • You can always deploy LangGraph applications on your own infrastructure using the open-source LangGraph project without using LangGraph Platform.

Application Structure

Learn how to set up your app for deployment to LangGraph Platform:

Deployment

LangGraph applications can be deployed using LangGraph Cloud, which provides a range of services to help you deploy, manage, and scale your applications.

Authentication & Access Control

Assistants

An assistant is a configured instance of a template.

Threads

Runs

LangGraph Platform supports multiple types of runs besides streaming runs.

Streaming

Streaming the results of your LLM application is vital for ensuring a good user experience, especially when your graph may call multiple models and take a long time to fully complete a run. Read about how to stream values from your graph in these how-to guides:

Human-in-the-loop

When designing complex graphs, relying entirely on the LLM for decision-making can be risky, particularly when it involves tools that interact with files, APIs, or databases. These interactions may lead to unintended data access or modifications, depending on the use case. To mitigate these risks, LangGraph allows you to integrate human-in-the-loop behavior, ensuring your LLM applications operate as intended without undesirable outcomes.

Double-texting

Graph execution can take a while, and sometimes users may change their mind about the input they wanted to send before their original input has finished running. For example, a user might notice a typo in their original request and will edit the prompt and resend it. Deciding what to do in these cases is important for ensuring a smooth user experience and preventing your graphs from behaving in unexpected ways.

Webhooks

Cron Jobs

LangGraph Studio

LangGraph Studio is a built-in UI for visualizing, testing, and debugging your agents.

Troubleshooting

These are the guides for resolving common errors you may encounter while building with LangGraph. Errors referenced below will have an lc_error_code property corresponding to one of the codes below when they are thrown in code.