# LangGraph

## Tutorials

[Learn the basics](https://langchain-ai.github.io/langgraph/tutorials/introduction/): LLM should read this page when needing to build a LangGraph chatbot or when learning about chat agents with memory, human-in-the-loop functionality, and state management. This page provides a comprehensive LangGraph quickstart tutorial covering how to build a support chatbot with web search capability, conversation memory, human review routing, custom state management, and time travel functionality to explore alternative conversation paths (a minimal chat-graph sketch in this spirit appears at the end of this group).

[Local Deploy](https://langchain-ai.github.io/langgraph/tutorials/langgraph-platform/local-server/): LLM should read this page when setting up a LangGraph app locally using `langgraph dev` and troubleshooting LangGraph server deployment. This page contains a quickstart guide for launching a LangGraph server locally, including installation steps, app creation from templates, environment setup, API testing with Python/JS SDKs, and links to deployment options and further documentation.

[Workflows and Agents](https://langchain-ai.github.io/langgraph/tutorials/workflows/): LLM should read this page when implementing agent systems, designing workflow architectures, or troubleshooting LLM orchestration strategies. The page covers patterns for LLM system design, comparing workflows (predefined paths) vs. agents (dynamic control), with implementations of prompt chaining, parallelization, routing, orchestrator-worker, evaluator-optimizer, and agent patterns using both graph and functional APIs in LangGraph.

## Concepts

[Concepts](https://langchain-ai.github.io/langgraph/concepts/): LLM should read this page when needing to understand LangGraph's key concepts or when planning to deploy LangGraph applications. Comprehensive guide covering LangGraph fundamentals (graph primitives, agents, multi-agent systems, breakpoints, persistence), features (time travel, memory, streaming), and LangGraph Platform deployment options (self-hosted, cloud, enterprise).

[Agent architectures](https://langchain-ai.github.io/langgraph/concepts/agentic_concepts/): LLM should read this page when designing agent architectures, implementing control flows for LLM applications, or customizing agent behavior patterns. This page covers different LLM agent architectures including routers, tool-calling agents (ReAct), structured outputs, memory systems, planning capabilities, and advanced customization options like human-in-the-loop, parallelization, subgraphs, and reflection mechanisms.

[Application Structure](https://langchain-ai.github.io/langgraph/concepts/application_structure/): LLM should read this page when needing to understand LangGraph application structure, preparing to deploy a LangGraph application, or troubleshooting configuration issues. This page details the structure of LangGraph applications, including required components (graphs, langgraph.json config file, dependency files, optional .env), file organization patterns for Python/JavaScript projects, configuration file format with all supported fields, and how to specify dependencies, graphs, and environment variables.

[Assistants](https://langchain-ai.github.io/langgraph/concepts/assistants/): LLM should read this page when looking for information about LangGraph assistants, understanding assistant configuration in LangGraph Platform, or learning about versioning agent configurations. This page explains LangGraph assistants, which let developers modify agent configurations (prompts, models, etc.) without changing graph logic, support versioning for tracking changes, and are available only in LangGraph Platform (not open source).

[Authentication & Access Control](https://langchain-ai.github.io/langgraph/concepts/auth/): LLM should read this page when implementing authentication in LangGraph Platform, designing access control for LangGraph applications, or troubleshooting security issues in LangGraph deployments. This page explains LangGraph's authentication and authorization system, covering the difference between authentication and authorization, system architecture, implementing custom auth handlers, common access patterns, and supported resources/actions for access control.

[Bring Your Own Cloud (BYOC)](https://langchain-ai.github.io/langgraph/concepts/bring_your_own_cloud/): LLM should read this page when learning about LangGraph Platform deployment options, understanding Bring Your Own Cloud architecture, or managing deployments in AWS. This page explains LangGraph's BYOC deployment model, detailing how it separates the control plane (managed by LangChain) from the data plane (in the customer's AWS account), outlines AWS requirements, infrastructure setup via Terraform, required permissions, and explains the deployment workflow.

[Deployment Options](https://langchain-ai.github.io/langgraph/concepts/deployment_options/): LLM should read this page when needing information about LangGraph deployment options, comparing different deployment methods, or understanding LangGraph Platform plans. This page outlines four deployment options for LangGraph Platform: Self-Hosted Lite (available for all plans), Self-Hosted Enterprise (Enterprise plan only), Cloud SaaS (Plus and Enterprise plans), and Bring Your Own Cloud (Enterprise plan only, AWS-only).

[Double Texting](https://langchain-ai.github.io/langgraph/concepts/double_texting/): LLM should read this page when handling concurrent user interactions in LangGraph Platform, implementing double-texting safeguards, or designing stateful conversation systems. This page explains four approaches to handling "double texting" in LangGraph (when users send a second message before the first completes): Reject, Enqueue, Interrupt, and Rollback, noting these features are currently only available in LangGraph Platform.

[Durable Execution](https://langchain-ai.github.io/langgraph/concepts/durable_execution/): LLM should read this page when needing to understand durable execution in LangGraph, implementing workflow persistence, or troubleshooting workflow resumption. This page explains durable execution in LangGraph: how workflows save progress to resume later, requirements (checkpointers and thread IDs), determinism guidelines for consistent replay, using tasks to encapsulate non-deterministic operations, and approaches for pausing/resuming workflows.

[FAQ](https://langchain-ai.github.io/langgraph/concepts/faq/): LLM should read this page when needing to understand differences between LangGraph and LangChain, exploring deployment options for LangGraph Platform, or determining compatibility with various LLMs. FAQ covering LangGraph basics, comparisons with other frameworks, deployment options (free self-hosted, Cloud SaaS, BYOC, Enterprise), compatibility with different LLMs including OSS models, and feature differences between open-source LangGraph and proprietary LangGraph Platform.
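The pages above keep returning to the same core primitives: a state schema, nodes that update it, and edges that route between them. As a grounding reference, here is a minimal chat graph in the spirit of the quickstart tutorial. It is a sketch, not code from any single page; it assumes the `langchain-openai` package, an `OPENAI_API_KEY`, and an illustrative model name.

```python
# Minimal single-node chat graph, a sketch in the spirit of the quickstart.
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, MessagesState, START, END

llm = ChatOpenAI(model="gpt-4o-mini")  # illustrative model choice

def chatbot(state: MessagesState):
    # Append the model's reply to the running message list.
    return {"messages": [llm.invoke(state["messages"])]}

builder = StateGraph(MessagesState)
builder.add_node("chatbot", chatbot)
builder.add_edge(START, "chatbot")
builder.add_edge("chatbot", END)
graph = builder.compile()

result = graph.invoke({"messages": [("user", "Hi there!")]})
print(result["messages"][-1].content)
```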
[Functional API](https://langchain-ai.github.io/langgraph/concepts/functional_api/): LLM should read this page when implementing workflows with persistent state, adding human-in-the-loop features, or converting existing code to use LangGraph. The page documents LangGraph's Functional API, which allows adding persistence, memory, and human-in-the-loop capabilities with minimal code changes using @entrypoint and @task decorators, handling serialization requirements, state management, and common patterns for parallel execution and error handling.

[Why LangGraph?](https://langchain-ai.github.io/langgraph/concepts/high_level/): LLM should read this page when understanding LangGraph's core capabilities, exploring LLM application infrastructure, or evaluating agent/workflow persistence options. LangGraph provides infrastructure for LLM applications with three key benefits: persistence for memory and human-in-the-loop capabilities, streaming of workflow events and LLM outputs, and tools for debugging and deployment via LangGraph Platform.

[Human-in-the-loop](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/): LLM should read this page when implementing human-in-the-loop workflows in LangGraph, designing approval systems with LLMs, or creating interactive multi-turn conversation agents. This page explains human-in-the-loop patterns in LangGraph using the interrupt function, showing how to pause graph execution for human review/input and resume with Command. Includes design patterns for approval workflows, state editing, tool call reviews, and multi-turn conversations, with code examples and warnings about execution flow and common pitfalls (a minimal interrupt sketch follows this group).

[LangGraph CLI](https://langchain-ai.github.io/langgraph/concepts/langgraph_cli/): LLM should read this page when looking for information about LangGraph CLI installation or when needing to deploy a LangGraph API server locally. The page covers LangGraph CLI installation methods (Homebrew, pip), key commands (build, dev, up, dockerfile), and features like hot reloading, debugger support, and database management for running LangGraph servers.

[Cloud SaaS](https://langchain-ai.github.io/langgraph/concepts/langgraph_cloud/): LLM should read this page when learning about LangGraph's Cloud SaaS offering, understanding deployment options for LangGraph Servers, or planning autoscaling infrastructure for LangGraph applications. This page describes LangGraph Cloud SaaS, a managed deployment service for LangGraph Servers with details on deployment types (Development/Production), revisions, persistence, autoscaling capabilities (up to 10 containers), LangSmith integration, IP whitelisting, and automatic deletion policies after 28 days of non-use.

[LangGraph Platform](https://langchain-ai.github.io/langgraph/concepts/langgraph_platform/): LLM should read this page when seeking information about LangGraph Platform's components or evaluating production deployment options for agentic applications. The page details the LangGraph Platform, a commercial solution for deploying agentic applications, including its components (Server, Studio, CLI, SDK, Remote Graph) and key benefits like streaming support, background runs, long run handling, burstiness management, and human-in-the-loop capabilities.

[LangGraph Server](https://langchain-ai.github.io/langgraph/concepts/langgraph_server/): LLM should read this page when developing applications with LangGraph Server, deploying agent-based applications, or integrating persistent state management in agent workflows. LangGraph Server provides an API for creating and managing agent applications with key features like streaming endpoints, background runs, task queues, persistence, webhooks, cron jobs, and monitoring capabilities through a structured system of assistants, threads, runs, and stores.

[LangGraph Studio](https://langchain-ai.github.io/langgraph/concepts/langgraph_studio/): LLM should read this page when looking for information about LangGraph Studio features, needing to troubleshoot LangGraph Studio issues, or learning how to connect a LangGraph application to the Studio. LangGraph Studio is a specialized agent IDE for visualizing, interacting with, and debugging LLM applications, offering features such as graph visualization, state editing, assistant management, and integration with LangSmith, with instructions for connecting via deployed applications or local development servers, plus troubleshooting FAQs.

[LangGraph Glossary](https://langchain-ai.github.io/langgraph/concepts/low_level/): LLM should read this page when needing to understand LangGraph terminology, implementing agent workflows as graphs, or developing modular multi-step AI systems. The page covers core LangGraph concepts including StateGraph, nodes, edges, state management, messaging, persistence, configuration, human-in-the-loop features, subgraphs, and visualization capabilities.

[Memory](https://langchain-ai.github.io/langgraph/concepts/memory/): LLM should read this page when implementing memory systems for AI agents, managing conversation context across sessions, or designing systems that require both short-term and long-term information retention. This page explains memory systems in LangGraph, covering short-term (thread-scoped) memory for managing conversation history and long-term memory across threads, with techniques for handling long conversations, summarizing past interactions, and organizing persistent memories in namespaces.

[Multi-agent Systems](https://langchain-ai.github.io/langgraph/concepts/multi_agent/): LLM should read this page when implementing multi-agent systems, troubleshooting complex agent architectures, or designing agent communication patterns. Multi-agent systems organize LLMs into modular architectures (network, supervisor, hierarchical, custom) with different communication patterns, using Command objects for handoffs between agents, and supporting various state management approaches.

[Persistence](https://langchain-ai.github.io/langgraph/concepts/persistence/): LLM should read this page when needing to understand LangGraph persistence mechanisms, implementing stateful workflows, or managing conversation history across interactions. This page covers LangGraph's persistence features including checkpointers, threads, state snapshots, replay functionality, forking state, cross-thread memory via InMemoryStore, and semantic search capabilities for stored memories.

[LangGraph Platform Plans](https://langchain-ai.github.io/langgraph/concepts/plans/): LLM should read this page when determining LangGraph Platform pricing tiers, comparing deployment options, or researching features available across different plans. This page outlines LangGraph Platform plans (Developer, Plus, Enterprise), detailing deployment options, usage limitations, feature availability, and pricing structure for agentic application deployment.
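The human-in-the-loop pattern referenced above centers on two calls: `interrupt()` inside a node to pause, and `Command(resume=...)` to continue. A minimal sketch, assuming an in-memory checkpointer and illustrative state/node names:

```python
# Pause for human review with interrupt(), resume with Command(resume=...).
from typing import TypedDict
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.types import interrupt, Command

class State(TypedDict):
    draft: str

def review(state: State):
    # Pause the graph here and surface the draft for human review;
    # interrupt() returns whatever value the resume Command carries.
    edited = interrupt({"draft": state["draft"]})
    return {"draft": edited}

builder = StateGraph(State)
builder.add_node("review", review)
builder.add_edge(START, "review")
builder.add_edge("review", END)
graph = builder.compile(checkpointer=MemorySaver())  # interrupts require a checkpointer

config = {"configurable": {"thread_id": "1"}}
graph.invoke({"draft": "hello wrold"}, config)       # pauses at interrupt()
graph.invoke(Command(resume="hello world"), config)  # resumes with the human's edit
```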
[LangGraph Platform Architecture](https://langchain-ai.github.io/langgraph/concepts/platform_architecture/): LLM should read this page when needing to understand LangGraph Platform's technical architecture or troubleshooting deployment issues. The page details how LangGraph Platform uses Postgres for persistent storage of user/run data and Redis for worker communication (run cancellation, output streaming) and ephemeral metadata storage (retry attempts).

[LangGraph's Runtime (Pregel)](https://langchain-ai.github.io/langgraph/concepts/pregel/): LLM should read this page when learning about LangGraph's runtime, implementing applications with Pregel directly, or understanding how LangGraph executes graph applications. Explains LangGraph's Pregel runtime, which manages graph application execution through a three-phase process (Plan, Execution, Update), describes different channel types (LastValue, Topic, Context, BinaryOperatorAggregate), provides direct implementation examples, and contrasts the StateGraph API with the Functional API.

[LangGraph Platform: Scalability & Resilience](https://langchain-ai.github.io/langgraph/concepts/scalability_and_resilience/): LLM should read this page when needing to understand LangGraph Platform's scaling capabilities, designing high-availability LangGraph deployments, or troubleshooting resilience issues. This page details LangGraph Platform's horizontal scaling features including stateless server instances, queue worker scaling, resilience mechanisms for handling crashes, and database failover strategies in Postgres and Redis.

[LangGraph SDK](https://langchain-ai.github.io/langgraph/concepts/sdk/): LLM should read this page when looking for installation instructions for LangGraph SDK, needing to choose between sync and async Python clients, or requiring SDK API references. The page covers LangGraph SDK installation for Python and JS, provides API reference links, explains the difference between synchronous and asynchronous Python clients, and includes code examples for both client types.

[Self-Hosted](https://langchain-ai.github.io/langgraph/concepts/self_hosted/): LLM should read this page when looking for LangGraph deployment options, understanding self-hosted versions, or seeking requirements for self-hosting LangGraph. This page details two self-hosted deployment options for LangGraph Platform: Self-Hosted Lite (limited to 1M node executions/year) and Self-Hosted Enterprise (full version requiring a license). Includes requirements, deployment process using Redis/Postgres, Docker, and optional Kubernetes deployment via Helm chart.

[Streaming](https://langchain-ai.github.io/langgraph/concepts/streaming/): LLM should read this page when implementing streaming features in LangGraph applications, understanding different streaming modes, or building responsive LLM applications. This page explains streaming in LangGraph, covering the main types (workflow progress, LLM tokens, custom updates) and streaming modes (values, updates, custom, messages, debug, events), with details on how to use multiple modes simultaneously and differences between LangGraph library and Platform implementations.

[Template Applications](https://langchain-ai.github.io/langgraph/concepts/template_applications/): LLM should read this page when looking for LangGraph template applications, setting up a new LangGraph project, or finding reference implementations for agentic workflows. This page presents LangGraph template applications with installation requirements, available templates (including ReAct Agent, Memory Agent, Retrieval Agent, etc.), instructions for creating new apps using the CLI, deployment options, and links to further learning resources.

[Time Travel ⏱️](https://langchain-ai.github.io/langgraph/concepts/time-travel/): LLM should read this page when debugging LLM-based agent behavior, analyzing decision-making paths, or exploring alternative execution branches in LangGraph. This page explains LangGraph's Time Travel debugging features: Replaying (reproducing past actions up to specific checkpoints) and Forking (creating alternative execution paths from specific points), with code examples for retrieving checkpoints, configuring replay, and creating forked states.

## How-Tos

[How-to Guides](https://langchain-ai.github.io/langgraph/how-tos/): LLM should read this page when looking for specific implementation techniques in LangGraph or when trying to deploy LangGraph applications to production environments. This page contains an extensive collection of how-to guides for LangGraph, covering graph fundamentals, persistence, memory management, human-in-the-loop features, tool calling, multi-agent systems, streaming, and deployment options through LangGraph Platform.

[How to implement handoffs between agents](https://langchain-ai.github.io/langgraph/how-tos/agent-handoffs/): LLM should read this page when implementing multi-agent systems that require agent coordination, when building systems with specialized agents that need to work together, or when needing to implement handoffs between agents. This page explains how to implement handoffs between agents in LangGraph using Command objects, both directly from agent nodes and through specialized handoff tools, with code examples for creating multi-agent systems (a minimal handoff-tool sketch follows this group).

[How to run a graph asynchronously](https://langchain-ai.github.io/langgraph/how-tos/async/): LLM should read this page when needing to implement asynchronous graph execution in LangGraph or when optimizing IO-bound LLM applications. This page explains how to convert synchronous graphs to asynchronous in LangGraph, including updating node definitions with async/await, using StateGraph with TypedDict, implementing conditional edges, and streaming results.

[How to integrate LangGraph with AutoGen, CrewAI, and other frameworks](https://langchain-ai.github.io/langgraph/how-tos/autogen-integration/): LLM should read this page when integrating LangGraph with other agent frameworks, building multi-agent systems, or adding persistence features to agents. The page demonstrates how to combine LangGraph with AutoGen by calling AutoGen agents inside LangGraph nodes, showing code examples for setting up the integration with memory and conversation persistence.

[How to integrate LangGraph (functional API) with AutoGen, CrewAI, and other frameworks](https://langchain-ai.github.io/langgraph/how-tos/autogen-integration-functional/): LLM should read this page when integrating LangGraph with other agent frameworks, building multi-agent systems with different frameworks, or adding LangGraph features to existing agent systems. This page demonstrates how to integrate LangGraph's functional API with AutoGen, including code examples for creating a workflow that calls AutoGen agents, leveraging LangGraph's memory and persistence features.
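The handoff how-to referenced above builds on `Command` objects returned from tools. A minimal sketch of one such handoff tool, assuming a parent graph that has a `hotel_advisor` agent node (the agent name is illustrative):

```python
# A handoff tool: returning a Command routes control to another agent node.
from typing import Annotated

from langchain_core.messages import ToolMessage
from langchain_core.tools import InjectedToolCallId, tool
from langgraph.types import Command

@tool
def transfer_to_hotel_advisor(
    tool_call_id: Annotated[str, InjectedToolCallId],
) -> Command:
    """Hand the conversation off to the hotel advisor agent."""
    # Route to the `hotel_advisor` node in the parent graph and record a
    # ToolMessage so the message history stays well-formed.
    return Command(
        goto="hotel_advisor",
        graph=Command.PARENT,
        update={
            "messages": [
                ToolMessage(
                    "Transferred to hotel advisor.",
                    tool_call_id=tool_call_id,
                )
            ]
        },
    )
```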
[How to create branches for parallel node execution](https://langchain-ai.github.io/langgraph/how-tos/branching/): LLM should read this page when needing to implement parallel node execution in LangGraph, optimizing graph performance, or handling conditional branching in workflows. This page explains how to create branches for parallel execution in LangGraph using fan-out/fan-in mechanisms, reducer functions for state accumulation, handling exceptions during parallel execution, and implementing conditional branching logic between nodes.

[How to combine control flow and state updates with Command](https://langchain-ai.github.io/langgraph/how-tos/command): LLM should read this page when learning how to combine control flow with state updates in LangGraph, understanding Command objects, or navigating between parent graphs and subgraphs. This page explains how to use Command objects to simultaneously update state and control flow between nodes, demonstrates using Command.PARENT to navigate from subgraphs to parent graphs, and includes examples of implementing reducers for state updates across graph hierarchies.

[How to add runtime configuration to your graph](https://langchain-ai.github.io/langgraph/how-tos/configuration/): LLM should read this page when implementing runtime configuration for LangGraph, adding model selection options to agents, or enabling dynamic system messages. This page demonstrates how to configure LangGraph at runtime, including selecting different LLMs dynamically and adding custom configuration options like system messages through the configurable dictionary.

[How to use the pre-built ReAct agent](https://langchain-ai.github.io/langgraph/how-tos/create-react-agent/): LLM should read this page when implementing a ReAct agent, needing pre-built agent solutions, or learning how to integrate tools with LLM agents. This page covers how to use the pre-built ReAct agent in LangGraph, including setup instructions, creating a weather checking tool, implementing the agent architecture, and examples of running the agent with and without tool calls.

[How to add human-in-the-loop processes to the prebuilt ReAct agent](https://langchain-ai.github.io/langgraph/how-tos/create-react-agent-hitl/): LLM should read this page when implementing human-in-the-loop processes for ReAct agents, debugging tool calls, or learning about interrupts in LangGraph. This guide demonstrates how to add human-in-the-loop functionality to prebuilt ReAct agents using interrupt_before=["tools"], working with MemorySaver checkpoints, and showing how to approve or edit tool calls before they execute.

[How to add thread-level memory to a ReAct Agent](https://langchain-ai.github.io/langgraph/how-tos/create-react-agent-memory/): LLM should read this page when adding memory to ReAct agents, implementing thread-level persistence in LangGraph, or building stateful conversational agents. This guide demonstrates how to add memory to a ReAct agent using LangGraph's checkpointer interface, with code examples showing MemorySaver implementation, thread_id configuration, and persistent chat context across multiple interactions.

[How to return structured output from the prebuilt ReAct agent](https://langchain-ai.github.io/langgraph/how-tos/create-react-agent-structured-output/): LLM should read this page when implementing structured output with ReAct agents, customizing agent response formats, or working with LangGraph agents. This page explains how to return structured output from prebuilt ReAct agents by providing a response_format parameter with a Pydantic schema, including examples with weather data and options for customizing the prompt.

[How to add a custom system prompt to the prebuilt ReAct agent](https://langchain-ai.github.io/langgraph/how-tos/create-react-agent-system-prompt/): LLM should read this page when learning to customize ReAct agents, needing to add system prompts to agents, or working with LangGraph's prebuilt agents. This tutorial demonstrates how to add a custom system prompt to a prebuilt ReAct agent, with code examples showing model setup, tool creation, and using the prompt parameter in the create_react_agent function.

[How to add cross-thread persistence to your graph](https://langchain-ai.github.io/langgraph/how-tos/cross-thread-persistence): LLM should read this page when needing to implement persistence across multiple threads in LangGraph, when storing user data between conversations, or when implementing shared memory in graph-based LLM applications. This page demonstrates how to use LangGraph's Store API to persist data across threads, including creating an InMemoryStore with embedding search capabilities, passing stores to graph nodes, and accessing user-specific memories in different conversation threads.

[How to add cross-thread persistence (functional API)](https://langchain-ai.github.io/langgraph/how-tos/cross-thread-persistence-functional): LLM should read this page when needing to implement cross-thread persistence in LangGraph functional API, storing user data across different conversation threads, or creating shared memory between workflows. This page explains how to add cross-thread persistence to LangGraph using the Store interface, including defining a store, configuring the entrypoint decorator, and implementing a workflow that can store and retrieve user information across different conversation threads.

[How to do a Self-hosted deployment of LangGraph](https://langchain-ai.github.io/langgraph/how-tos/deploy-self-hosted/): LLM should read this page when implementing a self-hosted deployment of LangGraph, configuring required environment variables, or building Docker images for LangGraph applications. This page explains how to deploy LangGraph applications using Docker, covering environment requirements (Redis, Postgres), how to build Docker images with the LangGraph CLI, configuration using environment variables, and deployment options using Docker or Docker Compose.

[How to disable streaming for models that don't support it](https://langchain-ai.github.io/langgraph/how-tos/disable-streaming/): LLM should read this page when handling models that don't support streaming, implementing LangGraph with non-streaming models, or troubleshooting streaming errors with OpenAI's O1 models. This page explains how to use the disable_streaming=True parameter with ChatOpenAI to make non-streaming models work with LangGraph's astream_events API, with code examples showing the error case and proper implementation.

[How to edit graph state](https://langchain-ai.github.io/langgraph/how-tos/human_in_the_loop/edit-graph-state/): LLM should read this page when needing to implement human intervention in LangGraph workflows, wanting to edit graph state during execution, or implementing breakpoints in agent systems. This page explains how to edit graph state in LangGraph using breakpoints, including implementing human-in-the-loop interactions, setting up interruptions before specific nodes, and updating state during agent execution.

[How to Review Tool Calls](https://langchain-ai.github.io/langgraph/how-tos/human_in_the_loop/review-tool-calls/): LLM should read this page when implementing human review of tool calls, creating interactive agent workflows, or building approval systems for AI actions. This page explains how to implement human-in-the-loop review for tool calls in LangGraph, including approving tool calls, modifying tool calls manually, and providing natural language feedback to agents, with complete code examples and explanations.

[How to view and update past graph state](https://langchain-ai.github.io/langgraph/how-tos/human_in_the_loop/time-travel/): LLM should read this page when needing to access or modify past states in LangGraph, when debugging agent execution, or when implementing user interventions in agent workflows. This page demonstrates how to view and update past graph states in LangGraph using get_state and update_state methods, with examples of replaying execution from checkpoints and branching workflows.

[How to wait for user input using interrupt](https://langchain-ai.github.io/langgraph/how-tos/human_in_the_loop/wait-user-input/): LLM should read this page when implementing wait-for-user functions in LangGraph, implementing human-in-the-loop interactions, or learning how to use the interrupt() function. This page explains how to pause graph execution to collect user input using LangGraph's interrupt() function, with examples of simple feedback collection and more complex agent interactions that ask clarifying questions.

[How to define input/output schema for your graph](https://langchain-ai.github.io/langgraph/how-tos/input_output_schema/): LLM should read this page when needing to define separate input/output schemas for LangGraph, implementing schema-based data filtering, or understanding schema definitions in StateGraph. This page explains how to define distinct input and output schemas for a StateGraph, showing how the input schema validates the provided data structure while the output schema filters internal data to return only relevant information, with code examples demonstrating implementation.

[How to handle large numbers of tools](https://langchain-ai.github.io/langgraph/how-tos/many-tools/): LLM should read this page when handling large tool collections, implementing dynamic tool selection, or creating retrieval-based tool management in LangGraph. This page demonstrates how to manage large numbers of tools by using vector search to dynamically select relevant tools based on user queries, implementing tool selection nodes in LangGraph, and handling tool selection errors with retry mechanisms.

[How to create map-reduce branches for parallel execution](https://langchain-ai.github.io/langgraph/how-tos/map-reduce/): LLM should read this page when learning to implement parallel execution in LangGraph, creating map-reduce operations, or handling dynamic task decomposition. This guide explains how to use LangGraph's Send API to create map-reduce workflows, breaking tasks into parallel sub-tasks and recombining results, with examples showing joke generation across multiple subjects.
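The map-reduce how-to above revolves around the `Send` API: a conditional edge returns one `Send` per sub-task, each running the same node in parallel with its own input, and a reducer gathers the results. A minimal sketch with a stand-in for the LLM call (the subject/joke names are illustrative):

```python
# Map-reduce with Send: fan subjects out to parallel joke nodes, then reduce.
import operator
from typing import Annotated, TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.types import Send

class State(TypedDict):
    subjects: list[str]
    jokes: Annotated[list[str], operator.add]  # reducer: concatenate parallel updates

class JokeInput(TypedDict):
    subject: str

def fan_out(state: State):
    # One Send per subject; each runs "write_joke" with its own private input.
    return [Send("write_joke", {"subject": s}) for s in state["subjects"]]

def write_joke(state: JokeInput):
    # Stand-in for an LLM call.
    return {"jokes": [f"Why did the {state['subject']} cross the road?"]}

builder = StateGraph(State)
builder.add_node("write_joke", write_joke)
builder.add_conditional_edges(START, fan_out, ["write_joke"])
builder.add_edge("write_joke", END)
graph = builder.compile()

print(graph.invoke({"subjects": ["cat", "robot"]}))
```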
[How to add summary of the conversation history](https://langchain-ai.github.io/langgraph/how-tos/memory/add-summary-conversation-history/): LLM should read this page when implementing conversation summarization, managing context windows, or building chatbots with memory management. This page demonstrates how to add summary functionality to conversation history using LangGraph, including checking conversation length, creating summaries, and removing old messages while maintaining context.

[How to delete messages](https://langchain-ai.github.io/langgraph/how-tos/memory/delete-messages): LLM should read this page when attempting to manage message history in LangGraph, needing to delete specific messages from conversational state, or implementing memory management in LLM applications. This page explains how to delete messages from a LangGraph application using RemoveMessage modifiers, covering both manual deletion with message IDs and programmatic deletion within graph logic to maintain conversation history limits.

[How to manage conversation history](https://langchain-ai.github.io/langgraph/how-tos/memory/manage-conversation-history/): LLM should read this page when managing conversation history in LangGraph, preventing context window issues, or implementing custom message filtering. This page explains how to manage conversation history in LangGraph to prevent context window overflow by implementing message filtering functions that control which messages are sent to the LLM.

[How to add semantic search to your agent's memory](https://langchain-ai.github.io/langgraph/how-tos/memory/semantic-search/): LLM should read this page when implementing semantic search in agent memory, enabling memory-aware AI assistants, or configuring advanced memory retrieval systems. This page demonstrates how to add semantic search to LangGraph agent memory stores, covering basic setup with embeddings, storing memories, searching by semantic similarity, integrating memory in agents and ReAct agents, and advanced usage like multi-vector indexing and selective memory indexing.

[How to add multi-turn conversation in a multi-agent application](https://langchain-ai.github.io/langgraph/how-tos/multi-agent-multi-turn-convo/): LLM should read this page when implementing multi-turn conversations between agents, creating interactive agent systems with human input, or learning about langgraph interrupts and agent handoffs. This page demonstrates how to build a multi-agent system with multi-turn conversations, including human-in-the-loop interactions, agent handoffs, and state management using LangGraph, Command objects, and interrupts.

[How to add multi-turn conversation in a multi-agent application (functional API)](https://langchain-ai.github.io/langgraph/how-tos/multi-agent-multi-turn-convo-functional/): LLM should read this page when building multi-turn conversational agents, implementing agent-to-agent handoffs, or using interrupts to collect user input in LangGraph. This guide demonstrates how to create a multi-agent system with multi-turn conversations using LangGraph's functional API, featuring agent handoffs, interrupt mechanics for user input, and a complete example of travel and hotel advisor agents that can transfer control between each other.

[How to build a multi-agent network](https://langchain-ai.github.io/langgraph/how-tos/multi-agent-network/): LLM should read this page when implementing multi-agent networks, setting up agent communication via handoffs, or building travel assistance agents. This page explains how to create a fully-connected multi-agent network with LangGraph where agents can communicate with each other via handoffs, including custom agent implementation and using prebuilt ReAct agents with tools.

[How to build a multi-agent network (functional API)](https://langchain-ai.github.io/langgraph/how-tos/multi-agent-network-functional/): LLM should read this page when building multi-agent systems, implementing agent handoffs between specialists, or creating fully-connected agent networks. This guide demonstrates how to create a multi-agent network using LangGraph's functional API, with tasks for individual agents and entrypoint functions to manage agent handoffs based on tool calls.

[How to add node retry policies](https://langchain-ai.github.io/langgraph/how-tos/node-retries/): LLM should read this page when implementing error handling in LangGraph nodes, configuring API retry mechanisms, or troubleshooting node failures in graph workflows. Shows how to add custom retry policies to LangGraph nodes, including specifying which exceptions to retry on, setting max attempts, intervals, backoff factors, and implementing different retry behaviors for different node types.

[How to pass config to tools](https://langchain-ai.github.io/langgraph/how-tos/pass-config-to-tools/): LLM should read this page when implementing secure tool configuration in LangChain, passing user-specific parameters to tools, or configuring tools with runtime values. This page explains how to pass configuration to LangChain tools using RunnableConfig, allowing application-controlled values (like user IDs) to be securely passed to tools without LLM control, with examples of implementing tools that access user-specific data.

[How to pass private state between nodes](https://langchain-ai.github.io/langgraph/how-tos/pass_private_state/): LLM should read this page when implementing data sharing between specific nodes in LangGraph, handling private state in graph workflows, or designing multi-node sequential processes with selective data visibility. This page demonstrates how to pass private data between specific nodes in a LangGraph without making it part of the main schema, using typed dictionaries to define both public and private states, and showing a three-node example where private data flows only between the first two nodes.

[How to add thread-level persistence to your graph](https://langchain-ai.github.io/langgraph/how-tos/persistence/): LLM should read this page when implementing persistence in LangGraph, needing to preserve context across user interactions, or learning about thread-level state management. This page explains how to add thread-level persistence to LangGraph applications using MemorySaver, including code examples for creating stateful conversations where context is maintained across multiple interactions.

[How to add thread-level persistence (functional API)](https://langchain-ai.github.io/langgraph/how-tos/persistence-functional/): LLM should read this page when implementing thread-level persistence in LangGraph, creating conversational agents with memory, or using functional API with state management. This page explains how to add thread-level persistence to LangGraph functional API workflows using checkpointers, including code examples for creating a simple chatbot with memory across conversation turns.
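The two persistence how-tos above share one mechanism: compile with a checkpointer, then reuse a `thread_id` so each invocation sees the accumulated state. A minimal sketch, again assuming `langchain-openai` and an illustrative model name:

```python
# Thread-level persistence: same thread_id -> the graph remembers earlier turns.
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, MessagesState, START

llm = ChatOpenAI(model="gpt-4o-mini")  # illustrative model choice

def chat(state: MessagesState):
    return {"messages": [llm.invoke(state["messages"])]}

builder = StateGraph(MessagesState)
builder.add_node("chat", chat)
builder.add_edge(START, "chat")
graph = builder.compile(checkpointer=MemorySaver())

config = {"configurable": {"thread_id": "user-42"}}
graph.invoke({"messages": [("user", "My name is Ada.")]}, config)
reply = graph.invoke({"messages": [("user", "What's my name?")]}, config)
print(reply["messages"][-1].content)  # context carried over: the model can recall "Ada"
```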
[How to use MongoDB checkpointer for persistence](https://langchain-ai.github.io/langgraph/how-tos/persistence_mongodb/): LLM should read this page when implementing persistence in LangGraph agents, setting up MongoDB for state checkpointing, or working with MongoDB connections in LangGraph applications. This page explains how to use the MongoDB checkpointer for LangGraph persistence, covering connection methods (direct, client-based, async), basic setup requirements, and practical examples of saving and retrieving agent state between interactions.

[How to use Postgres checkpointer for persistence](https://langchain-ai.github.io/langgraph/how-tos/persistence_postgres/): LLM should read this page when setting up persistence for LangGraph agents, implementing PostgreSQL as a checkpoint storage backend, or working with either synchronous or asynchronous database connections. This page details how to use PostgreSQL for persisting LangGraph agent state, covering setup and configuration of PostgresSaver and AsyncPostgresSaver with different connection methods (pool, direct connection, connection string); a setup sketch follows this group.

[How to create a custom checkpointer using Redis](https://langchain-ai.github.io/langgraph/how-tos/persistence_redis/): LLM should read this page when implementing persistence in LangGraph applications, creating custom checkpoint mechanisms for agents, or working with Redis as a storage backend. This page demonstrates how to create custom checkpointers for LangGraph agents using Redis, including implementations for both synchronous and asynchronous interfaces that save and retrieve agent state.

[How to create a ReAct agent from scratch](https://langchain-ai.github.io/langgraph/how-tos/react-agent-from-scratch/): LLM should read this page when needing to create a custom ReAct agent, wanting more control than prebuilt agents, or implementing ReAct from scratch with LangGraph. This guide shows how to build a custom ReAct agent using LangGraph, covering state definition, model/tool setup, node/edge configuration, graph creation, and testing the implementation with a weather query example.

[How to create a ReAct agent from scratch (Functional API)](https://langchain-ai.github.io/langgraph/how-tos/react-agent-from-scratch-functional): LLM should read this page when creating a ReAct agent using LangGraph's Functional API, implementing tool-calling workflows, or building conversational agents with thread persistence. This page explains how to build a ReAct agent from scratch using LangGraph's Functional API, including model and tool setup, defining tasks for model/tool calling, creating an entrypoint for orchestration, and adding thread-level persistence for conversational experiences.

[How to force tool-calling agent to structure output](https://langchain-ai.github.io/langgraph/how-tos/react-agent-structured-output): LLM should read this page when needing to force tool-calling agents to produce structured output, implementing consistent output formats for downstream software, or choosing between single-LLM vs. two-LLM structured output approaches. The page explains two methods for implementing structured output with tool-calling agents: binding output as a tool (single LLM approach) and using two LLMs with structured output conversion, with code examples for both approaches using LangGraph.
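For the Postgres checkpointer above, the pattern is the same as in-memory persistence with a database-backed saver. A sketch assuming the `langgraph-checkpoint-postgres` package, a reachable Postgres instance, and a placeholder connection string:

```python
# PostgresSaver setup sketch: durable checkpoints backed by Postgres.
from langgraph.checkpoint.postgres import PostgresSaver
from langgraph.graph import StateGraph, MessagesState, START

DB_URI = "postgresql://user:pass@localhost:5432/langgraph"  # placeholder

def echo(state: MessagesState):
    # Stand-in node; a real graph would call a model here.
    return {"messages": [("ai", "ok")]}

builder = StateGraph(MessagesState)
builder.add_node("echo", echo)
builder.add_edge(START, "echo")

with PostgresSaver.from_conn_string(DB_URI) as checkpointer:
    checkpointer.setup()  # create the checkpoint tables on first use
    graph = builder.compile(checkpointer=checkpointer)
    graph.invoke(
        {"messages": [("user", "hi")]},
        {"configurable": {"thread_id": "1"}},
    )
```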
[How to create and control loops](https://langchain-ai.github.io/langgraph/how-tos/recursion-limit/): LLM should read this page when building loops in computational graphs, needing to implement termination conditions, or handling recursion limits in LangGraph. The page explains how to create graphs with loops using conditional edges for termination, set recursion limits, handle GraphRecursionError, and implement complex loops with branches.

[How to review tool calls (Functional API)](https://langchain-ai.github.io/langgraph/how-tos/review-tool-calls-functional/): LLM should read this page when implementing human review of tool calls, creating ReAct agents with Functional API, or adding human-in-the-loop workflows. This page demonstrates how to review tool calls before execution in a ReAct agent using LangGraph's Functional API, including accepting, revising, or generating custom tool messages with the interrupt function.

[How to pass custom run ID or set tags and metadata for graph runs in LangSmith](https://langchain-ai.github.io/langgraph/how-tos/run-id-langsmith/): LLM should read this page when needing to customize trace information in LangSmith for LangGraph runs or when debugging graph runs with custom identifiers. The page explains how to pass a custom run_id, set tags, add metadata, and customize run names for LangGraph traces in LangSmith using RunnableConfig, with examples showing implementation with a ReAct agent.

[How to create a sequence of steps](https://langchain-ai.github.io/langgraph/how-tos/sequence/): LLM should read this page when implementing sequential workflows in LangGraph, creating multi-step processes in applications, or learning about state management in graph-based systems. This page explains how to create sequences in LangGraph, covering methods for building sequential graphs using .add_node/.add_edge or the shorthand .add_sequence, defining state with TypedDict, creating nodes as functions that update state, and compiling/invoking graphs with examples.

[How to use Pydantic model as graph state](https://langchain-ai.github.io/langgraph/how-tos/state-model): LLM should read this page when implementing Pydantic models for state validation in LangGraph, handling complex state schema definitions, or troubleshooting validation errors in graph nodes. This guide explains how to use Pydantic BaseModel as a state schema in LangGraph for runtime validation, covering basic implementation, limitations, validation behavior across multiple nodes, serialization patterns, type coercion, and working with message models.

[How to update graph state from nodes](https://langchain-ai.github.io/langgraph/how-tos/state-reducers/): LLM should read this page when needing to update state in LangGraph, designing graphs with nodes that modify state, or implementing reducers for state management. This page explains how to define state schemas in LangGraph using TypedDict, how nodes can update state, and how to use reducers to control state updates, with specific examples using message handling.

[How to stream](https://langchain-ai.github.io/langgraph/how-tos/streaming/): LLM should read this page when needing to implement streaming in LangGraph applications, understanding different streaming modes, or troubleshooting LLM response delivery. This page explains how to stream LLM outputs using LangGraph, covering different streaming modes (values, updates, custom, messages, debug), with code examples for each mode and how to combine multiple streaming modes.
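The streaming how-to above distinguishes modes by what each chunk contains. A short sketch, assuming `graph` is a compiled graph with an LLM node such as the persistence sketch earlier in this file:

```python
# stream_mode controls chunk contents; modes can also be combined in a list.
inputs = {"messages": [("user", "Tell me a short joke.")]}
config = {"configurable": {"thread_id": "stream-demo"}}

# "updates": one chunk per node, containing that node's state delta.
for chunk in graph.stream(inputs, config, stream_mode="updates"):
    print(chunk)

# "messages": (token, metadata) pairs as the LLM emits them.
for token, metadata in graph.stream(inputs, config, stream_mode="messages"):
    print(token.content, end="", flush=True)
```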
[How to stream data from within a tool](https://langchain-ai.github.io/langgraph/how-tos/streaming-events-from-within-tools/): LLM should read this page when implementing streaming functionality in tools, integrating LLM outputs with custom data streams, or developing LangGraph applications with real-time feedback. This page explains how to stream data from within tools using LangGraph, covering custom data streaming with stream_mode="custom", LLM token streaming with stream_mode="messages", and implementation approaches both with and without LangChain.

[How to stream LLM tokens from specific nodes](https://langchain-ai.github.io/langgraph/how-tos/streaming-specific-nodes/): LLM should read this page when needing to filter token streaming from specific nodes in LangGraph, implementing selective streaming in multi-node workflows, or controlling which node outputs are displayed. Guide explains how to stream LLM tokens from specific nodes using stream_mode="messages" and filtering by the langgraph_node metadata field, with complete code examples for implementing this in StateGraph applications.

[How to stream from subgraphs](https://langchain-ai.github.io/langgraph/how-tos/streaming-subgraphs/): LLM should read this page when needing to stream outputs from subgraphs in LangGraph, implementing nested graph streaming, or debugging hierarchical graph execution. This page explains how to stream outputs from subgraphs in LangGraph by using the subgraphs=True parameter in the parent graph's stream() method, with a complete code example showing the difference between regular streaming and subgraph streaming.

[How to stream LLM tokens from your graph](https://langchain-ai.github.io/langgraph/how-tos/streaming-tokens): LLM should read this page when needing to stream LLM tokens from a LangGraph application, implementing custom token streaming, or filtering streamed outputs. This page explains how to stream individual LLM tokens from LangGraph nodes using graph.stream() with different stream_mode options, including examples with and without LangChain, async implementations, and how to filter streamed tokens using metadata.

[How to use subgraphs](https://langchain-ai.github.io/langgraph/how-tos/subgraph/): LLM should read this page when building complex systems with subgraphs, implementing multi-agent systems, or needing to share state between parent graphs and subgraphs. The page explains two methods for using subgraphs: adding compiled subgraphs when schemas share keys, and invoking subgraphs via node functions when schemas differ, with code examples for both approaches.

[How to add thread-level persistence to a subgraph](https://langchain-ai.github.io/langgraph/how-tos/subgraph-persistence/): LLM should read this page when implementing persistence in nested LangGraph architectures, adding thread-level storage to subgraphs, or debugging state propagation in LangGraph applications. This guide demonstrates how to add thread-level persistence to subgraphs by passing a checkpointer only to the parent graph during compilation, accessing persisted states from both parent and child graphs, and retrieving subgraph state using the proper configuration parameters.

[How to transform inputs and outputs of a subgraph](https://langchain-ai.github.io/langgraph/how-tos/subgraph-transform-state/): LLM should read this page when needing to work with nested subgraphs, transforming state between parent and child graphs, or integrating independent state components in LangGraph. This page demonstrates how to transform inputs and outputs between parent graphs and subgraphs with different state structures, showing implementation of three nested graphs (parent, child, grandchild) with separate state dictionaries and transformation functions.

[How to view and update state in subgraphs](https://langchain-ai.github.io/langgraph/how-tos/subgraphs-manage-state/): LLM should read this page when working with state management in nested subgraphs, implementing human-in-the-loop patterns, or debugging complex graph flows. This guide covers viewing and updating state in LangGraph subgraphs, including how to resume execution from breakpoints, modify subgraph state, act as specific nodes, and work with multi-level nested subgraphs.

[How to call tools using ToolNode](https://langchain-ai.github.io/langgraph/how-tos/tool-calling/): LLM should read this page when learning how to implement tool calling with LangGraph, when working with the ToolNode component, or when building ReAct agents. This page covers using LangGraph's ToolNode for tool calling, including setup, manual invocation, working with chat models, building a ReAct agent, handling single and parallel tool calls, and error handling.

[How to handle tool calling errors](https://langchain-ai.github.io/langgraph/how-tos/tool-calling-errors/): LLM should read this page when handling tool call errors, implementing error handling for LLM-tool interactions, or creating fallback strategies for failed tool calls. This page covers strategies for handling tool calling errors in LangGraph, including using the prebuilt ToolNode with built-in error handling, implementing custom error handling patterns, and fallback mechanisms with model upgrades when tools fail.

[How to update graph state from tools](https://langchain-ai.github.io/langgraph/how-tos/update-state-from-tools/): LLM should read this page when needing to update graph state from tools in LangGraph, implementing personalized responses based on tool updates, or using Command objects to modify state. This page details how to update graph state from tools using Command objects, creating personalized agents with state tracking, and implementing dynamic prompt construction based on updated state values.

[How to interact with the deployment using RemoteGraph](https://langchain-ai.github.io/langgraph/how-tos/use-remote-graph/): LLM should read this page when needing to interact with LangGraph Platform deployments remotely, when implementing RemoteGraph interfaces, or when using deployed graphs as subgraphs. This page explains how to use RemoteGraph to interact with LangGraph Platform deployments, covering initialization methods (URL-based or client-based), synchronous/asynchronous invocation, thread-level persistence, and using RemoteGraph as a subgraph in larger applications.

[How to visualize your graph](https://langchain-ai.github.io/langgraph/how-tos/visualization): LLM should read this page when needing to visualize LangGraph graphs, looking for graph visualization methods, or working with graph visualization in Python. Comprehensive guide for visualizing graphs in LangGraph with multiple methods: Mermaid syntax, Mermaid.ink API for PNG rendering, Pyppeteer-based visualization, and Graphviz, with customization options for colors, styles, and layout.
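The visualization how-to above boils down to two calls on any compiled graph. A sketch, assuming `graph` is any compiled LangGraph graph (PNG rendering goes through the mermaid.ink API, so it needs network access):

```python
# Emit Mermaid syntax for the graph, or render it to a PNG via mermaid.ink.
print(graph.get_graph().draw_mermaid())    # Mermaid source, printable anywhere

png = graph.get_graph().draw_mermaid_png() # bytes; calls the mermaid.ink API
with open("graph.png", "wb") as f:
    f.write(png)
```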
[How to wait for user input (Functional API)](https://langchain-ai.github.io/langgraph/how-tos/wait-user-input-functional/): LLM should read this page when implementing human-in-the-loop workflows, integrating user input into agent systems, or adding interruption capabilities to LangGraph applications. The page explains how to use the `interrupt()` function in LangGraph's Functional API to pause execution for human input, with examples for both simple workflows and ReAct agents, including code implementations with checkpointing.
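The Functional API version of this pattern is compact enough to show whole: the entrypoint pauses at `interrupt()`, and a second invocation with `Command(resume=...)` finishes it. A minimal sketch with illustrative names:

```python
# Functional API interrupt: pause an entrypoint for human input, then resume.
from langgraph.checkpoint.memory import MemorySaver
from langgraph.func import entrypoint
from langgraph.types import interrupt, Command

@entrypoint(checkpointer=MemorySaver())
def workflow(question: str) -> str:
    answer = interrupt(question)  # pause here until a human responds
    return f"Human said: {answer}"

config = {"configurable": {"thread_id": "1"}}
workflow.invoke("Deploy to production?", config)       # pauses at interrupt()
print(workflow.invoke(Command(resume="yes"), config))  # resumes -> "Human said: yes"
```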