How-to guides¶
Welcome to the LangGraph.js How-to Guides! These guides provide practical, step-by-step instructions for accomplishing key tasks in LangGraph.js.
Installation¶
- Manage ecosystem dependencies: How to install and use LangGraph.js alongside other packages in the LangChain ecosystem.
- Use in web environments: How to install and use LangGraph.js in web environments like the browser.
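For reference, a typical installation in a Node.js project pulls in the LangGraph.js package alongside the core LangChain package (package names as published on npm; check the guides above for the exact set your project needs):

```shell
npm install @langchain/langgraph @langchain/core
```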
Core¶
The core guides show how to address common needs when building out AI workflows, with a special focus on ReAct-style agents with tool calling.
- Persistence: How to give your graph "memory" and resilience by saving and loading state
- Time travel: How to navigate and manipulate graph state history once it's persisted
- Stream tokens: How to stream tokens and tool calls from your agent within a graph
- Configuration: How to indicate that a graph can swap out configurable components
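To illustrate the idea behind persistence, here is a minimal conceptual sketch (plain TypeScript, not the LangGraph.js API; `InMemorySaver`, `runStep`, and the thread-id scheme are illustrative names): after each step the graph state is checkpointed under a thread id, so a later invocation can resume where it left off.

```typescript
// A toy state shape: real graphs typically accumulate messages.
type GraphState = { messages: string[] };

// Conceptual stand-in for a checkpointer: saves and restores state per thread.
class InMemorySaver {
  private store = new Map<string, GraphState>();

  save(threadId: string, state: GraphState): void {
    // Store a copy so later mutations don't corrupt the checkpoint.
    this.store.set(threadId, { messages: [...state.messages] });
  }

  load(threadId: string): GraphState {
    const saved = this.store.get(threadId);
    return saved ? { messages: [...saved.messages] } : { messages: [] };
  }
}

// Running one step loads the checkpoint, applies a node's update, and saves.
function runStep(
  saver: InMemorySaver,
  threadId: string,
  input: string
): GraphState {
  const state = saver.load(threadId);
  state.messages.push(input);
  saver.save(threadId, state);
  return state;
}

const saver = new InMemorySaver();
runStep(saver, "thread-1", "hello");
// A second call on the same thread id resumes from the saved state.
const resumed = runStep(saver, "thread-1", "world");
```

The persistence guide above shows the real mechanism, where a checkpointer is passed to the compiled graph and a thread id is supplied in the run configuration.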
Design patterns¶
How to apply common design patterns in your workflows:
- Subgraphs: How to compose subgraphs within a larger graph
- Branching: How to create branching logic in your graphs for parallel node execution
- Human-in-the-loop: How to incorporate human feedback and intervention
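As a sketch of the branching pattern, the snippet below fans a shared state out to two branch nodes that run concurrently, then merges their results back (plain TypeScript for illustration; `branchA`, `branchB`, and `runParallel` are hypothetical names, not LangGraph.js APIs):

```typescript
type State = { input: number; results: number[] };

// Two branch "nodes": each reads the shared state and produces a result.
const branchA = async (s: State): Promise<number> => s.input * 2;
const branchB = async (s: State): Promise<number> => s.input + 10;

async function runParallel(state: State): Promise<State> {
  // Fan out: both branches receive the same state and run concurrently.
  const results = await Promise.all([branchA(state), branchB(state)]);
  // Fan in: merge the branch outputs back into a single state.
  return { ...state, results };
}

// runParallel({ input: 5, results: [] }) resolves with both branch results merged.
```

In LangGraph.js proper, the branching guide above covers how edges from one node to several successors produce this fan-out, and how the state's reducers merge the parallel updates.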
The following examples are especially useful if you are used to LangChain's AgentExecutor configurations.
- Force calling a tool first: Define a fixed workflow before ceding control to the ReAct agent
- Dynamic direct return: Let the LLM decide whether the graph should finish after a tool is run or whether the LLM should be able to review the output and keep going
- Respond in structured format: Let the LLM use tools or populate a schema to respond to the user. Useful if your agent should generate structured content
- Managing agent steps: How to format the intermediate steps of your workflow for the agent