Human-in-the-loop

To review, edit, and approve tool calls in an agent, use LangGraph's built-in human-in-the-loop features, in particular the interrupt() primitive.

LangGraph allows you to pause execution indefinitely, for minutes, hours, or even days, until human input is received.

This is possible because the agent state is checkpointed to a database, which lets the system persist execution context and later resume the workflow from where it left off.

For a deeper dive into the human-in-the-loop concept, see the concept guide.

A human can review and edit the output from the agent before proceeding. This is particularly critical in applications where the tool calls requested may be sensitive or require human oversight.

Review tool calls

To add a human approval step to a tool:

  1. Use interrupt() in the tool to pause execution.
  2. Resume with a new Command({ resume: ... }) to continue based on human input.
import { MemorySaver } from "@langchain/langgraph-checkpoint";
import { interrupt } from "@langchain/langgraph";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { initChatModel } from "langchain/chat_models/universal";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// An example of a sensitive tool that requires human review / approval
const bookHotel = tool(
  async (input: { hotelName: string }) => {
    let hotelName = input.hotelName;
    const response = interrupt(  // (1)!
      `Trying to call \`bookHotel\` with args {'hotel_name': ${hotelName}}. ` +
        `Please approve or suggest edits.`
    );
    if (response.type === "accept") {
      // proceed to execute the tool logic
    } else if (response.type === "edit") {
      hotelName = response.args["hotel_name"];
    } else {
      throw new Error(`Unknown response type: ${response.type}`);
    }
    return `Successfully booked a stay at ${hotelName}.`;
  },
  {
    name: "bookHotel",
    schema: z.object({
      hotelName: z.string().describe("Hotel to book"),
    }),
    description: "Book a hotel.",
  }
);

const checkpointer = new MemorySaver();  // (2)!

const llm = await initChatModel("anthropic:claude-3-7-sonnet-latest");
const agent = createReactAgent({
  llm,
  tools: [bookHotel],
  checkpointer  // (3)!
});
  1. The interrupt function pauses the agent graph at a specific node. In this case, we call interrupt() at the beginning of the tool function, which pauses the graph at the node that executes the tool. The information inside interrupt() (e.g., tool calls) can be presented to a human, and the graph can be resumed with the user input (tool call approval, edit, or feedback).
  2. The MemorySaver checkpointer stores the agent state at every step of the tool-calling loop. This enables short-term memory and human-in-the-loop capabilities. In this example, the state lives in memory; in a production application, use a checkpointer backed by a database.
  3. Initialize the agent with the checkpointer.
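The accept/edit branching inside the tool can be expressed as a small discriminated union. This is a sketch for illustration only; the { type, args } shape is a convention chosen by this example, not something LangGraph prescribes, and resolveHotelName is a hypothetical helper name.

```typescript
// Illustrative typing of the human response consumed inside the tool.
type ReviewResponse =
  | { type: "accept" }
  | { type: "edit"; args: { hotel_name: string } };

function resolveHotelName(requested: string, response: ReviewResponse): string {
  if (response.type === "accept") {
    // Human approved the call as-is.
    return requested;
  } else if (response.type === "edit") {
    // Human supplied a corrected argument.
    return response.args.hotel_name;
  }
  // Any other shape is a bug in the caller.
  throw new Error(`Unknown response type: ${(response as { type: string }).type}`);
}

console.log(resolveHotelName("McKittrick", { type: "accept" }));
// "McKittrick"
console.log(
  resolveHotelName("mckittrick", {
    type: "edit",
    args: { hotel_name: "McKittrick Hotel" },
  })
);
// "McKittrick Hotel"
```

Typing the response this way makes the unknown-type branch an explicit failure path rather than silently booking the wrong hotel.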

Run the agent with the stream() method, passing the config object to specify the thread ID. This allows the agent to resume the same conversation on future invocations.

const config = {
  configurable: {
    thread_id: "1",
  },
};

for await (const chunk of await agent.stream(
  { messages: [{ role: "user", content: "book a stay at McKittrick hotel" }] },
  config
)) {
  console.log(chunk);
  console.log("\n");
}

You should see that the agent runs until it reaches the interrupt() call, at which point it pauses and waits for human input.
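With the default updates stream mode, the pause surfaces as a chunk keyed __interrupt__, whose entries carry the value passed to interrupt() in the tool. The helper below is illustrative (extractInterruptPrompt is not a LangGraph API), sketching how you might pull the review message out of a chunk to show the human:

```typescript
// Illustrative helper: extract the interrupt payload (the string passed to
// interrupt() in the tool) from a streamed chunk, or return null if the
// chunk is a normal node update.
function extractInterruptPrompt(chunk: Record<string, unknown>): string | null {
  const interrupts = chunk["__interrupt__"];
  if (!Array.isArray(interrupts) || interrupts.length === 0) {
    return null;
  }
  // Each entry carries the value given to interrupt(); the rest of the
  // entry's fields are checkpointer metadata we ignore here.
  const first = interrupts[0] as { value?: unknown };
  return typeof first.value === "string" ? first.value : null;
}

// A normal update chunk produces no prompt...
console.log(extractInterruptPrompt({ agent: { messages: [] } }));
// null

// ...while an interrupt chunk yields the review message.
console.log(
  extractInterruptPrompt({
    __interrupt__: [{ value: "Please approve or suggest edits." }],
  })
);
// "Please approve or suggest edits."
```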

Resume the agent with a new Command({ resume: ... }) to continue based on the human's input.

import { Command } from "@langchain/langgraph";

for await (const chunk of await agent.stream(
  new Command({ resume: { type: "accept" } }),  // (1)!
  // new Command({ resume: { type: "edit", args: { "hotel_name": "McKittrick Hotel" } } }),
  config
)) {
  console.log(chunk);
  console.log("\n");
}
  1. The interrupt function is used in conjunction with the Command object to resume the graph with a value provided by the human.

Additional resources