Human-in-the-loop

To review, edit, and approve tool calls in an agent, you can use LangGraph's built-in human-in-the-loop features, specifically the interrupt() primitive.

LangGraph allows you to pause execution indefinitely (for minutes, hours, or even days) until human input is received.

This is possible because the agent state is checkpointed into a database, which allows the system to persist execution context and later resume the workflow, continuing from where it left off.
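For example, swapping the in-memory checkpointer used later in this guide for a durable one keeps paused runs available across process restarts. A minimal sketch, assuming the @langchain/langgraph-checkpoint-sqlite package is installed (the file path is illustrative):

import { SqliteSaver } from "@langchain/langgraph-checkpoint-sqlite";

// Store checkpoints in a SQLite file so an interrupted run can be resumed after a restart
const checkpointer = SqliteSaver.fromConnString("./checkpoints.db");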

For a deeper dive into the human-in-the-loop concept, see the concept guide.

A human can review and edit the output from the agent before proceeding. This is particularly critical in applications where the tool calls requested may be sensitive or require human oversight.

Review tool calls

To add a human approval step to a tool:

  1. Use interrupt() in the tool to pause execution.
  2. Resume with a Command({ resume: ... }) to continue based on human input.

import { MemorySaver } from "@langchain/langgraph-checkpoint";
import { interrupt } from "@langchain/langgraph";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { initChatModel } from "langchain/chat_models/universal";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// An example of a sensitive tool that requires human review / approval
const bookHotel = tool(
  async (input: { hotelName: string }) => {
    let hotelName = input.hotelName;
    // Pause the graph here and surface the proposed call to a human reviewer;
    // execution resumes when a Command({ resume: ... }) is received.
    const response = interrupt(
      `Trying to call \`bookHotel\` with args {'hotel_name': ${hotelName}}. ` +
      `Please approve or suggest edits.`
    );
    if (response.type === "accept") {
      // proceed to execute the tool logic with the original arguments
    } else if (response.type === "edit") {
      // use the arguments supplied by the human reviewer instead
      hotelName = response.args["hotel_name"];
    } else {
      throw new Error(`Unknown response type: ${response.type}`);
    }
    return `Successfully booked a stay at ${hotelName}.`;
  },
  {
    name: "bookHotel",
    schema: z.object({
      hotelName: z.string().describe("Hotel to book"),
    }),
    description: "Book a hotel.",
  }
);

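// interrupt() requires a checkpointer so the paused state can be saved and resumed later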
const checkpointer = new MemorySaver();  

const llm = await initChatModel("anthropic:claude-3-7-sonnet-latest");
const agent = createReactAgent({
  llm,
  tools: [bookHotel],
  checkpointer  
});

Run the agent with the stream() method, passing the config object to specify the thread ID. This allows the agent to resume the same conversation on future invocations.

const config = {
  configurable: {
    thread_id: "1",
  },
};

for await (const chunk of await agent.stream(
  { messages: "book a stay at McKittrick hotel" },
  config
)) {
  console.log(chunk);
  console.log("\n");
}

You should see that the agent runs until it reaches the interrupt() call, at which point it pauses and waits for human input.
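Optionally, you can inspect the paused thread to see the pending interrupt before responding. A minimal sketch; the exact shape of the state snapshot may vary between LangGraph versions:

// Read the checkpointed state for this thread
const state = await agent.getState(config);
console.log(state.next); // the node that is waiting to run, e.g. ["tools"]
console.log(state.tasks[0]?.interrupts); // the value passed to interrupt() inside the tool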

Resume the agent with a Command({ resume: ... }) to continue based on human input.

import { Command } from "@langchain/langgraph";

for await (const chunk of await agent.stream(
  new Command({ resume: { type: "accept" } }),  
  // new Command({ resume: { type: "edit", args: { "hotel_name": "McKittrick Hotel" } } }),
  config
)) {
  console.log(chunk);
  console.log("\n");
}
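If you resume with the "edit" payload instead (the commented-out line above), that object becomes the return value of interrupt() inside the tool, so the booking proceeds with the human-corrected hotel name. Keep in mind that when the run resumes, the tool function re-executes from the top, so any code placed before interrupt() should be safe to run again.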

Additional resources