How to add human-in-the-loop processes to the prebuilt ReAct agent¶
This tutorial will show how to add human-in-the-loop processes to the prebuilt ReAct agent. Please see this tutorial for how to get started with the prebuilt ReAct agent.
You can add a breakpoint before tools are called by passing interrupt_before=["tools"] to create_react_agent. Note that you need to be using a checkpointer for this to work.
Setup¶
First, let's install the required packages and set our API keys:
%%capture --no-stderr
%pip install -U langgraph langchain-openai
import getpass
import os
def _set_env(var: str):
if not os.environ.get(var):
os.environ[var] = getpass.getpass(f"{var}: ")
_set_env("OPENAI_API_KEY")
Code¶
# First we initialize the model we want to use.
from langchain_openai import ChatOpenAI
model = ChatOpenAI(model="gpt-4o", temperature=0)
# For this tutorial we will use a custom tool that returns pre-defined values for the weather in two cities (NYC & SF)
from typing import Literal
from langchain_core.tools import tool
@tool
def get_weather(location: str):
"""Use this to get weather information from a given location."""
if location.lower() in ["nyc", "new york"]:
return "It might be cloudy in nyc"
elif location.lower() in ["sf", "san francisco"]:
return "It's always sunny in sf"
else:
raise AssertionError("Unknown Location")
tools = [get_weather]
# We need a checkpointer to enable human-in-the-loop patterns
from langgraph.checkpoint.memory import MemorySaver
memory = MemorySaver()
# Define the graph
from langgraph.prebuilt import create_react_agent
graph = create_react_agent(
model, tools=tools, interrupt_before=["tools"], checkpointer=memory
)
Usage¶
def print_stream(stream):
for s in stream:
message = s["messages"][-1]
if isinstance(message, tuple):
print(message)
else:
message.pretty_print()
from langchain_core.messages import HumanMessage
config = {"configurable": {"thread_id": "42"}}
inputs = {"messages": [("user", "what is the weather in SF, CA?")]}
print_stream(graph.stream(inputs, config, stream_mode="values"))
================================ Human Message =================================

what is the weather in SF, CA?
================================== Ai Message ==================================
Tool Calls:
  get_weather (call_uCtiELl4MERM1BSzvGQNVNIO)
 Call ID: call_uCtiELl4MERM1BSzvGQNVNIO
  Args:
    location: SF, CA
We can verify that our graph stopped at the right place:
snapshot = graph.get_state(config)
print("Next step: ", snapshot.next)
Next step: ('tools',)
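Before deciding whether to approve or edit, a reviewer will usually want to see exactly what the agent is about to run. One optional way to do that, reusing the snapshot from above, is to peek at the pending tool calls on the last message:
# Inspect the pending tool call(s) before deciding to approve or edit
print(snapshot.values["messages"][-1].tool_calls)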
Now we can either approve or edit the tool call before proceeding to the next node. To approve the tool call, we simply continue streaming the graph with None as the input. To edit the tool call, we update the state so it contains the tool call we want; once the update has been applied, we can continue.
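Both paths are demonstrated step by step below. As a compact reference, here is one way they could be packaged into a single helper; the name review_and_resume is our own and not part of LangGraph:
# Hypothetical helper (not part of LangGraph) combining the approve and edit paths
def review_and_resume(graph, config, new_args=None):
    if new_args is not None:
        # Edit path: rewrite the pending tool call's arguments, then apply the update
        state = graph.get_state(config)
        last_message = state.values["messages"][-1]
        last_message.tool_calls[0]["args"] = new_args
        graph.update_state(config, {"messages": [last_message]})
    # Approve path (or resume after editing): continue streaming with None as the input
    return graph.stream(None, config, stream_mode="values")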
Let's first try resuming with the tool call unchanged; we will see an error arise:
print_stream(graph.stream(None, config, stream_mode="values"))
================================== Ai Message ==================================
Tool Calls:
  get_weather (call_uCtiELl4MERM1BSzvGQNVNIO)
 Call ID: call_uCtiELl4MERM1BSzvGQNVNIO
  Args:
    location: SF, CA
================================= Tool Message =================================
Name: get_weather

Error: AssertionError('Unknown Location')
 Please fix your mistakes.
================================== Ai Message ==================================
Tool Calls:
  get_weather (call_CS02EQchFuqotH3gAiKcABx1)
 Call ID: call_CS02EQchFuqotH3gAiKcABx1
  Args:
    location: San Francisco, CA
This error arose because our tool does not recognize "SF, CA" as a location; the model then retried with "San Francisco, CA", which our tool as written would treat as unknown too.
Let's show how we would edit the pending tool call to search for "San Francisco" instead of "San Francisco, CA". We will update the state and then resume streaming the graph, and this time we should see no errors arise:
state = graph.get_state(config)
last_message = state.values["messages"][-1]

# Overwrite the pending tool call's arguments with a location the tool recognizes
last_message.tool_calls[0]["args"] = {"location": "San Francisco"}

graph.update_state(config, {"messages": [last_message]})
{'configurable': {'thread_id': '42', 'checkpoint_ns': '', 'checkpoint_id': '1ef706ce-e7a4-6740-8004-0bf23a8d9eb8'}}
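Optionally, we can confirm that the pending tool call now carries the edited arguments before resuming:
# Optional sanity check: the pending tool call should now use the edited arguments
print(graph.get_state(config).values["messages"][-1].tool_calls[0]["args"])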
print_stream(graph.stream(None, config, stream_mode="values"))
================================== Ai Message ==================================
Tool Calls:
  get_weather (call_CS02EQchFuqotH3gAiKcABx1)
 Call ID: call_CS02EQchFuqotH3gAiKcABx1
  Args:
    location: San Francisco
================================= Tool Message =================================
Name: get_weather

It's always sunny in sf
================================== Ai Message ==================================

The weather in San Francisco is currently sunny.
Fantastic! Our graph updated properly to query the weather in San Francisco and got the correct "It's always sunny in sf" response from the tool, and then responded to the user accordingly.
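For completeness, here is what the plain "approve" path looks like on a fresh thread, using a location the tool does recognize (the thread ID "43" is arbitrary):
# The "approve" path: the first stream stops before the tools node,
# and resuming with None lets the pending get_weather call run unchanged.
config = {"configurable": {"thread_id": "43"}}
inputs = {"messages": [("user", "what is the weather in nyc?")]}
print_stream(graph.stream(inputs, config, stream_mode="values"))
print_stream(graph.stream(None, config, stream_mode="values"))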