How to add human-in-the-loop processes to the prebuilt ReAct agent¶
This tutorial shows how to add human-in-the-loop processes to the prebuilt ReAct agent. Please see this tutorial for how to get started with the prebuilt ReAct agent.
You can add a breakpoint before tools are called by passing interrupt_before=["tools"]
to create_react_agent
. Note that you need to be using a checkpointer for this to work.
Setup¶
In [1]:
%%capture --no-stderr
%pip install -U langgraph langchain-openai
In [2]:
import getpass
import os


def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("OPENAI_API_KEY")

# Recommended
_set_env("LANGCHAIN_API_KEY")
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = "Create ReAct Agent Tutorial"
OPENAI_API_KEY: ········
Code¶
In [3]:
# First we initialize the model we want to use.
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o", temperature=0)

# For this tutorial we will use a custom tool that returns pre-defined
# values for the weather in two cities (NYC & SF).
from typing import Literal

from langchain_core.tools import tool


@tool
def get_weather(city: Literal["nyc", "sf"]):
    """Use this to get weather information."""
    if city == "nyc":
        return "It might be cloudy in nyc"
    elif city == "sf":
        return "It's always sunny in sf"
    else:
        raise AssertionError("Unknown city")


tools = [get_weather]

# We need a checkpointer to enable human-in-the-loop patterns
from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()

# Define the graph
from langgraph.prebuilt import create_react_agent

graph = create_react_agent(
    model, tools=tools, interrupt_before=["tools"], checkpointer=memory
)
Usage¶
In [2]:
def print_stream(stream):
    for s in stream:
        message = s["messages"][-1]
        if isinstance(message, tuple):
            print(message)
        else:
            message.pretty_print()
In [4]:
config = {"configurable": {"thread_id": "42"}}
inputs = {"messages": [("user", "What's the weather in SF?")]}
print_stream(graph.stream(inputs, config, stream_mode="values"))
================================ Human Message =================================

What's the weather in SF?
================================== Ai Message ==================================
Tool Calls:
  get_weather (call_0OMmuTLec9t8kxMVkllZCSxo)
 Call ID: call_0OMmuTLec9t8kxMVkllZCSxo
  Args:
    city: sf
In [5]:
snapshot = graph.get_state(config)
print("Next step: ", snapshot.next)
Next step: ('tools',)
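Before resuming, a human reviewer will typically want to see which tool calls are pending. The snapshot's `values` hold the message history, and the last message (an AI message) carries the tool calls. Here is a minimal, self-contained sketch of that inspection using a stand-in message class — `FakeAIMessage` and `pending_tool_calls` are illustrative helpers, not LangGraph APIs; in the real graph the last message is a LangChain AIMessage with the same `tool_calls` attribute:

```python
from dataclasses import dataclass, field


@dataclass
class FakeAIMessage:
    """Stand-in for langchain_core's AIMessage, for illustration only."""

    content: str = ""
    tool_calls: list = field(default_factory=list)


def pending_tool_calls(values):
    """Return the tool calls awaiting approval from snapshot.values."""
    last = values["messages"][-1]
    return getattr(last, "tool_calls", [])


# Shape of snapshot.values while the graph is paused before the "tools" node:
values = {
    "messages": [
        FakeAIMessage(
            tool_calls=[{"name": "get_weather", "args": {"city": "sf"}, "id": "call_1"}]
        )
    ]
}

for call in pending_tool_calls(values):
    print(f"{call['name']}({call['args']})")  # get_weather({'city': 'sf'})
```

With the real snapshot you would call `pending_tool_calls(snapshot.values)` and show the result to the reviewer before continuing.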
In [6]:
print_stream(graph.stream(None, config, stream_mode="values"))
================================= Tool Message =================================
Name: get_weather

It's always sunny in sf
================================== Ai Message ==================================

The weather in San Francisco is currently sunny.
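A reviewer does not have to approve every call as-is. Because the graph is paused, you can also edit the pending tool call before resuming: in LangGraph this means writing an edited message back with graph.update_state(config, {"messages": [edited_message]}) and then resuming with graph.stream(None, config). The dict-level edit itself can be sketched as follows — `edit_first_tool_call` is an illustrative helper, not a LangGraph API:

```python
import copy


def edit_first_tool_call(tool_calls, new_args):
    """Return a copy of a tool-call list with the first call's args replaced.

    In the real flow you would set the edited list on the paused AI message,
    write it back with graph.update_state(config, {"messages": [message]}),
    and then resume with graph.stream(None, config, stream_mode="values").
    """
    edited = copy.deepcopy(tool_calls)
    edited[0]["args"] = new_args
    return edited


pending = [{"name": "get_weather", "args": {"city": "sf"}, "id": "call_1"}]
# The human decides to ask about NYC instead:
edited = edit_first_tool_call(pending, {"city": "nyc"})
print(edited[0]["args"])  # {'city': 'nyc'}
```

Using `copy.deepcopy` keeps the original pending call intact, so the reviewer can still compare the two versions before committing the edit.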