How to review tool calls (Functional API)¶
Prerequisites
This guide assumes familiarity with the following:
- Implementing human-in-the-loop workflows with interrupt
- How to create a ReAct agent using the Functional API
This guide demonstrates how to implement human-in-the-loop workflows in a ReAct agent using the LangGraph Functional API.
We will build off of the agent created in the How to create a ReAct agent using the Functional API guide.
Specifically, we will demonstrate how to review tool calls generated by a chat model prior to their execution. This can be accomplished through use of the interrupt function at key points in our application.
Preview:
We will implement a simple function that reviews tool calls generated from our chat model and call it from inside our application's entrypoint:
def review_tool_call(tool_call: ToolCall) -> Union[ToolCall, ToolMessage]:
    """Review a tool call, returning a validated version."""
    human_review = interrupt(
        {
            "question": "Is this correct?",
            "tool_call": tool_call,
        }
    )
    review_action = human_review["action"]
    review_data = human_review.get("data")
    if review_action == "continue":
        return tool_call
    elif review_action == "update":
        updated_tool_call = {**tool_call, **{"args": review_data}}
        return updated_tool_call
    elif review_action == "feedback":
        return ToolMessage(
            content=review_data, name=tool_call["name"], tool_call_id=tool_call["id"]
        )
Setup¶
First, let's install the required packages and set our API keys:
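The required packages can be installed as follows (package names inferred from the imports used in this guide):

pip install -U langgraph langchain-openai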
import getpass
import os


def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("OPENAI_API_KEY")
Set up LangSmith for better debugging
Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph — read more about how to get started in the docs.
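If you want tracing enabled for this guide, one approach is to set the LangSmith environment variables before running the code. A minimal sketch, reusing the _set_env helper defined above (variable names follow LangSmith's documented configuration and may differ by version):

import os

# Enable LangSmith tracing (optional)
os.environ["LANGSMITH_TRACING"] = "true"
# Prompt for the LangSmith API key if it is not already set
_set_env("LANGSMITH_API_KEY")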
Define model and tools¶
Let's first define the tools and model we will use for our example. As in the ReAct agent guide, we will use a single placeholder tool that gets a description of the weather for a location.
We will use an OpenAI chat model for this example, but any model supporting tool-calling will suffice.
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

model = ChatOpenAI(model="gpt-4o-mini")


@tool
def get_weather(location: str):
    """Call to get the weather from a specific location."""
    # This is a placeholder for the actual implementation
    if any(city in location.lower() for city in ["sf", "san francisco"]):
        return "It's sunny!"
    elif "boston" in location.lower():
        return "It's rainy!"
    else:
        return f"I am not sure what the weather is in {location}"


tools = [get_weather]
API Reference: ChatOpenAI | tool
Define tasks¶
Our tasks are unchanged from the ReAct agent guide:
- Call model: We want to query our chat model with a list of messages.
- Call tool: If our model generates tool calls, we want to execute them.
from langchain_core.messages import ToolCall, ToolMessage
from langgraph.func import entrypoint, task

tools_by_name = {tool.name: tool for tool in tools}


@task
def call_model(messages):
    """Call model with a sequence of messages."""
    response = model.bind_tools(tools).invoke(messages)
    return response


@task
def call_tool(tool_call):
    tool = tools_by_name[tool_call["name"]]
    observation = tool.invoke(tool_call["args"])
    return ToolMessage(content=observation, tool_call_id=tool_call["id"])
API Reference: ToolCall | ToolMessage | entrypoint | task
Define entrypoint¶
To review tool calls before execution, we add a review_tool_call function that calls interrupt. When this function is called, execution will be paused until we issue a command to resume it.
Given a tool call, our function will interrupt for human review. At that point we can either:
- Accept the tool call;
- Revise the tool call and continue;
- Generate a custom tool message (e.g., instructing the model to re-format its tool call).
We will demonstrate these three cases in the usage examples below.
from typing import Union


def review_tool_call(tool_call: ToolCall) -> Union[ToolCall, ToolMessage]:
    """Review a tool call, returning a validated version."""
    human_review = interrupt(
        {
            "question": "Is this correct?",
            "tool_call": tool_call,
        }
    )
    review_action = human_review["action"]
    review_data = human_review.get("data")
    if review_action == "continue":
        return tool_call
    elif review_action == "update":
        updated_tool_call = {**tool_call, **{"args": review_data}}
        return updated_tool_call
    elif review_action == "feedback":
        return ToolMessage(
            content=review_data, name=tool_call["name"], tool_call_id=tool_call["id"]
        )
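For reference, the Command resume payloads corresponding to the three branches above look like this (the data values are illustrative and match the usage examples later in this guide):

from langgraph.types import Command

# Accept the tool call as-is
accept = Command(resume={"action": "continue"})

# Revise the tool call with updated arguments
revise = Command(resume={"action": "update", "data": {"location": "SF, CA"}})

# Return feedback to the model instead of executing the tool
feedback = Command(resume={"action": "feedback", "data": "Please format as <City>, <State>."})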
We can now update our entrypoint to review the generated tool calls. If a tool call is accepted or revised, we execute in the same way as before. Otherwise, we just append the ToolMessage supplied by the human.
Tip
The results of prior tasks — in this case the initial model call — are persisted, so that they are not run again following the interrupt.
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph.message import add_messages
from langgraph.types import Command, interrupt

checkpointer = MemorySaver()


@entrypoint(checkpointer=checkpointer)
def agent(messages, previous):
    if previous is not None:
        messages = add_messages(previous, messages)

    llm_response = call_model(messages).result()
    while True:
        if not llm_response.tool_calls:
            break

        # Review tool calls
        tool_results = []
        tool_calls = []
        for i, tool_call in enumerate(llm_response.tool_calls):
            review = review_tool_call(tool_call)
            if isinstance(review, ToolMessage):
                tool_results.append(review)
            else:  # is a validated tool call
                tool_calls.append(review)
                if review != tool_call:
                    llm_response.tool_calls[i] = review  # update message

        # Execute remaining tool calls
        tool_result_futures = [call_tool(tool_call) for tool_call in tool_calls]
        remaining_tool_results = [fut.result() for fut in tool_result_futures]

        # Append to message list
        messages = add_messages(
            messages,
            [llm_response, *tool_results, *remaining_tool_results],
        )

        # Call model again
        llm_response = call_model(messages).result()

    # Generate final response
    messages = add_messages(messages, llm_response)
    return entrypoint.final(value=llm_response, save=messages)
API Reference: MemorySaver | add_messages | Command | interrupt
Usage¶
Let's demonstrate some scenarios.
def _print_step(step: dict) -> None:
    for task_name, result in step.items():
        if task_name == "agent":
            continue  # just stream from tasks
        print(f"\n{task_name}:")
        if task_name in ("__interrupt__", "review_tool_call"):
            print(result)
        else:
            result.pretty_print()
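Since the agent uses a checkpointer, each invocation needs a config with a thread ID. We use a fresh thread for each scenario below so that state does not carry over between examples (the IDs themselves are arbitrary):

config = {"configurable": {"thread_id": "1"}}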
Accept a tool call¶
To accept a tool call, we just indicate in the data we provide in the Command
that the tool call should pass through.
user_message = {"role": "user", "content": "What's the weather in san francisco?"}
print(user_message)

for step in agent.stream([user_message], config):
    _print_step(step)
{'role': 'user', 'content': "What's the weather in san francisco?"}
call_model:
================================== Ai Message ==================================
Tool Calls:
get_weather (call_Bh5cSwMqCpCxTjx7AjdrQTPd)
Call ID: call_Bh5cSwMqCpCxTjx7AjdrQTPd
Args:
location: San Francisco
__interrupt__:
(Interrupt(value={'question': 'Is this correct?', 'tool_call': {'name': 'get_weather', 'args': {'location': 'San Francisco'}, 'id': 'call_Bh5cSwMqCpCxTjx7AjdrQTPd', 'type': 'tool_call'}}, resumable=True, ns=['agent:22fcc9cd-3573-b39b-eea7-272a025903e2'], when='during'),)
human_input = Command(resume={"action": "continue"})
for step in agent.stream(human_input, config):
    _print_step(step)
call_tool:
================================= Tool Message =================================
It's sunny!
call_model:
================================== Ai Message ==================================
The weather in San Francisco is sunny!
Revise a tool call¶
To revise a tool call, we can supply updated arguments.
config = {"configurable": {"thread_id": "2"}}  # fresh thread for this scenario

user_message = {"role": "user", "content": "What's the weather in san francisco?"}
print(user_message)

for step in agent.stream([user_message], config):
    _print_step(step)
{'role': 'user', 'content': "What's the weather in san francisco?"}
call_model:
================================== Ai Message ==================================
Tool Calls:
get_weather (call_b9h8e18FqH0IQm3NMoeYKz6N)
Call ID: call_b9h8e18FqH0IQm3NMoeYKz6N
Args:
location: san francisco
__interrupt__:
(Interrupt(value={'question': 'Is this correct?', 'tool_call': {'name': 'get_weather', 'args': {'location': 'san francisco'}, 'id': 'call_b9h8e18FqH0IQm3NMoeYKz6N', 'type': 'tool_call'}}, resumable=True, ns=['agent:9559a81d-5720-dc19-a457-457bac7bdd83'], when='during'),)
human_input = Command(resume={"action": "update", "data": {"location": "SF, CA"}})
for step in agent.stream(human_input, config):
    _print_step(step)
call_tool:
================================= Tool Message =================================
It's sunny!
call_model:
================================== Ai Message ==================================
The weather in San Francisco is sunny!
- In the trace before the interrupt, we generate a tool call for location "san francisco".
- In the trace after resuming, we see that the tool call in the message has been updated to "SF, CA".
Generate a custom ToolMessage¶
To generate a custom ToolMessage, we supply the content of the message. In this case we will ask the model to reformat its tool call.
config = {"configurable": {"thread_id": "3"}}  # fresh thread for this scenario

user_message = {"role": "user", "content": "What's the weather in san francisco?"}
print(user_message)

for step in agent.stream([user_message], config):
    _print_step(step)
{'role': 'user', 'content': "What's the weather in san francisco?"}
call_model:
================================== Ai Message ==================================
Tool Calls:
get_weather (call_VqGjKE7uu8HdWs9XuY1kMV18)
Call ID: call_VqGjKE7uu8HdWs9XuY1kMV18
Args:
location: San Francisco
__interrupt__:
(Interrupt(value={'question': 'Is this correct?', 'tool_call': {'name': 'get_weather', 'args': {'location': 'San Francisco'}, 'id': 'call_VqGjKE7uu8HdWs9XuY1kMV18', 'type': 'tool_call'}}, resumable=True, ns=['agent:4b3b372b-9da3-70be-5c68-3d9317346070'], when='during'),)
human_input = Command(
    resume={
        "action": "feedback",
        "data": "Please format as <City>, <State>.",
    },
)

for step in agent.stream(human_input, config):
    _print_step(step)
call_model:
================================== Ai Message ==================================
Tool Calls:
get_weather (call_xoXkK8Cz0zIpvWs78qnXpvYp)
Call ID: call_xoXkK8Cz0zIpvWs78qnXpvYp
Args:
location: San Francisco, CA
__interrupt__:
(Interrupt(value={'question': 'Is this correct?', 'tool_call': {'name': 'get_weather', 'args': {'location': 'San Francisco, CA'}, 'id': 'call_xoXkK8Cz0zIpvWs78qnXpvYp', 'type': 'tool_call'}}, resumable=True, ns=['agent:4b3b372b-9da3-70be-5c68-3d9317346070'], when='during'),)
human_input = Command(resume={"action": "continue"})
for step in agent.stream(human_input, config):
    _print_step(step)
call_tool:
================================= Tool Message =================================
It's sunny!
call_model:
================================== Ai Message ==================================
The weather in San Francisco, CA is sunny!