How to create a ReAct agent from scratch (Functional API)¶
Prerequisites
This guide assumes familiarity with the following:
- Chat models
- Messages
- Tool calling
- Entrypoints and Tasks
This guide demonstrates how to implement a ReAct agent using the LangGraph Functional API.
The ReAct agent is a tool-calling agent that operates as follows:
- Queries are issued to a chat model.
- If the model generates no tool calls, we return the model response.
- If the model generates tool calls, we execute them with the available tools, append the results as tool messages to our message list, and repeat the process.
This is a simple and versatile set-up that can be extended with memory, human-in-the-loop capabilities, and other features. See the dedicated how-to guides for examples.
Setup¶
First, let's install the required packages and set our API keys:
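The install command itself is not shown here; based on the imports used in this guide, a typical installation (an assumption, adjust to your environment) is:

```
pip install -U langgraph langchain-openai
```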
```python
import getpass
import os


def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("OPENAI_API_KEY")
```
Set up LangSmith for better debugging
Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph. Read more about how to get started in the docs.
Create ReAct agent¶
Now that we have installed the required packages and set our environment variables, we can create our agent.
Define model and tools¶
Let's first define the tools and model we will use for our example. Here we will use a single placeholder tool that gets a description of the weather for a location.
We will use an OpenAI chat model for this example, but any model that supports tool calling will work.
```python
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

model = ChatOpenAI(model="gpt-4o-mini")


@tool
def get_weather(location: str):
    """Call to get the weather from a specific location."""
    # This is a placeholder for the actual implementation
    if any(city in location.lower() for city in ["sf", "san francisco"]):
        return "It's sunny!"
    elif "boston" in location.lower():
        return "It's rainy!"
    else:
        return f"I am not sure what the weather is in {location}"


tools = [get_weather]
```
API Reference: ChatOpenAI | tool
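As a quick, optional sanity check (not part of the original example), you can invoke the placeholder tool directly before wiring it into the agent:

```python
# Tools created with @tool are runnables; invoke them with a dict of arguments.
print(get_weather.invoke({"location": "San Francisco"}))  # -> "It's sunny!"
print(get_weather.invoke({"location": "Boston, MA"}))     # -> "It's rainy!"
```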
Define tasks¶
Next, we define the tasks we will execute. There are two different tasks:
- Call model: We want to query our chat model with a list of messages.
- Call tool: If our model generates tool calls, we want to execute them.
```python
from langchain_core.messages import ToolMessage
from langgraph.func import entrypoint, task

tools_by_name = {tool.name: tool for tool in tools}


@task
def call_model(messages):
    """Call model with a sequence of messages."""
    response = model.bind_tools(tools).invoke(messages)
    return response


@task
def call_tool(tool_call):
    tool = tools_by_name[tool_call["name"]]
    observation = tool.invoke(tool_call["args"])
    return ToolMessage(content=observation, tool_call_id=tool_call["id"])
```
API Reference: ToolMessage | entrypoint | task
Define entrypoint¶
Our entrypoint will handle the orchestration of these two tasks. As described above, when our call_model task generates tool calls, the call_tool task will generate responses for each. We append all messages to a single messages list.
Tip
Because tasks return future-like objects, the implementation below executes the tools in parallel.
```python
from langgraph.graph.message import add_messages


@entrypoint()
def agent(messages):
    llm_response = call_model(messages).result()
    while True:
        if not llm_response.tool_calls:
            break

        # Execute tools
        tool_result_futures = [
            call_tool(tool_call) for tool_call in llm_response.tool_calls
        ]
        tool_results = [fut.result() for fut in tool_result_futures]

        # Append to message list
        messages = add_messages(messages, [llm_response, *tool_results])

        # Call model again
        llm_response = call_model(messages).result()

    return llm_response
```
API Reference: add_messages
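A side note on add_messages, since the loop above relies on it: it converts OpenAI-style dicts to LangChain messages, appends them to the existing list, and assigns message IDs so that updates with the same ID overwrite rather than duplicate. A minimal sketch of that behavior (illustrative only, not part of the original guide):

```python
from langgraph.graph.message import add_messages

history = add_messages([], [{"role": "user", "content": "Hi"}])
history = add_messages(history, [{"role": "assistant", "content": "Hello!"}])
print([m.type for m in history])  # ['human', 'ai']
```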
Usage¶
To use our agent, we invoke it with a messages list. Based on our implementation, these can be LangChain message objects or OpenAI-style dicts:
user_message = {"role": "user", "content": "What's the weather in san francisco?"}
print(user_message)
for step in agent.stream([user_message]):
for task_name, message in step.items():
if task_name == "agent":
continue # Just print task updates
print(f"\n{task_name}:")
message.pretty_print()
```
{'role': 'user', 'content': "What's the weather in san francisco?"}

call_model:
================================== Ai Message ==================================
Tool Calls:
  get_weather (call_tNnkrjnoz6MNfCHJpwfuEQ0v)
 Call ID: call_tNnkrjnoz6MNfCHJpwfuEQ0v
  Args:
    location: san francisco

call_tool:
================================= Tool Message =================================
It's sunny!

call_model:
================================== Ai Message ==================================
The weather in San Francisco is sunny!
```
The agent correctly calls the get_weather tool and responds to the user after receiving the information from the tool. Check out the LangSmith trace here.
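If you only need the final answer rather than streamed task updates, you can also invoke the agent directly. This variation is not shown in the original example, but follows from the entrypoint returning the final model response:

```python
# invoke() runs the workflow to completion and returns the entrypoint's return value,
# which here is the final AI message.
final_message = agent.invoke([{"role": "user", "content": "What's the weather in boston?"}])
final_message.pretty_print()
```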
Add thread-level persistence¶
Adding thread-level persistence lets us support conversational experiences with our agent: subsequent invocations will append to the prior messages list, retaining the full conversational context.
To add thread-level persistence to our agent:
- Select a checkpointer: here we will use MemorySaver, a simple in-memory checkpointer.
- Update our entrypoint to accept the previous messages state as a second argument. Here, we simply append the message updates to the previous sequence of messages.
- (Optional) Choose which values will be returned from the workflow and which will be saved by the checkpointer as previous, using entrypoint.final.
```python
from langgraph.checkpoint.memory import MemorySaver

checkpointer = MemorySaver()


@entrypoint(checkpointer=checkpointer)
def agent(messages, previous):
    if previous is not None:
        messages = add_messages(previous, messages)

    llm_response = call_model(messages).result()
    while True:
        if not llm_response.tool_calls:
            break

        # Execute tools
        tool_result_futures = [
            call_tool(tool_call) for tool_call in llm_response.tool_calls
        ]
        tool_results = [fut.result() for fut in tool_result_futures]

        # Append to message list
        messages = add_messages(messages, [llm_response, *tool_results])

        # Call model again
        llm_response = call_model(messages).result()

    # Generate final response
    messages = add_messages(messages, llm_response)
    return entrypoint.final(value=llm_response, save=messages)
```
API Reference: MemorySaver
We will now need to pass in a config when running our application. The config will specify an identifier for the conversational thread.
Tip
Read more about thread-level persistence in our concepts page and how-to guides.
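The config is not defined in the snippets below, so here is a minimal one; the thread_id value is arbitrary and simply groups checkpoints into a single conversational thread:

```python
config = {"configurable": {"thread_id": "1"}}
```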
We start a thread the same way as before, this time passing in the config:
user_message = {"role": "user", "content": "What's the weather in san francisco?"}
print(user_message)
for step in agent.stream([user_message], config):
for task_name, message in step.items():
if task_name == "agent":
continue # Just print task updates
print(f"\n{task_name}:")
message.pretty_print()
```
{'role': 'user', 'content': "What's the weather in san francisco?"}

call_model:
================================== Ai Message ==================================
Tool Calls:
  get_weather (call_lubbUSdDofmOhFunPEZLBz3g)
 Call ID: call_lubbUSdDofmOhFunPEZLBz3g
  Args:
    location: San Francisco

call_tool:
================================= Tool Message =================================
It's sunny!

call_model:
================================== Ai Message ==================================
The weather in San Francisco is sunny!
```
user_message = {"role": "user", "content": "How does it compare to Boston, MA?"}
print(user_message)
for step in agent.stream([user_message], config):
for task_name, message in step.items():
if task_name == "agent":
continue # Just print task updates
print(f"\n{task_name}:")
message.pretty_print()
```
{'role': 'user', 'content': 'How does it compare to Boston, MA?'}

call_model:
================================== Ai Message ==================================
Tool Calls:
  get_weather (call_8sTKYAhSIHOdjLD5d6gaswuV)
 Call ID: call_8sTKYAhSIHOdjLD5d6gaswuV
  Args:
    location: Boston, MA

call_tool:
================================= Tool Message =================================
It's rainy!

call_model:
================================== Ai Message ==================================
Compared to San Francisco, which is sunny, Boston, MA is experiencing rainy weather.
```