Agents¶
What is an agent?¶
An agent consists of three components: a large language model (LLM), a set of tools it can use, and a prompt that provides instructions.
The LLM operates in a loop. In each iteration, it selects a tool to invoke, provides input, receives the result (an observation), and uses that observation to inform the next action. The loop continues until a stopping condition is met — typically when the agent has gathered enough information to respond to the user.
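In pseudocode, the loop can be sketched roughly like this (a schematic illustration only, not LangGraph's actual implementation; agent_loop and tools_by_name are hypothetical names):

```python
# Schematic sketch of a ReAct-style agent loop (illustrative, not LangGraph's code).
def agent_loop(model, tools_by_name, messages):
    while True:
        ai_message = model.invoke(messages)   # LLM selects a tool or answers directly
        messages.append(ai_message)
        if not ai_message.tool_calls:         # stopping condition: no tool requested,
            return messages                   # the last message is the final answer
        for call in ai_message.tool_calls:    # run each requested tool...
            observation = tools_by_name[call["name"]].invoke(call["args"])
            messages.append(                  # ...and feed the observation back in
                {"role": "tool", "content": str(observation), "tool_call_id": call["id"]}
            )
```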
Basic configuration¶
Use create_react_agent to instantiate an agent:
API Reference: create_react_agent
```python
from langgraph.prebuilt import create_react_agent

def get_weather(city: str) -> str:  # (1)!
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",  # (2)!
    tools=[get_weather],  # (3)!
    prompt="You are a helpful assistant"  # (4)!
)

# Run the agent
agent.invoke({"messages": "what is the weather in sf"})
```
- Define a tool for the agent to use. Tools can be defined as vanilla Python functions. For more advanced tool usage and customization, check the tools page.
- Provide a language model for the agent to use. To learn more about configuring language models for agents, check the models page.
- Provide a list of tools for the model to use.
- Provide a system prompt (instructions) to the language model used by the agent.
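Invoking the agent returns the agent state, including the full message list. One way to inspect the final answer (a small illustrative snippet; result is a hypothetical variable name):

```python
result = agent.invoke({"messages": "what is the weather in sf"})

# The returned state holds the full message history;
# the last message is the model's final answer.
print(result["messages"][-1].content)
```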
LLM configuration¶
Use init_chat_model to configure an LLM with specific parameters, such as temperature:
API Reference: init_chat_model | create_react_agent
```python
from langchain.chat_models import init_chat_model
from langgraph.prebuilt import create_react_agent

model = init_chat_model(
    "anthropic:claude-3-7-sonnet-latest",
    temperature=0
)

agent = create_react_agent(
    model=model,
    tools=[get_weather],
)
```
See the models page for more information on how to configure LLMs.
Custom prompts¶
Prompts instruct the LLM how to behave. They can be:
- Static: a string interpreted as a system message
- Dynamic: a list of messages generated at runtime based on input or configuration
Static prompts¶
Define a fixed prompt string or list of messages.
API Reference: create_react_agent
```python
from langgraph.prebuilt import create_react_agent

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[get_weather],
    # A static prompt that never changes
    prompt="Never answer questions about the weather."
)

agent.invoke(
    {"messages": "what is the weather in sf"},
)
```
Dynamic prompts¶
Define a function that returns a message list based on the agent's state and configuration:
API Reference: AnyMessage | RunnableConfig | create_react_agent
```python
from langchain_core.messages import AnyMessage
from langchain_core.runnables import RunnableConfig
from langgraph.prebuilt.chat_agent_executor import AgentState
from langgraph.prebuilt import create_react_agent

def prompt(state: AgentState, config: RunnableConfig) -> list[AnyMessage]:  # (1)!
    user_name = config.get("configurable", {}).get("user_name")
    system_msg = f"You are a helpful assistant. Address the user as {user_name}."
    return [{"role": "system", "content": system_msg}] + state["messages"]

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[get_weather],
    prompt=prompt
)

agent.invoke(
    {"messages": "what is the weather in sf"},
    config={"configurable": {"user_name": "John Smith"}}
)
```
- Dynamic prompts allow including non-message context when constructing an input to the LLM, such as:

  - Information passed at runtime, like a user_id or API credentials (using config).
  - Internal agent state updated during a multi-step reasoning process (using state).

  Dynamic prompts can be defined as functions that take state and config and return a list of messages to send to the LLM.
See the context page for more information.
Memory¶
To allow multi-turn conversations with an agent, you need to enable persistence by providing a checkpointer when creating the agent. At runtime, you also need to provide a config containing a thread_id, a unique identifier for the conversation (session):
API Reference: create_react_agent | InMemorySaver
```python
from langgraph.prebuilt import create_react_agent
from langgraph.checkpoint.memory import InMemorySaver

checkpointer = InMemorySaver()

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[get_weather],
    checkpointer=checkpointer  # (1)!
)

# Run the agent
config = {"configurable": {"thread_id": "1"}}
sf_response = agent.invoke(
    {"messages": "what is the weather in sf"},
    config  # (2)!
)
ny_response = agent.invoke(
    {"messages": "what about new york?"},
    config
)
```
- checkpointer allows the agent to store its state at every step in the tool-calling loop. This enables short-term memory and human-in-the-loop capabilities.
- Pass the configuration with thread_id to be able to resume the same conversation on future agent invocations.
When you enable the checkpointer, it stores the agent state at every step in the provided checkpointer database (or in memory, if using InMemorySaver).

Note that in the above example, when the agent is invoked a second time with the same thread_id, the original message history from the first conversation is automatically included, together with the new user input.
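You can verify this by printing the accumulated history from the second response (a minimal sketch; the exact number and types of messages depend on the model's tool calls):

```python
# ny_response contains the whole thread: both user questions,
# the intermediate tool calls and results, and both answers.
for message in ny_response["messages"]:
    print(f"{message.type}: {message.content}")
```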
Please see the memory guide for more details on how to work with memory.
Structured output¶
To produce structured responses conforming to a schema, use the response_format parameter. The schema can be defined with a Pydantic model or a TypedDict. The result will be accessible via the structured_response field.
API Reference: create_react_agent
```python
from pydantic import BaseModel
from langgraph.prebuilt import create_react_agent

class WeatherResponse(BaseModel):
    conditions: str

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[get_weather],
    response_format=WeatherResponse  # (1)!
)

response = agent.invoke({"messages": "what is the weather in sf"})

response["structured_response"]
```
- When response_format is provided, a separate step is added at the end of the agent loop: the agent message history is passed to an LLM with structured output to generate a structured response. To provide a system prompt to this LLM, use a tuple (prompt, schema), e.g., response_format=(prompt, WeatherResponse).
LLM post-processing
Structured output requires an additional call to the LLM to format the response according to the schema.
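For example, the tuple form mentioned above could look like this (an illustrative sketch; the prompt string is made up):

```python
agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[get_weather],
    # First element: system prompt for the structured-output step;
    # second element: the schema the response must conform to.
    response_format=(
        "Summarize the weather conditions in one short sentence.",  # illustrative prompt
        WeatherResponse
    ),
)
```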