LangGraph Supervisor¶
Functions:

Name | Description |
---|---|
`create_supervisor` | Create a multi-agent supervisor. |
create_supervisor¶
```python
create_supervisor(
    agents: list[Pregel],
    *,
    model: LanguageModelLike,
    tools: list[BaseTool | Callable] | None = None,
    prompt: Prompt | None = None,
    response_format: Optional[
        Union[
            StructuredResponseSchema,
            tuple[str, StructuredResponseSchema],
        ]
    ] = None,
    parallel_tool_calls: bool = False,
    state_schema: StateSchemaType = AgentState,
    config_schema: Type[Any] | None = None,
    output_mode: OutputMode = "last_message",
    add_handoff_messages: bool = True,
    handoff_tool_prefix: Optional[str] = None,
    add_handoff_back_messages: Optional[bool] = None,
    supervisor_name: str = "supervisor",
    include_agent_name: AgentNameMode | None = None
) -> StateGraph
```
Create a multi-agent supervisor.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`agents` | `list[Pregel]` | List of agents to manage. An agent can be a LangGraph CompiledStateGraph, a functional API workflow, or any other Pregel object. | required |
`model` | `LanguageModelLike` | Language model to use for the supervisor. | required |
`tools` | `list[BaseTool \| Callable] \| None` | Tools to use for the supervisor. | `None` |
`prompt` | `Prompt \| None` | Optional prompt to use for the supervisor. Can be a string (converted to a system message), a SystemMessage, or a callable or Runnable that takes the graph state and returns a list of messages. | `None` |
`response_format` | `Optional[Union[StructuredResponseSchema, tuple[str, StructuredResponseSchema]]]` | An optional schema for the final supervisor output. If provided, output will be formatted to match the given schema and returned in the `structured_response` state key. If not provided, `structured_response` will not be present in the output state. Can also be passed as a `(prompt, schema)` tuple, where the prompt is used together with the schema. Important: requires a model that supports `.with_structured_output`. Note: the structured response is produced by a separate LLM call after the supervisor loop finishes. | `None` |
`parallel_tool_calls` | `bool` | Whether to allow the supervisor LLM to call tools in parallel. Use this to control whether the supervisor can hand off to multiple agents at once; `False` (the default) disables parallel tool calls. Important: this is currently supported only by OpenAI and Anthropic models. To control parallel tool calling for other providers, add explicit instructions for tool use to the system prompt. | `False` |
`state_schema` | `StateSchemaType` | State schema to use for the supervisor graph. | `AgentState` |
`config_schema` | `Type[Any] \| None` | An optional schema for configuration. Use this to expose configurable parameters via the graph's config. | `None` |
`output_mode` | `OutputMode` | Mode for adding the managed agents' outputs to the message history in the multi-agent workflow. Can be one of: `"full_history"` (add the agent's entire message history) or `"last_message"` (add only the agent's last message). | `'last_message'` |
`add_handoff_messages` | `bool` | Whether to add a pair of (AIMessage, ToolMessage) to the message history when a handoff occurs. | `True` |
`handoff_tool_prefix` | `Optional[str]` | Optional prefix for the handoff tools (e.g., `"delegate_to_"` or `"transfer_to_"`). If provided, the handoff tools will be named `<handoff_tool_prefix><agent_name>`; otherwise they default to `transfer_to_<agent_name>`. | `None` |
`add_handoff_back_messages` | `Optional[bool]` | Whether to add a pair of (AIMessage, ToolMessage) to the message history when returning control to the supervisor, to indicate that a handoff has occurred. Defaults to the value of `add_handoff_messages`. | `None` |
`supervisor_name` | `str` | Name of the supervisor node. | `'supervisor'` |
`include_agent_name` | `AgentNameMode \| None` | Use to specify how to expose the agent name to the underlying supervisor LLM. Can be `None` (relies on the message `name` attribute) or `"inline"` (adds the agent name directly into the message content using XML-style tags). | `None` |
Example

```python
from langchain_openai import ChatOpenAI
from langgraph_supervisor import create_supervisor
from langgraph.prebuilt import create_react_agent

# Create specialized agents
def add(a: float, b: float) -> float:
    '''Add two numbers.'''
    return a + b

def web_search(query: str) -> str:
    '''Search the web for information.'''
    return 'Here are the headcounts for each of the FAANG companies in 2024...'

math_agent = create_react_agent(
    model="openai:gpt-4o",
    tools=[add],
    name="math_expert",
)

research_agent = create_react_agent(
    model="openai:gpt-4o",
    tools=[web_search],
    name="research_expert",
)

# Create supervisor workflow
workflow = create_supervisor(
    [research_agent, math_agent],
    model=ChatOpenAI(model="gpt-4o"),
)

# Compile and run
app = workflow.compile()
result = app.invoke({
    "messages": [
        {
            "role": "user",
            "content": "what's the combined headcount of the FAANG companies in 2024?"
        }
    ]
})
```
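The difference between the two `output_mode` values can be sketched in plain Python. This is illustrative only: `add_agent_output` is a hypothetical helper (not part of the library), and plain dicts stand in for message objects.

```python
from typing import Literal

OutputMode = Literal["full_history", "last_message"]

def add_agent_output(
    history: list[dict],
    agent_messages: list[dict],
    output_mode: OutputMode = "last_message",
) -> list[dict]:
    # "full_history": append the agent's entire message history.
    if output_mode == "full_history":
        return history + agent_messages
    # "last_message" (default): append only the agent's final message.
    return history + agent_messages[-1:]

# Usage
history = [{"role": "user", "content": "question"}]
worker = [
    {"role": "ai", "content": "intermediate reasoning..."},
    {"role": "ai", "content": "final answer"},
]
```

With `"last_message"` the supervisor's history grows by one message per handoff; with `"full_history"` it accumulates every intermediate worker message.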
Functions:

Name | Description |
---|---|
`create_handoff_tool` | Create a tool that can hand off control to the requested agent. |
`create_forward_message_tool` | Create a tool the supervisor can use to forward a worker message by name. |
create_handoff_tool¶
```python
create_handoff_tool(
    *,
    agent_name: str,
    name: str | None = None,
    description: str | None = None,
    add_handoff_messages: bool = True
) -> BaseTool
```
Create a tool that can hand off control to the requested agent.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`agent_name` | `str` | The name of the agent to hand off control to, i.e. the name of the agent node in the multi-agent graph. Agent names should be simple, clear and unique, preferably in snake_case, although you are limited only by the names accepted by LangGraph nodes and the tool names accepted by LLM providers (the tool name will look like this: `transfer_to_<agent_name>`). | required |
`name` | `str \| None` | Optional name of the tool to use for the handoff. If not provided, the tool name will be `transfer_to_<agent_name>`. | `None` |
`description` | `str \| None` | Optional description for the handoff tool. If not provided, the description will be `Ask <agent_name> for help`. | `None` |
`add_handoff_messages` | `bool` | Whether to add handoff messages to the message history. If `False`, the handoff messages will be omitted from the message history. | `True` |
create_forward_message_tool¶

```python
create_forward_message_tool(
    supervisor_name: str = "supervisor"
) -> BaseTool
```
Create a tool the supervisor can use to forward a worker message by name.
This helps avoid information loss whenever the supervisor would otherwise rewrite a worker's answer for the user, and can also save some tokens.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`supervisor_name` | `str` | The name of the supervisor node (used for namespacing the tool). | `'supervisor'` |
Returns:

Name | Type | Description |
---|---|---|
`BaseTool` | `BaseTool` | The `forward_message` tool. |
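What the tool accomplishes can be sketched in plain Python. `forward_message` here is a hypothetical helper (the real tool is a `BaseTool` the supervisor LLM invokes), with dicts standing in for messages.

```python
def forward_message(messages: list[dict], from_agent: str) -> str:
    # Return the most recent message content produced by the named worker,
    # verbatim, so the supervisor does not paraphrase (and lose) information.
    for msg in reversed(messages):
        if msg.get("name") == from_agent:
            return msg["content"]
    raise ValueError(f"No message from agent {from_agent!r}")

# Usage
history = [
    {"role": "user", "content": "headcount?"},
    {"role": "ai", "name": "research_expert", "content": "FAANG headcounts: ..."},
    {"role": "ai", "name": "supervisor", "content": "done"},
]
```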