Type alias CreateSupervisorParams<AnnotationRootT, StructuredResponseFormat>

CreateSupervisorParams<AnnotationRootT, StructuredResponseFormat>: {
    addHandoffBackMessages?: boolean;
    agents: (CompiledStateGraph<AnnotationRootT["State"], AnnotationRootT["Update"], string, AnnotationRootT["spec"], AnnotationRootT["spec"]> | RemoteGraph)[];
    contextSchema?: AnnotationRootT;
    includeAgentName?: AgentNameMode;
    llm: LanguageModelLike;
    outputMode?: OutputMode;
    postModelHook?: CreateReactAgentParams<AnnotationRootT, StructuredResponseFormat>["postModelHook"];
    preModelHook?: CreateReactAgentParams<AnnotationRootT, StructuredResponseFormat>["preModelHook"];
    prompt?: CreateReactAgentParams["prompt"];
    responseFormat?: InteropZodType<StructuredResponseFormat> | {
        prompt: string;
        schema: InteropZodType<StructuredResponseFormat> | Record<string, unknown>;
    } | Record<string, unknown>;
    stateSchema?: AnnotationRootT;
    supervisorName?: string;
    tools?: (StructuredToolInterface | RunnableToolLike | DynamicTool)[];
}

Type Parameters

  • AnnotationRootT extends AnnotationRoot<any>
  • StructuredResponseFormat extends Record<string, any> = Record<string, any>

Type declaration

  • Optional addHandoffBackMessages?: boolean

    Whether to add a pair of (AIMessage, ToolMessage) to the message history when returning control to the supervisor, to indicate that a handoff has occurred.

  • agents: (CompiledStateGraph<AnnotationRootT["State"], AnnotationRootT["Update"], string, AnnotationRootT["spec"], AnnotationRootT["spec"]> | RemoteGraph)[]

    List of agents to manage

  • Optional contextSchema?: AnnotationRootT

    Context schema to use for the supervisor graph

  • Optional includeAgentName?: AgentNameMode

    Use to specify how to expose the agent name to the underlying supervisor LLM.

    • undefined: Relies on the LLM provider using the name attribute on the AI message. Currently, only OpenAI supports this.
    • "inline": Add the agent name directly into the content field of the AI message using XML-style tags. Example: "How can I help you" -> "agent_nameHow can I help you?"
  • llm: LanguageModelLike

    Language model to use for the supervisor

  • Optional outputMode?: OutputMode

    Mode for adding managed agents' outputs to the message history in the multi-agent workflow. Can be one of:

    • "full_history": add the entire agent message history
    • "last_message": add only the last message (default)
  • Optional postModelHook?: CreateReactAgentParams<AnnotationRootT, StructuredResponseFormat>["postModelHook"]

    An optional node to add after the LLM node in the supervisor agent (i.e., the node that calls the LLM). Useful for implementing human-in-the-loop, guardrails, validation, or other post-processing. Post-model hook must be a callable or a runnable that takes in current graph state and returns a state update.
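
    A minimal post-model hook sketch (a hypothetical guardrail; the content check and the replacement message are illustrative, not part of the library):

    import { AIMessage, BaseMessage } from "@langchain/core/messages";

    // Inspect the latest supervisor LLM output and append a corrective
    // message when it trips a simple content check.
    const postModelHook = (state: { messages: BaseMessage[] }) => {
      const last = state.messages[state.messages.length - 1];
      const text = typeof last?.content === "string" ? last.content : "";
      if (text.includes("FORBIDDEN")) {
        return {
          messages: [new AIMessage("Let me rephrase that without the restricted content.")],
        };
      }
      return {}; // no state update needed
    };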

  • Optional preModelHook?: CreateReactAgentParams<AnnotationRootT, StructuredResponseFormat>["preModelHook"]

    An optional node to add before the LLM node in the supervisor agent (i.e., the node that calls the LLM). Useful for managing long message histories (e.g., message trimming, summarization, etc.).

    Pre-model hook must be a callable or a runnable that takes in current graph state and returns a state update in the form of:

    {
      messages: [new RemoveMessage({ id: REMOVE_ALL_MESSAGES }), ...],
      llmInputMessages: [...],
      ...
    }

    Important: At least one of messages or llmInputMessages MUST be provided and will be used as an input to the agent node. The rest of the keys will be added to the graph state.

    Warning: If you are returning messages in the pre-model hook, you should OVERWRITE the messages key by doing the following:

    { messages: [new RemoveMessage({ id: REMOVE_ALL_MESSAGES }), ...newMessages], ... }
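
    A minimal pre-model hook sketch: pass only the most recent messages to the supervisor LLM via llmInputMessages, leaving the stored messages untouched (the default messages-based state shape is assumed here):

    import { BaseMessage } from "@langchain/core/messages";

    // Keep only the last 10 messages as LLM input; the full history stays in state.
    const preModelHook = (state: { messages: BaseMessage[] }) => {
      return { llmInputMessages: state.messages.slice(-10) };
    };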
    
  • Optional prompt?: CreateReactAgentParams["prompt"]

    An optional prompt for the supervisor. Can be one of:

    • string: This is converted to a SystemMessage and added to the beginning of the list of messages in state["messages"]
    • SystemMessage: this is added to the beginning of the list of messages in state["messages"]
    • Function: This function should take in full graph state and the output is then passed to the language model
    • Runnable: This runnable should take in full graph state and the output is then passed to the language model
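
    Two hedged sketches of the string and function forms (the prompt text is illustrative):

    import { SystemMessage, BaseMessage } from "@langchain/core/messages";

    // As a string: converted to a SystemMessage and prepended to state["messages"].
    const stringPrompt = "You are a supervisor coordinating a research agent and a math agent.";

    // As a function: receives the full graph state and returns the messages
    // that are passed to the supervisor LLM.
    const functionPrompt = (state: { messages: BaseMessage[] }) => [
      new SystemMessage("You are a supervisor. Route each request to the right agent."),
      ...state.messages,
    ];
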
  • Optional responseFormat?: InteropZodType<StructuredResponseFormat> | {
        prompt: string;
        schema: InteropZodType<StructuredResponseFormat> | Record<string, unknown>;
    } | Record<string, unknown>

    An optional schema for the final supervisor output.

    If provided, output will be formatted to match the given schema and returned in the 'structuredResponse' state key. If not provided, structuredResponse will not be present in the output state.

    Can be passed in as:

    • Zod schema
    • JSON schema
    • { prompt, schema }, where schema is one of the above. The prompt will be used together with the model that is being used to generate the structured response.

    Remarks

    Important: responseFormat requires the model to support .withStructuredOutput().

    Note: The graph will make a separate call to the LLM to generate the structured response after the agent loop is finished. This is not the only strategy to get structured responses, see more options in this guide.
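
    A hedged sketch using a Zod schema (the field names are illustrative); the structured result appears under the structuredResponse state key:

    import { z } from "zod";

    // Plain Zod schema.
    const responseFormat = z.object({
      summary: z.string().describe("One-paragraph summary of the final answer"),
      agentsUsed: z.array(z.string()).describe("Names of the agents that contributed"),
    });

    // Or with an extra instruction for the structured-output call.
    const responseFormatWithPrompt = {
      prompt: "Summarize the conversation for the end user.",
      schema: responseFormat,
    };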

  • Optional stateSchema?: AnnotationRootT

    State schema to use for the supervisor graph

  • Optional supervisorName?: string

    Name of the supervisor node

  • Optional tools?: (StructuredToolInterface | RunnableToolLike | DynamicTool)[]

    Tools to use for the supervisor
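
Example

A minimal end-to-end sketch (the tool, agent names, and prompts are illustrative; the package entry points are assumed from @langchain/langgraph-supervisor and @langchain/langgraph):

import { ChatOpenAI } from "@langchain/openai";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { createSupervisor } from "@langchain/langgraph-supervisor";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const llm = new ChatOpenAI({ model: "gpt-4o" });

// Hypothetical tool for the worker agent.
const add = tool(async ({ a, b }) => `${a + b}`, {
  name: "add",
  description: "Add two numbers.",
  schema: z.object({ a: z.number(), b: z.number() }),
});

const mathAgent = createReactAgent({
  llm,
  tools: [add],
  name: "math_agent",
  prompt: "You handle arithmetic questions.",
});

const workflow = createSupervisor({
  agents: [mathAgent],
  llm,
  prompt: "You are a supervisor. Delegate math questions to math_agent.",
  outputMode: "last_message",
  addHandoffBackMessages: true,
  supervisorName: "supervisor",
});

const app = workflow.compile();
const result = await app.invoke({
  messages: [{ role: "user", content: "What is 21 + 21?" }],
});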
