Type alias CreateReactAgentParams<A, StructuredResponseType>

CreateReactAgentParams<A, StructuredResponseType>: {
    checkpointSaver?: BaseCheckpointSaver;
    checkpointer?: BaseCheckpointSaver;
    includeAgentName?: "inline";
    interruptAfter?: N[] | All;
    interruptBefore?: N[] | All;
    llm: LanguageModelLike;
    messageModifier?: MessageModifier;
    name?: string;
    postModelHook?: RunnableLike<A["State"], A["Update"], LangGraphRunnableConfig>;
    preModelHook?: RunnableLike<A["State"] & PreHookAnnotation["State"], A["Update"] & PreHookAnnotation["Update"], LangGraphRunnableConfig>;
    prompt?: Prompt;
    responseFormat?: InteropZodType<StructuredResponseType> | StructuredResponseSchemaAndPrompt<StructuredResponseType> | Record<string, any>;
    stateModifier?: StateModifier;
    stateSchema?: A;
    store?: BaseStore;
    tools: ToolNode | (ServerTool | ClientTool)[];
}

Type Parameters

  • A: The state annotation describing the agent's graph state (referenced by stateSchema, preModelHook, and postModelHook).
  • StructuredResponseType: The shape of the structured output produced when responseFormat is provided.

Type declaration

  • Optional checkpointSaver?: BaseCheckpointSaver

    An optional checkpoint saver to persist the agent's state.

  • Optional checkpointer?: BaseCheckpointSaver

    An optional checkpoint saver to persist the agent's state. Alias of "checkpointSaver".

  • Optional includeAgentName?: "inline"

    Specifies how to expose the agent name to the underlying supervisor LLM.

    • undefined: Relies on the LLM provider's support for AIMessage#name. Currently, only OpenAI supports this.
    • "inline": Adds the agent name directly into the content field of the AIMessage using XML-style tags. Example: "How can I help you?" -> "<name>agent_name</name><content>How can I help you?</content>"
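The "inline" strategy can be sketched as a plain function (a simplified illustration; `formatInlineAgentName` is a hypothetical helper, not part of the library):

```typescript
// Sketch of the "inline" strategy: the agent name is embedded in the
// message content with XML-style tags, so any provider can see it even
// without native support for AIMessage#name.
function formatInlineAgentName(name: string, content: string): string {
  return `<name>${name}</name><content>${content}</content>`;
}

const formatted = formatInlineAgentName("agent_name", "How can I help you?");
// → "<name>agent_name</name><content>How can I help you?</content>"
```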
  • Optional interruptAfter?: N[] | All

    An optional list of node names to interrupt after running.

  • Optional interruptBefore?: N[] | All

    An optional list of node names to interrupt before running.

  • llm: LanguageModelLike

    A chat model that supports OpenAI-style tool calling.

  • Optional messageModifier?: MessageModifier

    Deprecated

    Use prompt instead.

  • Optional name?: string

    An optional name for the agent.

  • Optional postModelHook?: RunnableLike<A["State"], A["Update"], LangGraphRunnableConfig>

    An optional node to add after the agent node (i.e., the node that calls the LLM). Useful for implementing human-in-the-loop, guardrails, validation, or other post-processing.
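As an illustration, a post-model hook is just a function from state to a state update. The sketch below uses plain objects in place of the real LangChain message and state types, and assumes a hypothetical `guardrailTriggered` flag added to the state schema:

```typescript
// Simplified stand-ins for the real message and state types.
interface SimpleMessage { role: "ai" | "human" | "tool"; content: string; }
// Assumes the agent's state schema was extended with a hypothetical
// `guardrailTriggered` flag (not part of the default state).
interface AgentState { messages: SimpleMessage[]; guardrailTriggered?: boolean; }

// A post-model hook: inspect the model's latest output and return a
// state update. Here it flags outputs violating a made-up policy; a
// later node or an interrupt could route the run to a human reviewer.
function postModelHook(state: AgentState): Partial<AgentState> {
  const last = state.messages[state.messages.length - 1];
  if (last.role === "ai" && /SECRET/.test(last.content)) {
    return { guardrailTriggered: true };
  }
  return {}; // no update needed
}
```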

  • Optional preModelHook?: RunnableLike<A["State"] & PreHookAnnotation["State"], A["Update"] & PreHookAnnotation["Update"], LangGraphRunnableConfig>

    An optional node to add before the agent node (i.e., the node that calls the LLM). Useful for managing long message histories (e.g., message trimming, summarization, etc.).
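A common pre-model hook is history trimming. A minimal sketch, again with plain objects standing in for the real message types; it writes to the llmInputMessages channel contributed by PreHookAnnotation so that the stored messages remain untouched:

```typescript
interface SimpleMessage { role: "ai" | "human" | "tool" | "system"; content: string; }
interface PreHookState { messages: SimpleMessage[]; }

// A pre-model hook that trims long histories: only the last `keep`
// messages are sent to the LLM. Writing to `llmInputMessages` (the
// channel added by PreHookAnnotation) leaves `messages` in state intact.
function preModelHook(state: PreHookState, keep = 4): { llmInputMessages: SimpleMessage[] } {
  return { llmInputMessages: state.messages.slice(-keep) };
}
```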

  • Optional prompt?: Prompt

    An optional prompt for the LLM. It takes the full graph state BEFORE the LLM is called and prepares the input to the LLM.

    Can take a few different forms:

    • string: Converted to a SystemMessage and added to the beginning of the list of messages in state["messages"].
    • SystemMessage: Added to the beginning of the list of messages in state["messages"].
    • Function: Takes the full graph state; its output is then passed to the language model.
    • Runnable: Takes the full graph state; its output is then passed to the language model.

    Note: Prior to v0.2.46, the prompt was set via the stateModifier / messageModifier parameters. Both are now deprecated and will be removed in a future release.
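The string and function forms above can be modeled with a small sketch (a simplified illustration only; the real Prompt type also accepts SystemMessage and Runnable, and `applyPrompt` is a hypothetical name, not the library's implementation):

```typescript
interface SimpleMessage { role: "system" | "human" | "ai"; content: string; }
interface AgentState { messages: SimpleMessage[]; }
// Simplified: only the string and function forms are modeled here.
type PromptLike = string | ((state: AgentState) => SimpleMessage[]);

// A string becomes a system message prepended to state.messages, while
// a function receives the full state and returns the LLM input itself.
function applyPrompt(prompt: PromptLike, state: AgentState): SimpleMessage[] {
  if (typeof prompt === "string") {
    return [{ role: "system", content: prompt }, ...state.messages];
  }
  return prompt(state);
}
```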

  • Optional responseFormat?: InteropZodType<StructuredResponseType> | StructuredResponseSchemaAndPrompt<StructuredResponseType> | Record<string, any>

    An optional schema for the final agent output.

    If provided, output will be formatted to match the given schema and returned in the 'structuredResponse' state key. If not provided, structuredResponse will not be present in the output state.

    Can be passed in as:

    • Zod schema
    • JSON schema
    • { prompt, schema }, where schema is one of the above. The prompt is included in the separate model call that generates the structured response.

    Remarks

    Important: responseFormat requires the model to support .withStructuredOutput().

    Note: The graph will make a separate call to the LLM to generate the structured response after the agent loop is finished. This is not the only strategy to get structured responses, see more options in this guide.
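The accepted shapes can be sketched as follows (a simplified model for illustration; `normalizeResponseFormat` is a hypothetical helper, not the library's implementation, and Zod schemas are folded into the Record<string, any> stand-in):

```typescript
// Simplified stand-in for the accepted responseFormat shapes: a bare
// schema (Zod or JSON schema, modeled here as Record<string, any>) or a
// { prompt, schema } pair that also customizes the extra LLM call.
type ResponseFormat =
  | Record<string, any>
  | { prompt: string; schema: Record<string, any> };

// Normalize either shape into a (schema, prompt?) pair, mirroring the
// description above.
function normalizeResponseFormat(
  rf: ResponseFormat
): { schema: Record<string, any>; prompt?: string } {
  if ("prompt" in rf && "schema" in rf && typeof rf.prompt === "string") {
    return { schema: rf.schema as Record<string, any>, prompt: rf.prompt };
  }
  return { schema: rf };
}
```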

  • Optional stateModifier?: StateModifier

    Deprecated

    Use prompt instead.

  • Optional stateSchema?: A

    An optional state annotation describing the agent's graph state. If provided, it should include the default agent state keys (e.g. messages).

  • Optional store?: BaseStore

    An optional store for persisting data across multiple threads (e.g. long-term memory).
  • tools: ToolNode | (ServerTool | ClientTool)[]

    A list of tools or a ToolNode.