Models

This page describes how to configure the chat model used by an agent.

Tool calling support

To enable tool-calling agents, the underlying LLM must support tool calling.

Compatible models can be found in the LangChain integrations directory.
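
For reference, a minimal tool-calling agent looks like the sketch below; the get_weather tool is a hypothetical placeholder, and any tool-calling-capable model can be substituted for the one shown.

from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

@tool
def get_weather(city: str) -> str:
    """Return a short weather report for a city."""
    return f"It is always sunny in {city}."

agent = create_react_agent(
    model="openai:gpt-4.1",  # any chat model that supports tool calling
    tools=[get_weather],
)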

Specifying a model by name

You can configure an agent with a model name string:

OpenAI

import os
from langgraph.prebuilt import create_react_agent

os.environ["OPENAI_API_KEY"] = "sk-..."

agent = create_react_agent(
    model="openai:gpt-4.1",
    # other parameters
)

Anthropic

import os
from langgraph.prebuilt import create_react_agent

os.environ["ANTHROPIC_API_KEY"] = "sk-..."

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    # other parameters
)

Azure OpenAI

import os
from langgraph.prebuilt import create_react_agent

os.environ["AZURE_OPENAI_API_KEY"] = "..."
os.environ["AZURE_OPENAI_ENDPOINT"] = "..."
os.environ["OPENAI_API_VERSION"] = "2025-03-01-preview"

agent = create_react_agent(
    model="azure_openai:gpt-4.1",
    # other parameters
)

Google Gemini

import os
from langgraph.prebuilt import create_react_agent

os.environ["GOOGLE_API_KEY"] = "..."

agent = create_react_agent(
    model="google_genai:gemini-2.0-flash",
    # other parameters
)

AWS Bedrock

from langgraph.prebuilt import create_react_agent

# Follow the steps here to configure your credentials:
# https://docs.aws.amazon.com/bedrock/latest/userguide/getting-started.html

agent = create_react_agent(
    model="bedrock_converse:anthropic.claude-3-5-sonnet-20240620-v1:0",
    # other parameters
)
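
Once configured, each of the agents above is invoked the same way, with a list of messages; a minimal usage sketch (the prompt is illustrative):

response = agent.invoke(
    {"messages": [{"role": "user", "content": "Hello!"}]}
)
print(response["messages"][-1].content)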

Using init_chat_model

The init_chat_model utility simplifies model initialization with configurable parameters:

OpenAI

pip install -U "langchain[openai]"

import os
from langchain.chat_models import init_chat_model

os.environ["OPENAI_API_KEY"] = "sk-..."

model = init_chat_model(
    "openai:gpt-4.1",
    temperature=0,
    # other parameters
)

Anthropic

pip install -U "langchain[anthropic]"

import os
from langchain.chat_models import init_chat_model

os.environ["ANTHROPIC_API_KEY"] = "sk-..."

model = init_chat_model(
    "anthropic:claude-3-5-sonnet-latest",
    temperature=0,
    # other parameters
)

Azure OpenAI

pip install -U "langchain[openai]"

import os
from langchain.chat_models import init_chat_model

os.environ["AZURE_OPENAI_API_KEY"] = "..."
os.environ["AZURE_OPENAI_ENDPOINT"] = "..."
os.environ["OPENAI_API_VERSION"] = "2025-03-01-preview"

model = init_chat_model(
    "azure_openai:gpt-4.1",
    azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
    temperature=0,
    # other parameters
)

Google Gemini

pip install -U "langchain[google-genai]"

import os
from langchain.chat_models import init_chat_model

os.environ["GOOGLE_API_KEY"] = "..."

model = init_chat_model(
    "google_genai:gemini-2.0-flash",
    temperature=0,
    # other parameters
)

AWS Bedrock

pip install -U "langchain[aws]"

from langchain.chat_models import init_chat_model

# Follow the steps here to configure your credentials:
# https://docs.aws.amazon.com/bedrock/latest/userguide/getting-started.html

model = init_chat_model(
    "anthropic.claude-3-5-sonnet-20240620-v1:0",
    model_provider="bedrock_converse",
    temperature=0,
    # other parameters
)
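
The model returned by init_chat_model can then be passed straight to the agent constructor; a minimal sketch using the model created above:

from langgraph.prebuilt import create_react_agent

agent = create_react_agent(
    model=model,  # the chat model initialized with init_chat_model
    # other parameters
)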

Refer to the API reference for advanced options.

Using provider-specific LLMs

If a model provider is not available via init_chat_model, you can instantiate the provider's model class directly. The model must implement the BaseChatModel interface and support tool calling:

API Reference: ChatAnthropic | create_react_agent

from langchain_anthropic import ChatAnthropic
from langgraph.prebuilt import create_react_agent

model = ChatAnthropic(
    model="claude-3-7-sonnet-latest",
    temperature=0,
    max_tokens=2048
)

agent = create_react_agent(
    model=model,
    # other parameters
)

Illustrative example

The example above uses ChatAnthropic, which is already supported by init_chat_model. This pattern is shown to illustrate how to manually instantiate a model not available through init_chat_model.

Disable streaming

To disable streaming of the individual LLM tokens, set disable_streaming=True when initializing the model:

init_chat_model

from langchain.chat_models import init_chat_model

model = init_chat_model(
    "anthropic:claude-3-7-sonnet-latest",
    disable_streaming=True
)

ChatAnthropic

from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(
    model="claude-3-7-sonnet-latest",
    disable_streaming=True
)

Refer to the API reference for more information on disable_streaming.

Adding model fallbacks

You can add a fallback to a different model or a different LLM provider using model.with_fallbacks([...]):

init_chat_model

from langchain.chat_models import init_chat_model

model_with_fallbacks = (
    init_chat_model("anthropic:claude-3-5-haiku-latest")
    .with_fallbacks([
        init_chat_model("openai:gpt-4.1-mini"),
    ])
)

Model instances

from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI

model_with_fallbacks = (
    ChatAnthropic(model="claude-3-5-haiku-latest")
    .with_fallbacks([
        ChatOpenAI(model="gpt-4.1-mini"),
    ])
)
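
The resulting model with fallbacks can be invoked like any other chat model; a quick sanity-check sketch (the prompt is illustrative):

response = model_with_fallbacks.invoke("Hello!")
print(response.content)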

See this guide for more information on model fallbacks.

Additional resources