# Models
This page describes how to configure the chat model used by an agent.
## Tool calling support
To enable tool-calling agents, the underlying LLM must support tool calling.
Compatible models can be found in the LangChain integrations directory.
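In practice, a chat model that supports tool calling exposes the `bind_tools` method. A quick, illustrative sketch (the model identifier and tool are hypothetical):

```python
from langchain.chat_models import init_chat_model

def get_weather(city: str) -> str:
    """Return the weather for a given city."""
    return f"It's always sunny in {city}!"

# A tool-calling-capable model accepts tools via bind_tools
model = init_chat_model("anthropic:claude-3-7-sonnet-latest")
model_with_tools = model.bind_tools([get_weather])
```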
## Specifying a model by name
You can configure an agent with a model name string:
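A minimal sketch, using `create_react_agent` as shown later on this page (the model identifier is illustrative):

```python
from langgraph.prebuilt import create_react_agent

agent = create_react_agent(
    # "<provider>:<model>" strings are resolved to a chat model for you
    model="anthropic:claude-3-7-sonnet-latest",
    # other parameters
)
```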
## Using `init_chat_model`
The `init_chat_model` utility simplifies model initialization with configurable parameters:
Azure OpenAI:

```python
import os

from langchain.chat_models import init_chat_model

os.environ["AZURE_OPENAI_API_KEY"] = "..."
os.environ["AZURE_OPENAI_ENDPOINT"] = "..."
os.environ["OPENAI_API_VERSION"] = "2025-03-01-preview"

model = init_chat_model(
    "azure_openai:gpt-4.1",
    azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
    temperature=0,
    # other parameters
)
```
Amazon Bedrock:

```python
from langchain.chat_models import init_chat_model

# Follow the steps here to configure your credentials:
# https://docs.aws.amazon.com/bedrock/latest/userguide/getting-started.html

model = init_chat_model(
    "anthropic.claude-3-5-sonnet-20240620-v1:0",
    model_provider="bedrock_converse",
    temperature=0,
    # other parameters
)
```
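Note the two ways of selecting the integration: the Azure example encodes the provider in the model string (`"azure_openai:gpt-4.1"`), while the Bedrock example passes `model_provider` separately, since the bare Bedrock model ID carries no prefix from which `init_chat_model` can infer the provider.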
Refer to the API reference for advanced options.
## Using provider-specific LLMs
If a model provider is not available via `init_chat_model`, you can instantiate the provider's model class directly. The model must implement the `BaseChatModel` interface and support tool calling:
API Reference: ChatAnthropic | create_react_agent
```python
from langchain_anthropic import ChatAnthropic
from langgraph.prebuilt import create_react_agent

model = ChatAnthropic(
    model="claude-3-7-sonnet-latest",
    temperature=0,
    max_tokens=2048,
)

agent = create_react_agent(
    model=model,
    # other parameters
)
```
> **Illustrative example:** The example above uses `ChatAnthropic`, which is already supported by `init_chat_model`. The pattern is shown to illustrate how to manually instantiate a model that is not available through `init_chat_model`.
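Once constructed, the agent can be invoked with a list of messages; a minimal usage sketch (the prompt is illustrative):

```python
result = agent.invoke(
    {"messages": [{"role": "user", "content": "What can you do?"}]}
)
print(result["messages"][-1].content)
```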
## Disable streaming
To disable streaming of the individual LLM tokens, set `disable_streaming=True` when initializing the model:
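For example (a minimal sketch; the model identifier is illustrative):

```python
from langchain.chat_models import init_chat_model

model = init_chat_model(
    "anthropic:claude-3-7-sonnet-latest",
    disable_streaming=True,
)
```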
Refer to the API reference for more information on `disable_streaming`.
## Adding model fallbacks
You can add a fallback to a different model or a different LLM provider using `model.with_fallbacks([...])`:
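A minimal sketch (the model identifiers are illustrative):

```python
from langchain.chat_models import init_chat_model

# Fall back to a different provider if the primary model errors
model_with_fallbacks = (
    init_chat_model("anthropic:claude-3-5-haiku-latest")
    .with_fallbacks([
        init_chat_model("openai:gpt-4.1-mini"),
    ])
)
```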
See this guide for more information on model fallbacks.