# Streaming
Streaming is key to building responsive applications. There are a few types of data you’ll want to stream:
- Agent progress — get updates after each node in the agent graph is executed.
- LLM tokens — stream tokens as they are generated by the language model.
- Custom updates — emit custom data from tools during execution (e.g., "Fetched 10/100 records").
You can stream more than one type of data at a time.
## Agent progress
To stream agent progress, use the `stream()` method with `streamMode: "updates"`. This emits an event after every agent step.
For example, if you have an agent that calls a tool once, you should see the following updates:
- LLM node: AI message with tool call requests
- Tool node: Tool message with execution result
- LLM node: Final AI response
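The shape of those update chunks can be sketched with a mock stream (illustrative only: the node names `agent` and `tools` and the message shapes below are simplified assumptions; real chunks come from `agent.stream()`):

```typescript
// Each "updates" chunk is keyed by the node that just ran.
type Update = Record<string, { messages: Array<{ type: string; content: string }> }>;

// Mock stand-in for agent.stream(..., { streamMode: "updates" }).
async function* mockUpdatesStream(): AsyncGenerator<Update> {
  // LLM node: AI message with a tool call request
  yield { agent: { messages: [{ type: "ai", content: "(tool call: getWeather)" }] } };
  // Tool node: tool message with the execution result
  yield { tools: { messages: [{ type: "tool", content: "It's always sunny in sf!" }] } };
  // LLM node: final AI response
  yield { agent: { messages: [{ type: "ai", content: "The weather in sf is sunny." }] } };
}

async function main() {
  const nodes: string[] = [];
  for await (const chunk of mockUpdatesStream()) {
    nodes.push(Object.keys(chunk)[0]); // which node produced this update
  }
  console.log(nodes.join(",")); // agent,tools,agent
}
main();
```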
```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { initChatModel } from "langchain/chat_models/universal";

const getWeather = tool(
  async (input: { city: string }) => `It's always sunny in ${input.city}!`,
  {
    name: "getWeather",
    description: "Get weather for a given city.",
    schema: z.object({ city: z.string().describe("The city to get the weather for") }),
  }
);

const llm = await initChatModel("anthropic:claude-3-7-sonnet-latest");

const agent = createReactAgent({ llm, tools: [getWeather] });

for await (const chunk of await agent.stream(
  { messages: [{ role: "user", content: "what is the weather in sf" }] },
  { streamMode: "updates" }
)) {
  console.log(chunk);
  console.log("\n");
}
```
## LLM tokens
To stream tokens as they are produced by the LLM, use `streamMode: "messages"`:
```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { initChatModel } from "langchain/chat_models/universal";

const getWeather = tool(
  async (input: { city: string }) => `It's always sunny in ${input.city}!`,
  {
    name: "getWeather",
    description: "Get weather for a given city.",
    schema: z.object({ city: z.string().describe("The city to get the weather for") }),
  }
);

const llm = await initChatModel("anthropic:claude-3-7-sonnet-latest");

const agent = createReactAgent({ llm, tools: [getWeather] });

for await (const [token, metadata] of await agent.stream(
  { messages: [{ role: "user", content: "what is the weather in sf" }] },
  { streamMode: "messages" }
)) {
  console.log("Token", token);
  console.log("Metadata", metadata);
  console.log("\n");
}
```
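In a chat UI you typically concatenate the streamed tokens into the visible reply. A minimal sketch with a mock stream (the `[token, metadata]` tuple mirrors the loop above; the exact token and metadata shapes here are simplified assumptions):

```typescript
// Mock stand-in for agent.stream(..., { streamMode: "messages" }).
// Token objects and metadata fields are simplified assumptions.
type TokenChunk = [{ content: string }, { langgraph_node: string }];

async function* mockTokenStream(): AsyncGenerator<TokenChunk> {
  yield [{ content: "It's " }, { langgraph_node: "agent" }];
  yield [{ content: "sunny " }, { langgraph_node: "agent" }];
  yield [{ content: "in sf." }, { langgraph_node: "agent" }];
}

async function collectReply(): Promise<string> {
  let reply = "";
  for await (const [token, metadata] of mockTokenStream()) {
    // Only append tokens emitted by the model node.
    if (metadata.langgraph_node === "agent") reply += token.content;
  }
  return reply;
}

collectReply().then((reply) => console.log(reply)); // It's sunny in sf.
```

Filtering on the metadata lets you ignore tokens from nodes you don't want to render.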
## Tool updates
To stream updates from tools as they are executed, use the `writer` function available via `config.writer`:
```typescript
import { z } from "zod";
import { tool } from "@langchain/core/tools";
import { LangGraphRunnableConfig } from "@langchain/langgraph";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { initChatModel } from "langchain/chat_models/universal";

const getWeather = tool(
  async (input: { city: string }, config: LangGraphRunnableConfig) => {
    // stream any arbitrary data
    config.writer?.(`Looking up data for city: ${input.city}`);
    return `It's always sunny in ${input.city}!`;
  },
  {
    name: "getWeather",
    description: "Get weather for a given city.",
    schema: z.object({
      city: z.string().describe("The city to get the weather for"),
    }),
  }
);

const llm = await initChatModel("anthropic:claude-3-7-sonnet-latest");

const agent = createReactAgent({ llm, tools: [getWeather] });

for await (const chunk of await agent.stream(
  { messages: [{ role: "user", content: "what is the weather in sf" }] },
  { streamMode: "custom" }
)) {
  console.log(chunk);
  console.log("\n");
}
```
## Stream multiple modes
You can specify multiple streaming modes by passing them as a list: `streamMode: ["updates", "messages", "custom"]`:
```typescript
import { z } from "zod";
import { tool } from "@langchain/core/tools";
import { LangGraphRunnableConfig } from "@langchain/langgraph";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { initChatModel } from "langchain/chat_models/universal";

const getWeather = tool(
  async (input: { city: string }, config: LangGraphRunnableConfig) => {
    // emit a custom update so the "custom" stream has data
    config.writer?.(`Looking up data for city: ${input.city}`);
    return `It's always sunny in ${input.city}!`;
  },
  {
    name: "getWeather",
    description: "Get weather for a given city.",
    schema: z.object({ city: z.string().describe("The city to get the weather for") }),
  }
);

const llm = await initChatModel("anthropic:claude-3-7-sonnet-latest");

const agent = createReactAgent({ llm, tools: [getWeather] });

for await (const [streamMode, chunk] of await agent.stream(
  { messages: [{ role: "user", content: "what is the weather in sf" }] },
  { streamMode: ["updates", "messages", "custom"] }
)) {
  console.log(streamMode, chunk);
  console.log("\n");
}
```
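With multiple modes, each chunk arrives as a `[streamMode, chunk]` tuple, so you can route each mode to a different handler. A sketch with a mock stream (the payload shapes are simplified assumptions; a real app would update UI state instead of collecting labels):

```typescript
// Mock stand-in for agent.stream(..., { streamMode: ["updates", "messages", "custom"] }).
type ModeChunk = [string, unknown];

async function* mockMultiModeStream(): AsyncGenerator<ModeChunk> {
  yield ["custom", "Looking up data for city: sf"];
  yield ["messages", [{ content: "It's " }, { langgraph_node: "agent" }]];
  yield ["updates", { agent: { messages: [] } }];
}

async function handleStream(): Promise<string[]> {
  const seen: string[] = [];
  for await (const [streamMode, chunk] of mockMultiModeStream()) {
    switch (streamMode) {
      case "updates":
        seen.push("update"); // full node output: refresh app state
        break;
      case "messages":
        seen.push("token"); // [token, metadata]: append to the visible reply
        break;
      case "custom":
        seen.push("progress"); // tool progress: show a status line
        break;
    }
  }
  return seen;
}

handleStream().then((seen) => console.log(seen.join(","))); // progress,token,update
```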