{ "cells": [ { "cell_type": "markdown", "id": "562ddb82", "metadata": {}, "source": [ "# Streaming Tokens\n", "\n", "In this example, we will stream tokens from the language model powering an\n", "agent. We will use a ReAct agent as an example. The tl;dr is to use\n", "[streamEvents](https://js.langchain.com/v0.2/docs/how_to/chat_streaming/#stream-events)\n", "([API Ref](https://api.js.langchain.com/classes/langchain_core_runnables.Runnable.html#streamEvents)).\n", "\n", "
<div class=\"admonition info\">\n", "    <p class=\"admonition-title\">Note</p>\n", "    <p>
\n", "\n", " If you are using a version of `@langchain/core` < 0.2.3, when calling chat models or LLMs you need to call `await model.stream()` within your nodes to get token-by-token streaming events, and aggregate final outputs if needed to update the graph state. In later versions of `@langchain/core`, this occurs automatically, and you can call `await model.invoke()`.\n", "\n", " For more on how to upgrade `@langchain/core`, check out [the instructions here](https://js.langchain.com/v0.2/docs/how_to/installation/#installing-integration-packages).\n", "
\n", "\n", "Streaming Support
\n", "\n", " Token streaming is supported by many, but not all chat models. Check to see if your LLM integration supports token streaming here (doc). Note that some integrations may support _general_ token streaming but lack support for streaming tool calls.\n", "
\n", "Note
\n", "\n",
" In this how-to, we will create our agent from scratch to be transparent (but verbose). You can accomplish similar functionality using the createReactAgent(model, tools=tool)
(API doc) constructor. This may be more appropriate if you are used to LangChain's AgentExecutor class.\n",
"
<div class=\"admonition note\">\n", "    <p class=\"admonition-title\">Note</p>\n", "    <p>
\n", "\n", " These model requirements are not general requirements for using LangGraph - they are just requirements for this one example.\n", "
\n", "