{ "cells": [ { "cell_type": "markdown", "id": "aad4e28d", "metadata": {}, "source": [ "# Persistence\n", "\n", "Many AI applications need memory to share context across multiple interactions.\n", "In LangGraph, memory is provided for any\n", "[StateGraph](https://langchain-ai.github.io/langgraphjs/reference/classes/index.StateGraph.html)\n", "through\n", "[Checkpointers](https://langchain-ai.github.io/langgraphjs/reference/interfaces/index.Checkpoint.html).\n", "\n", "When creating any LangGraph workflow, you can set it up to persist its state\n", "by doing the following:\n", "\n", "1. Create a\n", "   [Checkpointer](https://langchain-ai.github.io/langgraphjs/reference/classes/index.BaseCheckpointSaver.html),\n", "   such as the\n", "   [MemorySaver](https://langchain-ai.github.io/langgraphjs/reference/classes/index.MemorySaver.html)\n", "2. Call `compile({ checkpointer: myCheckpointer })` when compiling the graph.\n", "\n", "Example:\n", "\n", "```javascript\n", "import { MemorySaver } from \"@langchain/langgraph\";\n", "\n", "const workflow = new StateGraph({\n", "  channels: graphState,\n", "});\n", "\n", "// ... Add nodes and edges\n", "// Initialize any compatible CheckpointSaver\n", "const memory = new MemorySaver();\n", "const persistentGraph = workflow.compile({ checkpointer: memory });\n", "```\n", "\n", "This works for\n", "[StateGraph](https://langchain-ai.github.io/langgraphjs/reference/classes/index.StateGraph.html)\n", "and all its subclasses, such as\n", "[MessageGraph](https://langchain-ai.github.io/langgraphjs/reference/classes/index.MessageGraph.html).\n", "\n", "Below is an example.\n", "\n", "
<div class=\"admonition tip\">\n", "    <p class=\"admonition-title\">Note</p>\n", "    <p>\n", "    In this how-to, we will create our agent from scratch to be transparent (but verbose). You can accomplish similar functionality using the createReactAgent(model, tools=tool, checkpointer=checkpointer) (API doc) constructor. This may be more appropriate if you are used to LangChain's AgentExecutor class.\n", "    </p>\n", "</div>
\n", "\n", "## Setup\n", "\n", "This guide will use OpenAI's GPT-4o model. We will optionally set our API key\n", "for [LangSmith tracing](https://smith.langchain.com/), which will give us\n", "best-in-class observability." ] }, { "cell_type": "code", "execution_count": 1, "id": "10021b8c", "metadata": { "lines_to_next_cell": 2 }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Persistence: LangGraphJS\n" ] } ], "source": [ "// process.env.OPENAI_API_KEY = \"sk_...\";\n", "\n", "// Optional, add tracing in LangSmith\n", "// process.env.LANGCHAIN_API_KEY = \"ls__...\";\n", "process.env.LANGCHAIN_CALLBACKS_BACKGROUND = \"true\";\n", "process.env.LANGCHAIN_TRACING_V2 = \"true\";\n", "process.env.LANGCHAIN_PROJECT = \"Persistence: LangGraphJS\";" ] }, { "cell_type": "markdown", "id": "5b9e252c", "metadata": {}, "source": [ "## Define the state\n", "\n", "The state is the interface for all of the nodes in our graph.\n" ] }, { "cell_type": "code", "execution_count": 2, "id": "9fc47087", "metadata": { "lines_to_next_cell": 2 }, "outputs": [], "source": [ "import { BaseMessage } from \"@langchain/core/messages\";\n", "import { StateGraphArgs } from \"@langchain/langgraph\";\n", "\n", "interface IState {\n", "  messages: BaseMessage[];\n", "}\n", "\n", "// This defines the agent state\n", "const graphState: StateGraphArgs[\"channels\"] = {\n", "  messages: {\n", "    value: (x: BaseMessage[], y: BaseMessage[]) => x.concat(y),\n", "    default: () => [],\n", "  },\n", "};" ] }, { "cell_type": "markdown", "id": "8bdba79f", "metadata": {}, "source": [ "## Set up the tools\n", "\n", "We will first define the tools we want to use. For this simple example, we will\n", "create a placeholder search engine. However, it is easy to create\n", "your own tools - see the documentation\n", "[here](https://js.langchain.com/v0.2/docs/how_to/custom_tools) on how to do\n", "that."
] }, { "cell_type": "code", "execution_count": 3, "id": "5f1e5deb", "metadata": { "lines_to_next_cell": 2 }, "outputs": [], "source": [ "import { DynamicStructuredTool } from \"@langchain/core/tools\";\n", "import { z } from \"zod\";\n", "\n", "const searchTool = new DynamicStructuredTool({\n", " name: \"search\",\n", " description:\n", " \"Use to surf the web, fetch current information, check the weather, and retrieve other information.\",\n", " schema: z.object({\n", " query: z.string().describe(\"The query to use in your search.\"),\n", " }),\n", " func: async ({}: { query: string }) => {\n", " // This is a placeholder for the actual implementation\n", " return \"Cold, with a low of 13 ℃\";\n", " },\n", "});\n", "\n", "await searchTool.invoke({ query: \"What's the weather like?\" });\n", "\n", "const tools = [searchTool];" ] }, { "cell_type": "markdown", "id": "a5615fd8", "metadata": {}, "source": [ "We can now wrap these tools in a simple\n", "[ToolNode](https://langchain-ai.github.io/langgraphjs/reference/classes/prebuilt.ToolNode.html).\n", "This object will actually run the tools (functions) whenever they are invoked by\n", "our LLM." ] }, { "cell_type": "code", "execution_count": 4, "id": "1852d2a4", "metadata": { "lines_to_next_cell": 2 }, "outputs": [], "source": [ "import { ToolNode } from \"@langchain/langgraph/prebuilt\";\n", "\n", "const toolNode = new ToolNode<{ messages: BaseMessage[] }>(tools);" ] }, { "cell_type": "markdown", "id": "a593cc20", "metadata": {}, "source": [ "## Set up the model\n", "\n", "Now we will load the\n", "[chat model](https://js.langchain.com/v0.2/docs/concepts/#chat-models).\n", "\n", "1. It should work with messages. We will represent all agent state in the form\n", " of messages, so it needs to be able to work well with them.\n", "2. 
It should work with\n", " [tool calling](https://js.langchain.com/v0.2/docs/how_to/tool_calling/#passing-tools-to-llms),\n", " meaning it can return function arguments in its response.\n", "\n", "
<div class=\"admonition tip\">\n", "    <p class=\"admonition-title\">Note</p>\n", "    <p>\n", "    These model requirements are not general requirements for using LangGraph - they are just requirements for this one example.\n", "    </p>\n", "</div>\n
" ] }, { "cell_type": "code", "execution_count": 5, "id": "77c9701b", "metadata": { "lines_to_next_cell": 2 }, "outputs": [], "source": [ "import { ChatOpenAI } from \"@langchain/openai\";\n", "\n", "const model = new ChatOpenAI({ model: \"gpt-4o\" });" ] }, { "cell_type": "markdown", "id": "4177b143", "metadata": {}, "source": [ "After we've done this, we should make sure the model knows that it has these\n", "tools available to call. We can do this by calling\n", "[bindTools](https://v01.api.js.langchain.com/classes/langchain_core_language_models_chat_models.BaseChatModel.html#bindTools)." ] }, { "cell_type": "code", "execution_count": 6, "id": "b35d9bd2", "metadata": { "lines_to_next_cell": 2 }, "outputs": [], "source": [ "const boundModel = model.bindTools(tools);" ] }, { "cell_type": "markdown", "id": "bbb0ae12", "metadata": {}, "source": [ "## Define the graph\n", "\n", "We can now put it all together. We will run it first without a checkpointer:\n" ] }, { "cell_type": "code", "execution_count": 7, "id": "5f85457b", "metadata": {}, "outputs": [], "source": [ "import { END, START, StateGraph } from \"@langchain/langgraph\";\n", "import { AIMessage } from \"@langchain/core/messages\";\n", "import { RunnableConfig } from \"@langchain/core/runnables\";\n", "\n", "const routeMessage = (state: IState) => {\n", " const { messages } = state;\n", " const lastMessage = messages[messages.length - 1] as AIMessage;\n", " // If no tools are called, we can finish (respond to the user)\n", " if (!lastMessage.tool_calls?.length) {\n", " return END;\n", " }\n", " // Otherwise if there is, we continue and call the tools\n", " return \"tools\";\n", "};\n", "\n", "const callModel = async (\n", " state: IState,\n", " config?: RunnableConfig,\n", ") => {\n", " const { messages } = state;\n", " const response = await boundModel.invoke(messages, config);\n", " return { messages: [response] };\n", "};\n", "\n", "const workflow = new StateGraph({\n", " channels: graphState,\n", "})\n", 
" .addNode(\"agent\", callModel)\n", " .addNode(\"tools\", toolNode)\n", " .addEdge(START, \"agent\")\n", " .addConditionalEdges(\"agent\", routeMessage)\n", " .addEdge(\"tools\", \"agent\");\n", "\n", "const graph = workflow.compile();" ] }, { "cell_type": "code", "execution_count": 8, "id": "41364864", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[ \u001b[32m'user'\u001b[39m, \u001b[32m\"Hi I'm Yu, niced to meet you.\"\u001b[39m ]\n", "-----\n", "\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "Skipping write for channel branch:agent:routeMessage:undefined which has no readers\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Nice to meet you, Yu! How can I assist you today?\n", "-----\n", "\n" ] } ], "source": [ "let inputs = { messages: [[\"user\", \"Hi I'm Yu, niced to meet you.\"]] };\n", "for await (\n", " const { messages } of await graph.stream(inputs, {\n", " streamMode: \"values\",\n", " })\n", ") {\n", " let msg = messages[messages?.length - 1];\n", " if (msg?.content) {\n", " console.log(msg.content);\n", " } else if (msg?.tool_calls?.length > 0) {\n", " console.log(msg.tool_calls);\n", " } else {\n", " console.log(msg);\n", " }\n", " console.log(\"-----\\n\");\n", "}" ] }, { "cell_type": "code", "execution_count": 9, "id": "ccddfd4a", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[ \u001b[32m'user'\u001b[39m, \u001b[32m'Remember my name?'\u001b[39m ]\n", "-----\n", "\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "Skipping write for channel branch:agent:routeMessage:undefined which has no readers\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "I cannot remember personalized details, including names, from previous interactions. However, I'd be happy to help you with any inquiries you have! 
How can I assist you today?\n", "-----\n", "\n" ] } ], "source": [ "inputs = { messages: [[\"user\", \"Remember my name?\"]] };\n", "for await (\n", " const { messages } of await graph.stream(inputs, {\n", " streamMode: \"values\",\n", " })\n", ") {\n", " let msg = messages[messages?.length - 1];\n", " if (msg?.content) {\n", " console.log(msg.content);\n", " } else if (msg?.tool_calls?.length > 0) {\n", " console.log(msg.tool_calls);\n", " } else {\n", " console.log(msg);\n", " }\n", " console.log(\"-----\\n\");\n", "}" ] }, { "cell_type": "markdown", "id": "3bece060", "metadata": {}, "source": [ "## Add Memory\n", "\n", "Let's try it again with a checkpointer. We will use the\n", "[MemorySaver](https://langchain-ai.github.io/langgraphjs/reference/classes/index.MemorySaver.html),\n", "which will \"save\" checkpoints in-memory." ] }, { "cell_type": "code", "execution_count": 10, "id": "217ac741", "metadata": { "lines_to_next_cell": 2 }, "outputs": [], "source": [ "import { MemorySaver } from \"@langchain/langgraph\";\n", "\n", "// Here we only save in-memory\n", "const memory = new MemorySaver();\n", "const persistentGraph = workflow.compile({ checkpointer: memory });" ] }, { "cell_type": "code", "execution_count": 11, "id": "173c17f9", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[ \u001b[32m'user'\u001b[39m, \u001b[32m\"Hi I'm Jo, niced to meet you.\"\u001b[39m ]\n", "-----\n", "\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "Skipping write for channel branch:agent:routeMessage:undefined which has no readers\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Hi Jo, nice to meet you too! 
How can I assist you today?\n", "-----\n", "\n" ] } ], "source": [ "let config = { configurable: { thread_id: \"conversation-num-1\" } };\n", "inputs = { messages: [[\"user\", \"Hi I'm Jo, niced to meet you.\"]] };\n", "for await (\n", " const { messages } of await persistentGraph.stream(inputs, {\n", " ...config,\n", " streamMode: \"values\",\n", " })\n", ") {\n", " let msg = messages[messages?.length - 1];\n", " if (msg?.content) {\n", " console.log(msg.content);\n", " } else if (msg?.tool_calls?.length > 0) {\n", " console.log(msg.tool_calls);\n", " } else {\n", " console.log(msg);\n", " }\n", " console.log(\"-----\\n\");\n", "}" ] }, { "cell_type": "code", "execution_count": 12, "id": "1162eb84", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[ \u001b[32m'user'\u001b[39m, \u001b[32m'Remember my name?'\u001b[39m ]\n", "-----\n", "\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "Skipping write for channel branch:agent:routeMessage:undefined which has no readers\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Yes, your name is Jo. How can I assist you today?\n", "-----\n", "\n" ] } ], "source": [ "inputs = { messages: [[\"user\", \"Remember my name?\"]] };\n", "for await (\n", " const { messages } of await persistentGraph.stream(inputs, {\n", " ...config,\n", " streamMode: \"values\",\n", " })\n", ") {\n", " let msg = messages[messages?.length - 1];\n", " if (msg?.content) {\n", " console.log(msg.content);\n", " } else if (msg?.tool_calls?.length > 0) {\n", " console.log(msg.tool_calls);\n", " } else {\n", " console.log(msg);\n", " }\n", " console.log(\"-----\\n\");\n", "}" ] }, { "cell_type": "markdown", "id": "73902faf", "metadata": {}, "source": [ "## New Conversational Thread\n", "\n", "If we want to start a new conversation, we can pass in a different\n", "**`thread_id`**. Poof! 
All the memories are gone (just kidding, they'll always\n", "live in that other thread)!\n" ] }, { "cell_type": "code", "execution_count": 13, "id": "58cc0612", "metadata": { "lines_to_next_cell": 2 }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{ configurable: { thread_id: \u001b[32m'conversation-2'\u001b[39m } }\n" ] } ], "source": [ "config = { configurable: { thread_id: \"conversation-2\" } };" ] }, { "cell_type": "code", "execution_count": 14, "id": "25aea87b", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[ \u001b[32m'user'\u001b[39m, \u001b[32m'you forgot?'\u001b[39m ]\n", "-----\n", "\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "Skipping write for channel branch:agent:routeMessage:undefined which has no readers\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Could you please provide more context or clarify what you're referring to? Let me know how I can assist you further!\n", "-----\n", "\n" ] } ], "source": [ "inputs = { messages: [[\"user\", \"you forgot?\"]] };\n", "for await (\n", " const { messages } of await persistentGraph.stream(inputs, {\n", " ...config,\n", " streamMode: \"values\",\n", " })\n", ") {\n", " let msg = messages[messages?.length - 1];\n", " if (msg?.content) {\n", " console.log(msg.content);\n", " } else if (msg?.tool_calls?.length > 0) {\n", " console.log(msg.tool_calls);\n", " } else {\n", " console.log(msg);\n", " }\n", " console.log(\"-----\\n\");\n", "}" ] } ], "metadata": { "jupytext": { "encoding": "# -*- coding: utf-8 -*-" }, "kernelspec": { "display_name": "TypeScript", "language": "typescript", "name": "tslab" }, "language_info": { "codemirror_mode": { "mode": "typescript", "name": "javascript", "typescript": true }, "file_extension": ".ts", "mimetype": "text/typescript", "name": "typescript", "version": "3.7.2" } }, "nbformat": 4, "nbformat_minor": 5 }