How to pass runtime values to tools¶
This guide shows how to define tools that depend on dynamically defined variables. These values are provided by your program, not by the LLM.
Tools can access the config.configurable field for values like user IDs that are known when a graph is initially executed, as well as managed values from the store for persistence across threads.
However, it can be convenient to access intermediate runtime values which are not known ahead of time, but are progressively generated as a graph executes, such as the current graph state. This guide will cover two techniques for this: context variables and closures.
Setup¶
Install the following to run this guide:
npm install @langchain/langgraph @langchain/openai @langchain/core
Next, configure your environment to connect to your model provider.
export OPENAI_API_KEY=your-api-key
Optionally, set your API key for LangSmith tracing, which will give us best-in-class observability.
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_CALLBACKS_BACKGROUND="true"
export LANGCHAIN_API_KEY=your-api-key
Context variables¶
Context variables are a powerful feature that allows you to set values at one level of your application, then access them within any child runnables (such as tools) nested within.
They are convenient in that you don’t need to have a direct reference to the declared variable to access it from a child, just a string with the variable name.
Compatibility
This functionality was added in @langchain/core>=0.3.10. If you are using the LangSmith SDK separately in your project, we also recommend upgrading to langsmith>=0.1.65. For help upgrading, see this guide.
It also requires async_hooks support, which is available in many popular JavaScript environments (such as Node.js, Deno, and Cloudflare Workers), but not all of them (mainly web browsers).
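Before wiring this into a graph, here's a minimal, self-contained sketch of the pattern (the `myVariable` name and the `RunnableLambda` wrappers are purely illustrative): a value set in a parent runnable can be read by name from any nested child.

import { RunnableLambda } from "@langchain/core/runnables";
import { getContextVariable, setContextVariable } from "@langchain/core/context";

// A nested child runnable that looks the variable up by name only.
const child = RunnableLambda.from(() => getContextVariable("myVariable"));

// A parent runnable that sets the variable, then invokes the child.
const parent = RunnableLambda.from(async (input: string) => {
  setContextVariable("myVariable", input);
  return await child.invoke({});
});

console.log(await parent.invoke("hello")); // "hello"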
Let's define a tool that an LLM can use to update pet preferences for a user. The tool will retrieve the current state of the graph from the current context.
Define the agent state¶
For this example, the state we will track will just be a list of messages:
import { Annotation } from "@langchain/langgraph";
import { BaseMessage } from "@langchain/core/messages";

const StateAnnotation = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (x, y) => x.concat(y),
  }),
});
Now, declare a tool as shown below. The tool receives values in three different ways:

- It will receive a generated list of `pets` from the LLM in its `input`.
- It will pull a `userId` populated from the initial graph invocation.
- It will get the current state of the graph at runtime from a context variable.
It will then use LangGraph's cross-thread persistence to save preferences:
import { z } from "zod";
import { tool } from "@langchain/core/tools";
import { getContextVariable } from "@langchain/core/context";
import { LangGraphRunnableConfig } from "@langchain/langgraph";

const updateFavoritePets = tool(
  async (input, config: LangGraphRunnableConfig) => {
    // Some arguments are populated by the LLM; these are included in the schema below
    const { pets } = input;
    // Fetch a context variable named "currentState".
    // We must set this variable explicitly in each node that calls this tool.
    const currentState = getContextVariable("currentState");
    // Other information (such as a userId) is most easily provided via the config.
    // This is set when invoking or streaming the graph.
    const userId = config.configurable?.userId;
    // LangGraph's managed key-value store is also accessible from the config
    const store = config.store;
    await store.put([userId, "pets"], "names", pets);
    // Store the initial input message from the user as a note.
    // Using the same key will override previous values - you could
    // use something different if you wanted to store many interactions.
    await store.put([userId, "pets"], "context", currentState.messages[0].content);
    return "update_favorite_pets called.";
  },
  {
    // The LLM "sees" the following schema:
    name: "update_favorite_pets",
    description: "add to the list of favorite pets.",
    schema: z.object({
      pets: z.array(z.string()),
    }),
  }
);
If we look at the tool call schema, which is what is passed to the model for tool-calling, only `pets` is being passed:
import { zodToJsonSchema } from "zod-to-json-schema";
console.log(zodToJsonSchema(updateFavoritePets.schema));
{ type: 'object', properties: { pets: { type: 'array', items: [Object] } }, required: [ 'pets' ], additionalProperties: false, '$schema': 'http://json-schema.org/draft-07/schema#' }
Let's also declare another tool so that our agent can retrieve previously set preferences:
const getFavoritePets = tool(
  async (_, config: LangGraphRunnableConfig) => {
    const userId = config.configurable?.userId;
    // LangGraph's managed key-value store is also accessible via the config
    const store = config.store;
    const petNames = await store.get([userId, "pets"], "names");
    const context = await store.get([userId, "pets"], "context");
    return JSON.stringify({
      pets: petNames.value,
      context: context.value,
    });
  },
  {
    // The LLM "sees" the following schema:
    name: "get_favorite_pets",
    description: "retrieve the list of favorite pets for the given user.",
    schema: z.object({}),
  }
);
Define the nodes¶
We now need to define a few different nodes in our graph.
- The agent: responsible for deciding what (if any) actions to take.
- A function to invoke tools: if the agent decides to take an action, this node will then execute that action. It will also set the current state as a context variable.
We will also need to define some edges.
- After the agent is called, we should either invoke the tool node or finish.
- After the tool node has been invoked, it should always go back to the agent to decide what to do next.
import {
  END,
  START,
  StateGraph,
  MemorySaver,
  InMemoryStore,
} from "@langchain/langgraph";
import { AIMessage } from "@langchain/core/messages";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";
import { setContextVariable } from "@langchain/core/context";
const model = new ChatOpenAI({ model: "gpt-4o" });
const tools = [getFavoritePets, updateFavoritePets];
const routeMessage = (state: typeof StateAnnotation.State) => {
  const { messages } = state;
  const lastMessage = messages[messages.length - 1] as AIMessage;
  // If no tools are called, we can finish (respond to the user)
  if (!lastMessage?.tool_calls?.length) {
    return END;
  }
  // Otherwise, continue and call the tools
  return "tools";
};
const callModel = async (state: typeof StateAnnotation.State) => {
  const { messages } = state;
  const modelWithTools = model.bindTools(tools);
  const responseMessage = await modelWithTools.invoke([
    {
      role: "system",
      content: "You are a personal assistant. Store any preferences the user tells you about.",
    },
    ...messages,
  ]);
  return { messages: [responseMessage] };
};
const toolNodeWithGraphState = async (state: typeof StateAnnotation.State) => {
  // We set a context variable before invoking the tool node and running our tool.
  setContextVariable("currentState", state);
  const toolNodeWithConfig = new ToolNode(tools);
  return toolNodeWithConfig.invoke(state);
};
const workflow = new StateGraph(StateAnnotation)
  .addNode("agent", callModel)
  .addNode("tools", toolNodeWithGraphState)
  .addEdge(START, "agent")
  .addConditionalEdges("agent", routeMessage)
  .addEdge("tools", "agent");
const memory = new MemorySaver();
const store = new InMemoryStore();
const graph = workflow.compile({ checkpointer: memory, store: store });
import * as tslab from "tslab";
const graphViz = graph.getGraph();
const image = await graphViz.drawMermaidPng();
const arrayBuffer = await image.arrayBuffer();
await tslab.display.png(new Uint8Array(arrayBuffer));
Use it!¶
Let's use our graph now!
let inputs = { messages: [{ role: "user", content: "My favorite pet is a terrier. I saw a cute one on Twitter." }] };
let config = {
  configurable: {
    thread_id: "1",
    userId: "a-user",
  },
};
let stream = await graph.stream(inputs, config);

for await (const chunk of stream) {
  for (const [node, values] of Object.entries(chunk)) {
    console.log(`Output from node: ${node}`);
    console.log("---");
    console.log(values);
    console.log("\n====\n");
  }
}
Output from node: agent --- { messages: [ AIMessage { "id": "chatcmpl-AHcDfVrNHLi0DVBtW84UapOoeAP1t", "content": "", "additional_kwargs": { "tool_calls": [ { "id": "call_L3pw6ipwtBxdudekgCymgcBt", "type": "function", "function": "[Object]" } ] }, "response_metadata": { "tokenUsage": { "completionTokens": 19, "promptTokens": 102, "totalTokens": 121 }, "finish_reason": "tool_calls", "system_fingerprint": "fp_6b68a8204b" }, "tool_calls": [ { "name": "update_favorite_pets", "args": { "pets": "[Array]" }, "type": "tool_call", "id": "call_L3pw6ipwtBxdudekgCymgcBt" } ], "invalid_tool_calls": [], "usage_metadata": { "input_tokens": 102, "output_tokens": 19, "total_tokens": 121 } } ] } ==== Output from node: tools --- { messages: [ ToolMessage { "content": "update_favorite_pets called.", "name": "update_favorite_pets", "additional_kwargs": {}, "response_metadata": {}, "tool_call_id": "call_L3pw6ipwtBxdudekgCymgcBt" } ] } ==== Output from node: agent --- { messages: [ AIMessage { "id": "chatcmpl-AHcDfhVBJjGpk3Bdxw1tDQCZxqci5", "content": "I've added \"terrier\" to your list of favorite pets! If there's anything else you would like to share or update, feel free to let me know.", "additional_kwargs": {}, "response_metadata": { "tokenUsage": { "completionTokens": 33, "promptTokens": 139, "totalTokens": 172 }, "finish_reason": "stop", "system_fingerprint": "fp_6b68a8204b" }, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": { "input_tokens": 139, "output_tokens": 33, "total_tokens": 172 } } ] } ====
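Optionally, you can confirm what the tool wrote by reading from the store directly. This is just a quick sanity check, not part of the graph; the namespace and keys below match the ones used in `updateFavoritePets`, and the values shown in the comments assume the run above.

// Read back the values the tool stored for this user.
// Namespace is [userId, "pets"]; keys are "names" and "context".
const storedPets = await store.get(["a-user", "pets"], "names");
const storedContext = await store.get(["a-user", "pets"], "context");

console.log(storedPets?.value);    // e.g. [ "terrier" ]
console.log(storedContext?.value); // the original user message content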
Now verify it can properly fetch the stored preferences and cite where it got the information from:
inputs = { messages: [{ role: "user", content: "What're my favorite pets and what did I say when I told you about them?" }] };
config = {
  configurable: {
    thread_id: "2", // New thread ID, so the conversation history isn't present.
    userId: "a-user",
  },
};

stream = await graph.stream(inputs, { ...config });

for await (const chunk of stream) {
  for (const [node, values] of Object.entries(chunk)) {
    console.log(`Output from node: ${node}`);
    console.log("---");
    console.log(values);
    console.log("\n====\n");
  }
}
Output from node: agent --- { messages: [ AIMessage { "id": "chatcmpl-AHcDgeIcrobhGEwsuuH0yI4YoEKbo", "content": "", "additional_kwargs": { "tool_calls": [ { "id": "call_1vtxWaH6Xhg8uwWo1M2Y5gOg", "type": "function", "function": "[Object]" } ] }, "response_metadata": { "tokenUsage": { "completionTokens": 13, "promptTokens": 103, "totalTokens": 116 }, "finish_reason": "tool_calls", "system_fingerprint": "fp_6b68a8204b" }, "tool_calls": [ { "name": "get_favorite_pets", "args": {}, "type": "tool_call", "id": "call_1vtxWaH6Xhg8uwWo1M2Y5gOg" } ], "invalid_tool_calls": [], "usage_metadata": { "input_tokens": 103, "output_tokens": 13, "total_tokens": 116 } } ] } ==== Output from node: tools --- { messages: [ ToolMessage { "content": "{\"pets\":[\"terrier\"],\"context\":\"My favorite pet is a terrier. I saw a cute one on Twitter.\"}", "name": "get_favorite_pets", "additional_kwargs": {}, "response_metadata": {}, "tool_call_id": "call_1vtxWaH6Xhg8uwWo1M2Y5gOg" } ] } ==== Output from node: agent --- { messages: [ AIMessage { "id": "chatcmpl-AHcDhsL27h4nI441ZPRBs8FDPoo5a", "content": "Your favorite pet is a terrier. You mentioned this when you said, \"My favorite pet is a terrier. I saw a cute one on Twitter.\"", "additional_kwargs": {}, "response_metadata": { "tokenUsage": { "completionTokens": 33, "promptTokens": 153, "totalTokens": 186 }, "finish_reason": "stop", "system_fingerprint": "fp_6b68a8204b" }, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": { "input_tokens": 153, "output_tokens": 33, "total_tokens": 186 } } ] } ====
As you can see, the agent is able to properly cite that the information came from Twitter!

Closures¶

If you cannot use context variables in your environment, you can use closures instead. Define your tools inside a function that receives the current graph state, so that each tool closes over it:
function generateTools(state: typeof StateAnnotation.State) {
  const updateFavoritePets = tool(
    async (input, config: LangGraphRunnableConfig) => {
      // Some arguments are populated by the LLM; these are included in the schema below
      const { pets } = input;
      // Others (such as a userId) are best provided via the config.
      // This is set when invoking or streaming the graph.
      const userId = config.configurable?.userId;
      // LangGraph's managed key-value store is also accessible via the config
      const store = config.store;
      await store.put([userId, "pets"], "names", pets);
      // The tool closes over the graph state passed into this function
      await store.put([userId, "pets"], "context", { content: state.messages[0].content });
      return "update_favorite_pets called.";
    },
    {
      // The LLM "sees" the following schema:
      name: "update_favorite_pets",
      description: "add to the list of favorite pets.",
      schema: z.object({
        pets: z.array(z.string()),
      }),
    }
  );
  return [updateFavoritePets];
}
Then, when laying out your graph, you will need to call the above method whenever you bind or invoke tools. For example:
const toolNodeWithClosure = async (state: typeof StateAnnotation.State) => {
  // We fetch the tools any time this node is reached to
  // form a closure and let it access the latest messages
  const tools = generateTools(state);
  const toolNodeWithConfig = new ToolNode(tools);
  return toolNodeWithConfig.invoke(state);
};
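To cover the binding side as well, here is a minimal sketch of an agent node that regenerates the tools from state before binding them, mirroring the `callModel` node from earlier. The `callModelWithClosure` and `closureTools` names are purely illustrative.

const callModelWithClosure = async (state: typeof StateAnnotation.State) => {
  // Regenerate the tools from the current state so that binding and
  // execution both see tools created from the same closure.
  const closureTools = generateTools(state);
  const modelWithTools = model.bindTools(closureTools);
  const responseMessage = await modelWithTools.invoke([
    {
      role: "system",
      content: "You are a personal assistant. Store any preferences the user tells you about.",
    },
    ...state.messages,
  ]);
  return { messages: [responseMessage] };
};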