⚡ Building language agents as graphs ⚡
LangGraph.js is a library for building stateful, multi-actor applications with LLMs, used to create agent and multi-agent workflows. Compared to other LLM frameworks, it offers these core benefits: cycles, controllability, and persistence. LangGraph lets you define flows that involve cycles, essential for most agentic architectures, which differentiates it from DAG-based solutions. As a very low-level framework, it provides fine-grained control over both the flow and state of your application, which is crucial for creating reliable agents. Additionally, LangGraph includes built-in persistence, enabling advanced human-in-the-loop and memory features.
LangGraph is inspired by Pregel and Apache Beam. The public interface draws inspiration from NetworkX. LangGraph is built by LangChain Inc, the creators of LangChain, but can be used without LangChain.
```bash
npm install @langchain/langgraph
```
One of the central concepts of LangGraph is state. Each graph execution creates a state that is passed between nodes in the graph as they execute, and each node updates this internal state with its return value after it executes. The way that the graph updates its internal state is defined by either the type of graph chosen or a custom function.
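As a minimal sketch of that idea (separate from the agent example below, and using a hypothetical `count` channel for illustration): each channel's `value` function is a reducer that decides how a node's partial update is merged into the existing state.

```typescript
import { END, START, StateGraph, StateGraphArgs } from "@langchain/langgraph";

// Hypothetical single-channel state, for illustration only.
interface CounterState {
  count: number;
}

// `value` is the reducer: it merges a node's update into the existing
// state (here by addition, instead of simply replacing the old value).
const counterChannels: StateGraphArgs<CounterState>["channels"] = {
  count: {
    value: (existing: number, update: number) => existing + update,
    default: () => 0,
  },
};

const counter = new StateGraph<CounterState>({ channels: counterChannels })
  // The node returns a partial update ({ count: 1 }), not the whole state.
  .addNode("increment", async (_state: CounterState) => ({ count: 1 }))
  .addEdge(START, "increment")
  .addEdge("increment", END)
  .compile();

// The input (0) is merged into the default via the reducer, then the
// node's update is reduced in: 0 + 0 + 1 => { count: 1 }
console.log(await counter.invoke({ count: 0 }));
```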
Let's take a look at a simple example of an agent that can search the web using the Tavily Search API.
First install the required dependencies:
```bash
npm install @langchain/openai @langchain/community
```
Then set the required environment variables:
```bash
export OPENAI_API_KEY=sk-...
export TAVILY_API_KEY=tvly-...
```
Optionally, set up LangSmith for best-in-class observability:
```bash
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=ls__...
```
Now let's define our agent:
```typescript
import { AIMessage, BaseMessage, HumanMessage } from "@langchain/core/messages";
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import { ChatOpenAI } from "@langchain/openai";
import { END, START, StateGraph, StateGraphArgs } from "@langchain/langgraph";
import { MemorySaver } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";

// Define the state interface.
// We use BaseMessage[] (not HumanMessage[]) because the state will also
// hold AIMessage and tool messages as the graph runs.
interface AgentState {
  messages: BaseMessage[];
}

// Define the graph state
const graphState: StateGraphArgs<AgentState>["channels"] = {
  messages: {
    value: (x: BaseMessage[], y: BaseMessage[]) => x.concat(y),
    default: () => [],
  },
};

// Define the tools for the agent to use
const tools = [new TavilySearchResults({ maxResults: 1 })];
const toolNode = new ToolNode<AgentState>(tools);

const model = new ChatOpenAI({ temperature: 0 }).bindTools(tools);

// Define the function that determines whether to continue or not
function shouldContinue(state: AgentState): "tools" | typeof END {
  const messages = state.messages;
  const lastMessage = messages[messages.length - 1] as AIMessage;

  // If the LLM makes a tool call, then we route to the "tools" node
  if (lastMessage.tool_calls?.length) {
    return "tools";
  }
  // Otherwise, we stop (reply to the user)
  return END;
}

// Define the function that calls the model
async function callModel(state: AgentState) {
  const messages = state.messages;
  const response = await model.invoke(messages);

  // We return a list, because this will get added to the existing list
  return { messages: [response] };
}

// Define a new graph
const workflow = new StateGraph<AgentState>({ channels: graphState })
  .addNode("agent", callModel)
  .addNode("tools", toolNode)
  .addEdge(START, "agent")
  .addConditionalEdges("agent", shouldContinue)
  .addEdge("tools", "agent");

// Initialize memory to persist state between graph runs
const checkpointer = new MemorySaver();

// Finally, we compile it!
// This compiles it into a LangChain Runnable.
// Note that we're (optionally) passing the memory when compiling the graph
const app = workflow.compile({ checkpointer });

// Use the Runnable
const finalState = await app.invoke(
  { messages: [new HumanMessage("what is the weather in sf")] },
  { configurable: { thread_id: "42" } }
);

console.log(finalState.messages[finalState.messages.length - 1].content);
```
This will output:
```
The current weather in San Francisco is as follows:
- Temperature: 60.1°F (15.6°C)
- Condition: Partly cloudy
- Wind: 5.6 mph (9.0 kph) from SSW
- Humidity: 83%
- Visibility: 9.0 miles (16.0 km)
- UV Index: 4.0
For more details, you can visit [Weather API](https://www.weatherapi.com/).
```
Now when we pass the same `"thread_id"`, the conversation context is retained via the saved state (i.e. stored list of messages):
```typescript
const nextState = await app.invoke(
  { messages: [new HumanMessage("what about ny")] },
  { configurable: { thread_id: "42" } }
);

console.log(nextState.messages[nextState.messages.length - 1].content);
```
```
The current weather in New York is as follows:
- Temperature: 20.3°C (68.5°F)
- Condition: Overcast
- Wind: 2.2 mph from the north
- Humidity: 65%
- Cloud Cover: 100%
- UV Index: 5.0
For more details, you can visit [Weather API](https://www.weatherapi.com/).
```
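Because `MemorySaver` checkpoints state per thread, we can also inspect what was saved. Here's a small sketch that assumes the same `app` and `thread_id` as above, using the compiled graph's `getState` accessor:

```typescript
// Inspect the checkpointed state for thread "42" (assumes the `app` and
// MemorySaver checkpointer defined in the example above).
const snapshot = await app.getState({ configurable: { thread_id: "42" } });

// `values` holds the current AgentState: the accumulated message history.
console.log(snapshot.values.messages.length);
```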
Here's a step-by-step breakdown of what's happening:

1. Initialize the model and tools.
   - We use `ChatOpenAI` as our LLM. NOTE: we need to make sure the model knows that it has these tools available to call. We can do this by converting the LangChain tools into the format for OpenAI tool calling using the `.bindTools()` method.
2. Initialize the graph with state.
   - We initialize the graph (`StateGraph`) by passing the state interface (`AgentState`).
   - The `graphState` object defines how updates from each node should be merged into the graph's state.
3. Define the graph nodes.
   - There are two main nodes we need:
     - The `agent` node: responsible for deciding what (if any) actions to take.
     - The `tools` node that invokes tools: if the agent decides to take an action, this node will then execute that action.
4. Define the entry point and graph edges.
   - First, we need to set the entry point for graph execution - the `agent` node. We do this by creating an edge from the virtual `START` node to the `agent` node.
   - Then we define one normal and one conditional edge. A conditional edge means that the destination depends on the contents of the graph's state (`AgentState`). In our case, the destination is not known until the agent (LLM) decides.
     - Conditional edge: after the agent is called, we either run the `tools` node if the agent asked to take an action, or finish and respond to the user if it did not.
     - Normal edge: after the tools are invoked, the graph always returns to the `agent` node to decide what to do next.
5. Compile the graph.
   - Compiling turns the graph into a LangChain Runnable, which automatically enables calling `.invoke()`, `.stream()` and `.batch()` with your inputs (see the streaming sketch after this breakdown).
   - We can optionally pass a checkpointer to persist state between graph runs. In our case we use `MemorySaver` - a simple in-memory checkpointer.
6. Execute the graph.
   1. LangGraph adds the input message to the internal state, then passes the state to the entry point node, `"agent"`.
   2. The `"agent"` node executes, invoking the chat model.
   3. The chat model returns an `AIMessage`. LangGraph adds this to the state.
   4. The graph cycles through the following steps until there are no more `tool_calls` on the `AIMessage`:
      - If the `AIMessage` has `tool_calls`, the `"tools"` node executes.
      - The `"agent"` node executes again and returns an `AIMessage`.
   5. Execution progresses to the special `END` value and outputs the final state. As a result, we get a list of all our chat messages as output.
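Since the compiled graph is a Runnable, we can also stream intermediate results instead of waiting for the final state. Here's a minimal sketch assuming the `app` compiled above; the `streamMode: "values"` option (which emits the full state after each step) and the fresh `thread_id` are choices made for illustration:

```typescript
// Stream the full state after each step (sketch; assumes the `app`
// compiled above and a new thread so the conversation starts fresh).
const stream = await app.stream(
  { messages: [new HumanMessage("what is the weather in sf")] },
  { configurable: { thread_id: "43" }, streamMode: "values" }
);

for await (const state of stream) {
  // Each chunk is the full AgentState; log the most recent message.
  const latest = state.messages[state.messages.length - 1];
  console.log(`${latest._getType()}: ${latest.content}`);
}
```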