# LangGraph Server
LangGraph Server offers an API for creating and managing agent-based applications. It is built on the concept of assistants, which are agents configured for specific tasks, and includes built-in persistence and a task queue. This versatile API supports a wide range of agentic application use cases, from background processing to real-time interactions.
Use LangGraph Server to create and manage assistants, threads, runs, cron jobs, webhooks, and more.
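For example, a client can interact with a running server through the LangGraph Python SDK. The following is a minimal sketch, assuming a server running locally at http://localhost:2024 and a deployed graph named "agent"; the URL, graph name, and input schema are illustrative assumptions, not part of this document.

```python
# Minimal sketch: create a thread and stream a run against a deployed graph.
# Assumes a LangGraph Server at http://localhost:2024 exposing a graph named "agent".
import asyncio
from langgraph_sdk import get_client

async def main():
    client = get_client(url="http://localhost:2024")

    # Threads hold persisted state (e.g. conversation history) across runs.
    thread = await client.threads.create()

    # Stream the events of a run executed by the default assistant for "agent".
    async for chunk in client.runs.stream(
        thread["thread_id"],
        "agent",  # assistant ID or graph name
        input={"messages": [{"role": "user", "content": "Hello!"}]},
    ):
        print(chunk.event, chunk.data)

asyncio.run(main())
```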
API reference
For detailed information on the API endpoints and data models, see the LangGraph Platform API reference docs.
## Server versions
There are two versions of LangGraph Server:
- **Lite** is a limited version of LangGraph Server that you can run locally or in a self-hosted manner (up to 1 million nodes executed per year).
- **Enterprise** is the full version of LangGraph Server. To use the Enterprise version, you must acquire a license key that you will need to specify when running the Docker image. To acquire a license key, please email sales@langchain.dev.
Feature Differences:
| Feature | Lite | Enterprise |
| --- | --- | --- |
| Cron Jobs | ❌ | ✅ |
| Custom Authentication | ❌ | ✅ |
| Deployment options | Standalone container | Cloud SaaS, Self-Hosted Data Plane, Self-Hosted Control Plane, Standalone container |
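For instance, cron jobs (an Enterprise feature) let the server schedule runs on your behalf. Below is a hedged sketch using the Python SDK; the assistant ID, cron schedule, and input payload are placeholder assumptions.

```python
# Sketch: schedule a recurring run via the crons API (Enterprise feature).
# The assistant ID, schedule, and input payload are illustrative assumptions.
from langgraph_sdk import get_client

async def schedule_daily_run():
    client = get_client(url="http://localhost:2024")

    cron = await client.crons.create(
        "agent",               # assistant ID or graph name
        schedule="0 9 * * *",  # every day at 09:00
        input={"messages": [{"role": "user", "content": "Daily summary, please."}]},
    )
    return cron
```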
## Application structure
To deploy a LangGraph Server application, you need to specify the graph(s) you want to deploy, as well as any relevant configuration settings, such as dependencies and environment variables.
Read the application structure guide to learn how to structure your LangGraph application for deployment.
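For example, the deployment configuration points at one or more compiled graph objects exported from your code. Below is a minimal sketch of such a module; the module path, state schema, and node logic are assumptions made for illustration.

```python
# my_agent/agent.py (hypothetical module referenced by the deployment configuration)
from typing import TypedDict

from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    topic: str
    summary: str

def summarize(state: State) -> dict:
    # Placeholder node logic; a real graph would call a model or tool here.
    return {"summary": f"Summary of {state['topic']}"}

builder = StateGraph(State)
builder.add_node("summarize", summarize)
builder.add_edge(START, "summarize")
builder.add_edge("summarize", END)

# The compiled graph object that the deployment configuration references.
graph = builder.compile()
```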
## Parts of a deployment
When you deploy LangGraph Server, you are deploying one or more graphs, a database for persistence, and a task queue.
### Graphs
When you deploy a graph with LangGraph Server, you are deploying a "blueprint" for an Assistant.
An Assistant is a graph paired with specific configuration settings. You can create multiple assistants per graph, each with unique settings to accommodate different use cases that can be served by the same graph.
Upon deployment, LangGraph Server will automatically create a default assistant for each graph using the graph's default configuration settings.
Note
We often think of a graph as implementing an agent, but a graph does not necessarily need to implement an agent. For example, a graph could implement a simple chatbot that only supports back-and-forth conversation, without the ability to influence any application control flow. In reality, as applications get more complex, a graph will often implement a more complex flow that may use multiple agents working in tandem.
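For example, you might register two assistants over the same deployed graph with different configurations. The following is a hedged sketch using the Python SDK; the graph ID, configuration keys, and assistant names are assumptions made for illustration.

```python
# Sketch: create two assistants from the same deployed graph with different configs.
# Assumes a graph registered as "agent" whose nodes read a "model" value from config.
from langgraph_sdk import get_client

async def create_assistants():
    client = get_client(url="http://localhost:2024")

    fast = await client.assistants.create(
        graph_id="agent",
        config={"configurable": {"model": "small-model"}},
        name="fast-assistant",
    )
    thorough = await client.assistants.create(
        graph_id="agent",
        config={"configurable": {"model": "large-model"}},
        name="thorough-assistant",
    )
    return fast, thorough
```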
### Persistence and task queue
LangGraph Server leverages a database for persistence and a task queue.
Currently, Postgres is the only supported database for LangGraph Server, and Redis is the only supported task queue.
If you're deploying using LangGraph Platform, these components are managed for you. If you're deploying LangGraph Server on your own infrastructure, you'll need to set up and manage these components yourself.
Please review the deployment options guide for more information on how these components are set up and managed.
## Learn more
- The LangGraph application structure guide explains how to structure your LangGraph application for deployment.
- The LangGraph Platform API Reference provides detailed information on the API endpoints and data models.