Cloud SaaS¶
Overview¶
LangGraph's Cloud SaaS is a managed service for deploying LangGraph APIs, regardless of how the API is defined or which dependencies it requires. The service offers managed implementations of checkpointers and stores, allowing you to focus on building the right cognitive architecture for your use case. By handling scalable and secure infrastructure for you, LangGraph Cloud offers the fastest path to getting your LangGraph API deployed to production.
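Because the platform manages the checkpointer and store, persistence is available to clients out of the box through threads. The following is a minimal, illustrative sketch that talks to a deployed API with the `langgraph_sdk` client; the deployment URL, API key, and the `"agent"` graph name are placeholders, not values from this page.

```python
import asyncio

from langgraph_sdk import get_client

# Placeholder values for an existing Cloud SaaS deployment.
client = get_client(
    url="https://your-deployment.langgraph.app",
    api_key="lsv2_...",
)

async def main() -> None:
    # Threads are persisted by the managed checkpointer; no database setup is needed.
    thread = await client.threads.create()

    # Stream a run against a graph registered as "agent" (placeholder name).
    async for chunk in client.runs.stream(
        thread["thread_id"],
        "agent",
        input={"messages": [{"role": "user", "content": "Hello"}]},
    ):
        print(chunk.event, chunk.data)

asyncio.run(main())
```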
Deployment¶
A deployment is an instance of a LangGraph API. A single deployment can have many revisions. When a deployment is created, all the necessary infrastructure (e.g. database, containers, secrets store) is automatically provisioned. See the architecture diagram below for more details.
See the how-to guide for creating a new deployment.
Resource Allocation¶
| Deployment Type | CPU | Memory | Scaling |
|---|---|---|---|
| Development | 1 CPU | 1 GB | Up to 1 container |
| Production | 2 CPU | 2 GB | Up to 10 containers |
Autoscaling¶
Production type deployments automatically scale up to 10 containers. Scaling is based on the current request load for a single container. Specifically, the autoscaling implementation scales the deployment so that each container is processing about 10 concurrent requests. For example...
- If the deployment is processing 20 concurrent requests, the deployment will scale up from 1 container to 2 containers (20 requests / 2 containers = 10 requests per container).
- If a deployment of 2 containers is processing 10 requests, the deployment will scale down from 2 containers to 1 container (10 requests / 1 container = 10 requests per container).
10 concurrent requests per container is the target threshold, not a hard limit: the number of concurrent requests per container can exceed 10 if there is a sudden burst of requests.
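As a rough illustration of the scaling rule described above (not the actual autoscaler implementation), the target container count can be thought of as the number of concurrent requests divided by 10, rounded up and clamped to the deployment's limits:

```python
import math

TARGET_REQUESTS_PER_CONTAINER = 10  # target threshold described above
MAX_CONTAINERS = 10                 # Production deployment limit

def desired_containers(concurrent_requests: int) -> int:
    """Illustrative only: containers needed to keep ~10 concurrent requests each."""
    needed = math.ceil(concurrent_requests / TARGET_REQUESTS_PER_CONTAINER)
    return max(1, min(needed, MAX_CONTAINERS))

print(desired_containers(20))  # 2 containers -> 10 requests per container
print(desired_containers(10))  # 1 container  -> 10 requests per container
```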
Scale-down actions are delayed by 30 minutes: when the autoscaling implementation decides to scale down a deployment, it first waits 30 minutes and then recomputes the concurrency metric. The deployment scales down only if the metric still meets the target threshold; otherwise, it remains scaled up. This "cool down" period ensures that deployments do not scale up and down too frequently.
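The cool-down behavior can be sketched as a simple gate: a scale-down proposal only takes effect if, after the 30-minute delay, the recomputed concurrency metric still calls for fewer containers. This is an illustrative sketch, not the platform's implementation:

```python
SCALE_DOWN_COOLDOWN_SECONDS = 30 * 60  # 30-minute delay before scaling down

def maybe_scale_down(current: int, proposed_at: float, now: float,
                     recomputed_desired: int) -> int:
    """Illustrative only: apply a scale-down proposal after the cool-down period."""
    if now - proposed_at < SCALE_DOWN_COOLDOWN_SECONDS:
        return current  # still cooling down; keep the deployment scaled up
    if recomputed_desired < current:
        return recomputed_desired  # load stayed low; scale down
    return current  # load came back; cancel the scale-down
```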
In the future, the autoscaling implementation may evolve to accommodate other metrics such as background run queue size.
Revision¶
A revision is an iteration of a deployment. When a new deployment is created, an initial revision is automatically created. To deploy new code changes or update environment variable configurations for a deployment, a new revision must be created. When a revision is created, a new container image is built automatically.
See the how-to guide for creating a new revision.
Asynchronous Deployment¶
Infrastructure for deployments and revisions is provisioned and deployed asynchronously; it is not deployed immediately after submission. Currently, deployment can take up to several minutes.
- When a new deployment is created, a new database is created for the deployment. Database creation is a one-time step. This step contributes to a longer deployment time for the initial revision of the deployment.
- When a subsequent revision is created for a deployment, there is no database creation step. The deployment time for a subsequent revision is significantly faster compared to the deployment time of the initial revision.
- The deployment process for each revision contains a build step, which can take up to a few minutes.
Database creation for Development type deployments takes longer than database creation for Production type deployments.
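Because provisioning happens in the background, it can be useful to wait for a deployment to come up before sending traffic. The sketch below polls the server's health endpoint until it responds; the deployment URL is a placeholder, and the `/ok` endpoint is assumed to be the LangGraph Server health check.

```python
import time

import httpx

DEPLOYMENT_URL = "https://your-deployment.langgraph.app"  # placeholder

def wait_until_ready(timeout_seconds: int = 15 * 60, poll_interval: int = 30) -> bool:
    """Poll the deployment's health endpoint until it responds or the timeout elapses."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        try:
            response = httpx.get(f"{DEPLOYMENT_URL}/ok", timeout=10)
            if response.status_code == 200:
                return True
        except httpx.HTTPError:
            pass  # server not reachable yet; keep polling
        time.sleep(poll_interval)
    return False

if wait_until_ready():
    print("Deployment is ready to serve requests.")
else:
    print("Deployment did not become ready in time.")
```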
Architecture¶
Subject to Change
The Cloud SaaS deployment architecture may change in the future.
A high-level diagram of a Cloud SaaS deployment.