Iterate on prompts¶
Overview¶
LangGraph Studio supports two methods for modifying prompts in your graph: direct node editing and the LangSmith Playground interface.
Direct Node Editing¶
Studio allows you to edit prompts used inside individual nodes, directly from the graph interface.
Prerequisites
Graph Configuration¶
Define your configuration to specify prompt fields and their associated nodes using the `langgraph_nodes` and `langgraph_type` keys.
Configuration Reference¶
langgraph_nodes¶
- Description: Specifies which nodes of the graph a configuration field is associated with.
- Value Type: Array of strings, where each string is the name of a node in your graph.
- Usage Context: Include in the `json_schema_extra` dictionary for Pydantic models or the `metadata["json_schema_extra"]` dictionary for dataclasses.
- Example:
langgraph_type¶
- Description: Specifies the type of configuration field, which determines how it's handled in the UI.
- Value Type: String
- Supported Values:
  - `"prompt"`: Indicates the field contains prompt text that should be treated specially in the UI.
- Usage Context: Include in the `json_schema_extra` dictionary for Pydantic models or the `metadata["json_schema_extra"]` dictionary for dataclasses.
- Example:
Example Configuration¶
## Using Pydantic

```python
from pydantic import BaseModel, Field
from typing import Annotated, Literal


class Configuration(BaseModel):
    """The configuration for the agent."""

    system_prompt: str = Field(
        default="You are a helpful AI assistant.",
        description="The system prompt to use for the agent's interactions. "
        "This prompt sets the context and behavior for the agent.",
        json_schema_extra={
            "langgraph_nodes": ["call_model"],
            "langgraph_type": "prompt",
        },
    )

    model: Annotated[
        Literal[
            "anthropic/claude-3-7-sonnet-latest",
            "anthropic/claude-3-5-haiku-latest",
            "openai/o1",
            "openai/gpt-4o-mini",
            "openai/o1-mini",
            "openai/o3-mini",
        ],
        {"__template_metadata__": {"kind": "llm"}},
    ] = Field(
        default="openai/gpt-4o-mini",
        description="The name of the language model to use for the agent's main interactions. "
        "Should be in the form: provider/model-name.",
        json_schema_extra={"langgraph_nodes": ["call_model"]},
    )
```
## Using Dataclasses

```python
from dataclasses import dataclass, field
from typing import Annotated


@dataclass(kw_only=True)
class Configuration:
    """The configuration for the agent."""

    system_prompt: str = field(
        default="You are a helpful AI assistant.",
        metadata={
            "description": "The system prompt to use for the agent's interactions. "
            "This prompt sets the context and behavior for the agent.",
            "json_schema_extra": {"langgraph_nodes": ["call_model"]},
        },
    )

    model: Annotated[str, {"__template_metadata__": {"kind": "llm"}}] = field(
        default="anthropic/claude-3-5-sonnet-20240620",
        metadata={
            "description": "The name of the language model to use for the agent's main interactions. "
            "Should be in the form: provider/model-name.",
            "json_schema_extra": {"langgraph_nodes": ["call_model"]},
        },
    )
```
Editing Prompts in the UI¶
- Locate the gear icon on nodes with associated configuration fields
- Click to open the configuration modal
- Edit the values
- Save to update the current assistant version or create a new one
LangSmith Playground¶
The LangSmith Playground interface allows testing individual LLM calls without running the full graph:
- Select a thread
- Click "View LLM Runs" on a node. This lists all the LLM calls (if any) made inside the node.
- Select an LLM run to open in Playground
- Modify prompts and test different model and tool settings
- Copy updated prompts back to your graph
For advanced Playground features, click the expand button in the top right corner.