Prompt Optimization API Reference¶
Functions:
- create_prompt_optimizer – Create a prompt optimizer that improves prompt effectiveness.
- create_multi_prompt_optimizer – Create a multi-prompt optimizer that improves prompt effectiveness.

create_prompt_optimizer¶
create_prompt_optimizer(
model: str | BaseChatModel,
/,
*,
kind: KINDS = "gradient",
config: Union[
GradientOptimizerConfig,
MetapromptOptimizerConfig,
None,
] = None,
) -> Runnable[OptimizerInput, str]
Create a prompt optimizer that improves prompt effectiveness.
This function creates an optimizer that can analyze and improve prompts for better performance with language models. It supports multiple optimization strategies to iteratively enhance prompt quality and effectiveness.
Parameters:
- model (Union[str, BaseChatModel]) – The language model to use for optimization. Can be a model name string or a BaseChatModel instance.
- kind (Literal["gradient", "prompt_memory", "metaprompt"], default: "gradient") – The optimization strategy to use. Each strategy offers different benefits:
  - gradient: Separates concerns between finding areas for improvement and recommending updates
  - prompt_memory: Simple single-shot metaprompt
  - metaprompt: Supports reflection, but each step is a single LLM call
- config (Optional[OptimizerConfig], default: None) – Configuration options for the optimizer. The type depends on the chosen strategy:
  - GradientOptimizerConfig for kind="gradient"
  - PromptMemoryConfig for kind="prompt_memory"
  - MetapromptOptimizerConfig for kind="metaprompt"
  Defaults to None.
Returns:
- optimizer (Runnable[OptimizerInput, str]) – A callable that takes conversation trajectories and/or prompts and returns optimized versions.
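The model argument accepts either a provider-prefixed model name string (as used in the examples below) or an already-instantiated chat model. A minimal sketch, assuming langchain-anthropic is installed and credentials are configured:
from langchain_anthropic import ChatAnthropic
from langmem import create_prompt_optimizer

# String form: provider-prefixed model name resolved by LangChain.
optimizer_from_name = create_prompt_optimizer("anthropic:claude-3-5-sonnet-latest")

# Instance form: any pre-configured BaseChatModel works the same way.
optimizer_from_model = create_prompt_optimizer(
    ChatAnthropic(model="claude-3-5-sonnet-latest"), kind="metaprompt"
)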
Optimization Strategies¶
1. Gradient Optimizer¶
sequenceDiagram
participant U as User
participant O as Optimizer
participant R as Reflection
participant U2 as Update
U->>O: Prompt + Feedback
loop For min_steps to max_steps
O->>R: Think/Critique Current State
R-->>O: Proposed Improvements
O->>U2: Apply Update
U2-->>O: Updated Prompt
end
O->>U: Final Optimized Prompt
The gradient optimizer uses reflection to propose improvements:
- Analyzes prompt and feedback through reflection cycles
- Proposes specific improvements
- Applies single-step updates
Configuration (GradientOptimizerConfig):
- gradient_prompt: Custom prompt for predicting "what to improve"
- metaprompt: Custom prompt for applying the improvements
- max_reflection_steps: Maximum reflection iterations (default: 3)
- min_reflection_steps: Minimum reflection iterations (default: 1)
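For instance, a gradient optimizer with customized reflection behavior could be configured as follows; this is a sketch, and the gradient_prompt and metaprompt strings are illustrative rather than the library defaults:
from langmem import create_prompt_optimizer

optimizer = create_prompt_optimizer(
    "anthropic:claude-3-5-sonnet-latest",
    kind="gradient",
    config={
        # Custom prompt for the "what to improve" reflection step (illustrative)
        "gradient_prompt": "Identify concrete weaknesses in the prompt given the feedback.",
        # Custom prompt for applying the proposed improvements (illustrative)
        "metaprompt": "Rewrite the prompt to address the identified weaknesses.",
        "max_reflection_steps": 3,
        "min_reflection_steps": 1,
    },
)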
2. Meta-Prompt Optimizer¶
sequenceDiagram
participant U as User
participant M as MetaOptimizer
participant A as Analysis
participant U2 as Update
U->>M: Prompt + Examples
M->>A: Analyze Examples
A-->>M: Proposed Update
M->>U2: Apply Update
U2-->>U: Enhanced Prompt
Uses meta-learning to directly propose updates:
- Analyzes examples to understand patterns
- Proposes direct prompt updates
- Applies updates in a single step
Configuration (MetapromptOptimizerConfig):
- metaprompt: Custom instructions on how to update the prompt
- max_reflection_steps: Maximum meta-learning steps (default: 3)
- min_reflection_steps: Minimum meta-learning steps (default: 1)
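A metaprompt optimizer with custom update instructions might be configured like this (a sketch; the metaprompt text is illustrative):
from langmem import create_prompt_optimizer

optimizer = create_prompt_optimizer(
    "anthropic:claude-3-5-sonnet-latest",
    kind="metaprompt",
    config={
        # Illustrative instructions for how the prompt should be rewritten
        "metaprompt": "Propose a single revised prompt that fixes the issues raised in the feedback.",
        "max_reflection_steps": 2,
        "min_reflection_steps": 1,
    },
)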
3. Prompt Memory Optimizer¶
sequenceDiagram
participant U as User
participant P as PromptMemory
participant M as Memory
U->>P: Prompt + History
P->>M: Extract Patterns
M-->>P: Success Patterns
P->>U: Updated Prompt
Learns from conversation history:
- Extracts successful patterns from past interactions
- Identifies improvement areas from feedback
- Applies learned patterns to new prompts
No additional configuration required.
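Since there is nothing to configure, creating a prompt-memory optimizer only requires the model and kind:
from langmem import create_prompt_optimizer

# Single-shot strategy; no config argument is needed.
optimizer = create_prompt_optimizer(
    "anthropic:claude-3-5-sonnet-latest",
    kind="prompt_memory",
)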
Examples
Basic prompt optimization:
from langmem import create_prompt_optimizer
optimizer = create_prompt_optimizer("anthropic:claude-3-5-sonnet-latest")
# Example conversation with feedback
conversation = [
{"role": "user", "content": "Tell me about the solar system"},
{"role": "assistant", "content": "The solar system consists of..."},
]
feedback = {"clarity": "needs more structure"}
# Use conversation history to improve the prompt
trajectories = [(conversation, feedback)]
better_prompt = await optimizer.ainvoke(
{"trajectories": trajectories, "prompt": "You are an astronomy expert"}
)
print(better_prompt)
# Output: 'Provide a comprehensive overview of the solar system...'
Optimizing with conversation feedback:
from langmem import create_prompt_optimizer
optimizer = create_prompt_optimizer(
"anthropic:claude-3-5-sonnet-latest", kind="prompt_memory"
)
# Conversation with feedback about what could be improved
conversation = [
{"role": "user", "content": "How do I write a bash script?"},
{"role": "assistant", "content": "Let me explain bash scripting..."},
]
feedback = "Response should include a code example"
# Use the conversation and feedback to improve the prompt
trajectories = [(conversation, {"feedback": feedback})]
better_prompt = await optimizer.ainvoke(
{"trajectories": trajectories, "prompt": "You are a coding assistant"}
)
print(better_prompt)
# Output: 'You are a coding assistant that always includes...'
Meta-prompt optimization for complex tasks:
from langmem import create_prompt_optimizer
optimizer = create_prompt_optimizer(
"anthropic:claude-3-5-sonnet-latest",
kind="metaprompt",
config={"max_reflection_steps": 3, "min_reflection_steps": 1},
)
# Complex conversation that needs better structure
conversation = [
{"role": "user", "content": "Explain quantum computing"},
{"role": "assistant", "content": "Quantum computing uses..."},
]
feedback = "Need better organization and concrete examples"
# Optimize with meta-learning
trajectories = [(conversation, feedback)]
improved_prompt = await optimizer.ainvoke(
{"trajectories": trajectories, "prompt": "You are a quantum computing expert"}
)
Performance Considerations
Each strategy has different LLM call patterns:
- prompt_memory: 1 LLM call total
  - Fastest as it only needs one pass
- metaprompt: 1-5 LLM calls (configurable)
  - Each step is one LLM call
  - Default range: min 2, max 5 reflection steps
- gradient: 2-10 LLM calls (configurable)
  - Each step requires 2 LLM calls (think + critique)
  - Default range: min 2, max 5 reflection steps
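To bound cost, you can cap the reflection loop through the config; a sketch using the gradient strategy (actual call counts still depend on the model and inputs):
from langmem import create_prompt_optimizer

# Limit the gradient strategy to at most two reflection iterations.
optimizer = create_prompt_optimizer(
    "anthropic:claude-3-5-sonnet-latest",
    kind="gradient",
    config={"min_reflection_steps": 1, "max_reflection_steps": 2},
)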
Strategy Selection
Choose based on your needs:
- Prompt Memory: Simplest prompting strategy
  - Limited ability to learn from complicated patterns
- Metaprompt: Balance of speed and improvement
  - Moderate cost (2-5 LLM calls)
- Gradient: Most thorough but expensive
  - Highest cost (4-10 LLM calls)
  - Uses separation of concerns to extract feedback from more conversational context
create_multi_prompt_optimizer¶
create_multi_prompt_optimizer(
model: str | BaseChatModel,
/,
*,
kind: Literal[
"gradient", "prompt_memory", "metaprompt"
] = "gradient",
config: Optional[dict] = None,
) -> Runnable[MultiPromptOptimizerInput, list[Prompt]]
Create a multi-prompt optimizer that improves prompt effectiveness.
This function creates an optimizer that can analyze and improve multiple prompts simultaneously using the same optimization strategy. Each prompt is optimized using the selected strategy (see create_prompt_optimizer for strategy details).
Parameters:
- model (Union[str, BaseChatModel]) – The language model to use for optimization. Can be a model name string or a BaseChatModel instance.
- kind (Literal["gradient", "prompt_memory", "metaprompt"], default: "gradient") – The optimization strategy to use. Each strategy offers different benefits:
  - gradient: Iteratively improves through reflection
  - prompt_memory: Uses successful past prompts
  - metaprompt: Learns optimal patterns via meta-learning
  Defaults to "gradient".
- config (Optional[OptimizerConfig], default: None) – Configuration options for the optimizer. The type depends on the chosen strategy:
  - GradientOptimizerConfig for kind="gradient"
  - PromptMemoryConfig for kind="prompt_memory"
  - MetapromptOptimizerConfig for kind="metaprompt"
  Defaults to None.
Returns:
- MultiPromptOptimizer (Runnable[MultiPromptOptimizerInput, list[Prompt]]) – A Runnable that takes conversation trajectories and prompts and returns optimized versions.
sequenceDiagram
participant U as User
participant M as Multi-prompt Optimizer
participant C as Credit Assigner
participant O as Single-prompt Optimizer
participant P as Prompts
U->>M: Annotated Trajectories + Prompts
activate M
Note over M: Using pre-initialized<br/>single-prompt optimizer
M->>C: Analyze trajectories
activate C
Note over C: Determine which prompts<br/>need improvement
C-->>M: Credit assignment results
deactivate C
loop For each prompt needing update
M->>O: Optimize prompt
activate O
O->>P: Apply optimization strategy
Note over O,P: Gradient/Memory/Meta<br/>optimization
P-->>O: Optimized prompt
O-->>M: Return result
deactivate O
end
M->>U: Return optimized prompts
deactivate M
The multi-prompt optimizer first assigns credit across the provided prompts to determine which ones need improvement, then applies the configured single-prompt optimization strategy to each of them.
Examples
Basic prompt optimization:
from langmem import create_multi_prompt_optimizer
optimizer = create_multi_prompt_optimizer("anthropic:claude-3-5-sonnet-latest")
# Example conversation with feedback
conversation = [
{"role": "user", "content": "Tell me about the solar system"},
{"role": "assistant", "content": "The solar system consists of..."},
]
feedback = {"clarity": "needs more structure"}
# Use conversation history to improve the prompts
trajectories = [(conversation, feedback)]
prompts = [
{"name": "research", "prompt": "Research the given topic thoroughly"},
{"name": "summarize", "prompt": "Summarize the research findings"},
]
better_prompts = await optimizer.ainvoke(
{"trajectories": trajectories, "prompts": prompts}
)
print(better_prompts)
Optimizing with conversation feedback:
from langmem import create_multi_prompt_optimizer
optimizer = create_multi_prompt_optimizer(
"anthropic:claude-3-5-sonnet-latest", kind="prompt_memory"
)
# Conversation with feedback about what could be improved
conversation = [
{"role": "user", "content": "How do I write a bash script?"},
{"role": "assistant", "content": "Let me explain bash scripting..."},
]
feedback = "Response should include a code example"
# Use the conversation and feedback to improve the prompts
trajectories = [(conversation, {"feedback": feedback})]
prompts = [
{"name": "explain", "prompt": "Explain the concept"},
{"name": "example", "prompt": "Provide a practical example"},
]
better_prompts = await optimizer.ainvoke(
{"trajectories": trajectories, "prompts": prompts}
)
Controlling the max number of reflection steps:
from langmem import create_multi_prompt_optimizer
optimizer = create_multi_prompt_optimizer(
"anthropic:claude-3-5-sonnet-latest",
kind="metaprompt",
config={"max_reflection_steps": 3, "min_reflection_steps": 1},
)
# Complex conversation that needs better structure
conversation = [
{"role": "user", "content": "Explain quantum computing"},
{"role": "assistant", "content": "Quantum computing uses..."},
]
# Explicit feedback is optional
feedback = None
# Optimize with meta-learning
trajectories = [(conversation, feedback)]
prompts = [
{"name": "concept", "prompt": "Explain quantum concepts"},
{"name": "application", "prompt": "Show practical applications"},
{"name": "example", "prompt": "Give concrete examples"},
]
improved_prompts = await optimizer.ainvoke(
{"trajectories": trajectories, "prompts": prompts}
)
Classes:
- Prompt – TypedDict for structured prompt management and optimization.
- OptimizerInput – Input for single-prompt optimization.
- MultiPromptOptimizerInput – Input for optimizing multiple prompts together, maintaining consistency.
- AnnotatedTrajectory – Conversation history (list of messages) with optional feedback for prompt optimization.
Prompt¶
Bases: TypedDict
TypedDict for structured prompt management and optimization.
Example
from langmem import Prompt
prompt = Prompt(
name="extract_entities",
prompt="Extract key entities from the text:",
update_instructions="Make minimal changes, only address where"
" errors have occurred after reasoning over why they occur.",
when_to_update="If there seem to be errors in recall of named entities.",
)
The name and prompt fields are required. Optional fields control optimization:
- update_instructions: Guidelines for modifying the prompt
- when_to_update: Dependencies between prompts during optimization
Used by the prompt optimizers.
OptimizerInput¶
Bases: TypedDict
Input for single-prompt optimization.
Example
{
"trajectories": [
AnnotatedTrajectory(
messages=[
{"role": "user", "content": "What's the weather like?"},
{
"role": "assistant",
"content": "I'm sorry, I can't tell you that",
},
],
feedback="Should have checked your search tool.",
),
],
"prompt": Prompt(
name="main_assistant",
prompt="You are a helpful assistant with a search tool.",
update_instructions="Make minimal changes, only address where "
"errors have occurred after reasoning over why they occur.",
when_to_update="Any time you notice the agent behaving in a way that doesn't help the user.",
),
}
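A dictionary of this shape is what the optimizer returned by create_prompt_optimizer accepts; a sketch, where optimizer_input stands for the dict above (a hypothetical variable name):
from langmem import create_prompt_optimizer

optimizer = create_prompt_optimizer("anthropic:claude-3-5-sonnet-latest")
# optimizer_input is the OptimizerInput dict shown above.
updated_prompt = await optimizer.ainvoke(optimizer_input)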
MultiPromptOptimizerInput¶
Bases: TypedDict
Input for optimizing multiple prompts together, maintaining consistency.
Example
{
"trajectories": [
AnnotatedTrajectory(
messages=[
{"role": "user", "content": "Tell me about this image"},
{
"role": "assistant",
"content": "I see a dog playing in a park",
},
{"role": "user", "content": "What breed is it?"},
{
"role": "assistant",
"content": "Sorry, I can't tell the breed",
},
],
feedback="Vision model wasn't used for breed detection",
),
],
"prompts": [
Prompt(
name="vision_extract",
prompt="Extract visual details from the image",
update_instructions="Focus on using vision model capabilities",
),
Prompt(
name="vision_classify",
prompt="Classify specific attributes in the image",
when_to_update="After vision_extract is updated",
),
],
}
AnnotatedTrajectory¶
Bases: NamedTuple
Conversation history (list of messages) with optional feedback for prompt optimization.
Example
from langmem.prompts.types import AnnotatedTrajectory
trajectory = AnnotatedTrajectory(
messages=[
{"role": "user", "content": "What pizza is good around here?"},
{"role": "assistant", "content": "Try LangPizzaβ’οΈ"},
{"role": "user", "content": "Stop advertising to me."},
{"role": "assistant", "content": "BUT YOU'LL LOVE IT!"},
],
feedback={
"developer_feedback": "too pushy",
"score": 0,
},
)
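Trajectories like this populate the trajectories field of OptimizerInput or MultiPromptOptimizerInput; a sketch reusing the trajectory above (the prompt string is illustrative):
from langmem import create_prompt_optimizer

optimizer = create_prompt_optimizer("anthropic:claude-3-5-sonnet-latest")
better_prompt = await optimizer.ainvoke(
    {
        "trajectories": [trajectory],  # the AnnotatedTrajectory defined above
        "prompt": "You are a local recommendations assistant.",
    }
)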