Prompt Chain Planner
Design multi-step prompt chains with variable flow validation and export as code or JSON config
⚠ "{{user_query}}" has no source (user input needed)
⚠ Output "{{search_terms}}" is not used by any later step
⚠ "{{context}}" has no source (user input needed)
⚠ "{{user_query}}" has no source (user input needed)
⚠ "{{context}}" has no source (user input needed)
[
  {
    "name": "Parse Query",
    "prompt": "Extract the key search terms from: {{user_query}}",
    "inputVars": ["user_query"],
    "outputVar": "search_terms"
  },
  {
    "name": "Generate Answer",
    "prompt": "Using this context: {{context}}\n\nAnswer: {{user_query}}",
    "inputVars": ["context", "user_query"],
    "outputVar": "answer"
  },
  {
    "name": "Add Citations",
    "prompt": "Add inline citations to this answer: {{answer}}\nSources: {{context}}",
    "inputVars": ["answer", "context"],
    "outputVar": "cited_answer"
  }
]
What is a Prompt Chain Planner?
A prompt chain planner is a visual editor for designing multi-step LLM pipelines. Instead of cramming everything into one massive prompt, you break complex tasks into a sequence of focused steps where each step's output feeds into the next step's input. This approach — known as prompt chaining — dramatically improves reliability, debuggability, and output quality.
Prompt chaining is how production AI systems actually work. RAG pipelines parse queries, retrieve context, and generate answers in separate steps. Code agents plan, generate, review, and fix in a loop. Customer support bots classify, route, respond, and summarize. Each step can use a different model optimized for that specific task.
Our planner lets you define steps with prompt templates, validates that variables flow correctly between steps, highlights missing connections, and exports the entire chain as JSON config, Python code, or TypeScript code. Share chains via URL — no account needed.
How to Use This Tool
Design your prompt chain in a few steps:
- Start with a preset chain (RAG Pipeline, Summarize & Translate, or Classify & Route) or add steps from scratch.
- Click Edit on any step to configure its prompt template, output variable name, and optional model preference.
- Use {{variables}} in prompt templates — the planner auto-detects input variables and validates their sources.
- Check the validation summary: green means all variables have sources; warnings indicate variables that need user input.
- Reorder steps using the arrow buttons. Variable connections update automatically as you reorganize.
- Export your chain as JSON Config (for programmatic use), Python skeleton, or TypeScript skeleton. Share via URL for collaboration.
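The {{variable}} auto-detection described above can be sketched with a short regex scan. This is a hypothetical re-implementation for illustration, not the planner's actual source:

```python
import re

VAR_PATTERN = re.compile(r"\{\{(\w+)\}\}")

def detect_input_vars(prompt_template: str) -> list[str]:
    """Return the unique {{variable}} names in order of first appearance."""
    seen: list[str] = []
    for name in VAR_PATTERN.findall(prompt_template):
        if name not in seen:
            seen.append(name)
    return seen

# Using the "Generate Answer" template from the config above:
detect_input_vars("Using this context: {{context}}\n\nAnswer: {{user_query}}")
# → ['context', 'user_query']
```

The planner then checks each detected name against the output variables of earlier steps to decide whether it is connected or needs user input.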
Why Prompt Chaining Works Better
Single monolithic prompts fail as tasks grow in complexity. Prompt chaining solves this by decomposing complex tasks into manageable steps:
Better Quality
Each step focuses on one task, producing more reliable output. A classifier that only needs to output a category label is far more reliable than a single prompt that must classify, retrieve context, generate a response, and format it — all at once.
Easier Debugging
When something goes wrong in a chain, you can inspect the output of each step independently. Was the classification wrong? Was the retrieved context irrelevant? Was the generation off? Single prompts give you one opaque output with no visibility into intermediate reasoning.
Model Optimization
Different steps have different requirements. Classification needs speed, not intelligence — use a cheap, fast model. Generation needs quality — use a capable model. Summarization is somewhere in between. Chaining lets you use the right model for each job, optimizing both cost and quality.
Conditional Logic
Chains can include conditional branches. If the classifier detects a billing issue, route to the billing handler; if it's technical, route to the tech support handler. This kind of routing logic is nearly impossible in a single prompt but natural in a chain.
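In code, this routing step is just a dispatch on the classifier's output. A minimal sketch, assuming hypothetical handler functions (these names are illustrative, not part of the exported code):

```python
# Placeholder handlers — in a real chain each would be its own LLM step.
def billing_handler(variables: dict) -> str:
    return f"Billing team will review: {variables['query']}"

def tech_support_handler(variables: dict) -> str:
    return f"Tech support triage: {variables['query']}"

def general_handler(variables: dict) -> str:
    return f"General response to: {variables['query']}"

def route(category: str, variables: dict) -> str:
    """Dispatch the classifier's label to a category-specific handler step."""
    handlers = {
        "billing": billing_handler,
        "technical": tech_support_handler,
    }
    return handlers.get(category, general_handler)(variables)
```

Because the classifier only has to emit a short label, the routing decision itself is cheap and easy to test in isolation.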
Common Chain Patterns
These proven patterns are included as presets:
- RAG Pipeline — Parse the user query into search terms, retrieve relevant context from a vector database, generate an answer grounded in the retrieved context, and add inline citations. The most common pattern for knowledge-base applications.
- Summarize & Translate — Extract key entities from raw text, generate a concise summary, then translate to the target language. Works well for multilingual content processing pipelines.
- Classify & Route — Classify the user's intent, route to the appropriate handler with relevant context, generate a response, and create a log summary. The standard pattern for customer support bots and help desk automation.
Frequently Asked Questions
What do the variable colors mean?
Green variables come from a previous step's output — they are automatically connected. Gray variables with '← user' indicate they need to be provided as initial user input since no previous step produces them. Red highlights (in validation) indicate broken connections or missing variable names.
Can I have circular dependencies?
The planner uses a strictly sequential flow — each step can only reference outputs from previous steps, not from itself or later steps. This prevents circular dependencies by design. If you need iterative loops (like a review-and-fix cycle), model them as separate linear steps.
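Sequential validation of this kind is straightforward to express: walk the steps in order, track which output variables exist so far, and warn on any input without a source. A sketch of the rules (matching the warnings shown at the top of the page, though this is not the planner's actual code):

```python
def validate_chain(steps: list[dict]) -> list[str]:
    """Warn on inputs with no earlier source and on outputs no later step consumes."""
    produced: set[str] = set()
    warnings: list[str] = []
    for step in steps:
        for var in step["inputVars"]:
            if var not in produced:
                warnings.append(f'"{{{{{var}}}}}" has no source (user input needed)')
        produced.add(step["outputVar"])
    consumed = {v for s in steps for v in s["inputVars"]}
    for step in steps[:-1]:  # the final step's output is the chain's result
        if step["outputVar"] not in consumed:
            warnings.append(f'Output "{{{{{step["outputVar"]}}}}}" is not used by any later step')
    return warnings
```

Run against the RAG Pipeline config above, this produces the five warnings shown: user_query and context need user input (once per unresolved occurrence), and search_terms is produced but never consumed.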
How do I implement the exported code?
The Python and TypeScript exports generate skeleton code with a callLLM function that you need to implement with your chosen provider's SDK. The chain logic (variable passing between steps) is fully implemented. Just add your API key, fill in the callLLM function, and run.
What's the difference between this and the Prompt Builder?
The Prompt Builder creates a single template with variables — one prompt, one LLM call. The Prompt Chain Planner designs multi-step workflows where each step is a separate LLM call and outputs flow between steps. Use the Builder for individual prompts, the Chain Planner for pipelines.
Can I estimate the cost of running this chain?
Yes — use our AI Agent Cost Calculator tool. Design your chain here, note the models and approximate token counts for each step, then enter them into the cost calculator. It will show you the per-run and monthly cost with overhead multipliers for retries and tool calls.
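As a back-of-envelope illustration of that calculation (token counts, per-million-token prices, and the overhead multiplier below are all placeholder assumptions, not real rates):

```python
# Hypothetical two-step chain: cheap model for parsing, capable model for generation.
steps = [
    {"name": "Parse Query",     "in_tokens": 200,  "out_tokens": 30,  "price_in": 0.15, "price_out": 0.60},
    {"name": "Generate Answer", "in_tokens": 3000, "out_tokens": 500, "price_in": 3.00, "price_out": 15.00},
]
OVERHEAD = 1.2  # assumed multiplier for retries and tool calls

# Per-run cost: sum each step's input and output token cost, then apply overhead.
per_run = sum(
    (s["in_tokens"] * s["price_in"] + s["out_tokens"] * s["price_out"]) / 1_000_000
    for s in steps
) * OVERHEAD

monthly = per_run * 10_000  # e.g. 10k runs per month
```

Even rough numbers like these make it obvious which step dominates cost, which is where swapping in a cheaper model pays off most.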
Related Tools
Explore more tools to build your AI pipeline:
- Prompt Template Builder — Build individual prompt templates with variables and code export
- AI Agent Cost Calculator — Calculate the operational cost of running your prompt chain
- Prompt Format Converter — Convert each step's prompt between OpenAI, Anthropic, and Google formats