Prompt Format Converter
Convert prompts between OpenAI, Anthropic Claude, Google Gemini, and other AI provider formats
Paste a prompt in OpenAI, Anthropic, or Google Gemini format to convert it.
Auto-detects the source format. Handles message arrays, system prompts, and role mappings.
What is a Prompt Format Converter?
A prompt format converter transforms AI prompts between the different JSON structures required by major AI providers — OpenAI, Anthropic Claude, and Google Gemini. While all three providers use message-based chat interfaces, their API formats differ in how they structure messages, handle system prompts, and name roles.
Migrating between AI providers is increasingly common as teams evaluate different models for cost, quality, and feature trade-offs. Manually reformatting prompt templates is tedious and error-prone, especially for complex multi-turn conversations with system prompts. This converter handles the structural transformation automatically, highlighting exactly what changed.
The tool auto-detects your source format, converts to any target provider, and shows conversion notes explaining each transformation. All processing happens in your browser — no data is sent to any server.
How to Use This Converter
- Paste your prompt — Copy the JSON prompt from your codebase. The converter accepts full API request bodies, message arrays, or any valid JSON prompt structure.
- Check detected format — The tool auto-detects whether your input is OpenAI, Anthropic, or Google format and displays a badge.
- Select target format — Choose the provider you want to convert to from the dropdown menu.
- Review the output — The converted JSON appears with proper formatting. Check the conversion notes below the output to understand what changed.
- Copy and use — Click "Copy Output" to copy the formatted JSON ready for use in your target provider's API.
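The auto-detection step above can be sketched with a few key checks. This is an illustrative heuristic, not the tool's actual source: the function name `detect_format` is hypothetical, but the distinguishing keys (`contents`, `system_instruction`, top-level `system`) are the real ones from each provider's API.

```python
def detect_format(prompt: dict) -> str:
    """Guess the source provider from distinctive top-level keys."""
    if "contents" in prompt or "system_instruction" in prompt:
        return "google"      # Gemini: contents array, optional system_instruction
    if "system" in prompt:
        return "anthropic"   # a top-level system field is unique to Anthropic
    return "openai"          # default: messages array with an in-band system role
```

A bare message array with no provider wrapper would fall through to the OpenAI branch here, which matches the converter's behavior of accepting plain message arrays.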
Key Differences Between AI API Formats
System Prompt Placement
The most significant difference between providers is where the system prompt lives. OpenAI puts it in the messages array as a message with `role: "system"`. Anthropic requires a separate `system` field at the top level. Google uses `system_instruction` with a parts array. Misplacing the system prompt is one of the most common causes of migration errors.
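The three placements look like this for the same one-turn conversation, written as Python dicts (the model names are placeholders; adjust them for your account):

```python
system_text = "You are a helpful assistant."
user_text = "Hello!"

# OpenAI: the system prompt is just another message in the array.
openai_request = {
    "model": "gpt-4o",  # placeholder model name
    "messages": [
        {"role": "system", "content": system_text},
        {"role": "user", "content": user_text},
    ],
}

# Anthropic: the system prompt is a separate top-level field.
anthropic_request = {
    "model": "claude-3-5-sonnet-latest",  # placeholder model name
    "max_tokens": 1024,                   # required by the Messages API
    "system": system_text,
    "messages": [{"role": "user", "content": user_text}],
}

# Google: the system prompt becomes system_instruction with a parts array.
google_request = {
    "system_instruction": {"parts": [{"text": system_text}]},
    "contents": [{"role": "user", "parts": [{"text": user_text}]}],
}
```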
Message Structure
OpenAI and Anthropic use similar message structures with `role` and `content` fields. Google's Gemini API uses `contents` (plural) with a different inner structure — each message has `parts` (an array) instead of a simple content string.
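A minimal sketch of that per-message reshaping, assuming plain string content (multimodal content blocks would need additional handling; the helper names here are hypothetical):

```python
def to_gemini_content(msg: dict) -> dict:
    """Wrap a role/content message into Gemini's parts structure."""
    return {"role": msg["role"], "parts": [{"text": msg["content"]}]}

def from_gemini_content(content: dict) -> dict:
    """Flatten a Gemini content entry back to a role/content message."""
    text = "".join(part.get("text", "") for part in content["parts"])
    return {"role": content["role"], "content": text}
```

Note that these helpers leave role names untouched; renaming roles is a separate concern, covered under Role Naming.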
Role Naming
OpenAI and Anthropic both use "user" and "assistant" roles. Google uses "user" but replaces "assistant" with "model". This is a subtle but important difference — using "assistant" in a Gemini API request will cause an error.
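The mapping is small enough to express as a pair of lookup tables; this is a sketch, with unknown roles passed through unchanged as a conservative default:

```python
OPENAI_TO_GEMINI_ROLE = {"user": "user", "assistant": "model"}
GEMINI_TO_OPENAI_ROLE = {v: k for k, v in OPENAI_TO_GEMINI_ROLE.items()}

def map_role(role: str, mapping: dict) -> str:
    """Rename a role for the target provider, passing unknown roles through."""
    return mapping.get(role, role)
```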
Required Parameters
Anthropic requires `max_tokens` in every API request — it is not optional. OpenAI falls back to a model-specific limit when the parameter is omitted. Google's generation settings, including `maxOutputTokens`, live in an optional `generationConfig` object. The converter adds required parameters automatically.
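Filling in the required parameter can be as simple as a defaulting helper. The function name and the default of 1024 are illustrative choices, matching the default this converter uses:

```python
def ensure_max_tokens(request: dict, default: int = 1024) -> dict:
    """Anthropic's Messages API rejects requests that omit max_tokens."""
    request.setdefault("max_tokens", default)
    return request
```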
Common Migration Scenarios
- OpenAI to Anthropic — Most common migration path. System prompt moves out of messages array, max_tokens must be added explicitly.
- OpenAI to Google — Requires restructuring messages to contents/parts format and mapping assistant to model role.
- Anthropic to OpenAI — System prompt moves back into messages array as the first message. max_tokens becomes optional.
- Multi-provider support — Teams building provider-agnostic applications often need to maintain prompts in multiple formats simultaneously.
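Putting the pieces together, the most common path (OpenAI to Anthropic) can be sketched in one function. This is a simplified illustration, not the converter's source: it assumes string content, joins multiple system messages with blank lines, and ignores provider-specific parameters beyond `model` and `max_tokens`.

```python
def openai_to_anthropic(req: dict, default_max_tokens: int = 1024) -> dict:
    """Move system messages to a top-level field and ensure max_tokens."""
    system_parts = [m["content"] for m in req["messages"] if m["role"] == "system"]
    out = {
        "messages": [m for m in req["messages"] if m["role"] != "system"],
        "max_tokens": req.get("max_tokens", default_max_tokens),
    }
    if "model" in req:
        out["model"] = req["model"]  # note: model names differ between providers
    if system_parts:
        out["system"] = "\n\n".join(system_parts)
    return out
```

Carrying the `model` field over verbatim is only a placeholder step; a real migration also needs to swap in an Anthropic model name.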
Frequently Asked Questions
What formats does this converter support?
The converter supports three major AI API formats: OpenAI Chat Completions (messages array with system/user/assistant roles), Anthropic Messages API (separate system field + messages array), and Google Gemini (system_instruction + contents array with parts). It also accepts plain message arrays without a provider wrapper.
How does the converter handle system prompts?
System prompt handling is the main difference between formats. OpenAI includes system messages in the messages array. Anthropic requires a separate "system" field outside the messages. Google uses a "system_instruction" object with a parts array. The converter automatically moves system prompts to the correct location for each target format.
Does the converter handle tool/function definitions?
The current version focuses on message format conversion — system prompts, user messages, assistant responses, and role mappings. For tool/function definition conversion between providers, use our Tool/Function Definition Linter which validates definitions across all three providers and highlights format differences.
What is the "model" role in Google Gemini format?
Google Gemini uses "model" instead of "assistant" for AI responses. When converting from OpenAI or Anthropic to Google format, all "assistant" roles are automatically mapped to "model". When converting from Google to other formats, "model" is mapped back to "assistant". The converter handles this automatically.
Why does the Anthropic output include max_tokens?
The Anthropic Messages API requires the "max_tokens" parameter in every request — it is not optional. The converter adds a default value of 1024 to ensure the output is a valid Anthropic API request. You should adjust this value based on your expected response length before using it in production.
Related Tools
More tools for working with AI APIs:
- Conversation Message Builder — Visually build chat completion message arrays
- AI Hallucination Risk Scorer — Analyze prompts for confabulation risk patterns
- LLM Parameter Playground — Experiment with temperature, top-p, and other model settings