Prompt Studio manages the full lifecycle of prompts—from creation and testing to optimization and deployment. You can test prompts across external, fine-tuned, and open-source models to find the best model and configuration through iteration.

Key Features

  • 65+ pre-built templates for common use cases.
  • Multi-model comparison and testing (up to 5 models simultaneously).
  • Variable support to test multiple scenarios at once.
  • AI-assisted prompt generation and synthetic test data creation.
  • Version control, draft history, and collaboration.
  • Model performance analytics: response time and token usage.

Access Prompt Studio

  1. Log in to your AI for Process account and select Prompts on the top navigation bar.
  2. On the Prompts dashboard, select a tab:
    • All prompts: All available prompts.
    • My prompts: Prompts you created or saved.
    • Shared prompts: Prompts shared with you for use or collaboration.
    Each tab shows the prompt name, prompt text, and creator name.
  3. Click New prompt.
  4. In the New prompt dialog, enter a name and click Proceed. The Prompt landing page opens.

Create a Prompt

On the Prompt landing page, choose one of three ways to start:

Generate a Prompt

This option expands a short input (one or two sentences) into a detailed, structured prompt. It helps LLMs better understand context and perform tasks more effectively.
Only OpenAI and Anthropic models are supported for prompt generation and test data generation.
  1. Click Generate a prompt.
  2. In the Prompt generator dialog, select a model and enter your instruction.
  3. Review the AI-generated prompt.
  4. Click Proceed to copy the prompt to the canvas. Customize it as needed.
You can also generate prompts directly on the prompt canvas by clicking Generate Prompt in the Prompt field.

Start from Scratch

This option opens a blank prompt canvas where you add prompts, variables, and models, then generate output. You can also pull in templates from the prompt library. For steps, see Work on the Prompt Canvas.

Prompt Library

The Prompt Library includes 65+ built-in templates for common use cases—code generation, summarization, content creation, Q&A, and more. Search templates by keyword or filter by use case category.
All templates are read-only. You can edit content only after importing a template to the prompt canvas.
  1. Click Prompt library.
  2. In the Prompt library dialog, select a tab:
    • My templates: Templates you previously saved.
    • All templates: All available templates.
  3. Click a template to preview it, then click Use template.
  4. The template loads on the prompt canvas. Customize it as needed.

Work on the Prompt Canvas

The prompt canvas is the core workspace for prompt experiments—testing and comparing AI model performance on a specific input (phrase, question, or paragraph). The workflow follows four steps:
Add prompts → Apply variables → Select models → Generate output

Add Prompts

  1. In the System prompt field, assign a role to the model. This field is optional—use the toggle to enable or disable it.
  2. In the Prompt field, enter your instructions. Click Generate Prompt to expand a short instruction into a more detailed prompt.
  3. Optionally, define a Response JSON schema to structure model responses. If the selected model supports response formatting, the schema is applied directly. If not, the schema is included with the prompt, and the model responds in the requested format if it can. Without a schema, the model responds in plain text. Supported types: String, Boolean, Number, Integer, Object, Array, Enum, and anyOf. For schema syntax, see Defining JSON schema. Resolve any schema errors before proceeding.
  • System prompts guide the model’s overall behavior or tone. Example: “You are a helpful assistant.”
  • Human prompts specify what the user wants. Example: “Summarize this error log and tell me the likely cause of the issue.”
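As an illustration, a minimal Response JSON schema for the error-log example above might be built as in this Python sketch (the field names "summary", "likely_cause", and "severity" are hypothetical, not part of the product):

```python
import json

# Hypothetical response schema for the error-log summary prompt.
# Uses the supported types String, Object, and Enum.
response_schema = {
    "type": "object",
    "properties": {
        "summary": {"type": "string"},
        "likely_cause": {"type": "string"},
        "severity": {"type": "string", "enum": ["low", "medium", "high"]},
    },
    "required": ["summary", "likely_cause"],
}

# Serialize for pasting into the Response JSON schema field.
print(json.dumps(response_schema, indent=2))
```

If the selected model supports response formatting, a schema like this constrains the output to a JSON object with those keys; otherwise it is appended to the prompt as guidance.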

Apply Variables

Variables let you run prompts with multiple values simultaneously—generating outputs for all values at once. Use the {{variable}} syntax anywhere in the prompt or system prompt. For datasets, map CSV column names to variable names (case-sensitive). For example, {{Name}} maps to a CSV column named “Name.”
  1. In the Prompt field, add variables in double curly braces. For example, {{xyz}}. The Variables column appears automatically.
  2. In the Variables window, assign a value to each variable. Click Add an empty row to add multiple rows.
  3. Verify that the variables are substituted correctly in the Prompt window.
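The {{variable}} substitution and CSV column mapping described above can be sketched as follows (the render helper is illustrative, not the product's implementation):

```python
import csv
import io
import re

def render(prompt: str, row: dict) -> str:
    """Substitute {{variable}} placeholders with row values (case-sensitive).

    Unknown variables are left untouched so they remain visible on the canvas.
    """
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: str(row.get(m.group(1), m.group(0))), prompt)

prompt = "Write a greeting for {{Name}} in {{Language}}."

# Each CSV row becomes one generated output; column names map to variables.
dataset = io.StringIO("Name,Language\nAda,French\nLin,Spanish\n")
rows = list(csv.DictReader(dataset))
outputs = [render(prompt, r) for r in rows]
# outputs[0] == "Write a greeting for Ada in French."
```

Because the mapping is case-sensitive, a column named "name" would not fill {{Name}}.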

Select Models

Test prompts with up to 5 models simultaneously to compare accuracy, tone, and relevance.
  1. In the prompt canvas, click the Select Model field and choose a model and connection.
  2. To add more models, select from the columns on the right.
  3. Click the model settings icon to adjust parameters: temperature, top k, top p, and max tokens.
For model settings, bookmarking, and other options, see Prompt Canvas Options.

Generate Output

After selecting models, click Run to generate output.
You can generate a maximum of 10 rows of data simultaneously.
The output area displays:
  • Model responses.
  • Total input and output tokens.
  • Time taken to generate each response.
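The per-response statistics the output area reports can be aggregated the same way the model column summaries are; a minimal sketch with made-up numbers:

```python
# Mock per-response metrics as the output area reports them (values are made up).
responses = [
    {"input_tokens": 120, "output_tokens": 85, "seconds": 1.4},
    {"input_tokens": 118, "output_tokens": 92, "seconds": 1.1},
    {"input_tokens": 121, "output_tokens": 70, "seconds": 0.9},
]

# Totals and per-interaction averages, matching the model column statistics.
total_tokens = sum(r["input_tokens"] + r["output_tokens"] for r in responses)
avg_tokens = total_tokens / len(responses)
avg_seconds = sum(r["seconds"] for r in responses) / len(responses)

print(f"total tokens: {total_tokens}, "
      f"avg tokens: {avg_tokens:.0f}, avg time: {avg_seconds:.2f}s")
```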

Prompt Canvas Options

Model Column Options

  • Rearrange: Drag and drop to reorder model columns on the canvas.
  • Remove model: Remove a model from the comparison.
  • Bookmark a model: Set a model as preferred. Required when committing a version if no model is bookmarked.
  • Model settings: Adjust temperature, top k, top p, and max tokens. Defaults: temperature 1, top p 1, top k 5, max tokens 256.
  • Play: Regenerate output for the entire column.
  • Average response time: Average time the model takes to generate a response.
  • Average tokens: Mean number of input and output tokens used per interaction.
  • Copy: Copy the generated output to the clipboard.
  • View in JSON: View the prompt and response in JSON format in a separate dialog.
  • Re-generate: Regenerate output for a single cell.

Top Toolbar Options

  • Prompt library: Browse and select from 65+ templates to use or customize.
  • Prompt API: Access version-specific API endpoints. Select cURL, Python, or Node.js format.
  • Save to prompt library (three dots menu): Save the current prompt as a template in My Templates.
  • Draft history (three dots menu): Capture and restore the prompt canvas state at different points in time.
  • Export as CSV (three dots menu): Export the canvas inputs, outputs, and metadata as a CSV file.
  • Versions: View, compare, and restore prompt versions.
  • Share (three dots menu): Share the prompt with other users for collaboration or reuse.
  • Commit: Save the current prompt as a new version (V1, V2, etc.).
  • Run: Execute the prompt and generate output.

Advanced Features

To create, import, and manage data, see Manage Dataset.

Bookmark a Model

Bookmarking sets a model as your preferred model for a prompt. The bookmarked model is recorded when you commit a version. If no model is bookmarked at commit time, you must select one manually.
  1. Click the Bookmark model with its settings icon on the model column. The icon changes to Model bookmarked with its settings after bookmarking.
  2. If you commit without bookmarking, a dialog prompts you to select a preferred model.
  3. After committing, click Versions to see the preferred model recorded in the version history.

Draft History

Draft history saves the complete state of the prompt canvas at different points in time, including inputs, outputs, model selections, and variable values. Click Draft History (from the three dots menu) to open the dialog. It lists all saved drafts with system prompts, human prompts, variables, and generated outputs. Click Restore to revert to that state.
  • Draft History captures both inputs and outputs—the full context of each iteration.
  • Versions track only prompt inputs (system and human prompts), not outputs.

Regenerate Output

Selective regeneration lets you re-run specific prompts without regenerating all outputs, reducing unnecessary model usage. Use regeneration to:
  • Fine-tune specific prompts for better quality.
  • Compare model performance on the same prompt.
  • Adjust prompts to reduce bias.
  • Experiment with specific cells while preserving other outputs.
  • Cell-level regeneration: re-runs a single output cell.
  • Column-level regeneration: re-runs all rows for a model.

Manage Prompt Versions

Prompt versioning tracks iterations in a shared repository. Each committed version records the prompt, system prompt, and preferred model—creating an auditable history of changes. Key behaviors:
  • Committing: You must generate output before committing. The first commit creates V1; subsequent commits create V2, V3, and so on. Version names are assigned automatically.
  • Default version: The latest committed version is the default. To change it, select a version and click Mark as a default version.
  • Using versions as drafts: Load any version as a draft to edit without affecting the original. Commit the draft to create a new version.
To commit a version:
  1. Click Commit on the prompt canvas. The system saves the current prompt, system prompt, and preferred model as a new version.
  2. Click Versions to view the version history.
  3. Select a version and click Use as a draft to load it on the canvas for editing.
Select Mark as a default version to set a specific version as the default for the API endpoint and collaborators.

Share Prompts

Prompt sharing lets you collaborate by sharing prompts—including inputs, outputs, and settings—with other users. The original creator is the Owner.
What is shared depends on when you share:
  • Share before committing: inputs, outputs, and settings. Version history is not shared (no versions committed yet).
  • Share after committing: inputs, outputs, settings, and the full version history; the new user sees all versions.
  • Share with multiple contributors: all versions are shared, and each version shows the contributing user.
To share a prompt:
  1. Open the prompt and click the three dots icon > Share.
  2. In the Share dialog, select users from your account. To add users not in your account, go to Settings.
  3. Assign a role and click Share. The system notifies selected users about the shared prompt and their permissions.

Prompt Roles and Permissions

  • Full (Owner): View, edit, restore, commit, and delete prompts. Manage users, API keys, and test data.
  • Edit (Collaborator): All Full permissions except delete.
  • View (Viewer): View prompts and versions only. Cannot edit, delete, or commit.

Prompt API Endpoint

The Prompt API lets you access prompts externally using version-specific API keys, eliminating manual copy-pasting. The endpoint generates automatically when you commit the first version of a prompt. How it works:
  • Each prompt has one API endpoint. By default, it returns the latest version. Set a default version to control which version the endpoint serves.
  • A successful API request returns the SystemPrompt and HumanPrompt from the specified version.
  • Edit the endpoint’s query parameters to target a specific version. Without a version parameter, it returns the default version.
  • You can create multiple API keys per endpoint. Each key can be copied only once, and a deleted key cannot be reused.
  • Deleting an API key invalidates it in all external systems where it was used.
Supported request formats: cURL, Python, Node.js.

Best Practices

  • Start with templates: Use the prompt library for common use cases to accelerate prompt creation.
  • Use variables: Run multiple prompts simultaneously and ensure consistent testing across scenarios.
  • Bookmark strong models: Track better-performing models by bookmarking them.
  • Use version control: Commit versions for significant changes and log updates clearly.
  • Export results as CSV: Save important outputs for sharing and analysis.
  • Save successful prompts: Save effective prompts as templates in the prompt library for future use.