Evaluation Forms help QA Managers evaluate voice and chat interactions using standardized scoring criteria. The feature supports direction-aware evaluations, flexible scoring models, manual audits, and minimum duration thresholds to improve scoring accuracy across queues and channels. Each queue supports one evaluation form per channel and contact direction. For example, Voice–Inbound and Voice–Outbound can use different evaluation forms with separate metrics and scoring logic.
Key Features
| Feature | Description |
|---|---|
| Multi-language Support | Localized metrics for accurate global evaluations. |
| Flexible Scoring | Percentage-based (simpler forms) or points-based (complex evaluations). |
| Direction-aware Evaluation | Assign forms by queue, channel, and contact direction for inbound, outbound, or both interaction types. |
| Advanced Scoring Options | Highlight critical issues with negative scoring, fatal metrics, and pass-score thresholds. |
| Channel-specific Configuration | Separate settings for Voice and Chat. |
| Queue and Channel Assignment | Assign forms to specific queues and channels. |
| Auto QA and Manual Audits | Supports automated scoring and supervisor-led manual evaluations. |
| Minimum Duration Threshold | Exclude short or incomplete contacts before evaluation. |
| Versioned Assignments | Apply updates only to future interactions while preserving historical scoring. |
How It Works
QA Managers create evaluation forms with weighted metrics and assign them to queues, channels, and contact directions.
Access Evaluation Forms
Navigate to Quality AI > Configure > Evaluation Forms.
Evaluation Forms Elements
The Evaluation Forms page displays the following elements:
| Column | Description |
|---|---|
| Name | Evaluation form name. |
| Description | Short description of the form. |
| Queues | Assigned and unassigned queues. |
| Channel | Channel mode assigned to the form (Voice or Chat). |
| Created By | Form creator. |
| Pass Score | Minimum score for the agent to pass. |
| Status | Enable or disable scoring for a form. |
| Search | Find evaluation forms by name. |
Enable Auto QA in Quality AI Settings before creating evaluation forms.
Evaluation Forms Structure
An evaluation form includes three components:
- Evaluation Form: Defines the overall scoring structure, scoring type, language, channel, pass score, and threshold.
- Assignments: Maps queues, conversation sources, and contact directions.
- Evaluation Metrics: Defines the scoring criteria and outcome rules used to measure agent performance.
Create a New Evaluation Form
Creating an evaluation form involves the following three sections:
General Settings
- Select the Evaluation Forms tab.
- Select + New Evaluation Forms.
- Enter a Name and optional Description.
- Select the required Language.
- Select a Channel type:
- Chat: Displays only chat metrics (excluding speech and voice-specific Playbook metrics).
- Voice: Displays voice, speech, and Playbook-supported metrics.
- Select a Scoring Type (Percentage or Points).
- (Optional) Enable Minimum Duration Threshold to exclude interactions below the configured duration from evaluation.
- Define a threshold value in minutes and seconds.
- Set a minimum Pass Score required for agents.
- Select Next.

Assignments
Assign queues to the evaluation form and define the interaction direction for evaluation.
- Search and select available queues.
- Select Add Queues to assign them to the evaluation form.
- Select a Conversation Source:
- Quality AI Express: Processes Express-based interactions.
- CCAI Integration: Processes Contact Center AI interactions.
- Agent AI Integration: Processes Agent AI interactions.
- Assign a Contact Direction (Inbound, Outbound, or Both) for each queue.
- Select Next.

If CCAI or Agent AI queues are combined with Express queues, By Playbook and By Dialog metrics become unavailable.
Queue Assignment Rules
Assign one evaluation form for each unique combination of queue, channel, and contact direction. Supported directions include:
- Inbound — Incoming customer interactions.
- Outbound — Agent-initiated interactions such as callbacks or campaigns.
- Both — Applies the same form to both inbound and outbound interactions.
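The uniqueness rule can be illustrated with a small validation sketch. Everything here (function names, tuple shape) is assumed for illustration, not product code:

```python
# Illustrative check of the assignment rule: one evaluation form per unique
# (queue, channel, contact direction) combination. "Both" claims both directions.

def expand_directions(direction):
    return ["Inbound", "Outbound"] if direction == "Both" else [direction]

def validate_assignments(assignments):
    """assignments: list of (form_name, queue, channel, direction) tuples.
    Returns a list of conflicting (key, existing_form, new_form) entries."""
    seen = {}
    conflicts = []
    for form, queue, channel, direction in assignments:
        for d in expand_directions(direction):
            key = (queue, channel, d)
            if key in seen and seen[key] != form:
                conflicts.append((key, seen[key], form))
            seen[key] = form
    return conflicts

ok = validate_assignments([
    ("Voice Inbound Form", "Sales", "Voice", "Inbound"),
    ("Voice Outbound Form", "Sales", "Voice", "Outbound"),  # different direction: allowed
])
bad = validate_assignments([
    ("Form A", "Sales", "Voice", "Both"),
    ("Form B", "Sales", "Voice", "Inbound"),                # collides with Form A's Inbound
])
```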
Evaluation Metrics
Evaluation metrics define the scoring criteria used for Auto QA and manual audits. Manual metrics support supervisor-led evaluations and qualitative scoring such as empathy, tone, or judgment.
Add and Configure Metrics
- Search and select the required metrics, then add them to the form.
- Select Edit to configure each metric.
- Assign a Weightage value to each metric based on the selected scoring type: use percentage values for percentage-based forms and points for points-based forms.
- Select the correct Response and Outcome scoring that defines a match for each metric.
- Reorder or remove metrics as needed.
- Enable Fatal Error (optional) for critical metrics.
- Select Create.

Manual metrics are supported only in points-based scoring and are excluded from automated (Auto QA) scorecards.
Evaluation Behavior
During evaluation, the system resolves the applicable form using the following rules:
- Forms are selected based on queue, channel, and contact direction.
- When both directions are selected, the same evaluation form is applied to both inbound and outbound interactions.
- Assignments are versioned; changes apply only to future evaluations.
- Metric availability depends on language, channel, and direction; only supported metrics are shown.
- Scores update automatically when weights or outcomes change.
- Disabling the minimum duration threshold includes all contacts in evaluation.
- Manual metrics are used for supervisor audits and excluded from automated scoring.
- If no contact direction is selected, evaluation is skipped for that queue.
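The resolution behavior above can be sketched as a small lookup. This is a minimal illustration under assumed data structures, not the product's actual implementation:

```python
# Illustrative sketch of form resolution: the form is selected by queue,
# channel, and contact direction; a "Both" assignment matches either
# direction, and evaluation is skipped when no assignment matches.

def resolve_form(assignments, queue, channel, direction):
    """assignments: dict mapping (queue, channel, direction) -> form name."""
    for d in (direction, "Both"):
        form = assignments.get((queue, channel, d))
        if form:
            return {"status": "Evaluated", "form": form}
    return {"status": "Skipped", "form": None}   # no fallback form is applied

assignments = {
    ("Support", "Voice", "Both"): "Support Voice Form",
    ("Sales", "Chat", "Inbound"): "Sales Chat Form",
}

# "Both" covers outbound calls on the Support queue...
result = resolve_form(assignments, "Support", "Voice", "Outbound")
# ...but an unassigned direction is skipped, not evaluated with a fallback.
skipped = resolve_form(assignments, "Sales", "Chat", "Outbound")
```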
Metric Card Configuration
Trigger Scoring Disabled
When trigger scoring is off, the metric card displays the following controls:
- Weightage: Enter a numeric percentage for the metric’s contribution to the form’s total score.
- Fatal Error Toggle: Marks the outcome as a fatal error, which fails the entire evaluation if the metric is not met.
Trigger Scoring Enabled
When trigger scoring is on, the metric card expands to show outcome-level sub-weight controls:
- Scoring Rows with Outcome-level Weightage: Displays Yes and No rows, each with a Weightage input field.
- Correct Response: Available on specific outcome rows to mark the expected correct response.
- Fatal Error Toggle: Marks a non-adherent outcome as a fatal error.
You must configure negative scoring at the outcome level.
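A trigger-scoring-enabled metric card might be modeled like this hypothetical structure; the keys are illustrative, not Quality AI's real schema:

```python
# Hypothetical representation of a metric card with trigger scoring enabled.
# Each outcome row carries its own weightage; negative scoring is configured
# at the outcome level, and a fatal outcome fails the whole evaluation.

metric_card = {
    "metric": "Identity Verification",
    "trigger_scoring": True,
    "outcomes": [
        {"response": "Yes", "weightage": 30, "correct_response": True},
        {"response": "No",  "weightage": -10, "fatal_error": True},
    ],
}

def outcome_weight(card, response):
    """Return the configured weightage for a given outcome response."""
    for row in card["outcomes"]:
        if row["response"] == response:
            return row["weightage"]
    return 0
```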
Outcome Configuration
For each metric, define the outcomes (for example, Yes or No) and assign a positive, zero, or negative weight based on the expected response. A matching response receives positive weight. A non-matching response receives zero or negative weight (if configured).
Contact Duration
The system checks contact duration against the configured minimum duration threshold after form selection and before metric scoring. The threshold can be set at the evaluation form or scorecard level; if both are configured, the form-level threshold takes precedence.
| Contact Duration Status | Assigned Result | Notes |
|---|---|---|
| Meets or Exceeds Threshold | — | Evaluated normally. |
| Falls Below Threshold | Below Threshold | Excluded from scoring and quality metrics. |
| Duration Unresolved | Duration unavailable | Excluded from evaluation. |
The minimum duration threshold applies only to contacts ingested from supported sources, such as Native CCAI, Agent AI, and Express (FTP). Previously ingested contacts are not re-evaluated when the threshold is updated.
Duration Calculation By Channel
| Channel | Duration Measured As |
|---|---|
| Voice | Full call duration, including hold time. |
| Chat | Time between the first and last message timestamps. |
| Quality AI Express (FTP) | Based on the start_time and end_time fields. |
The system can exclude a contact for one evaluation form while evaluating it for another when thresholds differ.
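Taken together, the threshold behavior above amounts to a simple classification. A minimal sketch, assuming duration is already resolved to seconds (or None when unresolved):

```python
# Illustrative duration check against a minimum duration threshold.
# Channel rules follow the table above: Voice uses full call duration
# (including hold), Chat uses first-to-last message timestamps, and
# Express (FTP) contacts use the start_time and end_time fields.

def classify_contact(duration_seconds, threshold_seconds):
    if duration_seconds is None:
        return "Duration unavailable"   # excluded from evaluation
    if duration_seconds < threshold_seconds:
        return "Below Threshold"        # excluded from scoring and quality metrics
    return "Evaluated"                  # evaluated normally

threshold = 90  # 1 minute 30 seconds, expressed in seconds
results = [
    classify_contact(120, threshold),   # meets threshold
    classify_contact(45, threshold),    # too short
    classify_contact(None, threshold),  # duration could not be resolved
]
```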
Scoring Type Selection
Scoring type determines how you assign weights to evaluation metrics.
Percentage-Based
Use Percentage-Based to assign weights as percentages (the total must equal 100%); recommended for forms with fewer than 20 metrics. Example: A form with Greeting (20%), Verification (30%), and Resolution (50%) totals 100%.
Points-Based
Use Points-Based to assign weights as points (no cap on positive points, but total negative points must not exceed total positive points); recommended for complex forms with 20+ metrics (ideally 40+). Points-based scoring also supports manual evaluation metrics. Example: A critical compliance metric can carry 50 points, while a minor greeting metric can carry 5 points. A failed compliance check can apply negative scoring.
Scoring Formula (Points-Based)
Kore Evaluation Score calculates the weighted impact of met and not-met metrics, subtracts penalties, divides by the total positive weight, and multiplies the result by 100.

Kore Evaluation Score = [(∑(Myi × Wyi) − ∑(Mni × Wni)) / ∑(Wyi)] × 100

Where:
- Myi: 1 if metric i is adhered to, otherwise 0.
- Mni: 1 if metric i is not adhered to, otherwise 0.
- Wyi: Positive weight assigned to metric i.
- Wni: Negative (penalty) weight assigned to metric i.
Scoring Logic
- Pass: Final score ≥ Pass Score threshold.
- Fail: Final score < Pass Score threshold.
- Fatal error: Sets the score to 0 and marks the interaction as failed, regardless of other metric scores.
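The points-based formula and pass/fail logic above can be sketched in a few lines. This is a minimal illustration of the published formula; the field names are assumptions:

```python
# Sketch of the points-based Kore Evaluation Score described above:
# score = [(sum(Myi * Wyi) - sum(Mni * Wni)) / sum(Wyi)] * 100,
# where Myi/Mni indicate whether metric i was met, and Wyi/Wni are its
# positive weight and penalty weight. A failed fatal metric forces 0.

def kore_evaluation_score(metrics):
    """metrics: list of dicts with keys met (bool), positive_weight,
    penalty_weight, and fatal (bool)."""
    if any(m["fatal"] and not m["met"] for m in metrics):
        return 0.0                       # fatal failure overrides all other scores
    earned = sum(m["positive_weight"] for m in metrics if m["met"])
    penalty = sum(m["penalty_weight"] for m in metrics if not m["met"])
    total_positive = sum(m["positive_weight"] for m in metrics)
    return 100 * (earned - penalty) / total_positive

metrics = [
    {"met": True,  "positive_weight": 50, "penalty_weight": 25, "fatal": False},  # compliance
    {"met": False, "positive_weight": 5,  "penalty_weight": 2,  "fatal": False},  # greeting
    {"met": True,  "positive_weight": 45, "penalty_weight": 0,  "fatal": False},  # resolution
]
score = kore_evaluation_score(metrics)   # 100 * (95 - 2) / 100 = 93.0
passed = score >= 80                     # pass when the score meets the pass score
```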
Scoring Systems Comparison
Quality AI supports two scoring methods: Percentage-Based and Points-Based.
| Feature | Percentage-Based | Points-Based |
|---|---|---|
| Best for | Smaller forms (under ~20 metrics) | Larger forms (20+ metrics) |
| Total weight | Must equal 100% | No fixed maximum |
| Scalability | Limited by 100% cap | High flexibility |
| Weight per metric | Decreases as metrics increase | Assign any point value |
| Weight precision | May require fractional values | Uses whole-number allocation |
| Negative scoring | Supported within the 100% total constraint | Allowed, but can’t exceed total positive weight |
| Final score | Direct percentage (0-100) | Normalized to percentage (0-100) |
Weight Assignment Rules
| Configuration | Percentage-based | Points-based |
|---|---|---|
| If the expected correct response is Yes | Positive % for Yes; zero or negative % for No | Positive points for Yes; zero or negative points for No |
| If the expected correct response is No | Positive % for No; zero or negative % for Yes | Positive points for No; zero or negative points for Yes |
| Validation | Total positive weight must equal 100%; negative weight allowed within the 100% structure | No upper limit on total positive points; total negative points must not exceed total positive points |
| Manual Evaluation Metrics | Not supported | Supported |
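The validation rules in the table can be illustrated with a small check (function and parameter names are hypothetical):

```python
# Illustrative validation of the weight rules above: percentage forms must
# total exactly 100% positive weight, while points forms have no positive
# cap but total negative points must not exceed total positive points.

def validate_weights(scoring_type, positive_weights, negative_weights):
    total_pos = sum(positive_weights)
    total_neg = sum(negative_weights)
    if scoring_type == "Percentage":
        return total_pos == 100
    if scoring_type == "Points":
        return total_neg <= total_pos
    raise ValueError(f"unknown scoring type: {scoring_type}")

checks = [
    validate_weights("Percentage", [20, 30, 50], [10]),   # positives total 100%
    validate_weights("Percentage", [20, 30, 40], []),     # totals only 90%
    validate_weights("Points", [50, 5], [25, 2]),         # 27 penalty <= 55 positive
    validate_weights("Points", [10], [15]),               # penalties exceed positives
]
```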
Fatal Error Behavior
When a fatal metric fails, the system immediately sets the final score to 0, ignores all other metric results, and marks the interaction as failed.
Form Selection Logic
During evaluation, the system resolves the applicable form using Queue → Channel → Contact Direction, with metric availability and scoring dependent on the supported language, channel, and direction. If contact direction metadata is missing, the system defaults to Inbound before form matching. The contact direction must match the configured assignment for the interaction. If multiple forms match the same Queue → Channel → Contact Direction combination, the system applies the most recently versioned active assignment. If no matching form exists, evaluation is skipped without applying a fallback form. Evaluation outcomes are classified as Evaluated or Skipped (no matching form).
Managing Evaluation Forms
This section guides you through editing and updating existing evaluation forms.
Edit and Delete Existing Evaluation Forms
To edit or delete an existing evaluation form:
- Use the three-dot menu to Edit or Delete the evaluation form and update the required details.
- Before deleting an evaluation form, remove linked queue assignments, dependent metrics (if required), and resolve attribute dependencies. If the form is still in use, the system displays a warning.
- Select Update.
Warnings and Error Handling
Switching Scoring Types
Changing the scoring type clears all existing metric weights and outcome configurations, requiring full reconfiguration of all metrics.
Language Configuration Warnings
Changes to language settings can affect speech recognition accuracy and metric results.
Unsupported Language Error (Form Level)
This error occurs when a form language is not supported by one or more metrics. Example: A form uses English and Dutch metrics. Adding Hindi triggers a validation error if the metrics do not support Hindi. To resolve this:
- Review the language configuration for each metric used in the form.
- Update each metric to support the new language (for example, Hindi).
- Verify all metrics support the selected language.
- Add the language to the form after updating all metrics.
Language Selection Behavior
The system applies an AND condition across all selected languages and displays only By-Question metrics that support every selected language.
Metric-Level Language Limitation
This warning displays when you add or update a metric that doesn’t support a language already configured in the form. To resolve this:
- Update the metric language configuration, or
- Select a metric that supports all languages configured in the form.
Channel Mode Change
Switching between Voice and Chat removes unsupported metrics (such as speech-based metrics in Chat). After switching channels:
- Update the remaining metrics to support the new channel.
- Adjust the corresponding weights for proper evaluation.
- Select Update to save the changes.