

Evaluation Forms help QA Managers evaluate voice and chat interactions using standardized scoring criteria. The feature supports direction-aware evaluations, flexible scoring models, manual audits, and minimum duration thresholds to improve scoring accuracy across queues and channels. Each queue supports one evaluation form per channel and contact direction. For example, Voice–Inbound and Voice–Outbound can use different evaluation forms with separate metrics and scoring logic.

Key Features

| Feature | Description |
| --- | --- |
| Multi-language Support | Localized metrics for accurate global evaluations. |
| Flexible Scoring | Percentage-based (simpler forms) or points-based (complex evaluations). |
| Direction-aware Evaluation | Assign forms by queue, channel, and contact direction for inbound, outbound, or both interaction types. |
| Advanced Scoring Options | Highlight critical issues with negative scoring, fatal metrics, and pass-score thresholds. |
| Channel-specific Configuration | Separate settings for Voice and Chat. |
| Queue and Channel Assignment | Assign forms to specific queues and channels. |
| Auto QA and Manual Audits | Supports automated scoring and supervisor-led manual evaluations. |
| Minimum Duration Threshold | Exclude short or incomplete contacts before evaluation. |
| Versioned Assignments | Apply updates only to future interactions while preserving historical scoring. |

How It Works

QA Managers create evaluation forms with weighted metrics and assign them to queues, channels, and contact directions.

Access Evaluation Forms

Navigate to Quality AI > Configure > Evaluation Forms.

Evaluation Forms Elements

The Evaluation Forms page displays the following elements:

| Column | Description |
| --- | --- |
| Name | Evaluation form name. |
| Description | Short description of the form. |
| Queues | Assigned and unassigned queues. |
| Channel | Channel mode assigned to the form (Voice or Chat). |
| Created By | Form creator. |
| Pass Score | Minimum score for the agent to pass. |
| Status | Enable or disable scoring for a form. |
| Search | Find evaluation forms by name. |
Enable Auto QA in Quality AI Settings before creating evaluation forms.

Evaluation Forms Structure

An evaluation form includes:
  • General Settings: Defines the scoring type, language, channel, pass score, and minimum duration threshold.
  • Assignments: Maps queues, conversation sources, and contact directions to the form.
  • Evaluation Metrics: Defines the scoring criteria and outcome rules used to measure agent performance.

Create a New Evaluation Form

Creating an evaluation form involves the following three sections:

General Settings

  1. Select the Evaluation Forms tab.
  2. Select + New Evaluation Forms.
  3. Enter a Name and optional Description.
  4. Select the required Language.
  5. Select a Channel type:
    • Chat: Displays only chat metrics (excluding speech and voice-specific Playbook metrics).
    • Voice: Displays voice, speech, and Playbook-supported metrics.
  6. Select a Scoring Type (Percentage or Points).
  7. (Optional) Enable Minimum Duration Threshold to exclude interactions below the configured duration from evaluation.
  8. (If enabled) Define a threshold value in minutes and seconds.
  9. Set a minimum Pass Score required for agents.
  10. Select Next.

Assignments

Assign queues to the evaluation form and define the interaction direction for evaluation.
  1. Search and select available queues.
  2. Select Add Queues to assign them to the evaluation form.
  3. Select a Conversation Source:
    • Quality AI Express: Processes Quality AI Express interactions.
    • CCAI Integration: Processes Contact Center AI interactions.
    • Agent AI Integration: Processes Agent AI interactions.
  4. Assign a Contact Direction (Inbound, Outbound, or Both) for each queue.
  5. Select Next.
Each queue provides Inbound and Outbound selection options to define where the evaluation form applies.
If CCAI or Agent AI queues are combined with Express queues, By Playbook and By Dialog metrics become unavailable.

Queue Assignment Rules

Assign one evaluation form for each unique combination of queue, channel, and contact direction. Supported directions include:
  • Inbound — Incoming customer interactions.
  • Outbound — Agent-initiated interactions such as callbacks or campaigns.
  • Both — Applies the same form to both inbound and outbound interactions.
The system displays only queues with assigned access permissions and blocks duplicate assignments for the same combination. If no direction is selected, the system skips evaluation for that queue. CCAI Chat queues don’t support the Outbound direction.
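The one-form-per-combination rule can be expressed as a simple uniqueness check. This is a sketch under the assumption that "Both" conflicts with any existing direction on the same queue and channel; the function name and tuple shape are illustrative.

```python
def validate_assignments(existing, new):
    """Return True if `new` can be assigned without duplicating a
    (queue, channel, direction) combination already in `existing`.

    Assumption: a "Both" assignment occupies Inbound and Outbound slots,
    per the one-form-per-combination rule described above.
    """
    def expand(direction):
        return {"Inbound", "Outbound"} if direction == "Both" else {direction}

    taken = set()
    for queue, channel, direction in existing:
        for d in expand(direction):
            taken.add((queue, channel, d))

    queue, channel, direction = new
    conflicts = expand(direction) & {d for (q, c, d) in taken
                                     if q == queue and c == channel}
    return not conflicts
```

For example, a queue already assigned Voice–Inbound still accepts a Voice–Outbound form, but rejects a Voice–Both form because the Inbound slot is taken.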

Evaluation Metrics

Evaluation metrics define the scoring criteria used for Auto QA and manual audits. Manual metrics support supervisor-led evaluations and qualitative scoring such as empathy, tone, or judgment.

Add and Configure Metrics

  1. Search and select the required metrics, then add them to the form.
  2. Select Edit to configure each metric.
  3. Assign a Weightage value to each metric based on the selected scoring type: percentage values for percentage-based forms, points for points-based forms.
  4. Select the correct Response and Outcome scoring that defines a match for each metric.
  5. Reorder or remove metrics as needed.
  6. Enable Fatal Error (optional) for critical metrics.
  7. Select Create.
The system automatically calculates total positive and negative scores.
Manual metrics are supported only in points-based scoring and are excluded from automated (Auto QA) scorecards.

Evaluation Behavior

During evaluation, the system applies the following rules:
  • Forms are selected based on queue, channel, and contact direction.
  • When Both is selected, the same evaluation form is applied to inbound and outbound interactions.
  • Assignments are versioned; changes apply only to future evaluations.
  • Metric availability depends on language, channel, and direction; only supported metrics are shown.
  • Scores update automatically when weights or outcomes change.
  • Disabling minimum duration includes all contacts.
  • Manual metrics are used for supervisor audits and excluded from automated scoring.
  • If no contact direction is selected, evaluation is skipped for that queue.

Metric Card Configuration

Trigger Scoring Disabled

When trigger scoring is off, the metric card displays the following controls:
  • Weightage: Enter a numeric percentage for the metric’s contribution to the form’s total score.
  • Fatal Error Toggle: Marks the outcome as a fatal error, which fails the entire evaluation if the metric is not met.

Trigger Scoring Enabled

When trigger scoring is on, the metric card expands to show outcome-level sub-weight controls.
  • Scoring Rows with Outcome-level Weightage: Displays Yes and No rows, each with a Weightage input field.
  • Correct Response: Available on specific outcome rows to mark the expected correct response.
  • Fatal Error Toggle: Marks a non-adherent outcome as a fatal error.
You must configure negative scoring at the outcome level.

Outcome Configuration

For each metric, define the outcomes (for example, Yes or No) and assign a positive, zero, or negative weight based on the expected response. A matching response receives positive weight. A non-matching response receives zero or negative weight (if configured).
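The outcome rule above can be sketched as a single function. This is a minimal illustration, assuming negative weights are stored as positive magnitudes; the function name and parameters are hypothetical.

```python
def score_outcome(response, correct_response, positive_weight, negative_weight=0.0):
    """Return the weight contribution of one metric outcome.

    A matching response earns the positive weight; a non-matching response
    earns zero, or a negative value when negative scoring is configured.
    """
    if response == correct_response:
        return positive_weight
    return -abs(negative_weight)   # zero when no negative weight is configured
```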

Contact Duration

The system evaluates contact duration against the configured minimum duration threshold after form selection and before metric scoring. The threshold can be configured at the evaluation form or scorecard level; if both are configured, the evaluation form-level threshold takes precedence.
| Contact Duration Status | Assigned Result | Notes |
| --- | --- | --- |
| Meets or Exceeds Threshold | Evaluated | Scored normally. |
| Falls Below Threshold | Below Threshold | Excluded from scoring and quality metrics. |
| Duration Unresolved | Duration unavailable | Excluded from evaluation. |
The system classifies interactions as Evaluated, Below Threshold, or Excluded (duration unavailable). Duration-based exclusions don’t remove contacts from Interaction Explorer or Conversation Mining, and authorized supervisors can manually evaluate them. Threshold updates apply only to newly ingested contacts.
The minimum duration threshold applies only to contacts ingested from supported sources, such as Native CCAI, Agent AI, and Express (FTP). Previously ingested contacts are not re-evaluated when the threshold is updated.

Duration Calculation By Channel

| Channel | Duration Measured As |
| --- | --- |
| Voice | Full call duration, including hold time. |
| Chat | Time between the first and last message timestamps. |
| Quality AI Express (FTP) | Based on the start_time and end_time fields. |
The system can exclude a contact for one evaluation form while evaluating it for another when thresholds differ.
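The per-channel duration rules and the three classifications can be sketched together. The contact field names (`channel`, `call_seconds`, `messages`, `start_time`, `end_time`) are assumptions for illustration, not the actual ingestion schema.

```python
def classify_by_duration(contact, threshold_seconds):
    """Classify a contact before metric scoring, per the documented rules.

    Returns "Evaluated", "Below Threshold", or "Excluded (duration unavailable)".
    """
    channel = contact.get("channel")
    if channel == "Voice":
        duration = contact.get("call_seconds")          # full call, including hold time
    elif channel == "Chat":
        msgs = contact.get("messages") or []            # message timestamps in seconds
        duration = (msgs[-1] - msgs[0]) if len(msgs) >= 2 else None
    else:                                               # Quality AI Express (FTP)
        start, end = contact.get("start_time"), contact.get("end_time")
        duration = (end - start) if start is not None and end is not None else None

    if duration is None:
        return "Excluded (duration unavailable)"
    if duration < threshold_seconds:
        return "Below Threshold"
    return "Evaluated"
```

Because the threshold is per form, running this check with two different `threshold_seconds` values shows how the same contact can be excluded for one form and evaluated for another.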

Scoring Type Selection

Scoring type determines how you assign weights to evaluation metrics.

Percentage-Based

Use Percentage-Based to assign weights as percentages (total must equal 100%), recommended for forms with fewer than 20 metrics. Example: A form with Greeting (20%), Verification (30%), and Resolution (50%) totals 100%.

Points-Based

Use Points-Based to assign weights as points: there is no cap on positive points, but total negative points must not exceed total positive points. It is recommended for complex forms with 20+ metrics (ideally 40+) and supports manual evaluation metrics. Example: A critical compliance metric can carry 50 points, while a minor greeting metric can carry 5 points. A failed compliance check can apply negative scoring.

Scoring Formula (Points-Based)

Kore Evaluation Score calculates the weighted impact of met and not-met metrics, subtracts penalties, divides by the total positive weight, and multiplies the result by 100.

Kore Evaluation Score = [(∑(Myi × Wyi) − ∑(Mni × Wni)) / ∑(Wyi)] × 100

Where:
  • Myi: 1 if metric i is adhered to, otherwise 0.
  • Mni: 1 if metric i is not adhered to, otherwise 0.
  • Wyi: Positive weight assigned to metric i.
  • Wni: Negative (penalty) weight assigned to metric i.
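The formula can be implemented directly. This sketch represents each metric as an (adhered, positive_weight, negative_weight) tuple, with penalty weights stored as positive magnitudes; that shape is an assumption for illustration.

```python
def kore_evaluation_score(metrics):
    """Points-based Kore Evaluation Score.

    `metrics` is a list of (adhered: bool, positive_weight, negative_weight)
    tuples. Met metrics add their positive weight, unmet metrics subtract
    their penalty, and the result is normalized by the total positive
    weight and scaled to 0-100.
    """
    met = sum(w_pos for adhered, w_pos, _ in metrics if adhered)
    penalties = sum(w_neg for adhered, _, w_neg in metrics if not adhered)
    total_positive = sum(w_pos for _, w_pos, _ in metrics)
    if total_positive == 0:
        return 0.0                      # avoid division by zero on an empty form
    return (met - penalties) / total_positive * 100
```

For example, with metrics worth 50, 30, and 20 points where the 30-point metric fails with a 10-point penalty, the score is (70 − 10) / 100 × 100 = 60.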

Scoring Logic

  • Pass: Final score ≥ Pass Score threshold.
  • Fail: Final score < Pass Score threshold.
  • Fatal error: Sets the score to 0 and marks the interaction as failed, regardless of other metric scores.
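The three rules above combine into a short decision function. This is an illustrative sketch; the function name and return shape are assumptions.

```python
def evaluation_result(final_score, pass_score, fatal_metric_failed=False):
    """Apply the documented pass/fail rules.

    A failed fatal metric forces the score to 0 and fails the interaction,
    regardless of the other metric results.
    """
    if fatal_metric_failed:
        return 0, "Fail"
    return final_score, ("Pass" if final_score >= pass_score else "Fail")
```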

Scoring Systems Comparison

Quality AI supports two scoring methods: Percentage-Based and Points-Based.
| Feature | Percentage-Based | Points-Based |
| --- | --- | --- |
| Best for | Smaller forms (under ~20 metrics) | Larger forms (20+ metrics) |
| Total weight | Must equal 100% | No fixed maximum |
| Scalability | Limited by 100% cap | High flexibility |
| Weight per metric | Decreases as metrics increase | Assign any point value |
| Weight precision | May require fractional values | Uses whole-number allocation |
| Negative scoring | Supported within the 100% total constraint | Allowed, but can’t exceed total positive weight |
| Final score | Direct percentage (0-100) | Normalized to percentage (0-100) |

Weight Assignment Rules

| Configuration | Percentage-based | Points-based |
| --- | --- | --- |
| If the expected correct response is Yes | Positive % for Yes; zero or negative % for No | Positive points for Yes; zero or negative points for No |
| If the expected correct response is No | Positive % for No; zero or negative % for Yes | Positive points for No; zero or negative points for Yes |
| Validation | Total positive weight must equal 100%; negative weight allowed within the 100% structure | No upper limit on total positive points; total negative points must not exceed total positive points |
| Manual Evaluation Metrics | Not supported | Supported |
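The validation row above can be sketched as a check run before saving a form. The function name and argument shapes are illustrative; negative weights are assumed to be passed as positive magnitudes.

```python
def validate_weights(scoring_type, positive_weights, negative_weights):
    """Check the documented validation rules for each scoring type.

    Percentage-based: total positive weight must equal 100%.
    Points-based: no cap on positive points, but total negative points
    must not exceed total positive points.
    """
    total_pos = sum(positive_weights)
    total_neg = sum(negative_weights)
    if scoring_type == "Percentage":
        return total_pos == 100
    if scoring_type == "Points":
        return total_neg <= total_pos
    raise ValueError(f"Unknown scoring type: {scoring_type}")
```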

Fatal Error Behavior

When a fatal metric fails, the system immediately sets the final score to 0, ignores all other metric results, and marks the interaction as failed.

Form Selection Logic

During evaluation, the system resolves the applicable form using Queue → Channel → Contact Direction, with metric availability and scoring dependent on supported language, channel, and direction. If contact direction metadata is missing, the system defaults to Inbound before form matching. The contact direction must match the configured assignment for the interaction. If multiple forms match the same Queue → Channel → Contact Direction combination, the system applies the most recently versioned active assignment. If no matching form exists, evaluation is skipped without applying a fallback form. Evaluation outcomes are classified as Evaluated or Skipped (no matching form).
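The resolution steps above can be sketched as a lookup function. The assignment dict shape (`queue`, `channel`, `direction`, `version`, `active`, `form`) is an assumption for illustration, not the product's data model.

```python
def select_form(assignments, queue, channel, direction):
    """Resolve the applicable form using Queue -> Channel -> Contact Direction.

    Missing direction metadata defaults to Inbound; an assignment with
    direction "Both" matches either direction; ties resolve to the most
    recently versioned active assignment; no match means evaluation is
    skipped, with no fallback form.
    """
    direction = direction or "Inbound"          # default when metadata is missing
    matches = [
        a for a in assignments
        if a["queue"] == queue and a["channel"] == channel
        and a["direction"] in (direction, "Both") and a.get("active", True)
    ]
    if not matches:
        return None                             # Skipped (no matching form)
    return max(matches, key=lambda a: a["version"])["form"]
```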

Managing Evaluation Forms

This section guides you through editing, updating, and deleting existing evaluation forms.

Edit and Delete Existing Evaluation Forms

Steps to edit or delete an existing evaluation form:
  1. Use the three-dot menu to Edit or Delete the evaluation form.
  2. To edit, update the required details and select Update.
  3. Before deleting, remove linked queue assignments and dependent metrics (if required), and resolve attribute dependencies. If the form is still in use, the system displays a warning.

Warnings and Error Handling

Switching Scoring Types

Changing the scoring type clears all existing metric weights and outcome configurations, requiring full reconfiguration of all metrics.

Language Configuration Warnings

Changes to language settings can affect speech recognition accuracy and metric results.

Unsupported Language Error (Form Level)

This error occurs if a form language is not supported by one or more metrics. Example: A form uses English and Dutch metrics. Adding Hindi triggers a validation error if the metrics do not support Hindi. To resolve this:
  1. Review the language configuration for each metric used in the form.
  2. Update each metric to support the new language (for example, Hindi).
  3. Verify all metrics support the selected language.
  4. Add the language to the form after updating all metrics.

Language Selection Behavior

The system applies an AND condition across all selected languages and displays only By-Question metrics that support all selected languages. For example, a form uses English and Dutch metrics; adding Hindi triggers a validation error if the metrics don’t support Hindi.
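The AND condition is equivalent to a set-containment filter. This is a minimal sketch; the metric-to-languages mapping shape is an assumption for illustration.

```python
def visible_metrics(metrics, form_languages):
    """Return only the metrics that support every language selected on the
    form (the AND condition): the required set must be a subset of each
    metric's supported set."""
    required = set(form_languages)
    return [name for name, supported in metrics.items()
            if required <= set(supported)]
```

With English, Dutch, and Hindi selected, a metric supporting only English and Dutch is filtered out.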

Metric-Level Language Limitation

This warning displays when you add or update a metric that doesn’t support a language already configured in the form. To resolve this:
  1. Update the metric language configuration, or
  2. Select a metric that supports all languages configured in the form.

Channel Mode Change

Switching between Voice and Chat removes unsupported metrics (such as speech-based metrics in Chat). After switching channels:
  1. Update the remaining metrics to support the new channel.
  2. Adjust the corresponding weights for proper evaluation.
  3. Select Update to save the changes.