Evaluation Forms enable QA Managers to create standardized assessments for Voice and Chat interactions. Each form aligns scoring with operational goals and ensures consistent, compliant evaluations across agent queues; each queue supports one form per channel. QA Managers can mark metrics for manual scoring (excluding them from scorecards) and set a minimum duration that filters short or incomplete contacts out of AutoQA. Direction-aware queue assignment lets you apply the same form to inbound interactions, outbound interactions, or both for each queue, ensuring accurate evaluation criteria.

Key Features

| Feature | Description |
|---|---|
| Multi-language support | Localized metrics for accurate global team assessments. |
| Flexible scoring | Percentage-based (simpler forms) or points-based (complex evaluations). |
| Direction-aware queue assignment | Assign evaluation forms to queues based on Inbound, Outbound, or Both interaction directions. |
| Advanced scoring options | Highlight critical issues with negative scoring, fatal metrics, and pass-score thresholds. |
| Channel-specific configuration | Separate settings for Voice and Chat. |
| Queue and channel assignment | Assign forms to specific queues and channels. |
| AutoQA and manual audits | Evaluate interactions automatically or manually. |
| Minimum duration threshold | Exclude contacts below a configured duration from AutoQA scoring. |
| Versioned assignments | Apply updates only to future interactions while preserving historical scoring. |

Evaluation Forms Structure

Evaluation forms have three core components:
  • Evaluation Forms — define the overall scoring structure: scoring type, language, channel, pass threshold, and queue assignments.
  • Assignments — map the form to queues and conversation sources.
  • Evaluation Metrics — define the individual quality parameters used to measure agent performance.

How It Works

QA Managers create evaluation forms with weighted metrics (100% for percentage-based forms or flexible points for points-based forms). The system assigns these forms to specific queues and channels for auditing and AutoQA scoring.

Access Evaluation Forms

Navigate to Quality AI > CONFIGURE > Evaluation Forms.

Evaluation Forms Elements

The Evaluation Forms page displays the following elements:

| Column | Description |
|---|---|
| Name | Evaluation form name. |
| Description | Short description of the form. |
| Queues | Assigned and unassigned queues. |
| Channel | Channel mode assigned to the form (Voice or Chat). |
| Created By | Form creator. |
| Pass Score | Minimum score an agent needs to pass. |
| Status | Enable or disable scoring for the form. |
| Search | Quick search by name. |
Enable Auto QA in Quality AI Settings before creating evaluation forms.

Create a New Evaluation Form

Creating an evaluation form involves three sections: General Settings, Assignments, and Evaluation Metrics.

General Settings

  1. Select the Evaluation Forms tab.
  2. Select + New Evaluation Forms.
  3. Enter a Name and optional Description.
  4. Select the required Language.
  5. Select a Channel type: Chat to display only chat metrics (excluding speech and voice-specific Playbook metrics), or Voice to display all applicable voice metrics, including speech and Playbook metrics.
  6. Select a Scoring Type (Percentage or Points).
  7. (Optional) Turn on Set the minimum duration required to complete evaluations.
  8. Enter a threshold value in minutes (MIN) and seconds (SEC).
  9. Set the minimum Pass Score required for agents.
  10. Select Next.

Assignments

Assign queues to the evaluation form and define the interaction direction for evaluation.
  1. Search and select queues.
  2. Select Add Queues to assign them to the evaluation form.
  3. Select Conversation Source: Quality AI Express for Express-based processing, CCAI Integration to ingest data from Contact Center AI, or Agent AI Integration to process interactions from Agent AI.
  4. For each selected queue, use the direction checkboxes to define applicability (Inbound, Outbound, or Both).
  5. Add or remove queue assignments as needed.
  6. Select Next.
If you assign CCAI or Agent AI queues together with Quality AI Express queues in the same form, the By Playbook and By Dialog metrics become unavailable.

Queue Assignments with Contact Direction

The Assignments step for evaluation forms exposes Inbound and Outbound direction columns alongside each queue.
| Column | Description |
|---|---|
| Queues | Name of the queue and its source integration (for example, Contact Center AI (CCAI)). |
| Inbound | Checkbox that assigns this form to inbound contacts for the queue. |
| Outbound | Checkbox that assigns this form to outbound contacts for the queue. |
| Delete | Removes the queue from the assignment. |

Queue Assignment Rules

The system assigns one form per queue, channel, and contact direction, and shows only the queues you can access. Use Inbound for incoming interactions, Outbound for campaigns or follow-ups, and Both to apply the same form to both directions.

Evaluation Metrics

Evaluation metrics define the criteria used for audits and AutoQA scoring. Manual metrics are human-scored and assess qualitative aspects such as tone, empathy, and judgment.

Add and Configure Metrics

  1. Search and select the required metrics, then add them to the form.
  2. Select Edit to configure each metric.
  3. Choose the correct Response and the Outcome scoring that defines a match for the metric.
  4. Assign a Weightage value based on the selected scoring type: percentage values for percentage-based forms, points for points-based forms.
  5. Reorder metrics to control their display sequence in the audit interface.
  6. Remove metrics that are no longer required.
  7. (Optional) Turn on Fatal Error for compliance-critical metrics.
  8. Select Create.
The system automatically calculates total positive and negative scores.
Manual metrics apply only to points-based forms and don’t affect agent attributes or scorecards.

Evaluation Form Assignment and Scoring Behavior

  • Forms are selected based on queue, channel, and contact direction.
  • If both directions are selected, one form applies to inbound and outbound interactions.
  • Assignments are versioned; changes apply only to future evaluations.
  • Metric availability depends on language, channel, and direction; only supported metrics are shown.
  • Scores update automatically when weights or outcomes change.
  • Disabling the minimum duration threshold includes all contacts in evaluation.
  • Manual metrics are used for supervisor audits and excluded from automated scoring.
  • For CCAI chat queues, Outbound is disabled (not supported).
  • If both directions are unchecked, evaluation is skipped for the queue.
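The selection rules above can be sketched as a small lookup. This is a hypothetical, in-memory model; the `Assignment` record and `select_form` function are illustrative, not the Quality AI API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Assignment:
    form_id: str
    queue: str
    channel: str   # "voice" or "chat"
    inbound: bool
    outbound: bool

def select_form(assignments, queue, channel, direction) -> Optional[str]:
    """Return the form for a contact, or None when evaluation is skipped."""
    for a in assignments:
        if a.queue != queue or a.channel != channel:
            continue
        # A form applies only if its direction checkbox matches the contact.
        if (direction == "inbound" and a.inbound) or (
            direction == "outbound" and a.outbound
        ):
            return a.form_id
    return None  # both directions unchecked, or no assignment: skip evaluation

# Example assignments (illustrative queue names).
assignments = [
    Assignment("form-a", "sales", "voice", inbound=True, outbound=True),
    Assignment("form-b", "support", "chat", inbound=True, outbound=False),
]
```

With "Both" checked, `form-a` is returned for inbound and outbound sales calls; an outbound chat on `support` returns `None` and the contact is skipped.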

Metric Card Configuration

Trigger Scoring Disabled

When trigger scoring is off, the metric card displays the following controls:
  • Weightage: Enter a numeric percentage for the metric’s contribution to the form’s total score.
  • Fatal Error Toggle: Marks the outcome as a fatal error, which fails the entire evaluation if the metric is not met.

Trigger Scoring Enabled

When trigger scoring is on, the metric card expands to show outcome-level sub-weight controls.
  • Scoring Rows with Outcome-level Weightage: Displays Yes and No rows, each with a Weightage input field.
  • Correct Response: Available on specific outcome rows to mark the expected correct response.
  • Fatal Error Toggle: Marks a non-adherent outcome as a fatal error.
You must configure negative scoring at the outcome level.

Outcome Configuration

For each metric, define the outcomes (for example, Yes or No) and assign a positive, zero, or negative weight based on the expected response. A matching response receives positive weight. A non-matching response receives zero or negative weight (if configured).
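The outcome weighting described above can be summarized in a few lines. This is an illustrative sketch; the function name and signature are assumptions, not the product's API:

```python
def outcome_weight(response, correct_response, positive_weight, negative_weight=0):
    """Return the weight a response earns for one metric outcome."""
    if response == correct_response:
        return positive_weight   # matching response: positive weight
    return -negative_weight      # non-matching: zero or negative weight
```

For a metric whose correct response is Yes with weight 10 and negative weight 5, a Yes earns 10 and a No earns -5 (or 0 when no negative weight is configured).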

How Minimum Duration Threshold Works

Before scoring begins, the evaluation engine checks the contact duration against the threshold configured on the evaluation form or scorecard, depending on the active evaluation workflow.

Contact Duration

The system evaluates contact duration before scoring.
| Contact Duration Status | Assigned Result | Notes |
|---|---|---|
| Meets or exceeds threshold | — | Evaluated normally. |
| Falls below threshold | Below Threshold | Excluded from scoring and quality metrics. |
| Duration unresolved | Duration unavailable | Excluded from evaluation. |

Duration Calculation By Channel

| Channel | Duration Measured As |
|---|---|
| Voice | Full call duration, including hold time. |
| Chat | Time between the first and last message timestamps. |
| Quality AI Express (FTP) | Based on the start_time and end_time fields. |
A contact can be excluded for one scorecard but evaluated for another when their thresholds differ. If you disable this setting, the system evaluates all contacts; supervisors can still manually evaluate any excluded contacts.
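The gate above can be sketched as a single classification step applied before scoring. This is a minimal, assumed model (names are illustrative) in which duration resolution per channel happens elsewhere:

```python
def duration_status(duration_seconds, threshold_seconds, enabled=True):
    """Classify a contact against the minimum duration threshold."""
    if not enabled:
        return "evaluated"             # disabling the setting includes all contacts
    if duration_seconds is None:
        return "duration_unavailable"  # unresolved duration: excluded from evaluation
    if duration_seconds < threshold_seconds:
        return "below_threshold"       # excluded from scoring and quality metrics
    return "evaluated"                 # meets or exceeds threshold
```

Because the threshold lives on the form or scorecard, the same contact can classify differently against two scorecards with different thresholds.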

Scoring Type Selection

Scoring type determines how you assign weights to evaluation metrics.

Percentage-Based

Use Percentage-Based to assign weights as percentages; the total must equal 100%. This type is recommended for forms with fewer than 20 metrics.

Points-Based

Use Points-Based to assign weights as points; there is no cap on positive points, but total negative points must not exceed total positive points. This type is recommended for complex forms with 20 or more metrics (ideally 40+) and supports manual evaluation metrics.

Scoring Formula (Points-Based)

Kore Evaluation Score = [(∑(Myi × Wyi) − ∑(Mni × Wni)) / ∑(Wyi)] × 100

Where:
  • Myi, Wyi = adhered metrics and their positive points
  • Mni, Wni = non-adhered metrics and their negative points
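The formula can be worked through with a short function. This is an illustrative sketch, assuming each metric is a tuple of (adhered, positive points, negative points); the representation is an assumption, not the product's data model:

```python
def points_score(metrics):
    """Points-based score: metrics is a list of (adhered, positive_pts, negative_pts)."""
    total_positive = sum(wy for _, wy, _ in metrics)            # ∑(Wyi)
    earned = sum(wy for adhered, wy, _ in metrics if adhered)   # ∑(Myi × Wyi)
    penalty = sum(wn for adhered, _, wn in metrics if not adhered)  # ∑(Mni × Wni)
    return (earned - penalty) / total_positive * 100
```

For example, with metrics worth 10, 20, and 10 positive points where the third (carrying 5 negative points) is not adhered: (30 − 5) / 40 × 100 = 62.5.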

Scoring Logic

  • Pass: final score ≥ Pass Score threshold.
  • Fail: final score < Pass Score threshold.
  • Fatal error: sets the score to 0 and marks the interaction as failed, regardless of other metric scores.
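The pass/fail and fatal-error rules above combine into one final step. This sketch assumes a raw score has already been computed and that each metric reports whether it adhered and whether it is fatal; names are illustrative:

```python
def final_score(raw_score, pass_score, metrics):
    """metrics: list of (adhered, fatal). Returns (score, result)."""
    # A failed fatal metric zeroes the score and fails the interaction outright.
    if any(fatal and not adhered for adhered, fatal in metrics):
        return 0, "failed"
    return raw_score, ("passed" if raw_score >= pass_score else "failed")
```

Note that the fatal check runs before the threshold comparison, so even a high raw score is discarded when a fatal metric fails.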

Scoring Systems Comparison

Quality AI supports two scoring methods: Percentage-Based and Points-Based.
| Feature | Percentage-Based | Points-Based |
|---|---|---|
| Best for | Smaller forms (under ~20 metrics) | Larger forms (20+ metrics) |
| Total weight | Must equal 100% | No fixed maximum |
| Scalability | Limited by the 100% cap | High flexibility |
| Weight per metric | Decreases as metrics increase | Any point value |
| Weight precision | May require fractional values | Whole-number allocation |
| Negative scoring | Managed within the 100% total | Allowed; can't exceed total positive points |
| Final score | Direct percentage (0-100) | Normalized to a percentage (0-100) |

Weight Assignment Rules By Scoring Type

| Configuration | Percentage-based | Points-based |
|---|---|---|
| Correct Response = Yes | Positive % for Yes; zero or negative % for No | Positive points for Yes; zero or negative points for No |
| Correct Response = No | Positive % for No; zero or negative % for Yes | Positive points for No; zero or negative points for Yes |
| Validation | Total positive weight must equal 100%; negative weight allowed within the 100% structure | No upper limit on total positive points; total negative points must not exceed total positive points |
| Manual Evaluation Metrics | Not supported | Supported |
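The validation rules in the table can be expressed as a short check. This is a hedged sketch, assuming weights are collected as plain lists; the function and error messages are illustrative:

```python
def validate_weights(scoring_type, positive_weights, negative_weights):
    """Raise ValueError if the weight configuration violates the rules."""
    total_pos = sum(positive_weights)
    total_neg = sum(negative_weights)
    if scoring_type == "percentage":
        if total_pos != 100:
            raise ValueError("total positive weight must equal 100%")
    elif scoring_type == "points":
        # No upper limit on positive points, but negatives may not exceed them.
        if total_neg > total_pos:
            raise ValueError("negative points must not exceed positive points")
    else:
        raise ValueError(f"unknown scoring type: {scoring_type}")
    return True
```

Running the check when weights change mirrors the warning behavior described later for switching scoring types.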

Fatal Error Behavior

Fatal error configuration remains the same for both scoring types. When a fatal metric fails, the system sets the final score to 0, ignores all other metric scores, and marks the interaction as failed.

Managing Evaluation Forms

This section guides you through editing and updating the existing evaluation forms.

Edit and Delete Existing Evaluation Forms

Steps to edit or delete an existing evaluation form:
  1. Use the three-dot menu to Edit or Delete the evaluation form.
  2. To edit, update the required details and select Update.
  3. Before deleting a form, remove linked queue assignments and dependent metrics (if required), and resolve attribute dependencies. If the form is still in use, the system displays a warning.

Warnings and Error Handling

Switching Scoring Types

Changing the scoring type clears all existing metric weights and requires you to reconfigure all metrics after switching. Confirm the warning prompt before proceeding. Make sure the percentage-based totals equal 100% and points-based values meet validation rules.

Language Configuration Warnings

Changes to language settings can affect speech recognition accuracy and metric results.

Unsupported Language Error (Form Level)

This error occurs when you add a new language to a form, but one or more associated metrics don't support it. The system blocks the update until all metrics support the selected language. For example, adding Hindi to a form whose metrics support only English and Dutch triggers this error. To resolve this error:
  1. Review the language configuration for each metric used in the form.
  2. Update each metric to support the new language (for example, Hindi).
  3. Verify that all required metrics support the language.
  4. Add the language to the form after updating all metrics.

Language Selection Behavior

Evaluation forms support multi-language selection. The system applies an AND condition across all selected languages. The system displays only By-Question metrics configured for all selected languages. For example, if you select English and Dutch, the system shows only metrics available in both languages.

Metric-Level Language Limitation

This warning appears when you add or update a metric that doesn't support a language already configured on the form. To resolve this warning:
  1. Configure the required language in the metric, or
  2. Select a metric that already supports all languages configured on the form.

Channel Mode Change

When you switch the channel between Voice and Chat, a warning appears, and the system automatically deletes speech-based metrics. To complete the channel change:
  1. Update the remaining metrics to support the new channel.
  2. Adjust the corresponding weights for proper evaluation.
  3. Select Update to save the changes.