Evaluation Forms enable QA Managers to create standardized assessments for Voice and Chat interactions. Each form aligns scoring with operational goals and ensures consistent, compliant evaluations across agent queues.
Each queue supports one form per channel. QA Managers can mark metrics for manual scoring (excluding them from scorecards) and set a minimum duration to filter out short or incomplete contacts from AutoQA.
Direction-aware queue assignment allows the same form to be applied to inbound interactions, outbound interactions, or both per queue, ensuring accurate evaluation criteria for each direction.
Key Features
| Feature | Description |
|---|---|
| Multi-language support | Localized metrics for accurate global team assessments. |
| Flexible scoring | Percentage-based (simpler forms) or points-based (complex evaluations). |
| Direction-Aware queue assignment | Assign evaluation forms to queues based on Inbound, Outbound, or Both interaction directions. |
| Advanced scoring options | Highlight critical issues with negative scoring, fatal metrics, and pass-score thresholds. |
| Channel-specific configuration | Separate settings for Voice and Chat. |
| Queue and channel assignment | Assign forms to specific queues and channels. |
| AutoQA and manual audits | Evaluate interactions automatically or manually. |
| Minimum Duration Threshold | Exclude contacts below a configured duration from AutoQA scoring. |
| Versioned Assignments | Apply updates only to future interactions while preserving historical scoring. |
Evaluation Forms and Evaluation Metrics are the two core components:
- Evaluation Forms — define the overall scoring structure: scoring type, language, channel, pass threshold, and queue assignments. Each form's Assignments map queues and conversation sources.
- Evaluation Metrics — define the individual quality parameters used to measure agent performance.
How It Works
QA Managers create evaluation forms with weighted metrics (100% for percentage-based forms or flexible points for points-based forms). The system assigns these forms to specific queues and channels for auditing and AutoQA scoring.
Navigate to Quality AI > CONFIGURE > Evaluation Forms.
The Evaluation Forms page displays the following columns:
| Column | Description |
|---|---|
| Name | Evaluation form name. |
| Description | Short description of the form. |
| Queues | Assigned and unassigned queues. |
| Channel | Channel mode assigned to the form (Voice or Chat). |
| Created By | Form creator. |
| Pass Score | Minimum score for the agent to pass. |
| Status | Enable or disable scoring for a form. |
| Search | Quick search by name. |
Enable Auto QA in Quality AI Settings before creating evaluation forms.
Creating an evaluation form involves three sections: General Settings, Assignments, and Evaluation Metrics.
General Settings
- Select the Evaluation Forms tab.
- Select + New Evaluation Forms.
- Enter a Name and optional Description.
- Select the required Language.
- Select a Channel type: Chat to display only chat metrics (excluding speech and voice-specific Playbook metrics), or Voice to display all applicable voice metrics, including speech and Playbook metrics.
- Select a Scoring Type (Percentage or Points).
- (Optional) Turn on Set the minimum duration required to complete evaluations.
- Enter a threshold value in minutes (MIN) and seconds (SEC).
- Set the minimum Pass Score required for agents.
- Select Next.
Assignments
Assign queues to the evaluation form and define the interaction direction for evaluation.
- Search and select queues.
- Select Add Queues to assign them to the evaluation form.
- Select Conversation Source: Quality AI Express for Express-based processing, CCAI Integration to ingest data from Contact Center AI, or Agent AI Integration to process interactions from Agent AI.
- For each selected queue, use the direction checkboxes to define applicability (Inbound, Outbound, or Both).
- Add or remove queue assignments as needed.
- Select Next.
If you assign CCAI or Agent AI queues together with Quality AI Express queues in the same form, By Playbook and By Dialog metrics become unavailable.
The Assignments step for evaluation forms now exposes Inbound and Outbound direction columns alongside each queue.
| Column | Description |
|---|---|
| Queues | Name of the queue and its source integration (for example, Contact Center AI (CCAI)). |
| Inbound | Checkbox to assign this form to inbound contacts for the queue. |
| Outbound | Checkbox to assign this form to outbound contacts for the queue. |
| Delete | Removes the queue from the assignment. |
Queue Assignment Rules
Assign one form per queue, channel, and contact direction. Only queues you can access are displayed. Use Inbound for incoming interactions, Outbound for campaigns or follow-ups, and Both to apply the same form to both directions.
Evaluation Metrics
Evaluation metrics define the criteria used for audits and AutoQA scoring. Manual metrics are human-scored and assess qualitative aspects such as tone, empathy, and judgment.
- Search and select the required metrics, then add them to the form.
- Select Edit to configure each metric.
- Select the correct Response and Outcome scoring that defines a match for each metric.
- Assign a Weightage value based on the selected scoring type: percentage values for percentage-based forms and points for points-based forms.
- Reorder metrics to control their display sequence in the audit interface.
- Remove metrics that are no longer required.
- (Optional) Turn on Fatal Error for compliance-critical metrics.
- Select Create.
The system automatically calculates total positive and negative scores.
Manual metrics apply only to points-based forms and don’t affect agent attributes or scorecards.
- Forms are selected based on queue, channel, and contact direction.
- If both directions are selected, one form applies to inbound and outbound interactions.
- Assignments are versioned; changes apply only to future evaluations.
- Metric availability depends on language, channel, and direction; only supported metrics are shown.
- Scores update automatically when weights or outcomes change.
- Disabling minimum duration includes all contacts.
- Manual metrics are used for supervisor audits and excluded from automated scoring.
- For CCAI chat queues, Outbound is disabled (not supported).
- If both directions are unchecked, evaluation is skipped for the queue.
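The form-selection rules above can be sketched in code. This is a minimal illustration, not the product's internal API; the `Assignment` structure, field names, and `select_form` function are all assumptions chosen to mirror the rules as stated.

```python
# Hypothetical sketch of form selection by queue, channel, and direction.
# Structure and names are illustrative only, not Quality AI internals.
from dataclasses import dataclass


@dataclass
class Assignment:
    queue: str
    channel: str      # "voice" or "chat"
    inbound: bool     # Inbound checkbox on the Assignments step
    outbound: bool    # Outbound checkbox on the Assignments step
    form_id: str


def select_form(assignments, queue, channel, direction):
    """Return the form for a contact, or None if evaluation is skipped."""
    for a in assignments:
        if a.queue != queue or a.channel != channel:
            continue
        # If both direction checkboxes are unchecked, evaluation is skipped.
        if direction == "inbound" and a.inbound:
            return a.form_id
        if direction == "outbound" and a.outbound:
            return a.form_id
    return None


assignments = [
    Assignment("sales", "voice", inbound=True, outbound=True, form_id="form-A"),
    Assignment("support", "chat", inbound=True, outbound=False, form_id="form-B"),
]
print(select_form(assignments, "sales", "voice", "outbound"))   # form-A
print(select_form(assignments, "support", "chat", "outbound"))  # None (skipped)
```

Selecting Both on a queue simply checks both direction boxes, so one form covers inbound and outbound contacts for that queue and channel.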
Metric Card Configuration
Trigger Scoring Disabled
When trigger scoring is off, the metric card displays the following controls:
- Weightage: Enter a numeric percentage for the metric's contribution to the form's total score.
- Fatal Error Toggle: Marks the outcome as a fatal error, which fails the entire evaluation if the metric is not met.
Trigger Scoring Enabled
When trigger scoring is on, the metric card expands to show outcome-level sub-weight controls.
- Scoring Rows with Outcome-level Weightage: Displays Yes and No rows, each with a Weightage input field.
- Correct Response: Available on specific outcome rows to mark the expected correct response.
- Fatal Error Toggle: Marks a non-adherent outcome as a fatal error.
You must configure negative scoring at the outcome level.
Outcome Configuration
For each metric, define the outcomes (for example, Yes or No) and assign a positive, zero, or negative weight based on the expected response. A matching response receives positive weight. A non-matching response receives zero or negative weight (if configured).
How Minimum Duration Threshold Works
Before scoring begins, the evaluation engine checks the contact duration against the threshold configured on the evaluation form or scorecard, depending on the active evaluation workflow. The result is assigned as follows:
| Contact Duration Status | Assigned Result | Notes |
|---|---|---|
| Meets or exceeds threshold | — | Evaluated normally. |
| Falls below threshold | Below Threshold | Excluded from scoring and quality metrics. |
| Duration cannot be resolved | Duration unavailable | Excluded from evaluation. |
Duration Calculation By Channel
| Channel | Duration Measured As |
|---|---|
| Voice | Full call duration, including hold time. |
| Chat | Time between the first and last message timestamps. |
| Quality AI Express (FTP) | Based on the start_time and end_time fields. |
The system excludes a contact for one scorecard but evaluates it for another when thresholds differ. If you disable this setting, the system evaluates all contacts, and supervisors manually evaluate any excluded contacts.
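The per-channel duration rules and threshold check above can be summarized as a short sketch. The function names and contact-record fields here are assumptions for illustration (only `start_time` and `end_time` come from the Quality AI Express table); this is not product code.

```python
# Illustrative sketch of the minimum-duration check, per channel.
# Field names other than start_time/end_time are assumptions.
def contact_duration_seconds(contact):
    """Compute contact duration per channel, or None if unresolvable."""
    channel = contact.get("channel")
    if channel == "voice":
        # Full call duration, including hold time.
        return contact.get("call_duration")
    if channel == "chat":
        # Time between the first and last message timestamps.
        msgs = contact.get("message_timestamps") or []
        if len(msgs) < 2:
            return None
        return max(msgs) - min(msgs)
    if channel == "express":
        # Quality AI Express (FTP): based on start_time and end_time fields.
        start, end = contact.get("start_time"), contact.get("end_time")
        if start is None or end is None:
            return None
        return end - start
    return None


def duration_status(contact, threshold_seconds):
    duration = contact_duration_seconds(contact)
    if duration is None:
        return "Duration unavailable"   # excluded from evaluation
    if duration < threshold_seconds:
        return "Below Threshold"        # excluded from scoring and metrics
    return "Evaluated"


print(duration_status({"channel": "voice", "call_duration": 45}, 60))
# Below Threshold
```

Because the threshold lives on each form or scorecard, the same contact can be excluded for one scorecard and evaluated for another, as noted above.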
Scoring Type Selection
Scoring type determines how you assign weights to evaluation metrics.
Percentage-Based
Use Percentage-Based to assign weights as percentages (total must equal 100%), recommended for forms with fewer than 20 metrics.
Points-Based
Use Points-Based to assign weights as points (no cap on positive points, but total negative points must not exceed total positive points). Recommended for complex forms with 20+ metrics (ideally 40+); this type also supports manual evaluation metrics.
Kore Evaluation Score = [(∑(Myi × Wyi) − ∑(Mni × Wni)) / ∑(Wyi)] × 100
Where:
- Myi, Wyi = Adhered metrics and positive points
- Mni, Wni = Non-adhered metrics and negative points
Scoring Logic
- Pass: Final score ≥ Pass Score threshold.
- Fail: Final score < Pass Score threshold.
- Fatal error: Sets the score to 0 and marks the interaction as failed, regardless of other metric scores.
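The formula and pass/fail/fatal rules above can be worked through in a small sketch. The metric-tuple representation and function name are assumptions for illustration, not Quality AI's implementation.

```python
# Minimal sketch of the Kore Evaluation Score and scoring logic.
# Each metric is (adhered, positive_weight, negative_weight, fatal);
# this representation is an assumption chosen for illustration.
def kore_evaluation_score(metrics, pass_score):
    """Return (score, result) per the formula and fatal-error rule."""
    # A failed fatal metric sets the score to 0 and fails the interaction,
    # regardless of other metric scores.
    if any(fatal and not adhered for adhered, _, _, fatal in metrics):
        return 0.0, "Fail"
    positive = sum(w_pos for adhered, w_pos, _, _ in metrics if adhered)       # ∑(Myi × Wyi)
    negative = sum(w_neg for adhered, _, w_neg, _ in metrics if not adhered)   # ∑(Mni × Wni)
    total_positive = sum(w_pos for _, w_pos, _, _ in metrics)                  # ∑(Wyi)
    score = (positive - negative) / total_positive * 100
    return score, ("Pass" if score >= pass_score else "Fail")


metrics = [
    (True, 40, 0, False),   # adhered, worth 40 positive points
    (True, 40, 0, False),   # adhered, worth 40 positive points
    (False, 20, 5, False),  # missed, costs 5 negative points
]
score, result = kore_evaluation_score(metrics, pass_score=70)
print(score, result)  # 75.0 Pass
```

Here the agent earns 80 of 100 positive points and loses 5 negative points, so the normalized score is (80 − 5) / 100 × 100 = 75, which meets the 70 pass threshold.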
Scoring Systems Comparison
Quality AI supports two scoring methods: Percentage-Based and Points-Based.
| Feature | Percentage-Based | Points-Based |
|---|---|---|
| Best for | Smaller forms (under ~20 metrics) | Larger forms (20+ metrics) |
| Total weight | Must equal 100% | No fixed maximum |
| Scalability | Limited by 100% cap | High flexibility |
| Weight per metric | Decreases as metrics increase | Assign any point value |
| Weight precision | May require fractional values | Uses whole-number allocation |
| Negative scoring | Managed within 100% | Allowed, can’t exceed total positive |
| Final score | Direct percentage (0-100) | Normalized to percentage (0-100) |
Weight Assignment Rules By Scoring Type
| Configuration | Percentage-based | Points-based |
|---|---|---|
| If Correct Response = Yes | Positive % for Yes; zero or negative % for No | Positive points for Yes; zero or negative points for No |
| If Correct Response = No | Positive % for No; zero or negative % for Yes | Positive points for No; zero or negative points for Yes |
| Validation | Total positive weight must equal 100%; negative weight allowed within the 100% structure | No upper limit on total positive points; total negative points must not exceed total positive points |
| Manual Evaluation Metrics | Not supported | Supported |
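The validation rules in the table above can be expressed as a short sketch. The function names and error messages are illustrative assumptions, not product APIs.

```python
# Hedged sketch of the weight-validation rules by scoring type.
def validate_percentage_form(positive_weights, negative_weights):
    """Percentage-based: total positive weight must equal 100%."""
    errors = []
    if abs(sum(positive_weights) - 100) > 1e-9:
        errors.append("Total positive weight must equal 100%.")
    # Negative weights are allowed within the 100% structure.
    return errors


def validate_points_form(positive_points, negative_points):
    """Points-based: no cap on positives; negatives must not exceed them."""
    errors = []
    if sum(negative_points) > sum(positive_points):
        errors.append("Total negative points must not exceed total positive points.")
    return errors


print(validate_percentage_form([50, 30, 20], [10]))  # []
print(validate_points_form([10, 20], [40]))
# ['Total negative points must not exceed total positive points.']
```

Running these checks before saving a form mirrors the validation the system applies when you select Create or Update.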
Fatal Error Behavior
Fatal error configuration remains the same for both scoring types. When a fatal metric fails, the system sets the final score to 0, ignores all other metric scores, and marks the interaction as failed.
To edit or delete an existing evaluation form:
- Use the three-dot menu to Edit or Delete the evaluation form and update the required details.
- Before deleting an evaluation form, remove linked queue assignments, dependent metrics (if required), and resolve attribute dependencies. If the form is still in use, the system displays a warning.
- Select Update.
Warnings and Error Handling
Switching Scoring Types
Changing the scoring type clears all existing metric weights and requires you to reconfigure all metrics after switching. Confirm the warning prompt before proceeding. Make sure the percentage-based totals equal 100% and points-based values meet validation rules.
Language Configuration Warnings
Changes to language settings can affect speech recognition accuracy and metric results.
A language-support error occurs when you add a new language to a form but one or more associated metrics do not support it. The system blocks the update until all metrics support the selected language.
Example: Adding Hindi to a form configured with metrics that support only English and Dutch triggers this error.
To resolve this:
- Review the language configuration for each metric used in the form.
- Update each metric to support the new language (for example, Hindi).
- Verify that all required metrics support the language.
- Add the language to the form after updating all metrics.
Language Selection Behavior
Evaluation forms support multi-language selection. The system applies an AND condition across all selected languages.
The system displays only By-Question metrics configured for all selected languages.
For example, if you select English and Dutch, the system shows only metrics available in both languages.
Metric-Level Language Limitation
This warning appears when you add or update a metric that doesn’t support a language already configured on the form.
To resolve this:
- Configure the required language in the metric, or
- Select a metric that already supports all languages configured on the form.
Channel Mode Change
When you switch the channel between Voice and Chat, a warning appears. The system automatically deletes speech-based metrics when you switch channels.
To resolve a channel mode change:
- Update the remaining metrics to support the new channel.
- Adjust the corresponding weights for proper evaluation.
- Select Update to save the changes.