This document provides information about the latest feature updates and enhancements introduced in Quality AI of AI for Service (XO) v11.x releases. For previous updates, see the 2025 release notes.
v11.24.1 May 09, 2026
This update includes enhancements and bug fixes. The key enhancements included in this release are summarized below.

Analyze
Cross-Queue Data Access and Conversation Download Permissions
The Cross-Queue Data Access permission enables users to access analytics and evaluation data across all queues without explicit assignment. The Download Conversations permission controls which users can download recordings, ensuring secure and governed data access. Learn more→

v11.24.0 April 25, 2026
Analyze
AI Justification Enhancements
AI justifications are now clearer and more context-aware, helping users better understand evaluation outcomes. For Not Applicable cases, the system explains when no trigger is detected. For Not Adhered cases, timestamps are shown only for violation scenarios where a specific interaction causes the issue (for example, when an agent uses rude language and violates professionalism). Learn more→

Metric-Level Insights in Agent Dashboard
Metric-level insights now include a toggle to switch between Attribute View (default) and Metric View. Attribute View shows top and bottom attributes with drill-down metric details, while Metric View highlights the top five metrics by adherence with associated attributes for context. This improves visibility, analysis, and decision-making in coaching. Learn more→

Configure
Custom Metadata Mapping from AI Agent to Agent Platform
AI Agent metrics now send custom metadata to the Agentic App via Quality AI. Users can configure additional fields, in addition to the default conversation ID, to include in the execute API payload. A new setup option lets users map custom field registry values to API headers, enabling the structured transfer of conversation-specific data for downstream use. Learn more→

Coach
Group Coaching Assignments
QA Managers can now create group coaching assignments to address common performance gaps across multiple agents in one workflow. Shared coaching details are defined once, and interactions are assigned in bulk, while each agent receives an individual assignment. The Coaching Monitor provides a group-level view to track progress and the impact of coaching across agents. Learn more→

v11.23.1 April 11, 2026
This update includes enhancements and bug fixes. The key enhancements included in this release are summarized below.

Analyze
Custom Fields Support in Conversation Mining, Audit, and Reports
The platform now retains business-specific custom fields ingested via Express File (CSV columns or push API fields) and Agent AI integrations (custom data key-value pairs). Supervisors, QA teams, and API consumers can filter, analyze, and export conversations based on these fields, with no prior configuration required. Learn more→

Configure
Direction-Based Evaluation and Reporting for AutoQA and Conversation Intelligence
Quality AI now supports contact direction (Inbound and Outbound) as a dimension across AutoQA, Conversation Intelligence, dashboards, reports, APIs, and Conversation Mining. Supervisors can configure separate evaluation forms and scorecards by direction and channel at the queue level, ensuring conversations are assessed against criteria that match their operational context. Learn more→

v11.23.0 March 28, 2026
This update includes enhancements and bug fixes. The key enhancements included in this release are summarized below.

Analyze and Configure
Minimum Duration Threshold for AutoQA and Scorecards
You can now set a minimum interaction duration threshold in evaluation forms and agent scorecards to ensure that AutoQA evaluates only meaningful conversations. Before scoring, the system checks each interaction’s duration and excludes short or incomplete interactions from quality metrics. Contacts excluded from scoring and quality calculations remain visible. Learn more→

v11.22.1 March 14, 2026
This update includes enhancements and bug fixes. The key enhancements included in this release are summarized below.

Role Management
Update of Quality AI Permissions for Default Roles
App Developers and App Testers can now access Quality AI through their default roles, without needing custom roles. This update revises default role permissions to give developers and testers appropriate access while maintaining the right level of control. Learn more→

Configure
Manual Evaluation Metric
A new Manual Evaluation metric type is now available for QA-only assessment of complex and nuanced scenarios. This metric is supported only in points-based evaluation forms and is excluded from AutoQA, Agent Attributes, and Agent Scorecards. Manual metrics are clearly labeled in reports and APIs, and unaudited conversations show no AutoQA response for these metrics.

Dynamic By Question (Speaker-Based Adherence)
The Dynamic By Question metric now supports speaker-based answer adherence. Admins can configure the answer detection speaker (Agent or Customer) based on trigger rules. When the trigger speaker is an Agent, an optional scoring setting enables sub-weightages and partial scoring for both trigger and answer adherence, and auditors can manually evaluate both in the Audit Screen. Conversation Mining now includes a Not Applicable filter for Dynamic metrics, and reporting and heatmap logic treat trigger absence as Not Adhered when the scoring option is enabled.

v11.22.0 February 28, 2026
This update includes enhancements and bug fixes. The key enhancements included in this release are summarized below.

Configure
Configurable Crosstalk Evaluation for By Speech Metrics
The Crosstalk metric now detects simultaneous speech between the agent and customer, including customer interruptions. The Dynamic By Question metric supports speaker selection and sub-weight assignment for agent-triggered adherence, and answer detection can now be extended beyond the agent, enabling use cases such as customer confirmation and verification.

Points-Based Scoring for Complex Evaluation Forms
Evaluation forms can now use points-based scoring, making it easier to build and manage complex forms with more than 20 metrics. QAs can assign weights by points rather than percentages, and all points-based forms include audit tracking for score changes and a full record of updates.

GenAI Logs Enhancement in Audit Screen
The Audit Screen now displays detailed GenAI call logs at the conversation level for easier debugging. Logs are organized by GenAI feature in expandable dropdowns that show only enabled features, and can be filtered by Success or Failure status. Each log entry includes date and time, GenAI feature name, language, model name, integration type, prompt name, token usage, response duration, and full request and response payloads.

v11.21.1 January 31, 2026
This update includes enhancements and bug fixes. The key enhancements included in this release are summarized below.

Analytics
AI Justifications for Gen AI Question Metrics Extended to Reports and APIs
AI Justifications for Gen AI by Question metrics are now available in the Interaction Evaluations and Conversation Analytics reports, and through APIs. When enabled, the system provides AI-generated explanations for each evaluation score, extending this capability beyond the existing UI.

Configure
Enhanced Taxonomy Builder, Topic Discovery, and Resolution Detection
Taxonomy Builder and Topic Discovery now offer improved usability with clearer visual hierarchy, contextual tooltips, sentiment and resolution-based bubble coloring, and enhanced filtering and navigation. Resolution Detection is now configurable at the app level. You can choose between topic-based detection for strict matching or LLM-based assessment for holistic evaluation. This flexibility helps you accurately classify interaction outcomes based on whether primary issues are resolved.

Analyze
Conversation Intelligence Dashboard Updates
The Conversation Intelligence dashboard is now split into two specialized dashboards:
- CX Insights introduces new widgets to help you understand customer experience, including Resolution Rate tracking and CSAT and DSAT Drivers powered by driver-impact scoring with detailed warnings.
- Performance Insights enhances agent monitoring with a trendline for the Kore Evaluation Score.
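As a rough illustration of the metrics behind these widgets, the sketch below computes a resolution rate and a naive lift-style driver-impact score. This is not the platform's actual formula, and the field names (`resolved`, `dsat`, `drivers`) are hypothetical:

```python
# Hypothetical sketch: Resolution Rate and a simple driver-impact score.
# Field names ("resolved", "dsat", "drivers") are illustrative only, not
# the platform's actual schema or scoring method.

def resolution_rate(conversations):
    """Share of conversations whose primary issue was resolved."""
    if not conversations:
        return 0.0
    resolved = sum(1 for c in conversations if c["resolved"])
    return resolved / len(conversations)

def driver_impact(conversations, driver):
    """How much more often DSAT occurs when a driver (e.g. a topic)
    is present versus absent: a naive lift-style impact score."""
    with_driver = [c for c in conversations if driver in c["drivers"]]
    without = [c for c in conversations if driver not in c["drivers"]]

    def dsat_rate(group):
        return sum(1 for c in group if c["dsat"]) / len(group) if group else 0.0

    return dsat_rate(with_driver) - dsat_rate(without)

convs = [
    {"resolved": True,  "dsat": False, "drivers": {"billing"}},
    {"resolved": False, "dsat": True,  "drivers": {"billing"}},
    {"resolved": True,  "dsat": False, "drivers": {"shipping"}},
    {"resolved": False, "dsat": True,  "drivers": {"billing"}},
]
print(resolution_rate(convs))           # 0.5
print(driver_impact(convs, "billing"))  # positive: billing correlates with DSAT
```

A positive impact score flags a driver whose presence coincides with elevated DSAT; a real implementation would also weigh sample size, which is presumably what the dashboard's "detailed warnings" address.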
v11.21.0 January 17, 2026
This update includes enhancements and bug fixes. The key enhancements included in this release are summarized below.

Analytics
Evaluation Form Summary Report Enhancements
The Evaluation Form Summary report now includes a Total Interactions column to show the overall interaction count. The Total Applicable Interactions column excludes inapplicable interactions for Dynamic By Question metrics where the trigger was absent. These enhancements improve calculation accuracy and ensure that totals and percentages align with Heatmap data.

Analyze
Audit Allocation Enhancements
Audit Allocation management now uses a dedicated “Allocations” menu and a “My Allocations” tab. QA managers can track auditor progress, create and edit custom allocations, assign interactions to agents by percentage or count, and reassign pending interactions to manage availability while maintaining consistent quality coverage.

Configure
Agent Queue Management in the Quality AI UI
Users can assign agents to Agent AI and Quality AI Express queues directly in the Quality AI UI, eliminating the need for public APIs. This update supports platform-level users, provides visibility into queue IDs for API use, and keeps agent mappings entirely separate from Contact Center AI (CCAI) routing and configuration.

SFTP Chat Script Timestamp Enhancements
Quality AI Express now supports offset-based timestamps for chat script ingestion via Secure File Transfer Protocol (SFTP). With this update, users can configure the chat script timestamp format at the app level under Conversation Sources. When users select offset-based timestamps, chat script can include message-level offsets relative to the conversation timeline without start or end time validation during ingestion. The system uses start and end dates solely for reporting and filtering. This change applies only to chat conversation ingestion and doesn’t affect voice conversation ingestion.
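To illustrate the idea of offset-based timestamps, the sketch below reconstructs absolute message times from millisecond offsets relative to the conversation start; the message fields shown are hypothetical and do not reflect the actual SFTP chat script schema:

```python
# Hypothetical sketch of offset-based chat timestamps: each message carries
# an offset (ms) relative to the conversation start, and absolute times are
# derived only for reporting and filtering. Field names are illustrative.
from datetime import datetime, timedelta, timezone

def resolve_timestamps(conversation_start, messages):
    """Attach an absolute UTC timestamp to each offset-based message."""
    resolved = []
    for msg in messages:
        ts = conversation_start + timedelta(milliseconds=msg["offset_ms"])
        resolved.append({**msg, "timestamp": ts})
    return resolved

start = datetime(2026, 1, 17, 9, 30, 0, tzinfo=timezone.utc)
chat = [
    {"speaker": "agent",    "offset_ms": 0,     "text": "Hello!"},
    {"speaker": "customer", "offset_ms": 4500,  "text": "Hi, I need help."},
    {"speaker": "agent",    "offset_ms": 12000, "text": "Sure, let me check."},
]
for m in resolve_timestamps(start, chat):
    print(m["timestamp"].isoformat(), m["speaker"], m["text"])
```

Because each offset is relative to a single conversation start time, no per-message start or end validation is needed at ingestion, which matches the behavior described above for chat conversations.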