This page provides an overview of all configuration options available in Quality AI, organized by category. Use the links below to jump directly to the topic you need.
General Settings
Configure app-level behavior, access controls, and language preferences for Quality AI.
| Configuration | Description |
|---|---|
| Quality AI Express | Set up Quality AI for third-party CCaaS platforms using file-based integration. |
| General Settings | Manage Auto QA scoring, agent access to interactions, and auditor anonymity. |
| Conversation Intelligence | Configure conversation intelligence features at the app level. |
| Language Settings | Set languages for evaluation metrics and conversation insights. |
Conversation Sources
Manage the sources from which Quality AI ingests voice and chat conversations.
| Configuration | Description |
|---|---|
| Conversation Sources | Connect and manage conversation inputs from CCAI, Agent AI, and Quality AI Express. |
Evaluation Criteria
Define what Quality AI measures and how it scores agent performance.
| Configuration | Description |
|---|---|
| Evaluation Metrics | Define performance indicators for measuring interaction quality. |
| Evaluation Forms | Create standardized assessments aligned to queues and channels. |
| Auto QA Configuration | Best practices for configuring Auto QA. |
Metrics Measurement Types
Choose the right measurement type for each evaluation metric.
| Metric Type | Description |
|---|---|
| By Question | Evaluate agent responses to specific questions, statically or dynamically. |
| By AI Agent | Use AI agents to assess multiple conversation aspects in a single evaluation call. |
| By Playbook Adherence | Measure how well agents follow established procedures and workflows. |
| By Dialog Task | Evaluate adherence to predefined scripts and behavioral steps. |
| By Value | Validate agent accuracy against backend data using LLM-powered entity recognition. |
| By Speech | Assess voice behaviors such as crosstalk, dead air, and speaking rate. |
| By Manual Evaluation | Evaluate agent performance through human-led reviews when automated detection is insufficient. |
| By Hold Etiquette | Evaluate how agents manage customer holds during voice interactions. |
| By Transfer Etiquette | Assess how agents handle internal customer transfers during voice interactions. |
| Auto QA Prompting Guide | Learn how to write effective prompts for LLM-based adherence detection. |
Agent Scorecards
Assess agent performance across custom attributes and scorecards.
| Configuration | Description |
|---|---|
| Agent Scorecards | Define agent-level evaluation criteria and combine metrics into agent attributes. |
Taxonomy Builder
Organize conversation topics to reflect your business priorities.
| Configuration | Description |
|---|---|
| Overview | Understand how Taxonomy Builder structures topic hierarchies for conversation analysis. |
| Setup Taxonomy | Create and manage topic hierarchies to categorize customer conversations. |
Connectors
Connect external data sources to Quality AI for conversation ingestion.
| Connector | Description |
|---|---|
| AWS S3 Connector | Pull recordings and transcripts from an S3 bucket into Quality AI Express. |
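Before configuring the AWS S3 Connector, it can help to confirm that your recordings and transcripts sit in the bucket in a layout the connector can read. The sketch below is a minimal check using boto3; the bucket name, key prefix, and file extensions are hypothetical, and the connector itself is configured inside Quality AI Express rather than through code.

```python
# Hypothetical sanity check: list recordings and transcripts in an
# S3 bucket before pointing the Quality AI Express connector at it.
# The bucket name, prefix, and extensions below are illustrative only.
import boto3

BUCKET = "example-contact-center-media"  # hypothetical bucket
PREFIX = "recordings/2024/"              # hypothetical key prefix

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        # Print audio recordings and transcript files with their sizes.
        if key.endswith((".wav", ".mp3", ".json")):
            print(key, obj["Size"])
```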