My Dashboard provides agents with a personalized view of their performance across conversations, scorecards, sentiment, and resolution results. It highlights performance using Auto QA and audit scores. Agents can review strengths, gaps, and evaluated interactions based on their access permissions.
The dashboard also shows supervisor-assigned scorecards, sentiment trends, and resolution effectiveness based on agent interactions. Agents can monitor performance trends, identify coaching opportunities, and track areas for improvement.
Access My Dashboard
Navigate to Quality AI > Analyze > My Dashboard.
To view dashboard data, enable Auto QA, Agent Scorecard, and Agent Access to Scored Interactions in Quality AI General Settings.
My Dashboard Filters
My Dashboard uses shared global filters across all widgets to refine data by language, date range, channel, and contact direction. All metrics update dynamically based on the selected filters.
| Filter | Description |
|---|---|
| Language | Filter by one or more configured languages. Options are based on evaluation metric settings in Configure > Settings > Language Settings. By default, all languages are selected. |
| Date Range | Filter by time period (default: last 7 days) to analyze and compare performance data. |
| Channel | Filter by Voice, Chat, or All Channels. Metrics update based on the selected channel. |
| Contact Direction | Filter by contact direction, for example inbound or outbound interactions. |
Metrics appear only for languages configured at the evaluation metric level.
When the Language, Date Range, Channel, and Contact Direction filters are applied, the following widgets update:
Filter-Driven Metrics
| Widget | Update |
|---|---|
| Total Audits | Shows audit count for selected languages only. |
| Avg. Audits per Agent | Shows average for selected languages. |
| Evaluation Score | Updates Manual and Auto QA scores. |
| Fail Statistics | Shows failure data for selected languages. |
| Performance Monitor | Updates performance metrics. |
The Performance Monitor displays key performance indicators based on the selected filters:
| Metric | Description |
|---|---|
| Total Interactions | Total interactions during the selected period. |
| Kore Evaluation Score | Average automated Kore evaluation score for completed interactions. |
| No. of Supervisor Audits | Total manually audited interactions completed by supervisors. |
| Supervisor Audit Score | Average score from manual supervisor audits. |
| Total Coaching Assignments | Number of coaching sessions assigned during the selected period. |
| No. of Fails | Total number of failed scorecards during the selected period. |
| Fatal Interactions | Interactions that failed critical compliance or quality criteria. |
Each widget shows the current value and % change (↑ improvement, ↓ decline).
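The trend indicator described above can be sketched as a simple period-over-period comparison. This is an illustrative sketch only; the function names and the zero-division behavior are assumptions, not the product's actual logic:

```python
def percent_change(current: float, previous: float) -> float:
    """Period-over-period % change; returns 0.0 when there is no prior data."""
    if previous == 0:
        return 0.0
    return (current - previous) / previous * 100

def trend_arrow(change: float) -> str:
    """Map the sign of the change to the widget's arrow indicator."""
    return "↑" if change > 0 else ("↓" if change < 0 else "–")

change = percent_change(current=120, previous=100)
print(f"{trend_arrow(change)} {change:+.1f}%")  # → ↑ +20.0%
```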
Coaching Insights
Displays agent strengths and coaching needs using scorecard and evaluation data at attribute and metric levels. Available in My Dashboard, Supervisor View (Agent Dashboard), and the Evaluation tab.
The Coaching Insights section identifies where the agent performs well and where coaching is needed. It supports two views:
| Tab | Description |
|---|---|
| Agent Attribute | Displays performance grouped by agent attributes |
| Evaluation Metric | Displays performance grouped by individual evaluation metrics derived from forms assigned at the queue level |
Scorecard Selection
The Coaching Insights feature highlights an agent’s strengths and improvement areas based on the selected scorecards. Insights are recalculated when the scorecard selection is updated. If there are more than five attributes or opportunities, a scroll option appears to view the full list.
To configure a scorecard:
- Access the Select Scorecard dropdown.
- Choose one or more scorecards.
- Insights update automatically.
Agent Attribute Tab
Displays performance at the attribute level:
| Section | Description |
|---|---|
| Strongest Attributes | Top 5 highest-scoring attributes based on adherence |
| Opportunity Areas | Bottom 5 lowest-scoring attributes indicating coaching needs |
Each attribute displays as a labeled bar representing adherence across the selected period.
Click-through (Attribute to Metric Modal)
Selecting an attribute opens a metric-level breakdown:
| Column | Description |
|---|---|
| Evaluation Metric | Name of the metric mapped to the attribute |
| Adherence % | Percentage adherence, color-coded by performance |
Adherence Color Coding:
| Color | Range | Meaning |
|---|---|---|
| Green | High | Meets or exceeds expectations |
| Yellow | Moderate | Partially meets expectations |
| Blue | Positive | Strong performance |
| Orange | Low | Below expectations; coaching recommended |
| Red | Very Low | Significant performance gap |
| NA | Not Applicable | Not triggered or evaluated |
Evaluation Metric Tab
Displays performance at the metric level:
| Section | Description |
|---|---|
| Strongest Evaluation Metrics | Highest-performing individual metrics |
| Coaching Opportunity Metrics | Lowest-performing metrics requiring improvement |
Each metric appears as a labeled bar indicating adherence across the selected period. Metrics are based on evaluation forms assigned at the queue level.
Hover Insight (Metric → Attribute)
Hovering over a metric displays its mapped agent attribute.
Example: Hovering over the Authentication metric displays Agent Attribute: Authentication.
This enables quick traceability from metrics to attributes without switching views.
Sentiment Insights
Shows customer sentiment across the agent’s own conversations. Helps agents recognize strengths and identify recurring issues.
| View | Description |
|---|---|
| Average Sentiment Score | Average sentiment across all topics, with positive or negative counts and trend indicators. |
| Top 5 Highest Sentiment L3 Topics | Five L3 topics with the highest sentiment scores, in descending order. |
| Top 5 Lowest Sentiment L3 Topics | Five L3 topics with the lowest sentiment scores, in ascending order. |
Visual indicators:
- Green = Positive sentiment
- Red = Negative sentiment
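The average-and-ranking logic behind these views can be sketched as follows. The topic names and sentiment scores are invented sample data, not values from the product:

```python
# Sample per-topic sentiment scores (invented for illustration).
topics = {
    "Billing dispute": -0.4,
    "Password reset": 0.7,
    "Order tracking": 0.2,
    "Refund request": -0.1,
    "Plan upgrade": 0.5,
    "Delivery delay": -0.6,
}

# Average sentiment across all topics.
average = sum(topics.values()) / len(topics)

# Top 5 highest (descending) and top 5 lowest (ascending) topics.
highest = sorted(topics.items(), key=lambda kv: kv[1], reverse=True)[:5]
lowest = sorted(topics.items(), key=lambda kv: kv[1])[:5]

print(f"Average sentiment: {average:.2f}")
print("Top 5 highest:", [t for t, _ in highest])
print("Top 5 lowest:", [t for t, _ in lowest])
```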
Drill-down options:
- View All Topics: Opens Topic Discovery filtered to the agent’s conversations (agent-specific).
- View Conversations: Opens Conversation Mining filtered to the selected topic.
Resolution Insights
Shows how effectively the agent resolves customer issues.
| View | Description |
|---|---|
| Average Resolution Rate | Agent’s overall resolution rate across all conversations. |
| Top 5 Highest Resolution L3 Topics | Five L3 topics with the highest resolution rates, in descending order. |
| Top 5 Lowest Resolution L3 Topics | Five L3 topics with the lowest resolution rates, in ascending order. |
| Resolved/Unresolved Breakdown | Counts and percentages of resolved and unresolved conversations per topic. |
Drill-down options:
- View All Topics: Opens Topic Discovery filtered to the agent’s conversations.
- View Conversations: Opens Conversation Mining with the selected L3 topic filter applied.
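The Resolved/Unresolved breakdown can be sketched as a per-topic rate calculation. The topic names and counts below are invented sample data:

```python
# Sample resolved/unresolved counts per topic (invented for illustration).
per_topic = {
    "Password reset": {"resolved": 45, "unresolved": 5},
    "Billing dispute": {"resolved": 12, "unresolved": 18},
}

breakdown = {}
for topic, counts in per_topic.items():
    total = counts["resolved"] + counts["unresolved"]
    rate = counts["resolved"] / total * 100  # resolution rate as a percentage
    breakdown[topic] = rate
    print(f"{topic}: {counts['resolved']}/{total} resolved ({rate:.0f}%)")
```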
Scorecard Trend
Tracks performance over time based on selected scorecards and language preferences.
Default Settings
| Setting | Behavior |
|---|---|
| Default Selection | The oldest assigned scorecard is selected automatically. |
| Manual Override | Change the scorecard using the dropdown. |
| Multi-Scorecard Support | Compare performance across multiple scorecards. |
Language Settings
Each scorecard supports its own language settings. The language filter within a scorecard shows only its configured languages. When a scorecard is selected, all its associated languages are auto-selected.
Time Ranges
| Range | Period |
|---|---|
| Daily | Last 7 days from the current date. |
| Weekly | Last 7 weeks from the current week. |
| Monthly | Last 7 months from the current month. |
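The look-back windows above can be sketched as date arithmetic. This is a minimal sketch under the assumption of simple calendar-based windows (months clamped to the 1st); the product's actual boundary handling may differ:

```python
from datetime import date, timedelta

def scorecard_trend_range(granularity: str, today: date) -> tuple[date, date]:
    """Compute the look-back window for a trend chart.
    Illustrative assumption: windows include the current period."""
    if granularity == "daily":
        start = today - timedelta(days=6)    # last 7 days incl. today
    elif granularity == "weekly":
        start = today - timedelta(weeks=6)   # last 7 weeks incl. current week
    elif granularity == "monthly":
        # step back 6 months, clamped to the 1st for simplicity
        month = (today.month - 7) % 12 + 1
        year = today.year + (today.month - 7) // 12
        start = date(year, month, 1)
    else:
        raise ValueError(f"unknown granularity: {granularity}")
    return start, today

print(scorecard_trend_range("daily", date(2024, 5, 15)))
```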
Metrics
| Metric | Description |
|---|---|
| Avg. Scorecard Score | Average score for the selected scorecard within the chosen date range. |
| Attributes | Attribute-level scores for the selected scorecard within the chosen date range. |
Evaluation Tab
The Evaluation tab allows agents to review their interactions and scores based on access settings.
Access to Evaluation Tab
Navigate to Quality AI > Analyze > My Dashboard > Evaluation.
The Evaluation tab displays automated interaction scores only when the following are enabled in Quality AI > General Settings: Auto QA, Agent Scorecard, and Agent Access to Scored Interactions.
Available Interaction Views
The interactions displayed depend on the Agent Access to Scored Interactions configuration:
| Access Type | Columns Shown |
|---|---|
| Only Manually Audited Interactions | Date & Time, Queue, Supervisor Audit Score. |
| Manually Audited + Auto QA Scored | Date & Time, Queue, Auto QA Score (Kore Evaluation Score), Supervisor Audit Score. |
Evaluation Filters
The Evaluation tab uses the same common filters; the filters available depend on the access type:
| Access Type | Available Filters |
|---|---|
| Only Manually Audited | Queues (limited to queues the agent belongs to). |
| Manually Audited + Auto QA | Queues and Audit Status (Audit Status visible only when Auto QA interactions are included). |
Common Filters
| Filter | Description |
|---|---|
| Queues | Filters interactions by queue. |
| Audit Status | Filters audited and unaudited interactions individually. |
| Filter Interactions | Displays the total number of filtered interactions. |
- Agents can view transcripts and comments only for interactions assigned to them.
- Comments appear within the Transcript section of the Audit tab.
Language Settings
The Language Settings section is view-only for agents.