Conversation Testing simulates end-to-end conversational flows to evaluate dialog task execution and support regression testing. Create Test Suites to capture business scenarios and run them to validate app performance. The framework tracks transition coverage and measures how well the app understands queries and executes dialogs. Flow Health in the Health and Monitoring dashboard summarizes total coverage of dialog flows and conversation test results, helping you identify and fix missing transitions and intent gaps.

Go to: Automation AI > Virtual Assistant > Testing > Regression Testing > Conversation Testing
Landing Page
The Conversation Testing page lists all Test Suites in a grid with the following columns:

| Column | Description |
|---|---|
| Test Suite Name | Name of the suite. |
| Result | Execution status of the latest run. |
| Duration | Time taken for the latest execution. |
| Passed / Failed / Not Executed | Test case counts. |
| Pass % | Percentage of passed test cases. |
| Tags | Labels added during suite creation for filtering and organization. |
Only the results of the latest execution are shown; earlier run history is not displayed on this page. From the grid you can:
- Search and filter any column directly in the grid.
- Filter by intents covered across all test suites — useful for regression after intent definitions change.
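The grid behavior above can be sketched in a few lines of Python. This is an illustrative model, not the product's API: the record fields mirror the grid columns, the sample data is invented, and it assumes Pass % counts passed cases against all cases in the suite (including not-executed ones).

```python
# Hypothetical sketch of the Conversation Testing grid: records whose fields
# mirror the grid columns, a Pass % calculation, and filtering by intent.

def pass_pct(passed: int, failed: int, not_executed: int) -> float:
    """Pass % as passed test cases over all test cases in the suite.

    Assumption: not-executed cases count toward the denominator.
    """
    total = passed + failed + not_executed
    return round(100 * passed / total, 1) if total else 0.0

# Illustrative suite records (invented data, not from any real app).
suites = [
    {"name": "BookFlight Regression", "passed": 18, "failed": 2,
     "not_executed": 0, "intents": ["BookFlight", "CancelBooking"]},
    {"name": "Payments Smoke", "passed": 7, "failed": 0,
     "not_executed": 3, "intents": ["MakePayment"]},
]

# Regression scenario: the CancelBooking intent definition changed, so
# select only the suites that cover that intent and report their Pass %.
affected = [s for s in suites if "CancelBooking" in s["intents"]]
for s in affected:
    print(s["name"], pass_pct(s["passed"], s["failed"], s["not_executed"]))
```

Filtering by covered intents like this is what makes the grid useful for targeted regression runs: after editing an intent, only the suites that exercise it need to be re-executed.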
Key Features
| Feature | Description |
|---|---|
| Create a Test Suite | Build test suites to capture conversation scenarios. |
| Test Editor | Edit and manage individual test cases and steps. |
| Test Case Assertion | Define assertions to validate app responses. |
| Test Case Execution Summary | View detailed results after running a test suite. |