Documentation Index
Fetch the complete documentation index at: https://koreai.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
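As a minimal sketch of how a tool might consume this index, the snippet below downloads `llms.txt` and extracts the page URLs it links to. The markdown-link format assumed by `parse_index` is an assumption about the index file's layout, not a documented contract:

```python
# Hypothetical helper for discovering pages from the llms.txt index.
# Assumes the index lists pages as markdown links, e.g. "- [Title](https://...)".
import re
import urllib.request

INDEX_URL = "https://koreai.mintlify.app/llms.txt"

def parse_index(text: str) -> list[str]:
    """Extract link targets (page URLs) from llms.txt content."""
    return re.findall(r"\((https?://[^)]+)\)", text)

def fetch_index(url: str = INDEX_URL) -> list[str]:
    """Download the index and return the page URLs it references."""
    with urllib.request.urlopen(url) as resp:
        return parse_index(resp.read().decode("utf-8"))
```

`fetch_index()` can then drive a crawler or a retrieval step that explores only the pages relevant to a task.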
The Kore.ai XO GPT module provides fine-tuned LLMs for enterprise conversational AI. These models are optimized for accuracy, safety, and production efficiency.
Current capabilities: Answer Generation, Conversation Summarization, User Query Rephrasing, AI Agent Response Rephrasing, Vector/Embedding Generation (Text and Image), and Intent Resolution (DialogGPT).
Benefits
| Benefit | Description |
|---|---|
| Better Accuracy | Smaller foundation models (under 10B parameters), fine-tuned for conversational AI tasks, outperform direct prompting of larger general-purpose models. |
| Faster Responses | Smaller models co-hosted with the platform deliver low-latency responses suited for digital and voice production use cases. |
| Ready to Use | Pre-fine-tuned models deploy immediately—no in-house AI expertise or tuning cycles required. |
| Data Security | Fully integrated into the platform, enforcing enterprise-grade data confidentiality, privacy, and governance. |
Model Fine-Tuning Process
- Collect Data — Gather a task-relevant dataset to serve as training material.
- Select a Base LLM — Choose a pre-trained model suited to the task.
- Train — Adjust model parameters using the task-specific dataset to learn conversation patterns.
- Test and Refine — Evaluate on a validation dataset and iterate to achieve optimal results.
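The four steps above can be sketched as a training loop skeleton. Everything here is an illustrative stand-in — the dataset shape, the simulated validation score, and the class itself are hypothetical, not the actual XO GPT pipeline:

```python
# Illustrative skeleton of the collect → select → train → test-and-refine loop.
# The validation score is simulated; a real run would update model weights.
from dataclasses import dataclass, field

@dataclass
class FineTuneRun:
    base_model: str                 # e.g. "Llama-3.1-8B-Instruct" (from the table below)
    dataset: list[tuple[str, str]]  # collected (prompt, expected answer) pairs
    epochs: int = 3
    history: list[float] = field(default_factory=list)

    def train_epoch(self) -> float:
        # Placeholder for a real training step; we simulate a score
        # that improves by 0.1 each epoch, starting from 0.6.
        score = 0.5 + 0.1 * (len(self.history) + 1)
        self.history.append(score)
        return score

    def run(self, target: float = 0.75) -> float:
        """Train, evaluate on validation data, and iterate until the target score."""
        score = 0.0
        for _ in range(self.epochs):
            score = self.train_epoch()
            if score >= target:
                break
        return score
```

The `run` method captures the test-and-refine step: training continues only while the validation score is below the target, which mirrors iterating "to achieve optimal results."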
Live Model Versions and Supported Languages
The table below lists all currently deployed XO GPT models.
For non-English languages, XO GPT supports industry-established generic use cases. For additional language-specific support, use the Agent Platform.
| XO GPT Model | Supported Feature | Version | Base Model | Languages | Regions | Deployed |
|---|---|---|---|---|---|---|
| Answer Generation Model | Answer Generation | v3.0 | Llama-3.1-8B-Instruct | English, French, German, Japanese, Polish, Spanish | US, DE, EU | 6 May 2025 |
| Conversation Summarization Model | Conversation Summarization | v2.0 | Mistral 7B Instruct v0.2 | English, French, German, Japanese, Polish, Simplified Chinese, Spanish, Traditional Chinese, Turkish | US, DE, JP | 23 Sep 2025 (US/DE), 20 Dec 2024 (JP) |
| Response Rephrasing Model | Rephrase Dialog Responses | v1.0 | Mistral 7B Instruct v0.2 | English | US, DE | 1 Jun 2024 (US), 3 Sep 2024 (DE) |
| User Query Paraphrasing Model | Rephrase User Query | v1.0 | Mistral 7B Instruct v0.2 | English | US, DE | 1 Jun 2024 (US), 3 Sep 2024 (DE) |
| DialogGPT Model | DialogGPT - Conversation Orchestration | v1.1 | Llama-3.1-8B-Instruct | English, French, German, Japanese, Polish, Spanish | US, DE | 26 May 2025 |
Supported Features
| Feature | Description | Learn More |
|---|---|---|
| Answer Generation | Generates answers from data ingested into Search AI using RAG. | Search AI GenAI Features |
| Conversation Summary | Generates concise summaries of conversations between users, AI agents, and human agents; integrates with Contact Center and third-party apps via API. | Automation AI GenAI Features |
| DialogGPT - Conversation Orchestration | Manages conversation flow, identifies intent, and routes conversations to the correct AI Agent in universal apps. | DialogGPT Conversation Orchestration |
| Rephrase Dialog Responses | Rephrases AI Agent responses based on conversation context and user emotion for more empathetic interactions. | Automation AI GenAI Features |
| Rephrase User Query | Expands and rephrases user queries using app domain knowledge and conversation history to improve NLP accuracy. | Automation AI GenAI Features |
| Vector Generation (Image & Text) | Creates vector embeddings for text and image data in Search AI; converts queries to embeddings for vector search. | Search AI GenAI Features |
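The Vector Generation feature above follows the standard embedding-search pattern: ingested content and the user query are converted to vectors, then ranked by similarity. The sketch below uses a toy bag-of-words `embed` as a hypothetical stand-in for the actual embedding model; only the cosine-ranking structure is the point:

```python
# Minimal vector-search sketch: embed chunks and query, rank by cosine similarity.
# embed() is a toy stand-in — a real embedding model returns dense float vectors.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' (word -> count)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, chunks: list[str]) -> str:
    """Return the ingested chunk whose embedding is closest to the query's."""
    q = embed(query)
    return max(chunks, key=lambda c: cosine(q, embed(c)))
```

In production the same idea applies with dense embeddings and an approximate nearest-neighbor index instead of a linear scan.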
XO GPT Model Specifications
XO GPT models are fine-tuned for specific conversational AI tasks. The pages below cover each model’s design, benchmarks, fine-tuning parameters, and version history, along with shared information on the model building process and live deployment versions.
| Document | Description |
|---|---|
| Model Specifications | The model building process, benchmarks index, and roadmap. |
| Answer Generation Model | RAG-based model that generates accurate answers from domain-specific ingested data. |
| Conversation Summarization Model | Abstractive summarization model for agent-customer interaction transcripts. |
| DialogGPT Model | Intent prediction model for multi-turn conversation orchestration. |
| Response Rephrasing Model | Rephrases AI Agent responses to be more empathetic and contextually appropriate. |
| User Query Paraphrasing Model | Expands and rephrases user queries to improve downstream NLP accuracy. |
XO GPT Feedback Submission
Kore.ai incorporates customer feedback into ongoing model improvements. Effective feedback helps prioritize issues, identify recurring patterns, and drive targeted retraining cycles.
To submit feedback, open a support ticket with your sample set, error category, use case, and expected vs. actual outputs. See XO GPT Feedback Submission for the full guide, issue categories, and the feedback workflow.