

The Kore.ai XO GPT module provides fine-tuned LLMs for enterprise conversational AI. These models are optimized for accuracy, safety, and production efficiency. Current capabilities: Answer Generation, Conversation Summarization, User Query Rephrasing, AI Agent Response Rephrasing, Vector/Embedding Generation (Text and Image), and Intent Resolution (DialogGPT).

Benefits

| Benefit | Description |
|---|---|
| Better Accuracy | Smaller foundation models (under 10B parameters), fine-tuned for conversational AI, outperform direct prompting of larger generative models. |
| Faster Responses | Smaller models co-hosted with the platform deliver low-latency responses suited to digital and voice production use cases. |
| Ready to Use | Pre-fine-tuned models deploy immediately, with no in-house AI expertise or tuning cycles required. |
| Data Security | Fully integrated into the platform, enforcing enterprise-grade data confidentiality, privacy, and governance. |

Model Fine-Tuning Process

  1. Collect Data — Gather a task-relevant dataset to serve as training material.
  2. Select a Base LLM — Choose a pre-trained model suited to the task.
  3. Train — Adjust model parameters using the task-specific dataset to learn conversation patterns.
  4. Test and Refine — Evaluate on a validation dataset and iterate to achieve optimal results.
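The train-then-evaluate cycle in steps 3 and 4 can be illustrated with a toy training loop. This is a minimal plain-Python sketch, not Kore.ai's actual fine-tuning pipeline: a single-parameter "model" is fit to a small task-specific dataset by gradient descent, then checked against held-out validation data.

```python
# Toy illustration of the fine-tune/evaluate cycle (steps 3-4 above).
# This is NOT the XO GPT training code -- just a sketch of the idea:
# adjust parameters against task-specific data, then evaluate on a
# validation set and iterate.

train_data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # (input, target) pairs
val_data   = [(4.0, 8.1)]

w = 0.0    # single "model parameter", starting from a generic value
lr = 0.01  # learning rate

def loss(dataset, w):
    """Mean squared error of the linear model y = w * x."""
    return sum((w * x - y) ** 2 for x, y in dataset) / len(dataset)

for epoch in range(500):  # step 3: adjust the parameter on task data
    grad = sum(2 * (w * x - y) * x for x, y in train_data) / len(train_data)
    w -= lr * grad

print(f"learned w = {w:.2f}")                        # converges near 2.0
print(f"validation loss = {loss(val_data, w):.3f}")  # step 4: evaluate
```

In a real fine-tuning run the same loop shape holds, just with millions of parameters, a tokenized conversational dataset, and iterative refinement when validation metrics fall short.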

Live Model Versions and Supported Languages

The table below lists all currently deployed XO GPT models.
For non-English languages, XO GPT supports industry-established generic use cases. For additional language-specific support, use the Agent Platform.
| XO GPT Model | Supported Feature | Version | Base Model | Languages | Regions | Deployed |
|---|---|---|---|---|---|---|
| Answer Generation Model | Answer Generation | v3.0 | Llama-3.1-8B-Instruct | English, French, German, Japanese, Polish, Spanish | US, DE, EU | 6 May 2025 |
| Conversation Summarization Model | Conversation Summarization | v2.0 | Mistral 7B Instruct v0.2 | English, French, German, Japanese, Polish, Simplified Chinese, Spanish, Traditional Chinese, Turkish | US, DE, JP | 23 Sep 2025 (US/DE), 20 Dec 2024 (JP) |
| Response Rephrasing Model | Rephrase Dialog Responses | v1.0 | Mistral 7B Instruct v0.2 | English | US, DE | 1 Jun 2024 (US), 3 Sep 2024 (DE) |
| User Query Paraphrasing Model | Rephrase User Query | v1.0 | Mistral 7B Instruct v0.2 | English | US, DE | 1 Jun 2024 (US), 3 Sep 2024 (DE) |
| DialogGPT Model | DialogGPT - Conversation Orchestration | v1.1 | Llama-3.1-8B-Instruct | English, French, German, Japanese, Polish, Spanish | US, DE | 26 May 2025 |

Supported Features

| Feature | Description | Learn More |
|---|---|---|
| Answer Generation | Generates answers from data ingested into Search AI using RAG. | Search AI GenAI Features |
| Conversation Summary | Generates concise summaries of user and agent interactions; integrates with Contact Center and third-party apps via API. | Automation AI GenAI Features |
| DialogGPT - Conversation Orchestration | Manages conversation flow, identifies intent, and routes conversations to the correct AI Agent in universal apps. | DialogGPT Conversation Orchestration |
| Rephrase Dialog Responses | Rephrases AI Agent responses based on conversation context and user emotion for more empathetic interactions. | Automation AI GenAI Features |
| Rephrase User Query | Expands and rephrases user queries using app domain knowledge and conversation history to improve NLP accuracy. | Automation AI GenAI Features |
| Vector Generation (Image & Text) | Creates vector embeddings for text and image data in Search AI; converts queries to embeddings for vector search. | Search AI GenAI Features |
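The Vector Generation feature above enables retrieval by embedding similarity: ingested content and the incoming query are mapped to vectors, and the nearest content is returned. The sketch below shows only that generic mechanism (cosine similarity over pre-computed vectors), not Kore.ai's API; the embeddings are hypothetical hand-made 3-d vectors purely for illustration.

```python
import math

# Hypothetical pre-computed embeddings for three ingested chunks.
# In Search AI these would come from the XO GPT embedding model;
# here they are tiny hand-made vectors for illustration only.
chunks = {
    "reset your password from the login page": [0.9, 0.1, 0.2],
    "update billing details in account settings": [0.1, 0.8, 0.3],
    "contact support for refunds": [0.2, 0.3, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Embedding of the user query "how do I reset my password?" (also hand-made).
query_vec = [0.85, 0.15, 0.25]

# Vector search: return the chunk whose embedding is closest to the query.
best = max(chunks, key=lambda text: cosine(chunks[text], query_vec))
print(best)  # -> "reset your password from the login page"
```

In Answer Generation, the retrieved chunk(s) would then be passed to the fine-tuned model as RAG context for grounded answer synthesis.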

XO GPT Model Specifications

XO GPT models are fine-tuned for specific conversational AI tasks. The pages below cover each model’s design, benchmarks, fine-tuning parameters, and version history, along with shared information on the model building process and live deployment versions.
| Document | Description |
|---|---|
| Model Specifications | The model building process, benchmarks index, and roadmap. |
| Answer Generation Model | RAG-based model that generates accurate answers from domain-specific ingested data. |
| Conversation Summarization Model | Abstractive summarization model for agent-customer interaction transcripts. |
| DialogGPT Model | Intent prediction model for multi-turn conversation orchestration. |
| Response Rephrasing Model | Rephrases AI Agent responses to be more empathetic and contextually appropriate. |
| User Query Paraphrasing Model | Expands and rephrases user queries to improve downstream NLP accuracy. |

XO GPT Feedback Submission

Kore.ai incorporates customer feedback into ongoing model improvements. Effective feedback helps prioritize issues, identify recurring patterns, and drive targeted retraining cycles. To submit feedback, open a support ticket with your sample set, error category, use case, and expected vs. actual outputs. See XO GPT Feedback Submission for the full guide, issue categories, and the feedback workflow.
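Organizing feedback as structured data before attaching it to the support ticket makes patterns easier to spot across samples. The field names below are assumptions drawn from the items listed above (use case, error category, sample set, expected vs. actual outputs), not an official Kore.ai schema.

```python
import json

# Hypothetical structure for one feedback submission. Field names mirror
# the information the guide asks for; this is NOT a documented Kore.ai
# schema, just one reasonable way to organize the attachment.
feedback = {
    "use_case": "Answer Generation",
    "error_category": "hallucination",  # the observed issue category
    "samples": [
        {
            "input": "What is the refund window?",
            "expected_output": "30 days from delivery.",
            "actual_output": "There is no refund policy.",
        }
    ],
}

# Serialize for attachment to the support ticket.
print(json.dumps(feedback, indent=2))
```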