LLM-powered features for Search AI enable answer generation, vector search, document enrichment, and query processing.
The platform regularly integrates new models from providers such as OpenAI, Azure OpenAI, and Anthropic. To use a model that is not yet available as a pre-built integration, add it through the provider’s New LLM Integration option.
Model Feature Matrix
(✅ Supported | ❌ Not supported | ✅* Supported but no default prompt | NA = Not Applicable)

Answer Generation and Enrichment
For Enrich Chunks with LLM and Transform Documents with LLM, use templates from the prompt library to write custom prompts.
| Model | Answer Generation | Enrich Chunks with LLM | Transform Documents with LLM |
|---|---|---|---|
| Azure OpenAI - GPT-4 Turbo | ✅ | ✅* | ✅* |
| Azure OpenAI - GPT-4o | ✅ | ✅* | ✅* |
| Azure OpenAI - GPT-4o mini | ✅* | ✅* | ✅* |
| OpenAI - GPT-3.5 Turbo, GPT-4, GPT-4 Turbo | ✅ | ✅* | ✅* |
| OpenAI - GPT-4o | ✅ | ✅* | ✅* |
| OpenAI - GPT-4o mini | ✅* | ✅* | ✅* |
| Custom LLM | ✅* | ✅* | ✅* |
| XO GPT | ✅ | ❌ | ❌ |
| Amazon Bedrock | ✅* | ✅* | ✅* |
Query Processing
| Model | Metadata Extractor Agent | Query Rephrase (Adv Search) | Query Transformation | Rephrase User Query | Result Type Classification |
|---|---|---|---|---|---|
| Azure OpenAI - GPT-4 Turbo | ❌ | ❌ | ❌ | ✅* | ❌ |
| Azure OpenAI - GPT-4o | ✅ | ✅ | ✅ | ✅ | ✅ |
| Azure OpenAI - GPT-4o mini | ✅* | ✅* | ✅* | ✅* | ✅* |
| OpenAI - GPT-3.5 Turbo, GPT-4, GPT-4 Turbo | ❌ | ❌ | ❌ | ✅* | ❌ |
| OpenAI - GPT-4o | ✅ | ✅ | ✅ | ✅ | ✅ |
| OpenAI - GPT-4o mini | ✅* | ✅* | ✅* | ✅* | ✅* |
| Custom LLM (GPT-4o / GPT-4o mini underlying) | ✅* | ✅* | ✅* | ✅* | ✅* |
| XO GPT | ❌ | ❌ | ❌ | ✅ | ❌ |
| Amazon Bedrock | ✅* | ✅* | ✅* | ✅* | ✅* |
Vector Generation
| Model | Vector Generation - Text | Vector Generation - Image |
|---|---|---|
| Azure OpenAI (all models) | NA | NA |
| OpenAI (all models) | NA | NA |
| Custom LLM | ✅* | ✅* |
| XO GPT | ✅ | ✅ |
| Amazon Bedrock | NA | NA |
Features
Answer Generation
Generates an answer to the user’s question based on data ingested into the Search AI application. Relevant data is retrieved and inserted into the prompt; the configured LLM returns a formatted answer. Learn more.

Enrich Chunks with LLM
Uses an external LLM to refine, update, or enrich chunks extracted from ingested content. Learn more. You must create a custom prompt to use this feature. All chunk fields are available for use in the prompt; click View Field Details when adding a Workbench Stage to see the full list.
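As a hedged illustration of what a chunk-enrichment prompt might look like, the sketch below fills a template from chunk fields. The field names (`chunkTitle`, `chunkText`) are hypothetical placeholders, not the platform’s actual field names; check View Field Details in your Workbench Stage for the real list.

```python
# Illustrative only: a custom enrichment prompt skeleton.
# The field names below are assumptions for the sketch.
ENRICH_PROMPT = """You are a content editor.
Rewrite the chunk below so it is self-contained and add a one-line summary.

Title: {chunkTitle}
Chunk: {chunkText}

Return only the enriched chunk text."""

def build_enrich_prompt(chunk: dict) -> str:
    """Fill the template with fields taken from an extracted chunk."""
    return ENRICH_PROMPT.format(
        chunkTitle=chunk.get("chunkTitle", ""),
        chunkText=chunk.get("chunkText", ""),
    )

prompt = build_enrich_prompt(
    {"chunkTitle": "Refund policy", "chunkText": "Refunds take 5 days."}
)
```

The same pattern applies to Transform Documents with LLM, with document fields in place of chunk fields.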
Transform Documents with LLM
Uses an external LLM to enhance or update documents during the extraction process. Learn more. You must create a custom prompt to use this feature. All document fields are available for use in the prompt; click View Field Details when adding a Transformation Stage to see the full list.
Vector Generation - Text
Creates vector embeddings for ingested text data. When a user submits a query, it is converted into an embedding and a vector search retrieves the most relevant data, which is then passed to answer generation.

Vector Generation - Image
Creates vector embeddings for ingested image data. When a user submits a query, it is converted into an embedding and a vector search retrieves the most relevant images, which are then passed to answer generation.

Metadata Extractor Agent
Extracts relevant sources and fields from a query, maps them to structured data, and applies filters or boosts for accurate retrieval. Particularly useful for data from third-party applications. Learn more. If using a custom prompt, the LLM output must follow this structure:

- extractedMetaData: Array of sources and associated metadata.
- range: Optional date range filter.
- sourceIntent: Boolean indicating whether the source was explicitly specified.
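A minimal sketch of checking that a custom-prompt response carries the required shape, assuming the LLM returns JSON. The field names come from the structure above; the sample payload values are invented.

```python
import json

def validate_metadata_output(raw: str) -> dict:
    """Check the documented Metadata Extractor Agent output shape."""
    data = json.loads(raw)
    # extractedMetaData must be an array of sources and metadata
    assert isinstance(data["extractedMetaData"], list)
    # range is an optional date-range filter
    if "range" in data:
        assert isinstance(data["range"], dict)
    # sourceIntent flags whether the source was explicitly specified
    assert isinstance(data["sourceIntent"], bool)
    return data

sample = json.dumps({
    "extractedMetaData": [
        {"source": "hr_portal", "metaData": {"year": "2024"}}
    ],
    "sourceIntent": True,
})
out = validate_metadata_output(sample)
```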
Query Rephrase for Advanced Search API
Adds contextual information to user queries to enhance their relevance. Learn more. If using a custom prompt, the LLM output must follow this structure:

- rephrased_query: The reworded version of the original query.
- confidence: Confidence level in the quality of the rephrasing.
- reasoning: Justification for the transformation.
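The three required fields can be enforced with a small parser, assuming a JSON response; the sample values below are illustrative.

```python
import json

REQUIRED = ("rephrased_query", "confidence", "reasoning")

def parse_rephrase_output(raw: str) -> dict:
    """Reject responses missing any of the documented fields."""
    data = json.loads(raw)
    missing = [field for field in REQUIRED if field not in data]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return data

sample = json.dumps({
    "rephrased_query": "What is the 2024 leave policy for contractors?",
    "confidence": "high",
    "reasoning": "Added the year and audience implied by the conversation.",
})
result = parse_rephrase_output(sample)
```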
Query Transformation
Identifies key terms within a query, removes noise, and prioritizes relevant documents. Learn more. If using a custom prompt, the LLM output must follow this structure:

| Field | Description |
|---|---|
| query_processing.original_query | The input query provided by the user. |
| query_processing.keyword_search_query | Optimized for keyword-based search. |
| query_processing.vector_search_query | Adapted for vector-based semantic search. |
| core_terms | Key terms extracted from the query. Reserved for future use. |
| semantic_expansions | Related or semantically similar terms. Reserved for future use. |
| search_priority.must_include | Critical terms; strongest boost. Single-word terms allowed. |
| search_priority.should_include | Moderate relevance boost. Must contain at least two words. |
| search_priority.context_terms | Light contextual boosting. Must contain at least two words. |
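An illustrative payload matching the documented fields, with the multi-word constraint on the boost tiers checked explicitly. All values are invented examples.

```python
# Sample Query Transformation output; structure per the docs, values invented.
sample_output = {
    "query_processing": {
        "original_query": "how do i reset my vpn password",
        "keyword_search_query": "reset vpn password",
        "vector_search_query": "steps to reset a VPN account password",
    },
    "core_terms": ["vpn", "password"],              # reserved for future use
    "semantic_expansions": ["credentials reset"],   # reserved for future use
    "search_priority": {
        "must_include": ["vpn"],                    # single-word terms allowed
        "should_include": ["vpn password"],         # at least two words
        "context_terms": ["remote access"],         # at least two words
    },
}

# Enforce the documented two-word minimum on the lower boost tiers.
for tier in ("should_include", "context_terms"):
    for term in sample_output["search_priority"][tier]:
        assert len(term.split()) >= 2, f"{tier} terms need at least two words"
```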
Result Type Classification
Used in Agentic RAG to determine whether the user seeks a specific answer or a list of search results. Learn more. If using a custom prompt, the LLM output must follow this structure:

- query_type: TYPE_1 (Search Results) or TYPE_2 (Answers).
- confidence: Certainty level (for example, High, Medium, Low).
- reasoning: Brief explanation for the chosen type.
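A minimal sketch of consuming this output downstream; the sample values are invented.

```python
def classify_result_type(data: dict) -> str:
    """Map the documented classification output to a routing decision."""
    # TYPE_1 = search results, TYPE_2 = answers
    assert data["query_type"] in ("TYPE_1", "TYPE_2")
    assert data["confidence"] in ("High", "Medium", "Low")
    assert data["reasoning"]
    return "search results" if data["query_type"] == "TYPE_1" else "answer"

kind = classify_result_type({
    "query_type": "TYPE_2",
    "confidence": "High",
    "reasoning": "The user asks a specific factual question.",
})
```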
Rephrase User Query
Reconstructs incomplete or ambiguous user inputs using conversation history, improving intent detection and entity extraction accuracy. Handles three scenarios:

| Scenario | Description | Example |
|---|---|---|
| Completeness | Completes an incomplete query using conversation context. | “How about Orlando?” → “What’s the weather forecast for Orlando tomorrow?” |
| Co-referencing | Resolves pronouns or vague references using prior context. | “Every six hours.” → “I take ibuprofen every six hours.” |
| Completeness + Co-referencing | Handles both issues together. | “What about interest rates?” → “What are the interest rates for personal and home loans?” |
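The scenarios above can be sketched as a prompt that supplies conversation history alongside the final user message. The template wording is illustrative, not the platform’s built-in rephrasing prompt.

```python
def build_rephrase_prompt(history: list[str], query: str) -> str:
    """Assemble a rephrasing prompt from prior turns plus the new message."""
    context = "\n".join(f"- {turn}" for turn in history)
    return (
        "Using the conversation below, rewrite the final user message as a "
        "complete, self-contained question.\n\n"
        f"Conversation:\n{context}\n\n"
        f"Final message: {query}\n"
        "Rewritten:"
    )

# The "Completeness" scenario from the table above.
prompt = build_rephrase_prompt(
    ["What's the weather forecast for Boston tomorrow?"],
    "How about Orlando?",
)
```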