About Answers

What are Answers?

Answers are specific pieces of information extracted or generated by a search application in response to a user query. 

Difference between Search Results and Answers

Search results present a list of documents, web pages, or other content retrieved in response to a user query, ranked by relevance. Answers, on the other hand, aim to directly address the user query with an exact, precise piece of information.

Types of Answers in SearchAssist

  1. Extractive Answers: Extractive answers select and present relevant chunks of text directly from the source documents that contain the answer to the user’s query. They preserve the original wording and structure of the content.
  2. Generative Answers: Generative answers use the retrieved chunks to compose an answer based on an understanding of the question and the relevant information in the source documents (the sketch after this list illustrates the difference).
    • Answers are usually paraphrased to address the exact user query.
    • Large language models (LLMs) are used to generate answers from the retrieved chunks.
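
The difference between the two types can be pictured with a minimal Python sketch. This is illustrative only, not SearchAssist's implementation; the chunk scores and the call_llm function are hypothetical placeholders.

    # Both answer types start from retrieved chunks; they differ in how the
    # final response is formed. Helper names here are illustrative only.

    def extractive_answer(chunks: list[dict]) -> str:
        """Return the most relevant chunk verbatim, preserving original wording."""
        best = max(chunks, key=lambda c: c["score"])   # chunk with the highest retrieval score
        return best["text"]                            # no rewriting of the source text

    def generative_answer(query: str, chunks: list[dict], call_llm) -> str:
        """Ask an LLM to compose a paraphrased answer grounded in the retrieved chunks."""
        context = "\n\n".join(c["text"] for c in chunks)
        prompt = ("Answer the question using only the context below.\n\n"
                  f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")
        return call_llm(prompt)                        # paraphrased, query-specific reply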

Overview of Answer Generation Process

The answer-generation process mainly consists of the following steps:

  1. Content Ingestion: Processing the source documents that will be used to generate answers.
  2. Chunking: Breaking the source documents down into smaller, meaningful units called chunks.
  3. Generating Vector Embeddings: Converting each chunk into a multi-dimensional vector that represents it.
  4. Chunk Retrieval: Selecting the most relevant chunks from the vector space based on their similarity to the user query.
  5. Answer Generation: Generating a response to the user query from the retrieved chunks (the full flow is sketched below).
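
For illustration, the following Python sketch strings these five steps together. It is a simplified approximation, not SearchAssist internals: the embedding model (a sentence-transformers MPNet model), the whitespace-based chunker, and the call_llm function are stand-ins.

    from sentence_transformers import SentenceTransformer
    import numpy as np

    model = SentenceTransformer("all-mpnet-base-v2")    # stand-in MPNet-style embedding model

    def chunk(text: str, size: int = 200) -> list[str]:
        """Step 2: split an ingested document into fixed-size chunks (whitespace tokens here)."""
        words = text.split()
        return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

    def build_index(documents: list[str]):
        """Steps 1-3: ingest documents, chunk them, and embed every chunk."""
        chunks = [c for doc in documents for c in chunk(doc)]
        vectors = model.encode(chunks, normalize_embeddings=True)
        return chunks, vectors

    def retrieve(query: str, chunks: list[str], vectors, k: int = 3) -> list[str]:
        """Step 4: pick the k chunks whose vectors are most similar to the query vector."""
        q = model.encode([query], normalize_embeddings=True)[0]
        scores = vectors @ q                             # cosine similarity (vectors are normalized)
        top = np.argsort(scores)[::-1][:k]
        return [chunks[i] for i in top]

    def generate_answer(query: str, chunks: list[str], vectors, call_llm) -> str:
        """Step 5: generate a response to the query from the retrieved chunks."""
        context = "\n\n".join(retrieve(query, chunks, vectors))
        return call_llm(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")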

Important Terms Related to Answers

  • Chunks: Chunks are portions or segments of ingested data that are processed or evaluated as a single unit.
  • Chunking: In the context of answers, chunking is the process of breaking large content units into smaller segments. SearchAssist uses different chunking strategies for Generative and Extractive Answers, so when both types of Answers are enabled, two sets of chunks are created from the content and stored in the Answer Index.
  • Chunking strategy: The rules used for chunk generation. Currently, SearchAssist uses the following two strategies (both are sketched after this list).
    • Text-based chunking: This technique is based on tokenization. A fixed number of consecutive tokens forms one chunk, the next set of tokens forms the next chunk, and so on. SearchAssist uses this technique for Generative answers.
    • Rule-based chunking: This technique uses the headers and content in a document to identify chunks. A header and the text between it and the next header are treated as one chunk. SearchAssist uses this technique for Extractive answers.
  • Embeddings: Generating embeddings is the process of creating multi-dimensional vectors from the chunks; these vectors are then stored in a vector database or vector store. Different embedding models can be used. SearchAssist uses MPNet embeddings for English-only use cases and LaBSE embeddings for multilingual use cases. The vector store in both cases is Elasticsearch.
  • Chunk retrieval: The process of retrieving the chunks most relevant to a user query. SearchAssist supports the following two techniques; you can experiment with both to find the one that gives optimal results for your content (a simplified hybrid-retrieval sketch follows this list).
    • Vector retrieval: This method finds the vectors most similar to the query vector; the chunks corresponding to those vectors are then used to generate answers.
    • Hybrid retrieval: This method combines keyword-based retrieval with vector retrieval, leveraging the strengths of both approaches.
  • Retrieval-Augmented Generation (RAG): A method for extracting insights from fragmented, unstructured data and formulating responses based on that information. It retrieves both content and context from a dataset and uses this knowledge to generate responses, which significantly enhances the precision and relevance of the generated answers.
  • Tokens: A token is a group of characters; in computing terminology, it is the smallest unit of data. Roughly, 1 token ~= 4 characters in English, so a 2,000-character passage is about 500 tokens.
  • Answer Index: An index is the searchable content from which SearchAssist generates results. The Answer Index is the searchable content from which Answers are generated; similarly, the Search Index is the searchable content from which Search results are generated.
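
To make the two chunking strategies concrete, here is a rough Python sketch. It is an approximation of the idea, not SearchAssist's exact rules; the token size and the markdown-style header pattern are assumptions.

    import re

    def text_based_chunks(text: str, tokens_per_chunk: int = 300) -> list[str]:
        """Text-based chunking (Generative answers): a fixed window of consecutive
        tokens per chunk; whitespace-separated words stand in for real tokens."""
        tokens = text.split()
        return [" ".join(tokens[i:i + tokens_per_chunk])
                for i in range(0, len(tokens), tokens_per_chunk)]

    def rule_based_chunks(document: str) -> list[str]:
        """Rule-based chunking (Extractive answers): a header plus the text up to
        the next header forms one chunk; markdown-style '#' headers are assumed."""
        parts = re.split(r"(?m)^(#+ .+)$", document)     # keep headers as separate items
        chunks, header = [], None
        for part in parts:
            if re.match(r"#+ ", part):
                header = part.strip()
            elif part.strip():
                chunks.append((header + "\n" if header else "") + part.strip())
        return chunks

Similarly, hybrid retrieval can be pictured as blending a keyword score with a vector-similarity score. The weighting and the scoring functions below are placeholders, not SearchAssist's actual ranking formula.

    def hybrid_retrieve(query: str, chunks: list[str], keyword_score, vector_score,
                        k: int = 3, alpha: float = 0.5) -> list[str]:
        """Rank chunks by a weighted blend of keyword and vector relevance."""
        scored = [(alpha * keyword_score(query, c) + (1 - alpha) * vector_score(query, c), c)
                  for c in chunks]
        scored.sort(key=lambda pair: pair[0], reverse=True)   # highest combined score first
        return [c for _, c in scored[:k]]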
