Debug Tool

SearchAssist provides answer snippets through two answer-generation models. To get the most relevant answers from the generative model, you need to fine-tune the prompts sent to the LLM, which requires comprehensive information about the data used in answer generation. Similarly, to get the most suitable answers from the extractive model, you need to know which chunks were used. SearchAssist has a built-in debugger that provides insights into the answer-generation process for both the Generative Model and the Extractive Model. This debug information can help you assess and improve the answers generated in response to search queries.

The debugger is available in the Preview widget. The debug option appears after you perform a search in preview mode and the application returns results.

Click the debug icon to get insights into the answer-generation process. The debugger is displayed on the left of the Preview widget. It has two tabs: Answer and Search Results. The Answer tab gives insight into the answer-generation process, and the Search Results tab shows the search data for the query in JSON format.

The information corresponding to the Extractive Model is currently displayed only in the JSON view. For the Generative Model, information is displayed both in the Summary widget and in the JSON view.

In the JSON view, information corresponding to extractive answers is displayed under the extractive_answers tag, and information corresponding to generative answers is displayed under the generative_answers tag.
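
For illustration, the JSON view might be organized along the following lines. This is only a sketch: apart from the extractive_answers and generative_answers tags, the nested field names are hypothetical placeholders rather than the exact schema.

    {
      "extractive_answers": [
        {
          "answer": "Refunds can be requested within 30 days.",
          "score": 0.87,
          "source": "refund-policy.pdf"
        }
      ],
      "generative_answers": {
        "qualified_chunks": [ "..." ],
        "prompt": "...",
        "response": { "...": "..." }
      }
    }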

Currently, the Summary view provides debug information only for generative answers.

The Qualified Chunks page under the Answer tab provides information about the chunks found relevant to the search query. You can view the chunks under three tabs, as described below.

All Qualified – All the chunks, from all sources, found relevant to the search query. The total number of chunks is displayed in the tab header.

Sent to LLM – The chunks sent to the LLM for answer generation.

Used for answering – The chunks used in generating the answer.

Each chunk in these lists displays a score at the top that indicates its relevance to the search query; the higher the score, the more relevant the chunk is to the answer. Of all the qualified chunks, the top five by score are sent to the LLM for answer generation. Each chunk also shows its contents, its source, the page number of the data in the original source, and the chunk title, and is marked if it was sent to the LLM for answer generation.
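
For example, a single entry in these chunk lists might look like the sketch below. The key names are hypothetical, but each field corresponds to a detail surfaced in the debugger: the relevance score, the chunk title, the contents, the source, the page number in the original source, and whether the chunk was sent to the LLM.

    {
      "score": 0.91,
      "title": "Refund policy",
      "content": "Refunds can be requested within 30 days of purchase...",
      "source": "refund-policy.pdf",
      "page_number": 3,
      "sent_to_llm": true
    }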

Under the Prompt tab, you can find the details of the prompt, that is, the textual input sent to the LLM. The top section lists the internal parameters, along with their values, that SearchAssist uses to fine-tune the output of the LLM.

The prompt sent to the LLM for answer generation is displayed in the bottom pane. A prompt consists of a set of instructions, along with few-shot examples, that enables the LLM to generate the desired results accurately. Use this section to verify that the prompt properly defines the objective, the tone of the response, and the format of the response. To edit the prompt, go to the Answer Snippets page under the Indices tab.
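
As a hypothetical sketch, the parameters and prompt shown on this tab could resemble the following. The parameter names (such as temperature) and the instruction text are illustrative assumptions, not SearchAssist's actual defaults.

    {
      "parameters": {
        "temperature": 0.2,
        "max_tokens": 512
      },
      "prompt": "Answer the user's query using only the content provided below. Respond in a friendly tone, in no more than three sentences. Example query: What is the refund window? Example answer: Refunds can be requested within 30 days. Content: <qualified chunks> Query: <user query>"
    }

In this sketch, the opening instructions define the objective, tone, and format of the response, and the example query-answer pair is the few-shot component.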

Under the Response tab, you can view the response of the LLM in JSON format.
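
The shape of this JSON depends on the LLM provider configured for the application. Assuming an OpenAI-style chat completion model, for example, the raw response would look roughly like this (all values are illustrative):

    {
      "object": "chat.completion",
      "choices": [
        {
          "index": 0,
          "message": {
            "role": "assistant",
            "content": "Refunds can be requested within 30 days."
          },
          "finish_reason": "stop"
        }
      ],
      "usage": {
        "prompt_tokens": 512,
        "completion_tokens": 12,
        "total_tokens": 524
      }
    }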

To view all the information under the Answer tab in JSON format, click the JSON view link.
