Simulation and Testing

SearchAssist also provides a simulator to test the answer snippets engine and find the most suitable configuration or model for your business needs. It presents the answer snippet flow analysis both as text and in JSON format. You can enable one or more models, use the simulator to assess how well each model answers user queries, and choose the most appropriate one.

To start the simulator, click the Simulate button in the top-right corner of the page.

Enter a sample query and click Test. Depending on the models enabled for answer snippets and the priority set, the simulator displays the results from each model, as shown in the sample below.

The model marked as the Presented answer is the one set to the higher priority, and its snippet is what is displayed to the user on the search results page. The simulator also shows the time each model takes to find the snippets and, for the extractive model, the overall similarity score of the snippet.

Response when the extractive model is set to a higher priority.

Response when the generative model is set to a higher priority.

The answer snippet flow analysis, or simulation results, is also available in a developer-friendly JSON format. This format provides additional information about the snippets and will be further enhanced in future releases. Click JSON view to see the information in JSON format.

The response in the JSON format is shown below.
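
The exact fields in the JSON view depend on the models you have enabled and may change as the format is enhanced. The sketch below is only illustrative: the field names (presentedAnswer, extractiveModel, generativeModel, similarityScore, timeTakenMs, and so on) are assumptions made for this example, not the actual SearchAssist schema, and the comments are added for readability rather than being valid JSON.

{
  // Query submitted in the simulator (hypothetical example)
  "query": "How do I reset my password?",

  // Model whose snippet is shown to the user, based on the configured priority
  "presentedAnswer": "generative",

  "extractiveModel": {
    "enabled": true,
    "timeTakenMs": 450,
    "similarityScore": 0.82,
    "snippet": "To reset your password, go to Settings > Account and select Reset Password."
  },

  "generativeModel": {
    "enabled": true,
    "timeTakenMs": 1300,
    "snippet": "You can reset your password from the Account section under Settings. Select Reset Password and follow the prompts."
  }
}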
