Introduction to Experiments

You can evaluate the performance of your SearchAssist app by running an A/B test that compares variations of your index and search configurations. A variant, in this context, is a unique combination of index and search configurations assigned to one SearchAssist app. The SearchAssist platform lets you quickly set up and run experiments to continuously test variations and improve search relevance.

Consider the following scenarios:

  • Scenario 1: You configured an index and tuned search configurations to optimize search results. You have run these settings against test data in a controlled environment, but will they work with real-time data?
  • Scenario 2: You deployed a SearchAssist app and analyzed its performance. You want to tweak the index and/or search configuration, so you clone the existing configuration and make the necessary changes. How can you be sure these changes lead to better search results?

Using Experiments, you can find the most effective combination of index and search configurations. Each experiment can hold up to four variants (A, B, C, and D) and splits live traffic randomly among them for a fixed period. SearchAssist helps you:

  • create up to four variants using unique combinations of previously created indices and search configurations
  • run live tests in the same SearchAssist app by equally splitting live traffic among the variants
  • evaluate variant performance
  • measure outcomes on metrics such as clicks and click-through rates (see the sketch after this list)
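Click-through rate is the standard ratio of result clicks to searches served. Below is a minimal sketch of how you might compare variants on this metric, assuming per-variant counts exported from the analytics dashboards; the field names and numbers are illustrative, not a SearchAssist API:

    # Hypothetical per-variant counts; labels and numbers are illustrative only.
    variant_stats = {
        "A": {"searches": 5000, "clicks": 950},
        "B": {"searches": 5000, "clicks": 1100},
    }

    for variant, stats in variant_stats.items():
        # Click-through rate: clicks divided by searches served.
        ctr = stats["clicks"] / stats["searches"]
        print(f"Variant {variant}: CTR = {ctr:.1%}")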

Internally, every search is associated with a unique user identifier, which serves two purposes:

  • To ensure randomness, SearchAssist creates a set of users for each variant: new users are randomly routed into one of the variants based on a hash of their unique user identifier (see the sketch after this list).
  • Each user keeps the same assigned variant for the duration of the experiment, which ensures test reliability.
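The hashing scheme itself is not documented here, but deterministic hash-based bucketing is the standard way to get both properties at once: assignments look random across users yet are stable for any one user. The sketch below shows the general technique in Python; it illustrates the idea, not SearchAssist's actual implementation:

    import hashlib

    def assign_variant(user_id: str, experiment_id: str, variants: list[str]) -> str:
        """Deterministically map a user to one variant.

        Hashing the user and experiment IDs together yields a value that is
        effectively random across users but identical on every call for the
        same user, so a user stays in one variant for the whole experiment.
        """
        digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % len(variants)
        return variants[bucket]

    # Example: the same user always lands in the same variant.
    print(assign_variant("user-42", "exp-relevance-1", ["A", "B", "C", "D"]))

Weighted splits (the per-variant traffic percentages described below) follow the same idea: map the hash into a 0-100 range and compare it against cumulative per-variant thresholds.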

SearchAssist gives you granular control over an experiment by:

  • specifying the percentage of traffic diverted to each variant, and/or
  • setting the duration of an experiment (both controls are shown in the sketch below).
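Putting these controls together, an experiment definition conceptually carries a variant list, a traffic split, and a duration. The structure below is a hypothetical illustration of that shape, not the actual SearchAssist configuration schema:

    # Hypothetical experiment definition; every field name is illustrative.
    experiment = {
        "name": "relevance-tuning-q3",
        "duration_days": 14,  # fixed period for the experiment
        "variants": [
            {"label": "A", "index": "idx-baseline",
             "search_config": "cfg-baseline", "traffic_percent": 50},
            {"label": "B", "index": "idx-baseline",
             "search_config": "cfg-boosted-titles", "traffic_percent": 50},
        ],
    }

    # The traffic split across variants should account for all users.
    assert sum(v["traffic_percent"] for v in experiment["variants"]) == 100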
