Define Experiments

Experiments let you test multiple combinations of index and search configurations and pick the one that best fulfills your business needs. SearchAssist allows you to create experiments with two or more variants so you can continuously improve search relevance and user experience.

Overview

Consider the following scenarios:

Scenario 1: You have configured an index and tuned search configurations to optimize search results. You have run them against test data in a controlled environment, but will these settings work with real-time data?

Scenario 2: You have deployed your search application and analyzed its performance. You want to tweak the index configuration, the search configuration, or both. You have cloned the existing configuration and made the necessary changes. How can you be sure that these tweaks will work?

Using Experiments, you can find out which combination of index and search configuration is more effective. Experiments help you:

  • Create variants (up to 4) using combinations of previously created indices and search configurations.
  • Run them live by splitting traffic across the variants.
  • See which one performs best based on metrics such as clicks and click-through rate.

Internally, every search is associated with a unique user identifier. This serves two purposes:

  • It ensures randomness. The application creates a set of users for each variant. When a new user arrives, they are randomly assigned to one of the variants based on a hash of their unique user identifier.
  • It keeps the distribution consistent. Once a user is assigned to a variant, they stay with that variant, which makes the experiment's conclusions reliable (see the sketch after this list).
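
The exact mechanism SearchAssist uses is not documented here, but the following minimal Python sketch, with illustrative variant names and percentages, shows the general idea: hashing a stable user identifier produces an assignment that is effectively random across users yet always the same for a given user, and the hash can be mapped to variants in proportion to the configured traffic split.

    import hashlib

    def assign_variant(user_id: str, traffic_split: dict[str, int]) -> str:
        """Map a user to a variant in proportion to the traffic split.

        traffic_split maps variant name -> percent of traffic (summing to 100).
        The same user_id always lands in the same variant (sticky assignment),
        while different user_ids spread randomly across variants via the hash.
        """
        # Hash the identifier and reduce it to a bucket in the range 0-99.
        digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
        bucket = int(digest, 16) % 100

        # Walk the cumulative traffic ranges until the bucket falls inside one.
        cumulative = 0
        for variant, percent in traffic_split.items():
            cumulative += percent
            if bucket < cumulative:
                return variant
        raise ValueError("traffic_split percentages must add up to 100")

    # Illustrative 50/50 split between two variants: repeated calls for the
    # same user always return the same variant.
    split = {"variant-a": 50, "variant-b": 50}
    assert assign_variant("user-42", split) == assign_variant("user-42", split)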

You can further control an experiment by:

  • Specifying the percentage of traffic assigned to each variant.
  • Setting the duration of the experiment.

Add Experiments

To add an experiment, follow the steps below. A sketch of the resulting definition and its limits follows the list.

  1. Click the Analytics tab at the top and select Experiments from the drop-down list.
  2. On the Experiments page, click + Add Experiment on the top-right.
  3. In the New Experiment dialog box, enter a name in the Name of Experiment field.
  4. Click + Add Variant.
  5. Enter a name in the Variant Name field.
  6. Select an index configuration from the Index drop-down list.
  7. Select a configuration from the Search Configuration drop-down list.
  8. Repeat steps 4 through 7 to add more variants, up to a maximum of 4.
  9. Under the Traffic section, adjust the slider to set the percentage of user query traffic assigned to each variant.
  10. In the Duration field, enter the number of days the experiment should run, up to a maximum of 90 days.
  11. Click Add.
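
These steps are performed in the SearchAssist UI. As an illustration only, the limits they describe (a name, an index, and a search configuration per variant; two to four variants; a traffic split expressed as percentages; and a duration of up to 90 days) can be captured as a simple data structure with a validation step. The field names below are assumptions, not SearchAssist's internal or API representation.

    from dataclasses import dataclass, field

    @dataclass
    class Variant:
        name: str
        index: str                 # a previously created index configuration
        search_configuration: str  # a previously created search configuration
        traffic_percent: int       # share of user query traffic for this variant

    @dataclass
    class Experiment:
        name: str
        duration_days: int
        variants: list[Variant] = field(default_factory=list)

        def validate(self) -> None:
            # Mirror the limits described in the steps above.
            if not 2 <= len(self.variants) <= 4:
                raise ValueError("an experiment needs between 2 and 4 variants")
            if sum(v.traffic_percent for v in self.variants) != 100:
                raise ValueError("traffic percentages must add up to 100")
            if not 0 < self.duration_days <= 90:
                raise ValueError("duration must be between 1 and 90 days")

    experiment = Experiment(
        name="relevance-tuning",
        duration_days=30,
        variants=[
            Variant("baseline", "index-v1", "search-config-v1", 50),
            Variant("tuned", "index-v2", "search-config-v2", 50),
        ],
    )
    experiment.validate()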

Run Experiments

After adding experiments, you can run them to measure search results for each combination of index and search configuration.

To run an experiment, follow the steps below:

  1. Click the Analytics tab at the top and select Experiments from the drop-down list.
  2. On the Experiments page, locate the experiment you want to run in the list.
  3. Click Run for the respective experiment.
  4. After the experiment starts running, the status changes to Active, and the Duration is displayed.

Note: You can run only one experiment at any given point in time.

Actions

Once you run an experiment, you can:

  1. Monitor the progress by clicking the experiment to view the following metrics for each variant:
    • Variant name
    • Index assigned
    • Search configuration
    • Traffic assigned
    • Number of users
    • Number of searches
    • Number of clicks
    • Click-through rate, i.e., the number of unique clicks per number of appearances (a worked example follows this list)
  2. Pause or stop the experiment.
  3. Delete a paused or stopped experiment.
  4. Edit the configuration of an experiment that has not yet run.
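
As a worked illustration of the metrics above, click-through rate is the number of unique clicks divided by the number of appearances, and comparing it across variants is how you pick a winner. The per-variant figures below are made up for the example.

    # Hypothetical figures collected while an experiment is running.
    results = {
        "baseline": {"searches": 1200, "unique_clicks": 240, "appearances": 1200},
        "tuned": {"searches": 1180, "unique_clicks": 320, "appearances": 1180},
    }

    def click_through_rate(stats: dict) -> float:
        # Click-through rate = unique clicks / appearances.
        return stats["unique_clicks"] / stats["appearances"]

    for variant, stats in results.items():
        print(f"{variant}: CTR = {click_through_rate(stats):.1%}")   # e.g. baseline: CTR = 20.0%

    winner = max(results, key=lambda v: click_through_rate(results[v]))
    print(f"Best performing variant: {winner}")                      # tuned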
