You can evaluate the performance of your SearchAssist app by running an A/B test to compare index and search configuration variations. A variant in this context is a unique combination of index and search configurations assigned to one SearchAssist app. The SearchAssist platform lets you quickly set up and run experiments to continuously test variations and improve search relevance.
Consider the following scenarios:
- Scenario 1: You configured an index and tuned search configurations to optimize search results. You have run these settings against test data in a controlled environment, but will they work with real-time data?
- Scenario 2: You deployed a SearchAssist app and analyzed its performance. You want to tweak the index and/or search configuration, so you clone the existing configuration and make the necessary changes. How can you be sure these changes lead to better search results?
Using Experiments, you can find the most effective combination of index and search configurations. Each experiment can hold up to four variants (A, B, C, and D) and splits traffic randomly among them for a fixed period. SearchAssist helps you:
- create up to four variants using unique combinations of previously created indices and search configurations
- run live tests in the same SearchAssist app by equally splitting live traffic among the variants
- evaluate variant performance
- measure outcomes on metrics like clicks and click-through rates
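For example, click-through rate is simply clicks divided by the searches served for each variant. A minimal sketch of that comparison, using hypothetical per-variant counts (the field names are illustrative, not the SearchAssist analytics schema):

```python
# Hypothetical per-variant counts; not the SearchAssist analytics API.
variant_stats = {
    "A": {"searches": 5200, "clicks": 1430},
    "B": {"searches": 5100, "clicks": 1680},
}

for name, stats in variant_stats.items():
    # Click-through rate = clicks / searches served for that variant.
    ctr = stats["clicks"] / stats["searches"] if stats["searches"] else 0.0
    print(f"Variant {name}: CTR = {ctr:.1%}")
```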
Internally, every search is associated with a unique user identifier. This serves two purposes:
- To ensure randomness, SearchAssist creates a set of users for each variant: new users are randomly routed to one of the variants based on a hash of their unique user identifier.
- To ensure test reliability, each user stays with the same assigned variant for the duration of the experiment.
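SearchAssist's internal routing is not exposed, but hash-based assignment can be illustrated with a minimal sketch: a deterministic hash of the user identifier picks the bucket, so repeat searches by the same user always land on the same variant. The function name and variant labels below are assumptions for illustration only.

```python
import hashlib

def assign_variant(user_id: str, variants: list[str]) -> str:
    """Deterministically map a user identifier to one variant.

    The same user_id always hashes to the same bucket, so a user keeps
    their assigned variant for the whole experiment.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Example: split users across four variants A-D.
print(assign_variant("user-1234", ["A", "B", "C", "D"]))  # same result every time for this id
```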
SearchAssist gives you granular control over an experiment by:
- specifying the percentage of traffic diverted to each variant, and/or
- setting the duration of an experiment.
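As a rough illustration of how such settings could be expressed, the sketch below maps a user's hash bucket (0-99, derived as in the previous sketch) to a variant according to a percentage split, and stops routing once the experiment window ends. The names and values are hypothetical, not SearchAssist's configuration schema.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical experiment settings: a traffic percentage per variant
# and a fixed run window. Not SearchAssist's actual schema.
traffic_split = {"A": 40, "B": 30, "C": 20, "D": 10}  # must total 100
experiment_start = datetime.now(timezone.utc)
experiment_end = experiment_start + timedelta(days=14)

assert sum(traffic_split.values()) == 100

def variant_for_bucket(bucket: int, now: datetime) -> str | None:
    """Map a user's hash bucket (0-99) to a variant per the traffic split.

    Returns None once the experiment window has ended.
    """
    if now >= experiment_end:
        return None
    cumulative = 0
    for variant, share in traffic_split.items():
        cumulative += share
        if bucket < cumulative:
            return variant
    raise ValueError("bucket must be in the range 0-99")

print(variant_for_bucket(55, datetime.now(timezone.utc)))  # "B": buckets 40-69 go to B
```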