Custom Integration

Custom Integration enables you to integrate SearchAssist with any third-party LLM. This gives customers the freedom to use the LLM of their choice, with the attendant advantages in performance, security, scalability, and innovation.

Integration Overview

The diagram below summarizes how SearchAssist interacts with a custom LLM. The interaction is implemented through an Answering Service, an interface that mediates communication between SearchAssist and the custom LLM.

  1. When a user sends a query to SearchAssist, the application searches for the relevant chunks.
  2. The relevant chunks, along with the user query, are sent to the Answering Service.
  3. The Answering Service adds the configuration information required by the custom LLM to generate the corresponding answer and forwards the request.
  4. The custom LLM generates a response based on these inputs and sends it back to the Answering Service.
  5. The Answering Service returns the response to the SearchAssist application as the answer.
  6. The SearchAssist application then displays it to the user.

Note that Kore offers a sample Answering Service. You can use it as-is to communicate with your custom LLM or extend it to suit your requirements. However, for seamless communication, your service must adhere to the request format (in which SearchAssist sends the user query along with the relevant chunks) and the response format (in which SearchAssist expects the answer from the Answering Service).
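The authoritative request and response formats are published with the sample service. Purely as an illustration of the exchange, the payloads might be shaped like the following sketch; every field name here is an assumption for illustration, not the published contract:

```typescript
// Hypothetical shapes for the SearchAssist <-> Answering Service exchange.
// The authoritative formats are published with the Kore sample service;
// the field names below are illustrative assumptions only.

interface AnsweringServiceRequest {
  query: string;           // the user's search query
  chunks: Array<{
    text: string;          // chunk content retrieved by SearchAssist
    score: number;         // similarity score of the chunk
    sourceUrl?: string;    // origin of the chunk, if available
  }>;
}

interface AnsweringServiceResponse {
  answer: string;          // generated answer returned to SearchAssist
  references?: string[];   // optional source references for the answer
}
```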

To enable communication with a custom LLM:

  1. Run the Answering Service. 
  2. Enable Custom Integration in the SearchAssist application.  

Running Answering Service

  • Download and Set Up: You can download the sample service code or implement your own. The sample service, a Node.js service, can be downloaded from here. If you plan to implement your own Answering Service, adhere to the request and response formats published with the sample service for seamless interaction with SearchAssist.
  • Configuration: The sample service supports integration with OpenAI and Azure OpenAI for answering. Make the necessary configuration changes to enable communication with any other custom LLM. For more information, refer to this.
  • Run the service: Install the necessary packages and run the service (a minimal sketch of such a service follows below).
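For orientation, here is a minimal sketch of what such a service can look like in Node.js/TypeScript, assuming Express. The endpoint path, the Bearer-token check, and the callCustomLLM() helper are illustrative stand-ins, not the code of the Kore sample service:

```typescript
// Minimal Answering Service sketch. Assumes Express; all names and the
// payload shape are hypothetical, mirroring the interfaces sketched above.
import express from "express";

const app = express();
app.use(express.json());

// Placeholder for the call to your LLM provider -- the piece you customize.
async function callCustomLLM(prompt: string): Promise<string> {
  // e.g. forward `prompt` to your LLM's completion endpoint and
  // return the generated text.
  return "…generated answer…";
}

app.post("/answer", async (req, res) => {
  // Reject requests that do not carry the API token configured in SearchAssist.
  if (req.headers.authorization !== `Bearer ${process.env.API_TOKEN}`) {
    return res.status(401).json({ error: "unauthorized" });
  }

  const { query, chunks } = req.body;

  // Assemble the LLM prompt from the user query and the retrieved chunks.
  const context = chunks
    .map((c: { text: string }) => c.text)
    .join("\n---\n");
  const prompt =
    `Answer the question using only the context below.\n` +
    `Context:\n${context}\n\nQuestion: ${query}`;

  const answer = await callCustomLLM(prompt);

  // Respond in the format SearchAssist expects from the Answering Service.
  res.json({ answer });
});

app.listen(3000, () => console.log("Answering Service listening on :3000"));
```

The separation is the point of the design: the route handler owns the SearchAssist contract, while callCustomLLM() owns the provider-specific details, so swapping LLMs touches only one function.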

Enabling Custom Integration in SearchAssist

Go to the Integrations page under the Manage tab. Select Custom Integration.

On the Authorization tab, enter the following details of the Answering Service:

  • Endpoint: The URL where the Answering Service is hosted.
  • API token: The token used to authenticate requests sent to the Answering Service.
  • Custom Integration Scope: Currently, this is limited to Answers only.

To test the configuration parameters and the communication with the Answering Service, enter sample values for the user query and the relevant chunks, and click the Test button.

If the connection is successful, you will see the response from the service below the sample values.
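If you want to verify the hosted service outside of SearchAssist first, a direct call that mirrors what the Test button sends might look like the following sketch (again using the hypothetical endpoint path and payload shapes from above, not a published contract):

```typescript
// Smoke-test a hosted Answering Service directly. Requires Node 18+ for the
// global fetch. URL, path, and payload fields are illustrative assumptions.
async function testAnsweringService(): Promise<void> {
  const response = await fetch("https://answering-service.example.com/answer", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // The same API token you enter on the Authorization tab.
      Authorization: `Bearer ${process.env.API_TOKEN}`,
    },
    body: JSON.stringify({
      query: "What is the refund policy?",
      chunks: [
        { text: "Refunds are issued within 30 days of purchase.", score: 0.92 },
      ],
    }),
  });
  console.log(response.status, await response.json()); // expect the answer payload
}

testAnsweringService();
```

A 200 response carrying the generated answer confirms that the endpoint and token are wired up correctly.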


Next, go to the Answer Snippets page and select Custom Integration as the Generative Model. 

Configure the Similarity Score threshold and the number of chunks to be sent to the LLM.
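For intuition, the effect of these two settings can be pictured as a filter over the retrieved chunks. This is purely illustrative; SearchAssist applies the settings internally:

```typescript
// Only chunks whose similarity score meets the threshold are kept, and at
// most `maxChunks` of the highest-scoring ones are sent to the LLM.
const threshold = 0.7; // Similarity Score threshold (example value)
const maxChunks = 3;   // number of chunks to send to the LLM (example value)

const retrieved = [
  { text: "Chunk A…", score: 0.91 },
  { text: "Chunk B…", score: 0.64 }, // dropped: below threshold
  { text: "Chunk C…", score: 0.82 },
];

const selected = retrieved
  .filter((chunk) => chunk.score >= threshold)
  .sort((a, b) => b.score - a.score)
  .slice(0, maxChunks); // Chunk A and Chunk C, in score order
```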

Note: Currently, Custom Integration cannot be used along with the Extractive Answer Model; enable either the Extractive Model or Custom Integration for Generative Answers, but not both. If both are enabled, you will not receive Extractive Answers.
