Custom LLM Prompts

SearchAssist lets you use generative AI to enrich and enhance your data. Use this stage to configure prompts that direct the generative AI model to produce the desired output. Well-structured prompts yield more precise results that align better with the task at hand.

Use the following fields to configure the LLM Prompt stage. 

Stage Name: Give an appropriate name to the stage for easy identification.

Stage Type: Set the stage type to Custom LLM Prompt.

Integration: This drop-down lists all the configured LLM integrations. Select the integration you want to use for the prompts. If no LLM is integrated with the application, go to the Integrations page and configure an LLM engine before defining this stage.

Configuration

  • Condition: Under Configuration, first define the condition that specifies the data this processing applies to. For example, to create a summary field only for file-type content, set the field to sys_content_type, the operator to equal to, and the value to file. Only files will then be processed in this stage. You can define one or more conditions to select specific data.
  • Outcome: As the outcome, define the prompt you want to use and the target field in which to store the result. Continuing the above example, define a prompt like “Write a summary of {{file_content}}. Try to use only the information provided.” and set the target field to ‘file_summary’. If the target field does not already exist, the application adds it as a new field.
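The condition and outcome described above can be pictured as a small configuration object plus a condition check. This is a hypothetical sketch, not SearchAssist's actual schema: the field names (sys_content_type, file_summary) come from the example, but the dictionary layout and the `matches` helper are illustrative only.

```python
# Hypothetical sketch of the stage configuration from the example above.
# The dict layout is illustrative; SearchAssist's internal schema may differ.
stage_config = {
    "stage_name": "Summarize Files",
    "stage_type": "Custom LLM Prompt",
    "conditions": [
        {"field": "sys_content_type", "operator": "equals", "value": "file"}
    ],
    "prompt": "Write a summary of {{file_content}}. "
              "Try to use only the information provided.",
    "target_field": "file_summary",
}

def matches(document: dict, conditions: list) -> bool:
    """Return True only if the document satisfies every condition."""
    return all(
        document.get(c["field"]) == c["value"]
        for c in conditions
        if c["operator"] == "equals"
    )
```

With this sketch, a document whose sys_content_type is "file" would pass the condition and be processed, while a web page would be skipped.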

You can choose prompts from the list of sample prompts provided by SearchAssist. Sample Prompts are designed for some of the frequently used generative AI tasks.

Note that to use index fields in a prompt, enclose the field name in double curly braces, as in {{file_content}}. When the prompt is sent to the third-party application, each field reference in curly braces is replaced with the field's actual content.
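The placeholder substitution can be sketched as a simple template-rendering step. This is an illustration of the behavior described above, not SearchAssist's actual implementation; the `render_prompt` function is hypothetical.

```python
import re

def render_prompt(template: str, document: dict) -> str:
    """Replace each {{field}} placeholder with the document's field value.

    Unknown fields are replaced with an empty string in this sketch;
    the real application may handle missing fields differently.
    """
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(document.get(m.group(1), "")),
        template,
    )
```

For example, rendering "Write a summary of {{file_content}}." against a document whose file_content is the file's text produces the final prompt that is sent to the LLM.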

After making the changes, click the Save Configuration button. To test and verify the behavior of the stage, use the Simulate option: it applies the configuration to the test files in the simulator and shows the resulting output. The simulator also reports any errors in the configuration.
