ID R&D Integration with Kore

Introduction

This document describes how to integrate ID R&D with the Kore platform. It outlines the functionality that ID R&D provides and gives step-by-step instructions for using the integration when building conversational flows.

Prerequisites

ID R&D must be enabled for your account before you can use it with a Kore bot.

If you’re an existing customer, ask Kore Support to enable Voice Biometrics for your account. The request may take 3-5 business days to complete.

Integration Architecture

ID R&D offers the following core functionalities:

  1. Generate voice templates from voice files.
  2. Compare voice templates to identify a probable match.
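
For illustration only, these two operations can be pictured as the interface below. The interface name, method names, and result fields are assumptions made for this sketch; in practice, the ID R&D service is reached through the Kore platform rather than called directly.

```typescript
// Illustrative sketch of the two core ID R&D operations as Kore consumes them.
// The interface name, method names, and result fields are assumptions, not the actual API.
interface VoiceBiometricsCore {
  // 1. Generate an encoded voice template (voice_template) from a captured voice file.
  createVoiceTemplate(audio: Uint8Array): Promise<{ voiceTemplate: string }>;

  // 2. Compare two voice templates and report how likely they are to match.
  matchVoiceTemplates(
    storedTemplate: string,
    liveTemplate: string
  ): Promise<{ matchScore: number; probability: number }>;
}
```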

Kore manages all other operational aspects, including access governance, performance management, deployment, and maintenance. Consequently, due to performance and security considerations, ID R&D functions as an independent service within the broader Kore architecture. Kore has developed additional scaffolding to integrate this service into our platform.

The diagram below describes this integration at a very high level:

The Kore Voice Gateway manages calls coming into the Kore Contact Center, which are then forwarded to the separately deployed ID R&D instances to generate voice templates.

Interaction Flow

The diagram below illustrates a sample interaction flow for a voice biometrics use case. The next section lists the supported methods that a bot developer can incorporate within their bot to implement such use cases.

Supported Methods

As part of the integration, developers have access to the following methods. These methods can be used within regular dialogs to build a voice biometric flow; a combined usage sketch follows the list.

  1. Is Enrolled (voiceBiometricUtils.isEnrolled):
    1. Collect a unique identifier as part of the conversation.
    2. Search for the unique identifier in the list of registered users, scoped by the bot ID and organization ID.
    3. If the unique identifier exists in the database, request verification from the user; otherwise, prompt for enrollment.
  2. Enrollment (voiceBiometricUtils.enrollment):
    1. Gather voice input from the user.
    2. Assess the voice input quality.
      • The measured quality values are compared against the predefined thresholds in the configuration to perform a voice quality check.
      • If the quality check indicates that more data is required, this is returned as a response code from the function call. Bot developers must handle this scenario in their conversation design.
    3. Construct a voice template using the audio file and the identifier collected in the previous step.
      • This action yields an encoded string (voice_template) linked to the identifier from step 1.
      • The voice template is stored along with the bot ID, organization ID, and unique customer ID.
  3. Verification (voiceBiometricUtils.verification):
    1. Verify the existence and enrollment status of a user ID.
    2. Collect the user’s voice input for the live call, as in the enrollment process, and generate a voice template; this template is not retained in the database.
    3. Take the voice template created in step 2 and the template stored against the user, and dispatch both to ID R&D using the match_voice_templates API.
      • This API returns a match score and a probability, which are evaluated against the values configured in the system.
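
To show how these methods fit together, here is a minimal sketch of an enroll-or-verify flow that a dialog script might run. The argument names, result fields, and threshold values below are assumptions made for illustration; confirm the actual signatures against your Kore environment.

```typescript
// Minimal sketch of an enroll-or-verify flow built from the supported methods.
// Assumptions: the argument names, result fields, and threshold values below are
// illustrative only and must be confirmed against your Kore environment.
declare const voiceBiometricUtils: {
  isEnrolled(args: { userId: string }): Promise<{ exists: boolean }>;
  enrollment(args: { userId: string; audio: Uint8Array }): Promise<{ responseCode: string }>;
  verification(args: {
    userId: string;
    audio: Uint8Array;
  }): Promise<{ matchScore: number; probability: number }>;
};

// Thresholds configured in the system (illustrative values).
const MATCH_SCORE_THRESHOLD = 0.8;
const PROBABILITY_THRESHOLD = 0.9;

async function handleVoiceBiometrics(
  customerId: string,
  capturedAudio: Uint8Array
): Promise<string> {
  // 1. Is Enrolled: check whether the caller's identifier is already registered.
  const enrolled = await voiceBiometricUtils.isEnrolled({ userId: customerId });

  if (!enrolled.exists) {
    // 2. Enrollment: build and store a voice template for this identifier.
    const result = await voiceBiometricUtils.enrollment({
      userId: customerId,
      audio: capturedAudio,
    });
    if (result.responseCode === 'MORE_DATA_REQUIRED') {
      // Quality check asked for more audio; the dialog must re-prompt the caller.
      return 'reprompt';
    }
    return 'enrolled';
  }

  // 3. Verification: create a transient template from the live call and compare it
  //    with the stored template (match_voice_templates under the hood).
  const verification = await voiceBiometricUtils.verification({
    userId: customerId,
    audio: capturedAudio,
  });

  // Evaluate the returned match score and probability against the configured thresholds.
  const isMatch =
    verification.matchScore >= MATCH_SCORE_THRESHOLD &&
    verification.probability >= PROBABILITY_THRESHOLD;
  return isMatch ? 'verified' : 'rejected';
}
```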
