- Improve accuracy
- Reduce latency
- Better handle domain-specific vocabulary
- Support multilingual deployments
- Supported for real-time models from the OpenAI and Azure OpenAI providers.
- Applies across the app.
Enable Transcription Settings
- Go to Agent Platform → Orchestrator
- Enable Voice-to-Voice interactions
- Open Voice AI Model Settings
- Scroll to Input Audio Transcription
- Configure:
- Transcription Language
- Transcription Prompt
- By default, the transcription language is set to auto-detect, and the prompt is empty.
- Click Save
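For reference, the settings above correspond to the provider's realtime transcription configuration. The sketch below is an assumption based on the OpenAI Realtime API's `session.update` event shape; the platform constructs and sends this payload for you, so you never write it by hand.

```python
import json

# Sketch of how the UI settings might map onto an OpenAI realtime
# session payload (assumption: OpenAI Realtime API field names).
session_update = {
    "type": "session.update",
    "session": {
        "input_audio_transcription": {
            "model": "whisper-1",  # transcription model (assumption)
            "language": "en",      # ISO 639-1 code; omit for auto-detect
            "prompt": "",          # transcription prompt; empty by default
        }
    },
}
print(json.dumps(session_update, indent=2))
```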
Transcription Configurations
Transcription Language
Specifies the language of user speech.
- Default: Auto-detect
- Format: ISO 639-1 code (for example, en, hi, ta)
- Use Auto-detect for global or multilingual apps
- Set a specific language for:
- A known user base
- Better accuracy
- Lower latency
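If you set the language programmatically, it can help to normalize the value before saving. This is a hypothetical helper, not part of the platform: ISO 639-1 codes are two letters, and an empty value falls back to auto-detect.

```python
# Hypothetical helper: normalize a Transcription Language value.
# An empty value means auto-detect; otherwise require ISO 639-1.
def normalize_language(code):
    if not code or not code.strip():
        return None  # auto-detect
    code = code.strip().lower()
    if len(code) != 2 or not code.isalpha():
        raise ValueError(f"expected an ISO 639-1 code such as 'en', got {code!r}")
    return code

print(normalize_language("EN"))  # en
print(normalize_language(""))    # None
```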
Transcription Prompt
Provides context to the ASR model, which transcribes asynchronously. For example, it helps the model recognize:
- Product names
- Acronyms
- Industry-specific terms
This prompt only affects transcription, not conversation responses.
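As an illustration, a transcription prompt might simply list the vocabulary the model is likely to hear. The product names and acronyms below are placeholders; substitute your own terms.

```python
# Hypothetical example of a transcription prompt. It biases the ASR
# model toward domain vocabulary; it does not affect conversation responses.
transcription_prompt = (
    "The caller may mention product names such as AcmePay and AcmeVault, "
    "acronyms such as KYC, APR, and NACH, and industry terms such as "
    "chargeback and settlement batch."
)
print(transcription_prompt)
```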