Create a PII Detection Pattern
- Go to Settings > PII & Guardrails.
- Click + New Pattern.
- Enter a Pattern Name — a unique name to identify the rule.
- Set Status to Enabled or Disabled. Disabled patterns are skipped during processing.
- Select a Detection Method and configure the fields for the selected method (see below):
- Detect using Regex — uses a regular expression to detect a specific type of sensitive information.
- Detect using AI — uses ML-based entity detection to identify and anonymize sensitive data.
- Under Access Control, select which platform components can access the original unredacted value: Users, Code Tools, Workflow Tools, MCP Tools, Knowledge, Events, Pre-Processor, and Proxy Agent. Components that are not selected receive the redacted or anonymized value.
- Use Test Pattern to enter a sample input and validate how the rule detects and handles sensitive data.
- Click Create.
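The fields configured in the steps above can be pictured as a configuration object. This is a hypothetical shape for illustration only, not the platform's actual schema; all field names are assumptions:

```python
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class PIIPattern:
    # Hypothetical shape mirroring the fields in the steps above;
    # not the platform's actual schema.
    name: str                          # unique Pattern Name
    enabled: bool = True               # Status; disabled patterns are skipped
    detection_method: str = "regex"    # "regex" or "ai"
    regex: Optional[str] = None        # required when detection_method == "regex"
    # Components listed here receive the original unredacted value;
    # all others receive the redacted or anonymized value.
    access_control: Set[str] = field(default_factory=set)

pattern = PIIPattern(
    name="email-redaction",
    regex=r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}",
    access_control={"Users", "Knowledge"},
)
```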
Detect using Regex
| Field | Description |
|---|---|
| Regex Definition | Regular expression used to detect the PII entity. For example, to detect email addresses: [a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,} |
| Redaction Method | How detected PII is handled. Options are: • Replace with random string — replaces the detected value with a system-generated unique string. • Replace with predefined text — replaces the detected value with fixed text you specify. • Partially mask the value — reveals specific characters and masks the rest. Configure Mask Character, Characters to Skip from Start (visible characters at the beginning), and Characters to Skip from End (visible characters at the end). For example, with mask character * and 12 characters skipped from the end, john.doe@example.com becomes ****.***@example.com. |
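The regex detection and partial-masking behavior can be sketched in Python. This is a minimal illustration of the documented example, not the platform's implementation; note that it assumes non-alphanumeric separators inside the masked region are left visible, which is consistent with the example where the `.` in john.doe survives masking:

```python
import re

EMAIL_RE = re.compile(r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}")

def partial_mask(value, mask_char="*", skip_start=0, skip_end=0):
    """Keep skip_start leading and skip_end trailing characters visible,
    mask the rest. Non-alphanumeric separators in the masked region are
    left intact (assumption, matching the documented example)."""
    n = len(value)
    return "".join(
        ch if (i < skip_start or i >= n - skip_end or not ch.isalnum())
        else mask_char
        for i, ch in enumerate(value)
    )

def redact(text):
    # Apply the mask to every regex match in the input.
    return EMAIL_RE.sub(lambda m: partial_mask(m.group(), skip_end=12), text)

print(redact("Contact john.doe@example.com for access."))
# → Contact ****.***@example.com for access.
```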
Detect using AI
| Field | Description |
|---|---|
| Anonymization Entities | Select one or more entity categories to anonymize, such as EMAIL_ADDRESS, DATE, PERSON, CREDIT_CARD, SSN, UUID, or PHONE_NUMBER. |
| Included Values | Add values that must always be anonymized, even if the ML model does not detect them. Detected values are replaced with tokens such as [REDACTED_CUSTOM1]. Use this for company names, project codenames, or unusual names that the model may miss. |
| Ignored Values | Add values that should never be anonymized, even if detected by the ML model. Use this for public figures, product names that resemble person names, or domain-specific terms that should pass through unredacted. |
| Detection Sensitivity | Adjust the confidence threshold to control how aggressively the ML model flags entities. Strict — catches more entities including ambiguous mentions but may produce false positives. Permissive — flags only high-confidence detections, reducing false positives but potentially missing edge cases. |
| Replace with Synthetic Data | When enabled, detected values are replaced with realistic-looking synthetic values — for example, replacing [PERSON] with “John Smith” — to preserve natural language flow for the model. The mapping between original and synthetic values is retained for deanonymization. |
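How Included Values, Ignored Values, and token replacement interact can be sketched as follows. This is a hypothetical illustration, not the platform's code; the `detections` list stands in for ML model output, and all function and variable names are assumptions:

```python
def anonymize(text, detections, included=(), ignored=()):
    """Replace detected entities with tokens and keep the mapping needed
    for deanonymization. `included` values are always redacted (even if
    the model misses them); `ignored` values pass through unredacted."""
    mapping, counters = {}, {}
    # Force-included custom values first, so the model can't miss them.
    targets = [(v, "CUSTOM") for v in included]
    targets += [(v, t) for v, t in detections if v not in ignored]
    for value, etype in targets:
        counters[etype] = counters.get(etype, 0) + 1
        token = f"[REDACTED_{etype}{counters[etype]}]"
        text = text.replace(value, token)
        mapping[token] = value  # retained so output can be deanonymized
    return text, mapping

masked, mapping = anonymize(
    "Ask Ada to email ada@initech.test about Project Falcon.",
    detections=[("ada@initech.test", "EMAIL_ADDRESS"), ("Ada", "PERSON")],
    included=["Project Falcon"],   # codename the model may miss
    ignored=["Ada"],               # e.g. a public figure; passes through
)
print(masked)
# → Ask Ada to email [REDACTED_EMAIL_ADDRESS1] about [REDACTED_CUSTOM1].
```

Deanonymization is then a matter of substituting each token in `mapping` back with its original value for the components granted access to the unredacted data.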
PII Handling in Voice Interactions
PII handling behavior depends on how voice is integrated with the platform.

| Voice mode | PII handling |
|---|---|
| Direct voice calls to the platform | PII handling and guardrails are not applied to the audio stream. User inputs go directly to the model; any PII shared may be present in model processing and logs. |
| Voice via AI for Service (ASR/TTS) | PII handling is applied to the text transcript generated by ASR. Detected PII is masked, redacted, or anonymized before agent processing. Voice output is generated from the filtered text, so guarded PII is not reintroduced by TTS. |