This document provides information about the latest feature updates and enhancements introduced in the recent Agent Platform releases. For previous updates, see these release notes.

v1.8.1 April 10, 2026

This update includes new features and enhancements summarized below.

Multi-Agent Orchestration

Expanded A2A Protocol Support

The Agent Platform now supports JSON-RPC transport binding for A2A v1.0, in addition to the existing HTTP-JSON support. Full support for A2A v0.3 is also introduced, covering both HTTP-JSON and JSON-RPC transport bindings, ensuring broader compatibility and flexibility across integrations. Learn more →
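To make the transport binding concrete, here is a minimal sketch of a JSON-RPC 2.0 request envelope as a client might send it to an A2A agent. The `message/send` method and `parts` shape follow the public A2A specification, but treat the exact fields as assumptions; consult the agent's Agent Card for the methods it actually supports.

```python
import json
import uuid

def build_a2a_request(text: str) -> dict:
    # JSON-RPC 2.0 envelope; "jsonrpc" and "id" are fixed by the spec,
    # the method name and params shape follow the A2A "message/send" call.
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),           # client-chosen correlation id
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": text}],
                "messageId": str(uuid.uuid4()),
            }
        },
    }

req = build_a2a_request("What is my order status?")
print(json.dumps(req, indent=2))
```

The same payload travels over plain HTTP-JSON in the other binding; only the envelope differs.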

Expanded Real-time V2V Model Support for Adaptive Network

The Agent Platform now supports additional real-time models for Voice-to-Voice (V2V) interactions within the Adaptive Network Pattern framework, extending beyond existing OpenAI support to include:
  • Azure OpenAI
  • Grok
  • Ultravox
Note: Gemini models aren’t supported in Adaptive Network. Learn more →

Playground Enhancements

The enhanced Playground significantly improves the testing and validation experience for developers:
  • Inline Artifacts Support: Playground now supports inline rendering of artifacts, allowing developers to preview outputs without full deployment. When enabled, artifacts appear within the conversation flow, improving visibility and simplifying the validation of structured data. AI for Service-supported templates are rendered using native UI components, while non-supported artifacts are displayed as raw JSON. Learn more →
  • Configurable Session Metadata: Playground now supports session-level customization, allowing users to configure metadata and key runtime settings directly within the testing interface. Developers can provide metadata to simulate a real runtime context and configure settings such as artifacts display, streaming, thought streaming, and document upload—eliminating context switching and enabling more efficient testing. This feature is currently in preview and can be enabled upon request.

Configurable Transcription for Realtime Sessions

The Platform now supports configuring transcription language and prompts for real-time voice-to-voice sessions, improving the accuracy and efficiency of speech-to-text processing. These settings, applied at the app level, allow developers to specify the input language and provide domain-specific context, enabling more accurate transcriptions and better recognition of specialized vocabulary. By default, the ASR model operates in autodetect language mode. Learn more →
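As a rough illustration, an app-level transcription configuration might look like the sketch below. The key names are hypothetical, not the Platform's actual schema; the point is that an explicit language overrides the default autodetect mode and the prompt supplies domain vocabulary.

```python
# Hypothetical app-level transcription settings; field names are
# illustrative only, not the Platform's real schema.
transcription_config = {
    "language": "de",   # fix the input language instead of autodetect
    "prompt": (         # domain-specific context to bias recognition
        "The caller discusses SKUs like 'XR-200' and brand names "
        "such as 'Acme FlowMax'."
    ),
}

def effective_language(cfg: dict) -> str:
    # By default the ASR model autodetects the language; an explicit
    # setting overrides that behaviour.
    return cfg.get("language") or "auto"

print(effective_language(transcription_config))  # explicit language wins
print(effective_language({}))                    # falls back to autodetect
```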

AI Engineering Tools

Expanded Model Support

The Platform now supports additional AI models, including:
  • OpenAI: gpt-5.4, gpt-5.4-nano, gpt-5.4-mini, gpt-realtime-1.5.
  • Azure OpenAI: GPT-5.3-Chat, GPT-5.4, GPT-5.4-Nano, GPT-5.4-Mini, GPT-Realtime-1.5.
Learn more →

AI Safety, Security, and Governance

Data Anonymization: Module-Level Control for Anonymization and Deanonymization

The Platform introduces a new unified guardrail framework that consolidates PII Guardrails and Anonymization / Deanonymization into a single configuration interface. Each entity is defined once, using either regex-based PII detection or ML-based anonymization, with both layers executing in a unified processing sequence across all platform stages. Access controls are now configurable at the module level, specifying whether users, code tools, workflow tools, MCP tools, events, pre-processors, and proxy agents receive original or redacted values at each processing stage. Learn more →
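The module-level access idea can be sketched as a simple policy lookup. The structure and key names below are assumptions for illustration only, not the Platform's actual configuration schema.

```python
# Illustrative policy: which modules see the original value and which
# see the redacted placeholder. Key names are hypothetical.
pii_policy = {
    "entity": "EMAIL_ADDRESS",
    "detection": "ml",               # or "regex"
    "module_access": {
        "users": "redacted",
        "code_tools": "original",    # this module may deanonymize
        "workflow_tools": "redacted",
        "mcp_tools": "redacted",
    },
}

def value_for(module: str, policy: dict, original: str, redacted: str) -> str:
    # Unknown modules default to the redacted value (deny by default).
    access = policy["module_access"].get(module, "redacted")
    return original if access == "original" else redacted

print(value_for("code_tools", pii_policy, "a@b.com", "<EMAIL>"))
print(value_for("users", pii_policy, "a@b.com", "<EMAIL>"))
```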

Native mTLS Support for OAuth 2.0 Client Credential Auth Profiles

OAuth 2.0 Client Credential auth profiles now support mutual TLS (mTLS) natively, enabling secure connections to systems that require mTLS without external tools or custom workarounds. The platform can present a client certificate for both token requests and API calls, ensuring compatibility with enterprise systems that enforce mTLS.
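For context, this is roughly what an mTLS-enabled client-credentials token request looks like in client code, using the common `requests`-style convention of a `(certificate, key)` tuple presented during the TLS handshake. The URL and file paths are placeholders; this sketches the mechanism, not the Platform's internals.

```python
def build_token_request(token_url: str, client_id: str, client_secret: str,
                        cert_path: str, key_path: str) -> dict:
    # Keyword arguments suitable for requests.post(**kwargs): the "cert"
    # tuple makes the HTTP client present a client certificate during the
    # TLS handshake, which is what servers enforcing mTLS require.
    return {
        "url": token_url,
        "data": {
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
        },
        "cert": (cert_path, key_path),
    }

kwargs = build_token_request(
    "https://auth.example.com/oauth/token",   # placeholder endpoint
    "my-client", "my-secret", "client.crt", "client.key",
)
print(kwargs["cert"])
```

The same certificate can be presented on subsequent API calls, matching the behavior described above.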

v1.8.0 March 29, 2026

This update includes new features and enhancements summarized below.

Multi-Agent Orchestration

Voice-to-Voice Support for Adaptive Network

Voice-to-voice models are now supported in the Adaptive Network, enabling seamless processing of spoken input and generation of spoken responses. This enhances conversational experiences by enabling more natural, real-time voice interactions.

Complete App Export and Import

Agent Platform now supports full application export, packaging all components, including workflow tools, into a single file for seamless migration across environments. The import process includes upfront validation before execution and automatic rollback on failure, ensuring imports either complete fully or not at all. This eliminates the risk of partial or inconsistent application states after a failed import. Learn more →

Pre-Processor Execution Control

Users can configure execution control for pre-processors, choosing whether they run once per session or on every agent invocation. This reduces latency and avoids redundant processing. Existing configurations default to Always Run, ensuring backward compatibility. Learn more →

Response Processors for Output Transformation

Agent Platform introduces the Response Processor, a new capability that gives full control over how responses are shaped and delivered across channels. This feature enables channel-based, structured responses via templates, allowing you to define the exact response format for each channel. Admins can modify the existing artifacts key to reshape the output on the fly, or replace it entirely with a customized structured response tailored to the target channel. Developers can further apply custom formatting, enrichment, and business logic via code, with full access to the response context, including inputs, outputs, and artifacts, all without changing the underlying logic. Learn more →

AI Engineering Tools

Expanded Model Support

The Platform now supports additional AI models, including:
  • OpenAI: gpt-5.3-chat-latest
  • Anthropic: claude-sonnet-4-6
  • Grok Realtime (Available via custom integration)
Learn more →
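The Response Processor introduced in this release is easiest to picture as a per-channel transform over the response context. The context shape, channel names, and artifact structure below are illustrative assumptions, not the Platform's actual API.

```python
# Hypothetical sketch of a code-based Response Processor: reshape the
# response's "artifacts" key depending on the target channel, without
# touching the underlying agent logic.
def response_processor(context: dict) -> dict:
    channel = context["channel"]
    artifacts = context["response"].get("artifacts", [])
    if channel == "voice":
        # Voice channels get plain text only; drop structured artifacts.
        context["response"]["artifacts"] = []
    elif channel == "web":
        # Web channels get artifacts wrapped in a renderable card.
        context["response"]["artifacts"] = [
            {"type": "card", "body": a} for a in artifacts
        ]
    return context["response"]

resp = response_processor({
    "channel": "web",
    "response": {"text": "Done", "artifacts": [{"order": 42}]},
})
print(resp["artifacts"][0]["type"])
```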

v1.7.0 March 8, 2026

This update includes new features and enhancements summarized below.

Multi-Agent Orchestration

Enhanced App Creation Journey

A new AI-assisted App Creation wizard walks users through building an application in a few simple steps. Users can build from scratch, import from the Marketplace, or provide a few instructions and let AI generate the complete app definition for review. This reduces time-to-value and makes onboarding easier for new users.

A2A Protocol Support

Agent Platform now supports the A2A (Agent-to-Agent) Protocol, enabling agentic apps to connect with external A2A-compliant agents without custom adapters. Developers can connect external agents using an A2A server URL. The platform automatically retrieves their details and handles communication translation. External agents can be included in workflows and managed by supervisors just like native agents. Learn more →

MCP Enhancements

MCP integration includes several enhancements in this update:
  • Refresh of MCP Server Configuration: Users can refresh the MCP Server configurations to fetch the latest tool definitions, applying silent updates when no changes are detected and flagging impact when tools are affected.
  • Editable MCP Server Name and URL: The MCP server name and URL can be updated after configuration, eliminating the need to recreate the server when endpoints change.
  • Consistent Tool Naming: MCP tools now keep their original server-defined names in Agent Platform without prefixing with the MCP Server Name. A prefix is added only when duplicate tool names are detected across all tools, including those from other MCP Servers.
  • Enum Parameter Support: Agentic apps now support enums as parameters for MCP tools.
Learn more →

Namespace Enhancements

A default namespace is now automatically associated with every variable. Variables remain part of the default namespace context even when custom namespaces are used, ensuring consistent access and simpler scope management. Learn more →

Selective Tool Response Configuration

Developers can now extract specific values from tool responses using simple path notation, while still retaining the option to send the complete tool response. This provides greater control over outputs while maintaining backward compatibility. Learn more →

Event Configuration Enhancements

Developers now have greater control over system event messages during agent interactions. Event messages for ‘End of Conversation’ and ‘Agent Handoff’ are now optional, and AI-generated message prompts can be edited directly in the UI. Content, memory, and environment variables are now supported in both custom messages and AI prompts, resolved dynamically at runtime for greater flexibility and personalization. Learn more →

Agent Activation Control

Agents can now be temporarily disabled without deleting them. Disabled agents are excluded from runtime orchestration but remain fully editable, with their configuration preserved across versions and environments. Learn more →

No-Code & Pro-Code Tools

Enhanced Workflow Tools Versioning

Tool versions are now automatically created and deployed as part of the app versioning process. Previously, all app versions used the same version of the workflow tool. If a tool was updated, every app using that tool received the update—whether it was intended or not. Now, each app version keeps its own tool version, created automatically when users create an app version. Key updates:
  • Automatic Version Snapshots: When users create an app version, the workflow tools used in that app are automatically versioned. This captures the complete tool configuration at that moment.
  • Run Multiple Versions: Users can run multiple versions of the same tool simultaneously. For example, v1.0 and v2.0 of a tool can run side by side in different app versions.
  • Keep Apps Independent: Different app versions automatically use their corresponding tool versions. A production app can remain on a stable version while a beta app uses the latest updates.
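The path notation from Selective Tool Response Configuration above can be sketched as a dotted-path lookup over a nested tool response. The exact notation the Platform accepts may differ; this only illustrates the idea.

```python
def extract(response: dict, path: str):
    # Walk the nested response one dotted segment at a time;
    # raises KeyError if any segment is missing.
    node = response
    for key in path.split("."):
        node = node[key]
    return node

tool_response = {"order": {"status": "shipped", "eta": "2026-04-12"}}
print(extract(tool_response, "order.status"))
```

Sending only `order.status` downstream, rather than the whole response, is the "specific values" case; omitting the path keeps today's full-response behavior.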
Learn more →

AI Engineering Tools

Expanded Model Support

The platform now supports additional AI models, giving users greater flexibility in choosing the right model for their use case. New models include:
  • Azure OpenAI: GPT-Realtime, GPT-Realtime-Mini, GPT-5.1, GPT-5.1-Chat, GPT-5.2, and GPT-5.2-Chat
  • OpenAI: gpt-image-1.5
  • Anthropic: claude-opus-4.6
Learn more →

Integration with Microsoft Foundry Model Catalog

The platform now supports direct integration with the Microsoft Foundry model catalog, enabling users to discover and use models deployed there. Model setup is simplified with a single Target URI and Service Principal–based authentication. Users can browse available projects, view deployed models, and add them as external models without manual API configuration. A new External Credentials section in Settings centralizes authentication details to streamline access and management. Learn more →

Other Improvements

Centralized SSO and MFA Management Enhancements

Authentication settings now include a unified interface for configuring Single Sign-On (SSO) and Multi-Factor Authentication (MFA) at the organization level. Administrators can enable or disable SSO, select supported protocols and providers, and exclude specific users from SSO requirements to maintain fallback access. MFA policies are now context-aware. When SSO is enabled, MFA applies only to excluded users, with SSO users managed by the identity provider. When SSO is disabled, MFA can be enforced organization-wide. Supported MFA methods include authenticator apps (TOTP), SMS, and email. Learn more →

Favorite Workspaces for Quick Access

Users can mark up to three workspaces as favorites, pinning them to the top of the workspace list for quick access and easy switching.

v1.6.0 January 31, 2026

This update includes new features and enhancements summarized below.

Multi-Agent Orchestration

Enhanced Agent Creation Flow

Agent creation now supports three paths: building agents from scratch, importing pre-built agents from the Marketplace, or adding externally deployed agents for orchestration. Each path provides a tailored setup flow with agent-specific configurations. This streamlines agent onboarding with guided experiences and eliminates the previous two-step enablement process across apps and agent profiles. Learn more →

AI-Assisted Prompt Refinement

The prompt editor now includes AI-assisted refinement, enabling users to easily improve and optimize prompts directly within the editor. This feature reduces iteration cycles and improves prompt accuracy through clearer, more effective definitions, making prompt writing faster and easier. Note: This feature is in preview and can be enabled upon request.

No-code & Pro-Code Tools

Enhanced Access Control for Tool Logs

Tool-level role management has been enhanced with separate permissions for tool log visibility, allowing administrators to control access to the tool log list and detailed execution logs independently. These permissions support three access levels: detailed access, view-only, and no access, providing finer control over log data.

Environment Variable for Workflow Tools in Agentic Apps

Workflow Tools created within or scoped to Agentic Apps can now use environment variables defined at the app level. Access is managed through namespaces—when you attach a namespace to a tool, all environment variables within that namespace become available for use. Workflow Tools created outside an Agentic App and not linked to any app cannot access namespaces or app-level environment variables. Learn more →

AI Engineering Tools

Vertex AI Model Integration

Agent Platform now offers secure connections to Google Vertex AI-hosted Gemini models (2.5 and 3.0 families). You can configure connections manually or via cURL import with automated credential extraction for both AI Studio and Vertex AI formats. A guided setup includes built-in validation, connection testing, and error handling. The Platform stores all credentials securely using encryption. This integration works across Agentic Apps, Workflow Tools, and Prompts. Learn more →

Expanded Model Support

The Agent Platform now supports additional AI models, giving users greater flexibility in selecting the right model for their use case. New models include:
  • Google: gemini-3-pro-preview, gemini-3-pro-image-preview, gemini-3-flash-preview, gemini-2.5-flash-native-audio-preview-12-2025, gemini-2.5-flash-native-audio-preview-09-2025, gemini-2.5-flash-preview-09-2025, gemini-2.5-flash-lite-preview-09-2025, gemini-2.5-flash-lite, and gemini-2.5-flash-image.
  • OpenAI: gpt-realtime-mini-2025-10-06, gpt-audio-mini-2025-10-06, gpt-audio-2025-08-28, gpt-realtime-2025-08-28, gpt-4o-audio-preview-2025-06-03, gpt-4o-realtime-preview-2025-06-03, o3-2025-04-16, o4-mini-2025-04-16, gpt-4o-search-preview-2025-03-11, o3-mini-2025-01-31, gpt-4o-realtime-preview-2024-12-17, gpt-4o-mini-audio-preview-2024-12-17, gpt-4o-audio-preview-2024-12-17, o1-2024-12-17, gpt-4o-2024-11-20, gpt-4o-2024-08-06, gpt-4o-mini-2024-07-18, gpt-4o-2024-05-13, gpt-4-turbo-2024-04-09, gpt-4.1-nano, gpt-4.1-mini, gpt-4.1, gpt-4-turbo, gpt-3.5-turbo-0125, gpt-4o-mini-transcribe, gpt-4o-mini-audio-preview, gpt-4o-audio-preview, gpt-4o-realtime-preview, gpt-audio-mini, gpt-audio, gpt-image-1-mini, gpt-image-1, gpt-realtime-mini, gpt-realtime, o4-mini, o3, and o1.
Learn more →

v1.5.0 January 17, 2026

This update includes new features and enhancements summarized below.

Multi-Agent Orchestration

Direct Real-Time Voice Integration for Single-Agent Apps

The Single Agent Orchestration Pattern now supports real-time models, which significantly reduce response latency when your agentic app contains only one agent. The Platform now automatically bypasses the supervisor routing layer and connects users directly to the agent, eliminating unnecessary orchestration overhead. This improvement is especially beneficial for voice interactions with real-time models where speed is critical, and it works automatically without requiring any configuration changes.

Customizable Waiting Messages

The Waiting Experience feature enhances voice interactions by streaming natural filler messages during processing delays, reducing perceived latency and ensuring smoother conversations. This feature is now publicly available and includes a customizable prompt editor for creating AI-generated dynamic waiting messages. This feature is supported only in ASR/TTS mode (not available for real-time models).

Tool Output Artifacts in Response Payload

You can now configure tools to include their outputs as artifacts in the final response payload. This new capability allows you to capture specific tool execution results and make them available under the ‘artifacts’ key in the response, enabling downstream channels and applications to access structured data for custom processing, display logic, or integration workflows. Artifact inclusion is configurable at the individual tool level, giving you precise control over which tool outputs are exposed in the response.

No-code & Pro-Code Tools

PII Handling for Workflow Tools

The Agent Platform extends existing PII handling to Workflow Tools, ensuring sensitive data is securely processed while preventing exposure in logs, traces, or model outputs. Before a Workflow Tool starts execution, input fields are automatically scanned for declared PII patterns. Inputs identified as PII are masked as configured and passed to the tool in redacted form. If a Workflow Tool is granted access to the original value in the PII configuration:
  • The tool can securely unredact and use the PII internally for execution.
  • All monitoring, debugging logs, and execution traces continue to display only masked values.
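The pre-execution scan described above can be sketched as a pattern substitution over the tool's input fields. The patterns below are illustrative assumptions; the Platform's declared PII patterns and placeholder format may differ.

```python
import re

# Assumed example patterns; real deployments declare their own.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_inputs(inputs: dict) -> dict:
    # Replace every declared PII match with a masked placeholder
    # before the Workflow Tool ever sees the value.
    masked = {}
    for field, value in inputs.items():
        for label, pattern in PII_PATTERNS.items():
            value = pattern.sub(f"<{label}>", value)
        masked[field] = value
    return masked

print(mask_inputs({"note": "Reach me at jane@example.com"}))
```

Logs and traces would only ever record the output of `mask_inputs`; unredaction for a permitted tool happens separately, inside the execution boundary.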
Improved Context Variable Selection in the Flow Builder

Selecting context variables is now faster and more intuitive. When users type {{ in any field that supports context variables, a dynamic dropdown appears showing all available variables grouped by node, including environment variables defined at the workflow-tool level. This eliminates the hassle of manually entering the full path. Coming Soon: Support for selecting and referencing agentic app-level environment variables in Workflow Tools is currently in progress and will be available in an upcoming release.

AI Engineering Tools

Expanded Model Support

The Agent Platform now supports additional AI models, giving users greater flexibility in selecting the right model for their use case. New models include:
  • OpenAI Models: gpt-5.2-chat-latest, gpt-5.2-2025-12-11, gpt-5.2, gpt-5.1-chat-latest, gpt-5.1-2025-11-13, and gpt-5.1.
  • Anthropic: claude-haiku-4-5-20251001, claude-sonnet-4-5-20250929, and claude-opus-4-5-20251101
Open-Source Model Support for Agentic Apps

Agentic apps now support open-source models, offering flexible, cost-effective alternatives for building AI agents. You can use the following models directly within your agentic applications:
  • meta-llama/Llama-3.1-8B-Instruct
  • meta-llama/Llama-3.2-1B-Instruct
  • meta-llama/Llama-3.2-3B-Instruct
  • mistralai/Mistral-7B-Instruct-v0.3
  • mistralai/Mistral-Nemo-Instruct-2407
  • XiaomiMiMo/MiMo-VL-7B-RL
These models offer diverse capabilities across different sizes and specializations, letting you optimize for performance, cost, or specific use cases while maintaining full access to Platform orchestration, tools, and knowledge features.

Other Improvements

Ability to Configure Default Role for New Users

Workspace admins can now set a default role for new Platform users added via email, AD sync, or API in Users Management → Settings. This streamlines onboarding by assigning the correct permissions immediately, eliminating the need for manual role updates after provisioning. This setting applies only to new users. For existing users, change roles in Users Management → Users.