Coordinate how agents work together to handle user requests.
The orchestrator is the intelligence layer that manages agent interactions. It:
  • Interprets user intent
  • Selects the appropriate agent(s)
  • Coordinates multi-agent workflows
  • Resolves conflicts between outputs
  • Delivers unified responses

Orchestration Patterns

Choose a pattern based on your use case complexity.
  • Single Agent: One agent handles all requests. Best for focused, well-defined domains.
  • Supervisor: Central orchestrator coordinates multiple specialized agents. Best for complex, parallelizable tasks.
  • Adaptive Network: Agents dynamically hand off to each other. Best for sequential, multi-domain workflows.

Pattern Comparison

Aspect          Single Agent     Supervisor               Adaptive Network
Complexity      Low              Medium                   Medium-High
Agents          1                Multiple                 Multiple
Coordination    None             Centralized              Decentralized
Execution       Sequential       Parallel                 Sequential
Latency         Lowest           Medium                   Low
Best for        Simple tasks     Complex decomposition    Dynamic hand-offs

How Orchestration Works

Choosing the Right Pattern

Use Single Agent when:
  • Your app has one primary capability
  • Tasks don’t require coordination between specialists
  • You want minimal orchestration overhead
  • Response latency is critical
Example: A leave management bot where one agent handles all employee requests.

Use Supervisor when:
  • Tasks can be broken into independent subtasks
  • You need parallel execution for speed
  • Multiple specialists should contribute to responses
  • You want centralized control and conflict resolution
Example: A customer service app where billing, orders, and technical support agents work in parallel.

Use Adaptive Network when:
  • Tasks flow naturally between domains
  • You need dynamic routing based on context
  • Agents should autonomously decide when to hand off
  • Sequential expertise is required
Example: An employee onboarding app where HR, IT, and Finance agents hand off based on the current step.

Orchestrator Responsibilities

Task Decomposition

Breaking complex requests into manageable subtasks:
User: "I need to cancel my order and get a refund"

Decomposition:
├── Subtask 1: Look up order details (Order Agent)
├── Subtask 2: Process cancellation (Order Agent)
└── Subtask 3: Initiate refund (Billing Agent)
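
The decomposition above can be sketched as a simple routing plan. This is a minimal illustration only; the `decompose` helper, the keyword matching, and the agent names are hypothetical and not part of the platform API:

```javascript
// Hypothetical sketch: map a user request to subtasks and owning agents.
// The keyword rules and agent names are illustrative, not platform-defined.
function decompose(request) {
  const plan = [];
  if (/cancel/i.test(request)) {
    plan.push({ subtask: "Look up order details", agent: "Order Agent" });
    plan.push({ subtask: "Process cancellation", agent: "Order Agent" });
  }
  if (/refund/i.test(request)) {
    plan.push({ subtask: "Initiate refund", agent: "Billing Agent" });
  }
  return plan;
}

const plan = decompose("I need to cancel my order and get a refund");
// plan → two subtasks for the Order Agent, then one for the Billing Agent
```

A real orchestrator derives the plan from the model's interpretation of intent rather than keyword rules, but the output shape, an ordered list of subtasks with assigned agents, is the same idea.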

Agent Delegation

Routing tasks to appropriate specialists:
Request: "What's my order status and can I upgrade my shipping?"

Delegation:
├── Order Agent: Retrieve order status
└── Shipping Agent: Process shipping upgrade

Conflict Resolution

Handling inconsistencies between agent outputs:
Conflict:
├── Agent A: "Item is in stock"
└── Agent B: "Item ships in 2 weeks"

Resolution: Check inventory system → Provide accurate status
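
One way to picture this resolution step: when two agents disagree, prefer the answer backed by the system of record. A minimal sketch, assuming hypothetical output and source fields that are not platform-defined:

```javascript
// Hypothetical sketch: resolve conflicting agent outputs by preferring the
// answer backed by an authoritative source. Field names are illustrative.
function resolveConflict(outputs, authoritativeSource) {
  const trusted = outputs.find(o => o.source === authoritativeSource);
  // Fall back to the first answer if no output cites the trusted source.
  return trusted ? trusted.answer : outputs[0].answer;
}

const answer = resolveConflict(
  [
    { agent: "Agent A", answer: "Item is in stock", source: "cache" },
    { agent: "Agent B", answer: "Item ships in 2 weeks", source: "inventory" },
  ],
  "inventory"
);
// answer → "Item ships in 2 weeks"
```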

Context Management

Maintaining conversation state across agents:
Context:
├── User ID: 12345
├── Session ID: abcde
└── Previous Turns:
    ├── Turn 1: User provides order number
    ├── Turn 2: Agent A uses order number
    └── Turn 3: Agent B receives context, doesn't ask again
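
The turn sequence above can be sketched as a shared session context that later agents read instead of re-asking. The `slots` structure and `handleTurn` helper are hypothetical, used only to illustrate the idea:

```javascript
// Hypothetical sketch: a shared session context carried between agents so a
// later agent can reuse a value the user already provided. Not a platform API.
const context = { userId: "12345", sessionId: "abcde", slots: {} };

function handleTurn(context, agent, input) {
  // Extract and store an order number if the user mentions one.
  const orderMatch = input.match(/order\s+(\d+)/i);
  if (orderMatch) context.slots.orderNumber = orderMatch[1];
  // A later agent reads the stored value instead of asking again.
  return context.slots.orderNumber
    ? `${agent}: using order ${context.slots.orderNumber}`
    : `${agent}: please provide your order number`;
}

handleTurn(context, "Agent A", "My order 98765 hasn't arrived");
const reply = handleTurn(context, "Agent B", "Can you expedite it?");
// reply → "Agent B: using order 98765"
```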


Orchestrator Configuration

Navigate to App > Orchestrator. For each orchestration pattern, configure the following.

Default AI Model

Select the AI model used for operations across the app. You can choose any of the configured models. The model's default settings are displayed; click the Settings icon to update them.

Voice-to-Voice Interactions

Enable this field to allow users to interact with the app via real-time voice conversations. Once enabled, also select the AI model that processes speech and generates voice responses. The platform supports various models; see the list of supported models and how to add an external model to the platform.
Note: Adaptive Network orchestration does not support Gemini real-time models.
Click the Settings icon to update the voice model settings.

Voice AI Model Settings

Setting                      Description                          Key Notes
Voice                        Voice used for audio responses       Depends on model/provider
Input Audio Format           Format of incoming audio             Example: pcm16; must match client input
Output Audio Format          Format of generated audio            Must match playback capability
Speech Speed                 Speed of generated speech            1.0 = default
Max Response Output Tokens   Max tokens per response              Controls response length & latency
Temperature                  Controls randomness/creativity       Lower = deterministic, higher = creative
Max Tokens                   Max tokens generated                 Limits total response size
Noise Reduction Type         Filters input audio noise            Near Field (close mic), Far Field (room audio)
VAD Type                     Speech detection method              Example: Server VAD
Threshold                    Sensitivity of speech detection      Lower = more sensitive
Prefix Padding               Audio kept before detected speech    Prevents clipping
Silence Duration             Silence before speech ends           Lower = faster response
Create Response              Auto-generate response               True/False
Interrupt Response           Allow interruption                   True/False
Transcription Language       Language for speech-to-text          Default: Auto-detect; improves accuracy if set
Transcription Prompt         Context for the ASR model            Helps recognize domain-specific terms and improve accuracy. Learn More.

Behavioral Instructions

Use this section to set guidelines for agent behavior. These instructions are added to the orchestrator prompt and to the system prompt of each agent. Click Modify Instructions, then enter the prompt.

Response Processor

The Response Processor is an application-level feature that lets you run a custom script on every agent response before it is delivered to the end user. It executes as the final stage of response generation, after the agent produces its output and before that output leaves the platform. Because it is configured at the app level on the Orchestrator page, the script applies consistently across all agents in the app, regardless of which agent handled the request.

Using the Response Processor to Generate Artifacts

Developers can write artifact payloads directly inside the Response Processor code. When the processor runs, it constructs the desired payload, writes it into the artifacts key, and the platform appends it for delivery. This is particularly useful when:
  • The artifact needs to be assembled from multiple tool outputs or session variables rather than a single tool response.
  • The payload structure depends on business logic better handled centrally at the application level.
  • No tool is needed; the processor can produce artifacts independently based on input context alone.

Using the Response Processor to Transform Existing Artifacts

When tools have already populated the artifacts array, the Response Processor can enrich or transform it before delivery:
  • Reorder elements to control which artifact renders first.
  • Filter artifacts based on channel, user segment, or business logic.
  • Transform or enrich payload data before delivery to the client.
  • Merge multiple tool outputs into a single consolidated artifact.
  • Add metadata, wrapper keys, or channel-specific formatting.

When a Response Processor is active, streaming is not supported; artifacts and the text response are delivered as a complete payload after the processor finishes. If the processor fails, the original untransformed response is returned and the error is logged.
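
A filter-and-reorder transformation might look like the following. This is a hedged sketch: the artifact shapes, the channel and priority fields, and the sample data are all illustrative assumptions, not the platform's artifact schema; in a real processor the input array would come from $artifacts.

```javascript
// Hypothetical sketch: filter and reorder an artifacts array before delivery.
// The channel/priority fields and sample data are illustrative assumptions.
const artifacts = [
  { type: "chart", channel: "web", priority: 2 },
  { type: "table", channel: "web", priority: 1 },
  { type: "audio", channel: "voice", priority: 1 },
];

const processed = artifacts
  .filter(a => a.channel === "web")          // keep only web-channel artifacts
  .sort((a, b) => a.priority - b.priority);  // lowest priority number renders first
// processed → [table, chart]; the voice artifact is dropped
```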

Adding a Response Processor

Click Add Script to open the script editor. The following are available as input to the script:
  1. Input — The original user input that triggered this agent run.
  2. Output — The response generated by the agent.
  3. Artifacts — Array of tool outputs and structured data returned during the run by different tools.
Response Processing Script: Provide the script that updates the output before it is delivered to the end user. You can use JavaScript or Python for scripting.
  • Access input, output, or artifacts in the script using the $ prefix.
  • Access environment variables using the env keyword: env.<variable-name>.
  • Access content variables as: content.<variable-name>.
  • Access memory stores in the script using these methods.
Namespace: Select the namespaces to be provided to the script. All variables within a selected namespace are available to the script for processing.

Use Test Response Processor to validate the script's behavior.

Sample Script

// Extracts the account balance amount from the textual response and returns
// only the numeric value instead of the complete textual response.

console.log("[PostProcessor] Input received:", $input);
console.log("[PostProcessor] Output received:", $output);
console.log("[PostProcessor] Artifacts:", $artifacts);
console.log(env.name);     // access an environment variable
console.log(content.new);  // access a content variable

const outputData = $output;
let finalOutput;

// Extract the balance amount (number) from text (e.g., "Your balance is 7236")
const match = outputData.match(/\d+/);

if (match) {
    finalOutput = { balance: Number(match[0]) };
} else {
    finalOutput = { message: outputData };
}

return {
  output: finalOutput
};

Single Agent Configuration

In a Single Agent setup, all user requests are routed directly to the agent. Since no supervisor agent is involved, the agent’s prompt serves as the primary instruction set for the underlying model. When processing a request, the platform constructs a single consolidated prompt by combining the following components in order, and sends it to the model:
  1. Agent Prompt — The core instructions that define the agent’s role and behavior.
  2. Behavioral Instructions — Guidelines that control tone, constraints, and response style.
  3. Tools Assigned to the Agent — Tool definitions available for the agent to invoke.
  4. Events Enabled in the Application — Event-related context.
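
Conceptually, the consolidation is an ordered concatenation of those four components. The sketch below is a simplified assumption; the `buildPrompt` helper, the separator, and the sample strings are illustrative, not the platform's actual prompt template:

```javascript
// Hypothetical sketch: assemble the consolidated prompt in the documented
// order. Separator and sample component strings are illustrative only.
function buildPrompt({ agentPrompt, behavioralInstructions, toolDefinitions, eventContext }) {
  return [agentPrompt, behavioralInstructions, toolDefinitions, eventContext]
    .filter(Boolean)   // skip components that are not configured
    .join("\n\n");
}

const prompt = buildPrompt({
  agentPrompt: "You are a leave management assistant.",
  behavioralInstructions: "Be concise and polite.",
  toolDefinitions: "Tools: check_balance, apply_leave",
  eventContext: "",    // no events enabled in this example
});
// prompt begins with the agent prompt, followed by the behavioral instructions
```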

Supervisor Configuration

In addition to the configurations discussed above, configure the following for the Supervisor pattern:
  • Orchestrator Prompt — A set of instructions for the supervisor of the app. This includes instructions and requirements that guide the orchestrator’s decision-making process.
  • Orchestration Prompt for Voice-to-Voice Interactions — Instructions for the supervisor during voice interactions.

Adaptive Network Configuration

In addition to the configurations discussed above, configure the following for the Adaptive Network pattern:
  • Initial Agent — Select the agent that serves as the first point of contact for each task. This agent receives the user’s request, processes the initial requirements, and begins task execution.
For this pattern, also configure delegation rules on each agent.