
Coordinate how agents work together to handle user requests.
The orchestrator is the intelligence layer that manages agent interactions. It:
  • Interprets user intent
  • Selects the appropriate agents
  • Coordinates multi-agent workflows
  • Resolves conflicts between outputs
  • Delivers unified responses

Orchestration Patterns

Choose a pattern based on your use case complexity.
  • Single Agent: One agent handles all requests. Best for focused, well-defined domains.
  • Supervisor: Central orchestrator coordinates multiple specialized agents. Best for complex, parallelizable tasks.
  • Adaptive Network: Agents dynamically hand off to each other. Best for sequential, multi-domain workflows.

Pattern Comparison

| Aspect       | Single Agent | Supervisor            | Adaptive Network  |
|--------------|--------------|-----------------------|-------------------|
| Complexity   | Low          | Medium                | Medium-High       |
| Agents       | 1            | Multiple              | Multiple          |
| Coordination | None         | Centralized           | Decentralized     |
| Execution    | Sequential   | Parallel              | Sequential        |
| Latency      | Lowest       | Medium                | Low               |
| Best for     | Simple tasks | Complex decomposition | Dynamic hand-offs |

How Orchestration Works

Choosing the Right Pattern

Use Single Agent when:
  • Your app has one primary capability
  • Tasks don’t require coordination between specialists
  • You want minimal orchestration overhead
  • Response latency is critical
Example: A leave management bot where one agent handles all employee requests.

Use Supervisor when:
  • Tasks can become independent subtasks
  • You need parallel execution for speed
  • Multiple specialists should contribute to responses
  • You want centralized control and conflict resolution
Example: A customer service app where billing, orders, and technical support agents work in parallel.

Use Adaptive Network when:
  • Tasks flow naturally between domains
  • You need dynamic routing based on context
  • Agents should autonomously decide when to hand off
  • Sequential expertise is needed
Example: An employee onboarding app where HR, IT, and Finance agents hand off based on the current step.

Orchestrator Responsibilities

Task Decomposition

Breaking complex requests into manageable subtasks:
User: "I need to cancel my order and get a refund"

Decomposition:
├── Subtask 1: Look up order details (Order Agent)
├── Subtask 2: Process cancellation (Order Agent)
└── Subtask 3: Initiate refund (Billing Agent)
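
Expressed as data, a plan like this might look as follows (an illustrative sketch only; the field names and structure are hypothetical, not a platform API):

// Hypothetical representation of a decomposition plan, not a platform API
const plan = [
  { id: 1, task: "Look up order details", agent: "Order Agent",   dependsOn: [] },
  { id: 2, task: "Process cancellation",  agent: "Order Agent",   dependsOn: [1] },
  { id: 3, task: "Initiate refund",       agent: "Billing Agent", dependsOn: [2] }
];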

Agent Delegation

Routing tasks to appropriate specialists:
Request: "What's my order status and can I upgrade my shipping?"

Delegation:
├── Order Agent: Retrieve order status
└── Shipping Agent: Process shipping upgrade

Conflict Resolution

Handling inconsistencies between agent outputs:
Conflict:
├── Agent A: "Item is in stock"
└── Agent B: "Item ships in 2 weeks"

Resolution: Check inventory system → Provide accurate status

Context Management

Maintaining conversation state across agents:
Context:
├── User ID: 12345
├── Session ID: abcde
└── Previous Turns:
    ├── Turn 1: User provides order number
    ├── Turn 2: Agent A uses order number
    └── Turn 3: Agent B receives context, doesn't ask again
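
A processor script can persist this kind of state using the platform's memory methods (a minimal sketch; the key name and order number are illustrative, and memory.get_content/memory.set_content appear in the sample scripts later on this page):

// Minimal sketch: store the order number once so later agents don't ask again.
// The key name and value are illustrative.
const existingOrder = await memory.get_content("orderNumber");

if (!existingOrder) {
  // Assume the order number was extracted from the user's first turn.
  await memory.set_content("orderNumber", "ORD-12345");
}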


Orchestrator Configuration

Navigate to App > Orchestrator. For each orchestration pattern, configure the following.

Default AI Model

The default AI model the app uses for its operations. Select any of the configured models; the model's default settings are displayed. Click the Settings icon to update them.

Voice-to-Voice Interactions

Enable this field to allow users to interact with the app through real-time voice conversations. Once enabled, also select the AI model that processes speech and generates voice responses. The platform supports various models; see the list of supported models and how to add an external model to the platform.
Adaptive Network orchestration does not support Gemini real-time models.
Click the Settings icon to update the voice model settings.

Voice AI Model Settings

| Setting                    | Description                        | Key Notes                                          |
|----------------------------|------------------------------------|----------------------------------------------------|
| Voice                      | Voice used for audio responses     | Depends on model/provider                          |
| Input Audio Format         | Format of incoming audio           | Example: pcm16; must match client input            |
| Output Audio Format        | Format of generated audio          | Must match playback capability                     |
| Speech Speed               | Speed of generated speech          | 1.0 = default                                      |
| Max Response Output Tokens | Maximum tokens per response        | Controls response length and latency               |
| Temperature                | Controls randomness/creativity     | Lower = deterministic, higher = creative           |
| Max Tokens                 | Maximum tokens generated           | Limits total response size                         |
| Noise Reduction Type       | Filters input audio noise          | Near Field (close mic), Far Field (room audio)     |
| VAD Type                   | Speech detection method            | Example: Server VAD                                |
| Threshold                  | Sensitivity of speech detection    | Lower = more sensitive                             |
| Prefix Padding             | Audio kept before detected speech  | Prevents clipping                                  |
| Silence Duration           | Silence that marks end of speech   | Lower = faster response                            |
| Create Response            | Automatically generate a response  | True/False                                         |
| Interrupt Response         | Allow the user to interrupt        | True/False                                         |
| Transcription Language     | Language for speech-to-text        | Default: auto-detect; setting it improves accuracy |
| Transcription Prompt       | Context for the ASR model          | Helps recognize domain-specific terms              |
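
Taken together, a voice model configuration might resemble the following sketch (the field names are illustrative, not the platform's exact schema):

// Illustrative voice configuration; field names are hypothetical, not the platform schema.
const voiceSettings = {
  voice: "alloy",                   // depends on model/provider
  inputAudioFormat: "pcm16",        // must match client input
  outputAudioFormat: "pcm16",       // must match playback capability
  speechSpeed: 1.0,                 // 1.0 = default
  temperature: 0.6,                 // lower = more deterministic
  noiseReductionType: "near_field", // close-talking microphone
  vad: {
    type: "server_vad",
    threshold: 0.5,                 // lower = more sensitive
    prefixPaddingMs: 300,           // audio kept before detected speech
    silenceDurationMs: 500          // silence that ends a speech segment
  },
  createResponse: true,
  interruptResponse: true,
  transcriptionLanguage: "auto"     // set explicitly to improve accuracy
};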

Behavioral Instructions

Use this section to set guidelines for the agent's behavior. These instructions are added to the orchestrator and to the system prompt of each agent. Click Modify Instructions, then enter the prompt, for example: "Respond concisely and professionally, and never reveal internal tool names."

Input Processor

The Input Processor is an app-level feature that executes a custom script before orchestration begins. It’s used to preprocess user input, enrich context, and populate memory variables that can be used by agents during execution. When enabled, this processor is executed before the welcome event is triggered.
Input Processor is not supported for voice interactions.
Use the Input Processor to:
  • Transform input: Apply custom logic before agent execution to modify, sanitize, or normalize the raw user input.
  • Initialize session and populate memory: Initialize session-level variables and configurations, and populate memory with variables required throughout the application.
  • Enrich context: Add context to requests, for example, user metadata or defaults.

Input Processor Execution Modes

  • Always Run: Executes on session creation and on every user input. Use this when preprocessing is required consistently for all interactions. (A guard pattern for adding one-time setup to an Always Run script is sketched below.)
  • Run Once: Executes only once per session. Use this when initialization logic, such as loading a user profile, is needed only at the start of a session.
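
If a script must combine one-time setup with per-turn preprocessing, an Always Run script can guard its initialization block (a minimal sketch; the flag name is illustrative):

// Minimal sketch: emulate one-time setup inside an Always Run script.
// The "sessionInitialized" flag name is illustrative.
const initialized = await memory.get_content("sessionInitialized");

if (!initialized) {
  // One-time setup: load defaults, user profile, and so on.
  await memory.set_content("sessionInitialized", true);
}

// Per-turn preprocessing continues here on every user input.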

Adding an Input Processor

  • Click Add Script to open the script editor. The user input that triggered the agent run is available to the script.
  • Provide the script that processes the input before orchestration begins. You can use JavaScript or Python for this scripting.
    • Access the input in the script using the $ prefix.
    • Access environment variables using the env keyword: env.<variable-name>. Note that the namespace the variable belongs to must be selected for the Input Processor.
    • Access content variables as: content.<variable-name>.
    • Access memory stores in the script using the memory methods, such as memory.get_content and memory.set_content.
  • Select namespaces to make their variables available to the script.
  • Use Test Input Processor to validate the script’s behavior.
Sample Script
// Load the customer profile at session start
const userId = context.userId;

// Fetch the customer profile from memory or an external store
const customerProfile = await memory.get_content("customerProfile_" + userId);

// Guard against a missing profile so the property access below cannot throw
if (!customerProfile) {
  return { status: "Profile not found", userId: userId };
}

const accountType = customerProfile.accountType;
const tier = customerProfile.tier;
const preferredLanguage = customerProfile.preferredLanguage;

// Store key details in session memory for agent access
await memory.set_content("sessionContext", {
  userId: userId,
  accountType: accountType,       // e.g. "savings", "current", "credit"
  tier: tier,                     // e.g. "standard", "premium", "private"
  preferredLanguage: preferredLanguage  // e.g. "en", "fr", "de"
});

return {
  status: "Session initialized",
  userId: userId,
  accountType: accountType,
  tier: tier
};

Response Processor

The Response Processor is an app-level feature that executes a custom script on every agent response before it is delivered to the end user. It runs as the final stage of response generation: after the agent produces its output, but before that output leaves the platform. Configured on the Orchestrator page, the script applies uniformly across all agents in the app, regardless of which agent handled the request.

Using the Response Processor to Generate Artifacts

Use the Response Processor to write artifact payloads directly in code. When the processor runs, it constructs the payload and writes it to the artifacts key, and the platform appends it for delivery. This approach is useful when:
  • The artifact must be assembled from multiple tool outputs or session variables.
  • The payload structure depends on business logic that’s better handled centrally.
  • No tool is involved—the processor can generate artifacts independently, using only the input context.
Using the Response Processor to Transform Existing Artifacts

When tools have already populated the artifacts array, the Response Processor can enrich or transform it before delivery:
  • Reorder elements to control render priority.
  • Filter artifacts by channel, user segment, or business logic.
  • Transform or enrich the payload before delivery to the client.
  • Merge multiple tool outputs into a single consolidated artifact.
  • Annotate with metadata, wrapper keys, or channel-specific formatting.
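
For instance, a processor might filter and reorder the artifacts before delivery (a minimal sketch under assumptions: channel and priority are hypothetical metadata fields, and returning an artifacts key mirrors the description above rather than a confirmed schema):

// Minimal sketch: filter artifacts by channel, then reorder to control render priority.
// The "channel" and "priority" fields are hypothetical artifact metadata.
const channel = "web"; // could come from context or an environment variable

const transformed = ($artifacts || [])
  .filter(a => !a.channel || a.channel === channel)       // drop artifacts meant for other channels
  .sort((a, b) => (a.priority || 0) - (b.priority || 0)); // lower value renders first

return {
  output: $output,
  artifacts: transformed
};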
When a Response Processor is active, streaming is disabled. Artifacts and the text response are delivered as a complete payload after processing. If the processor fails, the original untransformed response is returned, and the error is logged.

Adding a Response Processor

Click Add Script to open the script editor. The following are available as input to the script:
  1. Input — The original user input that triggered this agent run.
  2. Output — The response generated by the agent.
  3. Artifacts — An array of tool outputs and structured data returned during the run by different tools.
Response Processing Script: Provide the script that updates the output before delivering it to the end user. You can use JavaScript or Python for this scripting.
  • Access the input, output, or artifacts in the script using the $ prefix: $input, $output, $artifacts.
  • Access environment variables using the env keyword: env.<variable-name>.
  • Access content variables as: content.<variable-name>.
  • Access memory stores in the script using the memory methods.
Namespace: Select namespaces to make their variables available to the script.
Use Test Response Processor to validate the script's behavior.
Sample Script

// Extract the account balance from the textual response and return only the
// numeric value instead of the complete text.

console.log("[PostProcessor] Input received:", $input);
console.log("[PostProcessor] Output received:", $output);
console.log("[PostProcessor] Artifacts:", $artifacts);
console.log(env.name);     // access an environment variable
console.log(content.new);  // access a content variable

const outputData = $output;

let finalOutput;

// Extract the balance amount (a number) from text (e.g., "Your balance is 7236")
const match = outputData.match(/\d+/);

if (match) {
    finalOutput = {
      balance: Number(match[0])
    };
} else {
    finalOutput = {
      message: outputData
    };
}

return {
  output: finalOutput
};

Single Agent Configuration

In a Single Agent setup, all user requests are routed directly to the agent. Since no supervisor agent is involved, the agent's prompt serves as the primary instruction set for the underlying model. When processing a request, the platform constructs a single consolidated prompt by combining the following components in order, and sends it to the model (see the sketch after this list):
  1. Agent Prompt — The core instructions that define the agent’s role and behavior.
  2. Behavioral Instructions — Guidelines that control tone, constraints, and response style.
  3. Tools Assigned to the Agent — Tool definitions available for the agent to invoke.
  4. Events Enabled in the Application — Event-related context.
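
Conceptually, the assembly works like the following sketch (illustrative only; the platform performs this step internally):

// Illustrative only: the platform assembles the consolidated prompt internally.
const agentPrompt = "You are a leave management assistant...";
const behavioralInstructions = "Respond concisely and professionally.";
const toolDefinitions = "[tool definitions available to the agent]";
const eventContext = "[context for events enabled in the application]";

const consolidatedPrompt = [
  agentPrompt,            // 1. Agent Prompt
  behavioralInstructions, // 2. Behavioral Instructions
  toolDefinitions,        // 3. Tools assigned to the agent
  eventContext            // 4. Events enabled in the application
].join("\n\n");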

Supervisor Configuration

In addition to the configurations discussed above, configure the following for the Supervisor pattern:
  • Orchestrator Prompt — A set of instructions for the app's supervisor, including the requirements that guide the orchestrator's decision-making process (an illustrative example follows this list).
  • Orchestration Prompt for Voice-to-Voice Interactions — Instructions for the supervisor during voice interactions.
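
For example, an orchestrator prompt might read (illustrative only): "You coordinate the billing, order, and shipping agents. Delegate order lookups to the Order Agent and payment disputes to the Billing Agent, run independent subtasks in parallel, and merge the results into a single response. Ask a clarifying question when the user's intent is ambiguous."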

Adaptive Network Configuration

In addition to the configurations discussed above, configure the following for the Adaptive Network pattern:
  • Initial Agent — Select the agent that serves as the first point of contact for each task. This agent receives the user’s request, processes the initial requirements, and begins task execution.
For this pattern, also configure each agent with delegation rules that define when it hands off to another agent.
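For example (illustrative only), the HR agent in the onboarding app described earlier might carry a rule such as: "Once payroll and benefits enrollment are complete, hand off to the IT agent for account and device provisioning."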