Nodes are the building blocks of a workflow. Each node performs a specific operation—running an AI model, calling an API, routing execution, or handling human tasks—and passes its output to the next node through the workflow context.

How Nodes Perform Operations

When a workflow runs, each node receives input from the workflow context, processes it, and writes its output back. Downstream nodes reference that output using the {{context.steps.NodeName.output}} syntax. The platform tracks each node’s execution state in the steps object of the context:
  • input: Data received by the node at execution time.
  • output: Result produced by the node.
  • statusCode / isSuccessful: Whether execution succeeded, failed, or was skipped.
  • logs: Debug messages generated during execution.
  • executionTime: Time taken to complete the node.
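Placeholder resolution amounts to a dotted lookup into this structure. A minimal Python sketch (the resolve helper and the example context are illustrative, not the platform API):

```python
import re

# Illustrative snapshot of a workflow context after one node has run.
context = {
    "steps": {
        "ExtractInvoice": {
            "input": {"document": "invoice.pdf"},
            "output": "INV-1042",
            "isSuccessful": True,
            "logs": [],
            "executionTime": 1.8,
        }
    }
}

def resolve(template: str, context: dict) -> str:
    """Replace {{context.steps.NodeName.output}}-style placeholders
    with values looked up from the context dictionary."""
    def lookup(match):
        value = {"context": context}
        for key in match.group(1).split("."):
            value = value[key]
        return str(value)
    return re.sub(r"\{\{\s*(context[\w.]+)\s*\}\}", lookup, template)

print(resolve("Invoice: {{context.steps.ExtractInvoice.output}}", context))
# → Invoice: INV-1042
```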
The steps map is populated sequentially as the workflow runs:
  1. The steps object starts empty.
  2. An entry is created for each node as it begins execution.
  3. Outputs and timing data are appended on completion.
  4. On failure, error logs and status are recorded, and execution follows the configured failure path.
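This lifecycle can be sketched with a toy runner (illustrative only, not the platform's engine):

```python
import time

def run_workflow(nodes, initial_input):
    """Toy runner showing how the steps map fills in as nodes execute."""
    steps = {}  # 1. the steps object starts empty
    data = initial_input
    for name, fn in nodes:
        steps[name] = {"input": data}  # 2. entry created as the node begins
        started = time.monotonic()
        try:
            data = fn(data)
            steps[name].update(  # 3. output and timing appended on completion
                output=data,
                isSuccessful=True,
                executionTime=time.monotonic() - started,
            )
        except Exception as exc:  # 4. error logs and status recorded on failure
            steps[name].update(isSuccessful=False, logs=[str(exc)])
            break  # a real engine would follow the configured failure path
    return steps

steps = run_workflow([("Upper", str.upper), ("Trim", str.strip)], "  hello ")
print(steps["Trim"]["output"])  # → HELLO
```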

Node Types at a Glance

  • Control (Start, End, Delay): Define entry and exit points for the workflow, and pause execution for a defined duration.
  • AI (Text-to-Text, Text-to-Image, Image-to-Text, Audio-to-Text, Doc Intelligence, Agentic App): Run AI model tasks across text, images, audio, and documents.
  • Integration (API, Function, DocSearch): Connect to external systems and run custom code.
  • Logic (Condition, Loop, Split, Merge): Control branching, iteration, and parallel execution.
  • Human (Human): Route tasks to people for review, approval, or input.
  • Utility (Delay, Variable, Log): Pause execution, transform variables, and add logging.
  • Scanner (Input Scanner, Output Scanner): Validate prompts and responses using guardrail policies.

Control Nodes

Start Node

Every workflow begins with a Start Node. It defines how the workflow is triggered and validates the input schema. Configure triggers (API call, schedule, event, or manual run) directly in this node.
Start Node:
  trigger: api | schedule | event | manual
  input_schema:
    type: object
    properties:
      document: { type: string }
  validation:
    - required: document

End Node

The End Node terminates the workflow and returns the final output. Map outputs from any upstream node to the End Node’s output schema.
End Node:
  output_schema:
    type: object
    properties:
      result: { type: string }
      status: { type: string }
  output_mapping:
    result: "{{ai_node.output}}"
    status: "completed"

Delay Node

The Delay Node pauses workflow execution for a defined duration before proceeding to the next node. Use it to throttle processing, wait for an upstream system to become ready, or introduce a controlled interval between workflow steps.
Configuration fields:
  • Node Name: A descriptive label shown on the canvas to identify the node’s purpose.
  • Timeout (seconds): Duration to pause, as an integer between 30 and 86,400 seconds. Values outside this range trigger a validation error.
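The range check can be sketched as follows (the function name is ours, not a platform API):

```python
def validate_delay_timeout(seconds: int) -> int:
    """Reject timeouts outside the documented 30-86,400 second range."""
    if not isinstance(seconds, int) or not 30 <= seconds <= 86_400:
        raise ValueError(
            f"Timeout must be an integer between 30 and 86400 seconds, got {seconds!r}"
        )
    return seconds

validate_delay_timeout(300)    # fine: a five-minute pause
# validate_delay_timeout(10)   # would raise: below the 30-second minimum
```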

AI Nodes

AI nodes are multimodal components that use LLMs and AI models for specialized tasks. Each node processes inputs—text, image, audio, or documents—and generates outputs that downstream nodes can reference.

Text-to-Text

Generates, transforms, or analyzes text using an LLM.
Text-to-Text Node:
  model: gpt-4
  prompt: |
    Summarize the following text in 3 bullet points:

    {{input.text}}
  temperature: 0.7
  max_tokens: 500
  output_variable: summary
Use cases: summarization, content generation, translation, classification, entity extraction.

Text-to-Image

Generates images from text prompts.
Text-to-Image Node:
  model: dall-e-3
  prompt: "{{input.image_description}}"
  size: 1024x1024
  quality: standard
  output_variable: generated_image
Use cases: marketing visuals, product mockups, AI-driven design workflows.

Image-to-Text

Extracts text or descriptions from images using vision models.
Image-to-Text Node:
  model: gpt-4-vision
  image: "{{input.image_url}}"
  prompt: |
    Describe the contents of this image in detail.
    Extract any visible text.
  output_variable: image_analysis
Use cases: OCR, image captioning, visual QA, content moderation.

Audio-to-Text

Converts spoken audio to written text using OpenAI Whisper-1. Supports transcription in multiple languages and translation to English.
Audio-to-Text Node:
  model: whisper-1
  audio: "{{input.audio_url}}"
  language: auto
  timestamps: true
  output_variable: transcription
Supported formats: m4a, mp3, webm, mp4, mpga, wav, mpeg.
File size limit: 25 MB maximum. Split larger files at logical points to prevent mid-sentence breaks and processing delays.
Translation: Transcribes and translates non-English audio to English. Reverse translation (English to other languages) is not currently supported.
Note: Whisper processes up to 224 tokens in the input prompt. Input exceeding this limit is ignored.
Use cases: meeting transcription, customer support automation, subtitle generation, voice command processing.
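A client-side pre-flight check can catch unsupported or oversized files before upload. A sketch (the helper name is ours; the limits mirror those documented above):

```python
import os

MAX_AUDIO_BYTES = 25 * 1024 * 1024  # documented 25 MB upload limit
SUPPORTED_EXTENSIONS = {".m4a", ".mp3", ".webm", ".mp4", ".mpga", ".wav", ".mpeg"}

def check_audio_file(path: str) -> None:
    """Raise before upload if the file is an unsupported format or too large."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in SUPPORTED_EXTENSIONS:
        raise ValueError(f"Unsupported audio format: {ext}")
    if os.path.getsize(path) > MAX_AUDIO_BYTES:
        raise ValueError(
            "File exceeds 25 MB; split it at a logical pause point "
            "to avoid mid-sentence breaks."
        )
```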

Doc Intelligence

Extracts structured data from documents using OCR and AI models.
Doc Intelligence Node:
  document: "{{input.document_url}}"
  extraction_schema:
    - field: invoice_number
      type: string
    - field: amount
      type: number
    - field: line_items
      type: array
  ocr_model: default
  output_variable: extracted_data
Use cases: invoice processing, form extraction, contract analysis, receipt processing.

Agentic App Node

Integrates a deployed Agentic App into the workflow. When the workflow reaches this node, input is passed to the Agentic App, which interprets the task, performs the required processing, and returns output for downstream nodes.
Key behaviors:
  • Performs one turn of communication per execution. For multi-step interactions, add multiple Agentic App Nodes in sequence.
  • The Agentic App must be deployed and must belong to the same workspace as the workflow.
  • Output is accessible at {{context.steps.NodeName.output}}.
Agentic App Node:
  app_id: "customer-support-bot"
  input:
    query: "{{input.customer_question}}"
    context: "{{input.customer_info}}"
  output_variable: agent_response
Use cases: incident triage, invoice review and routing, complex decisions on unstructured data.

Integration Nodes

API Node

Calls external REST or SOAP APIs. Supports both synchronous and asynchronous execution, flexible authentication, and custom headers and payloads.
API Node:
  method: POST
  url: "https://api.example.com/orders"
  headers:
    Authorization: "Bearer {{env.API_KEY}}"
    Content-Type: "application/json"
  body:
    order_id: "{{input.order_id}}"
    action: "update_status"
  timeout: 30s
  retry:
    count: 3
    delay: 1s
  output_variable: api_response
Integration types:
  • Synchronous: Waits for the response before proceeding. Timeout range: 5-180 seconds (default: 60s).
  • Asynchronous: Continues processing without waiting. Timeout range: 30-300 seconds (default: 60s). Use No timeout for long-running processes such as approval workflows.
Body formats: application/json, application/xml, application/x-www-form-urlencoded, and Custom.
Authorization options:
  • Pre-authorize: Use a system-level token or client credentials already authorized in advance. The same credentials apply to all users.
  • Allow users to authorize: Each user provides their own credentials at runtime. Useful for user-specific services such as Google Drive.
Access the API node’s output using {{context.steps.APINodeName.output}}.
Use cases: data enrichment, document retrieval, webhook triggers, compliance checks, external notifications.
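The retry block in the API Node example above can be approximated with a small wrapper (a sketch; the helper and the flaky call are illustrative):

```python
import time

def call_with_retry(fn, count=3, delay=1.0):
    """Retry a flaky call up to `count` extra times with a fixed delay,
    mirroring the node's retry: {count: 3, delay: 1s} settings."""
    for attempt in range(count + 1):
        try:
            return fn()
        except Exception:
            if attempt == count:
                raise  # retries exhausted; surface the error
            time.sleep(delay)

# Usage: a call that fails twice, then succeeds on the third attempt.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient")
    return {"status": "ok"}

print(call_with_retry(flaky, count=3, delay=0))  # → {'status': 'ok'}
```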

Function Node

Runs custom code within the workflow.
Function Node:
  runtime: python3.9
  code: |
    def handler(input, context):
        # Process data
        result = input['value'] * 2
        return {'doubled': result}
  input:
    value: "{{previous_node.output}}"
  output_variable: function_result
Supported runtimes: Python 3.9+, Node.js 18+.

DocSearch Node

Searches a document repository and returns relevant results.
DocSearch Node:
  source: search_ai_app
  query: "{{input.search_query}}"
  filters:
    document_type: "policy"
  top_k: 5
  output_variable: search_results

Logic Nodes

Condition Node

Routes workflow execution based on whether defined conditions are met. Supports IF, ELSE IF, and ELSE paths with AND/OR logic.
Condition Node:
  conditions:
    - name: high_value
      expression: "{{amount}} > 10000"
      next: high_value_path
    - name: medium_value
      expression: "{{amount}} > 1000"
      next: medium_value_path
  default: standard_path
Condition types:
  • IF: Routes to a specific path when criteria are met.
  • ELSE IF: Evaluates additional criteria when the IF condition is not met.
  • ELSE: Fallback path when no conditions are satisfied.
Conditions support context variables ({{context.variable}}), previous node outputs ({{context.steps.NodeName.output}}), and static values.
A Condition Node can be called a maximum of 10 times in a single workflow run. Exceeding this limit results in an error.
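Evaluation order matters: an amount of 25,000 satisfies both expressions in the example above but takes the first matching path. A sketch of this routing (illustrative; not the platform's expression engine):

```python
def route(amount: float) -> str:
    """Evaluate conditions in order, IF / ELSE IF / ELSE style."""
    conditions = [
        ("high_value_path", lambda: amount > 10_000),   # IF
        ("medium_value_path", lambda: amount > 1_000),  # ELSE IF
    ]
    for path, check in conditions:
        if check():
            return path  # first match wins
    return "standard_path"  # ELSE: fallback when nothing matches

print(route(25_000))  # → high_value_path
print(route(5_000))   # → medium_value_path
print(route(500))     # → standard_path
```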

Loop Node

Iterates over a collection, executing a set of nodes for each item.
Loop Node:
  collection: "{{input.items}}"
  item_variable: current_item
  max_iterations: 100
  parallel: false
  body:
    - node: process_item
    - node: store_result
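Sequential iteration with the max_iterations cap can be sketched as (illustrative helper, not the platform runtime):

```python
def run_loop(items, body, max_iterations=100):
    """Apply the loop body to each item in order, honoring max_iterations."""
    results = []
    for i, current_item in enumerate(items):
        if i >= max_iterations:
            break  # safety cap, mirroring max_iterations: 100
        results.append(body(current_item))
    return results

print(run_loop([1, 2, 3], lambda x: x * 10))  # → [10, 20, 30]
```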

Split Node

Executes multiple branches in parallel.
Split Node:
  branches:
    - name: email_notification
      nodes: [send_email]
    - name: slack_notification
      nodes: [send_slack]
    - name: update_database
      nodes: [db_update]
  wait_for_all: true
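The wait_for_all behavior maps naturally onto a thread pool: submit every branch, then gather results before proceeding. A sketch (the helper is ours):

```python
from concurrent.futures import ThreadPoolExecutor

def run_split(branches: dict, wait_for_all: bool = True) -> dict:
    """Run each branch concurrently; with wait_for_all, block until every
    branch finishes before handing results downstream."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn) for name, fn in branches.items()}
        if wait_for_all:
            return {name: f.result() for name, f in futures.items()}
        return {}  # fire-and-forget sketch (the pool still joins on exit here)

results = run_split({
    "email_notification": lambda: "email sent",
    "slack_notification": lambda: "slack sent",
})
print(results["email_notification"])  # → email sent
```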

Merge Node

Combines results from parallel branches into a single output for downstream processing.
Merge Node:
  inputs:
    - branch: email_notification
      variable: email_result
    - branch: slack_notification
      variable: slack_result
  output_variable: merged_results

Human Node

Routes a task to a human for action. The workflow pauses at this node until the assigned person completes the task, then resumes from the next node.
Human Node:
  task_type: review | approval | input | classification
  title: "Review AI Extraction"
  instructions: |
    Review the extracted data and correct any errors.
  assignees:
    - role: reviewer
    - user: specific@email.com
  fields:
    - name: invoice_number
      type: text
      value: "{{ai_extract.invoice_number}}"
      editable: true
    - name: amount
      type: number
      value: "{{ai_extract.amount}}"
      editable: true
    - name: approved
      type: boolean
      default: false
  timeout:
    duration: 24h
    action: escalate
  escalation:
    assignees:
      - role: supervisor
Task types:
  • Review: Review and edit AI-generated output.
  • Approval: Approve or reject a workflow decision.
  • Input: Provide missing information required to continue the workflow.
  • Classification: Categorize or label data for downstream processing.

Utility Nodes

Delay Node

Pauses workflow execution for a fixed duration or until a scheduled time.
Delay Node:
  duration: 5m
  # Or wait until a specific time
  until: "{{input.scheduled_time}}"

Variable Node

Sets or transforms variables in the workflow context.
Variable Node:
  operations:
    - set: formatted_date
      value: "{{format_date(input.date, 'YYYY-MM-DD')}}"
    - set: full_name
      value: "{{input.first_name}} {{input.last_name}}"

Log Node

Logs messages and variable values for debugging.
Log Node:
  level: info | debug | warn | error
  message: "Processing order {{order_id}}"
  data:
    - order_id
    - customer_email

Scanner Nodes

Input and output scanners validate prompts and responses using guardrail policies configured in your workspace. Input scanners evaluate what is sent to an LLM node; output scanners evaluate what the LLM returns.
Prerequisite: Scanners must be deployed before you can add them to a workflow.
To add a scanner:
  1. Open your workflow and click Guardrails in the left navigation.
  2. In the Input Scanners section, click Add Scanner, select scanners from the list, and click Done.
  3. Click a scanner to configure its settings.
  4. Repeat for output scanners as needed.
Common scanner settings:
  • Toxicity: Risk score threshold; option to end the flow if the threshold is exceeded.
  • Regex: Patterns to ban, match type, and option to end the flow if the threshold is exceeded.
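A scanner decision reduces to a threshold comparison. A sketch (the default threshold and the return labels here are illustrative, not platform settings):

```python
def apply_toxicity_scanner(risk_score: float, threshold: float = 0.8,
                           end_flow_on_breach: bool = True) -> str:
    """Compare a guardrail risk score against the configured threshold
    and decide whether the flow continues, ends, or is merely flagged."""
    if risk_score <= threshold:
        return "pass"
    return "end_flow" if end_flow_on_breach else "flag"

print(apply_toxicity_scanner(0.3))   # → pass
print(apply_toxicity_scanner(0.95))  # → end_flow
```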

Error Handling

Try-Catch Pattern

Use try-catch blocks to handle node failures and route execution based on the error type.
Error Handling:
  try:
    - node: api_call
    - node: process_response
  catch:
    - condition: "{{error.type}} == 'timeout'"
      action: retry
      max_retries: 3
    - condition: "{{error.type}} == 'validation'"
      action: route_to_human
    - default:
      action: fail_workflow
      message: "{{error.message}}"

Retry Configuration

Retry:
  enabled: true
  max_attempts: 3
  delay: exponential
  initial_delay: 1s
  max_delay: 30s
  retryable_errors:
    - timeout
    - rate_limit
    - server_error
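With delay: exponential, the wait before each retry doubles from initial_delay up to max_delay. A sketch of that computation (illustrative helper name):

```python
def backoff_delay(attempt: int, initial_delay: float = 1.0,
                  max_delay: float = 30.0) -> float:
    """Exponential backoff: 1s, 2s, 4s, ... capped at max_delay."""
    return min(initial_delay * (2 ** attempt), max_delay)

print([backoff_delay(a) for a in range(6)])
# → [1.0, 2.0, 4.0, 8.0, 16.0, 30.0]
```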
Timeout precedence: Workflow timeout > Node timeout > Model timeout.