Custom Script Analytics provides run-level and log-level visibility into custom scripts in your account. Track execution metrics and logs for custom scripts deployed across API nodes, Function nodes, and direct API calls. Navigation: Go to Settings > Monitoring > Custom Scripts.

Prerequisites

  • At least one custom script must be deployed and executed via API call or API/Function node. If no script has been deployed and executed yet, or if it has been deployed but not run, no data is displayed.
  • If a previously executed script is undeployed, existing run and log data remains accessible. No new runs or logs are generated until the script is redeployed and executed again.

Dashboard overview

The dashboard has two tabs for analyzing script performance:
  • All Runs — Run-level data including status, deployed version, response time, function, and source.
  • Logs — Log-level data for functions executed within the script, including input, output, errors, and debug data.
In All Runs, all columns except Executed On can be used as filters. In Logs, all columns except Timestamp can be used as filters.

All Runs

The All Runs tab shows performance metrics and run-level metadata for the selected script.

Performance metrics

Metric | Description
Total Runs | Total executions since deployment. Indicates usage volume and billing impact.
Response Time (P90) | 90% of runs complete within this time. Lower values indicate reliable performance.
Response Time (P99) | 99% of runs complete within this time. Higher values suggest performance outliers or issues.
Failure Rate | Percentage of failed runs. For example, 1 failure in 3 runs = 33.33%.
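The failure-rate arithmetic and the P90/P99 percentiles above can be sketched in a few lines. The record fields (`status`, `response_ms`) are illustrative, not the platform's actual schema:

```python
# Sketch: deriving the dashboard metrics from raw run records.
# Field names (status, response_ms) are illustrative, not the platform's schema.
runs = [
    {"status": "Success", "response_ms": 120},
    {"status": "Success", "response_ms": 180},
    {"status": "Failed",  "response_ms": None},
]

total_runs = len(runs)
failed = sum(1 for r in runs if r["status"] == "Failed")
failure_rate = round(100 * failed / total_runs, 2)  # 1 failure in 3 runs -> 33.33

# Percentiles over completed runs only (failed or in-progress runs
# have no response time).
times = sorted(r["response_ms"] for r in runs if r["response_ms"] is not None)

def percentile(values, p):
    # Nearest-rank percentile: the smallest value covering p% of runs.
    k = max(0, -(-len(values) * p // 100) - 1)  # ceil(n * p / 100) - 1
    return values[int(k)]

print(failure_rate)           # 33.33
print(percentile(times, 90))  # 180
```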

Run-level data

Column | Description
Run ID | Unique identifier for the script run.
Status | Success, Failed, or In Progress.
Deployment Version | Version number, incrementing with each deployment.
Response Time | Execution duration. Empty for failed or in-progress runs.
Function | Name of the executed function.
Executed On | Date and time of execution.
Source Type | Tool or API, based on the triggering endpoint.
Source | Name of the triggering source.

Best practices for All Runs

  • Identify runs with outlier response times. Use the P90 and P99 thresholds to isolate underperforming runs.
  • Analyze the Source and Source Type to diagnose failures, delayed response times, and other issues.
  • Click a run record to open the record view for that run.

Logs

The Logs tab shows execution logs captured during script runs. Log visibility depends on how the script is configured:
  • Default logging (print(), console.log()): Logs appear only after the run completes.
  • korelogger: Logs populate in real-time, with structured log levels (Info, Debug, Warning, Error). See Enhanced logging.
  • Failed runs can generate logs if logging is implemented correctly.

Performance metrics

Total Logs indicates the total number of logs recorded during execution. This metric helps determine:
  • Script activity level — how many actions or events were logged.
  • Debugging depth — more logs indicate detailed logging, which aids debugging.
  • Execution complexity — a high log count may indicate multiple operations or functions.
  • Error visibility — whether sufficient logging is available to trace issues.

Log-level data

Column | Description
Log ID | Unique log identifier.
Log Level | Stdout, Stderr, Info, Debug, Warning, or Error. See Enhanced logging for gVisor-supported log levels.
Log Message | Recorded message for the specific action.
Timestamp | Date and time of the log entry.

Best practices for Logs

  • Analyze the input and output for each run (identified by Run ID) using log data: Log ID, Log level, Log message, and Timestamp.
  • Use the input and output code editors in the record view to analyze and troubleshoot logs.

Record view

Click any run in All Runs to open the record view. The record view shows:
  • Run ID
  • Log-level details: Log ID, Log level, Log message, and Timestamp
  • JSON editors showing the script’s input and the function’s output
  • Navigation buttons (or use K for previous, J for next)
Use the record view to:
  • Trace a specific run for debugging.
  • Inspect input and output values.
  • Identify failures, performance bottlenecks, unexpected inputs or outputs, and misconfigured logic.

Enhanced logging

The Platform supports two logging options for custom scripts running on the gVisor service.
  • Standard logging (print() in Python, console.log() in JavaScript): Logs appear in the Logs tab only after the script execution completes (success or failure).
  • korelogger: Logs stream in real-time as they are generated. Recommended for live monitoring and debugging due to its log-level control and immediate visibility.

Option 1: Standard logging

Standard logging uses default logging functions. Logs are captured as stdout during script execution. Example (Python):
def check_print_function():
    print("Checking print function...")
    print("Print function is working!")
    return
Output captured in stdout:
Checking print function...
Print function is working!
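Because default logging is captured as stdout, the behavior can be reproduced locally by redirecting standard output. This is a minimal sketch of that capture, not how the platform itself collects logs:

```python
# Sketch: default logging is captured as stdout, so anything written by
# print() becomes a log line once the run completes.
import contextlib
import io

def check_print_function():
    print("Checking print function...")
    print("Print function is working!")

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    check_print_function()

captured = buf.getvalue().splitlines()
print(captured)  # ['Checking print function...', 'Print function is working!']
```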
Option 2: korelogger

The korelogger library supports structured log levels and enables real-time log streaming. Logs are also captured in stdout in this format:
<LOG_LEVEL> :: <LOG_MESSAGE>
You can modify the log format as required.
Example (Python):
import korelogger
def call_openai_chat(prompt):
    korelogger.debug("Debug log using korelogger")
    korelogger.info("Info log using korelogger")
    korelogger.warning("Warning log using korelogger")
    korelogger.error("Error log using korelogger")
    return
Output captured in stdout:
DEBUG :: Debug log using korelogger
INFO :: Info log using korelogger
WARNING :: Warning log using korelogger
ERROR :: Error log using korelogger
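A downstream consumer can split these stdout lines back into level and message. A minimal sketch, assuming the default `<LOG_LEVEL> :: <LOG_MESSAGE>` format is unchanged:

```python
# Sketch: splitting korelogger's "<LOG_LEVEL> :: <LOG_MESSAGE>" stdout lines
# back into their parts, e.g. for filtering by log level.
lines = [
    "DEBUG :: Debug log using korelogger",
    "ERROR :: Error log using korelogger",
]

parsed = []
for line in lines:
    level, _, message = line.partition(" :: ")
    parsed.append({"level": level, "message": message})

errors = [p for p in parsed if p["level"] == "ERROR"]
print(errors)  # [{'level': 'ERROR', 'message': 'Error log using korelogger'}]
```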
Log trace format:
{
    "name": "gvisor_info_log",
    "context": {
        "trace_id": "0x3453665abxxxxxxxxxxxxxxxxxxxxxxx",
        "span_id": "0x7e3xxxxxxxxxxxx",
        "trace_state": "[]"
    },
    "kind": "SpanKind.INTERNAL",
    "parent_id": null,
    "start_time": "2025-05-14T06:07:27.238927Z",
    "end_time": "2025-05-14T06:07:27.238966Z",
    "status": {
        "status_code": "UNSET"
    },
    "attributes": {
        "traceparent": "00-abxxxxxxxxxxxxxxxxxxxxxxxxxx0-12345xxxxxxxxxxxf-01",
        "run_id": "run_12345",
        "deployment_id": "deploy_67890",
        "source": "api_call",
        "source_type": "test",
        "log.message": "Using korelogger to log",
        "log.level": "INFO",
        "log.trace_id": "00-abcdef12345xxxxxxxxxxxxxxxxxxxx0-12345xxxxxxxxxxf-01",
        "log.meta.msg": "Using korelogger to log",
        "log.meta.pid": "41",
        "log.meta.logid": "4XXXXXX5-5XX0-4XX6-bXX8-4XXXXXXXXXX6"
    },
    "events": [],
    "links": [],
    "resource": {
        "attributes": {
            "service.name": "gvisor-py-normal",
            "service.instance.id": "4XXXXXX1-9XX9-4XXb-9XXc-aXXXXXXXXXX1",
            "deployment.environment": "rnd-xxx.example.com"
        },
        "schema_url": ""
    }
}
Each log entry uses these identifying markers:
Field | Description
traceparent | Links related operations together.
run_id | Identifies each script execution.
deployment_id | Tracks which version of the script ran.
source | Shows where the log came from.
source_type | Categorizes the type of source.
Log messages and levels are available as log.message and log.level in the attributes field.
You can modify the structure of the attributes field as required.
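Reading the identifying markers out of a parsed trace record is straightforward. The dictionary below is a trimmed stand-in for the full trace shown above:

```python
# Sketch: pulling the identifying markers out of a parsed log trace record.
# This dict is a trimmed stand-in for a full trace, not the complete schema.
trace = {
    "attributes": {
        "run_id": "run_12345",
        "deployment_id": "deploy_67890",
        "source": "api_call",
        "source_type": "test",
        "log.level": "INFO",
        "log.message": "Using korelogger to log",
    }
}

attrs = trace["attributes"]
marker_keys = ("traceparent", "run_id", "deployment_id", "source", "source_type")
markers = {k: attrs.get(k) for k in marker_keys}  # missing keys map to None

print(markers["run_id"], attrs["log.level"])  # run_12345 INFO
```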

Export runs and logs

Export All Runs or Logs data as a .csv file. The export reflects the selected date range and applied column filters.
  1. Select the All Runs or Logs tab.
  2. Click the Ellipsis button (top-right) and select Export.
Files are saved with these naming conventions:
  • Runs data: <scriptname>_runs_data (example: Qbalance_runs_data)
  • Logs data: <scriptname>_logs_data (example: Qbalance_logs_data)
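The naming convention above can be expressed as a tiny helper; `export_filename` is a hypothetical illustration, not a platform API, and the `.csv` suffix is assumed from the export format:

```python
# Sketch of the export naming convention. export_filename is a hypothetical
# helper, and the .csv suffix is assumed from the stated export format.
def export_filename(script_name, kind):
    assert kind in ("runs", "logs")
    return f"{script_name}_{kind}_data.csv"

print(export_filename("Qbalance", "runs"))  # Qbalance_runs_data.csv
```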
Each user’s export runs independently. One user’s cancellation or adjustment does not affect another user’s export.