
Monitor performance and gain insights into your app’s usage.

Overview

The analytics dashboard provides real-time visibility into how your agentic app performs—tracking users, sessions, messages, and resource consumption across agents, tools, and models.

Dashboard Components

Key Metrics

Usage Overview

Metric    Current Period  Change
Users     1,234           ↑ 12%
Sessions  3,456           ↑ 8%
Messages  45,678          ↑ 15%
Tokens    2.3M            ↑ 18%

Metric    Description
Users     Unique active users in the period
Sessions  Total conversation sessions
Messages  Messages exchanged (user + agent)
Tokens    Total token consumption

Each metric is compared against the previous period, with daily/hourly breakdowns, week-over-week comparisons, and growth trends.
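As a rough illustration, a period-over-period change badge such as "↑ 12%" is simply a percentage change between two windows. The Python sketch below uses made-up counts; it is not the dashboard's actual calculation, just the arithmetic it implies.

```python
def growth(current: int, previous: int) -> float:
    """Percentage change from the previous period to the current one."""
    if previous == 0:
        raise ValueError("previous period count must be non-zero")
    return (current - previous) / previous * 100

# Hypothetical example: 1,234 users this week vs. 1,102 last week
print(f"{growth(1234, 1102):+.0f}%")  # +12%
```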

Run Analytics

Track execution across your app’s components.

Agent Runs Example

Agent          Runs   Avg Response  Tokens  Success
Support Agent  1,234  2.3s          450K    98.5%
Billing Agent  567    1.8s          180K    99.1%
Order Agent    890    3.1s          320K    97.8%

Tool Runs Example

Tool Type  Tool            Runs   Avg Time  Success
Workflow   get_order       890    450ms     99.2%
Code       validate_input  1,200  120ms     99.8%
MCP        crm_lookup      456    890ms     96.5%
Knowledge  faq_search      2,100  340ms     99.9%

Model Runs Example

Model    Invocations  Avg Latency  Tokens  Cost
gpt-4o   3,400        1.2s         1.8M    $45.20
gpt-3.5  1,200        0.4s         320K    $0.64

Traces

Traces provide detailed visibility into individual request lifecycles. A trace represents a single request-response cycle within a session — one user message and everything the agent did to respond to it. A session with multiple user turns contains multiple traces. An observation (a generation, span, or event) is an individual step within a trace — a model call, a tool invocation, a preprocessor run, or an event execution.
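The session → trace → observation hierarchy can be modeled as plain data structures. The following is a minimal Python sketch with hypothetical class and field names, not the platform's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    kind: str              # "generation", "span", or "event"
    name: str
    duration_ms: float = 0.0

@dataclass
class Trace:
    request_id: str        # one user message and the work done to answer it
    observations: list[Observation] = field(default_factory=list)

    def span_time_ms(self) -> float:
        """Total time recorded across this trace's spans."""
        return sum(o.duration_ms for o in self.observations if o.kind == "span")

@dataclass
class Session:
    session_id: str        # one continuous user interaction, many traces
    traces: list[Trace] = field(default_factory=list)

# Made-up values loosely following the example trace below
trace = Trace("req_abc123", [
    Observation("span", "Tool Execution", 450),
    Observation("span", "LLM Generation", 2700),
    Observation("generation", "Support Agent response"),
])
session = Session("sess_xyz789", [trace])
print(len(session.traces), trace.span_time_ms())  # 1 3150.0
```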

What’s in a Trace

Trace: req_abc123
├── Start: 2024-01-15 14:30:22.123
├── End: 2024-01-15 14:30:25.456
├── Duration: 3.333s

├── Events
│   ├── [14:30:22.123] Request received
│   ├── [14:30:22.145] Agent selected: Support Agent
│   ├── [14:30:22.200] Tool invoked: get_order_status
│   ├── [14:30:22.650] Tool response received
│   ├── [14:30:22.700] LLM generation started
│   ├── [14:30:25.400] LLM generation completed
│   └── [14:30:25.456] Response sent

├── Spans
│   ├── Agent Processing: 3.2s
│   ├── Tool Execution: 450ms
│   └── LLM Generation: 2.7s

└── Generations
    └── Support Agent response
        ├── Model: gpt-4o
        ├── Input tokens: 1,234
        ├── Output tokens: 256
        └── Latency: 2.7s

Trace Benefits

  • Debug request flow issues.
  • Identify bottlenecks.
  • Understand agent behavior.
  • Optimize performance.

Sessions

Sessions track continuous user interactions. Each session captures one complete user interaction from start to finish, broken down into traces and observations. Each session records the full execution — user messages, agent decisions, tool calls, model invocations, and the final response. Use sessions to debug unexpected agent behavior, trace failures, and inspect model-level inputs and outputs.

Voice Event Logs

Voice event logs provide end-to-end visibility into real-time voice interactions, capturing the prompt sent to the model, tool and agent invocations, token usage, and request/response payloads for each call. These logs cover the full interaction lifecycle for sessions that originate from AI for Service, surfacing telemetry directly within the session view. To access voice event logs, open a voice session and click View Event Logs in the banner at the top of the session panel.

Voice event logs are currently available for OpenAI real-time models only, across all orchestration patterns. Logs are not yet available for Gemini, Azure OpenAI, or Ultravox models.
Session View
Session: sess_xyz789
├── User: user_456
├── Started: 2024-01-15 14:25:00
├── Duration: 12 minutes
├── Traces: 5

├── Trace 1: "What's my order status?"
│   └── Agent: Support Agent, Duration: 3.3s

├── Trace 2: "When will it arrive?"
│   └── Agent: Support Agent, Duration: 2.1s

├── Trace 3: "Can I change the address?"
│   └── Agent: Order Agent, Duration: 4.5s

├── Trace 4: "What's the cost?"
│   └── Agent: Billing Agent, Duration: 1.8s

└── Trace 5: "Thanks, that's all"
    └── Agent: Support Agent, Duration: 0.8s

Total Cost: $0.12
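The session's Total Cost is the sum of generation costs across its traces. A tiny sketch with made-up per-trace costs (the individual figures are illustrative, not taken from the example above):

```python
# Hypothetical per-trace generation costs for a five-trace session
trace_costs = [0.032, 0.021, 0.041, 0.018, 0.008]
total_cost = sum(trace_costs)
print(f"Total Cost: ${total_cost:.2f}")  # Total Cost: $0.12
```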

Generations

Track individual LLM outputs within traces.

Generation Details

Field          Value
Model          gpt-4o
Input tokens   1,234
Output tokens  256
Latency        2.7s
Cost           $0.032
Temperature    0.7
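Generation cost is typically derived from the input and output token counts at per-token rates. The sketch below uses illustrative rates and a hypothetical helper name; it is not the platform's pricing logic or any model's actual price list.

```python
def generation_cost(input_tokens: int, output_tokens: int,
                    in_rate: float, out_rate: float) -> float:
    """Cost in USD, with rates expressed per 1M tokens (illustrative only)."""
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Made-up rates: $5/1M input tokens, $15/1M output tokens
cost = generation_cost(1234, 256, in_rate=5.0, out_rate=15.0)
print(f"${cost:.3f}")  # $0.010
```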

Quality Assessment

  • Review response quality.
  • Identify hallucinations.
  • Track instruction following.

Filtering

Customize your analytics view:
Filter       Options
Time Range   Last hour, Last 24 hours, Last 7 days, Last 30 days, or Custom range
Environment  Draft (development), Staging, or Production
Dimensions   By agent, tool, model, or user segment
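Conceptually, these filters compose as predicates over session records. A minimal Python sketch with made-up records; the dashboard applies the equivalent filters server-side.

```python
from datetime import datetime, timedelta

# Hypothetical session records
sessions = [
    {"id": "sess_1", "env": "production", "started": datetime(2024, 1, 15, 14, 25)},
    {"id": "sess_2", "env": "draft",      "started": datetime(2024, 1, 15, 9, 0)},
    {"id": "sess_3", "env": "production", "started": datetime(2024, 1, 10, 11, 30)},
]

def filter_sessions(records, env, since):
    """Keep records in the given environment started at or after `since`."""
    return [r for r in records if r["env"] == env and r["started"] >= since]

# "Last 7 days" relative to a fixed reference time, production only
now = datetime(2024, 1, 15, 15, 0)
recent = filter_sessions(sessions, "production", now - timedelta(days=7))
print([r["id"] for r in recent])  # ['sess_1', 'sess_3']
```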

Exporting Data

Download analytics for external analysis:

Available Exports

  • CSV: Spreadsheet-compatible
  • JSON: Programmatic analysis
  • PDF: Shareable reports

Export Options

export:
  format: csv
  date_range: last_30_days
  include:
    - sessions
    - traces
    - generations
    - tool_runs
  filters:
    environment: production
    agent: Support Agent
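If you export JSON for programmatic analysis, converting selected fields to CSV needs only the standard library. A hedged sketch; the field names below are assumed from the tables above, not a documented export schema.

```python
import csv
import io
import json

# Hypothetical JSON export payload
payload = json.loads("""[
  {"agent": "Support Agent", "runs": 1234, "success": "98.5%"},
  {"agent": "Billing Agent", "runs": 567,  "success": "99.1%"}
]""")

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["agent", "runs", "success"])
writer.writeheader()
writer.writerows(payload)
print(buf.getvalue().strip())
```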

Alerts

Configure notifications for important events:

Alert Types

alerts:
  - name: High error rate
    condition: error_rate > 5%
    window: 1 hour
    action: email

  - name: Slow responses
    condition: avg_latency > 5s
    window: 15 minutes
    action: slack

  - name: Cost spike
    condition: daily_cost > $100
    window: 1 day
    action: email
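An alert rule is essentially a threshold checked over a window. The sketch below mirrors the YAML rules with a hypothetical evaluator; windowed aggregation and notification dispatch are omitted.

```python
# Thresholds mirror the YAML rules above (error_rate as a fraction, latency in
# seconds, cost in USD). This is an illustrative evaluator, not the platform's.
ALERTS = [
    {"name": "High error rate", "metric": "error_rate",  "threshold": 0.05},
    {"name": "Slow responses",  "metric": "avg_latency", "threshold": 5.0},
    {"name": "Cost spike",      "metric": "daily_cost",  "threshold": 100.0},
]

def evaluate(metrics: dict) -> list[str]:
    """Return the names of alerts whose current metric exceeds its threshold."""
    return [a["name"] for a in ALERTS if metrics.get(a["metric"], 0) > a["threshold"]]

fired = evaluate({"error_rate": 0.07, "avg_latency": 1.2, "daily_cost": 140.0})
print(fired)  # ['High error rate', 'Cost spike']
```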

Audit Logs

Track all changes made across your account.

What’s Logged

  • User actions (create, update, delete)
  • Configuration changes
  • Deployments
  • Access events

Log Entry

Event: Tool Updated
User: alice@company.com
Time: 2024-01-15 14:30:00
Details:
  Tool: get_order_status
  Changes:
    - timeout: 30s → 60s
    - description: Updated

Compliance Uses

  • Track who changed what.
  • Maintain audit trail.
  • Support security reviews.

Best Practices

Monitor Key Metrics

Focus on metrics that matter:
  • Success rate: Are requests completing successfully?
  • Latency: Is performance acceptable?
  • Cost: Is spending within budget?
  • User satisfaction: Are users getting help?

Set Baselines

Establish normal ranges to detect anomalies:
baselines:
  success_rate: 95-99%
  avg_latency: 1-3s
  daily_cost: $20-50
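Baseline checks reduce to range tests: a metric outside its normal band is a candidate anomaly. A minimal sketch using the ranges above (function and structure names are illustrative):

```python
# Baseline ranges from the config above, as (low, high) pairs
BASELINES = {
    "success_rate": (0.95, 0.99),
    "avg_latency": (1.0, 3.0),   # seconds
    "daily_cost": (20.0, 50.0),  # USD
}

def anomalies(metrics: dict) -> list[str]:
    """Return the names of metrics falling outside their baseline range."""
    flagged = []
    for name, value in metrics.items():
        low, high = BASELINES[name]
        if not (low <= value <= high):
            flagged.append(name)
    return flagged

print(anomalies({"success_rate": 0.97, "avg_latency": 4.2, "daily_cost": 35.0}))
# ['avg_latency']
```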

Review Regularly

  • Daily: Quick health check
  • Weekly: Trend analysis
  • Monthly: Deep dive and optimization

Act on Insights

Use analytics to drive improvements:
  • Slow agent? Optimize tools or prompts.
  • High error rate? Review configurations.
  • Cost spike? Check token usage patterns.