Building Applications
This guide walks you through building a multi-agent application with the AgenticAI Core SDK. You’ll define agents, tools, memory stores, and an orchestrator, then launch the application server.
Prerequisites
AgenticAI Core SDK installed and configured.
A configured LLM provider connection (OpenAI, Anthropic, or Azure OpenAI).
Python 3.8+ with async/await support.
Application components
| Component | Purpose | Guide |
| --- | --- | --- |
| App | Top-level container for all components. | This guide |
| Agent | Autonomous unit that handles tasks. | Creating Agents |
| Tool | Function an agent can invoke. | Working with Tools |
| MemoryStore | Persistent data store scoped to users, sessions, or the app. | Memory Stores |
| AppConfigurations | Streaming, attachments, filler messages. | This guide |
| Orchestrator | Routes messages between agents and users. | Custom Orchestration |
Build the application
1. Define the application
Create an App instance with a name and orchestration type:
from agenticai_core.designtime.models import App, OrchestratorType

app = App(
    name="Customer Service Bot",
    description="Multi-agent customer service application",
    orchestrationType=OrchestratorType.CUSTOM_SUPERVISOR
)
2. Create agents
Define agents for each domain or function in your application:
from agenticai_core.designtime.models import Agent
from agenticai_core.designtime.models.llm_model import LlmModel, LlmModelConfig
from agenticai_core.designtime.models.prompt import Prompt

support_agent = Agent(
    name="SupportAgent",
    description="Handles general support queries",
    role="WORKER",
    sub_type="REACT",
    type="AUTONOMOUS",
    llm_model=LlmModel(
        model="gpt-4o",
        provider="Open AI",
        connection_name="Default Connection",
        modelConfig=LlmModelConfig(
            temperature=0.7,
            max_tokens=1600
        )
    ),
    prompt=Prompt(
        system="You are a helpful support agent.",
        custom="Assist customers with their inquiries."
    )
)
See Creating Agents for roles, sub-types, and prompt configuration.
3. Register tools
Register tools using the @Tool.register decorator. Agents can invoke them during task execution:
from agenticai_core.designtime.models.tool import Tool

@Tool.register(
    name="search_knowledge_base",
    description="Search the knowledge base for information"
)
def search_knowledge_base(query: str) -> dict:
    # Your implementation
    results = perform_search(query)
    return {"results": results}

@Tool.register(
    name="create_ticket",
    description="Create a support ticket"
)
def create_ticket(title: str, description: str) -> dict:
    # Your implementation
    ticket_id = create_support_ticket(title, description)
    return {"ticket_id": ticket_id, "status": "created"}
See Working with Tools for custom, inline, and library tool types.
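Because agents consume whatever a tool returns, it usually pays to catch failures inside the tool and return a structured error instead of raising. The sketch below shows that pattern as plain Python, independent of the SDK; make_safe_tool and fake_search are illustrative names, not part of AgenticAI Core, and in practice the wrapped function would be registered with @Tool.register as above.

```python
from typing import Callable

def make_safe_tool(fn: Callable[[str], list]) -> Callable[[str], dict]:
    """Wrap a raw search function so it returns structured results or errors."""
    def tool(query: str) -> dict:
        if not query or not query.strip():
            # Validate input up front so the agent gets an actionable message.
            return {"error": "query must be a non-empty string"}
        try:
            return {"results": fn(query)}
        except Exception as exc:
            # Surface the failure as data the agent can reason about.
            return {"error": f"search failed: {exc}"}
    return tool

# Illustrative stand-in for a real search backend.
def fake_search(query: str) -> list:
    if query == "boom":
        raise RuntimeError("index unavailable")
    return [f"doc about {query}"]

search_tool = make_safe_tool(fake_search)
```

With this shape, a routing agent can distinguish "no results" from "the tool broke" and respond accordingly.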
4. Set up memory stores
Configure persistent storage for conversation state, user preferences, or application-wide data:
from agenticai_core.designtime.models.memory_store import (
    MemoryStore, Namespace, NamespaceType,
    RetentionPolicy, RetentionPeriod, Scope
)

conversation_memory = MemoryStore(
    name="Conversation History",
    technical_name="conversationHistory",
    type="hotpath",
    description="Stores conversation messages",
    schema_definition={
        "type": "object",
        "properties": {
            "messages": {"type": "array"},
            "context": {"type": "object"}
        }
    },
    strict_schema=False,
    namespaces=[
        Namespace(
            name="session_id",
            type=NamespaceType.DYNAMIC,
            value="{session_id}",
            description="Session identifier"
        )
    ],
    scope=Scope.SESSION_LEVEL,
    retention_policy=RetentionPolicy(
        type=RetentionPeriod.SESSION,
        value=1
    )
)
See Memory Stores for scopes, retention policies, and namespaces.
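To make strict_schema concrete: with strict validation enabled, writes whose payload does not match schema_definition are rejected, while a loose store tolerates extra shapes. The hand-rolled check below only illustrates the idea of structural validation against the schema defined above; it is not the SDK's actual validation logic.

```python
def matches_schema(payload: dict, schema: dict) -> bool:
    """Minimal structural check: declared properties must have the right JSON type."""
    type_map = {"object": dict, "array": list, "string": str}
    if schema.get("type") == "object" and not isinstance(payload, dict):
        return False
    for name, spec in schema.get("properties", {}).items():
        if name in payload:
            expected = type_map.get(spec.get("type"))
            if expected and not isinstance(payload[name], expected):
                return False
    return True

# The schema_definition from the memory store above.
schema = {
    "type": "object",
    "properties": {
        "messages": {"type": "array"},
        "context": {"type": "object"},
    },
}
```

A payload like `{"messages": [], "context": {}}` passes this check, while `{"messages": "oops"}` fails it; with strict_schema=False the store would accept both.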
5. Configure the application
Set application-wide settings such as streaming, file attachments, and filler messages:
from agenticai_core.designtime.models.app_configuration import (
    AppConfigurations, FillerMessages, FillerMessageMode, StaticConfig
)

config = AppConfigurations(
    streaming=True,
    attachments={
        "enabled": True,
        "maxFileCount": 5,
        "maxFileSize": 10
    },
    filler_messages=FillerMessages(
        enabled=True,
        initial_delay=1000,
        interval_duration=2000,
        max_message_count=3,
        mode=FillerMessageMode.STATIC,
        static_config=StaticConfig(
            messages=[
                {"type": "text", "value": "Processing your request..."},
                {"type": "text", "value": "Searching for information..."},
                {"type": "text", "value": "Almost there..."}
            ]
        )
    )
)
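Assuming initial_delay is measured from when processing starts and interval_duration separates subsequent messages (a reading of the field names, not a documented guarantee), the configuration above produces the following timeline:

```python
def filler_schedule(initial_delay_ms: int, interval_ms: int, max_count: int) -> list:
    """Approximate fire times (ms) for static filler messages."""
    return [initial_delay_ms + i * interval_ms for i in range(max_count)]

# initial_delay=1000, interval_duration=2000, max_message_count=3
# -> messages fire at roughly 1000 ms, 3000 ms, and 5000 ms.
```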
6. Assemble the application
Combine all components into the App instance:
# billing_agent, technical_agent, and user_preferences are built the same
# way as support_agent and conversation_memory above.
app = App(
    name="Customer Service Bot",
    description="Multi-agent customer service application",
    orchestrationType=OrchestratorType.CUSTOM_SUPERVISOR,
    agents=[support_agent, billing_agent, technical_agent],
    memory_stores=[conversation_memory, user_preferences],
    configurations=config
)
7. Implement the orchestrator
Subclass AbstractOrchestrator to define routing logic:
from agenticai_core.runtime.agents.abstract_orchestrator import AbstractOrchestrator

class CustomerServiceOrchestrator(AbstractOrchestrator):
    def __init__(self, agents):
        super().__init__(name="CustomerServiceOrchestrator", agents=agents)

    def route(self, request):
        """Route requests to appropriate agents."""
        user_message = request.get("message", "").lower()
        # Simple routing logic
        if "billing" in user_message or "payment" in user_message:
            return "BillingAgent"
        elif "technical" in user_message or "error" in user_message:
            return "TechnicalAgent"
        else:
            return "SupportAgent"
See Custom Orchestration for message handling, routing strategies, and memory-aware orchestration.
8. Start the application
Launch the MCP server with your orchestrator:
from agenticai_core.designtime.models.tool import ToolsRegistry

if __name__ == "__main__":
    app.start(
        orchestrator_cls=CustomerServiceOrchestrator,
        host="0.0.0.0",
        port=8080
    )
Complete example
from agenticai_core.designtime.models import App, Agent, OrchestratorType
from agenticai_core.designtime.models.llm_model import LlmModel, LlmModelConfig
from agenticai_core.designtime.models.prompt import Prompt
from agenticai_core.designtime.models.tool import Tool, ToolsRegistry
from agenticai_core.runtime.agents.abstract_orchestrator import AbstractOrchestrator

# 1. Register tools
@Tool.register(name="search", description="Search knowledge base")
def search(query: str):
    return {"results": []}

# 2. Create agents
agent = Agent(
    name="Assistant",
    description="General purpose assistant",
    role="WORKER",
    sub_type="REACT",
    type="AUTONOMOUS",
    llm_model=LlmModel(
        model="gpt-4o",
        provider="Open AI",
        connection_name="Default",
        modelConfig=LlmModelConfig(temperature=0.7, max_tokens=1600)
    ),
    prompt=Prompt(
        system="You are a helpful assistant.",
        custom="Help users with their questions."
    )
)

# 3. Create app
app = App(
    name="My App",
    description="AI Assistant Application",
    orchestrationType=OrchestratorType.CUSTOM_SUPERVISOR,
    agents=[agent]
)

# 4. Create orchestrator
class MyOrchestrator(AbstractOrchestrator):
    def __init__(self, agents):
        super().__init__(name="MyOrchestrator", agents=agents)

# 5. Start
if __name__ == "__main__":
    app.start(
        orchestrator_cls=MyOrchestrator,
        port=8080
    )
Best practices
Agent design: Keep agents focused on specific domains. Write detailed descriptions, since the orchestrator uses them for routing. Include prompt guidelines and examples.
Tool design: Make tools single-purpose with clear descriptions. Handle errors and return meaningful messages.
Memory management: Choose the right scope (USER_SPECIFIC, SESSION_LEVEL, or APPLICATION_WIDE). Define schemas for strict validation in production. Set retention policies that match actual data lifetime.
Orchestration: Implement a route_to_user fallback for unmatched queries. Maintain context across conversation turns using memory stores.
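The orchestration advice can be sketched as a plain routing function: keyword hits pick an agent, and anything unmatched falls back to asking the user rather than guessing. The keyword table and the ROUTE_TO_USER sentinel are illustrative assumptions; the SDK's actual fallback hook may differ.

```python
ROUTE_TO_USER = "ROUTE_TO_USER"  # illustrative sentinel for "ask the user to clarify"

KEYWORD_ROUTES = {
    "billing": "BillingAgent",
    "payment": "BillingAgent",
    "technical": "TechnicalAgent",
    "error": "TechnicalAgent",
}

def route(message: str) -> str:
    """Route by keyword hits; fall back to the user when nothing matches."""
    text = message.lower()
    hits: dict = {}
    for keyword, agent in KEYWORD_ROUTES.items():
        if keyword in text:
            hits[agent] = hits.get(agent, 0) + 1
    if not hits:
        # Unmatched query: hand control back to the user instead of guessing.
        return ROUTE_TO_USER
    # Pick the agent with the most keyword matches.
    return max(hits, key=hits.get)
```

Inside a real AbstractOrchestrator subclass, this logic would live in route(), with the memory stores above supplying context from earlier turns.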