
Overview

LoopOS AI Server is a Python-based FastAPI application that hosts all AI agents and services. It serves as the backbone of the LoopOS AI ecosystem, providing a scalable and modular infrastructure for AI-powered operations.

Technology Stack

FastAPI

Modern, fast web framework for building APIs with automatic OpenAPI documentation.

OpenAI Agents

Multi-agent framework for building conversational AI systems with tool calling and handoffs.

Langfuse

Observability platform for tracing, token tracking, and cost monitoring.

Pydantic

Data validation using Python type annotations for request/response models.

Architecture Components

Service Layer

Each service in LoopOS AI follows a consistent architecture pattern:
Service Router (FastAPI)
    ↓
Service Class (LoopOsAiService)
    ↓
Agent(s) (OpenAI Agents)
    ↓
Tools (Function calling)

Multi-Agent System

LoopOS AI uses a multi-agent architecture where specialized agents work together:

Product Identification Agent

Identifies products from catalog and collects basic product information.

Protocol Agent

Handles protocol questions after item creation, asking for and validating the required information.

C2C Descriptor Agent

Generates marketplace listings with pricing, descriptions, and condition assessment.

Validation Agents

Multiple specialized agents for title/description, brand, and product validation.

Agent Handoffs

Agents can hand off to other agents when specific conditions are met:
# Example: Product Identification → Options → Create Item → Protocol
loop_os_ai_submission_product_identification_agent
    ↓ (handoff when product confirmed)
loop_os_ai_submission_product_options_agent
    ↓ (handoff when options selected)
loop_os_ai_submission_create_item_agent
    ↓ (handoff when item created)
loop_os_ai_submission_protocol_agent
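
As an illustration, the chain above could be wired with the OpenAI Agents SDK roughly as follows. This is a sketch of the handoff pattern, not the actual wiring: the instructions are placeholders, and only the agent names come from the diagram above.

from agents import Agent

protocol_agent = Agent(
    name="loop_os_ai_submission_protocol_agent",
    instructions="Ask and validate the protocol questions for the created item.",
)

create_item_agent = Agent(
    name="loop_os_ai_submission_create_item_agent",
    instructions="Create the item, then hand off to the protocol agent.",
    handoffs=[protocol_agent],
)

product_options_agent = Agent(
    name="loop_os_ai_submission_product_options_agent",
    instructions="Collect product options, then hand off to item creation.",
    handoffs=[create_item_agent],
)

product_identification_agent = Agent(
    name="loop_os_ai_submission_product_identification_agent",
    instructions="Identify the product from the catalog, then hand off once confirmed.",
    handoffs=[product_options_agent],
)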

Service Structure

All services inherit from LoopOsAiService, which provides:
  • Session management: Track conversations and context across requests
  • Context handling: Manage agent context and state
  • Observability: Automatic Langfuse tracing
  • Error handling: Consistent error responses
  • Tool integration: Standardized tool calling patterns

Base Service Interface

from abc import ABC, abstractmethod

from agents import Agent
from pydantic import BaseModel


class LoopOsAiService(ABC):
    name: str  # Service identifier
    starting_agent: Agent  # Entry point agent
    context: BaseModel  # Service-specific context

    @abstractmethod
    async def process(self, body: dict, files: list | None = None) -> dict:
        """Process the request, run the starting agent, and return the response."""
        ...
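
A concrete service subclasses this interface and runs its starting agent with the Agents SDK runner. The sketch below is hypothetical: the class name, payload shape, and response format are assumptions, not taken from the codebase.

from agents import Agent, Runner


class SubmissionService(LoopOsAiService):
    # Hypothetical concrete service built on the base class above.
    name = "submission"
    starting_agent = Agent(
        name="loop_os_ai_submission_product_identification_agent",
        instructions="Identify the product the user wants to submit.",  # placeholder prompt
    )

    async def process(self, body: dict, files: list | None = None) -> dict:
        # Run the entry-point agent on the incoming input and return its final output.
        result = await Runner.run(self.starting_agent, input=body["messages"])
        return {"output": result.final_output}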

Request Flow

1. Request Received: FastAPI router receives the HTTP request and validates input using Pydantic models.
2. Service Initialization: The service class initializes its context with request data and business context.
3. Agent Execution: The starting agent processes the input items, calls tools, and may hand off to other agents.
4. Tool Execution: Agents call tools (e.g., get_catalog, create_item) which interact with external systems.
5. Response Generation: The agent generates structured output; the service formats the response and returns it to the client.
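
Put together, a router for this flow might look like the sketch below. The endpoint path, request model, and service class are illustrative assumptions, not the production definitions.

from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter(prefix="/submission", tags=["submission"])


class SubmissionRequest(BaseModel):
    session_id: str
    messages: list  # conversation input items for the agent run


@router.post("/chat")
async def chat(payload: SubmissionRequest) -> dict:
    # Steps 2-5: initialize the service, run the agent, and return its response.
    service = SubmissionService()  # hypothetical concrete LoopOsAiService (sketched earlier)
    return await service.process(body=payload.model_dump())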

Observability

LoopOS AI includes comprehensive observability:

Langfuse Integration

  • Automatic tracing: All agent runs are traced with input/output
  • Token tracking: Monitor token usage and costs per request
  • Session correlation: Track conversations across multiple requests
  • Performance metrics: Latency, throughput, and error rates
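
The exact Langfuse wiring isn't shown in this overview; as one hedged illustration, the SDK's observe decorator can wrap a service call so each run appears as a trace with its inputs and outputs.

from langfuse import observe  # Langfuse SDK v3; in v2 the import is langfuse.decorators


@observe()  # records a trace for every call, capturing arguments and return value
async def run_submission(body: dict) -> dict:
    # ... run the starting agent here and return its structured output ...
    return {"output": "..."}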

Logging

  • Structured logging for all service operations
  • Error tracking with stack traces
  • Request/response logging for debugging
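
The pattern is roughly the following stdlib sketch; the logger name, fields, and service call are illustrative.

import logging

logger = logging.getLogger("loopos_ai.submission")  # illustrative logger name


async def run_service(body: dict) -> dict:
    # Stand-in for the real service call.
    return {"output": "..."}


async def handle(body: dict) -> dict:
    logger.info("request received", extra={"session_id": body.get("session_id")})
    try:
        response = await run_service(body)
    except Exception:
        logger.exception("request failed")  # error tracking with stack trace
        raise
    logger.info("response sent", extra={"session_id": body.get("session_id")})
    return response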

Middleware Stack

The server includes several middleware layers:

CORS

Cross-origin resource sharing for web client access.

GZip Compression

Automatic compression for large responses.

Request Size Limits

Maximum request size enforcement (default: 10MB).

Rate Limiting

Per-endpoint rate limiting using slowapi.

Timeout

Request timeout protection (default: 120 seconds).
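
A hedged sketch of how this stack could be assembled is shown below. CORSMiddleware and GZipMiddleware are the standard Starlette/FastAPI middleware, and Limiter comes from slowapi as named above; the size-limit and timeout middleware are simplified stand-ins for custom middleware, with the defaults quoted above.

import asyncio

from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from fastapi.middleware.gzip import GZipMiddleware
from fastapi.responses import JSONResponse
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address

app = FastAPI(title="LoopOS AI Server")

# Per-endpoint rate limiting via slowapi (limits are then declared on routes).
limiter = Limiter(key_func=get_remote_address)
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

# Standard Starlette/FastAPI middleware for CORS and response compression.
app.add_middleware(CORSMiddleware, allow_origins=["*"], allow_methods=["*"], allow_headers=["*"])
app.add_middleware(GZipMiddleware, minimum_size=1000)

MAX_BODY_BYTES = 10 * 1024 * 1024  # 10MB default noted above
REQUEST_TIMEOUT_S = 120            # 120-second default noted above


@app.middleware("http")
async def limit_request_size(request: Request, call_next):
    # Simplified stand-in for the request-size-limit middleware.
    if int(request.headers.get("content-length") or 0) > MAX_BODY_BYTES:
        return JSONResponse({"detail": "Request body too large"}, status_code=413)
    return await call_next(request)


@app.middleware("http")
async def enforce_timeout(request: Request, call_next):
    # Simplified stand-in for the timeout-protection middleware.
    try:
        return await asyncio.wait_for(call_next(request), timeout=REQUEST_TIMEOUT_S)
    except asyncio.TimeoutError:
        return JSONResponse({"detail": "Request timed out"}, status_code=504)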

Deployment

LoopOS AI Server is deployed on DigitalOcean App Platform:
https://loopos-ai-server-3recw.ondigitalocean.app
Branch: main

Shared Database

All environments share a single database for:
  • Session storage
  • Conversation history
  • Observability data

Context Management

Context is crucial for LoopOS AI agents. It includes:
  • Business context (loopos_core_context): Description of the business/platform
  • Item context: Current item data and metadata
  • Session context: Conversation history and state
  • Agent-specific context: Service-specific parameters
Context flows through the agent system:
Request → Service Context → Agent Context → Tools → Response
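
A hypothetical context model for the submission flow might look like the following; loopos_core_context is named above, while the other field names and shapes are assumptions.

from pydantic import BaseModel, Field


class SubmissionContext(BaseModel):
    # Business context: description of the business/platform.
    loopos_core_context: str = ""
    # Item context: current item data and metadata.
    item: dict = Field(default_factory=dict)
    # Session context: identifier used to correlate conversation history and state.
    session_id: str | None = None

The populated model is then handed to the agent run (for example via the Agents SDK's context argument) so tools can read the same state.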

Security

Input Validation

All inputs validated using Pydantic models with type checking.

Rate Limiting

Per-endpoint rate limits prevent abuse.

Size Limits

Request size limits prevent DoS attacks.

Timeout Protection

Request timeouts prevent resource exhaustion.
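
As an illustration of the per-endpoint limits, slowapi lets each route declare its own rate; the path and rate below are assumptions, not the production values.

from fastapi import FastAPI, Request
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address

app = FastAPI()
limiter = Limiter(key_func=get_remote_address)
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)


@app.post("/submission/chat")
@limiter.limit("10/minute")  # illustrative rate
async def chat(request: Request) -> dict:
    # slowapi needs the Request parameter to identify the caller for rate limiting.
    return {"status": "ok"}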

Scalability

The architecture supports horizontal scaling:
  • Stateless services: Services don’t maintain in-memory state
  • Session storage: Sessions stored in shared database
  • Async processing: FastAPI async support for concurrent requests
  • Tool isolation: Tools are isolated and can be optimized independently

Future Architecture Considerations

The architecture may evolve to support:
  • Separate repos for complex systems (price competitors, submission realtime)
  • Queue-based processing for long-running tasks
  • Vector storage for semantic search and RAG
  • Enhanced caching with Redis