Overview
Follow these best practices to get the most out of LoopOS AI services while maintaining performance, controlling costs, and ensuring reliability.
Request Optimization
- Batch operations: When possible, batch multiple operations into a single request to reduce latency.
- Cache responses: Cache responses for identical requests to reduce API calls and costs (see the sketch below).
- Use appropriate services: Choose the right service for your use case; don't use conversational services for simple tasks.
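A minimal caching sketch for the point above. The in-memory cache and the call_service helper used throughout this guide are illustrative assumptions, not part of the LoopOS API:
import hashlib
import json

# Simple in-memory cache; key on the endpoint plus a stable hash of the payload.
# call_service is the hypothetical request helper used elsewhere in this guide.
_cache = {}

def cached_call(endpoint: str, payload: dict) -> dict:
    key = endpoint + ":" + hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    if key not in _cache:
        _cache[key] = call_service(endpoint, payload)
    return _cache[key]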
Timeout Management
# Set appropriate timeouts
timeout = 120 # seconds for most services
timeout = 300 # seconds for long-running operations (e.g., submission)
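A sketch of applying these timeouts with the requests library; the wrapper name and the endpoint URL you pass in are assumptions, not documented LoopOS routes:
import requests

# Pass the timeout to every HTTP call; the values above are the guideline figures
def call_with_timeout(url: str, payload: dict, timeout: int = 120) -> dict:
    response = requests.post(url, json=payload, timeout=timeout)
    response.raise_for_status()
    return response.json()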
Async Processing
For long-running operations, use async callbacks:
payload = {
    "callback_url": "https://your-app.com/webhooks/ai-response",
    # ... other fields
}
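One way to receive the callback is a small webhook endpoint. The Flask route below and the shape of the delivered payload are assumptions for illustration:
from flask import Flask, request

app = Flask(__name__)

# Hypothetical receiver for the callback_url above; the payload shape is assumed
@app.route("/webhooks/ai-response", methods=["POST"])
def ai_response_webhook():
    result = request.get_json()
    handle_result(result)  # hypothetical application hook for the async result
    return "", 204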
Cost Management
Token Optimization
- Provide focused context: Only include relevant context to reduce token usage.
- Reuse sessions: For conversational services, reuse session_id to maintain context efficiently.
- Choose appropriate models: Services use optimized models; trust the service's model selection.
Cost Monitoring
Track costs per service and operation:
# Track costs in your application
costs = {
    "decision": 0.00003,       # per request
    "c2c_descriptor": 0.0005,  # per request
    "submission": 0.05,        # per conversation
}
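The per-request figures above can feed a simple accumulator. The sketch below is illustrative and only reuses the rates listed in this guide:
from collections import defaultdict

# Accumulate estimated spend per service using the rates above
spend = defaultdict(float)

def record_cost(service: str) -> None:
    spend[service] += costs.get(service, 0.0)

record_cost("decision")
print(dict(spend))  # {'decision': 3e-05}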
Error Handling
Retry Strategy
Implement exponential backoff for retries:
import time

def retry_with_backoff(func, max_retries=3):
    for attempt in range(max_retries):
        try:
            return func()
        except Exception:
            # Re-raise once the final attempt has failed
            if attempt == max_retries - 1:
                raise
            # Exponential backoff: wait 1s, 2s, 4s, ... between attempts
            wait_time = 2 ** attempt
            time.sleep(wait_time)
Error Classification
Handle different error types appropriately:
- 422 Validation Error: Don’t retry - fix the request
- 429 Rate Limit: Retry with backoff
- 500 Server Error: Retry with backoff
- 413 Payload Too Large: Don’t retry - reduce payload size
- Timeout: Retry with backoff
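A sketch combining the retry strategy with the classification above, assuming the underlying call uses requests and raises via response.raise_for_status(); adapt it to whatever your HTTP client raises:
import time
import requests

# Status codes worth retrying vs. ones that indicate a bad request
RETRYABLE_STATUS = {429, 500}
NON_RETRYABLE_STATUS = {413, 422}

def call_with_classified_retry(func, max_retries=3):
    for attempt in range(max_retries):
        try:
            return func()
        except requests.exceptions.Timeout:
            if attempt == max_retries - 1:
                raise  # give up after the last attempt
        except requests.exceptions.HTTPError as e:
            status = e.response.status_code
            # 413 and 422 mean the request itself is wrong; retrying won't help
            if status in NON_RETRYABLE_STATUS or status not in RETRYABLE_STATUS:
                raise
            if attempt == max_retries - 1:
                raise
        time.sleep(2 ** attempt)  # exponential backoff before the next attempt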
Context Management
Business Context
Always provide business context:
payload = {
    "loopos_core_context": "Portuguese circular economy marketplace, "
                           "focus on electronics and furniture, "
                           "active marketplace with pricing data"
}
Item Context
Provide relevant item context:
payload = {
    "item_context": "iPhone 12, 128GB, blue color, minor scratches on screen, "
                    "fully functional, original box included"
}
Session Context
Maintain session context for conversations:
# Reuse session_id across requests
session_id = "user-123-session"
# ... use in all conversational requests
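A minimal sketch of reusing one session_id across conversational turns; the endpoint path and field names are assumptions:
# Reuse one session_id for every turn so the service keeps conversation context
session_id = "user-123-session"

def send_turn(message: str) -> dict:
    payload = {
        "session_id": session_id,
        "message": message,
        # ... other fields
    }
    return call_service("/submission", payload)  # hypothetical endpoint path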
Monitoring
Logging
Log all requests and responses:
import logging

logger = logging.getLogger(__name__)

def call_service_with_logging(endpoint, payload):
    logger.info(f"Calling {endpoint} with payload: {payload}")
    try:
        result = call_service(endpoint, payload)
        logger.info(f"Success: {result}")
        return result
    except Exception as e:
        logger.error(f"Error: {e}", exc_info=True)
        raise
Metrics
Track key metrics:
- Request count per service
- Success/failure rates
- Average latency
- Token usage and costs
- Error rates by type
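One lightweight way to collect these metrics in-process; the counters and naming are illustrative, and a production setup would typically export to a metrics backend instead:
import time
from collections import Counter

# Illustrative in-process metrics around the hypothetical call_service helper
request_count = Counter()
error_count = Counter()
latencies = {}

def instrumented_call(endpoint: str, payload: dict) -> dict:
    start = time.monotonic()
    request_count[endpoint] += 1
    try:
        return call_service(endpoint, payload)
    except Exception as e:
        error_count[type(e).__name__] += 1
        raise
    finally:
        latencies.setdefault(endpoint, []).append(time.monotonic() - start)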
Security
Data Privacy
- Don't send sensitive data: Avoid sending PII or other sensitive information unless necessary.
- Validate inputs: Always validate and sanitize inputs before sending them to the API (see the sketch below).
- Use HTTPS: Always use HTTPS for all API requests.
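A small sketch of the validation point; the field whitelist and length limit are arbitrary examples, not LoopOS limits:
# Example-only whitelist and limit; adjust to your own payloads
ALLOWED_FIELDS = {"item_context", "loopos_core_context", "session_id"}
MAX_FIELD_LENGTH = 4000

def sanitize_payload(payload: dict) -> dict:
    # Drop unexpected fields and trim overly long strings before sending
    clean = {}
    for key, value in payload.items():
        if key not in ALLOWED_FIELDS:
            continue
        if isinstance(value, str):
            value = value.strip()[:MAX_FIELD_LENGTH]
        clean[key] = value
    return clean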
Service-Specific Best Practices
Submission Service
- Maintain sessions: Reuse session_id throughout the conversation
- Handle images: Support image uploads for better product identification
- Error recovery: Implement error recovery for failed tool calls
- Progress tracking: Show progress to users during protocol questions
Decision Service
- Clear prompts: Use clear, specific prompts
- Limit options: Keep options list manageable (2-10 options)
- Provide context: Include relevant item and business context
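An illustrative Decision request following the points above; the field names ("prompt", "options") and the endpoint path are assumptions, not documented parameters:
# Illustrative Decision payload; field names and endpoint are assumptions
payload = {
    "prompt": "Which condition grade best matches this item?",
    "options": ["Like new", "Good", "Fair", "Poor"],  # keep the list to 2-10 options
    "item_context": "iPhone 12, 128GB, minor scratches on screen",
    "loopos_core_context": "Portuguese circular economy marketplace",
}
result = call_service("/decision", payload)  # hypothetical endpoint path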
C2C Descriptor
- Quality images: Use high-quality images for better results
- Complete context: Provide category, product, brand, and description
- Review output: Always review generated content before publishing
Value Estimation
- Accurate item data: Provide complete and accurate item information
- Market context: Include market context for better estimates
- Review estimates: Review estimates before using in pricing
Testing
Test Scenarios
Test your integration with:
- Happy path: Normal successful requests
- Error cases: Invalid inputs, server errors
- Edge cases: Empty inputs, very long inputs, special characters
- Performance: Concurrent requests, high load
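As an error-case sketch, the test below reuses the retry_with_backoff helper from the Retry Strategy section and simulates a transient failure; the test name and the use of MagicMock are illustrative:
from unittest.mock import MagicMock

# Error-case sketch: a transient failure followed by success should be retried
# (note: the first retry sleeps for 1 second due to the backoff)
def test_retry_recovers_after_transient_error():
    flaky = MagicMock(side_effect=[RuntimeError("500 Server Error"), {"decision": "Good"}])
    result = retry_with_backoff(flaky, max_retries=3)
    assert result == {"decision": "Good"}
    assert flaky.call_count == 2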
Mock Responses
Create mock responses for testing:
def mock_decision_response():
    return {
        "decision": "Good",
        "reasoning": "Based on the description..."
    }
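The mock can then stand in for real calls during tests. The module path "myapp.client" and the assumption that assess_item_condition (documented in the next section) delegates to call_service are hypothetical:
from unittest.mock import patch

# Illustrative happy-path test using the mock above
def test_assess_item_condition_happy_path():
    with patch("myapp.client.call_service", return_value=mock_decision_response()):
        result = assess_item_condition("iPhone 12, 128GB, minor scratches")
    assert result["decision"] == "Good"
    assert "reasoning" in result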
Documentation
Code Documentation
Document your integration:
def assess_item_condition(item_description: str) -> dict:
    """
    Assess item condition using the LoopOS AI Decision service.

    Args:
        item_description: Description of the item

    Returns:
        dict with 'decision' and 'reasoning' keys

    Raises:
        ValueError: If the request fails
    """
    # ... implementation