
Worker Management

This workflow provides a high-level overview of managing workers: starting them, monitoring their performance, and stopping them.

Fundamentals

  1. Worker-Model Relationship

    • Each worker serves a single model and is created and managed by the AI Provider.

    • One Worker, One Model: Workers cannot process multiple models, but a single model can be deployed on multiple workers simultaneously.

  2. Credit-Based Payment

    • AI Providers pay for workers’ runtime using credits purchased in advance.

    • The worker continuously deducts credits while in operation.

  3. Requirements to Start a Worker

    • The model must be active.

    • The provider’s account must have sufficient credits to sustain the worker.

  4. Query Processing and Worker Runtime

    When an end user submits a query using a valid access token, the request is routed to a worker running the AI model associated with that token.

    • Workers execute encrypted queries.

    • AI Providers are billed based on the runtime of active workers, not the number of queries processed.

    • Performance Considerations: High query volumes on a single worker may increase response times; start additional workers to maintain optimal performance.

  5. Scaling Workers

    • Providers can start additional workers for the same model if performance drops due to high query volume.

    • There is no limit to the number of workers that can run for a model simultaneously.

  6. Monitoring Worker Performance

    • Providers can monitor worker performance by viewing:

      • Average Query Time: The average processing time for a query on the model.

      • Current Query Time: The real-time processing time for queries on the model.

  7. Stopping Workers

    • Workers can be stopped manually at any time.

    • If credits are depleted, workers are automatically stopped.
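The credit and lifecycle rules above (a worker starts only with an active model and available credits, deducts credits continuously while running, and stops automatically on depletion) can be sketched as a small simulation. All names, types, and the billing rate below are illustrative assumptions for this sketch; they are not part of the LatticaAI client API.

```python
from dataclasses import dataclass

@dataclass
class Account:
    credits: float  # credits purchased in advance by the AI Provider

@dataclass
class Worker:
    model_id: str
    running: bool = False

CREDITS_PER_MINUTE = 1.0  # hypothetical billing rate, for illustration only

def start_worker(account: Account, worker: Worker, model_active: bool) -> bool:
    """A worker starts only if the model is active and credits are available."""
    if model_active and account.credits > 0:
        worker.running = True
    return worker.running

def tick(account: Account, worker: Worker, minutes: float) -> None:
    """Deduct credits for runtime; stop the worker automatically on depletion."""
    if not worker.running:
        return
    account.credits -= minutes * CREDITS_PER_MINUTE
    if account.credits <= 0:
        account.credits = 0
        worker.running = False  # automatic stop when credits run out

account = Account(credits=5.0)
worker = Worker(model_id="sentiment-v1")
start_worker(account, worker, model_active=True)
tick(account, worker, minutes=3)   # 2 credits remain, worker keeps running
tick(account, worker, minutes=4)   # credits depleted, worker auto-stops
print(account.credits, worker.running)  # prints: 0 False
```

Note that billing here tracks runtime only, mirroring the rule that providers pay per active worker minute, not per query processed.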


Worker Management Workflow

  1. Start Worker

    Before starting a worker, verify that your account has sufficient credits and that your model is active in the system.

  2. Monitor Performance

    Regularly check query performance metrics (average and current query time). Add workers if needed to maintain optimal performance.

  3. Stop Worker

    Stop workers when they are no longer needed or when credit consumption needs to be managed.
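The monitoring step above boils down to comparing the two metrics the platform exposes, average and current query time, and deciding whether to scale out. A minimal sketch of such a rule follows; the function name and the slowdown threshold are assumptions for illustration, not part of the LatticaAI API.

```python
# Hypothetical scale-out rule: treat the worker as overloaded when the
# current query time exceeds the model's average by a chosen factor.
SLOWDOWN_FACTOR = 1.5  # assumed threshold: 50% slower than average

def should_add_worker(avg_query_time_s: float, current_query_time_s: float) -> bool:
    """Return True when current processing time suggests the worker is overloaded."""
    return current_query_time_s > SLOWDOWN_FACTOR * avg_query_time_s

print(should_add_worker(2.0, 2.4))  # within normal range -> False
print(should_add_worker(2.0, 3.5))  # 75% slower than average -> True
```

Since there is no limit on the number of workers per model, a rule like this can be applied repeatedly until current query times return to the normal range.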


Future Functionality

  1. Performance Notifications

    AI Providers will receive alerts when worker performance drops below a defined threshold.

  2. Worker Usage Reports

    A detailed report of worker usage will help providers analyze and optimize their resource utilization.

  3. Scheduler for Worker Management

    AI Providers will be able to define schedules to start and stop workers automatically, eliminating the need for manual operations. This functionality will help align worker usage with predictable query loads and save credits during low-demand periods.


Quick Links to How-To Pages

For detailed steps, refer to the following guides:


  • How-To: Start Worker
  • How-To: Stop Worker
  • How-To: Monitor Worker Performance

Last updated 2 months ago