
Conceptual Guide

This page provides an overview of how our system operates, the roles of each actor, and the process required for setup and ongoing use.

Actors

AI Provider: The organization that owns and manages the AI model. The provider sets up the system, uploads the model, and handles ongoing operations such as resource management and credit maintenance.

End User: The individual interacting with a specific AI model by sending encrypted queries. They use a provider-issued access token to securely communicate with the model.

LatticaAI Backend: Our cloud-based core system. It securely manages provider data, AI models, access tokens, and financial transactions, and handles all encrypted queries and responses.

Worker: A hardware accelerator that runs AI model computations. We currently use GPUs, but our hardware-agnostic architecture, powered by HEAL (our integration layer), supports any FHE-compatible acceleration hardware.


System Workflow and Stages

LatticaAI’s platform is organized into five key stages, combining one-time setup and ongoing operations across different roles. Each stage defines a specific set of responsibilities, handled via either the Lattica Web Console (or Python SDK) or the Query Client, depending on the actor.

Below is a breakdown of each stage, detailing the responsibilities, timing, and actors involved.

[Diagram: System workflow and stages]
1. AI Provider Workspace Preparation (One-Time Setup)

Prepares your environment to support encrypted model processing.

  • Account Creation: Register on the LatticaAI platform.

  • Consultation and Confirmation: Contact us to verify your model's compatibility with homomorphic encrypted processing.

  • Model Management Tooling: Use the Web Console or install the Management Client to manage model onboarding and operations.

  • Model Submission: Submit your AI model via the chosen interface. LatticaAI prepares it for secure processing and notifies you when ready.
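The one-time setup above can be sketched as a small state machine. The class, method, and status names below are illustrative assumptions, not the actual Management Client API:

```python
class ModelOnboarding:
    """Toy state machine mirroring Stage 1 (all names are hypothetical)."""

    def __init__(self):
        self.state = None
        self.model_path = None

    def register_account(self):
        self.state = "registered"

    def confirm_compatibility(self):
        # Compatibility with encrypted processing is confirmed with LatticaAI first
        assert self.state == "registered", "register an account first"
        self.state = "confirmed"

    def submit_model(self, model_path: str):
        assert self.state == "confirmed", "compatibility must be confirmed first"
        self.model_path = model_path
        self.state = "submitted"

    def mark_ready(self):
        # In practice, LatticaAI prepares the model and notifies the provider
        assert self.state == "submitted", "no model submitted yet"
        self.state = "ready"


flow = ModelOnboarding()
flow.register_account()
flow.confirm_compatibility()
flow.submit_model("my_model.onnx")  # hypothetical model file
flow.mark_ready()
print(flow.state)  # -> ready
```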

2. Interaction Setup (As Needed)

Manages access and financial readiness for secure model operation.

  • Access Token Generation: Create tokens to control who can access your model.

  • Credit Management: Maintain a sufficient credit balance to keep worker nodes active. If credits run out, workers automatically shut down.
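As a rough illustration of the token bullet, here is a minimal sketch that mints an opaque access token and records which model it unlocks. The function and registry are hypothetical; the real platform issues tokens server-side via the Web Console or SDK:

```python
import secrets


def issue_access_token(model_id: str, registry: dict) -> str:
    """Mint an unguessable token and record which model it grants access to.

    Illustrative only: token issuance and validation actually happen
    in the LatticaAI Backend, not in provider code.
    """
    token = secrets.token_urlsafe(32)  # 32 bytes of randomness, URL-safe
    registry[token] = model_id
    return token


registry = {}
token = issue_access_token("demo-model", registry)
print(registry[token])  # -> demo-model
```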

3. End-User Workspace Setup (One-Time Setup)

Sets up the environment for secure, encrypted communication with your model.

4. Worker Lifecycle Management (Ongoing)

Optimizes compute usage and cost through manual control of worker activity.

  • Worker Activation/Deactivation: Start workers when needed; stop them when idle to control costs.
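A minimal sketch of manual activation and deactivation, assuming a hypothetical in-memory pool (the real SDK's worker API is not shown here):

```python
class WorkerPool:
    """Toy model of manual worker lifecycle control (hypothetical API)."""

    def __init__(self):
        self.active = set()

    def start(self, worker_id: str):
        self.active.add(worker_id)

    def stop(self, worker_id: str):
        self.active.discard(worker_id)

    def stop_all(self):
        # Mirrors the automatic shutdown that occurs when credits run out
        self.active.clear()


pool = WorkerPool()
pool.start("gpu-0")
pool.start("gpu-1")
pool.stop("gpu-0")  # stop an idle worker to control costs
print(sorted(pool.active))  # -> ['gpu-1']
```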

5. Secure Query Processing (Ongoing)

Executes encrypted queries from end users to your deployed model.

  • Query Submission: The end user sends encrypted input to the model.

  • Encrypted Response: The worker processes the query and returns a fully encrypted result.
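To illustrate the core idea of computing on encrypted data, here is a toy additively homomorphic cipher: the worker sums two ciphertexts without ever seeing the plaintexts, and only the end user can decrypt the result. This is a teaching sketch, not LatticaAI's actual FHE scheme:

```python
import secrets

N = 2**32  # modulus for the toy scheme


def keygen() -> int:
    return secrets.randbelow(N)


def encrypt(m: int, k: int) -> int:
    return (m + k) % N


def decrypt(c: int, k: int) -> int:
    return (c - k) % N


# End user encrypts two inputs with fresh keys
k1, k2 = keygen(), keygen()
c1, c2 = encrypt(10, k1), encrypt(32, k2)

# Worker adds the ciphertexts without ever seeing 10 or 32
c_sum = (c1 + c2) % N

# End user decrypts the result with the combined key
result = decrypt(c_sum, (k1 + k2) % N)
print(result)  # -> 42
```

Real FHE schemes support far richer computation (including the multiplications needed for neural networks), but the privacy property is the same: the worker operates only on ciphertexts.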

🔧 Note: Most provider-side activities (Stages 1, 2, and 4) can be performed either via the Management Client's Python SDK (for integration into your systems) or through our Web Console (for ease of use). End-user operations (Stages 3 and 5) are handled exclusively through the Query Client.


Summary of Responsibilities

AI Provider: Initial account setup, model configuration, token and credit management, worker administration, and Lattica Query Client deployment in the end user's environment

End User: Evaluation Key generation and ongoing query submission

LatticaAI Backend: Manages accounts, models, tokens, and encrypted data handling throughout the process

Worker: Executes AI model computations in the cloud, as directed by the AI provider and subject to available credits


Pricing Model

Our platform uses a credit system.

AI providers are charged for active worker time and must maintain enough credits to keep the service running.

Workers remain active while credits are available, but the AI provider can also stop them manually. When credits run out, all active workers shut down automatically.

This approach optimizes resource usage and helps providers control their costs effectively.
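The billing arithmetic can be sketched as follows. The per-worker-hour rate here is a made-up assumption for illustration, not an actual LatticaAI price:

```python
def charge(balance: float, workers: int, hours: float, rate: float) -> float:
    """Deduct charges for active worker time from the credit balance.

    rate is credits per worker-hour (hypothetical value).
    The balance never goes negative: at zero, workers shut down.
    """
    cost = workers * hours * rate
    return max(balance - cost, 0.0)


balance = 100.0
balance = charge(balance, workers=2, hours=10, rate=3.0)  # 60 credits consumed
print(balance)  # -> 40.0
workers_active = balance > 0  # workers auto-shutdown once the balance hits 0
```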


Key Points

Cost Control with Credits: AI Providers must maintain enough credits to activate and keep workers running. If credits are depleted, active workers will automatically stop.

One-Time vs. Ongoing Tasks: Workspace Preparation and End-User Workspace Setup are one-time tasks, while Worker Lifecycle Management and Query Processing are ongoing, based on resource needs.

Encryption Throughout: All interactions with the AI model are encrypted, ensuring end-to-end data privacy.
