
Conceptual Guide

This page provides an overview of how our system operates, the roles of each actor, and the process required for setup and ongoing use.

Actors

Workload Provider: The organization that owns and manages the privacy-preserving compute workload. The workload provider sets up the system, uploads the executable logic, and handles operations such as resource management and credit maintenance.

End User: The individual interacting with a deployed computation by sending encrypted queries. They use a provider-issued access token to securely communicate with the system.

LatticaAI Backend: Our cloud-based core system that securely manages provider data, compute artifacts, access tokens, and financial transactions. It handles all encrypted queries and responses.

Worker: A hardware accelerator that runs encrypted computations. We currently use GPUs, but our hardware-agnostic architecture, powered by HEAL, our integration layer, enables support for any FHE-compatible acceleration hardware.


System Workflow and Stages

LatticaAI’s platform is organized into five key stages, combining one-time setup and ongoing operations across different roles. Each stage defines a specific set of responsibilities, handled through the Lattica Web Console (or Python SDK) or through the Query Client, depending on the actor.

Below is a breakdown of each stage, detailing the responsibilities, timing, and actors involved.

Diagram of system workflow and stages
1. Workload Provider Workspace Preparation (One-Time Setup)

Prepares your environment to support encrypted computation processing.

  • Account Creation: Register on the LatticaAI platform.

  • Consultation and Confirmation: Contact us to verify your workload's compatibility with homomorphic encrypted processing.

  • Computation Management Tooling: Use the Web Console or install the Management Client to manage computation onboarding and operations.

  • Computation Submission: Submit your computation via the chosen interface. LatticaAI prepares it for secure execution and notifies you when it’s ready.
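The submission step can be pictured with a short sketch. Everything below is a hypothetical stand-in for the Management Client's Python SDK: the class, method, and field names are illustrative assumptions, not the actual Lattica API.

```python
from dataclasses import dataclass

@dataclass
class Submission:
    computation_id: str
    status: str  # "preparing" until the backend notifies you it is "ready"

class ManagementClient:
    """Stub standing in for the Management Client's Python SDK (hypothetical)."""

    def __init__(self, api_token: str):
        self.api_token = api_token
        self._counter = 0

    def submit_computation(self, artifact_path: str) -> Submission:
        # In the real flow, LatticaAI prepares the artifact for secure
        # execution and notifies the provider when it's ready.
        self._counter += 1
        return Submission(computation_id=f"comp-{self._counter}", status="preparing")

client = ManagementClient(api_token="provider-token")         # hypothetical token
submission = client.submit_computation("model_artifact.bin")  # hypothetical path
print(submission.computation_id, submission.status)
```

The point of the sketch is the shape of the flow: one authenticated client, one submission call, and an asynchronous "preparing" state until the platform confirms readiness.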

2. Interaction Setup (As Needed)

Manages access and financial readiness for secure computation.

  • Access Token Generation: Create tokens to control who can access your deployed computation.

  • Credit Management: Maintain credit balance to keep worker nodes active. If credits run out, workers will automatically shut down.
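The two responsibilities above can be sketched as a minimal model. This is an illustration of the concept only; the class, the token format, and the method names are assumptions, not the Lattica API.

```python
import secrets

class ProviderAccount:
    """Toy model (not the real SDK) of stage-2 provider state."""

    def __init__(self, credits: int):
        self.credits = credits
        self.tokens = set()

    def generate_access_token(self) -> str:
        # Tokens gate which end users may query the deployed computation.
        token = secrets.token_urlsafe(16)
        self.tokens.add(token)
        return token

    def top_up(self, amount: int) -> None:
        # Keeping the balance positive is what keeps worker nodes active.
        self.credits += amount

account = ProviderAccount(credits=100)
token = account.generate_access_token()
account.top_up(50)
print(len(account.tokens), account.credits)  # 1 150
```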

3. End-User Workspace Setup (One-Time Setup)

Sets up the environment for secure, encrypted communication with your deployed computation.

  • Query Client Installation: Deploy the Lattica Query Client in the end user's environment.

  • Evaluation Key Generation: Generate the evaluation keys used for encrypted communication with the deployed computation.

4. Worker Lifecycle Management (Ongoing)

Optimizes compute usage and cost through manual control of worker activity.

  • Worker Activation/Deactivation: Start workers when needed; stop them when idle to control costs.
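The lifecycle rules described here and in stage 2 (workers run while credits last, can be stopped manually, and shut down automatically at zero) can be sketched as a small model. The class and the per-tick cost are illustrative assumptions, not the platform's actual accounting.

```python
class WorkerPool:
    """Toy model of worker lifecycle under a credit balance (not the real SDK)."""

    CREDIT_COST_PER_TICK = 1  # assumed rate: 1 credit per worker per tick

    def __init__(self, credits: int):
        self.credits = credits
        self.active_workers = 0

    def start_worker(self) -> bool:
        if self.credits <= 0:
            return False  # cannot activate workers without credits
        self.active_workers += 1
        return True

    def stop_worker(self) -> None:
        # Manual deactivation: stop idle workers to control costs.
        self.active_workers = max(0, self.active_workers - 1)

    def tick(self) -> None:
        # Charge for active worker time; at zero, everything shuts down.
        self.credits -= self.active_workers * self.CREDIT_COST_PER_TICK
        if self.credits <= 0:
            self.credits = 0
            self.active_workers = 0  # automatic shutdown

pool = WorkerPool(credits=3)
pool.start_worker()
pool.start_worker()
pool.tick()  # 2 workers consume 2 credits -> 1 left, workers keep running
pool.tick()  # balance would go negative -> clamped to 0, workers stop
print(pool.credits, pool.active_workers)  # 0 0
```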

5. Secure Query Processing (Ongoing)

Executes encrypted queries from end users to your deployed computation.

  • Query Submission: The end user sends encrypted input for secure execution.

  • Encrypted Response: The worker processes the query and returns a fully encrypted result.
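To make the round trip concrete, here is a toy additively homomorphic scheme: a simple additive mask standing in for the real FHE cryptography. It is not the scheme the platform runs; it only shows the shape of the flow, where the worker computes directly on ciphertext and never holds the key or the plaintext.

```python
import secrets

MODULUS = 2**32  # toy parameter; real FHE schemes use far richer structure

def encrypt(plaintext: int, key: int) -> int:
    # Additive one-time mask: homomorphic for addition, illustration only.
    return (plaintext + key) % MODULUS

def decrypt(ciphertext: int, key: int) -> int:
    return (ciphertext - key) % MODULUS

def worker_compute(ciphertext: int) -> int:
    # The worker adds 7 directly on the ciphertext; it never sees the
    # plaintext or the key.
    return (ciphertext + 7) % MODULUS

key = secrets.randbelow(MODULUS)   # stays on the end user's machine
query = worker_compute(encrypt(35, key))  # encrypted query in, encrypted response out
print(decrypt(query, key))         # 42
```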

🔧 Note: Most provider-side activities (Stages 1, 2, and 4) can be performed either via the Management Client's Python SDK (for integration into your systems) or through our Web Console (for ease of use). End-user operations (Stages 3 and 5) are handled exclusively through the Query Client.


Summary of Responsibilities

| Actor | Responsibilities |
| --- | --- |
| Workload Provider | Initial account setup, workload configuration, token and credit management, worker administration, and Lattica Query Client deployment in the end user's environment |
| End User | Evaluation key generation and ongoing query submission |
| LatticaAI Backend | Manages accounts, execution resources, access tokens, and encrypted data handling throughout the process |
| Worker | Executes homomorphic computations in the cloud as directed by the provider, based on available credits |


Pricing Model

Our platform uses a credit system.

Workload providers are charged for active worker time and must maintain enough credits to keep the service running.

Workers remain active while credits are available, but the provider can also stop them manually. When credits run out, all active workers automatically shut down.

This approach optimizes resource usage and helps providers control their costs effectively.
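As a back-of-the-envelope example of the credit model, suppose (an illustrative rate, not a published price) that one active worker consumes 10 credits per hour:

```python
CREDITS_PER_WORKER_HOUR = 10  # assumed rate for illustration only

def hours_remaining(credits: int, active_workers: int) -> float:
    """Hours until automatic shutdown at the current activity level."""
    if active_workers == 0:
        return float("inf")  # stopped workers consume nothing
    return credits / (active_workers * CREDITS_PER_WORKER_HOUR)

print(hours_remaining(300, 2))  # 15.0 hours before workers shut down
```

Under these assumed numbers, a 300-credit balance keeps two workers running for 15 hours, and stopping workers when idle stretches the same balance indefinitely.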


Key Points

Cost Control with Credits: Workload providers must maintain enough credits to activate and keep workers running. If credits are depleted, active workers will automatically stop.

One-Time vs. Ongoing Tasks: Workspace Preparation and End-User Workspace Setup are one-time tasks, while Worker Lifecycle Management and Secure Query Processing are ongoing, based on resource needs.

Encryption Throughout: All interactions with the workloads are encrypted, ensuring end-to-end data privacy.
