Conceptual Guide
This page provides an overview of how our system operates, the roles of each actor, and the process required for setup and ongoing use.
⫸ AI Provider: The organization that owns and manages the AI model. The model provider sets up the system, uploads the model, and handles operations such as resource management and credit maintenance.
⫸ End User: The individual interacting with a specific AI model by sending encrypted queries. They use a provider-issued access token to securely communicate with the model.
⫸ LatticaAI Backend: Our cloud-based core system that securely manages providers' data, AI models, access tokens, and financial transactions. It handles all encrypted queries and responses.
⫸ Worker: A hardware accelerator that runs AI model computations. We currently use GPUs, but our hardware-agnostic architecture, powered by our integration layer, enables support for any FHE-compatible acceleration hardware.
LatticaAI’s platform is organized into five key stages, combining one-time setup and ongoing operations across different roles. Each stage defines a specific set of responsibilities handled via either the Lattica Web Console (or Python SDK) or the Lattica Query Client, depending on the actor.
Below is a breakdown of each stage, detailing the responsibilities, timing, and actors involved.
| Actor | Responsibilities |
| --- | --- |
| AI Provider | Initial account setup, model configuration, token and credit management, worker administration, and Lattica Query Client deployment in the end user's environment |
| End User | Evaluation key generation and ongoing query submission |
| LatticaAI Backend | Manages accounts, models, tokens, and encrypted data handling throughout the process |
| Worker | Executes AI model computations in the cloud as directed by the AI provider, based on available credits |
Our platform uses a credit system.
AI providers are charged for active worker time and must maintain enough credits to keep the service running.
Workers remain active while credits are available, but AI providers can also stop them manually. When credits run out, all active workers automatically shut down.
This approach optimizes resource usage and helps providers control their costs effectively.
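The credit rules above can be sketched in a few lines of Python. This is a toy, in-memory model for illustration only (the `Worker` and `CreditLedger` names are ours, not LatticaAI's API): workers can only start while credits remain, active worker time is charged per tick, providers may stop workers manually, and every active worker shuts down automatically the moment credits are exhausted.

```python
import dataclasses


@dataclasses.dataclass
class Worker:
    worker_id: str
    active: bool = False


class CreditLedger:
    """Toy model of the credit system described above (illustrative only)."""

    def __init__(self, credits: float) -> None:
        self.credits = credits
        self.workers: list[Worker] = []

    def start_worker(self, worker_id: str) -> Worker:
        # Workers can only be activated while credits remain.
        if self.credits <= 0:
            raise RuntimeError("insufficient credits to activate a worker")
        worker = Worker(worker_id, active=True)
        self.workers.append(worker)
        return worker

    def stop_worker(self, worker: Worker) -> None:
        # Providers may also stop workers manually to control costs.
        worker.active = False

    def tick(self, cost_per_worker: float = 1.0) -> None:
        """Charge one unit of active worker time."""
        active = [w for w in self.workers if w.active]
        self.credits = max(self.credits - cost_per_worker * len(active), 0.0)
        if self.credits <= 0:
            # Credits exhausted: all active workers shut down automatically.
            for w in active:
                w.active = False
```

For example, a ledger funded with 2 credits and one running worker survives two billing ticks; on the second tick the balance hits zero and the worker is deactivated automatically.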
⫸ Cost Control with Credits: AI Providers must maintain enough credits to activate and keep workers running. If credits are depleted, active workers will automatically stop.
⫸ One-Time vs. Ongoing Tasks: Workspace Preparation and End-User Model Connection Setup are one-time tasks, while Worker Management and Query Processing are ongoing, based on resource needs.
⫸ Encryption Throughout: All interactions with the AI model are encrypted, ensuring end-to-end data privacy.
Consultation and Confirmation: Work with LatticaAI to verify your model's compatibility with homomorphically encrypted processing.
Model Management Tooling: Use the Web Console or the Management Client to manage model onboarding and operations.
Model Submission: Submit your model via the chosen interface. LatticaAI prepares it for secure processing and notifies you when it is ready.
Access Token Creation: Create tokens to control who can access your model.
Credit Management: Maintain a credit balance to keep worker nodes active. If credits run out, workers will automatically shut down.
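From a provider's script, the token and credit steps above might look roughly like the following. The `ProviderClient` class and its method names are hypothetical stand-ins written for this sketch (LatticaAI's actual management API is not shown in this guide); the point is the workflow, not the real signatures.

```python
import secrets


class ProviderClient:
    """Hypothetical, in-memory stand-in for a provider-side management
    client. Method names are illustrative, not LatticaAI's real API."""

    def __init__(self) -> None:
        self.tokens: dict[str, str] = {}  # token -> model_id
        self.credits: float = 0.0

    def create_access_token(self, model_id: str) -> str:
        # Issue an opaque token that an end user presents to query the model.
        token = secrets.token_urlsafe(16)
        self.tokens[token] = model_id
        return token

    def revoke_access_token(self, token: str) -> None:
        # Revoking a token cuts off that end user's access.
        self.tokens.pop(token, None)

    def top_up_credits(self, amount: float) -> float:
        # Keeping the balance positive keeps workers eligible to run.
        self.credits += amount
        return self.credits


# Illustrative workflow: create a token for a model, then fund workers.
client = ProviderClient()
token = client.create_access_token("sentiment-model-v1")
client.top_up_credits(100.0)
```

Here `"sentiment-model-v1"` and the credit amount are made-up example values.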
Query Client Installation: Install the Lattica Query Client on the end user's device.
Evaluation Key Generation: Generate an evaluation key (EVK) for encrypted data transmission to and from the AI model.
Worker Activation/Deactivation: Start workers when needed; stop them when idle to control costs.
Query Submission: The end user sends encrypted input to the model.
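To make the "encryption throughout" idea concrete, here is a tiny, deliberately insecure Paillier sketch (small primes, no padding). Paillier is additively homomorphic, which lets the server combine encrypted inputs without ever seeing the plaintexts; only the key holder can decrypt the result. This is a standard textbook scheme used purely for illustration and says nothing about the actual encryption LatticaAI deploys.

```python
import math
import random


def keygen(p: int = 11, q: int = 13):
    """Tiny (insecure) Paillier keypair, for demonstration only."""
    n, n2 = p * q, (p * q) ** 2
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)  # valid because we use g = n + 1
    return (n, n2), (lam, mu)


def encrypt(pub, m: int) -> int:
    n, n2 = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:  # r must be coprime with n
        r = random.randrange(1, n)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2


def decrypt(pub, priv, c: int) -> int:
    n, n2 = pub
    lam, mu = priv
    return (pow(c, lam, n2) - 1) // n * mu % n


# End-to-end sketch: the user encrypts inputs, the server multiplies
# ciphertexts (which adds the underlying plaintexts) without decrypting,
# and only the key holder recovers the answer.
pub, priv = keygen()
c1, c2 = encrypt(pub, 20), encrypt(pub, 22)
c_sum = c1 * c2 % pub[1]  # homomorphic addition on ciphertexts
assert decrypt(pub, priv, c_sum) == 42
```

Real deployments use fully homomorphic schemes with large parameters and hardware acceleration; the toy above only shows why a server can compute on data it cannot read.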