Conceptual Guide

This page provides an overview of how our system operates, the roles of each actor, and the process required for setup and ongoing use.


Actors

⫸ AI Provider: The organization that owns and manages the AI model. The AI provider sets up the system, uploads the model, and handles operations such as resource management and credit maintenance.

⫸ End User: The individual interacting with a specific AI model by sending encrypted queries. They use a provider-issued access token to securely communicate with the model.

⫸ LatticaAI Backend: Our cloud-based core system securely manages providers' data, AI models, access tokens, and financial transactions. It handles all encrypted queries and responses.

⫸ Worker: A hardware accelerator that runs AI model computations. We currently use GPUs, but our hardware-agnostic architecture, powered by HEAL, our integration layer, enables support for any FHE-compatible acceleration hardware.


System Workflow and Stages

LatticaAI’s platform is organized into five key stages, combining one-time setup and ongoing operations across different roles. Each stage defines a specific set of responsibilities handled via either the Lattica Web Console (or Python SDK) or the Query Client, depending on the actor.

[Diagram of system workflow and stages]

Below is a breakdown of each stage, detailing the responsibilities, timing, and actors involved.

Stage 1: AI Provider Workspace Preparation (One-Time Setup)

Prepares your environment to support encrypted model processing.

  • Account Creation: Register on the LatticaAI platform.
  • Consultation and Confirmation: Contact us to verify your model's compatibility with homomorphic encrypted processing.
  • Model Management Tooling: Use the Web Console or install the Management Client to manage model onboarding and operations.
  • Model Submission: Submit your AI model via the chosen interface. LatticaAI prepares it for secure processing and notifies you when ready.

Typically a one-time setup. Only revisit if your model or infrastructure undergoes major changes.
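
If you prefer scripting this stage over the Web Console, the Management Client's Python SDK can drive it. The snippet below is a minimal sketch only: the package name lattica_management, the ManagementClient class, and the deploy_model call are assumed names for illustration, not the documented API; see the Model Lifecycle how-to guides for the actual commands.

```python
# Minimal sketch of provider-side setup. All names below (lattica_management,
# ManagementClient, deploy_model) are hypothetical placeholders, not the real SDK.
from lattica_management import ManagementClient  # hypothetical import

# Authenticate with credentials obtained when registering your provider account.
client = ManagementClient(api_key="YOUR_PROVIDER_API_KEY")

# Submit the model artifact; LatticaAI prepares it for secure FHE processing
# and notifies you when it is ready.
model = client.deploy_model(
    model_path="models/my_model.onnx",  # example artifact path
    name="my-first-model",
)
print(model.id, model.status)
```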

Stage 2: Interaction Setup (As Needed)

Manages access and financial readiness for secure model operation.

  • Access Token Generation: Create tokens to control who can access your model.
  • Credit Management: Maintain a credit balance to keep worker nodes active. If credits run out, workers will automatically shut down.
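
As with Stage 1, these steps can be scripted. The sketch below uses hypothetical method names (create_access_token, get_credit_balance, add_credits); refer to the Access Control and Account and Finance how-to guides for the real interface.

```python
# Hypothetical sketch: method names are placeholders, not the documented SDK.
from lattica_management import ManagementClient  # hypothetical import

client = ManagementClient(api_key="YOUR_PROVIDER_API_KEY")

# Issue a token that end users will present when querying the model.
token = client.create_access_token(model_id="MODEL_ID")  # hypothetical call
print("Share with your end user:", token.value)

# Keep enough credits on the account so workers are not shut down automatically.
if client.get_credit_balance() < 100:    # threshold chosen for illustration
    client.add_credits(amount=500)       # hypothetical call
```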

Stage 3: End-User Workspace Setup (One-Time Setup)

Sets up the environment for secure, encrypted communication with your model.

  • Query Client Installation: Installs the Query Client on the user device.
  • Evaluation Key Generation: Generates an evaluation key (EVK) for encrypted data transmission with the AI model.
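
On the end-user side, this stage is handled with the Query Client. The sketch below is illustrative only: the package name lattica_query, the QueryClient class, and generate_evaluation_key are assumed names; the Client Installation and Secure Query Processing how-to guides describe the actual steps.

```python
# Hypothetical sketch of end-user setup; names are placeholders.
# Installation (package name assumed):  pip install lattica-query
from lattica_query import QueryClient  # hypothetical import

# Authenticate with the access token issued by the AI provider.
qc = QueryClient(access_token="TOKEN_FROM_PROVIDER")

# Generate and upload the evaluation key (EVK) once; it lets the worker
# compute on encrypted data without ever seeing the plaintext.
qc.generate_evaluation_key()  # hypothetical call
```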

Stage 4: Worker Lifecycle Management (Ongoing)

Optimizes compute usage and cost through manual control of worker activity.

  • Worker Activation/Deactivation: Start workers when needed; stop them when idle to control costs.
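
Worker control is also scriptable from the provider side. start_worker and stop_worker below are assumed names for illustration; the Resource Management how-to guides document the real calls.

```python
# Hypothetical sketch of manual worker control; names are placeholders.
from lattica_management import ManagementClient  # hypothetical import

client = ManagementClient(api_key="YOUR_PROVIDER_API_KEY")

worker = client.start_worker(model_id="MODEL_ID")  # starts consuming credits
# ... serve end-user traffic while the worker is up ...
client.stop_worker(worker.id)  # stop when idle so credits are not spent needlessly
```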

Stage 5: Secure Query Processing (Ongoing)

Executes encrypted queries from end users to your deployed model.

  • Query Submission: The end user sends encrypted input to the model.
  • Encrypted Response: The worker processes the query and returns a fully encrypted result.
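
The end user's encrypt-execute-decrypt round trip might look like the sketch below. Method names (encrypt, execute, decrypt) are assumptions mirroring the Secure Query Processing how-to guides, which also cover a combined one-step call.

```python
# Hypothetical sketch of a single encrypted query; names are placeholders.
from lattica_query import QueryClient  # hypothetical import

qc = QueryClient(access_token="TOKEN_FROM_PROVIDER")

ciphertext = qc.encrypt("raw input data")        # encrypted on the user's device
encrypted_result = qc.execute(ciphertext)        # worker computes on ciphertext only
plaintext_result = qc.decrypt(encrypted_result)  # decrypted locally by the end user
print(plaintext_result)
```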

🔧 Note: Most provider-side activities (Stages 1, 2, and 4) can be performed either via the Management Client's Python SDK (for integration into your systems) or through our Web Console (for ease of use). End-user operations (Stages 3 and 5) are handled exclusively through the Query Client.


Summary of Responsibilities

⫸ AI Provider: Initial account setup, model configuration, token and credit management, worker administration, and Lattica Query Client deployment in the end user's environment.

⫸ End User: Evaluation key generation and ongoing query submission.

⫸ LatticaAI Backend: Manages accounts, models, tokens, and encrypted data handling throughout the process.

⫸ Worker: Executes AI model computations in the cloud as directed by the AI provider, based on available credits.


Pricing Model

Our platform uses a credit system.

AI providers are charged for active worker time and must maintain enough credits to keep the service running.

Workers remain active while credits are available, but AI providers can also stop them manually. When credits run out, all active workers automatically shut down.

This approach optimizes resource usage and helps providers control their costs effectively.
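
For a rough sense of the mechanics, the arithmetic below uses made-up numbers; the actual per-hour credit rate is defined by your LatticaAI account, not by this example.

```python
# Purely illustrative: the rate and balance are invented numbers.
credit_balance = 500           # current credits (example value)
credits_per_worker_hour = 20   # assumed rate, not real pricing
active_workers = 2

hours_until_shutdown = credit_balance / (credits_per_worker_hour * active_workers)
print(f"Workers shut down automatically in about {hours_until_shutdown:.1f} hours "
      "unless more credits are added.")
```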


Key Points

⫸ Cost Control with Credits: AI Providers must maintain enough credits to activate and keep workers running. If credits are depleted, active workers will automatically stop.

⫸ One-Time vs. Ongoing Tasks: AI Provider Workspace Preparation and End-User Workspace Setup are one-time tasks, while Worker Lifecycle Management and Secure Query Processing are ongoing, based on resource needs.

⫸ Encryption Throughout: All interactions with the AI model are encrypted, ensuring end-to-end data privacy.
