Zooming Into Each Step of Demo Run with LatticaAI flow

Installation & Setup

Before you begin, ensure you have the following:

  • Python 3.10+ installed on your client machine.

  • The Lattica query package installed (Python or JavaScript, depending on your client).

Python:

pip install lattica_query

JavaScript:

npm install "@Lattica-ai/lattica-query-client"
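
To confirm the Python package is available, you can try importing it; this is just a quick sanity check and assumes the package installs the lattica_query module used throughout this tutorial.

# Quick sanity check that the client package is importable (Python).
import lattica_query
print("lattica_query imported successfully")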

Authentication & Model ID

You need an authentication JWT token to interact with our cloud infrastructure. This token validates your requests and ensures secure communication.

Each public model we run on the cloud has its own unique modelID. The specific modelID for each demo is provided in its corresponding tutorial.

  1. Request an authentication token: run the code below.

  2. Store the token securely for subsequent operations (a minimal file-based sketch follows the code snippets below).

Python:

from lattica_query.auth import get_demo_token

# Use the model ID provided in the specific demo tutorial (e.g., 'imageEnhancement', 'sentimentAnalysis')
model_id = "demoModelId"
my_token = get_demo_token(model_id)

JavaScript:

import { getDemoToken } from '@Lattica-ai/lattica-query-client';

// Use the model ID provided in the specific demo tutorial (e.g., 'imageEnhancement', 'sentimentAnalysis')
const modelId = "demoModelId";
const token = await getDemoToken(modelId);
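
One simple way to keep the token out of your source code is to store it in an environment variable or a local file with restricted permissions. The snippet below is a minimal Python sketch of the file-based approach; the file name and location are illustrative and not part of the Lattica API.

from pathlib import Path

# Illustrative only: persist the token to a local file with owner-only permissions.
token_path = Path.home() / ".lattica_demo_token"
token_path.write_text(my_token)
token_path.chmod(0o600)

# Later sessions can reload it instead of requesting a new token.
my_token = token_path.read_text().strip()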

In our web demo version, the client logic is initialized automatically in your browser; no separate install or setup is required. The web page manages your authentication and sets the appropriate Model ID behind the scenes.


Generating & Registering Keys

We supply a class that handles all the local computation and communication with the LatticaAI server. Initialize this class using the token you obtained.

Python:

from lattica_query.lattica_query_client import QueryClient

client = QueryClient(my_token)

JavaScript:

import { LatticaQueryClient } from '@Lattica-ai/lattica-query-client';

const client = new LatticaQueryClient(myToken);

Our encryption scheme relies on a secret key, which stays on your machine, and an evaluation key (EVK), which is sent to the LatticaAI cloud server.

One key pair can be reused for multiple demo sessions.

Python:

context, secret_key, client_blocks = client.generate_key()

JavaScript:

await client.init();
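
Since one key pair can be reused across demo sessions, you may want to persist the generated objects between runs. The Python sketch below uses the standard pickle module and assumes these objects are picklable; the file name is illustrative and not part of the Lattica API.

import pickle

# Illustrative only: save the key material locally so later sessions can reuse it.
with open("lattica_keys.pkl", "wb") as f:
    pickle.dump((context, secret_key, client_blocks), f)

# In a later session, reload instead of calling generate_key() again:
with open("lattica_keys.pkl", "rb") as f:
    context, secret_key, client_blocks = pickle.load(f)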

Processing the Requested Query

With your keys in place, you can now encrypt your input (the plaintext pt below) and send it securely to the cloud for processing.

Python:

result = client.run_query(context, secret_key, pt, client_blocks)

JavaScript:

const result = await client.runQuery(pt);

The run_query method performs four steps:

  1. Prepares your input using the model's preprocessing rules

  2. Takes your secret key to encrypt the data into a secure format

  3. Sends your encrypted data to LatticaAI server and waits for the response

  4. Decrypts what comes back and turns it into a ready-to-use PyTorch tensor

Here are snippets from the inner implementation of the run_query method:

import lattica_query.query_toolkit as toolkit_interface

# apply preprocessing to the plaintext
pt = toolkit_interface.apply_client_block(client_block, context, pt)

# encrypt and get ct (ciphertext)
ct = toolkit_interface.enc(context, secret_key, pt, pack_for_transmission=True)

# send to server and receive the encrypted result ciphertext
ct_res = self.worker_api.apply_hom_pipeline(ct, block_index=client_block.block_index + 1)

# decrypt and get the result plaintext
pt_dec = toolkit_interface.dec(context, secret_key, ct_res)
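
Putting the pieces together, a complete Python demo session looks roughly like the sketch below. It only uses the calls shown above; the input tensor pt is a placeholder, since each demo tutorial defines its own expected input format and model ID.

import torch
from lattica_query.auth import get_demo_token
from lattica_query.lattica_query_client import QueryClient

# 1. Authenticate with the model ID from your chosen demo tutorial.
model_id = "demoModelId"
my_token = get_demo_token(model_id)

# 2. Initialize the client and generate the key pair; the secret key stays local,
#    while the evaluation key is registered with the LatticaAI cloud.
client = QueryClient(my_token)
context, secret_key, client_blocks = client.generate_key()

# 3. Build an input for the model. The shape and values here are placeholders;
#    see the specific demo tutorial for the expected input format.
pt = torch.zeros(1, 28, 28)

# 4. Encrypt, run the homomorphic pipeline on the server, and decrypt the result.
result = client.run_query(context, secret_key, pt, client_blocks)
print(result)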
