Zooming Into Each Step of a Demo Run with the LatticaAI Flow

Installation & Setup

Before you begin, ensure you have the following:

  • Python 3.10+ installed on your client machine.

  • The Lattica query package installed:

pip install lattica_query
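
To confirm the installation succeeded, you can import the package from your Python 3.10+ environment (a quick sanity check, not part of the demo flow itself):

import lattica_query

print("lattica_query imported successfully")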

Authentication & Model ID

You need an authentication token (a JWT) to interact with our cloud infrastructure. This token validates your requests and ensures secure communication.

Each public model we run on the cloud has its own unique model ID. The specific model ID for each demo is provided in its corresponding tutorial.

  1. Request an authentication token by running the code below.

  2. Store the token securely for subsequent operations.

from lattica_query.auth import get_demo_token

# Use the model ID provided in the specific demo tutorial (e.g., 'imageEnhancement', 'sentimentAnalysis')
model_id = "demoModelId"
my_token = get_demo_token(model_id)

In our web demo version, the client logic is initialized automatically in your browser; no separate install or setup is required. The web page manages your authentication and sets the appropriate model ID behind the scenes.


Generating & Registering Keys

We supply a class that handles all the local computation and communication with the LatticaAI server. Initialize this class using the token you obtained.

from lattica_query.lattica_query_client import QueryClient

client = QueryClient(my_token)

Our encryption scheme relies on a secret key, which stays on your machine, and an evaluation key (EVK), which is sent to the LatticaAI cloud server.

One key pair can be reused for multiple demo sessions.

context, secret_key, client_blocks = client.generate_key()
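
Since the key pair can be reused across demo sessions, you may want to persist it locally. Below is a minimal sketch that assumes the returned objects are picklable; the package may provide its own serialization helpers, so treat the file name and format here as illustrative only.

import os
import pickle

KEYS_PATH = "lattica_demo_keys.pkl"  # hypothetical local cache file

if os.path.exists(KEYS_PATH):
    # Reuse a previously generated key pair
    with open(KEYS_PATH, "rb") as f:
        context, secret_key, client_blocks = pickle.load(f)
else:
    # Generate a fresh key pair and cache it locally
    # NOTE: assumes these objects are picklable; the secret key never leaves your machine
    context, secret_key, client_blocks = client.generate_key()
    with open(KEYS_PATH, "wb") as f:
        pickle.dump((context, secret_key, client_blocks), f)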

Processing the Requested Query

With your keys in place, you can now encrypt your plaintext input (pt in the snippet below) and send it securely to the cloud for processing.

result = client.run_query(context, secret_key, pt, client_blocks)
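
For orientation, here is a minimal sketch of a full query using the client and keys from the steps above. The input pt is assumed to be a PyTorch tensor; its expected shape and format depend on the specific demo model, so check the corresponding tutorial before running it.

import torch

# Hypothetical input tensor; replace with data matching your demo model's expected format
pt = torch.rand(1, 16)

# Encrypts locally, runs the model homomorphically in the cloud, and decrypts the response
result = client.run_query(context, secret_key, pt, client_blocks)
print(result)  # a regular PyTorch tensor, ready for further use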

The run_query method works in 4 steps:

  1. Prepares your input using the model's preprocessing rules

  2. Takes your secret key to encrypt the data into a secure format

  3. Sends your encrypted data to LatticaAI server and waits for the response

  4. Decrypts what comes back and turns it into a ready-to-use PyTorch tensor

Here are snippets from the inner implementation of the run_query method:

import lattica_query.query_toolkit as toolkit_interface

# apply preprocessing to the plaintext input
pt = toolkit_interface.apply_client_block(client_block, context, pt)

# encrypt the preprocessed input to get ct (the ciphertext)
ct = toolkit_interface.enc(context, secret_key, pt, pack_for_transmission=True)

# send the ciphertext to the server and receive the encrypted result
ct_res = self.worker_api.apply_hom_pipeline(ct, block_index=client_block.block_index + 1)

# decrypt the result to get the plaintext output
pt_dec = toolkit_interface.dec(context, secret_key, ct_res)
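
Putting the snippets together, a standalone run_query-style helper might look like the sketch below. It uses only the calls shown above; the actual internal structure of QueryClient, including how it converts the decrypted output back into a PyTorch tensor, may differ.

import lattica_query.query_toolkit as toolkit_interface

def run_query_sketch(worker_api, context, secret_key, pt, client_block):
    # 1. apply the model's preprocessing rules to the plaintext input
    pt = toolkit_interface.apply_client_block(client_block, context, pt)

    # 2. encrypt the preprocessed input with the secret key
    ct = toolkit_interface.enc(context, secret_key, pt, pack_for_transmission=True)

    # 3. send the ciphertext to the LatticaAI server for homomorphic evaluation
    ct_res = worker_api.apply_hom_pipeline(ct, block_index=client_block.block_index + 1)

    # 4. decrypt the server's response back into a plaintext result
    return toolkit_interface.dec(context, secret_key, ct_res)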
