Edge Deployment
Edge and Multi-cloud Inference Anywhere provides the ability to deploy models and perform inferences in any environment (edge or multicloud), on any hardware. Inferences in these environments are observed for drift detection, deployed models are updated when new versions or entirely new sets of models are created, and deployments run with or without GPUs.
The following hardware and AI Accelerators are supported.
| Accelerator | ARM Support | X64/X86 Support | Intel GPU | Nvidia GPU | Description |
|---|---|---|---|---|---|
| None | N/A | N/A | N/A | N/A | The default acceleration, used for all scenarios and architectures. |
| AIO | √ | X | X | X | AIO acceleration for Ampere Optimized trained models; only available with ARM processors. |
| Jetson | √ | X | X | √ | Nvidia Jetson acceleration used with edge deployments with ARM processors. |
| CUDA | √ | √ | X | √ | NVIDIA CUDA acceleration supported by both ARM and X64/X86 processors. Intended for deployment with Nvidia GPUs. See Nvidia Jetson Deployment Scenario for additional requirements. |
| OpenVINO | X | √ | √ | X | Intel OpenVINO acceleration; an AI accelerator from Intel compatible with x86/64 architectures. Aimed at edge and multi-cloud deployments either with or without Intel GPUs. |
| QAIC | X | √ | X | X | Qualcomm Cloud AI acceleration, compatible with x86/64 architectures. For details on LLM deployment optimizations with QAIC, see LLM Inference with Qualcomm QAIC. |
Pipeline Edge Deployment
Once a pipeline is published to the Edge Registry service, it can be deployed by a DevOps engineer in environments such as Docker, Kubernetes, or similar container runtime services. Before starting, verify the pipeline is published as per the Edge and Multi-cloud Pipeline Publish guide.
Docker Deployment
Before starting, verify that the Docker environment can connect to the artifact registry service.
For more details, consult the documentation for your artifact registry service. The following guides are provided for the three major cloud services:
- Set up authentication for Docker
- Authenticate with an Azure container registry
- Authenticating Amazon ECR Repositories for Docker CLI with Credential Helper
For the deployment, the engine URL is specified with the following environment variables:

- `DEBUG` (true|false): Whether to include debug output.
- `OCI_REGISTRY`: The URL of the registry service.
- `CONFIG_CPUS`: The number of CPUs to use. This applies to the inference engine only.

The following options apply to the inference pipeline and the models assigned as pipeline steps.

- `gpus`: Whether to allocate available GPUs to the deployment. If no GPUs are to be allocated, omit this option. For more details on how to specify GPU resources based on the edge hardware configuration, see Docker Engine: Containers: Access an NVIDIA GPU. For example, to allocate GPUs to the inference pipeline: `--gpus all`
- `cpus`: The fractional number of CPUs to apply. For example: `--cpus=1.25`, `--cpus=2.0`
- `memory`: The amount of RAM to allocate, in unit values of:
  - `k`: kilobytes
  - `m`: megabytes
  - `g`: gigabytes

  For example: `--memory=1536m`, `--memory=512k`
- `PIPELINE_URL`: The published pipeline URL.
- `EDGE_BUNDLE` (Optional): The base64 encoded edge token and other values used to connect to the Wallaroo Ops instance. This is used for edge management and transmitting inference results for observability. IMPORTANT NOTE: The token for `EDGE_BUNDLE` is valid for one deployment. Best practice is to use the `PERSISTENT_VOLUME_DIR` to store the authentication credentials between deployments. For subsequent deployments, generate a new edge location with its own `EDGE_BUNDLE`.
- `LOCAL_INFERENCE_STORAGE` (Optional): Sets the amount of storage to allocate for the edge deployment's inference log storage capacity. This is in the format `{size as number}{unit value}`. The values are similar to the Kubernetes memory resource unit format. If used, must be used with `PLATEAU_PAGE_SIZE`. The accepted unit values are:
  - `Ki` (for kilobytes)
  - `Mi` (for megabytes)
  - `Gi` (for gigabytes)
  - `Ti` (for terabytes)
- `PLATEAU_PAGE_SIZE` (Optional): How many inference log rows to upload from the edge deployment at a time. Must be used with `LOCAL_INFERENCE_STORAGE`.
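To illustrate the `{size as number}{unit value}` format used by `LOCAL_INFERENCE_STORAGE`, the following sketch converts such a string to a byte count. It is for illustration only; the `to_bytes` helper is not part of Wallaroo.

```shell
# Illustration only: convert a Kubernetes-style size string, as used by
# LOCAL_INFERENCE_STORAGE (e.g. "512Mi"), into a byte count.
to_bytes() {
  case "$1" in
    *Ki) echo $(( ${1%Ki} * 1024 )) ;;
    *Mi) echo $(( ${1%Mi} * 1024 * 1024 )) ;;
    *Gi) echo $(( ${1%Gi} * 1024 * 1024 * 1024 )) ;;
    *Ti) echo $(( ${1%Ti} * 1024 * 1024 * 1024 * 1024 )) ;;
    *)   echo "unrecognized size: $1" >&2; return 1 ;;
  esac
}

to_bytes 512Mi   # prints 536870912
```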
The following variables must be set by the user.
- `OCI_USERNAME`: The edge registry username.
- `OCI_PASSWORD`: The edge registry password or token.
- `EDGE_PORT`: The external port used to connect to the edge endpoints.
- `PERSISTENT_VOLUME_DIR` (Only applies to edge deployments with edge locations): The location for the persistent volume used by the edge location to store session information, logs, etc.
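For example, these variables might be exported before running the deployment. Every value below is a hypothetical placeholder; substitute your own credentials and paths.

```shell
# Hypothetical placeholder values -- substitute your own registry credentials.
export OCI_USERNAME="_json_key_base64"
export OCI_PASSWORD="sample-registry-token"
export EDGE_PORT=8080
export PERSISTENT_VOLUME_DIR="./persist"   # only for deployments with edge locations
```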
The following example shows deploying models in an edge environment with the following resources allocated:
- Wallaroo inference engine:
- cpus: 1
- Inference Pipeline:
- cpus: 1.25
- memory: 1536m
- gpus: true
docker run \
-p $EDGE_PORT:8080 \
-e OCI_USERNAME=$OCI_USERNAME \
-e OCI_PASSWORD=$OCI_PASSWORD \
-e PIPELINE_URL=sample-pipeline-url \
-e CONFIG_CPUS=1.0 --gpus all --cpus=1.25 --memory=1536m \
sample-engine-url
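Once the container is running, the edge endpoints can be probed from the host. This is a sketch assuming a local deployment listening on `EDGE_PORT` and the Wallaroo edge `/pipelines` endpoint; adjust the host and port to your environment.

```shell
# Probe the deployed engine; prints a message if nothing is listening.
EDGE_PORT=8080   # placeholder -- match the port passed to docker run
curl --silent --max-time 2 "http://localhost:${EDGE_PORT}/pipelines" \
  || echo "engine not reachable on port ${EDGE_PORT}"
```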
Docker Deployment Example
Using our sample environment, here’s a sample deployment using Docker with a computer vision ML model, the same model used in the Wallaroo Use Case Tutorials Computer Vision: Retail tutorials.
1. Log in through `docker login` to confirm access to the registry service. For example, logging into the artifact registry with the token stored in the variable `tok`:

   cat $tok | docker login -u _json_key_base64 --password-stdin https://sample-registry.com

2. Deploy the Wallaroo published pipeline with an edge added to the pipeline publish through `docker run`.

   IMPORTANT NOTE: Edge deployments with Edge Observability enabled with the `EDGE_BUNDLE` option include an authentication token that only authenticates once. To store the token long term, include the persistent volume flag `-v {path to storage}` setting.

   Deployment with `EDGE_BUNDLE` for observability:

   docker run -p 8080:8080 \
     -v ./data:/persist \
     -e DEBUG=true \
     -e OCI_REGISTRY=$REGISTRYURL \
     -e EDGE_BUNDLE=ZXhwb3J0IEJVTkRMRV9WRVJTSU9OPTEKZXhwb3J0IEVER0VfTkFNRT1lZGdlLWNjZnJhdWQtb2JzZXJ2YWJpbGl0eXlhaWcKZXhwb3J0IEpPSU5fVE9LRU49MjZmYzFjYjgtMjUxMi00YmU3LTk0ZGUtNjQ2NGI1MGQ2MzhiCmV4cG9ydCBPUFNDRU5URVJfSE9TVD1kb2MtdGVzdC5lZGdlLndhbGxhcm9vY29tbXVuaXR5Lm5pbmphCmV4cG9ydCBQSVBFTElORV9VUkw9Z2hjci5pby93YWxsYXJvb2xhYnMvZG9jLXNhbXBsZXMvcGlwZWxpbmVzL2VkZ2Utb2JzZXJ2YWJpbGl0eS1waXBlbGluZTozYjQ5ZmJhOC05NGQ4LTRmY2EtYWVjYy1jNzUyNTdmZDE2YzYKZXhwb3J0IFdPUktTUEFDRV9JRD03 \
     -e CONFIG_CPUS=1 \
     -e OCI_USERNAME=$REGISTRYUSERNAME \
     -e OCI_PASSWORD=$REGISTRYPASSWORD \
     -e PIPELINE_URL=ghcr.io/wallaroolabs/doc-samples/pipelines/edge-observability-pipeline:3b49fba8-94d8-4fca-aecc-c75257fd16c6 \
     ghcr.io/wallaroolabs/doc-samples/engines/proxy/wallaroo/ghcr.io/wallaroolabs/standalone-mini:v2023.4.0-main-4079

   Connection to the Wallaroo Ops instance from the edge deployment with `EDGE_BUNDLE` is verified with the log entry `Node attestation was successful`.

   Deployment without observability:

   docker run -p 8080:8080 \
     -e DEBUG=true \
     -e OCI_REGISTRY=$REGISTRYURL \
     -e CONFIG_CPUS=1 \
     -e OCI_USERNAME=$REGISTRYUSERNAME \
     -e OCI_PASSWORD=$REGISTRYPASSWORD \
     -e PIPELINE_URL=ghcr.io/wallaroolabs/doc-samples/pipelines/edge-observability-pipeline:3b49fba8-94d8-4fca-aecc-c75257fd16c6 \
     ghcr.io/wallaroolabs/doc-samples/engines/proxy/wallaroo/ghcr.io/wallaroolabs/standalo
Docker Compose Deployment
For users who prefer to use docker compose, the following sample compose.yaml file is used to launch the Wallaroo Edge pipeline. This is the same used in the Wallaroo Use Case Tutorials Computer Vision: Retail tutorials. The volumes tag is used to preserve the login session from the one-time token generated as part of the EDGE_BUNDLE.
EDGE_BUNDLE is only required when adding an edge to a Wallaroo publish for observability. The following is deployed without observability.
services:
engine:
image: {Your Engine URL}
ports:
- 8080:8080
environment:
PIPELINE_URL: {Your Pipeline URL}
OCI_REGISTRY: {Your Edge Registry URL}
OCI_USERNAME: {Your Registry Username}
OCI_PASSWORD: {Your Token or Password}
CONFIG_CPUS: 4
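Compose files can also carry the same resource limits as the `docker run` flags described earlier. The following fragment is a sketch based on the Compose specification's `cpus`, `mem_limit`, and GPU device-reservation fields; it is not a Wallaroo-specific requirement, and the image and pipeline URLs are placeholders.

```yaml
services:
  engine:
    image: {Your Engine URL}
    ports:
      - 8080:8080
    cpus: 1.25          # equivalent to --cpus=1.25
    mem_limit: 1536m    # equivalent to --memory=1536m
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia   # equivalent to --gpus all
              count: all
              capabilities: [gpu]
    environment:
      PIPELINE_URL: {Your Pipeline URL}
      OCI_REGISTRY: {Your Edge Registry URL}
      OCI_USERNAME: {Your Registry Username}
      OCI_PASSWORD: {Your Token or Password}
      CONFIG_CPUS: 1
```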
The procedure is:
1. Log in through `docker login` to confirm access to the registry service. For example, logging into the artifact registry with the token stored in the variable `tok`:

   cat $tok | docker login -u _json_key_base64 --password-stdin https://sample-registry.com

2. Set up the `compose.yaml` file.

   IMPORTANT NOTE: Edge deployments with Edge Observability enabled with the `EDGE_BUNDLE` option include an authentication token that only authenticates once. To store the token long term, include the persistent volume with the `volumes:` tag.

   services:
     engine:
       image: sample-registry.com/engine:v2023.3.0-main-3707
       ports:
         - 8080:8080
       volumes:
         - ./data:/persist
       environment:
         PIPELINE_URL: sample-registry.com/pipelines/edge-cv-retail:bf70eaf7-8c11-4b46-b751-916a43b1a555
         EDGE_BUNDLE: ZXhwb3J0IEJVTkRMRV9WRVJTSU9OPTEKZXhwb3J0IEVER0VfTkFNRT1lZGdlLWNjZnJhdWQtb2JzZXJ2YWJpbGl0eXlhaWcKZXhwb3J0IEpPSU5fVE9LRU49MjZmYzFjYjgtMjUxMi00YmU3LTk0ZGUtNjQ2NGI1MGQ2MzhiCmV4cG9ydCBPUFNDRU5URVJfSE9TVD1kb2MtdGVzdC5lZGdlLndhbGxhcm9vY29tbXVuaXR5Lm5pbmphCmV4cG9ydCBQSVBFTElORV9VUkw9Z2hjci5pby93YWxsYXJvb2xhYnMvZG9jLXNhbXBsZXMvcGlwZWxpbmVzL2VkZ2Utb2JzZXJ2YWJpbGl0eS1waXBlbGluZTozYjQ5ZmJhOC05NGQ4LTRmY2EtYWVjYy1jNzUyNTdmZDE2YzYKZXhwb3J0IFdPUktTUEFDRV9JRD03
         OCI_REGISTRY: sample-registry.com
         OCI_USERNAME: _json_key_base64
         OCI_PASSWORD: abc123
         CONFIG_CPUS: 4

3. Deploy with `docker compose up`.
Docker Compose Deployment Example
The deployment and undeployment are then just a simple `docker compose up` and `docker compose down`. The following shows an example of deploying the Wallaroo edge pipeline using docker compose.
docker compose up
[+] Running 1/1
✔ Container cv_data-engine-1 Recreated 0.5s
Attaching to cv_data-engine-1
cv_data-engine-1 | Wallaroo Engine - Standalone mode
cv_data-engine-1 | Login Succeeded
cv_data-engine-1 | Fetching manifest and config for pipeline: sample-registry.com/pipelines/edge-cv-retail:bf70eaf7-8c11-4b46-b751-916a43b1a555
cv_data-engine-1 | Fetching model layers
cv_data-engine-1 | digest: sha256:c6c8869645962e7711132a7e17aced2ac0f60dcdc2c7faa79b2de73847a87984
cv_data-engine-1 | filename: c6c8869645962e7711132a7e17aced2ac0f60dcdc2c7faa79b2de73847a87984
cv_data-engine-1 | name: resnet-50
cv_data-engine-1 | type: model
cv_data-engine-1 | runtime: onnx
cv_data-engine-1 | version: 693e19b5-0dc7-4afb-9922-e3f7feefe66d
cv_data-engine-1 |
cv_data-engine-1 | Fetched
cv_data-engine-1 | Starting engine
cv_data-engine-1 | Looking for preexisting `yaml` files in //modelconfigs
cv_data-engine-1 | Looking for preexisting `yaml` files in //pipelines
Podman Deployment
Wallaroo edge deployments can be made using Podman.
For the deployment, the engine URL is specified with the following environment variables:

- `DEBUG` (true|false): Whether to include debug output.
- `OCI_REGISTRY`: The URL of the registry service.
- `CONFIG_CPUS`: The number of CPUs to use. This applies to the inference engine only.

The following options apply to the inference pipeline and the models assigned as pipeline steps.

- `gpus`: Whether to allocate available GPUs to the deployment. If no GPUs are to be allocated, omit this option. For more details on how to specify GPU resources based on the edge hardware configuration, see Docker Engine: Containers: Access an NVIDIA GPU. For example, to allocate GPUs to the inference pipeline: `--gpus all`
- `cpus`: The fractional number of CPUs to apply. For example: `--cpus=1.25`, `--cpus=2.0`
- `memory`: The amount of RAM to allocate, in unit values of:
  - `k`: kilobytes
  - `m`: megabytes
  - `g`: gigabytes

  For example: `--memory=1536m`, `--memory=512k`
- `PIPELINE_URL`: The published pipeline URL.
- `EDGE_BUNDLE` (Optional): The base64 encoded edge token and other values used to connect to the Wallaroo Ops instance. This is used for edge management and transmitting inference results for observability. IMPORTANT NOTE: The token for `EDGE_BUNDLE` is valid for one deployment. For subsequent deployments, generate a new edge location with its own `EDGE_BUNDLE`.
- `LOCAL_INFERENCE_STORAGE` (Optional): Sets the amount of storage to allocate for the edge deployment's inference log storage capacity. This is in the format `{size as number}{unit value}`. The values are similar to the Kubernetes memory resource unit format. If used, must be used with `PLATEAU_PAGE_SIZE`. The accepted unit values are:
  - `Ki` (for kilobytes)
  - `Mi` (for megabytes)
  - `Gi` (for gigabytes)
  - `Ti` (for terabytes)
- `PLATEAU_PAGE_SIZE` (Optional): How many inference log rows to upload from the edge deployment at a time. Must be used with `LOCAL_INFERENCE_STORAGE`.
The following variables must be set by the user.
- `OCI_USERNAME`: The edge registry username.
- `OCI_PASSWORD`: The edge registry password or token.
- `EDGE_PORT`: The external port used to connect to the edge endpoints.
- `PERSISTENT_VOLUME_DIR` (Only applies to edge deployments with edge locations): The location for the persistent volume used by the edge location to store session information, logs, etc.
Podman Deployment Example
Using our sample environment, here’s a sample deployment using Podman with a linear regression ML model, the same model used in the Wallaroo Edge Observability with Wallaroo Assays tutorials.
1. Best practice is to log in as the root user before running `podman`. For example: `sudo su -`.

2. Deploy the Wallaroo published pipeline with an edge added to the pipeline publish through `podman run`.

   IMPORTANT NOTE: Edge deployments with Edge Observability enabled with the `EDGE_BUNDLE` option include an authentication token that only authenticates once. To store the token long term, include the persistent volume flag `-v {path to storage}` setting.

   Deployment with `EDGE_BUNDLE` for observability. This is for edge deployments with specific edge locations defined for observability. For more details, see Edge Observability.

   podman run -v $PERSISTENT_VOLUME_DIR:/persist \
     -p $EDGE_PORT:8080 \
     -e OCI_USERNAME=$OCI_USERNAME \
     -e OCI_PASSWORD=$OCI_PASSWORD \
     -e PIPELINE_URL=ghcr.io/wallaroolabs/doc-samples/pipelines/assay-demonstration-tutorial:1ff19772-f41f-42fb-b0d1-f82130bf5801 \
     -e EDGE_BUNDLE=ZXhwb3J0IEJVTkRMRV9WRVJTSU9OPTEKZXhwb3J0IENPTkZJR19DUFVTPTQKZXhwb3J0IEVER0VfTkFNRT1ob3VzZXByaWNlLWVkZ2UtZGVtb25zdHJhdGlvbi0wMgpleHBvcnQgT1BTQ0VOVEVSX0hPU1Q9ZG9jLXRlc3Qud2FsbGFyb29jb21tdW5pdHkubmluamEKZXhwb3J0IFBJUEVMSU5FX1VSTD1naGNyLmlvL3dhbGxhcm9vbGFicy9kb2Mtc2FtcGxlcy9waXBlbGluZXMvYXNzYXktZGVtb25zdHJhdGlvbi10dXRvcmlhbDoxZmYxOTc3Mi1mNDFmLTQyZmItYjBkMS1mODIxMzBiZjU4MDEKZXhwb3J0IEpPSU5fVE9LRU49YTQ0OGIyZjItNjgwYi00Y2ZiLThiMjItY2ZjNTI5MTk5ZjY5CmV4cG9ydCBPQ0lfUkVHSVNUUlk9Z2hjci5pbw== \
     -e CONFIG_CPUS=4 --cpus=4.0 --memory=3g \
     ghcr.io/wallaroolabs/doc-samples/engines/proxy/wallaroo/ghcr.io/wallaroolabs/fitzroy-mini:v2025.1.0-6250

   Deployment without observability. This is for publishes that do not have a specific edge location defined.

   podman run \
     -p $EDGE_PORT:8080 \
     -e OCI_USERNAME=$OCI_USERNAME \
     -e OCI_PASSWORD=$OCI_PASSWORD \
     -e PIPELINE_URL=ghcr.io/wallaroolabs/doc-samples/pipelines/assay-demonstration-tutorial:1ff19772-f41f-42fb-b0d1-f82130bf5801 \
     -e CONFIG_CPUS=4 --cpus=4.0 --memory=3g \
     ghcr.io/wallaroolabs/doc-samples/engines/proxy/wallaroo/ghcr.io/wallaroolabs/fitzroy-mini:v2025.1.0-6250
Helm Deployment
Published pipelines can be deployed through the use of helm charts.
Helm deployments take two steps: first retrieve the required values.yaml and update it with any overrides, then install the chart with helm install.
IMPORTANT NOTE: Edge deployments with Edge Observability enabled with the EDGE_BUNDLE option include an authentication token that only authenticates once. Helm chart installations automatically add a persistent volume during deployment to store the authentication session data for future deployments.
1. Log in to the registry service with `helm registry login`. For example, if the token is stored in the variable `tok`:

   helm registry login sample-registry.com --username _json_key_base64 --password $tok

2. Pull the Helm charts from the published pipeline. The two fields are the Helm Chart URL and the Helm Chart version, which specify the OCI artifact. This typically takes the format of:

   helm pull oci://{published.helm_chart_url} --version {published.helm_chart_version}

3. Extract the `tgz` file, copy the `values.yaml`, and edit the values used for engine allocations, etc. The following are required for the deployment to run:

   ociRegistry:
     registry: {your registry service}
     username: {registry username here}
     password: {registry token here}

   For Wallaroo Server deployments with an edge location set, the values include `edgeBundle` as generated when the edge was added to the pipeline publish:

   ociRegistry:
     registry: {your registry service}
     username: {registry username here}
     password: {registry token here}
     edgeBundle: abcdefg

   Store this in another file, such as local-values.yaml.

4. Create the namespace to deploy the pipeline to. For example, the namespace `wallaroo-edge-pipeline` would be:

   kubectl create namespace wallaroo-edge-pipeline

5. Deploy the `helm` installation with `helm install` through one of the following options:

   - Specify the `tgz` file that was downloaded and the local values file. For example:

     helm install --namespace {namespace} --values {local values file} {helm install name} {tgz path} --timeout 10m --wait --wait-for-jobs

   - Specify the expanded directory from the downloaded `tgz` file:

     helm install --namespace {namespace} --values {local values file} {helm install name} {helm directory path} --timeout 10m --wait --wait-for-jobs

   - Specify the Pipeline Helm Chart URL and the Pipeline Helm Chart version:

     helm install --namespace {namespace} --values {local values file} {helm install name} oci://{published.helm_chart_url} --version {published.helm_chart_version} --timeout 10m --wait --wait-for-jobs
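The steps above can be collected into one script. This is a sketch with hypothetical placeholder values for the registry, chart URL, release name, and version; the commands are printed rather than executed so the sequence is visible (drop the leading `echo` to run them for real).

```shell
# Hypothetical placeholders -- replace with the values from your pipeline publish.
NAMESPACE="wallaroo-edge-pipeline"
CHART_URL="sample-registry.com/charts/edge-cv-retail"
CHART_VERSION="0.0.1"
RELEASE="edge-cv-retail"

# Printed, not executed: remove the leading `echo` to deploy for real.
echo helm registry login sample-registry.com --username _json_key_base64 --password "$tok"
echo helm pull "oci://$CHART_URL" --version "$CHART_VERSION"
echo kubectl create namespace "$NAMESPACE"
echo helm install --namespace "$NAMESPACE" --values local-values.yaml \
  "$RELEASE" "oci://$CHART_URL" --version "$CHART_VERSION" \
  --timeout 10m --wait --wait-for-jobs
```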
Once deployed, the DevOps engineer will have to forward the appropriate ports to the `svc/engine-svc` service in the specific pipeline's namespace. For example, using `kubectl port-forward` with the namespace `ccfraud`:

kubectl port-forward svc/engine-svc -n ccfraud 8080 --address 0.0.0.0