Once a pipeline is published to the Edge Registry service, it can be deployed by a DevOps engineer in environments such as Docker, Kubernetes, or similar container services.
Before starting, verify that the Docker environment is able to connect to the artifact registry service. For more details, see the documentation for your artifact registry service; the three major cloud services each provide their own guides.
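As a quick sanity check, registry connectivity and credentials can be verified from the Docker host before deployment. The sketch below assumes the OCI_REGISTRY, OCI_USERNAME, and OCI_PASSWORD values described later in this section; the engine image path is a placeholder.

```bash
# Sketch: confirm the Docker host can authenticate to the artifact registry.
# OCI_REGISTRY, OCI_USERNAME, and OCI_PASSWORD are assumed to be set in the shell.
echo $OCI_PASSWORD | docker login $OCI_REGISTRY -u $OCI_USERNAME --password-stdin

# Optionally pull the published engine image to confirm read access (placeholder path).
docker pull $OCI_REGISTRY/path/to/engine:tag
```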
For the deployment, the engine URL is specified with the following environment variables:

- DEBUG (true|false): Whether to include debug output.
- OCI_REGISTRY: The URL of the registry service.
- CONFIG_CPUS: The number of CPUs to use. This applies to the inference engine only.
The following options apply to the inference pipeline and the models assigned as pipeline steps.

- gpus: Whether to allocate available gpus to the deployment. If no gpus are to be allocated, this option is not used. For more details on how to specify gpu resources based on the edge hardware configuration, see Docker Engine: Containers: Access an NVIDIA GPU. For example, to allocate gpus to the inference pipeline: --gpus all
- cpus: The fractional number of cpus to apply. For example: --cpus=1.25, --cpus=2.0
- memory: The amount of RAM to allocate in unit values of k (kilobyte), m (megabyte), or g (gigabyte). For example: --memory=1536m, --memory=512k
- PIPELINE_URL: The published pipeline URL.
- EDGE_BUNDLE (Optional): The base64 encoded edge token and other values to connect to the Wallaroo Ops instance. This is used for edge management and transmitting inference results for observability. IMPORTANT NOTE: The token for EDGE_BUNDLE is valid for one deployment. Best practice is to use the PERSISTENT_VOLUME_DIR to store the authentication credentials between deployments. For subsequent deployments, generate a new edge location with its own EDGE_BUNDLE.
- LOCAL_INFERENCE_STORAGE (Optional): Sets the amount of storage to allocate for the edge deployment's inference log storage capacity, in the format {size as number}{unit value}. The values are similar to the Kubernetes memory resource units format. If used, it must be used with PLATEAU_PAGE_SIZE. The accepted unit values are: Ki (kilobytes), Mi (megabytes), Gi (gigabytes), Ti (terabytes).
- PLATEAU_PAGE_SIZE (Optional): How many inference log rows to upload from the edge deployment at a time. Must be used with LOCAL_INFERENCE_STORAGE.
The following variables must be set by the user.

- OCI_USERNAME: The edge registry username.
- OCI_PASSWORD: The edge registry password or token.
- EDGE_PORT: The external port used to connect to the edge endpoints.
- PERSISTENT_VOLUME_DIR (Only applies to edge deployments with edge locations): The location for the persistent volume used by the edge location to store session information, logs, etc.

The following example shows deploying models in an edge environment with the following resources allocated:
docker run \
-p $EDGE_PORT:8080 \
-e OCI_USERNAME=$OCI_USERNAME \
-e OCI_PASSWORD=$OCI_PASSWORD \
-e PIPELINE_URL=sample-pipeline-url \
-e CONFIG_CPUS=1.0 --gpus all --cpus=1.25 --memory=1536m \
sample-engine-url
Using our sample environment, here's a sample deployment using Docker with a computer vision ML model, the same used in the Wallaroo Use Case Tutorials Computer Vision: Retail tutorials.

Log in through docker to confirm access to the registry service with docker login. For example, logging into the artifact registry with the token stored in the variable tok:
cat $tok | docker login -u _json_key_base64 --password-stdin https://sample-registry.com
Then deploy the Wallaroo published pipeline, with an edge added to the pipeline publish, through docker run.

IMPORTANT NOTE: Edge deployments with Edge Observability enabled with the EDGE_BUNDLE option include an authentication token that only authenticates once. To store the token long term, include the persistent volume flag -v {path to storage}.
Deployment with EDGE_BUNDLE
for observability.
docker run -p 8080:8080 \
-v ./data:/persist \
-e DEBUG=true \
-e OCI_REGISTRY=$REGISTRYURL \
-e EDGE_BUNDLE=ZXhwb3J0IEJVTkRMRV9WRVJTSU9OPTEKZXhwb3J0IEVER0VfTkFNRT1lZGdlLWNjZnJhdWQtb2JzZXJ2YWJpbGl0eXlhaWcKZXhwb3J0IEpPSU5fVE9LRU49MjZmYzFjYjgtMjUxMi00YmU3LTk0ZGUtNjQ2NGI1MGQ2MzhiCmV4cG9ydCBPUFNDRU5URVJfSE9TVD1kb2MtdGVzdC5lZGdlLndhbGxhcm9vY29tbXVuaXR5Lm5pbmphCmV4cG9ydCBQSVBFTElORV9VUkw9Z2hjci5pby93YWxsYXJvb2xhYnMvZG9jLXNhbXBsZXMvcGlwZWxpbmVzL2VkZ2Utb2JzZXJ2YWJpbGl0eS1waXBlbGluZTozYjQ5ZmJhOC05NGQ4LTRmY2EtYWVjYy1jNzUyNTdmZDE2YzYKZXhwb3J0IFdPUktTUEFDRV9JRD03 \
-e CONFIG_CPUS=1 \
-e OCI_USERNAME=$REGISTRYUSERNAME \
-e OCI_PASSWORD=$REGISTRYPASSWORD \
-e PIPELINE_URL=ghcr.io/wallaroolabs/doc-samples/pipelines/edge-observability-pipeline:3b49fba8-94d8-4fca-aecc-c75257fd16c6 \
ghcr.io/wallaroolabs/doc-samples/engines/proxy/wallaroo/ghcr.io/wallaroolabs/standalone-mini:v2023.4.0-main-4079
Connection to the Wallaroo Ops instance from the edge deployment with EDGE_BUNDLE is verified with the log entry Node attestation was successful.
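One way to confirm this is to search the engine container's logs for that entry; the container name below is a placeholder, so substitute the name or ID shown by docker ps.

```bash
# Sketch: check the engine container logs for the attestation message.
# Replace edge-engine with the container name or ID shown by `docker ps`.
docker logs edge-engine 2>&1 | grep "Node attestation was successful"
```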
Deployment without observability.
docker run -p 8080:8080 \
-e DEBUG=true \
-e OCI_REGISTRY=$REGISTRYURL \
-e CONFIG_CPUS=1 \
-e OCI_USERNAME=$REGISTRYUSERNAME \
-e OCI_PASSWORD=$REGISTRYPASSWORD \
-e PIPELINE_URL=ghcr.io/wallaroolabs/doc-samples/pipelines/edge-observability-pipeline:3b49fba8-94d8-4fca-aecc-c75257fd16c6 \
  ghcr.io/wallaroolabs/doc-samples/engines/proxy/wallaroo/ghcr.io/wallaroolabs/standalone-mini:v2023.4.0-main-4079
For users who prefer to use docker compose
, the following sample compose.yaml
file is used to launch the Wallaroo Edge pipeline. This is the same used in the Wallaroo Use Case Tutorials Computer Vision: Retail tutorials. The volumes
tag is used to preserve the login session from the one-time token generated as part of the EDGE_BUNDLE
.
EDGE_BUNDLE
is only required when adding an edge to a Wallaroo publish for observability. The following is deployed without observability.
services:
engine:
image: {Your Engine URL}
ports:
- 8080:8080
environment:
PIPELINE_URL: {Your Pipeline URL}
OCI_REGISTRY: {Your Edge Registry URL}
OCI_USERNAME: {Your Registry Username}
OCI_PASSWORD: {Your Token or Password}
CONFIG_CPUS: 4
The procedure is:

Log in through docker to confirm access to the registry service with docker login. For example, logging into the artifact registry with the token stored in the variable tok:
cat $tok | docker login -u _json_key_base64 --password-stdin https://sample-registry.com
Set up the compose.yaml
file.
IMPORTANT NOTE: Edge deployments with Edge Observability enabled with the EDGE_BUNDLE
option include an authentication token that only authenticates once. To store the token long term, include the persistent volume with the volumes:
tag.
services:
engine:
image: sample-registry.com/engine:v2023.3.0-main-3707
ports:
- 8080:8080
volumes:
- ./data:/persist
environment:
PIPELINE_URL: sample-registry.com/pipelines/edge-cv-retail:bf70eaf7-8c11-4b46-b751-916a43b1a555
EDGE_BUNDLE: ZXhwb3J0IEJVTkRMRV9WRVJTSU9OPTEKZXhwb3J0IEVER0VfTkFNRT1lZGdlLWNjZnJhdWQtb2JzZXJ2YWJpbGl0eXlhaWcKZXhwb3J0IEpPSU5fVE9LRU49MjZmYzFjYjgtMjUxMi00YmU3LTk0ZGUtNjQ2NGI1MGQ2MzhiCmV4cG9ydCBPUFNDRU5URVJfSE9TVD1kb2MtdGVzdC5lZGdlLndhbGxhcm9vY29tbXVuaXR5Lm5pbmphCmV4cG9ydCBQSVBFTElORV9VUkw9Z2hjci5pby93YWxsYXJvb2xhYnMvZG9jLXNhbXBsZXMvcGlwZWxpbmVzL2VkZ2Utb2JzZXJ2YWJpbGl0eS1waXBlbGluZTozYjQ5ZmJhOC05NGQ4LTRmY2EtYWVjYy1jNzUyNTdmZDE2YzYKZXhwb3J0IFdPUktTUEFDRV9JRD03
OCI_REGISTRY: sample-registry.com
OCI_USERNAME: _json_key_base64
OCI_PASSWORD: abc123
CONFIG_CPUS: 4
Then deploy with docker compose up.

Deployment and undeployment are then just a simple docker compose up and docker compose down; the undeploy command is shown after the example output below. The following shows an example of deploying the Wallaroo edge pipeline using docker compose.
docker compose up
[+] Running 1/1
✔ Container cv_data-engine-1 Recreated 0.5s
Attaching to cv_data-engine-1
cv_data-engine-1 | Wallaroo Engine - Standalone mode
cv_data-engine-1 | Login Succeeded
cv_data-engine-1 | Fetching manifest and config for pipeline: sample-registry.com/pipelines/edge-cv-retail:bf70eaf7-8c11-4b46-b751-916a43b1a555
cv_data-engine-1 | Fetching model layers
cv_data-engine-1 | digest: sha256:c6c8869645962e7711132a7e17aced2ac0f60dcdc2c7faa79b2de73847a87984
cv_data-engine-1 | filename: c6c8869645962e7711132a7e17aced2ac0f60dcdc2c7faa79b2de73847a87984
cv_data-engine-1 | name: resnet-50
cv_data-engine-1 | type: model
cv_data-engine-1 | runtime: onnx
cv_data-engine-1 | version: 693e19b5-0dc7-4afb-9922-e3f7feefe66d
cv_data-engine-1 |
cv_data-engine-1 | Fetched
cv_data-engine-1 | Starting engine
cv_data-engine-1 | Looking for preexisting `yaml` files in //modelconfigs
cv_data-engine-1 | Looking for preexisting `yaml` files in //pipelines
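When the edge deployment is no longer needed, it is undeployed from the same directory as the compose.yaml file; this is a minimal sketch of that step.

```bash
# Sketch: stop and remove the edge deployment started with docker compose up.
docker compose down
```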
Wallaroo edge deployments can be made using Podman.
For the deployment, the engine URL is specified with the following environment variables:

- DEBUG (true|false): Whether to include debug output.
- OCI_REGISTRY: The URL of the registry service.
- CONFIG_CPUS: The number of CPUs to use. This applies to the inference engine only.
The following options apply to the inference pipeline and the models assigned as pipeline steps.

- gpus: Whether to allocate available gpus to the deployment. If no gpus are to be allocated, this option is not used. For more details on how to specify gpu resources based on the edge hardware configuration, see Docker Engine: Containers: Access an NVIDIA GPU. For example, to allocate gpus to the inference pipeline: --gpus all
- cpus: The fractional number of cpus to apply. For example: --cpus=1.25, --cpus=2.0
- memory: The amount of RAM to allocate in unit values of k (kilobyte), m (megabyte), or g (gigabyte). For example: --memory=1536m, --memory=512k
- PIPELINE_URL: The published pipeline URL.
- EDGE_BUNDLE (Optional): The base64 encoded edge token and other values to connect to the Wallaroo Ops instance. This is used for edge management and transmitting inference results for observability. IMPORTANT NOTE: The token for EDGE_BUNDLE is valid for one deployment. For subsequent deployments, generate a new edge location with its own EDGE_BUNDLE.
- LOCAL_INFERENCE_STORAGE (Optional): Sets the amount of storage to allocate for the edge deployment's inference log storage capacity, in the format {size as number}{unit value}. The values are similar to the Kubernetes memory resource units format. If used, it must be used with PLATEAU_PAGE_SIZE. The accepted unit values are: Ki (kilobytes), Mi (megabytes), Gi (gigabytes), Ti (terabytes).
- PLATEAU_PAGE_SIZE (Optional): How many inference log rows to upload from the edge deployment at a time. Must be used with LOCAL_INFERENCE_STORAGE.
The following variables must be set by the user.

- OCI_USERNAME: The edge registry username.
- OCI_PASSWORD: The edge registry password or token.
- EDGE_PORT: The external port used to connect to the edge endpoints.
- PERSISTENT_VOLUME_DIR (Only applies to edge deployments with edge locations): The location for the persistent volume used by the edge location to store session information, logs, etc.

Using our sample environment, here's a sample deployment using Podman with a linear regression ML model, the same used in the Wallaroo Edge Observability with Wallaroo Assays tutorials.

Best practice is to log in as the root user before running podman. For example: sudo su -.
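Before running the deployment, podman can be authenticated to the registry in the same way as the Docker examples. The sketch below assumes the same OCI_USERNAME and OCI_PASSWORD values and uses ghcr.io only as an illustrative registry host.

```bash
# Sketch: as the root user, authenticate podman to the OCI registry before deploying.
sudo su -

# Then, in the root shell:
echo $OCI_PASSWORD | podman login ghcr.io -u $OCI_USERNAME --password-stdin
```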
Deploy the Wallaroo published pipeline, with an edge added to the pipeline publish, through podman run.

IMPORTANT NOTE: Edge deployments with Edge Observability enabled with the EDGE_BUNDLE option include an authentication token that only authenticates once. To store the token long term, include the persistent volume flag -v {path to storage}.

Deployment with EDGE_BUNDLE for observability. This is for edge deployments with specific edge locations defined for observability. For more details, see Edge Observability.
podman run -v $PERSISTENT_VOLUME_DIR:/persist \
-p $EDGE_PORT:8080 \
-e OCI_USERNAME=$OCI_USERNAME \
-e OCI_PASSWORD=$OCI_PASSWORD \
    -e PIPELINE_URL=ghcr.io/wallaroolabs/doc-samples/pipelines/assay-demonstration-tutorial:1ff19772-f41f-42fb-b0d1-f82130bf5801 \
-e EDGE_BUNDLE=ZXhwb3J0IEJVTkRMRV9WRVJTSU9OPTEKZXhwb3J0IENPTkZJR19DUFVTPTQKZXhwb3J0IEVER0VfTkFNRT1ob3VzZXByaWNlLWVkZ2UtZGVtb25zdHJhdGlvbi0wMgpleHBvcnQgT1BTQ0VOVEVSX0hPU1Q9ZG9jLXRlc3Qud2FsbGFyb29jb21tdW5pdHkubmluamEKZXhwb3J0IFBJUEVMSU5FX1VSTD1naGNyLmlvL3dhbGxhcm9vbGFicy9kb2Mtc2FtcGxlcy9waXBlbGluZXMvYXNzYXktZGVtb25zdHJhdGlvbi10dXRvcmlhbDoxZmYxOTc3Mi1mNDFmLTQyZmItYjBkMS1mODIxMzBiZjU4MDEKZXhwb3J0IEpPSU5fVE9LRU49YTQ0OGIyZjItNjgwYi00Y2ZiLThiMjItY2ZjNTI5MTk5ZjY5CmV4cG9ydCBPQ0lfUkVHSVNUUlk9Z2hjci5pbw== \
-e CONFIG_CPUS=4 --cpus=4.0 --memory=3g \
ghcr.io/wallaroolabs/doc-samples/engines/proxy/wallaroo/ghcr.io/wallaroolabs/fitzroy-mini:v2025.1.0-6250
Deployment without observability. This is for publishes that do not have a specific edge location defined.
podman run \
-p $EDGE_PORT:8080 \
-e OCI_USERNAME=$OCI_USERNAME \
-e OCI_PASSWORD=$OCI_PASSWORD \
-e PIPELINE_URL=ghcr.io/wallaroolabs/doc-samples/pipelines/assay-demonstration-tutorial:1ff19772-f41f-42fb-b0d1-f82130bf5801 \
-e CONFIG_CPUS=4 --cpus=4.0 --memory=3g \
ghcr.io/wallaroolabs/doc-samples/engines/proxy/wallaroo/ghcr.io/wallaroolabs/fitzroy-mini:v2025.1.0-6250
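After either podman run variant starts, the deployment can be spot-checked from the edge host. This sketch assumes EDGE_PORT is still set in the shell and uses the /pipelines endpoint described later in this guide.

```bash
# Sketch: confirm the edge engine is serving and the pipeline reports Running.
curl localhost:$EDGE_PORT/pipelines
```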
Published pipelines can be deployed through the use of helm charts.

Helm deployments take two steps: retrieve the required values.yaml and update it with any overrides, then install the helm chart.
IMPORTANT NOTE: Edge deployments with Edge Observability enabled with the EDGE_BUNDLE
option include an authentication token that only authenticates once. Helm chart installations automatically add a persistent volume during deployment to store the authentication session data for future deployments.
Log in to the registry service with helm registry login. For example, if the token is stored in the variable tok:
helm registry login sample-registry.com --username _json_key_base64 --password $tok
Pull the helm charts from the published pipeline. The two fields are the Helm Chart URL and the Helm Chart version, which specify the OCI artifact to retrieve. This typically takes the format of:
helm pull oci://{published.helm_chart_url} --version {published.helm_chart_version}
Extract the tgz file, copy the values.yaml, and edit the values used to set engine allocations, etc. The following are required for the deployment to run:
ociRegistry:
registry: {your registry service}
username: {registry username here}
password: {registry token here}
For Wallaroo Server deployments with an edge location set, the values include edgeBundle as generated when the edge was added to the pipeline publish.
ociRegistry:
registry: {your registry service}
username: {registry username here}
password: {registry token here}
edgeBundle: abcdefg
Store this in another file, such as local-values.yaml.
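A minimal sketch of this retrieval and override workflow follows; the chart URL, version, and resulting file names are placeholders taken from a hypothetical pipeline publish.

```bash
# Sketch: pull the published chart, extract values.yaml, and create the override file.
# The chart URL, version, and file names are placeholders from the pipeline publish.
helm pull oci://sample-registry.com/charts/edge-cv-retail --version 0.0.1
tar -xzf edge-cv-retail-0.0.1.tgz
cp edge-cv-retail/values.yaml local-values.yaml
# Edit local-values.yaml to set ociRegistry.registry, username, password (and edgeBundle if used).
```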
Create the namespace to deploy the pipeline to. For example, creating the namespace wallaroo-edge-pipeline would be:

kubectl create namespace wallaroo-edge-pipeline
Deploy the helm installation with helm install through one of the following options (a concrete sketch follows the list):
Specify the tgz
file that was downloaded and the local values file. For example:
helm install --namespace {namespace} --values {local values file} {helm install name} {tgz path} --timeout 10m --wait --wait-for-jobs
Specify the expanded directory from the downloaded tgz file.
helm install --namespace {namespace} --values {local values file} {helm install name} {helm directory path} --timeout 10m --wait --wait-for-jobs
Specify the Pipeline Helm Chart URL and the Pipeline Helm Chart version.
helm install --namespace {namespace} --values {local values file} {helm install name} oci://{published.helm_chart_url} --version {published.helm_chart_version} --timeout 10m --wait --wait-for-jobs
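As a concrete instance of the first option, with hypothetical namespace, release name, and chart file names:

```bash
# Sketch: install from the downloaded chart archive with the local override values.
# Namespace, release name, and file names are placeholders.
helm install --namespace wallaroo-edge-pipeline \
    --values local-values.yaml \
    edge-cv-retail \
    ./edge-cv-retail-0.0.1.tgz \
    --timeout 10m --wait --wait-for-jobs
```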
Once deployed, the DevOps engineer will have to forward the appropriate ports to the svc/engine-svc service in the specific pipeline. For example, using kubectl port-forward with the namespace ccfraud01 that would be:

kubectl port-forward svc/engine-svc -n ccfraud01 8080 --address 0.0.0.0
The following endpoints are available for API calls to the edge deployed pipeline.
The endpoint GET /pipelines returns a list of the deployed pipelines with the following fields:

- id: The pipeline name.
- status: Running, or Error if there are any issues.

For example:

curl localhost:8080/pipelines
{"pipelines":[{"id":"edge-cv-retail","status":"Running"}]}
The endpoint GET /models returns a list of models with the following fields:

- name: The model name.
- sha: The model SHA value.
- status: The model status: Running, or Error if there are any issues.
- version: The model version.

For example:

curl localhost:8080/models
{"models":[{"name":"resnet-50","sha":"c6c8869645962e7711132a7e17aced2ac0f60dcdc2c7faa79b2de73847a87984","status":"Running","version":"693e19b5-0dc7-4afb-9922-e3f7feefe66d"}]}
The inference endpoint takes the following patterns:

- POST /infer: The static inference endpoint. If a model deployment is updated or a new pipeline publish replaces a previous one, the /infer endpoint always points to the current deployed pipeline. For more information, see Run Anywhere: In-Line Model Updates on Edge Devices.
- POST /pipelines/{pipeline-name}: The pipeline-name is the same as returned from the /pipelines endpoint as id. This endpoint changes based on the pipeline publish deployed (see the sketch after this list).

Organizations are encouraged to use the /infer endpoint for consistency.
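For illustration, a request to the named-pipeline endpoint would look like the following sketch, using the edge-cv-retail id returned by GET /pipelines above and the same Apache Arrow input file used later in this section.

```bash
# Sketch: inference against the pipeline-specific endpoint rather than /infer.
curl -X POST localhost:8080/pipelines/edge-cv-retail \
    -H "Content-Type: application/vnd.apache.arrow.file" \
    -H "Accept: application/json; format=pandas-records" \
    --data-binary @./data/image_224x224.arrow
```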
Wallaroo inference endpoint URLs accept the following data inputs through the Content-Type header:

- Content-Type: application/vnd.apache.arrow.file: For Apache Arrow tables.
- Content-Type: application/json; format=pandas-records: For pandas DataFrames in record format.

Once deployed, we can perform an inference through the deployment URL.

The endpoint returns Content-Type: application/json; format=pandas-records by default. The returned fields match the example response below; the original_data field returns null if the input may be too long for a proper return.

The following example demonstrates sending an Apache Arrow table to the Edge deployed pipeline, requesting the inference results back in a pandas DataFrame records format.
curl -X POST localhost:8080/infer -H "Content-Type: application/vnd.apache.arrow.file" -H 'Accept: application/json; format=pandas-records' --data-binary @./data/image_224x224.arrow
Returns:
[{"check_failures":[],"elapsed":[1067541,21209776],"model_name":"resnet-50","model_version":"2e05e1d0-fcb3-4213-bba8-4bac13f53e8d","original_data":null,"outputs":[{"Int64":{"data":[535],"dim":[1],"v":1}},{"Float":{"data":[0.00009498586587142199,0.00009141524787992239,0.0004606838047038764,0.00007667174941161647,0.00008047101437114179,...],"dim":[1,1001],"v":1}}],"pipeline_name":"edge-cv-demo","shadow_data":{},"time":1694205578428}]
Inference logs are retrieved from edge location deployments through the /logs endpoint. For full details on edge observability, see Model Observability for Edge Deployments with Low or No Connectivity.

The /logs endpoint accepts POST requests with the following:

- Content-Type: application/json: Submissions to the /logs endpoint are in JSON text format.
- Accept: application/json; format=pandas-records: The /logs endpoint returns JSON in pandas Record format.
- Body {}: An empty set.

Inference logs are returned as JSON in pandas Record format with the following fields:
| Field | Type | Description |
|---|---|---|
| time | DateTime | DateTime field in Epoch format. |
| in | Dict | The inputs in Dict format. |
| out | Dict | The outputs in Dict format with the model field outputs and values. |
| anomaly | Dict | Any anomalies detected; the field count is reserved for the total number of validations derived as True. See anomalies for more details. |
| metadata | Dict | Metadata of the transaction. As shown in the example below, this includes last_model, pipeline_version, elapsed, dropped, and partition. |
Edge location inference log storage capacity is set with the LOCAL_INFERENCE_STORAGE and PLATEAU_PAGE_SIZE fields. These fields are required for this endpoint to store the log data and respond to requests.

- LOCAL_INFERENCE_STORAGE (Optional): Sets the amount of storage to allocate for the edge deployment's inference log storage capacity, in the format {size as number}{unit value}. If used, it must be used with PLATEAU_PAGE_SIZE. The values are similar to the Kubernetes memory resource units format. This is configurable based on the amount of storage needed during low/no connectivity periods before connectivity is restored for logs to be available in the Wallaroo Model Ops center. The accepted unit values are: Ki (kilobytes), Mi (megabytes), Gi (gigabytes), Ti (terabytes).
- PLATEAU_PAGE_SIZE (Optional): How many inference log rows to upload from the edge deployment at a time. Must be used with LOCAL_INFERENCE_STORAGE.

For example, a typical Docker deployment with these variables is:
docker run -v $PERSISTENT_VOLUME_DIR:/persist \
-p $EDGE_PORT:8080 \
-e OCI_USERNAME=$OCI_USERNAME \
-e OCI_PASSWORD=$OCI_PASSWORD \
    -e PIPELINE_URL=ghcr.io/wallaroolabs/doc-samples/pipelines/edge-low-connection-demonstration:a8c50aab-5227-4a36-bb66-f34086ff65f4 \
-e EDGE_BUNDLE=abc123 \
-e CONFIG_CPUS=1.0 --cpus=0.5 --memory=1g \
-e LOCAL_INFERENCE_STORAGE=100m \
-e PLATEAU_PAGE_SIZE=100 \
ghcr.io/wallaroolabs/doc-samples/engines/proxy/wallaroo/ghcr.io/wallaroolabs/fitzroy-mini:v2025.1.0-6250
The following shows retrieving logs from a model deployment on an edge location.
We will store the logs to a JSON file in pandas Record format, then display the edge logs as a DataFrame.
!curl -X POST http://localhost:8080/logs \
    -H "Content-Type: application/json" \
    -H "Accept: application/json; format=pandas-records" \
    --data '{}' > ./edge-logs.df.json
import pandas as pd

df_logs = pd.read_json("./edge-logs.df.json", orient="records")
df_logs
time | in | out | anomaly | metadata | |
---|---|---|---|---|---|
0 | 1713880452318 | {'tensor': [4.0, 2.5, 2900.0, 5505.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2900.0, 0.0, 47.6063, -122.02, 2970.0, 5251.0, 12.0, 0.0, 0.0]} | {'variable': [718013.7]} | {'count': 0} | {'last_model': '{"model_name":"rf-house-price-estimator","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}', 'pipeline_version': '76bec2c1-d93c-4941-b17b-3c6a6254d0b2', 'elapsed': [15654000, 17385666], 'dropped': [], 'partition': 'houseprice-low-connection-demonstration-01'} |
1 | 1713880461579 | {'tensor': [4.0, 2.75, 3010.0, 7215.0, 2.0, 0.0, 0.0, 3.0, 9.0, 3010.0, 0.0, 47.6952018738, -122.1780014038, 3010.0, 7215.0, 0.0, 0.0, 0.0]} | {'variable': [795841.06]} | {'count': 0} | {'last_model': '{"model_name":"rf-house-price-estimator","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}', 'pipeline_version': '76bec2c1-d93c-4941-b17b-3c6a6254d0b2', 'elapsed': [6297000, 12721000], 'dropped': [], 'partition': 'houseprice-low-connection-demonstration-01'} |
2 | 1713880461579 | {'tensor': [4.0, 1.75, 1400.0, 7920.0, 1.0, 0.0, 0.0, 3.0, 7.0, 1400.0, 0.0, 47.465801239, -122.1839981079, 1910.0, 7700.0, 52.0, 0.0, 0.0]} | {'variable': [267013.97]} | {'count': 0} | {'last_model': '{"model_name":"rf-house-price-estimator","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}', 'pipeline_version': '76bec2c1-d93c-4941-b17b-3c6a6254d0b2', 'elapsed': [6297000, 12721000], 'dropped': [], 'partition': 'houseprice-low-connection-demonstration-01'} |
3 | 1713880461579 | {'tensor': [4.0, 2.5, 3130.0, 13202.0, 2.0, 0.0, 0.0, 3.0, 10.0, 3130.0, 0.0, 47.5877990723, -121.9759979248, 2840.0, 10470.0, 19.0, 0.0, 0.0]} | {'variable': [879083.56]} | {'count': 0} | {'last_model': '{"model_name":"rf-house-price-estimator","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}', 'pipeline_version': '76bec2c1-d93c-4941-b17b-3c6a6254d0b2', 'elapsed': [6297000, 12721000], 'dropped': [], 'partition': 'houseprice-low-connection-demonstration-01'} |
4 | 1713880461579 | {'tensor': [3.0, 2.25, 1620.0, 997.0, 2.5, 0.0, 0.0, 3.0, 8.0, 1540.0, 80.0, 47.5400009155, -122.0260009766, 1620.0, 1068.0, 4.0, 0.0, 0.0]} | {'variable': [544392.06]} | {'count': 0} | {'last_model': '{"model_name":"rf-house-price-estimator","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}', 'pipeline_version': '76bec2c1-d93c-4941-b17b-3c6a6254d0b2', 'elapsed': [6297000, 12721000], 'dropped': [], 'partition': 'houseprice-low-connection-demonstration-01'} |
... | ... | ... | ... | ... | ... |
995 | 1713880461579 | {'tensor': [4.0, 2.5, 2040.0, 9225.0, 1.0, 0.0, 0.0, 5.0, 8.0, 1610.0, 430.0, 47.6360015869, -122.0970001221, 1730.0, 9225.0, 46.0, 0.0, 0.0]} | {'variable': [627853.3]} | {'count': 0} | {'last_model': '{"model_name":"rf-house-price-estimator","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}', 'pipeline_version': '76bec2c1-d93c-4941-b17b-3c6a6254d0b2', 'elapsed': [6297000, 12721000], 'dropped': [], 'partition': 'houseprice-low-connection-demonstration-01'} |
996 | 1713880461579 | {'tensor': [3.0, 3.0, 1330.0, 1379.0, 2.0, 0.0, 0.0, 4.0, 8.0, 1120.0, 210.0, 47.6125984192, -122.31300354, 1810.0, 1770.0, 9.0, 0.0, 0.0]} | {'variable': [450867.7]} | {'count': 0} | {'last_model': '{"model_name":"rf-house-price-estimator","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}', 'pipeline_version': '76bec2c1-d93c-4941-b17b-3c6a6254d0b2', 'elapsed': [6297000, 12721000], 'dropped': [], 'partition': 'houseprice-low-connection-demonstration-01'} |
997 | 1713880461579 | {'tensor': [3.0, 2.5, 1880.0, 4499.0, 2.0, 0.0, 0.0, 3.0, 8.0, 1880.0, 0.0, 47.5663986206, -121.9990005493, 2130.0, 5114.0, 22.0, 0.0, 0.0]} | {'variable': [553463.25]} | {'count': 0} | {'last_model': '{"model_name":"rf-house-price-estimator","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}', 'pipeline_version': '76bec2c1-d93c-4941-b17b-3c6a6254d0b2', 'elapsed': [6297000, 12721000], 'dropped': [], 'partition': 'houseprice-low-connection-demonstration-01'} |
998 | 1713880461579 | {'tensor': [4.0, 1.5, 1200.0, 10890.0, 1.0, 0.0, 0.0, 5.0, 7.0, 1200.0, 0.0, 47.342300415, -122.0879974365, 1250.0, 10139.0, 42.0, 0.0, 0.0]} | {'variable': [241330.17]} | {'count': 0} | {'last_model': '{"model_name":"rf-house-price-estimator","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}', 'pipeline_version': '76bec2c1-d93c-4941-b17b-3c6a6254d0b2', 'elapsed': [6297000, 12721000], 'dropped': [], 'partition': 'houseprice-low-connection-demonstration-01'} |
999 | 1713880461579 | {'tensor': [4.0, 3.25, 5180.0, 19850.0, 2.0, 0.0, 3.0, 3.0, 12.0, 3540.0, 1640.0, 47.5620002747, -122.1620025635, 3160.0, 9750.0, 9.0, 0.0, 0.0]} | {'variable': [1295531.8]} | {'count': 0} | {'last_model': '{"model_name":"rf-house-price-estimator","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}', 'pipeline_version': '76bec2c1-d93c-4941-b17b-3c6a6254d0b2', 'elapsed': [6297000, 12721000], 'dropped': [], 'partition': 'houseprice-low-connection-demonstration-01'} |
1000 rows × 5 columns
When an edge is added to a pipeline publish, the field docker_run_variables contains a JSON value for edge devices to connect to the Wallaroo Ops instance.

The settings are stored in the key EDGE_BUNDLE as a base64 encoded value that includes the following:

- BUNDLE_VERSION: The current version of the bundled Wallaroo pipeline.
- EDGE_NAME: The edge name as defined when created and added to the pipeline publish.
- JOIN_TOKEN: The one-time authentication token for authenticating to the Wallaroo Ops instance.
- OPSCENTER_HOST: The hostname of the Wallaroo Ops edge service. See the Edge Deployment Registry Guide for full details on enabling pipeline publishing and edge observability to Wallaroo.
- PIPELINE_URL: The OCI registry URL to the containerized pipeline.
- WORKSPACE_ID: The numerical ID of the workspace.

For example:
{'edgeBundle': 'ZXhwb3J0IEJVTkRMRV9WRVJTSU9OPTEKZXhwb3J0IEVER0VfTkFNRT14Z2ItY2NmcmF1ZC1lZGdlLXRlc3QKZXhwb3J0IEpPSU5fVE9LRU49MzE0OGFkYTUtMjg1YS00ZmNhLWIzYjgtYjUwYTQ4ZDc1MTFiCmV4cG9ydCBPUFNDRU5URVJfSE9TVD1kb2MtdGVzdC5lZGdlLndhbGxhcm9vY29tbXVuaXR5Lm5pbmphCmV4cG9ydCBQSVBFTElORV9VUkw9Z2hjci5pby93YWxsYXJvb2xhYnMvZG9jLXNhbXBsZXMvcGlwZWxpbmVzL2VkZ2UtcGlwZWxpbmU6ZjM4OGMxMDktOGQ1Ny00ZWQyLTk4MDYtYWExM2Y4NTQ1NzZiCmV4cG9ydCBXT1JLU1BBQ0VfSUQ9NQ=='}
base64 -D
ZXhwb3J0IEJVTkRMRV9WRVJTSU9OPTEKZXhwb3J0IEVER0VfTkFNRT14Z2ItY2NmcmF1ZC1lZGdlLXRlc3QKZXhwb3J0IEpPSU5fVE9LRU49MzE0OGFkYTUtMjg1YS00ZmNhLWIzYjgtYjUwYTQ4ZDc1MTFiCmV4cG9ydCBPUFNDRU5URVJfSE9TVD1kb2MtdGVzdC5lZGdlLndhbGxhcm9vY29tbXVuaXR5Lm5pbmphCmV4cG9ydCBQSVBFTElORV9VUkw9Z2hjci5pby93YWxsYXJvb2xhYnMvZG9jLXNhbXBsZXMvcGlwZWxpbmVzL2VkZ2UtcGlwZWxpbmU6ZjM4OGMxMDktOGQ1Ny00ZWQyLTk4MDYtYWExM2Y4NTQ1NzZiCmV4cG9ydCBXT1JLU1BBQ0VfSUQ9NQ==^D
export BUNDLE_VERSION=1
export EDGE_NAME=xgb-ccfraud-edge-test
export JOIN_TOKEN=3148ada5-285a-4fca-b3b8-b50a48d7511b
export OPSCENTER_HOST=doc-test.wallaroocommunity.ninja/edge
export PIPELINE_URL=ghcr.io/wallaroolabs/doc-samples/pipelines/edge-pipeline:f388c109-8d57-4ed2-9806-aa13f854576b
export WORKSPACE_ID=5
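Equivalently, the value can be decoded non-interactively. This is a minimal sketch assuming the bundle is stored in a shell variable named EDGE_BUNDLE; use base64 -d on Linux or base64 -D on macOS.

```bash
# Sketch: decode the EDGE_BUNDLE value stored in a shell variable.
echo "$EDGE_BUNDLE" | base64 -d
```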
The JOIN_TOKEN
is a one time access token. Once used, a JOIN_TOKEN
expires. The authentication session data is stored in persistent volumes. Persistent volumes must be specified for docker
and docker compose
based deployments of Wallaroo pipelines; helm
based deployments automatically provide persistent volumes to store authentication credentials.
The JOIN_TOKEN has the following time to live (TTL) parameters:

- The JOIN_TOKEN is valid for 24 hours. After it expires, the edge will not be allowed to contact the OpsCenter the first time, and a new edge bundle will have to be created with its own JOIN_TOKEN.

Wallaroo edges require unique names. When a new edge bundle is created with the same name, a new EDGE_BUNDLE is generated with a new JOIN_TOKEN.