The following tutorial is available on the Wallaroo Tutorials repository.
The following tutorial demonstrates how to use Wallaroo to detect mitochondria from high resolution images, publish the Wallaroo pipeline to an Open Container Initiative (OCI) Registry, and deploy it on an edge system. For this example we will be using a high resolution 1536x2048 image that is broken down into 256x256 "patches" that can be quickly analyzed.
Mitochondria are known as the "powerhouse" of the cell. A healthy amount of mitochondria indicates that a patient has enough energy to live a healthy life, while an abnormal amount may point to underlying issues that a doctor can check for.
Scanning high resolution images of patient cells can be used to count how many mitochondria a patient has, but the process is laborious. The following ML model is trained to examine an image of cells and detect which structures are mitochondria. This is used to speed up the process of testing patients and determining next steps.
This tutorial will perform the following:

* Upload and deploy the mitochondria_epochs_15.onnx model to a Wallaroo pipeline.
* Publish the pipeline to an Open Container Initiative (OCI) Registry and deploy it on an edge system.

Before starting, complete the steps from Mitochondria Detection Computer Vision Tutorial Part 00: Prerequisites.
The first step is to import the necessary libraries. Included with this tutorial are the following custom modules:

* tiff_utils: Organizes the tiff images to perform random image selections and other tasks.

Note that tensorflow may return warnings depending on the environment.
import json
import time
import requests

import IPython.display as display
from IPython.display import clear_output, display
import matplotlib.pyplot as plt
import tifffile as tiff
import pandas as pd
import numpy as np
import cv2

import wallaroo
from wallaroo.object import EntityNotFoundError
from wallaroo.framework import Framework

# normalize and tiff_utils are used in the preprocessing and plotting steps below
from tensorflow.keras.utils import normalize
from lib.TiffImageUtils import TiffUtils
tiff_utils = TiffUtils()

# ignoring warnings for demonstration
import warnings
warnings.filterwarnings('ignore')
The next step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.
This is accomplished using the wallaroo.Client()
command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.
If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client()
. For more details on logging in through Wallaroo, see the Wallaroo SDK Essentials Guide: Client Connection.
wl = wallaroo.Client()
We will create a workspace to manage our pipeline and models. The following variables will set the name of our sample workspace then set it as the current workspace.
Workspace, pipeline, and model names should be unique to each Wallaroo instance. If several people run this tutorial in the same instance, append a suffix to the names so the runs don't affect each other; one way to do that is sketched after the assignments below.
workspace_name = 'edgebiolabsworkspace'
pipeline_name = 'edgebiolabspipeline'
model_name = 'edgebiolabsmodel'
model_file_name = 'models/mitochondria_epochs_15.onnx'
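The outputs shown in this tutorial were generated without a suffix; a minimal sketch of the suffix approach (the four-character length is an arbitrary choice):

```python
import string
import random

# optional: append a short random suffix so concurrent tutorial runs stay isolated
suffix = ''.join(random.choices(string.ascii_lowercase, k=4))
workspace_name = f'edgebiolabsworkspace{suffix}'
pipeline_name = f'edgebiolabspipeline{suffix}'
model_name = f'edgebiolabsmodel{suffix}'
```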
workspace = wl.get_workspace(name=workspace_name, create_if_not_exist=True)
wl.set_current_workspace(workspace)
pipeline = wl.build_pipeline(pipeline_name)
pipeline
name | edgebiolabspipeline |
---|---|
created | 2025-05-16 15:57:59.007227+00:00 |
last_updated | 2025-07-16 16:46:41.976146+00:00 |
deployed | True |
workspace_id | 1669 |
workspace_name | edgebiolabsworkspace |
arch | x86 |
accel | none |
tags | |
versions | d4c95081-e477-46e3-a9c6-38e5c7bcd1b8, 452c6098-007b-47dd-b3ac-2a4d94c827cc, 3ce85c5e-c37b-469d-972f-b8dbc069b7bd, 9049239f-ff5d-4e5d-a8a8-1d276b27e4c4, 38427c27-c8d6-4df3-8a54-6a3d92bf9875, a770d6dd-f06d-4d5f-83f5-139bdd3d6fa7, 708b33f4-8bb1-4527-a74b-81210d86fa4e, 609999d2-c1bf-4e3f-ab8d-33508d9364d9, b660902e-fe59-4be3-b6a6-5354140c847e, f5d6d952-faf4-46ea-ab40-4032a20d7bb7, 4815e577-a12b-4be0-9cb2-921af0865639 |
steps | edgebiolabsmodel |
published | True |
Now we will upload our model to the workspace, specifying the ONNX framework.
model = wl.upload_model(model_name, model_file_name, framework=Framework.ONNX)
Before deploying an inference engine we need to tell Wallaroo what resources it will need. To do this we will use the Wallaroo DeploymentConfigBuilder() and fill in the options listed below to determine the properties of our inference engine.
We will be testing this deployment for an edge scenario, so the resource specifications are kept small: the minimum needed to meet the expected load on the planned hardware.
deployment_config = wallaroo.DeploymentConfigBuilder().replica_count(1).cpus(4).memory("8Gi").build()
pipeline = wl.build_pipeline(pipeline_name) \
.clear() \
.add_model_step(model) \
.deploy(deployment_config = deployment_config, wait_for_status=False)
Deployment initiated for edgebiolabspipeline. Please check pipeline status.
# check the pipeline status before performing an inference
import time
while pipeline.status()['status'] != 'Running':
time.sleep(15)
pipeline.status()
{'status': 'Running',
'details': [],
'engines': [{'ip': '10.4.0.4',
'name': 'engine-6969677584-whx6s',
'status': 'Running',
'reason': None,
'details': [],
'pipeline_statuses': {'pipelines': [{'id': 'edgebiolabspipeline',
'status': 'Running',
'version': 'ae6e1b89-b8f7-4b1a-ac81-4f1068dcde5d'}]},
'model_statuses': {'models': [{'model_version_id': 849,
'name': 'edgebiolabsmodel',
'sha': 'e80fcdaf563a183b0c32c027dcb3890a64e1764d6d7dcd29524cd270dd42e7bd',
'status': 'Running',
'version': '2853062f-92f8-484c-a3fc-81440ebd2241'}]}}],
'engine_lbs': [{'ip': '10.4.1.14',
'name': 'engine-lb-648945b8b4-gm5kq',
'status': 'Running',
'reason': None,
'details': []}],
'sidekicks': []}
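If a deployment stalls, the polling loop above will spin forever. A variant with a timeout guard (a sketch; the 10 minute bound is an assumption, not a Wallaroo default):

```python
# poll the pipeline status with an upper bound on the total wait
deadline = time.time() + 600  # assumed 10 minute limit
while pipeline.status()['status'] != 'Running':
    if time.time() > deadline:
        raise TimeoutError("pipeline did not reach 'Running' before the deadline")
    time.sleep(15)
```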
The next step is to convert the image into a numpy array in the format the model was trained on.
We start by retrieving all the patch images from a time series tiff recorded on one of our microscopes. For this tutorial we will be using the path ./patches/condensed, which contains a reduced set of images to save on local memory.
sample_mitochondria_patches_path = "./patches/condensed"
patches = tiff_utils.get_all_patches(sample_mitochondria_patches_path)
We will randomly retrieve a 256x256 patch image and use it for our semantic segmentation prediction, converting it into a numpy array and inserting it into a DataFrame for a single inference.
The helper function loadImageAndConvertTiff performs the conversion and the DataFrame insertion, so the same process can be applied to any randomly selected image.
def loadImageAndConvertTiff(imagePath, width, height):
    # read the patch as a single-channel grayscale image
    img = cv2.imread(imagePath, 0)
    # normalize the pixel values and shape the array as (1, height, width, 1)
    imgNorm = np.expand_dims(normalize(np.array(img), axis=1), 2)
    imgNorm = imgNorm[:,:,0][:,:,None]
    imgNorm = np.expand_dims(imgNorm, 0)
    resizedImage = None
    # create a dictionary with the wallaroo "tensor" key and the numpy ndim array representing the image as the value
    dictData = {"tensor": [imgNorm]}
    dataframedata = pd.DataFrame(dictData)
    return dataframedata, resizedImage
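As a quick sanity check (not part of the original tutorial), we can confirm the helper's output shape using a random patch from the set loaded above:

```python
# confirm the helper produces a single "tensor" cell shaped (1, 256, 256, 1)
sample = tiff_utils.get_random_patch_sample(patches)
sample_path = sample_mitochondria_patches_path + "/images/" + sample['patch_image_file']
df, _ = loadImageAndConvertTiff(sample_path, 256, 256)
print(df["tensor"][0].shape)  # expected: (1, 256, 256, 1)
```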
def run_semantic_segmentation_inference(pipeline, input_tiff_image, width, height, threshold):
    tensor, resizedImage = loadImageAndConvertTiff(input_tiff_image, width, height)

    # run inference on the 256x256 patch image to get the predicted mitochondria mask
    output = pipeline.infer(tensor)

    # obtain the flattened predicted mitochondria mask result
    list1d = output.loc[0]["out.conv2d_37"]
    np1d = np.array(list1d)

    # unflatten it
    predicted_mask = np1d.reshape(1, width, height, 1)

    # perform the element-wise comparison operation using the threshold provided
    predicted_mask = (predicted_mask[0,:,:,0] > threshold).astype(np.uint8)

    return predicted_mask
We will now perform our inferences and display the results: a predicted mask showing where the mitochondria are located within each patch.
We’ll perform this 10 times to show how quickly the inferences can be submitted.
random_patches = []
for x in range(10):
    random_patches.append(tiff_utils.get_random_patch_sample(patches))

for random_patch in random_patches:
    # build the path to the image
    patch_image_path = sample_mitochondria_patches_path + "/images/" + random_patch['patch_image_file']

    # run inference in order to get the predicted 256x256 mask
    predicted_mask = run_semantic_segmentation_inference(pipeline, patch_image_path, 256, 256, 0.2)

    # plot the results
    test_image = random_patch['patch_image'][:,:,0]
    test_image_title = f"Testing Image - {random_patch['index']}"

    ground_truth_image = random_patch['patch_mask'][:,:,0]
    ground_truth_image_title = "Ground Truth Mask"

    predicted_mask_title = 'Predicted Mask'

    tiff_utils.plot_test_results(test_image, test_image_title,
                                 ground_truth_image, ground_truth_image_title,
                                 predicted_mask, predicted_mask_title)
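To attach a number to that speed (a sketch added here, not in the original tutorial), a single round trip can be timed with time.perf_counter:

```python
# time one inference round trip through the deployed pipeline
sample = tiff_utils.get_random_patch_sample(patches)
sample_path = sample_mitochondria_patches_path + "/images/" + sample['patch_image_file']

start = time.perf_counter()
run_semantic_segmentation_inference(pipeline, sample_path, 256, 256, 0.2)
print(f"Inference round trip: {time.perf_counter() - start:.3f}s")
```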
With the experiment complete, we will undeploy the pipeline.
pipeline.undeploy()
name | edgebiolabspipeline |
---|---|
created | 2025-05-16 15:57:59.007227+00:00 |
last_updated | 2025-07-16 16:46:50.584341+00:00 |
deployed | False |
workspace_id | 1669 |
workspace_name | edgebiolabsworkspace |
arch | x86 |
accel | none |
tags | |
versions | ae6e1b89-b8f7-4b1a-ac81-4f1068dcde5d, 4065f6b4-178d-48fb-9bcc-8bdf29470cd5, d4c95081-e477-46e3-a9c6-38e5c7bcd1b8, 452c6098-007b-47dd-b3ac-2a4d94c827cc, 3ce85c5e-c37b-469d-972f-b8dbc069b7bd, 9049239f-ff5d-4e5d-a8a8-1d276b27e4c4, 38427c27-c8d6-4df3-8a54-6a3d92bf9875, a770d6dd-f06d-4d5f-83f5-139bdd3d6fa7, 708b33f4-8bb1-4527-a74b-81210d86fa4e, 609999d2-c1bf-4e3f-ab8d-33508d9364d9, b660902e-fe59-4be3-b6a6-5354140c847e, f5d6d952-faf4-46ea-ab40-4032a20d7bb7, 4815e577-a12b-4be0-9cb2-921af0865639 |
steps | edgebiolabsmodel |
published | True |
It worked! For a demo, we’ll take working once as “tested”. So now that we’ve tested our pipeline, we are ready to publish it for edge deployment.
Publishing it means assembling all of the configuration files and model assets and pushing them to an Open Container Initiative (OCI) repository set in the Wallaroo instance as the Edge Registry service. DevOps engineers then retrieve that image and deploy it through Docker, Kubernetes, or similar deployments.
See Edge Deployment Registry Guide for details on adding an OCI Registry Service to Wallaroo as the Edge Deployment Registry.
This is done through the SDK command wallaroo.pipeline.publish(deployment_config).

We will now publish the pipeline to our Edge Deployment Registry with the pipeline.publish(deployment_config) command. deployment_config is an optional field that specifies the pipeline deployment; it can be overridden by the DevOps engineer during deployment.
pub=pipeline.publish(deployment_config)
pub
Waiting for pipeline publish... It may take up to 600 sec.
Pipeline is publishing... Published.
ID | 109 | |
Pipeline Name | edgebiolabspipeline | |
Pipeline Version | 38245d64-3d27-4ff7-9261-d85570e5f58d | |
Status | Published | |
Workspace Id | 1669 | |
Workspace Name | edgebiolabsworkspace | |
Edges | ||
Engine URL | sample.registry.example.com/uat/engines/proxy/wallaroo/ghcr.io/wallaroolabs/fitzroy-mini:v2025.1.0-6245 | |
Pipeline URL | sample.registry.example.com/uat/pipelines/edgebiolabspipeline:38245d64-3d27-4ff7-9261-d85570e5f58d | |
Helm Chart URL | oci://sample.registry.example.com/uat/charts/edgebiolabspipeline | |
Helm Chart Reference | sample.registry.example.com/uat/charts@sha256:2de708ba41b7fda99a9b3a54868770f7e82678a19ee86336f12dd0652b6cf27e | |
Helm Chart Version | 0.0.1-38245d64-3d27-4ff7-9261-d85570e5f58d | |
Engine Config | {'engine': {'resources': {'limits': {'cpu': 4.0, 'memory': '8Gi'}, 'requests': {'cpu': 4.0, 'memory': '8Gi'}, 'accel': 'none', 'arch': 'x86', 'gpu': False}}, 'engineAux': {'autoscale': {'type': 'none', 'cpu_utilization': 50.0}, 'images': {}}} | |
User Images | [] | |
Created By | john.hummel@wallaroo.ai | |
Created At | 2025-07-16 16:47:49.974418+00:00 | |
Updated At | 2025-07-16 16:47:49.974418+00:00 | |
Replaces | ||
Docker Run Command | Note: Please set the EDGE_PORT, OCI_USERNAME, and OCI_PASSWORD environment variables. |
Podman Run Command | Note: Please set the EDGE_PORT, OCI_USERNAME, and OCI_PASSWORD environment variables. |
Helm Install Command | Note: Please set the HELM_INSTALL_NAME, HELM_INSTALL_NAMESPACE, OCI_USERNAME, and OCI_PASSWORD environment variables. |
The method wallaroo.client.list_pipelines() shows a list of all pipelines in the Wallaroo instance, and includes the published field that indicates whether the pipeline was published to the registry (True) or has not yet been published (False).
wl.list_pipelines(workspace_name=workspace_name)
name | created | last_updated | deployed | workspace_id | workspace_name | arch | accel | tags | versions | steps | published |
---|---|---|---|---|---|---|---|---|---|---|---|
edgebiolabspipeline | 2025-16-May 15:57:59 | 2025-16-Jul 16:47:48 | False | 1669 | edgebiolabsworkspace | x86 | none | 38245d64-3d27-4ff7-9261-d85570e5f58d, ae6e1b89-b8f7-4b1a-ac81-4f1068dcde5d, 4065f6b4-178d-48fb-9bcc-8bdf29470cd5, d4c95081-e477-46e3-a9c6-38e5c7bcd1b8, 452c6098-007b-47dd-b3ac-2a4d94c827cc, 3ce85c5e-c37b-469d-972f-b8dbc069b7bd, 9049239f-ff5d-4e5d-a8a8-1d276b27e4c4, 38427c27-c8d6-4df3-8a54-6a3d92bf9875, a770d6dd-f06d-4d5f-83f5-139bdd3d6fa7, 708b33f4-8bb1-4527-a74b-81210d86fa4e, 609999d2-c1bf-4e3f-ab8d-33508d9364d9, b660902e-fe59-4be3-b6a6-5354140c847e, f5d6d952-faf4-46ea-ab40-4032a20d7bb7, 4815e577-a12b-4be0-9cb2-921af0865639 | edgebiolabsmodel | True |
All publishes created from a pipeline are displayed with the wallaroo.pipeline.publishes method. The pipeline_version_id identifies which version of the pipeline was used in that specific publish. This allows pipelines to be updated over time, with newer versions sent and tracked to the Edge Deployment Registry service.

The method takes no parameters and returns a list of the following fields:
Field | Type | Description |
---|---|---|
id | integer | Numerical Wallaroo id of the published pipeline. |
pipeline_version_id | integer | Numerical Wallaroo id of the pipeline version published. |
engine_url | string | The URL of the published pipeline engine in the edge registry. |
pipeline_url | string | The URL of the published pipeline in the edge registry. |
created_by | string | The email address of the user that published the pipeline. |
created_at | DateTime | When the published pipeline was created. |
updated_at | DateTime | When the published pipeline was updated. |
pipeline.publishes()
id | Pipeline Name | Pipeline Version | Workspace Id | Workspace Name | Edges | Engine URL | Pipeline URL | Created By | Created At | Updated At |
---|---|---|---|---|---|---|---|---|---|---|
51 | edgebiolabspipeline | a770d6dd-f06d-4d5f-83f5-139bdd3d6fa7 | 1669 | edgebiolabsworkspace | sample.registry.example.com/uat/engines/proxy/wallaroo/ghcr.io/wallaroolabs/fitzroy-mini:v2025.1.0-main-6139 | sample.registry.example.com/uat/pipelines/edgebiolabspipeline:a770d6dd-f06d-4d5f-83f5-139bdd3d6fa7 | john.hummel@wallaroo.ai | 2025-16-May 16:02:31 | 2025-16-May 16:02:31 | |
109 | edgebiolabspipeline | 38245d64-3d27-4ff7-9261-d85570e5f58d | 1669 | edgebiolabsworkspace | sample.registry.example.com/uat/engines/proxy/wallaroo/ghcr.io/wallaroolabs/fitzroy-mini:v2025.1.0-6245 | sample.registry.example.com/uat/pipelines/edgebiolabspipeline:38245d64-3d27-4ff7-9261-d85570e5f58d | john.hummel@wallaroo.ai | 2025-16-Jul 16:47:49 | 2025-16-Jul 16:47:49 |
Once a pipeline is published to the Edge Registry service, it can be deployed in environments such as Docker, Kubernetes, or similar container running services by a DevOps engineer. For our example, we will use the docker run command output during the pipeline publish.
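The exact command is the Docker Run Command entry in the publish output above. As a hedged sketch of its general shape, using the engine and pipeline URLs from this publish; the PIPELINE_URL and CONFIG_CPUS settings are assumptions, so rely on your own publish output:

```bash
# illustrative only: substitute the values from your own publish output
docker run -p $EDGE_PORT:8080 \
    -e OCI_USERNAME=$OCI_USERNAME \
    -e OCI_PASSWORD=$OCI_PASSWORD \
    -e CONFIG_CPUS=4 \
    -e PIPELINE_URL=sample.registry.example.com/uat/pipelines/edgebiolabspipeline:38245d64-3d27-4ff7-9261-d85570e5f58d \
    sample.registry.example.com/uat/engines/proxy/wallaroo/ghcr.io/wallaroolabs/fitzroy-mini:v2025.1.0-6245
```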
Once deployed, we can check the pipelines and models available. We'll use a curl command, but any HTTP-based request will work the same way.
The endpoint /pipelines returns:

* id (String): The name of the pipeline.
* status (String): The pipeline status, either Running, or Error if there are any issues.

For this example, the deployment is made on a machine called testboy.local. Replace this URL with the URL of your edge deployment.
!curl testboy.local:8080/pipelines
{"pipelines":[{"id":"edgebiolabspipeline","status":"Running"}]}
The endpoint /models returns a list of models with the following fields (as shown in the output below):

* name (String): The name of the model.
* sha (String): The sha hash of the model.
* status (String): The status of the model.
* version (String): The version of the model.
!curl testboy.local:8080/models
{"models":[{"name":"edgebiolabsmodel","sha":"e80fcdaf563a183b0c32c027dcb3890a64e1764d6d7dcd29524cd270dd42e7bd","status":"Running","version":"37b76f7a-cef3-4dfb-8bed-c0779c0e668c"}]}
The inference endpoint takes the following pattern:

* /infer

Wallaroo inference endpoint URLs accept the following data inputs through the Content-Type header:

* Content-Type: application/vnd.apache.arrow.file: For Apache Arrow tables.
* Content-Type: application/json; format=pandas-records: For pandas DataFrames in record format.
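This tutorial uses the pandas-records format. For reference, a minimal sketch of the Arrow variant, assuming pyarrow is installed and df is a tensor DataFrame such as the one built by loadImageAndConvertTiffList below:

```python
import pyarrow as pa
import requests

# "df" is assumed to be a tensor DataFrame like the one returned by
# loadImageAndConvertTiffList; serialize it as an Arrow IPC file and
# post it with the Arrow content type
table = pa.Table.from_pandas(df)
sink = pa.BufferOutputStream()
with pa.ipc.new_file(sink, table.schema) as writer:
    writer.write_table(table)

response = requests.post(
    "http://testboy.local:8080/infer",
    headers={"Content-Type": "application/vnd.apache.arrow.file"},
    data=sink.getvalue().to_pybytes(),
)
```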
Once deployed, we can perform an inference through the deployment URL. The endpoint returns Content-Type: application/json; format=pandas-records by default, with fields that include:

* original_data: The original input data; returns null if the input may be too long for a proper return.
* outputs: The inference outputs, which we use below to retrieve the predicted mask.

We'll repeat our process above, only this time through the Python requests library to our locally deployed pipeline.
def loadImageAndConvertTiffList(imagePath, width, height):
    # read the patch as a single-channel grayscale image
    img = cv2.imread(imagePath, 0)
    # normalize the pixel values and shape the array as (1, height, width, 1)
    imgNorm = np.expand_dims(normalize(np.array(img), axis=1), 2)
    imgNorm = imgNorm[:,:,0][:,:,None]
    imgNorm = np.expand_dims(imgNorm, 0)
    resizedImage = None
    # create a dictionary with the wallaroo "tensor" key and the array as a JSON-serializable nested list
    dictData = {"tensor": imgNorm.tolist()}
    dataframedata = pd.DataFrame(dictData)
    return dataframedata, resizedImage
def run_semantic_segmentation_inference_requests(pipeline_url, input_tiff_image, width, height, threshold):
    tensor, resizedImage = loadImageAndConvertTiffList(input_tiff_image, width, height)

    # set the content type header for a pandas DataFrame in record format
    headers = {
        'Content-Type': 'application/json; format=pandas-records'
    }

    data = tensor.to_json(orient="records")

    # run inference on the 256x256 patch image to get the predicted mitochondria mask
    response = requests.post(
        pipeline_url,
        headers=headers,
        data=data,
        verify=True
    )

    # obtain the flattened predicted mitochondria mask result from the response
    output = pd.DataFrame(response.json())
    list1d = output.loc[0]["outputs"][0]['Float']['data']
    np1d = np.array(list1d)

    # unflatten it
    predicted_mask = np1d.reshape(1, width, height, 1)

    # perform the element-wise comparison operation using the threshold provided
    predicted_mask = (predicted_mask[0,:,:,0] > threshold).astype(np.uint8)

    return predicted_mask
# set this to your deployed pipeline's URL
host = 'http://testboy.local:8080'
deployurl = f'{host}/infer'
for random_patch in random_patches:
    # build the path to the image
    patch_image_path = sample_mitochondria_patches_path + "/images/" + random_patch['patch_image_file']

    # run inference in order to get the predicted 256x256 mask
    predicted_mask = run_semantic_segmentation_inference_requests(deployurl, patch_image_path, 256, 256, 0.2)

    # plot the results
    test_image = random_patch['patch_image'][:,:,0]
    test_image_title = f"Testing Image - {random_patch['index']}"

    ground_truth_image = random_patch['patch_mask'][:,:,0]
    ground_truth_image_title = "Ground Truth Mask"

    predicted_mask_title = 'Predicted Mask'

    tiff_utils.plot_test_results(test_image, test_image_title,
                                 ground_truth_image, ground_truth_image_title,
                                 predicted_mask, predicted_mask_title)