This tutorial and the assets can be downloaded as part of the Wallaroo Tutorials repository.
Step 01: Detecting Objects Using mobilenet
The following tutorial demonstrates how to use a trained mobilenet model deployed in Wallaroo to detect objects. This process will use the following steps:
- Create a Wallaroo workspace and pipeline.
- Upload a trained mobilenet ML model and add it as a pipeline step.
- Deploy the pipeline.
- Perform an inference on a sample image.
- Draw the detected objects, their bounding boxes, their classifications, and the confidence of the classifications on the provided image.
- Review our results.
Steps
Import Libraries
The first step will be to import our libraries. Please check with Step 00: Introduction and Setup and verify that the necessary libraries and applications are added to your environment.
import torch
import pickle
import wallaroo
from wallaroo.object import EntityNotFoundError
import os
import numpy as np
import json
import requests
import time
import pandas as pd
from CVDemoUtils import CVDemo
# used to display dataframe information without truncating
from IPython.display import display
pd.set_option('display.max_colwidth', None)
Connect to Wallaroo
Now we connect to the Wallaroo instance. If you are connecting from a remote connection, set the wallarooPrefix and wallarooSuffix and use them to connect. If the connection is from within the Wallaroo instance cluster, then just wl = wallaroo.Client() can be used.
# Login through local service
# wl = wallaroo.Client()
# SSO login through keycloak
wallarooPrefix = "YOUR PREFIX"
wallarooSuffix = "YOUR SUFFIX"
wl = wallaroo.Client(api_endpoint=f"https://{wallarooPrefix}.api.{wallarooSuffix}",
                     auth_endpoint=f"https://{wallarooPrefix}.keycloak.{wallarooSuffix}",
                     auth_type="sso")
Arrow Support
As of the 2023.1 release, Wallaroo provides support for DataFrame and Arrow for inference inputs. This tutorial lets users adjust the examples based on whether Arrow support is enabled in their Wallaroo instance.
If Arrow support has been enabled, set arrowEnabled=True. If it is disabled or you are not sure, set arrowEnabled=False.
The examples below are shown in an Arrow-enabled environment.
import os
# Set the ARROW_ENABLED environment variable to True to enable Arrow support. Otherwise, leave it as is.
os.environ["ARROW_ENABLED"]="True"
if "ARROW_ENABLED" not in os.environ or os.environ["ARROW_ENABLED"].casefold() == "False".casefold():
    arrowEnabled = False
else:
    arrowEnabled = True
print(arrowEnabled)
True
Set Variables
The following variables and methods are used later to create or connect to an existing workspace, pipeline, and model.
workspace_name = 'mobilenetworkspacetest'
pipeline_name = 'mobilenetpipeline'
model_name = 'mobilenet'
model_file_name = 'models/mobilenet.pt.onnx'
def get_workspace(name):
    workspace = None
    for ws in wl.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = wl.create_workspace(name)
    return workspace

def get_pipeline(name):
    try:
        pipeline = wl.pipelines_by_name(name)[0]
    except EntityNotFoundError:
        pipeline = wl.build_pipeline(name)
    return pipeline
Create Workspace
The workspace will be created or connected to, then set as the default workspace for this session. Once set, all models and pipelines will be created within that workspace.
workspace = get_workspace(workspace_name)
wl.set_current_workspace(workspace)
wl.get_current_workspace()
{'name': 'mobilenetworkspacetest', 'id': 9, 'archived': False, 'created_by': 'ca7d7043-8e94-42d5-9f3a-8f55c2e42814', 'created_at': '2023-03-02T19:21:36.309503+00:00', 'models': [{'name': 'mobilenet', 'versions': 1, 'owner_id': '""', 'last_update_time': datetime.datetime(2023, 3, 2, 19, 21, 50, 515472, tzinfo=tzutc()), 'created_at': datetime.datetime(2023, 3, 2, 19, 21, 50, 515472, tzinfo=tzutc())}], 'pipelines': [{'name': 'mobilenetpipeline', 'create_time': datetime.datetime(2023, 3, 2, 19, 21, 51, 863552, tzinfo=tzutc()), 'definition': '[]'}]}
Create Pipeline and Upload Model
We will now create or connect to an existing pipeline as named in the variables above.
pipeline = get_pipeline(pipeline_name)
mobilenet_model = wl.upload_model(model_name, model_file_name)
Deploy Pipeline
With the model uploaded, we can add it as a step in the pipeline, then deploy it. Once deployed, resources from the Wallaroo instance will be reserved and the pipeline will be ready to use the model to perform inference requests.
pipeline.add_model_step(mobilenet_model)
pipeline.deploy()
name | mobilenetpipeline |
---|---|
created | 2023-03-02 19:21:51.863552+00:00 |
last_updated | 2023-03-02 19:32:02.206502+00:00 |
deployed | True |
tags | |
versions | 6682b775-3c04-4071-b643-8d52b3c06e56, baf226fc-bc5e-4c52-9962-d7b87c987ad3, 48280b55-1285-41e3-a1c6-3fbe05ee4d3a |
steps | mobilenet |
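The deployment above uses the Wallaroo instance's default resource allocation. If specific resources need to be reserved for the pipeline, a deployment configuration can be passed to deploy(). The following is a minimal sketch, assuming the DeploymentConfigBuilder class in the Wallaroo SDK; adjust the values to your environment.
deployment_config = (wallaroo.DeploymentConfigBuilder()
                     .replica_count(1)   # a single engine replica for this demo
                     .cpus(1)            # CPUs reserved per replica
                     .memory("2Gi")      # memory reserved per replica
                     .build())
# pipeline.deploy(deployment_config=deployment_config)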
Prepare input image
Next we will load a sample image and resize it to the width and height required for the object detector. Once complete, the image will be converted to a numpy ndim array and added to a dictionary.
# The size the image will be resized to
width = 640
height = 480
# Only objects that have a confidence > confidence_target will be displayed on the image
cvDemo = CVDemo()
imagePath = 'data/images/current/input/example/dairy_bottles.png'
# The image width and height need to be set to what the model was trained for. In this case 640x480.
tensor, resizedImage = cvDemo.loadImageAndResize(imagePath, width, height)
# get npArray from the tensorFloat
npArray = tensor.cpu().numpy()
# creates a dictionary with the wallaroo "tensor" key and the numpy ndim array representing the image as the value.
dictData = {"tensor": npArray.tolist()}
# test turning this into a dataframe
dataframedata = pd.DataFrame.from_records(dictData)
#display(dataframedata)
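Before running the inference, it can help to sanity check the array that will be sent. A quick sketch; the exact shape depends on how CVDemo prepares the tensor, but it should match the 640x480 input the model was trained for.
# confirm the array that will be wrapped in the "tensor" key has the expected dimensions
print(npArray.shape)   # e.g. (1, 3, 480, 640) for a single 3-channel image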
Run Inference
With that done, we can have the model detect the objects on the image by running an inference through the pipeline, and storing the results for the next step.
startTime = time.time()
infResults = pipeline.infer(dictData)
endTime = time.time()
if arrowEnabled is True:
    results = infResults[0]
else:
    results = infResults[0].raw
Draw the Inference Results
With our inference results, we can use the Wallaroo CVDemo class to draw them onto the original image. The bounding boxes and confidence values are only drawn for objects where the model returned a confidence of at least 90%.
df = pd.DataFrame(columns=['classification','confidence','x','y','width','height'])
pd.options.mode.chained_assignment = None # default='warn'
pd.options.display.float_format = '{:.2%}'.format
# Points to where all the inference results are
outputs = results['outputs']
boxes = outputs[0]
# reshape this to an array of bounding box coordinates converted to ints
boxList = boxes['Float']['data']
boxA = np.array(boxList)
boxes = boxA.reshape(-1, 4)
boxes = boxes.astype(int)
df[['x', 'y','width','height']] = pd.DataFrame(boxes)
classes = outputs[1]['Int64']['data']
confidences = outputs[2]['Float']['data']
infResults = {
    'model_name' : model_name,
    'pipeline_name' : pipeline_name,
    'width': width,
    'height': height,
    'image' : resizedImage,
    'boxes' : boxes,
    'classes' : classes,
    'confidences' : confidences,
    'confidence-target' : 0.90,
    'inference-time': (endTime-startTime),
    'onnx-time' : int(results['elapsed']) / 1e+9,
    'color':(255,0,0)
}
image = cvDemo.drawAndDisplayDetectedObjectsWithClassification(infResults)

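The 90% cutoff mentioned above comes from the confidence-target value passed to the CVDemo helper. To apply the same cutoff to the raw detections directly, a minimal sketch using the arrays already built in this step:
# keep only detections whose confidence meets the same 0.90 target used by the helper
confidence_target = 0.90
keep = np.array(confidences) >= confidence_target
filtered_boxes = boxes[keep]                 # bounding boxes that pass the cutoff
filtered_classes = np.array(classes)[keep]   # matching COCO class ids
print(f"{keep.sum()} of {len(confidences)} detections meet the {confidence_target:.0%} target")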
Extract the Inference Information
To show what is going on in the background, we'll extract the inference results and create a dataframe with columns representing the classification, confidence, and bounding boxes of the objects identified.
for idx in range(0, len(classes)):
    df['classification'][idx] = cvDemo.CLASSES[classes[idx]] # CLASSES contains the 80 different COCO classifications
    df['confidence'][idx] = confidences[idx]
df
 | classification | confidence | x | y | width | height |
---|---|---|---|---|---|---|
0 | bottle | 98.65% | 0 | 210 | 85 | 479 |
1 | bottle | 90.12% | 72 | 197 | 151 | 468 |
2 | bottle | 60.78% | 211 | 184 | 277 | 420 |
3 | bottle | 59.22% | 143 | 203 | 216 | 448 |
4 | refrigerator | 53.73% | 13 | 41 | 640 | 480 |
5 | bottle | 45.13% | 106 | 206 | 159 | 463 |
6 | bottle | 43.73% | 278 | 1 | 321 | 93 |
7 | bottle | 43.09% | 462 | 104 | 510 | 224 |
8 | bottle | 40.85% | 310 | 1 | 352 | 94 |
9 | bottle | 39.19% | 528 | 268 | 636 | 475 |
10 | bottle | 35.76% | 220 | 0 | 258 | 90 |
11 | bottle | 31.81% | 552 | 96 | 600 | 233 |
12 | bottle | 26.45% | 349 | 0 | 404 | 98 |
13 | bottle | 23.06% | 450 | 264 | 619 | 472 |
14 | bottle | 20.48% | 261 | 193 | 307 | 408 |
15 | bottle | 17.46% | 509 | 101 | 544 | 235 |
16 | bottle | 17.31% | 592 | 100 | 633 | 239 |
17 | bottle | 16.00% | 475 | 297 | 551 | 468 |
18 | bottle | 14.91% | 368 | 163 | 423 | 362 |
19 | book | 13.66% | 120 | 0 | 175 | 81 |
20 | book | 13.32% | 72 | 0 | 143 | 85 |
21 | bottle | 12.22% | 271 | 200 | 305 | 274 |
22 | book | 12.13% | 161 | 0 | 213 | 85 |
23 | bottle | 11.96% | 162 | 0 | 214 | 83 |
24 | bottle | 11.53% | 310 | 190 | 367 | 397 |
25 | bottle | 9.62% | 396 | 166 | 441 | 360 |
26 | cake | 8.65% | 439 | 256 | 640 | 473 |
27 | bottle | 7.84% | 544 | 375 | 636 | 472 |
28 | vase | 7.23% | 272 | 2 | 306 | 96 |
29 | bottle | 6.28% | 453 | 303 | 524 | 463 |
30 | bottle | 5.28% | 609 | 94 | 635 | 211 |
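The dataframe can also be summarized to show how many objects of each class were detected and the best confidence seen for each. A short sketch using the df built above:
# group the detections by class, counting them and reporting the highest confidence seen
summary = df.assign(confidence=df['confidence'].astype(float)) \
            .groupby('classification')['confidence'] \
            .agg(['count', 'max'])
display(summary)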
Undeploy the Pipeline
With the inference complete, we can undeploy the pipeline and return the resources to the Wallaroo instance.
pipeline.undeploy()
name | mobilenetpipeline |
---|---|
created | 2023-03-02 19:21:51.863552+00:00 |
last_updated | 2023-03-02 19:32:02.206502+00:00 |
deployed | False |
tags | |
versions | 6682b775-3c04-4071-b643-8d52b3c06e56, baf226fc-bc5e-4c52-9962-d7b87c987ad3, 48280b55-1285-41e3-a1c6-3fbe05ee4d3a |
steps | mobilenet |
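To confirm that the resources have been released, the pipeline status can be checked. A quick sketch, assuming the SDK's pipeline.status() method:
# reports the current deployment state; should show the pipeline as no longer running
pipeline.status()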