This tutorial and the assets can be downloaded as part of the Wallaroo Tutorials repository.
Step 03: Detecting Objects Using Shadow Deploy
The following tutorial demonstrates how to use two trained models, one based on resnet50 and the other on mobilenet, deployed in Wallaroo to detect objects. This builds on the previous tutorials in this series, “Step 01: Detecting Objects Using mobilenet” and “Step 02: Detecting Objects Using resnet50”.
For this tutorial, the Wallaroo feature Shadow Deploy will be used to submit inference requests to both models at once. The mobilenet object detector is the control, and the Faster R-CNN resnet50 object detector is the challenger. The results from the two models will be compared for confidence, and that confidence will be used to draw bounding boxes around the identified objects.
This process will use the following steps:
- Create a Wallaroo workspace and pipeline.
- Upload a trained resnet50 model and a trained mobilenet model, and add them as a shadow deploy step with mobilenet as the control model.
- Deploy the pipeline.
- Perform an inference on a sample image.
- Based on the inference results, draw the detected objects, their bounding boxes, their classifications, and the classification confidences on the provided image.
- Review our results.
Steps
Import Libraries
The first step will be to import our libraries. Please refer to Step 00: Introduction and Setup and verify that the necessary libraries and applications are installed in your environment.
import torch
import pickle
import wallaroo
from wallaroo.object import EntityNotFoundError
import os
import numpy as np
import json
import requests
import time
import pandas as pd
from CVDemoUtils import CVDemo
Connect to Wallaroo
Now we connect to the Wallaroo instance. If you are connecting from a remote connection, set the `wallarooPrefix` and `wallarooSuffix` variables and use them to connect. If the connection is made from within the Wallaroo instance cluster, then `wl = wallaroo.Client()` can be used on its own.
# Login through local service
# wl = wallaroo.Client()
# SSO login through keycloak
wallarooPrefix = "YOUR PREFIX"
wallarooSuffix = "YOUR SUFFIX"
wl = wallaroo.Client(api_endpoint=f"https://{wallarooPrefix}.api.{wallarooSuffix}",
                     auth_endpoint=f"https://{wallarooPrefix}.keycloak.{wallarooSuffix}",
                     auth_type="sso")
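As an optional sanity check, you can confirm the connection works by listing the workspaces visible to your user; `list_workspaces()` is the same call used by the `get_workspace()` helper later in this tutorial.

```python
# Optional: confirm the client is connected by listing visible workspaces.
for ws in wl.list_workspaces():
    print(ws.name())
```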
Arrow Support
As of the 2023.1 release, Wallaroo provides support for DataFrame and Arrow for inference inputs. This tutorial allows users to adjust their experience based on whether they have enabled Arrow support in their Wallaroo instance or not.
If Arrow support has been enabled, set `arrowEnabled=True`. If it is disabled or you're not sure, set `arrowEnabled=False`. The examples below are shown in an Arrow-enabled environment.
import os
# Only set ARROW_ENABLED to True if Arrow support is enabled in your instance; otherwise leave as is.
os.environ["ARROW_ENABLED"] = "True"

if "ARROW_ENABLED" not in os.environ or os.environ["ARROW_ENABLED"].casefold() == "False".casefold():
    arrowEnabled = False
else:
    arrowEnabled = True
print(arrowEnabled)
Set Variables
The following variables and methods are used later to create or connect to an existing workspace, pipeline, and models. This example uses both the mobilenet model as the control and the resnet50 model as the challenger.
workspace_name = 'shadowimageworkspacetest'
pipeline_name = 'shadowimagepipelinetest'
control_model_name = 'mobilenet'
control_model_file_name = 'models/mobilenet.pt.onnx'
challenger_model_name = 'resnet50'
challenger_model_file_name = 'models/frcnn-resnet.pt.onnx'
def get_workspace(name):
    workspace = None
    for ws in wl.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = wl.create_workspace(name)
    return workspace
def get_pipeline(name):
    try:
        pipeline = wl.pipelines_by_name(name)[0]
    except EntityNotFoundError:
        pipeline = wl.build_pipeline(name)
    return pipeline
Create Workspace
The workspace will be created or connected to, and set as the default workspace for this session. Once that is done, all models and pipelines will be created within that workspace.
workspace = get_workspace(workspace_name)
wl.set_current_workspace(workspace)
wl.get_current_workspace()
Create Pipeline and Upload Model
We will now create or connect to an existing pipeline as named in the variables above, then upload each of the models.
pipeline = get_pipeline(pipeline_name)
control = wl.upload_model(control_model_name, control_model_file_name)
challenger = wl.upload_model(challenger_model_name, challenger_model_file_name)
Shadow Deploy Pipeline
For this step, rather than deploying each model into a separate step, both will be deployed into a single Shadow Deploy step. This will take the inference input data and process it through both models at the same time. The inference results for the control are stored in its `['outputs']` array, while the results for the challenger are stored in the `['shadow_data']` array.
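For reference, here is a minimal sketch of how those two result sets will be addressed once an inference is run; the nesting shown in the comments matches the extraction steps later in this tutorial.

```python
# Sketch of the shadow deploy result layout (matches the extraction code below):
# results['outputs']                            - control (mobilenet) results:
#   outputs[0]['Float']['data']                 - flattened bounding boxes
#   outputs[1]['Int64']['data']                 - class ids
#   outputs[2]['Float']['data']                 - confidences
# results['shadow_data']                        - challenger results, keyed by model name:
#   shadow_data['resnet50'][0]['Float']['data'] - flattened bounding boxes
#   shadow_data['resnet50'][1]['Int64']['data'] - class ids
#   shadow_data['resnet50'][2]['Float']['data'] - confidences
```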
pipeline.add_shadow_deploy(control, [challenger])
| name | shadowimagepipelinetest |
|---|---|
| created | 2023-03-02 19:37:25.349488+00:00 |
| last_updated | 2023-03-02 19:37:25.349488+00:00 |
| deployed | (none) |
| tags | |
| versions | 474cfb6d-51fc-4e9c-923e-4ca553e73ccd |
| steps | |
pipeline.deploy()
| name | shadowimagepipelinetest |
|---|---|
| created | 2023-03-02 19:37:25.349488+00:00 |
| last_updated | 2023-03-02 19:38:07.386270+00:00 |
| deployed | True |
| tags | |
| versions | 5b41a12c-f643-47ec-8b9f-842e658fd45c, 474cfb6d-51fc-4e9c-923e-4ca553e73ccd |
| steps | mobilenet |
pipeline.status()
{'status': 'Running',
 'details': [],
 'engines': [{'ip': '10.244.13.25',
   'name': 'engine-6775774cb8-ngrz7',
   'status': 'Running',
   'reason': None,
   'details': [],
   'pipeline_statuses': {'pipelines': [{'id': 'shadowimagepipelinetest',
      'status': 'Running'}]},
   'model_statuses': {'models': [{'name': 'resnet50',
      'version': 'e30e6def-5e32-40d2-bb9f-11896cc36bd9',
      'sha': 'ee606dc9776a1029420b3adf59b6d29395c89d1d9460d75045a1f2f152d288e7',
      'status': 'Running'},
     {'name': 'mobilenet',
      'version': '483465ed-5f41-488e-8539-66a0b028662b',
      'sha': 'f4c7009e53b679f5e44d70d9612e8dc365565cec88c25b5efa11b903b6b7bdc6',
      'status': 'Running'}]}}],
 'engine_lbs': [{'ip': '10.244.12.52',
   'name': 'engine-lb-ddd995646-qrj6m',
   'status': 'Running',
   'reason': None,
   'details': []}],
 'sidekicks': []}
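Before submitting inferences, it can be useful to verify programmatically that the pipeline and both shadow deployed models report a Running status. A minimal sketch based on the status dictionary shown above:

```python
# Minimal sketch: confirm the pipeline and both models are running before inferring.
status = pipeline.status()
assert status['status'] == 'Running'
for model in status['engines'][0]['model_statuses']['models']:
    print(f"{model['name']}: {model['status']}")
```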
Prepare input image
Next we will load a sample image and resize it to the width and height required by the object detector.
We will then convert the image to a numpy ndim array and add it to a dictionary.
imagePath = 'data/images/current/input/example/store-front.png'
# The image width and height need to be set to what the model was trained for. In this case 640x480.
cvDemo = CVDemo()
# The size the image will be resized to in order to meet the input requirements of the object detector
width = 640
height = 480
tensor, controlImage = cvDemo.loadImageAndResize(imagePath, width, height)
challengerImage = controlImage.copy()
# Convert the torch tensor to a numpy ndim array
npArray = tensor.cpu().numpy()

# Create a dictionary with the Wallaroo "tensor" key and the numpy ndim array representing the image as the value
dictData = {"tensor": npArray.tolist()}
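As an optional sanity check before inference, print the tensor shape; the batched image array is assumed here to be shaped (1, 3, 480, 640), that is batch, channels, height, width.

```python
# Optional sanity check: the batched image tensor is assumed to be
# (1, 3, height, width) = (1, 3, 480, 640) for this detector.
print(npArray.shape)
```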
Run Inference using Shadow Deployment
Now let's have the models detect the objects in the image by running an inference through the shadow deployed pipeline and extracting the results.
startTime = time.time()
infResults = pipeline.infer(dictData, timeout=60)
endTime = time.time()
if arrowEnabled:
    results = infResults[0]
else:
    results = infResults[0].raw
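For reference, the elapsed wall-clock time captured above can be printed; note that it includes network and serialization overhead, not just model execution.

```python
# Elapsed wall-clock time for the shadow inference, including network overhead.
print(f"Inference took {endTime - startTime:.2f} seconds")
```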
Extract Control Inference Results
First we’ll extract the inference result data for the control model and map it onto the image.
df = pd.DataFrame(columns=['classification','confidence','x','y','width','height'])
pd.options.mode.chained_assignment = None # default='warn'
pd.options.display.float_format = '{:.2%}'.format
# Points to where all the inference results are
outputs = results['outputs']
shadow_data = results['shadow_data']
controlBoxes = outputs[0]
# reshape this to an array of bounding box coordinates converted to ints
boxList = controlBoxes['Float']['data']
boxA = np.array(boxList)
controlBoxes = boxA.reshape(-1, 4)
controlBoxes = controlBoxes.astype(int)
df[['x', 'y','width','height']] = pd.DataFrame(controlBoxes)
controlClasses = outputs[1]['Int64']['data']
controlConfidences = outputs[2]['Float']['data']
results = {
    'model_name': control.name(),
    'pipeline_name': pipeline.name(),
    'width': width,
    'height': height,
    'image': controlImage,
    'boxes': controlBoxes,
    'classes': controlClasses,
    'confidences': controlConfidences,
    'confidence-target': 0.9,
    'color': CVDemo.RED, # color to draw bounding boxes and the text in the statistics
    'inference-time': (endTime - startTime),
    'onnx-time': 0,
}
cvDemo.drawAndDisplayDetectedObjectsWithClassification(results)

Display the Control Results
Here we will use the Wallaroo CVDemo helper class to draw the control model results on the image.
The full results will be displayed in a dataframe with columns representing the classification, confidence, and bounding box of each object identified.
Once extracted from the results, the flattened bounding box array is reshaped into rows of 4 elements (x, y, width, height).
cocoClasses = cvDemo.getCocoClasses()
for idx in range(0, len(controlClasses)):
    df['classification'][idx] = cocoClasses[controlClasses[idx]] # controlClasses holds indexes into the 80 COCO classifications
    df['confidence'][idx] = controlConfidences[idx]
df
| | classification | confidence | x | y | width | height |
|---|---|---|---|---|---|---|
| 0 | car | 99.82% | 278 | 335 | 494 | 471 |
| 1 | person | 95.43% | 32 | 303 | 66 | 365 |
| 2 | umbrella | 81.33% | 117 | 256 | 209 | 322 |
| 3 | person | 72.38% | 183 | 310 | 203 | 367 |
| 4 | umbrella | 58.16% | 213 | 273 | 298 | 309 |
| 5 | person | 47.49% | 155 | 307 | 180 | 365 |
| 6 | person | 45.20% | 263 | 315 | 303 | 422 |
| 7 | person | 44.17% | 8 | 304 | 36 | 361 |
| 8 | person | 41.89% | 608 | 330 | 628 | 375 |
| 9 | person | 40.04% | 557 | 330 | 582 | 395 |
| 10 | potted plant | 39.22% | 241 | 193 | 315 | 292 |
| 11 | person | 38.94% | 547 | 329 | 573 | 397 |
| 12 | person | 38.50% | 615 | 331 | 634 | 372 |
| 13 | person | 37.89% | 553 | 321 | 576 | 374 |
| 14 | person | 37.04% | 147 | 304 | 170 | 366 |
| 15 | person | 36.11% | 515 | 322 | 537 | 369 |
| 16 | person | 34.55% | 562 | 317 | 586 | 373 |
| 17 | person | 32.37% | 531 | 329 | 557 | 399 |
| 18 | person | 32.19% | 239 | 306 | 279 | 428 |
| 19 | person | 30.28% | 320 | 308 | 343 | 359 |
| 20 | person | 26.50% | 289 | 311 | 310 | 380 |
| 21 | person | 23.09% | 371 | 307 | 394 | 337 |
| 22 | person | 22.66% | 295 | 300 | 340 | 373 |
| 23 | person | 22.23% | 1 | 306 | 25 | 362 |
| 24 | person | 21.88% | 484 | 319 | 506 | 349 |
| 25 | person | 21.13% | 272 | 327 | 297 | 405 |
| 26 | person | 20.15% | 136 | 304 | 160 | 363 |
| 27 | person | 19.68% | 520 | 338 | 543 | 392 |
| 28 | person | 16.86% | 478 | 317 | 498 | 348 |
| 29 | person | 16.55% | 365 | 319 | 391 | 344 |
| 30 | person | 16.22% | 621 | 339 | 639 | 403 |
| 31 | potted plant | 16.18% | 0 | 361 | 215 | 470 |
| 32 | person | 15.13% | 279 | 313 | 300 | 387 |
| 33 | person | 10.62% | 428 | 312 | 444 | 337 |
| 34 | umbrella | 10.01% | 215 | 252 | 313 | 315 |
| 35 | umbrella | 9.10% | 295 | 294 | 346 | 357 |
| 36 | umbrella | 7.95% | 358 | 293 | 402 | 319 |
| 37 | umbrella | 7.81% | 319 | 307 | 344 | 356 |
| 38 | potted plant | 7.18% | 166 | 331 | 221 | 439 |
| 39 | umbrella | 6.38% | 129 | 264 | 200 | 360 |
| 40 | person | 5.69% | 428 | 318 | 450 | 343 |
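Since the results dictionary above set a confidence-target of 0.9, a short pandas sketch (using the `df` built above) can pull out just the high-confidence control detections:

```python
# Keep only the control detections at or above the 90% confidence target.
highConfidence = df[df['confidence'] >= 0.9]
print(highConfidence)
```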
Display the Challenger Results
Here we will use the Wallaroo CVDemo helper class to draw the challenger model results on the input image.
challengerBoxes = shadow_data['resnet50'][0]
# reshape this to an array of bounding box coordinates converted to ints
boxList = challengerBoxes['Float']['data']
boxA = np.array(boxList)
challengerBoxes = boxA.reshape(-1, 4)
challengerBoxes = challengerBoxes.astype(int)
challengerDf = pd.DataFrame(columns=['classification','confidence','x','y','width','height'])
pd.options.mode.chained_assignment = None # default='warn'
pd.options.display.float_format = '{:.2%}'.format
challengerDf[['x', 'y','width','height']] = pd.DataFrame(challengerBoxes)
challengerClasses = shadow_data['resnet50'][1]['Int64']['data']
challengerConfidences = shadow_data['resnet50'][2]['Float']['data']
results = {
    'model_name': challenger.name(),
    'pipeline_name': pipeline.name(),
    'width': width,
    'height': height,
    'image': challengerImage,
    'boxes': challengerBoxes,
    'classes': challengerClasses,
    'confidences': challengerConfidences,
    'confidence-target': 0.90,
    'color': CVDemo.BLUE, # color to draw bounding boxes and the text in the statistics
    'inference-time': (endTime - startTime),
    'onnx-time': 0,
}
cvDemo.drawAndDisplayDetectedObjectsWithClassification(results)

Display Challenger Results
The inference results for the objects detected by the challenger model will be displayed, including the confidence values. Once extracted from the results, the flattened bounding box array is reshaped into rows of 4 elements (x, y, width, height).
for idx in range(0, len(challengerClasses)):
    challengerDf['classification'][idx] = cvDemo.CLASSES[challengerClasses[idx]] # challengerClasses holds indexes into the 80 COCO classifications
    challengerDf['confidence'][idx] = challengerConfidences[idx]
challengerDf
| | classification | confidence | x | y | width | height |
|---|---|---|---|---|---|---|
| 0 | car | 99.91% | 274 | 332 | 496 | 472 |
| 1 | person | 99.77% | 536 | 320 | 563 | 409 |
| 2 | person | 98.88% | 31 | 305 | 69 | 370 |
| 3 | car | 97.02% | 617 | 335 | 639 | 424 |
| 4 | potted plant | 96.82% | 141 | 337 | 164 | 365 |
| ... | ... | ... | ... | ... | ... | ... |
| 81 | person | 5.61% | 312 | 316 | 341 | 371 |
| 82 | umbrella | 5.60% | 328 | 275 | 418 | 337 |
| 83 | person | 5.54% | 416 | 320 | 425 | 331 |
| 84 | person | 5.52% | 406 | 317 | 419 | 331 |
| 85 | person | 5.14% | 277 | 308 | 292 | 390 |
86 rows × 6 columns
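To quantify the difference discussed in the conclusion, here is a small sketch comparing the two models' mean confidences, using the confidence lists extracted above:

```python
# Compare the mean confidence of the control and challenger detections.
print(f"control mean confidence:    {pd.Series(controlConfidences).mean():.2%}")
print(f"challenger mean confidence: {pd.Series(challengerConfidences).mean():.2%}")
```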
pipeline.undeploy()
| name | shadowimagepipelinetest |
|---|---|
| created | 2023-03-02 19:37:25.349488+00:00 |
| last_updated | 2023-03-02 19:38:07.386270+00:00 |
| deployed | False |
| tags | |
| versions | 5b41a12c-f643-47ec-8b9f-842e658fd45c, 474cfb6d-51fc-4e9c-923e-4ca553e73ccd |
| steps | mobilenet |
Conclusion
Notice the difference between the control confidences and the challenger confidences. In this example the challenger resnet50 model is clearly performing better than the control mobilenet model. This is likely because the Faster R-CNN resnet50 model is a two-stage object detector, while the mobilenet model is a single-stage detector.
This completes using Wallaroo’s shadow deployment feature to compare different computer vision models.