Step 03: mobilenet and resnet50 Shadow Deploy

This tutorial and the assets can be downloaded as part of the Wallaroo Tutorials repository.

Step 03: Detecting Objects Using Shadow Deploy

The following tutorial demonstrates how to use two trained models, one based on resnet50 and the other on mobilenet, deployed in Wallaroo to detect objects. This builds on the previous tutorials in this series, “Step 01: Detecting Objects Using mobilenet” and “Step 02: Detecting Objects Using resnet50”.

For this tutorial, the Wallaroo Shadow Deploy feature will be used to submit inference requests to both models at once. The mobilenet object detector is the control and the faster R-CNN resnet50 object detector is the challenger. The results of the two models will be compared for their confidence, and those confidence values will be used to draw bounding boxes around the identified objects.

This process will use the following steps:

  1. Create a Wallaroo workspace and pipeline.
  2. Upload a trained resnet50 model and a trained mobilenet model, then add them as a shadow deploy step with the mobilenet as the control model.
  3. Deploy the pipeline.
  4. Perform an inference on a sample image.
  5. Based on the inference results, draw the detected objects, their bounding boxes, their classifications, and the confidence of the classifications on the provided image.
  6. Review our results.

Steps

Import Libraries

The first step will be to import our libraries. Please refer to Step 00: Introduction and Setup to verify that the necessary libraries and applications are added to your environment.

import torch
import pickle
import wallaroo
from wallaroo.object import EntityNotFoundError
from wallaroo.framework import Framework

import numpy as np
import json
import requests
import time
import pandas as pd
from CVDemoUtils import CVDemo

# used to display dataframe information without truncating
from IPython.display import display
pd.set_option('display.max_colwidth', None)

# used to create unique workspace, pipeline, and model names

import string
import random
suffix = ''.join(random.choice(string.ascii_lowercase) for i in range(4))

Connect to the Wallaroo Instance

The next step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo installation and available through the JupyterHub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). For more information on Wallaroo Client settings, see the Client Connection guide.
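If connecting from outside the internal JupyterHub service, the client can be pointed at the instance directly. The following is only a minimal sketch, assuming the api_endpoint and auth_type parameters described in the Client Connection guide; verify the parameter names against your SDK version and substitute your own instance URL.

# Sketch only: connecting from outside the Wallaroo JupyterHub service.
# The URL below is a placeholder; replace it with your instance's API endpoint.
wl = wallaroo.Client(api_endpoint="https://wallaroo.example.com",
                     auth_type="sso")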

# Login through local service

wl = wallaroo.Client()

Set Variables

The following variables and methods are used later to create or connect to an existing workspace, pipeline, and model. This example uses two models: the mobilenet control model and the resnet50 challenger model.

workspace_name = f'shadowimageworkspacetest{suffix}'
pipeline_name = f'shadowimagepipelinetest{suffix}'
control_model_name = f'mobilenet{suffix}'
control_model_file_name = 'models/mobilenet.pt.onnx'
challenger_model_name = f'resnet50{suffix}'
challenger_model_file_name = 'models/frcnn-resnet.pt.onnx'
def get_workspace(name):
    workspace = None
    for ws in wl.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = wl.create_workspace(name)
    return workspace

def get_pipeline(name):
    try:
        pipeline = wl.pipelines_by_name(name)[0]
    except EntityNotFoundError:
        pipeline = wl.build_pipeline(name)
    return pipeline

Create Workspace

The workspace will be created or connected to, and set as the default workspace for this session. Once that is done, all models and pipelines will be created within that workspace.

workspace = get_workspace(workspace_name)
wl.set_current_workspace(workspace)
wl.get_current_workspace()
{'name': 'shadowimageworkspacetestydhi', 'id': 17, 'archived': False, 'created_by': '4e296632-35b3-460e-85fe-565e311bc566', 'created_at': '2023-07-14T15:18:03.466347+00:00', 'models': [], 'pipelines': []}

Create Pipeline and Upload Model

We will now create or connect to an existing pipeline as named in the variables above, then upload each of the models.

pipeline = get_pipeline(pipeline_name)
control = wl.upload_model(control_model_name, control_model_file_name, framework=Framework.ONNX).configure(batch_config="single")
challenger = wl.upload_model(challenger_model_name, challenger_model_file_name, framework=Framework.ONNX).configure(batch_config="single")

Shadow Deploy Pipeline

For this step, rather than deploying each model into a separate step, both will be deployed into a single Shadow Deploy step. This will take the inference input data and process it through both models at the same time. In the returned inference results, the outputs of the control model are stored in the out.* fields, while the outputs of the challenger model are stored in the out_{model_name}.* fields.

pipeline.add_shadow_deploy(control, [challenger])
name shadowimagepipelinetestydhi
created 2023-07-14 15:18:05.557027+00:00
last_updated 2023-07-14 15:18:05.557027+00:00
deployed (none)
tags
versions 05cd00dc-cf35-4a96-bdc3-f624d7b36477
steps
pipeline.deploy()
name shadowimagepipelinetestydhi
created 2023-07-14 15:18:05.557027+00:00
last_updated 2023-07-14 15:19:41.498743+00:00
deployed True
tags
versions 64f3cfe9-bce4-4c4f-8328-9ee1cef3fe2d, 05cd00dc-cf35-4a96-bdc3-f624d7b36477
steps mobilenetydhi
pipeline.status()
{'status': 'Running',
 'details': [],
 'engines': [{'ip': '10.244.3.137',
   'name': 'engine-c7bdfbd9c-qfgf9',
   'status': 'Running',
   'reason': None,
   'details': [],
   'pipeline_statuses': {'pipelines': [{'id': 'shadowimagepipelinetestydhi',
      'status': 'Running'}]},
   'model_statuses': {'models': [{'name': 'mobilenetydhi',
      'version': '06ab0c1b-4151-4ab7-ba74-9a83a5c288f2',
      'sha': 'f4c7009e53b679f5e44d70d9612e8dc365565cec88c25b5efa11b903b6b7bdc6',
      'status': 'Running'},
     {'name': 'resnet50ydhi',
      'version': '7811489f-cf8b-469e-a917-9070c842a969',
      'sha': 'ee606dc9776a1029420b3adf59b6d29395c89d1d9460d75045a1f2f152d288e7',
      'status': 'Running'}]}}],
 'engine_lbs': [{'ip': '10.244.4.182',
   'name': 'engine-lb-584f54c899-qdzqk',
   'status': 'Running',
   'reason': None,
   'details': []}],
 'sidekicks': []}

Prepare the Input Image

Next we will load a sample image and resize it to the width and height required for the object detector.

We will convert the image to a numpy ndim array and add it to a dictionary.


imagePath = 'data/images/input/example/store-front.png'

cvDemo = CVDemo()

# The image width and height must be set to what the model was trained for.  In this case 640x480.
width = 640
height = 480
tensor, controlImage = cvDemo.loadImageAndResize(imagePath, width, height)
challengerImage = controlImage.copy()

# Get the numpy ndim array from the tensor
npArray = tensor.cpu().numpy()

# Create a dictionary with the Wallaroo "tensor" key and the numpy ndim array representing the image as the value.
dictData = {"tensor": [npArray]}
dataframedata = pd.DataFrame(dictData)

Run Inference using Shadow Deployment

Now let's have the models detect the objects in the image by running an inference through the shadow deployed pipeline and extracting the results.

startTime = time.time()
infResults = pipeline.infer(dataframedata, dataset=["*", "metadata.elapsed"])
endTime = time.time()
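To confirm where each model's results landed, here is a quick sketch that splits the returned columns by the naming pattern described above (the control's fields begin with out., the challenger's with out_ followed by the model name):

# Sketch: separate the control and challenger output columns by prefix.
control_cols = [c for c in infResults.columns if c.startswith("out.")]
challenger_cols = [c for c in infResults.columns if c.startswith(f"out_{challenger_model_name}")]
print("control:", control_cols)
print("challenger:", challenger_cols)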

Extract Control Inference Results

First we’ll extract the inference result data for the control model and map it onto the image.

df = pd.DataFrame(columns=['classification','confidence','x','y','width','height'])
pd.options.mode.chained_assignment = None  # default='warn'
pd.options.display.float_format = '{:.2%}'.format

# Points to where all the control model inference results are
boxList = infResults.loc[0]["out.output"]

# Reshape this to an array of bounding box coordinates converted to ints
boxA = np.array(boxList)
controlBoxes = boxA.reshape(-1, 4)
controlBoxes = controlBoxes.astype(int)

df[['x', 'y','width','height']] = pd.DataFrame(controlBoxes)

controlClasses = infResults.loc[0]["out.2519"]
controlConfidences = infResults.loc[0]["out.2518"]

results = {
    'model_name' : control.name(),
    'pipeline_name' : pipeline.name(),
    'width': width,
    'height': height,
    'image' : controlImage,
    'boxes' : controlBoxes,
    'classes' : controlClasses,
    'confidences' : controlConfidences,
    'confidence-target' : 0.9,
    'color':CVDemo.RED, # color to draw bounding boxes and the text in the statistics
    'inference-time': (endTime-startTime),
    'onnx-time' : 0,                
}
cvDemo.drawAndDisplayDetectedObjectsWithClassification(results)

Display the Control Results

Here we will use the Wallaroo CVDemo helper class to draw the control model results on the image.

The full results will be displayed in a dataframe with columns representing the classification, confidence, and bounding boxes of the objects identified.

Once extracted from the results, the flattened bounding box array is reshaped into rows of 4 elements (x, y, width, height), as illustrated in the sketch below.
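As an illustration with made-up values, a flattened list of eight coordinates becomes two bounding boxes:

# Sketch with hypothetical coordinates: eight flattened values become two boxes.
flat = [10, 20, 50, 80, 100, 120, 160, 200]
boxes = np.array(flat).reshape(-1, 4).astype(int)
# boxes[0] -> [ 10  20  50  80]
# boxes[1] -> [100 120 160 200]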

cocoClasses = cvDemo.getCocoClasses()
for idx in range(len(controlClasses)):
    # cocoClasses contains the 80 different COCO classifications
    df.loc[idx, 'classification'] = cocoClasses[controlClasses[idx]]
    df.loc[idx, 'confidence'] = controlConfidences[idx]
df
classification confidence x y width height
0 car 99.82% 278 335 494 471
1 person 95.43% 32 303 66 365
2 umbrella 81.33% 117 256 209 322
3 person 72.38% 183 310 203 367
4 umbrella 58.16% 213 273 298 309
5 person 47.49% 155 307 180 365
6 person 45.20% 263 315 303 422
7 person 44.17% 8 304 36 361
8 person 41.89% 608 330 628 375
9 person 40.04% 557 330 582 395
10 potted plant 39.22% 241 193 315 292
11 person 38.94% 547 329 573 397
12 person 38.50% 615 331 634 372
13 person 37.89% 553 321 576 374
14 person 37.04% 147 304 170 366
15 person 36.11% 515 322 537 369
16 person 34.55% 562 317 586 373
17 person 32.37% 531 329 557 399
18 person 32.19% 239 306 279 428
19 person 30.28% 320 308 343 359
20 person 26.50% 289 311 310 380
21 person 23.09% 371 307 394 337
22 person 22.66% 295 300 340 373
23 person 22.23% 1 306 25 362
24 person 21.88% 484 319 506 349
25 person 21.13% 272 327 297 405
26 person 20.15% 136 304 160 363
27 person 19.68% 520 338 543 392
28 person 16.86% 478 317 498 348
29 person 16.55% 365 319 391 344
30 person 16.22% 621 339 639 403
31 potted plant 16.18% 0 361 215 470
32 person 15.13% 279 313 300 387
33 person 10.62% 428 312 444 337
34 umbrella 10.01% 215 252 313 315
35 umbrella 9.10% 295 294 346 357
36 umbrella 7.95% 358 293 402 319
37 umbrella 7.81% 319 307 344 356
38 potted plant 7.18% 166 331 221 439
39 umbrella 6.38% 129 264 200 360
40 person 5.69% 428 318 450 343

Extract Challenger Inference Results

Next we'll extract the inference result data for the challenger model and map it onto the image using the Wallaroo CVDemo helper class.

challengerDf = pd.DataFrame(columns=['classification','confidence','x','y','width','height'])
pd.options.mode.chained_assignment = None  # default='warn'
pd.options.display.float_format = '{:.2%}'.format

# Points to where all the challenger model inference results are
boxList = infResults.loc[0][f"out_{challenger_model_name}.output"]

# Reshape this to an array of bounding box coordinates converted to ints
boxA = np.array(boxList)
challengerBoxes = boxA.reshape(-1, 4)
challengerBoxes = challengerBoxes.astype(int)

challengerDf[['x', 'y','width','height']] = pd.DataFrame(challengerBoxes)

challengerClasses = infResults.loc[0][f"out_{challenger_model_name}.3070"]
challengerConfidences = infResults.loc[0][f"out_{challenger_model_name}.3069"]

results = {
    'model_name' : challenger.name(),
    'pipeline_name' : pipeline.name(),
    'width': width,
    'height': height,
    'image' : challengerImage,
    'boxes' : challengerBoxes,
    'classes' : challengerClasses,
    'confidences' : challengerConfidences,
    'confidence-target' : 0.9,
    'color':CVDemo.RED, # color to draw bounding boxes and the text in the statistics
    'inference-time': (endTime-startTime),
    'onnx-time' : 0,                
}
cvDemo.drawAndDisplayDetectedObjectsWithClassification(results)

Display Challenger Results

The inference results for the objects detected by the challenger model will be displayed, including the confidence values. As with the control results, the flattened array was reshaped into rows of 4 elements (x, y, width, height).

for idx in range(len(challengerClasses)):
    # cocoClasses contains the 80 different COCO classifications
    challengerDf.loc[idx, 'classification'] = cocoClasses[challengerClasses[idx]]
    challengerDf.loc[idx, 'confidence'] = challengerConfidences[idx]
challengerDf
classification confidence x y width height
0 car 99.91% 274 332 496 472
1 person 99.77% 536 320 563 409
2 person 98.88% 31 305 69 370
3 car 97.02% 617 335 639 424
4 potted plant 96.82% 141 337 164 365
... ... ... ... ... ... ...
81 person 5.61% 312 316 341 371
82 umbrella 5.60% 328 275 418 337
83 person 5.54% 416 320 425 331
84 person 5.52% 406 317 419 331
85 person 5.14% 277 308 292 390

86 rows × 6 columns

pipeline.undeploy()
name shadowimagepipelinetestydhi
created 2023-07-14 15:18:05.557027+00:00
last_updated 2023-07-14 15:19:41.498743+00:00
deployed False
tags
versions 64f3cfe9-bce4-4c4f-8328-9ee1cef3fe2d, 05cd00dc-cf35-4a96-bdc3-f624d7b36477
steps mobilenetydhi

Conclusion

Notice the difference between the control confidence and the challenger confidence values. In this example the challenger resnet50 model is clearly performing better than the control mobilenet model. This is likely because the faster R-CNN resnet50 model is a two-stage object detector, while the mobilenet model is a single-stage detector.
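As a quick way to quantify that difference, the average confidence of each model can be compared, a minimal sketch using the confidence variables already extracted above:

# Sketch: compare the average confidence reported by each model.
print(f"control mean confidence:    {np.mean(controlConfidences):.2%}")
print(f"challenger mean confidence: {np.mean(challengerConfidences):.2%}")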

This completes using Wallaroo’s shadow deployment feature to compare different computer vision models.