Step 01: Detecting Objects Using mobilenet

This tutorial and the assets can be downloaded as part of the Wallaroo Tutorials repository.

The following tutorial demonstrates how to use a trained mobilenet model deployed in Wallaroo to detect objects. This process will use the following steps:

  1. Create a Wallaroo workspace and pipeline.
  2. Upload a trained mobilenet ML model and add it as a pipeline step.
  3. Deploy the pipeline.
  4. Perform an inference on a sample image.
  5. Draw the detected objects, their bounding boxes, their classifications, and the confidence of the classifications on the provided image.
  6. Review our results.

Steps

Import Libraries

The first step is to import our libraries. Refer to Step 00: Introduction and Setup to verify that the necessary libraries and applications are installed in your environment.

import torch
import pickle
import wallaroo
from wallaroo.object import EntityNotFoundError
from wallaroo.framework import Framework

import numpy as np
import json
import requests
import time
import pandas as pd
from CVDemoUtils import CVDemo

# used to display dataframe information without truncating
from IPython.display import display
pd.set_option('display.max_colwidth', None)

# used to create unique workspace, pipeline, and model names

import string
import random
suffix = ''.join(random.choice(string.ascii_lowercase) for i in range(4))

Connect to the Wallaroo Instance

The next step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the JupyterHub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). For more information on Wallaroo Client settings, see the Client Connection guide.

# Login through local service

wl = wallaroo.Client()
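
If you are connecting to the Wallaroo instance from outside the internal JupyterHub service, the client can be pointed at the instance directly. The following is a minimal sketch; the endpoint shown is a hypothetical placeholder, and the exact keyword arguments for your SDK version are described in the Client Connection guide.

# Login from outside the Wallaroo JupyterHub service (sketch).
# Replace the hypothetical endpoint below with the DNS address configured
# for your Wallaroo instance; see the Client Connection guide for the
# options supported by your SDK version.

wl = wallaroo.Client(api_endpoint="https://wallaroo.example.com")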

Set Variables

The following variables and methods are used later to create or connect to an existing workspace, pipeline, and model.

workspace_name = f'mobilenetworkspacetest{suffix}'
pipeline_name = f'mobilenetpipeline{suffix}'
model_name = f'mobilenet{suffix}'
model_file_name = 'models/mobilenet.pt.onnx'

def get_workspace(name):
    workspace = None
    for ws in wl.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = wl.create_workspace(name)
    return workspace

def get_pipeline(name):
    try:
        pipeline = wl.pipelines_by_name(name)[0]
    except EntityNotFoundError:
        pipeline = wl.build_pipeline(name)
    return pipeline

Create Workspace

The workspace will be created or connected to, then set as the default workspace for this session. Once that is done, all models and pipelines will be created within that workspace.

workspace = get_workspace(workspace_name)
wl.set_current_workspace(workspace)
wl.get_current_workspace()
{'name': 'mobilenetworkspacetesthsec', 'id': 15, 'archived': False, 'created_by': '4e296632-35b3-460e-85fe-565e311bc566', 'created_at': '2023-07-14T15:14:43.837337+00:00', 'models': [], 'pipelines': []}

Create Pipeline and Upload Model

We will now create or connect to an existing pipeline as named in the variables above, then upload the trained mobilenet model in the ONNX framework.

pipeline = get_pipeline(pipeline_name)
mobilenet_model = wl.upload_model(model_name, model_file_name, framework=Framework.ONNX).configure(batch_config="single")

Deploy Pipeline

With the model uploaded, we can add it as a step in the pipeline, then deploy it. Once deployed, resources from the Wallaroo instance are reserved and the pipeline is ready to use the model to perform inference requests.

pipeline.add_model_step(mobilenet_model)

pipeline.deploy()
name mobilenetpipelinehsec
created 2023-07-14 15:14:45.891178+00:00
last_updated 2023-07-14 15:15:07.488456+00:00
deployed True
tags
versions 13e17927-aef3-410c-b695-6bd713248f93, ae7cd8ab-7ad0-48ce-ba01-8e8e12595f45
steps mobilenethsec
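
By default, the pipeline deploys with the standard resource allocation for the Wallaroo instance. If you need to control how much of the cluster the pipeline reserves, a deployment configuration can be passed to the deploy step. The sketch below uses illustrative values; size the replicas, CPUs, and memory to your own environment.

# Optional: deploy with an explicit resource allocation (sketch).
# The replica, CPU, and memory values below are illustrative assumptions --
# adjust them to match the resources available in your cluster.

deployment_config = (wallaroo.DeploymentConfigBuilder()
                     .replica_count(1)
                     .cpus(1)
                     .memory("1Gi")
                     .build())

pipeline.deploy(deployment_config=deployment_config)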

Prepare input image

Next we will load a sample image and resize it to the width and height required by the object detector. Once complete, the image will be converted to a numpy ndim array and added to a dictionary.

# The size the image will be resized to
width = 640
height = 480

# CVDemo is a helper class for the image loading, resizing, and drawing steps.
# Only objects with a confidence above the confidence-target will be drawn on the image.
cvDemo = CVDemo()

imagePath = 'data/images/input/example/dairy_bottles.png'

# The image width and height need to match what the model was trained on.  In this case 640x480.
tensor, resizedImage = cvDemo.loadImageAndResize(imagePath, width, height)

# convert the image tensor to a numpy array
npArray = tensor.cpu().numpy()

# create a dictionary with the Wallaroo "tensor" key and the numpy ndim array representing the image as the value
dictData = {"tensor":[npArray]}
dataframedata = pd.DataFrame(dictData)
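
As a quick sanity check before running the inference, the shapes of the converted array and the resized image can be inspected. The exact tensor layout depends on how CVDemo.loadImageAndResize structures its output, so the sketch below only prints the shapes rather than asserting specific values.

# Optional sanity check: confirm the resized image and converted array
# reflect the 640x480 size expected by the model.  The exact layout
# depends on CVDemo.loadImageAndResize.
print(f"resized image shape: {resizedImage.shape}")
print(f"input array shape:   {npArray.shape}")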

Run Inference

With that done, we can have the model detect the objects in the image by running an inference through the pipeline and storing the results for the next step.

startTime = time.time()
# pass the dataframe in 
infResults = pipeline.infer(dataframedata, dataset=["*", "metadata.elapsed"])
endTime = time.time()
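
Before drawing anything, it can be useful to look at what the pipeline returned. The sketch below prints the wall-clock inference time and the columns of the results dataframe; the out.output, out.2519, and out.2518 fields used in the next step should appear among them, along with the metadata.elapsed data requested through the dataset parameter.

# Inspect the inference results before drawing them.
# The out.* columns hold the model outputs used in the next step;
# metadata.elapsed holds the timing data requested through the dataset parameter.
print(f"wall-clock inference time: {endTime - startTime:.3f} seconds")
print(f"returned columns: {list(infResults.columns)}")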

Draw the Inference Results

With our inference results, we can use the Wallaroo CVDemo class to draw them onto the original image. The bounding boxes and confidence values will only be drawn for objects where the model returned a confidence of 90% or higher in the object's identity.

df = pd.DataFrame(columns=['classification','confidence','x','y','width','height'])
pd.options.mode.chained_assignment = None  # default='warn'
pd.options.display.float_format = '{:.2%}'.format

# Points to where all the inference results are
boxList = infResults.loc[0]["out.output"]

# reshape this to an array of bounding box coordinates converted to ints
boxA = np.array(boxList)
boxes = boxA.reshape(-1, 4)
boxes = boxes.astype(int)

df[['x', 'y','width','height']] = pd.DataFrame(boxes)

# the COCO classification index and confidence score for each detected object
classes = infResults.loc[0]["out.2519"]
confidences = infResults.loc[0]["out.2518"]

displayResults = {
    'model_name' : model_name,
    'pipeline_name' : pipeline_name,
    'width': width,
    'height': height,
    'image' : resizedImage,
    'boxes' : boxes,
    'classes' : classes,
    'confidences' : confidences,
    'confidence-target' : 0.90,
    'inference-time': (endTime-startTime),
    'onnx-time' : int(infResults.loc[0]["metadata.elapsed"][1]) / 1e+9,
    'color':(255,0,0)
}

image = cvDemo.drawAndDisplayDetectedObjectsWithClassification(displayResults)

Extract the Inference Information

To show what is going on in the background, we'll extract the inference results and create a dataframe with columns representing the classification, confidence, and bounding boxes of the objects identified.

for idx in range(0, len(classes)):
    df['classification'][idx] = cvDemo.CLASSES[classes[idx]] # CLASSES contains the 80 different COCO classifications
    df['confidence'][idx] = confidences[idx]
df
classification confidence x y width height
0 bottle 98.65% 0 210 85 479
1 bottle 90.12% 72 197 151 468
2 bottle 60.78% 211 184 277 420
3 bottle 59.22% 143 203 216 448
4 refrigerator 53.73% 13 41 640 480
5 bottle 45.13% 106 206 159 463
6 bottle 43.73% 278 1 321 93
7 bottle 43.09% 462 104 510 224
8 bottle 40.85% 310 1 352 94
9 bottle 39.19% 528 268 636 475
10 bottle 35.76% 220 0 258 90
11 bottle 31.81% 552 96 600 233
12 bottle 26.45% 349 0 404 98
13 bottle 23.06% 450 264 619 472
14 bottle 20.48% 261 193 307 408
15 bottle 17.46% 509 101 544 235
16 bottle 17.31% 592 100 633 239
17 bottle 16.00% 475 297 551 468
18 bottle 14.91% 368 163 423 362
19 book 13.66% 120 0 175 81
20 book 13.32% 72 0 143 85
21 bottle 12.22% 271 200 305 274
22 book 12.13% 161 0 213 85
23 bottle 11.96% 162 0 214 83
24 bottle 11.53% 310 190 367 397
25 bottle 9.62% 396 166 441 360
26 cake 8.65% 439 256 640 473
27 bottle 7.84% 544 375 636 472
28 vase 7.23% 272 2 306 96
29 bottle 6.28% 453 303 524 463
30 bottle 5.28% 609 94 635 211
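
Since the drawing step above only renders detections at or above the 90% confidence target, the same cutoff can be applied to the dataframe to isolate those rows. A minimal sketch:

# Keep only the detections that meet the 90% confidence target used when
# drawing the bounding boxes above.
confident_df = df[df['confidence'] >= 0.90]
display(confident_df)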

Undeploy the Pipeline

With the inference complete, we can undeploy the pipeline and return the resources to the Wallaroo instance.

pipeline.undeploy()
name mobilenetpipelinehsec
created 2023-07-14 15:14:45.891178+00:00
last_updated 2023-07-14 15:15:07.488456+00:00
deployed False
tags
versions 13e17927-aef3-410c-b695-6bd713248f93, ae7cd8ab-7ad0-48ce-ba01-8e8e12595f45
steps mobilenethsec