ARM Computer Vision Retail Demonstration
For details on adding ARM nodepools to a Wallaroo cluster, see Create ARM Nodepools for Kubernetes Clusters.
This tutorial is available on the Wallaroo Tutorials repository.
Computer Vision ARM Edge Deployment Tutorial
This tutorial demonstrates how to use Wallaroo combined with ARM processors to perform inferences with pre-trained computer vision ML models. This demonstration assumes that:
- A Wallaroo instance version 2023.3 or above is installed.
- A nodepool with ARM architecture virtual machines is part of the Kubernetes cluster. For example, Azure offers the Ampere® Altra® Arm-based processor in several of its virtual machine series.
Tutorial Goals
For our example, we will perform the following:
- Create a workspace for our work.
- Upload the ResNet computer vision model with the architecture set to ARM.
- Create a pipeline and deploy the model on ARM.
- Perform sample inferences on the ARM deployed model.
Steps
Import Libraries
The first step will be to import our libraries.
import torch
import pickle
import wallaroo
from wallaroo.object import EntityNotFoundError
from wallaroo.framework import Framework
import numpy as np
import json
import requests
import time
import pandas as pd
# used to display dataframe information without truncating
from IPython.display import display
pd.set_option('display.max_colwidth', None)
# used to generate a unique suffix for names
import string
import random
suffix= ''.join(random.choice(string.ascii_lowercase) for i in range(4))
Connect to the Wallaroo Instance
The next step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.
This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.
If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). For more information on Wallaroo Client settings, see the Client Connection guide.
# Login through local service
wl = wallaroo.Client()
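If connecting to the Wallaroo instance from outside of the JupyterHub service, the client is created with the instance's endpoints instead. The following is a minimal sketch only; the keyword arguments api_endpoint, auth_endpoint, and auth_type, and the placeholder URLs, are assumptions about a typical external SDK connection and should be replaced with the values for your own instance.
# example only: connect from outside the Wallaroo JupyterHub service
# the endpoint URLs below are placeholders, not part of this tutorial
wl = wallaroo.Client(api_endpoint="https://wallaroo.example.com",
                     auth_endpoint="https://keycloak.wallaroo.example.com",
                     auth_type="sso")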
Set Variables
The following variables and methods are used later to create or connect to an existing workspace, pipeline, and model.
workspace_name = 'cv-arm-example'
pipeline_name = 'cv-sample'
arm_resnet_model_name = 'arm-resnet50'
resnet_model_file_name = 'models/resnet50_v1.onnx'
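If multiple users share the same Wallaroo instance, the random suffix generated during the import step can be appended to these names to keep them unique. A minimal sketch, assuming per-user names are wanted; the suffixed names below are illustrative and not required by the tutorial.
# optional: append the random suffix to avoid name collisions between users
workspace_name = f'cv-arm-example-{suffix}'
pipeline_name = f'cv-sample-{suffix}'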
Create Workspace
The workspace will be created or connected to, and set as the default workspace for this session. Once set, all model uploads and pipeline creations will take place in that workspace.
workspace = wl.get_workspace(name=workspace_name, create_if_not_exist=True)
wl.set_current_workspace(workspace)
wl.get_current_workspace()
{'name': 'cv-arm-example', 'id': 26, 'archived': False, 'created_by': '0e5060a5-218c-47c1-9678-e83337494184', 'created_at': '2023-09-08T21:54:56.56663+00:00', 'models': [{'name': 'x86-resnet50', 'versions': 1, 'owner_id': '""', 'last_update_time': datetime.datetime(2023, 9, 8, 21, 55, 1, 675188, tzinfo=tzutc()), 'created_at': datetime.datetime(2023, 9, 8, 21, 55, 1, 675188, tzinfo=tzutc())}, {'name': 'arm-resnet50', 'versions': 1, 'owner_id': '""', 'last_update_time': datetime.datetime(2023, 9, 8, 21, 55, 6, 69116, tzinfo=tzutc()), 'created_at': datetime.datetime(2023, 9, 8, 21, 55, 6, 69116, tzinfo=tzutc())}], 'pipelines': [{'name': 'cv-sample', 'create_time': datetime.datetime(2023, 9, 8, 21, 54, 57, 62345, tzinfo=tzutc()), 'definition': '[]'}]}
Create Pipeline and Upload Model
We will now create or connect to our pipeline, then upload the model with its architecture set to ARM. Model architectures are set at model upload.
pipeline = wl.build_pipeline(pipeline_name)
from wallaroo.engine_config import Architecture
arm_resnet_model = wl.upload_model(arm_resnet_model_name,
resnet_model_file_name,
framework=Framework.ONNX,
arch=Architecture.ARM)
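The workspace listing above also contains an x86-resnet50 model. If you want to compare architectures, the same ONNX file can be uploaded a second time with a different architecture setting. This is a sketch only, assuming that Architecture.X86 is available in your SDK version and using x86-resnet50 as an illustrative model name.
# illustrative second upload of the same ONNX file targeting x86 nodepools
x86_resnet_model = wl.upload_model('x86-resnet50',
                                    resnet_model_file_name,
                                    framework=Framework.ONNX,
                                    arch=Architecture.X86)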
Deploy Pipeline
With the model uploaded, we can add it as a step in the pipeline, then deploy it.
Once deployed, resources from the Wallaroo instance will be reserved and the pipeline will be ready to use the model to perform inference requests.
The deployment configuration inherits the model’s architecture setting: when the pipeline is deployed, the model is automatically scheduled onto nodepools with ARM processor nodes.
deployment_config = (wallaroo.deployment_config
.DeploymentConfigBuilder()
.cpus(2)
.memory('2Gi')
.build()
)
# undeploy the pipeline if previously used
pipeline.undeploy()
# clear previous steps if previously used
pipeline.clear()
# add the model to the pipeline as a model step
pipeline.add_model_step(arm_resnet_model)
pipeline.deploy(deployment_config = deployment_config)
Waiting for undeployment - this will take up to 45s .................................... ok
Waiting for deployment - this will take up to 45s ............... ok
name | cv-sample |
---|---|
created | 2023-09-08 21:54:57.062345+00:00 |
last_updated | 2023-09-08 21:56:26.218871+00:00 |
deployed | True |
tags | |
versions | 3aff896f-52cb-478b-9cd7-64c3212d768f, ccc676d6-019c-4f9a-8866-033950a5907b, 68297806-92bb-4dce-8c10-a1f1d278ab2a |
steps | x86-resnet50 |
published | False |
pipeline.status()
{'status': 'Running',
'details': [],
'engines': [{'ip': '10.244.3.61',
'name': 'engine-57548b6596-qx8k8',
'status': 'Running',
'reason': None,
'details': [],
'pipeline_statuses': {'pipelines': [{'id': 'cv-sample',
'status': 'Running'}]},
'model_statuses': {'models': [{'name': 'arm-resnet50',
'version': 'dec621e2-8b13-44cf-a330-4fdada1f518e',
'sha': 'c6c8869645962e7711132a7e17aced2ac0f60dcdc2c7faa79b2de73847a87984',
'status': 'Running'}]}}],
'engine_lbs': [{'ip': '10.244.3.62',
'name': 'engine-lb-584f54c899-56c8l',
'status': 'Running',
'reason': None,
'details': []}],
'sidekicks': []}
ARM Inference
We will now perform an inference through the model deployed on ARM using an Apache Arrow table.
startTime = time.time()
# pass the table in
results = pipeline.infer_from_file('./data/image_224x224.arrow')
endTime = time.time()
arm_time = endTime-startTime
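The returned results and the elapsed time can then be inspected. A minimal sketch, assuming the inference request returns an Apache Arrow table; the to_pandas() conversion and head() preview are illustrative only.
# preview the inference results and report the elapsed time on ARM
display(results.to_pandas().head())
print(f'ARM inference time: {arm_time:.2f} seconds')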
Undeploy the Pipeline
With the inference complete, we can undeploy the pipeline and return the resources back to the Wallaroo instance.
pipeline.undeploy()
Waiting for undeployment - this will take up to 45s ...................................... ok
name | cv-sample |
---|---|
created | 2023-09-08 21:54:57.062345+00:00 |
last_updated | 2023-09-08 21:56:26.218871+00:00 |
deployed | False |
tags | |
versions | 3aff896f-52cb-478b-9cd7-64c3212d768f, ccc676d6-019c-4f9a-8866-033950a5907b, 68297806-92bb-4dce-8c10-a1f1d278ab2a |
steps | x86-resnet50 |
published | False |