This tutorial and the assets can be downloaded as part of the Wallaroo Tutorials repository.
One of the biggest challenges facing organizations once they have a model trained is deploying the model: getting all of the resources together, MLOps configured, and systems prepared to allow inferences to run.
The next biggest challenge? Replacing the model while keeping the existing production systems running.
This tutorial demonstrates how Wallaroo model hot swap can update a pipeline step with a new model with one command. This lets organizations keep their production systems running while changing an ML model: the swap takes only milliseconds, and any inference requests received during that time are processed after the hot swap is completed.
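As a preview, the swap performed later in this tutorial comes down to a single call; here pipeline and replacement_model stand in for the objects created in the steps below.

# Replace pipeline step 0 with a new model and redeploy in place.
# The pipeline keeps serving inferences while the swap completes.
pipeline.replace_with_model_step(0, replacement_model).deploy()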
This example uses several house price prediction models that perform the same task, so their outputs can be compared directly after each swap.
This tutorial provides the following models:

- rf_model.onnx: The champion model that has been used in this environment for some time.
- gbr_model.onnx and xgb_model.onnx: Rival models that will be swapped in for the champion model.

For more information about Wallaroo and related features, see the Wallaroo Documentation Site.
This tutorial demonstrates the following steps:

- Create a workspace and pipeline, and upload the models.
- Deploy the pipeline with the rf_model.onnx model as a pipeline step and run sample inferences.
- Hot swap that step with the challenger models (gbr_model.onnx, then xgb_model.onnx) while keeping the pipeline deployed.

Load the Python libraries used to connect and interact with the Wallaroo instance.
import wallaroo
from wallaroo.object import EntityNotFoundError
# to display dataframe tables
from IPython.display import display
# used to display dataframe information without truncating
import pandas as pd
pd.set_option('display.max_colwidth', None)
import pyarrow as pa
The first step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.
This is accomplished using the wallaroo.Client()
command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.
If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). For more information on Wallaroo Client settings, see the Client Connection guide.
# Login through local Wallaroo instance
wl = wallaroo.Client()
The following variables are used in the later steps for creating the workspace, pipeline, and uploading the models. Modify them according to your organization’s requirements.
Just for the sake of this tutorial, we'll use the SDK below to create our workspace, assign it as our current workspace, then display all of the workspaces we have at the moment. We'll also set up the names for our models and pipelines here, so we have one spot to change names to whatever fits your organization's standards best.
To allow this tutorial to be run multiple times or by multiple users in the same Wallaroo instance, a random 4 character prefix will be added to the workspace, pipeline, and model names.
import string
import random
# make a random 4 character prefix
prefix= ''.join(random.choice(string.ascii_lowercase) for i in range(4))
workspace_name = f'{prefix}hotswapworkspace'
pipeline_name = f'{prefix}hotswappipeline'
original_model_name = f'{prefix}housingmodelcontrol'
original_model_file_name = './models/rf_model.onnx'
replacement_model_name01 = f'{prefix}gbrhousingchallenger'
replacement_model_file_name01 = './models/gbr_model.onnx'
replacement_model_name02 = f'{prefix}xgbhousingchallenger'
replacement_model_file_name02 = './models/xgb_model.onnx'
def get_workspace(name):
    workspace = None
    for ws in wl.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = wl.create_workspace(name)
    return workspace

def get_pipeline(name):
    try:
        pipeline = wl.pipelines_by_name(name)[0]
    except EntityNotFoundError:
        pipeline = wl.build_pipeline(name)
    return pipeline
We will create a workspace based on the variable names set above, and set the new workspace as the current workspace. This workspace is where new pipelines will be created and uploaded models will be stored for this session. Once the workspace is set, the pipeline will be created.
workspace = get_workspace(workspace_name)
wl.set_current_workspace(workspace)
pipeline = get_pipeline(pipeline_name)
pipeline
name | hjfkhotswappipeline |
---|---|
created | 2023-07-14 15:36:48.697941+00:00 |
last_updated | 2023-07-14 15:36:48.697941+00:00 |
deployed | (none) |
tags | |
versions | 2b84d42b-bda9-4cc8-b182-cb3856c2882b |
steps |
We can now upload all three models. In a later step, only one model will be added as a pipeline step; that is the model the pipeline will submit inference requests to.
original_model = wl.upload_model(original_model_name , original_model_file_name, framework=wallaroo.framework.Framework.ONNX)
replacement_model01 = wl.upload_model(replacement_model_name01 , replacement_model_file_name01, framework=wallaroo.framework.Framework.ONNX)
replacement_model02 = wl.upload_model(replacement_model_name02 , replacement_model_file_name02, framework=wallaroo.framework.Framework.ONNX)
wl.list_models()
Name | # of Versions | Owner ID | Last Updated | Created At |
---|---|---|---|---|
hjfkxgbhousingchallenger | 1 | "" | 2023-07-14 15:36:52.193971+00:00 | 2023-07-14 15:36:52.193971+00:00 |
hjfkgbrhousingchallenger | 1 | "" | 2023-07-14 15:36:51.451443+00:00 | 2023-07-14 15:36:51.451443+00:00 |
hjfkhousingmodelcontrol | 1 | "" | 2023-07-14 15:36:50.705124+00:00 | 2023-07-14 15:36:50.705124+00:00 |
With the models uploaded, we will add the original model as a pipeline step, then deploy the pipeline so it is available for performing inferences.
pipeline.add_model_step(original_model)
pipeline
name | hjfkhotswappipeline |
---|---|
created | 2023-07-14 15:36:48.697941+00:00 |
last_updated | 2023-07-14 15:36:48.697941+00:00 |
deployed | (none) |
tags | |
versions | 2b84d42b-bda9-4cc8-b182-cb3856c2882b |
steps |
pipeline.deploy()
name | hjfkhotswappipeline |
---|---|
created | 2023-07-14 15:36:48.697941+00:00 |
last_updated | 2023-07-14 15:36:55.684558+00:00 |
deployed | True |
tags | |
versions | 88c2fa3b-9d7e-494c-a84f-5786509b59f4, 2b84d42b-bda9-4cc8-b182-cb3856c2882b |
steps | hjfkhousingmodelcontrol |
pipeline.status()
{'status': 'Running',
'details': [],
'engines': [{'ip': '10.244.3.143',
'name': 'engine-96ddf456f-nlxtl',
'status': 'Running',
'reason': None,
'details': [],
'pipeline_statuses': {'pipelines': [{'id': 'hjfkhotswappipeline',
'status': 'Running'}]},
'model_statuses': {'models': [{'name': 'hjfkhousingmodelcontrol',
'version': '5c97c14e-b8f4-412c-b812-ec67ccc964b9',
'sha': 'e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6',
'status': 'Running'}]}}],
'engine_lbs': [{'ip': '10.244.4.189',
'name': 'engine-lb-584f54c899-fz26k',
'status': 'Running',
'reason': None,
'details': []}],
'sidekicks': []}
The pipeline is deployed with our model. The following inferences verify that the model is operating correctly: normal_input contains data for a typical house, while large_house_input contains data that the model should appraise at a significantly higher value.
normal_input = pd.DataFrame.from_records({"tensor": [[4.0, 2.5, 2900.0, 5505.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2900.0, 0.0, 47.6063, -122.02, 2970.0, 5251.0, 12.0, 0.0, 0.0]]})
result = pipeline.infer(normal_input)
display(result)
time | in.tensor | out.variable | check_failures | |
---|---|---|---|---|
0 | 2023-07-14 15:37:09.422 | [4.0, 2.5, 2900.0, 5505.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2900.0, 0.0, 47.6063, -122.02, 2970.0, 5251.0, 12.0, 0.0, 0.0] | [718013.7] | 0 |
large_house_input = pd.DataFrame.from_records({'tensor': [[4.0, 3.0, 3710.0, 20000.0, 2.0, 0.0, 2.0, 5.0, 10.0, 2760.0, 950.0, 47.6696, -122.261, 3970.0, 20000.0, 79.0, 0.0, 0.0]]})
large_house_result = pipeline.infer(large_house_input)
display(large_house_result)
time | in.tensor | out.variable | check_failures | |
---|---|---|---|---|
0 | 2023-07-14 15:37:09.871 | [4.0, 3.0, 3710.0, 20000.0, 2.0, 0.0, 2.0, 5.0, 10.0, 2760.0, 950.0, 47.6696, -122.261, 3970.0, 20000.0, 79.0, 0.0, 0.0] | [1514079.4] | 0 |
The pipeline is currently deployed and is able to handle inferences. The model will now be replaced without having to undeploy the pipeline. This is done using the pipeline method replace_with_model_step(index, model). Steps start at 0, so the method called below will replace step 0 in our pipeline with the replacement model.
As an exercise, this deployment can be performed while inferences are actively being submitted to the pipeline to show how quickly the swap takes place.
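A minimal sketch of that exercise is shown below, assuming the pipeline and normal_input objects defined earlier; it keeps submitting inferences in a background thread so the hot swap in the next cell can be observed while traffic is flowing. This helper is hypothetical and not part of the original notebook.

import threading
import time

# Hypothetical helper: continuously submit inferences in the background
# so the hot swap can be observed while requests are in flight.
stop_event = threading.Event()

def inference_loop():
    while not stop_event.is_set():
        looped_result = pipeline.infer(normal_input)
        # Print the prediction so any change after the swap is visible.
        print(looped_result.loc[0, "out.variable"])
        time.sleep(0.5)

worker = threading.Thread(target=inference_loop, daemon=True)
worker.start()
# After running the hot swap below, stop the loop with:
# stop_event.set(); worker.join()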
pipeline.replace_with_model_step(0, replacement_model01).deploy()
To verify the swap, we’ll submit the same inferences and display the result. Note that out.variable
has a different output than with the original model.
normal_input = pd.DataFrame.from_records({"tensor": [[4.0, 2.5, 2900.0, 5505.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2900.0, 0.0, 47.6063, -122.02, 2970.0, 5251.0, 12.0, 0.0, 0.0]]})
result02 = pipeline.infer(normal_input)
display(result02)
time | in.tensor | out.variable | check_failures | |
---|---|---|---|---|
0 | 2023-07-14 15:37:25.853 | [4.0, 2.5, 2900.0, 5505.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2900.0, 0.0, 47.6063, -122.02, 2970.0, 5251.0, 12.0, 0.0, 0.0] | [704901.9] | 0 |
large_house_input = pd.DataFrame.from_records({'tensor': [[4.0, 3.0, 3710.0, 20000.0, 2.0, 0.0, 2.0, 5.0, 10.0, 2760.0, 950.0, 47.6696, -122.261, 3970.0, 20000.0, 79.0, 0.0, 0.0]]})
large_house_result02 = pipeline.infer(large_house_input)
display(large_house_result02)
time | in.tensor | out.variable | check_failures | |
---|---|---|---|---|
0 | 2023-07-14 15:37:26.255 | [4.0, 3.0, 3710.0, 20000.0, 2.0, 0.0, 2.0, 5.0, 10.0, 2760.0, 950.0, 47.6696, -122.261, 3970.0, 20000.0, 79.0, 0.0, 0.0] | [1981238.0] | 0 |
Let’s do one more hot swap, this time with our replacement_model02
, then get some test inferences.
pipeline.replace_with_model_step(0, replacement_model02).deploy()
name | hjfkhotswappipeline |
---|---|
created | 2023-07-14 15:36:48.697941+00:00 |
last_updated | 2023-07-14 15:37:27.279807+00:00 |
deployed | True |
tags | |
versions | 9126c35d-68db-4b41-915e-14ebef5b1b51, 5c643e10-c9bf-48db-ad25-a5e38b6faf5f, 88c2fa3b-9d7e-494c-a84f-5786509b59f4, 2b84d42b-bda9-4cc8-b182-cb3856c2882b |
steps | hjfkhousingmodelcontrol |
normal_input = pd.DataFrame.from_records({"tensor": [[4.0, 2.5, 2900.0, 5505.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2900.0, 0.0, 47.6063, -122.02, 2970.0, 5251.0, 12.0, 0.0, 0.0]]})
result03 = pipeline.infer(normal_input)
display(result03)
time | in.tensor | out.variable | check_failures | |
---|---|---|---|---|
0 | 2023-07-14 15:37:31.114 | [4.0, 2.5, 2900.0, 5505.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2900.0, 0.0, 47.6063, -122.02, 2970.0, 5251.0, 12.0, 0.0, 0.0] | [659806.0] | 0 |
large_house_input = pd.DataFrame.from_records({'tensor': [[4.0, 3.0, 3710.0, 20000.0, 2.0, 0.0, 2.0, 5.0, 10.0, 2760.0, 950.0, 47.6696, -122.261, 3970.0, 20000.0, 79.0, 0.0, 0.0]]})
large_house_result03 = pipeline.infer(large_house_input)
display(large_house_result03)
time | in.tensor | out.variable | check_failures | |
---|---|---|---|---|
0 | 2023-07-14 15:37:31.514 | [4.0, 3.0, 3710.0, 20000.0, 2.0, 0.0, 2.0, 5.0, 10.0, 2760.0, 950.0, 47.6696, -122.261, 3970.0, 20000.0, 79.0, 0.0, 0.0] | [2176827.0] | 0 |
We’ll display the outputs of our inferences through the different models for comparison.
display([original_model_name, result.loc[0, "out.variable"]])
display([replacement_model_name01, result02.loc[0, "out.variable"]])
display([replacement_model_name02, result03.loc[0, "out.variable"]])
['hjfkhousingmodelcontrol', [718013.7]]
['hjfkgbrhousingchallenger', [704901.9]]
['hjfkxgbhousingchallenger', [659806.0]]
display([original_model_name, large_house_result.loc[0, "out.variable"]])
display([replacement_model_name01, large_house_result02.loc[0, "out.variable"]])
display([replacement_model_name02, large_house_result03.loc[0, "out.variable"]])
['hjfkhousingmodelcontrol', [1514079.4]]
['hjfkgbrhousingchallenger', [1981238.0]]
['hjfkxgbhousingchallenger', [2176827.0]]
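As an optional follow-up, the same results can be assembled into a single pandas table for a side-by-side comparison. This is a small sketch using the result objects above, not part of the original notebook output.

# Collect the predictions from each model into one DataFrame for comparison.
comparison = pd.DataFrame({
    "model": [original_model_name, replacement_model_name01, replacement_model_name02],
    "normal_house": [
        result.loc[0, "out.variable"][0],
        result02.loc[0, "out.variable"][0],
        result03.loc[0, "out.variable"][0],
    ],
    "large_house": [
        large_house_result.loc[0, "out.variable"][0],
        large_house_result02.loc[0, "out.variable"][0],
        large_house_result03.loc[0, "out.variable"][0],
    ],
})
display(comparison)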
With the tutorial complete, the pipeline is undeployed to return the resources back to the Wallaroo instance.
pipeline.undeploy()
name | hjfkhotswappipeline |
---|---|
created | 2023-07-14 15:36:48.697941+00:00 |
last_updated | 2023-07-14 15:37:27.279807+00:00 |
deployed | False |
tags | |
versions | 9126c35d-68db-4b41-915e-14ebef5b1b51, 5c643e10-c9bf-48db-ad25-a5e38b6faf5f, 88c2fa3b-9d7e-494c-a84f-5786509b59f4, 2b84d42b-bda9-4cc8-b182-cb3856c2882b |
steps | hjfkhousingmodelcontrol |