Deploy the Model in Wallaroo

The following tutorials are available from the Wallaroo Tutorials Repository.

Stage 3: Deploy the Model in Wallaroo

In this stage, we upload the trained model and the processing steps to Wallaroo, then set up and deploy the inference pipeline.

Once deployed, we can feed the newest batch of data to the pipeline, run the inferences, and write the results to a results table.

For clarity in this demo, we have split the training/upload task into two notebooks:

  • 02_automated_training_process.ipynb: Train and pickle ML model.
  • 03_deploy_model.ipynb: Upload the model to Wallaroo and deploy into a pipeline.

Assuming no changes are made to the structure of the model, these two notebooks (or a script based on them) can be scheduled to run on a regular basis to refresh the model with more recent training data and update the inference pipeline, as sketched below.
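As one illustrative way to script that schedule (papermill and the output paths here are assumptions of this writeup, not part of the tutorial), both notebooks can be executed programmatically from a script that a scheduler such as cron invokes:

import papermill as pm

# A minimal sketch: execute the training notebook, then the deployment notebook.
# The output notebook paths are illustrative placeholders.
pm.execute_notebook("02_automated_training_process.ipynb",
                    "output/02_automated_training_process.ipynb")
pm.execute_notebook("03_deploy_model.ipynb",
                    "output/03_deploy_model.ipynb")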

This notebook is expected to run within the Wallaroo instance’s Jupyter Hub service to provide access to all required Wallaroo libraries and functionality.

Resources

The following resources are used as part of this tutorial:

  • data
    • data/seattle_housing_col_description.txt: Describes the columns used as part of the data analysis.
    • data/seattle_housing.csv: Sample data of the Seattle, Washington housing market between 2014 and 2015.
  • code
    • simdb.py: A simulated database to demonstrate sending and receiving queries.
  • models
    • ./models/housing_model_xgb.onnx: Model created in Stage 2: Training Process Automation Setup.
    • ./models/preprocess_step.zip: A preprocessing model that formats the data for acceptance by the model.
    • ./models/postprocess_step.zip: A postprocessing model that formats the data output by housing_model_xgb.onnx for the inference output.

Steps

The process of uploading the model to Wallaroo follows these steps:

Connect to Wallaroo

First we import the libraries required to connect to the Wallaroo instance, then open the connection.

import json
import pickle
import pandas as pd
import numpy as np
import pyarrow as pa

import simdb # module for the purpose of this demo to simulate pulling data from a database

import wallaroo
from wallaroo.object import EntityNotFoundError

# used to display dataframe information without truncating
from IPython.display import display
pd.set_option('display.max_colwidth', None)

import datetime

Connect to the Wallaroo Instance

The first step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection in a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). For more information on Wallaroo Client settings, see the Client Connection guide.
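As an illustrative sketch only (the endpoint URL and parameter usage below are assumptions of this writeup; the Client Connection guide is the authoritative reference), connecting from outside the JupyterHub service typically passes the instance's API endpoint explicitly:

# Hypothetical example: connect from outside the Wallaroo JupyterHub service.
# Replace the api_endpoint with your own instance's address.
wl = wallaroo.Client(api_endpoint="https://wallaroo.example.com")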

# Login through local Wallaroo instance

wl = wallaroo.Client()
workspace_name = 'housepricing'
model_name = "housepricemodel"
model_file = "./models/housing_model_xgb.onnx"
pipeline_name = "housing-pipe"
new_workspace = wl.get_workspace(name=workspace_name, create_if_not_exist=True)
_ = wl.set_current_workspace(new_workspace)

Upload the Model

With the connection set and workspace prepared, upload the model created in 02_automated_training_process.ipynb into the current workspace.

To ensure the model's input contract matches the data we provide, the model is configured with tensor_fields=["tensor"]: regardless of the input names in the model's native schema, Wallaroo will accept inference inputs submitted in the field tensor.

hpmodel = (wl.upload_model(model_name,
                           model_file,
                           framework=wallaroo.framework.Framework.ONNX)
             .configure(tensor_fields=["tensor"])
          )

Upload the Processing Modules

We upload the preprocessing and postprocessing models: the preprocessing model formats the incoming data into the shape the model is trained to accept, and the postprocessing model formats the model's output into a form our production system can use.

For more details on deploying Python models in Wallaroo, see Model Uploads and Registrations: Python Models.

input_schema = pa.schema([
    pa.field('id', pa.int64()),
    pa.field('date', pa.string()),
    pa.field('list_price', pa.float64()),
    pa.field('bedrooms', pa.int64()),
    pa.field('bathrooms', pa.float64()),
    pa.field('sqft_living', pa.int64()),
    pa.field('sqft_lot', pa.int64()),
    pa.field('floors', pa.float64()),
    pa.field('waterfront', pa.int64()),
    pa.field('view', pa.int64()),
    pa.field('condition', pa.int64()),
    pa.field('grade', pa.int64()),
    pa.field('sqft_above', pa.int64()),
    pa.field('sqft_basement', pa.int64()),
    pa.field('yr_built', pa.int64()),
    pa.field('yr_renovated', pa.int64()),
    pa.field('zipcode', pa.int64()),
    pa.field('lat', pa.float64()),
    pa.field('long', pa.float64()),
    pa.field('sqft_living15', pa.int64()),
    pa.field('sqft_lot15', pa.int64()),
    pa.field('sale_price', pa.float64())
])
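As an optional sanity check (an assumption of this writeup, not a step from the tutorial), pyarrow can confirm that sample rows from the simulated database conform to this input schema before the upload; pa, pd, and simdb are already imported above:

# Pull a few sample rows and attempt to cast them to the declared schema.
# This raises an Arrow error on any incompatible column; passing silently
# means the schema and the data agree.
conn = simdb.simulate_db_connection()
sample = pd.read_sql_query(f"select * from {simdb.tablename} limit 5", conn)
conn.close()
pa.Table.from_pandas(sample, schema=input_schema, preserve_index=False)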

output_schema = pa.schema([
    pa.field('tensor', pa.list_(pa.float32(), list_size=18))
])
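As a rough illustration only (the real logic lives inside preprocess_step.zip and is not reproduced here; the function name and column ordering below are assumptions), a preprocessing step of this kind selects the feature columns and emits one 18-element float tensor per row, matching the output_schema above:

import numpy as np
import pandas as pd

# Hypothetical feature list: 18 columns to match the 18-element tensor above.
FEATURE_COLUMNS = [
    'bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors',
    'waterfront', 'view', 'condition', 'grade', 'sqft_above',
    'sqft_basement', 'yr_built', 'yr_renovated', 'lat', 'long',
    'sqft_living15', 'sqft_lot15', 'zipcode'
]

def to_tensor(df: pd.DataFrame) -> pd.DataFrame:
    # Collapse the raw listing columns into one float32 list per row.
    tensors = df[FEATURE_COLUMNS].astype(np.float32).values.tolist()
    return pd.DataFrame({'tensor': tensors})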

preprocess_model = wl.upload_model("preprocess-step", "./models/preprocess_step.zip",
                                   framework=wallaroo.framework.Framework.PYTHON,
                                   input_schema=input_schema, output_schema=output_schema)
Waiting for model loading - this will take up to 10.0min.
Model is pending loading to a container runtime..
Model is attempting loading to a container runtime........successful

Ready

Postprocess model:

input_schema = pa.schema([
    pa.field('variable', pa.list_(pa.float32()))
])

output_schema = pa.schema([
    pa.field('variable', pa.list_(pa.float32()))
])

postprocess_model = wl.upload_model("postprocess-step", "./models/postprocess_step.zip",
                                    framework=wallaroo.framework.Framework.PYTHON,
                                    input_schema=input_schema, output_schema=output_schema)
Waiting for model loading - this will take up to 10.0min.
Model is pending loading to a container runtime..
Model is attempting loading to a container runtime.......successful

Ready

Create and Deploy the Pipeline

Create the pipeline with the preprocess model, housing model, and postprocess model as pipeline steps, then deploy the new pipeline.

pipeline = wl.build_pipeline(pipeline_name)

# undeploy and clear any steps left over if the tutorial was run before
pipeline.undeploy()
pipeline.clear()

pipeline.add_model_step(preprocess_model)
pipeline.add_model_step(hpmodel)
pipeline.add_model_step(postprocess_model)

# deploy with 1 replica, 0.5 CPU, and 1Gi of memory per engine
deploy_config = wallaroo.DeploymentConfigBuilder().replica_count(1).cpus(0.5).memory("1Gi").build()
pipeline.deploy(deployment_config=deploy_config)
name          housing-pipe
created       2024-04-08 18:10:48.940254+00:00
last_updated  2024-04-11 19:51:23.423449+00:00
deployed      True
arch          x86
accel         none
tags
versions      49dce46a-3cc8-44d7-b1f8-2fe2b1ef4919, 6709af7a-f6cc-40f1-a0f6-e43912d1e308, 682ab64c-239d-4cda-9f84-d8395b9747b9, dea96d38-f411-481e-a719-e4e307c5f51b, 34926083-1324-45b3-bded-36bec313bd46, 7b730932-9462-4d49-b42c-80b1996d5707, 2497971a-6c59-42c3-ac54-09917308be6a, 902bd20a-b57e-4dae-a045-df14013a33f0, e7da71eb-8642-4379-ae38-e2e4f57705e4, bb3dc349-1709-45ac-8c6f-2734838125d5
steps         housepricemodel
published     False
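Deployment takes a few moments to complete. As a small hedged sketch (not part of the original tutorial), the dictionary returned by pipeline.status(), shown below, can be polled until the engine reports Running:

import time

# Poll until the pipeline reports 'Running'; give up after roughly five minutes.
deadline = time.time() + 300
while pipeline.status()['status'] != 'Running' and time.time() < deadline:
    time.sleep(5)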
pipeline.status()
{'status': 'Running',
 'details': [],
 'engines': [{'ip': '10.28.0.157',
   'name': 'engine-6dbb497ccb-md8bt',
   'status': 'Running',
   'reason': None,
   'details': [],
   'pipeline_statuses': {'pipelines': [{'id': 'housing-pipe',
      'status': 'Running'}]},
   'model_statuses': {'models': [{'name': 'housepricemodel',
      'sha': 'd8b79e526eed180d39d4653b39bebd9d06e6ae7f68293b5745775a9093a3ae7d',
      'status': 'Running',
      'version': 'e48d3420-e416-41d1-953b-2996cb732bbd'},
     {'name': 'preprocess-step',
      'sha': 'c09bbca6748ff23d83f48f57446c3ad6b5758c403936157ab731b3c269c0afb9',
      'status': 'Running',
      'version': 'd1c227f6-d22c-407e-a7d8-aa5a4b5432b5'},
     {'name': 'postprocess-step',
      'sha': 'c4dfec3dd259395598646ce85b8efd7811840dc726bf4915c39d862b87fc7070',
      'status': 'Running',
      'version': 'e4a1e6ce-ce7b-4a7b-81e9-44f99d23faeb'}]}}],
 'engine_lbs': [{'ip': '10.28.1.106',
   'name': 'engine-lb-d7cc8fc9c-2gqtp',
   'status': 'Running',
   'reason': None,
   'details': []}],
 'sidekicks': [{'ip': '10.28.3.112',
   'name': 'engine-sidekick-postprocess-step-113-84494f4569-d7dvm',
   'status': 'Running',
   'reason': None,
   'details': [],
   'statuses': '\n'},
  {'ip': '10.28.2.235',
   'name': 'engine-sidekick-preprocess-step-112-7f78fb6ccc-2lpqk',
   'status': 'Running',
   'reason': None,
   'details': [],
   'statuses': '\n'}]}

Test the Pipeline

We will read a single record from the simulated house_listings table and run an inference on it. When successful, we will undeploy the pipeline to return the resources to the Kubernetes environment.

conn = simdb.simulate_db_connection()

# create the query
query = f"select * from {simdb.tablename} limit 1"
print(query)

# read in the data
singleton = pd.read_sql_query(query, conn)
conn.close()

display(singleton.loc[:, ["id", "date", "list_price", "bedrooms", "bathrooms", "sqft_living", "sqft_lot"]])
select * from house_listings limit 1
   id          date        list_price  bedrooms  bathrooms  sqft_living  sqft_lot
0  7129300520  2023-08-29  221900.0    3         1.0        1180         5650
result = pipeline.infer(singleton)
# display the entire result
display(result)
# display just the output
display(result.loc[:, ['time', 'out.variable']])
   time                     in.bathrooms  in.bedrooms  in.condition  in.date     in.floors  in.grade  in.id       in.lat   in.list_price  ...  in.sqft_living15  in.sqft_lot  in.sqft_lot15  in.view  in.waterfront  in.yr_built  in.yr_renovated  in.zipcode  out.variable  anomaly.count
0  2024-04-11 19:51:48.052  1.0           3            3             2023-08-29  1.0        7         7129300520  47.5112  221900.0       ...  1340              5650         5650           0        0              1955         0                98178       [224852.0]    0

1 rows × 25 columns

   time                     out.variable
0  2024-04-11 19:51:48.052  [224852.0]
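The prediction comes back as a one-element list in the out.variable column. As a small usage sketch consistent with the output above (the variable name predicted_price is ours), the scalar price can be extracted directly:

# out.variable holds a one-element list per row; take the first element of row 0.
predicted_price = result.loc[0, 'out.variable'][0]
print(f"Predicted sale price: {predicted_price}")  # 224852.0 for this record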

When finished, we undeploy the pipeline to return the resources to the environment.

pipeline.undeploy()
name          housing-pipe
created       2024-04-08 18:10:48.940254+00:00
last_updated  2024-04-11 19:51:23.423449+00:00
deployed      False
arch          x86
accel         none
tags
versions      49dce46a-3cc8-44d7-b1f8-2fe2b1ef4919, 6709af7a-f6cc-40f1-a0f6-e43912d1e308, 682ab64c-239d-4cda-9f84-d8395b9747b9, dea96d38-f411-481e-a719-e4e307c5f51b, 34926083-1324-45b3-bded-36bec313bd46, 7b730932-9462-4d49-b42c-80b1996d5707, 2497971a-6c59-42c3-ac54-09917308be6a, 902bd20a-b57e-4dae-a045-df14013a33f0, e7da71eb-8642-4379-ae38-e2e4f57705e4, bb3dc349-1709-45ac-8c6f-2734838125d5
steps         housepricemodel
published     False

With this stage complete, we can proceed to Stage 4: Regular Batch Inference.