Wallaroo SDK Upload and Deploy Tutorial: XGBoost Booster RF Regression

How to upload an XGBoost Booster RF Regression model to Wallaroo

This tutorial and the assets can be downloaded as part of the Wallaroo Tutorials repository.

Booster RF Regression Example

The following tutorial demonstrates deploying and serving a Booster RF Regression model in Wallaroo.

The following XGBoost model types are supported by Wallaroo. XGBoost model types not supported by Wallaroo auto packaging are supported via Arbitrary Python models, also known as Bring Your Own Predict (BYOP).

XGBoost Model Type                  Wallaroo Auto Packaging Supported
XGBClassifier                       ✓
XGBRegressor                        ✓
Booster Classifier                  ✓
Booster Regressor                   ✓
Booster Random Forest Regressor     ✓
Booster Random Forest Classifier    ✓
XGBRFClassifier                     ✓
XGBRFRegressor                      ✓
XGBRanker*                          X
  • XGBRanker XGBoost models are currently supported via converting them to BYOP models.

Goal

Upload, deploy, and serve a sample Booster RF Regression model.

Resources

This tutorial provides the following:

  • Models:
    • ./models/xgb_booster_rf_regression.pkl: The sample XGBoost model, trained on the sklearn.datasets.load_diabetes dataset. A sketch of how a comparable model could be trained and saved follows this list.
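
The pickled model is provided with the tutorial; the steps below only upload it. As a rough sketch, a comparable Booster random forest regression model could be trained and saved as follows. The hyperparameters shown here are illustrative assumptions, not the settings used to build the provided file.

import pickle

from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from xgboost import DMatrix, train

# Train/test split matching the inference section later in this tutorial.
X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
dtrain = DMatrix(X_train, label=y_train)

# Random forest mode for the Booster API: several parallel trees, one boosting round.
# These hyperparameter values are assumptions for illustration only.
params = {
    "objective": "reg:squarederror",
    "num_parallel_tree": 100,
    "subsample": 0.8,
    "colsample_bynode": 0.8,
    "learning_rate": 1.0,
}
booster = train(params, dtrain, num_boost_round=1)

with open("./models/xgb_booster_rf_regression.pkl", "wb") as f:
    pickle.dump(booster, f)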

Prerequisites

  • A deployed Wallaroo instance with Edge Registry Services and Edge Observability enabled.
  • The following Python libraries installed:
    • wallaroo: The Wallaroo SDK. Included with the Wallaroo JupyterHub service by default.
    • pandas: Pandas, mainly used for Pandas DataFrames.
  • An X64 Docker deployment to deploy the model on an edge location.
  • The notebook “Wallaroo Run Anywhere Model Drift Observability with Assays: Preparation” has been run, and the model edge deployments executed.

Steps

Import Libraries

The first step is to import the libraries we will need. See ./requirements.txt for a list of additional libraries used with this tutorial.

import wallaroo
from wallaroo.pipeline import Pipeline
from wallaroo.deployment_config import DeploymentConfigBuilder
import pyarrow as pa
from wallaroo.framework import Framework

import pickle
from sklearn.datasets import load_diabetes
from xgboost import train, DMatrix
from sklearn.model_selection import train_test_split

Open a Connection to Wallaroo

The next step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo installation and available through the JupyterHub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). For more details on logging in through Wallaroo, see the Wallaroo SDK Essentials Guide: Client Connection.

wl = wallaroo.Client()
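
If connecting from outside the Wallaroo JupyterHub service, the client is typically created with the instance's API endpoint instead. The following is a minimal sketch: the URL is a placeholder and the keyword arguments are assumptions to confirm against the Wallaroo SDK Essentials Guide: Client Connection for your SDK version.

# Sketch of an external SDK connection. The URL is a placeholder and the
# keyword arguments are assumptions; confirm them for your Wallaroo SDK version.
wl = wallaroo.Client(
    api_endpoint="https://wallaroo.example.com",  # hypothetical instance endpoint
    auth_type="sso",                              # browser-based SSO login
)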

Set Variables

We’ll set the name of our workspace, pipeline, model, and model file. Workspace names must be unique across the Wallaroo instance. If you share the instance with other users, consider appending a few randomly generated characters to the workspace name to prevent collisions with other users’ workspaces. For this tutorial, the workspace name is hard coded so the notebook uses the same workspace each time it is run.

workspace_name = 'booster-rf-regression'
pipeline_name = 'booster-rf-regression'

model_name = 'booster-rf-regression'
model_file_name = './models/xgb_booster_rf_regression.pkl'

Create Workspace and Pipeline

We will now create the Wallaroo workspace to store our model and set it as the current workspace. Future commands will default to this workspace for pipeline creation, model uploads, etc. We’ll create our Wallaroo pipeline to deploy our model.

workspace = wl.get_workspace(name=workspace_name, create_if_not_exist=True)
wl.set_current_workspace(workspace)

pipeline = wl.build_pipeline(pipeline_name)

Upload XGBoost Model

XGBoost models are uploaded to Wallaroo through the wallaroo.client.Client.upload_model method.

Upload XGBoost Model Parameters

The following parameters are available for XGBoost models.

  • name: string (Required). The name of the model. Model names are unique per workspace. Models uploaded with the same name are assigned as a new version of the model.
  • path: string (Required). The path to the model file being uploaded.
  • framework: string (Required). Set as Framework.XGBOOST.
  • input_schema: pyarrow.lib.Schema (Required). The input schema in Apache Arrow schema format.
  • output_schema: pyarrow.lib.Schema (Required). The output schema in Apache Arrow schema format.
  • convert_wait: bool (Optional) (Default: True).
    • True: Waits in the script for the model conversion to complete.
    • False: Proceeds with the script without waiting for the model conversion process to complete.

Once the upload process starts, the model is containerized by the Wallaroo instance. This process may take up to 10 minutes.

Upload XGBoost Model Return

The following is returned with a successful model upload and conversion.

  • name: string. The name of the model.
  • version: string. The model version as a unique UUID.
  • file_name: string. The file name of the model as stored in Wallaroo.
  • image_path: string. The image used to deploy the model in the Wallaroo engine.
  • last_update_time: DateTime. When the model was last updated.

Configure Input and Output Schemas

First we configure the input and output schemas in PyArrow format.

input_schema = pa.schema([
    pa.field('inputs', pa.list_(pa.float32(), list_size=10))
])

output_schema = pa.schema([
    pa.field('predictions', pa.float32()),
])
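
Each inference row must match the input schema: a single inputs field holding a list of 10 float values. For illustration only, a conforming single-row DataFrame could look like the sketch below; the ten values are arbitrary placeholders, and the tutorial builds the real inference frame from the diabetes test split later.

# Illustrative only: one inference row matching the input schema above.
# The ten feature values are arbitrary placeholders, not real diabetes data.
import pandas as pd

example_row = pd.DataFrame({"inputs": [[0.0] * 10]})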

Upload Model Example

With the input and output schemas defined, we now upload the XGBoost model.

model = wl.upload_model(model_name, 
                        model_file_name, 
                        framework=Framework.XGBOOST, 
                        input_schema=input_schema, 
                        output_schema=output_schema)
model
Waiting for model loading - this will take up to 10.0min.
Model is pending loading to a native runtime.
Model is attempting loading to a native runtime.successful

Ready
Name              booster-rf-regression
Version           6b13d69b-6398-4e97-9e07-0677ceb07c34
File Name         xgb_booster_rf_regression.pkl
SHA               b58b410a1eb4690dcf1bdcd08157f37253d8316cafd406a165b484ceb47408b3
Status            ready
Image Path        None
Architecture      x86
Acceleration      none
Updated At        2024-19-Jul 16:34:47
Workspace id      38
Workspace name    booster-rf-regression

Deploy Pipeline

With the model uploaded and packaged, we add the model as a pipeline model step. For our deployment, we will set a minimum deployment configuration - this is the amount of resources the deployed pipeline uses from the cluster.

Once set, we deploy the pipeline, which allocates the assigned resources for the cluster and makes it available for inference requests.

pipeline.add_model_step(model)

deployment_config = DeploymentConfigBuilder() \
    .cpus(0.25).memory('1Gi') \
    .build()
pipeline.deploy(deployment_config=deployment_config)
pipeline.status()
{'status': 'Running',
 'details': [],
 'engines': [{'ip': '10.28.1.13',
   'name': 'engine-8577cb8c44-65r7q',
   'status': 'Running',
   'reason': None,
   'details': [],
   'pipeline_statuses': {'pipelines': [{'id': 'booster-rf-regression',
      'status': 'Running',
      'version': '9c692e82-4aca-41a6-b541-c601c3394069'}]},
   'model_statuses': {'models': [{'name': 'booster-rf-regression',
      'sha': 'b58b410a1eb4690dcf1bdcd08157f37253d8316cafd406a165b484ceb47408b3',
      'status': 'Running',
      'version': 'bf80294b-28db-49a2-8bb7-0a52e2cfccc7'}]}}],
 'engine_lbs': [{'ip': '10.28.1.12',
   'name': 'engine-lb-6b59985857-s7vfp',
   'status': 'Running',
   'reason': None,
   'details': []}],
 'sidekicks': []}
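
The deployment configuration above requests minimal resources. If the pipeline needs more capacity, the same builder pattern scales up. The sketch below assumes the replica_count option of DeploymentConfigBuilder; confirm the available builder options for your SDK version.

# Sketch only: a larger deployment configuration. The replica_count option and the
# resource values are illustrative; verify the builder options for your SDK version.
larger_config = DeploymentConfigBuilder() \
    .replica_count(2) \
    .cpus(1).memory('2Gi') \
    .build()
# pipeline.deploy(deployment_config=larger_config)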

Run Sample Inference

The dataset is from the sklearn.datasets.load_diabetes examples. The records are converted to a pandas DataFrame, which is submitted to the deployed model in Wallaroo for an inference request.

dataset = load_diabetes()

# assuming the model is trained on the following DMatrix
X, y = dataset.data, dataset.target
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
dtrain = DMatrix(X_train, label=y_train)
dtest = DMatrix(X_test, label=y_test)

data = dtest.get_data().todense()[:100]
import pandas as pd

dataframe = pd.DataFrame({"inputs": data.tolist()})
results = pipeline.infer(dataframe)
results
     time                     in.inputs                                           out.predictions  anomaly.count
0    2024-07-19 16:34:52.774  [0.0453409851, -0.0446416363, -0.0062059541, -...  154.345660       0
1    2024-07-19 16:34:52.774  [0.0925639868, -0.0446416363, 0.0369065292, 0....   161.982850       0
2    2024-07-19 16:34:52.774  [0.0635036752, 0.0506801195, -0.0040503298, -0...   154.345660       0
3    2024-07-19 16:34:52.774  [0.0961965248, -0.0446416363, 0.0519958995, 0....   244.706510       0
4    2024-07-19 16:34:52.774  [0.0126481373, 0.0506801195, -0.0202175118, -0...   108.613180       0
...  ...                      ...                                                  ...              ...
84   2024-07-19 16:34:52.774  [0.0017505219, -0.0446416363, -0.065485619, -0...   104.921776       0
85   2024-07-19 16:34:52.774  [0.0126481373, -0.0446416363, -0.0256065708, -...   78.300160        0
86   2024-07-19 16:34:52.774  [-0.0273097865, -0.0446416363, -0.0633299947, ...   82.259480        0
87   2024-07-19 16:34:52.774  [-0.0236772466, -0.0446416363, -0.0697968677, ...   68.962540        0
88   2024-07-19 16:34:52.774  [-0.0636351705, -0.0446416363, 0.0358287171, -...   126.315810       0

89 rows × 4 columns
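
The returned object is a standard pandas DataFrame. As a quick sanity check that is not part of the original tutorial, the predictions column can be compared against the matching held-out labels from the earlier split; the column name follows the results table above.

# Sketch: compare the returned predictions to the corresponding held-out labels.
# Assumes the result rows preserve the order of the submitted inputs.
import numpy as np

preds = results["out.predictions"].to_numpy()
rmse = np.sqrt(np.mean((preds - y_test[: len(preds)]) ** 2))
print(f"RMSE over the returned rows: {rmse:.2f}")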

Undeploy the Pipeline

With the tutorial complete, we undeploy the pipeline and return the resources back to the cluster.

pipeline.undeploy()
name              booster-rf-regression
created           2024-07-19 16:13:25.926020+00:00
last_updated      2024-07-19 16:34:51.070191+00:00
deployed          False
workspace_id      38
workspace_name    booster-rf-regression
arch              x86
accel             none
tags              
versions          ff322395-3693-4b02-a6c9-7866871da73b, a96de725-6099-4ade-9081-7169da9f565d, 9c692e82-4aca-41a6-b541-c601c3394069, a6bc8bd7-13b5-4eff-b52e-ed78e3be0d27, 7fc5f096-3230-463e-a17b-fdd4ef7314c6, 74f494f8-f7b3-47c0-aeba-6f67e3989eb6
steps             booster-rf-regression
published         False