Wallaroo SDK Essentials Guide: Model Uploads and Registrations: XGBoost
Model Naming Requirements
Model names map onto Kubernetes objects and must be DNS compliant. Model names may only contain ASCII alphanumeric characters and the dash (-); the period (.) and underscore (_) are not allowed.
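As an illustration only, the following minimal sketch checks a candidate name against that rule using Python's standard re module (the helper function is hypothetical and not part of the Wallaroo SDK):
import re

# Hypothetical helper: DNS-label style check based on the rule above --
# ASCII alphanumerics and dashes only.
def is_valid_model_name(name: str) -> bool:
    return re.fullmatch(r"[a-zA-Z0-9-]+", name) is not None

print(is_valid_model_name("xgb-ranker"))      # True
print(is_valid_model_name("xgb_ranker.v1"))   # False: underscore and period are not allowed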
Wallaroo supports XGBoost models by containerizing the model and running as an image.
Parameter | Description |
---|---|
Web Site | https://xgboost.ai/ |
Supported Libraries | xgboost==1.7.4 |
Framework | Framework.XGBOOST aka xgboost |
Supported File Types | pickle (XGB files are not supported.) |
Runtime | Containerized aka mlflow |
XGBoost Schema Inputs
XGBoost schemas follow a different format than those of other models. To prevent inputs from being submitted out of order, the inputs should be submitted as a single array per row, in the order the model was trained to accept, with all values sharing the same data type. If a model was originally trained to accept inputs of different data types, it will need to be retrained to accept a single data type for each column; pa.float64() is typically a good choice, as in the sketch below.
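A minimal sketch of that preparation step with standard pandas (the DataFrame and column names below are placeholders for illustration):
import pandas as pd

# Hypothetical training DataFrame; cast every column to float64 so the model
# is retrained against a single data type per column.
train_df = pd.DataFrame({"feature_a": [1, 2], "feature_b": [0.5, 0.7]})
train_df = train_df.astype("float64")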
For example, the following DataFrame has 4 columns, each column a float
.
| | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm) |
---|---|---|---|---|
0 | 5.1 | 3.5 | 1.4 | 0.2 |
1 | 4.9 | 3.0 | 1.4 | 0.2 |
For submission to an XGBoost model, the data input schema will be a single array with 4 float values.
import pyarrow as pa

input_schema = pa.schema([
    pa.field('inputs', pa.list_(pa.float64(), list_size=4))
])
When submitted for inference, the DataFrame is converted to rows with the column data expressed as a single array. The data must be in the same order the model expects, which is why the data is submitted as a single array rather than as JSON-labeled columns: this ensures that the data is submitted in the exact order the model was trained to accept.
Original DataFrame:
| | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm) |
---|---|---|---|---|
0 | 5.1 | 3.5 | 1.4 | 0.2 |
1 | 4.9 | 3.0 | 1.4 | 0.2 |
Converted DataFrame:
| | inputs |
---|---|
0 | [5.1, 3.5, 1.4, 0.2] |
1 | [4.9, 3.0, 1.4, 0.2] |
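A short sketch of this conversion with pandas, using placeholder values; the same pattern appears in the upload example later in this guide:
import pandas as pd

# Original DataFrame with one labeled column per feature (illustrative values).
data = pd.DataFrame({
    "sepal length (cm)": [5.1, 4.9],
    "sepal width (cm)":  [3.5, 3.0],
    "petal length (cm)": [1.4, 1.4],
    "petal width (cm)":  [0.2, 0.2],
})

# Collapse each row into a single ordered array under the 'inputs' column.
dataframe = pd.DataFrame({"inputs": data.values.tolist()})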
XGBoost Schema Outputs
Outputs for XGBoost are labeled based on the trained model outputs. For this example, the output is simply a single output listed as output
. In the Wallaroo inference result, it is grouped with the metadata out
as out.output
.
output_schema = pa.schema([
pa.field('output', pa.int32())
])
pipeline.infer(dataframe)
| | time | in.inputs | out.output | check_failures |
---|---|---|---|---|
0 | 2023-07-05 15:11:29.776 | [5.1, 3.5, 1.4, 0.2] | 0 | 0 |
1 | 2023-07-05 15:11:29.776 | [4.9, 3.0, 1.4, 0.2] | 0 | 0 |
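Because the inference result is returned as a pandas DataFrame, the predictions can then be read from the out.output column with standard pandas indexing, as in this brief sketch:
# Run the inference and read the prediction column from the results DataFrame.
results = pipeline.infer(dataframe)
predictions = results["out.output"]
print(predictions.tolist())  # e.g. [0, 0]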
Uploading XGBoost Models
XGBoost models are uploaded to Wallaroo through the Wallaroo Client upload_model
method.
Upload XGBoost Model Parameters
The following parameters are required for XGBoost models. Note that while some fields are considered optional for the upload_model method, they are required for the proper upload of an XGBoost model to Wallaroo.
Parameter | Type | Description |
---|---|---|
name | string (Required) | The name of the model. Model names are unique per workspace. Models that are uploaded with the same name are assigned as a new version of the model. |
path | string (Required) | The path to the model file being uploaded. |
framework | string (Upload Method Optional, XGBoost model Required) | Set as Framework.XGBOOST. |
input_schema | pyarrow.lib.Schema (Upload Method Optional, XGBoost model Required) | The input schema in Apache Arrow schema format. |
output_schema | pyarrow.lib.Schema (Upload Method Optional, XGBoost model Required) | The output schema in Apache Arrow schema format. |
convert_wait | bool (Upload Method Optional, XGBoost model Optional) (Default: True) | True: Waits in the script for the model conversion completion. False: The script continues to run, with the model conversion running as a background process. |
Once the upload process starts, the model is containerized by the Wallaroo instance. This process may take up to 10 minutes.
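As a sketch of the parameters above, the following hypothetical upload returns immediately and leaves the conversion running in the background by setting convert_wait=False (the model name and file path are placeholders):
# Upload without blocking on conversion; conversion continues in the background.
model = wl.upload_model(
    "xgb-example",                  # placeholder model name
    "models/xgb_model.pkl",         # placeholder path to the pickled model
    framework=Framework.XGBOOST,
    input_schema=input_schema,
    output_schema=output_schema,
    convert_wait=False
)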
Upload XGBoost Model Return
The following is returned with a successful model upload and conversion.
Field | Type | Description |
---|---|---|
name | string | The name of the model. |
version | string | The model version as a unique UUID. |
file_name | string | The file name of the model as stored in Wallaroo. |
image_path | string | The image used to deploy the model in the Wallaroo engine. |
last_update_time | DateTime | When the model was last updated. |
Upload XGBoost Model Example
The following example is of uploading an XGBoost ML Model to a Wallaroo instance.
input_schema = pa.schema([
pa.field('inputs', pa.list_(pa.float64(), list_size=4))
])
output_schema = pa.schema([
pa.field('output', pa.float64())
])
model = wl.upload_model(f"{prefix}",
'models/model-auto-conversion_xgboost_xgb_ranker_model.pkl',
framework=Framework.XGBOOST,
input_schema=input_schema, output_schema=output_schema
)
model
Waiting for model conversion... It may take up to 10.0min.
Model is Pending conversion...Converting..Pending conversion.Converting.........Ready.
{
'name': 'xgb-ranker',
'version': 'c53c6a84-9f56-41c6-bb2f-049ef6b067e8',
'file_name': 'model-auto-conversion_xgboost_xgb_ranker_model.pkl',
'image_path': 'proxy.replicated.com/proxy/wallaroo/ghcr.io/wallaroolabs/mlflow-deploy:v2023.3.0-main-3367',
'last_update_time': datetime.datetime(2023, 6, 16, 18, 51, 15, 27969, tzinfo=tzutc())
}
data = pd.read_json('data/test-xgboost-classification-data.json')
display(data)
dataframe = pd.DataFrame({"inputs": data[:2].values.tolist()})
display(dataframe)
results = pipeline.infer(dataframe)
display(results)
| | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm) |
---|---|---|---|---|
0 | 5.1 | 3.5 | 1.4 | 0.2 |
1 | 4.9 | 3.0 | 1.4 | 0.2 |
| | inputs |
---|---|
0 | [5.1, 3.5, 1.4, 0.2] |
1 | [4.9, 3.0, 1.4, 0.2] |
| | time | in.inputs | out.output | check_failures |
---|---|---|---|---|
0 | 2023-07-05 16:15:55.802 | [5.1, 3.5, 1.4, 0.2] | 0.0 | 0 |
1 | 2023-07-05 16:15:55.802 | [4.9, 3.0, 1.4, 0.2] | 0.0 | 0 |
Model Status
Pipeline Deployment Configurations
Pipeline deployment configurations are dependent on whether the model is converted to the Native Runtime space, or Containerized Model Runtime space. This is determined when the model is uploaded based on the size, complexity, and other factors.
Once uploaded, the Model method config().runtime()
will display which space the model is in.
Runtime Display | Model Runtime Space | Pipeline Configuration |
---|---|---|
tensorflow | Native | Native Runtime Configuration Methods |
onnx | Native | Native Runtime Configuration Methods |
python | Native | Native Runtime Configuration Methods |
mlflow | Containerized | Containerized Runtime Deployment |
For example, uploading an ONNX model to a Wallaroo workspace would return the following config().runtime():
ccfraud_model = wl.upload_model(model_name, model_file_name, Framework.ONNX).configure()
ccfraud_model.config().runtime()
'onnx'
For example, the following model is allocated to the containerized runtime after conversion:
model = wl.upload_model(model_name, model_file_name,
framework=framework,
input_schema=input_schema,
output_schema=output_schema
)
model.config().runtime()
'mlflow'
Native Runtime Pipeline Deployment Configuration Example
The following configuration allocates 0.25 CPU and 1 Gi RAM to the native runtime models for a pipeline.
deployment_config = DeploymentConfigBuilder() \
    .cpus(0.25) \
    .memory('1Gi') \
    .build()
Containerized Runtime Deployment Example
The following configuration allocates 0.25 CPU and 1 Gi RAM to a specific containerized model in the containerized runtime, along with other environmental variables for the containerized model. Note that for containerized models, resources must be allocated per specific model.
deployment_config = DeploymentConfigBuilder() \
    .sidekick_cpus(sm_model, 0.25) \
    .sidekick_memory(sm_model, '1Gi') \
    .sidekick_env(sm_model,
                  {"GUNICORN_CMD_ARGS":
                   "--timeout=188 --workers=1"}) \
    .build()
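Either configuration is then applied when the pipeline is deployed. A brief usage sketch, assuming an existing pipeline object and that the built configuration is passed through deploy()'s deployment_config parameter:
# Deploy the pipeline with the resource allocations built above.
pipeline.deploy(deployment_config=deployment_config)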