Wallaroo Feature Tutorials

Welcome to Wallaroo! Whether you’re using our free Community Edition or the Enterprise Edition, these Quick Start Guides are made to help learn how to deploy your ML Models with Wallaroo. Each of these guides includes a Jupyter Notebook with the documentation, code, and sample data that you can download and run wherever you have Wallaroo installed.

1 - Setting Up Wallaroo for Sample Pipelines and Models

How to get Wallaroo running for your sample models

Welcome to Wallaroo! We’re excited to help you get your ML models deployed as quickly as possible. These Quick Start Guides are meant to show simple but thorough steps in creating new pipelines and models. For full guides on using Wallaroo, see the Wallaroo SDK and the Wallaroo Operations Guide.

This first guide is just how to get Wallaroo ready for your first models and pipelines.

Prerequisites

This guide assumes that you’ve installed Wallaroo in a cloud Kubernetes cluster.

Basic Wallaroo Configuration

The following will install the Wallaroo Quick Start samples into your Wallaroo Jupyter Hub environment.

  1. From the Wallaroo Quick Start Samples repository, visit the Releases page. As of the 2023.1 release, each tutorial can be downloaded as a separate .zip file instead of downloading the entire release.

  2. Download the release that matches your version of Wallaroo. Either a .zip or .tar.gz file can be downloaded; the example below uses the .zip file.

  3. Access your Jupyter Hub from your Wallaroo environment.

  4. Either drag and drop or use the Upload feature to upload the quick start guide .zip file.

  5. From the launcher, select Terminal.

  6. Unzip the files with unzip {ZIP FILE NAME}. For example, if the most recent release is Wallaroo_Tutorials.zip, the command would be:

    unzip Wallaroo_Tutorials.zip
    
  7. Exit the terminal shell when complete.

The quick start samples along with models and data will be ready for use in the unzipped folder.

2 - Wallaroo ML Model Upload and Registrations

Tutorials on uploading and registering ML models of various frameworks into a Wallaroo instance.

2.1 - Model Registry Service with Wallaroo Tutorials

How to use a Model Registry Service to upload models to a Wallaroo instance.

The following tutorials demonstrate how to add ML Models from a model registry service into a Wallaroo instance.

Artifact Requirements

Models are uploaded to the Wallaroo instance as a specific artifact - the “file” or other data that represents the model itself. This artifact must comply with the Wallaroo framework and version requirements or it will not be deployed. Note that models that fall outside of the supported model types can still be registered to a Wallaroo workspace as MLFlow 1.30.0 containerized models.

Supported Models

The following frameworks are supported. Frameworks fall under either Native or Containerized runtimes in the Wallaroo engine. For details on which runtime a given model framework runs in, see the framework details below.

Runtime Display | Model Runtime Space | Pipeline Configuration
tensorflow | Native | Native Runtime Configuration Methods
onnx | Native | Native Runtime Configuration Methods
python | Native | Native Runtime Configuration Methods
mlflow | Containerized | Containerized Runtime Deployment
flight | Containerized | Containerized Runtime Deployment

Please note the following.

Wallaroo ONNX Requirements

Wallaroo natively supports Open Neural Network Exchange (ONNX) models in the Wallaroo engine.

Parameter | Description
Web Site | https://onnx.ai/
Supported Libraries | See table below.
Framework | Framework.ONNX aka onnx
Runtime | Native aka onnx

The following ONNX model versions are supported:

Wallaroo Version | ONNX Version | ONNX IR Version | ONNX OPset Version | ONNX ML Opset Version
2023.4.0 (October 2023) | 1.12.1 | 8 | 17 | 3
2023.2.1 (July 2023) | 1.12.1 | 8 | 17 | 3

For the most recent release of Wallaroo (2023.4.0), note the following:

  • If converting another ML Model to ONNX (PyTorch, XGBoost, etc) using the onnxconverter-common library, the supported DEFAULT_OPSET_NUMBER is 17.

Using different versions or settings outside of these specifications may result in inference issues and other unexpected behavior.

ONNX models always run in the Wallaroo Native Runtime space.
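
As a sketch of the upload itself through the Wallaroo SDK (the model name and file path below are placeholders; the framework parameter matches the table above):

import wallaroo
from wallaroo.framework import Framework

wl = wallaroo.Client()

# 'ccfraud-onnx' and the .onnx path are placeholders for your own model.
model = wl.upload_model(
    "ccfraud-onnx",
    "./models/ccfraud.onnx",
    framework=Framework.ONNX
)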

Data Schemas

ONNX models deployed to Wallaroo have the following data requirements.

  • Equal rows constraint: The number of input rows and output rows must match.
  • All inputs are tensors: The inputs are tensor arrays with the same shape.
  • Data Type Consistency: Data types within each tensor are of the same type.

Equal Rows Constraint

Inferences performed through ONNX models are assumed to be in batch format, where each input row corresponds to an output row. This is reflected in the fields returned for an inference. In the following example, each input row for an inference is related directly to the inference output.

import pandas as pd

df = pd.read_json('./data/cc_data_1k.df.json')
display(df.head())

result = ccfraud_pipeline.infer(df.head())
display(result)

INPUT

 | tensor
0 | [-1.0603297501, 2.3544967095000002, -3.5638788326, 5.1387348926, -1.2308457019, -0.7687824608, -3.5881228109, 1.8880837663, -3.2789674274, -3.9563254554, 4.0993439118, -5.6539176395, -0.8775733373, -9.131571192000001, -0.6093537873, -3.7480276773, -5.0309125017, -0.8748149526000001, 1.9870535692, 0.7005485718000001, 0.9204422758, -0.1041491809, 0.3229564351, -0.7418141657, 0.0384120159, 1.0993439146, 1.2603409756, -0.1466244739, -1.4463212439]
1 | [-1.0603297501, 2.3544967095000002, -3.5638788326, 5.1387348926, -1.2308457019, -0.7687824608, -3.5881228109, 1.8880837663, -3.2789674274, -3.9563254554, 4.0993439118, -5.6539176395, -0.8775733373, -9.131571192000001, -0.6093537873, -3.7480276773, -5.0309125017, -0.8748149526000001, 1.9870535692, 0.7005485718000001, 0.9204422758, -0.1041491809, 0.3229564351, -0.7418141657, 0.0384120159, 1.0993439146, 1.2603409756, -0.1466244739, -1.4463212439]
2 | [-1.0603297501, 2.3544967095000002, -3.5638788326, 5.1387348926, -1.2308457019, -0.7687824608, -3.5881228109, 1.8880837663, -3.2789674274, -3.9563254554, 4.0993439118, -5.6539176395, -0.8775733373, -9.131571192000001, -0.6093537873, -3.7480276773, -5.0309125017, -0.8748149526000001, 1.9870535692, 0.7005485718000001, 0.9204422758, -0.1041491809, 0.3229564351, -0.7418141657, 0.0384120159, 1.0993439146, 1.2603409756, -0.1466244739, -1.4463212439]
3 | [-1.0603297501, 2.3544967095000002, -3.5638788326, 5.1387348926, -1.2308457019, -0.7687824608, -3.5881228109, 1.8880837663, -3.2789674274, -3.9563254554, 4.0993439118, -5.6539176395, -0.8775733373, -9.131571192000001, -0.6093537873, -3.7480276773, -5.0309125017, -0.8748149526000001, 1.9870535692, 0.7005485718000001, 0.9204422758, -0.1041491809, 0.3229564351, -0.7418141657, 0.0384120159, 1.0993439146, 1.2603409756, -0.1466244739, -1.4463212439]
4 | [0.5817662108, 0.09788155100000001, 0.1546819424, 0.4754101949, -0.19788623060000002, -0.45043448540000003, 0.016654044700000002, -0.0256070551, 0.0920561602, -0.2783917153, 0.059329944100000004, -0.0196585416, -0.4225083157, -0.12175388770000001, 1.5473094894000001, 0.2391622864, 0.3553974881, -0.7685165301, -0.7000849355000001, -0.1190043285, -0.3450517133, -1.1065114108, 0.2523411195, 0.0209441826, 0.2199267436, 0.2540689265, -0.0450225094, 0.10867738980000001, 0.2547179311]

OUTPUT

 | time | in.tensor | out.dense_1 | check_failures
0 | 2023-11-17 20:34:17.005 | [-1.0603297501, 2.3544967095, -3.5638788326, 5.1387348926, -1.2308457019, -0.7687824608, -3.5881228109, 1.8880837663, -3.2789674274, -3.9563254554, 4.0993439118, -5.6539176395, -0.8775733373, -9.131571192, -0.6093537873, -3.7480276773, -5.0309125017, -0.8748149526, 1.9870535692, 0.7005485718, 0.9204422758, -0.1041491809, 0.3229564351, -0.7418141657, 0.0384120159, 1.0993439146, 1.2603409756, -0.1466244739, -1.4463212439] | [0.99300325] | 0
1 | 2023-11-17 20:34:17.005 | [-1.0603297501, 2.3544967095, -3.5638788326, 5.1387348926, -1.2308457019, -0.7687824608, -3.5881228109, 1.8880837663, -3.2789674274, -3.9563254554, 4.0993439118, -5.6539176395, -0.8775733373, -9.131571192, -0.6093537873, -3.7480276773, -5.0309125017, -0.8748149526, 1.9870535692, 0.7005485718, 0.9204422758, -0.1041491809, 0.3229564351, -0.7418141657, 0.0384120159, 1.0993439146, 1.2603409756, -0.1466244739, -1.4463212439] | [0.99300325] | 0
2 | 2023-11-17 20:34:17.005 | [-1.0603297501, 2.3544967095, -3.5638788326, 5.1387348926, -1.2308457019, -0.7687824608, -3.5881228109, 1.8880837663, -3.2789674274, -3.9563254554, 4.0993439118, -5.6539176395, -0.8775733373, -9.131571192, -0.6093537873, -3.7480276773, -5.0309125017, -0.8748149526, 1.9870535692, 0.7005485718, 0.9204422758, -0.1041491809, 0.3229564351, -0.7418141657, 0.0384120159, 1.0993439146, 1.2603409756, -0.1466244739, -1.4463212439] | [0.99300325] | 0
3 | 2023-11-17 20:34:17.005 | [-1.0603297501, 2.3544967095, -3.5638788326, 5.1387348926, -1.2308457019, -0.7687824608, -3.5881228109, 1.8880837663, -3.2789674274, -3.9563254554, 4.0993439118, -5.6539176395, -0.8775733373, -9.131571192, -0.6093537873, -3.7480276773, -5.0309125017, -0.8748149526, 1.9870535692, 0.7005485718, 0.9204422758, -0.1041491809, 0.3229564351, -0.7418141657, 0.0384120159, 1.0993439146, 1.2603409756, -0.1466244739, -1.4463212439] | [0.99300325] | 0
4 | 2023-11-17 20:34:17.005 | [0.5817662108, 0.097881551, 0.1546819424, 0.4754101949, -0.1978862306, -0.4504344854, 0.0166540447, -0.0256070551, 0.0920561602, -0.2783917153, 0.0593299441, -0.0196585416, -0.4225083157, -0.1217538877, 1.5473094894, 0.2391622864, 0.3553974881, -0.7685165301, -0.7000849355, -0.1190043285, -0.3450517133, -1.1065114108, 0.2523411195, 0.0209441826, 0.2199267436, 0.2540689265, -0.0450225094, 0.1086773898, 0.2547179311] | [0.0010916889] | 0

All Inputs Are Tensors

All inputs into an ONNX model must be tensors. This requires that the shape of each element is the same. For example, the following is a proper input:

t = [
    [2.35, 5.75],
    [3.72, 8.55],
    [5.55, 97.2]
]
Standard tensor array

Another example is a (2, 2, 3) tensor, where each element is a 2x3 matrix: each element has 2 rows, and each row has shape (3,).

t = [
    [
        [2.35, 5.75, 19.2],
        [3.72, 8.55, 10.5]
    ],
    [
        [5.55, 7.2, 15.7],
        [9.6, 8.2, 2.3]
    ]
]

Tensors with elements of different shapes, known as ragged tensors, are not supported. For example, the following input is invalid because its rows have different lengths:

t = [
    [2.35, 5.75],
    [3.72, 8.55, 10.5],
    [5.55, 97.2]
]

**INVALID SHAPE**
Ragged tensor array - unsupported

For models that require ragged tensor or other shapes, see other data formatting options such as Bring Your Own Predict models.
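
Since ragged inputs are rejected, a quick pre-flight check in plain Python can catch them before an inference request is submitted; this helper is illustrative and not part of the Wallaroo SDK.

def is_rectangular(rows) -> bool:
    """Return True when every row has the same length - a valid tensor."""
    return len({len(row) for row in rows}) == 1

print(is_rectangular([[2.35, 5.75], [3.72, 8.55], [5.55, 97.2]]))        # True
print(is_rectangular([[2.35, 5.75], [3.72, 8.55, 10.5], [5.55, 97.2]]))  # False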

Data Type Consistency

All inputs into an ONNX model must have the same internal data type. For example, the following is valid because all of the data types within each element are float32.

t = [
    [2.35, 5.75],
    [3.72, 8.55],
    [5.55, 97.2]
]

The following is invalid, as it mixes floats and strings in each element:

t = [
    [2.35, "Bob"],
    [3.72, "Nancy"],
    [5.55, "Wani"]
]

The following inputs are valid, as each data type is consistent within the elements.

df = pd.DataFrame({
    "t": [
        [2.35, 5.75, 19.2],
        [5.55, 7.2, 15.7],
    ],
    "s": [
        ["Bob", "Nancy", "Wani"],
        ["Jason", "Rita", "Phoebe"]
    ]
})
df
 | t | s
0 | [2.35, 5.75, 19.2] | [Bob, Nancy, Wani]
1 | [5.55, 7.2, 15.7] | [Jason, Rita, Phoebe]

TensorFlow

Parameter | Description
Web Site | https://www.tensorflow.org/
Supported Libraries | tensorflow==2.9.3
Framework | Framework.TENSORFLOW aka tensorflow
Runtime | Native aka tensorflow
Supported File Types | SavedModel format as .zip file

TensorFlow File Format

TensorFlow models are uploaded as a .zip file of the SavedModel format. For example, the Aloha sample TensorFlow model is stored in the directory alohacnnlstm:

├── saved_model.pb
└── variables
    ├── variables.data-00000-of-00002
    ├── variables.data-00001-of-00002
    └── variables.index

This is compressed into the .zip file alohacnnlstm.zip with the following command:

zip -r alohacnnlstm.zip alohacnnlstm/

ML models that meet the TensorFlow SavedModel format requirements run in the Wallaroo Native runtime by default.

See the SavedModel guide for full details.
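
A minimal sketch of the upload, assuming wl is an established wallaroo.Client() connection and the .zip file created above:

from wallaroo.framework import Framework

# 'alohacnnlstm' matches the sample above; adjust for your own model.
model = wl.upload_model(
    "alohacnnlstm",
    "./alohacnnlstm.zip",
    framework=Framework.TENSORFLOW
)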

Python

Parameter | Description
Web Site | https://www.python.org/
Supported Libraries | python==3.8
Framework | Framework.PYTHON aka python
Runtime | Native aka python

Python models uploaded to Wallaroo are executed as a native runtime.

Note that Python models - aka “Python steps” - are standalone python scripts that use the python libraries natively supported by the Wallaroo platform. These are used for either simple model deployment (such as ARIMA Statsmodels), or data formatting such as the postprocessing steps. A Wallaroo Python model will be composed of one Python script that matches the Wallaroo requirements.

This is contrasted with Arbitrary Python models, also known as Bring Your Own Predict (BYOP), which allow for custom model deployments with supporting scripts and artifacts. These are used with pre-trained models (PyTorch, Tensorflow, etc) along with whatever supporting artifacts they require. Supporting artifacts can include other Python modules, model files, etc. These are zipped with all scripts, artifacts, and a requirements.txt file that indicates what Python libraries outside of those natively available in the Wallaroo platform need to be imported.

Python Models Requirements

Python models uploaded to Wallaroo are Python scripts that must include the wallaroo_json method as the entry point for the Wallaroo engine to use it as a Pipeline step.

This method receives the results of the previous Pipeline step, and its return value will be used in the next Pipeline step.

If the Python model is the first step in the pipeline, then it will be receiving the inference request data (for example: a preprocessing step). If it is the last step in the pipeline, then it will be the data returned from the inference request.

In the example below, the Python model is used as a post processing step for another ML model. The Python model expects to receive data from a ML Model whose output is a DataFrame with the column dense_2. It then extracts the values of that column as a list, selects the first element, and returns a DataFrame with that element as the value of the column output.

import pandas as pd

def wallaroo_json(data: pd.DataFrame):
    print(data)
    return [{"output": [data["dense_2"].to_list()[0][0]]}]

In line with other Wallaroo inference results, the outputs of a Python step that returns a pandas DataFrame or Arrow Table will be listed in the out. metadata, with all inference outputs listed as out.{variable 1}, out.{variable 2}, etc. In the example above, this results in the output field appearing as out.output in the Wallaroo inference result.

 | time | in.tensor | out.output | check_failures
0 | 2023-06-20 20:23:28.395 | [0.6878518042, 0.1760734021, -0.869514083, 0.3.. | [12.886651039123535] | 0
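
A Python step script like the one above is uploaded to Wallaroo like any other model artifact. The following is a minimal sketch, assuming wl is an established wallaroo.Client() connection and the script above is saved as step.py; both names are placeholders, and configuration details may vary by SDK version.

from wallaroo.framework import Framework

# 'postprocess-step' and './step.py' are placeholder names.
postprocess = wl.upload_model(
    "postprocess-step",
    "./step.py",
    framework=Framework.PYTHON
)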

Hugging Face

Parameter | Description
Web Site | https://huggingface.co/models
Supported Libraries |
  • transformers==4.34.1
  • diffusers==0.14.0
  • accelerate==0.23.0
  • torchvision==0.14.1
  • torch==1.13.1
Frameworks | The following Hugging Face pipelines are supported by Wallaroo.
  • Framework.HUGGING_FACE_FEATURE_EXTRACTION aka hugging-face-feature-extraction
  • Framework.HUGGING_FACE_IMAGE_CLASSIFICATION aka hugging-face-image-classification
  • Framework.HUGGING_FACE_IMAGE_SEGMENTATION aka hugging-face-image-segmentation
  • Framework.HUGGING_FACE_IMAGE_TO_TEXT aka hugging-face-image-to-text
  • Framework.HUGGING_FACE_OBJECT_DETECTION aka hugging-face-object-detection
  • Framework.HUGGING_FACE_QUESTION_ANSWERING aka hugging-face-question-answering
  • Framework.HUGGING_FACE_STABLE_DIFFUSION_TEXT_2_IMG aka hugging-face-stable-diffusion-text-2-img
  • Framework.HUGGING_FACE_SUMMARIZATION aka hugging-face-summarization
  • Framework.HUGGING_FACE_TEXT_CLASSIFICATION aka hugging-face-text-classification
  • Framework.HUGGING_FACE_TRANSLATION aka hugging-face-translation
  • Framework.HUGGING_FACE_ZERO_SHOT_CLASSIFICATION aka hugging-face-zero-shot-classification
  • Framework.HUGGING_FACE_ZERO_SHOT_IMAGE_CLASSIFICATION aka hugging-face-zero-shot-image-classification
  • Framework.HUGGING_FACE_ZERO_SHOT_OBJECT_DETECTION aka hugging-face-zero-shot-object-detection
  • Framework.HUGGING_FACE_SENTIMENT_ANALYSIS aka hugging-face-sentiment-analysis
  • Framework.HUGGING_FACE_TEXT_GENERATION aka hugging-face-text-generation
  • Framework.HUGGING_FACE_AUTOMATIC_SPEECH_RECOGNITION aka hugging-face-automatic-speech-recognition
Runtime | Containerized flight

During the model upload process, the Wallaroo instance will attempt to convert the model to a Native Wallaroo Runtime. If unsuccessful, it will create a Wallaroo Containerized Runtime for the model. See the model deployment section for details on how to configure pipeline resources based on the model’s runtime.

Hugging Face Schemas

Input and output schemas for each Hugging Face pipeline are defined below. Note that adding additional inputs not specified below will raise errors, except for the following:

  • Framework.HUGGING_FACE_IMAGE_TO_TEXT
  • Framework.HUGGING_FACE_TEXT_CLASSIFICATION
  • Framework.HUGGING_FACE_SUMMARIZATION
  • Framework.HUGGING_FACE_TRANSLATION

Additional inputs added to these Hugging Face pipelines will be passed as key/value pair arguments to the model’s generate method. If the argument is not required, then the model will default to the values coded in the original Hugging Face model’s source code.

See the Hugging Face Pipeline documentation for more details on each pipeline and framework.
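
As an illustration of the upload call itself, the following sketch uploads a zipped zero-shot classification pipeline with the schemas shown later in this section. The model name and file path are placeholders, and wl is assumed to be an established wallaroo.Client() connection.

import pyarrow as pa
from wallaroo.framework import Framework

input_schema = pa.schema([
    pa.field('inputs', pa.string()),
    pa.field('candidate_labels', pa.list_(pa.string(), list_size=2)),
])
output_schema = pa.schema([
    pa.field('sequence', pa.string()),
    pa.field('scores', pa.list_(pa.float64(), list_size=2)),
    pa.field('labels', pa.list_(pa.string(), list_size=2)),
])

# 'zero-shot-classification' and the .zip path are placeholders.
model = wl.upload_model(
    "zero-shot-classification",
    "./models/zero-shot.zip",
    framework=Framework.HUGGING_FACE_ZERO_SHOT_CLASSIFICATION,
    input_schema=input_schema,
    output_schema=output_schema,
)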

Wallaroo Framework: Framework.HUGGING_FACE_FEATURE_EXTRACTION

Schemas:

input_schema = pa.schema([
    pa.field('inputs', pa.string())
])
output_schema = pa.schema([
    pa.field('output', pa.list_(
        pa.list_(
            pa.float64(),
            list_size=128
        ),
    ))
])
Wallaroo Framework: Framework.HUGGING_FACE_IMAGE_CLASSIFICATION

Schemas:

input_schema = pa.schema([
    pa.field('inputs', pa.list_(
        pa.list_(
            pa.list_(
                pa.int64(),
                list_size=3
            ),
            list_size=100
        ),
        list_size=100
    )),
    pa.field('top_k', pa.int64()),
])

output_schema = pa.schema([
    pa.field('score', pa.list_(pa.float64(), list_size=2)),
    pa.field('label', pa.list_(pa.string(), list_size=2)),
])
Wallaroo Framework: Framework.HUGGING_FACE_IMAGE_SEGMENTATION

Schemas:

input_schema = pa.schema([
    pa.field('inputs', 
        pa.list_(
            pa.list_(
                pa.list_(
                    pa.int64(),
                    list_size=3
                ),
                list_size=100
            ),
        list_size=100
    )),
    pa.field('threshold', pa.float64()),
    pa.field('mask_threshold', pa.float64()),
    pa.field('overlap_mask_area_threshold', pa.float64()),
])

output_schema = pa.schema([
    pa.field('score', pa.list_(pa.float64())),
    pa.field('label', pa.list_(pa.string())),
    pa.field('mask', 
        pa.list_(
            pa.list_(
                pa.list_(
                    pa.int64(),
                    list_size=100
                ),
                list_size=100
            ),
    )),
])
Wallaroo Framework: Framework.HUGGING_FACE_IMAGE_TO_TEXT

Any parameter that is not part of the required inputs list will be forwarded to the model as a key/value pair to the underlying model’s generate method. If the additional input is not supported by the model, an error will be returned.

Schemas:

input_schema = pa.schema([
    pa.field('inputs', pa.list_( #required
        pa.list_(
            pa.list_(
                pa.int64(),
                list_size=3
            ),
            list_size=100
        ),
        list_size=100
    )),
    # pa.field('max_new_tokens', pa.int64()),  # optional
])

output_schema = pa.schema([
    pa.field('generated_text', pa.list_(pa.string())),
])
Wallaroo Framework: Framework.HUGGING_FACE_OBJECT_DETECTION

Schemas:

input_schema = pa.schema([
    pa.field('inputs', 
        pa.list_(
            pa.list_(
                pa.list_(
                    pa.int64(),
                    list_size=3
                ),
                list_size=100
            ),
        list_size=100
    )),
    pa.field('threshold', pa.float64()),
])

output_schema = pa.schema([
    pa.field('score', pa.list_(pa.float64())),
    pa.field('label', pa.list_(pa.string())),
    pa.field('box', 
        pa.list_( # dynamic output, i.e. dynamic number of boxes per input image, each sublist contains the 4 box coordinates 
            pa.list_(
                    pa.int64(),
                    list_size=4
                ),
            ),
    ),
])
Wallaroo Framework: Framework.HUGGING_FACE_QUESTION_ANSWERING

Schemas:

input_schema = pa.schema([
    pa.field('question', pa.string()),
    pa.field('context', pa.string()),
    pa.field('top_k', pa.int64()),
    pa.field('doc_stride', pa.int64()),
    pa.field('max_answer_len', pa.int64()),
    pa.field('max_seq_len', pa.int64()),
    pa.field('max_question_len', pa.int64()),
    pa.field('handle_impossible_answer', pa.bool_()),
    pa.field('align_to_words', pa.bool_()),
])

output_schema = pa.schema([
    pa.field('score', pa.float64()),
    pa.field('start', pa.int64()),
    pa.field('end', pa.int64()),
    pa.field('answer', pa.string()),
])
Wallaroo Framework: Framework.HUGGING_FACE_STABLE_DIFFUSION_TEXT_2_IMG

Schemas:

input_schema = pa.schema([
    pa.field('prompt', pa.string()),
    pa.field('height', pa.int64()),
    pa.field('width', pa.int64()),
    pa.field('num_inference_steps', pa.int64()), # optional
    pa.field('guidance_scale', pa.float64()), # optional
    pa.field('negative_prompt', pa.string()), # optional
    pa.field('num_images_per_prompt', pa.string()), # optional
    pa.field('eta', pa.float64()) # optional
])

output_schema = pa.schema([
    pa.field('images', pa.list_(
        pa.list_(
            pa.list_(
                pa.int64(),
                list_size=3
            ),
            list_size=128
        ),
        list_size=128
    )),
])
Wallaroo Framework: Framework.HUGGING_FACE_SUMMARIZATION

Any parameter that is not part of the required inputs list will be forwarded to the model as a key/value pair to the underlying model’s generate method. If the additional input is not supported by the model, an error will be returned.

Schemas:

input_schema = pa.schema([
    pa.field('inputs', pa.string()),
    pa.field('return_text', pa.bool_()),
    pa.field('return_tensors', pa.bool_()),
    pa.field('clean_up_tokenization_spaces', pa.bool_()),
    # pa.field('extra_field', pa.int64()), # every extra field you specify will be forwarded as a key/value pair
])

output_schema = pa.schema([
    pa.field('summary_text', pa.string()),
])
Wallaroo Framework: Framework.HUGGING_FACE_TEXT_CLASSIFICATION

Schemas:

input_schema = pa.schema([
    pa.field('inputs', pa.string()), # required
    pa.field('top_k', pa.int64()), # optional
    pa.field('function_to_apply', pa.string()), # optional
])

output_schema = pa.schema([
    pa.field('label', pa.list_(pa.string(), list_size=2)), # list with the same number of items as top_k; list_size can be skipped but may lead to worse performance
    pa.field('score', pa.list_(pa.float64(), list_size=2)), # list with the same number of items as top_k; list_size can be skipped but may lead to worse performance
])
Wallaroo Framework: Framework.HUGGING_FACE_TRANSLATION

Any parameter that is not part of the required inputs list will be forwarded to the model as a key/value pair to the underlying model’s generate method. If the additional input is not supported by the model, an error will be returned.

Schemas:

input_schema = pa.schema([
    pa.field('inputs', pa.string()), # required
    pa.field('return_tensors', pa.bool_()), # optional
    pa.field('return_text', pa.bool_()), # optional
    pa.field('clean_up_tokenization_spaces', pa.bool_()), # optional
    pa.field('src_lang', pa.string()), # optional
    pa.field('tgt_lang', pa.string()), # optional
    # pa.field('extra_field', pa.int64()), # every extra field you specify will be forwarded as a key/value pair
])

output_schema = pa.schema([
    pa.field('translation_text', pa.string()),
])
Wallaroo Framework: Framework.HUGGING_FACE_ZERO_SHOT_CLASSIFICATION

Schemas:

input_schema = pa.schema([
    pa.field('inputs', pa.string()), # required
    pa.field('candidate_labels', pa.list_(pa.string(), list_size=2)), # required
    pa.field('hypothesis_template', pa.string()), # optional
    pa.field('multi_label', pa.bool_()), # optional
])

output_schema = pa.schema([
    pa.field('sequence', pa.string()),
    pa.field('scores', pa.list_(pa.float64(), list_size=2)), # same as number of candidate labels; list_size can be skipped but may result in slightly worse performance
    pa.field('labels', pa.list_(pa.string(), list_size=2)), # same as number of candidate labels; list_size can be skipped but may result in slightly worse performance
])
Wallaroo Framework: Framework.HUGGING_FACE_ZERO_SHOT_IMAGE_CLASSIFICATION

Schemas:

input_schema = pa.schema([
    pa.field('inputs', # required
        pa.list_(
            pa.list_(
                pa.list_(
                    pa.int64(),
                    list_size=3
                ),
                list_size=100
            ),
        list_size=100
    )),
    pa.field('candidate_labels', pa.list_(pa.string(), list_size=2)), # required
    pa.field('hypothesis_template', pa.string()), # optional
]) 

output_schema = pa.schema([
    pa.field('score', pa.list_(pa.float64(), list_size=2)), # same as number of candidate labels
    pa.field('label', pa.list_(pa.string(), list_size=2)), # same as number of candidate labels
])
Wallaroo Framework: Framework.HUGGING_FACE_ZERO_SHOT_OBJECT_DETECTION

Schemas:

input_schema = pa.schema([
    pa.field('images', 
        pa.list_(
            pa.list_(
                pa.list_(
                    pa.int64(),
                    list_size=3
                ),
                list_size=640
            ),
        list_size=480
    )),
    pa.field('candidate_labels', pa.list_(pa.string(), list_size=3)),
    pa.field('threshold', pa.float64()),
    # pa.field('top_k', pa.int64()), # we want the model to return exactly the number of predictions, we shouldn't specify this
])

output_schema = pa.schema([
    pa.field('score', pa.list_(pa.float64())), # variable output, depending on detected objects
    pa.field('label', pa.list_(pa.string())), # variable output, depending on detected objects
    pa.field('box', 
        pa.list_( # dynamic output, i.e. dynamic number of boxes per input image, each sublist contains the 4 box coordinates 
            pa.list_(
                    pa.int64(),
                    list_size=4
                ),
            ),
    ),
])
Wallaroo Framework: Framework.HUGGING_FACE_SENTIMENT_ANALYSIS (see the Hugging Face Sentiment Analysis reference)
Wallaroo Framework: Framework.HUGGING_FACE_TEXT_GENERATION

Any parameter that is not part of the required inputs list will be forwarded to the model as a key/value pair to the underlying model’s generate method. If the additional input is not supported by the model, an error will be returned.

input_schema = pa.schema([
    pa.field('inputs', pa.string()),
    pa.field('return_tensors', pa.bool_()), # optional
    pa.field('return_text', pa.bool_()), # optional
    pa.field('return_full_text', pa.bool_()), # optional
    pa.field('clean_up_tokenization_spaces', pa.bool_()), # optional
    pa.field('prefix', pa.string()), # optional
    pa.field('handle_long_generation', pa.string()), # optional
    # pa.field('extra_field', pa.int64()), # every extra field you specify will be forwarded as a key/value pair
])

output_schema = pa.schema([
    pa.field('generated_text', pa.list_(pa.string(), list_size=1))
])
Wallaroo Framework: Framework.HUGGING_FACE_AUTOMATIC_SPEECH_RECOGNITION

Sample input and output schema.

input_schema = pa.schema([
    pa.field('inputs', pa.list_(pa.float32())), # required: the audio stored in numpy arrays of shape (num_samples,) and data type `float32`
    pa.field('return_timestamps', pa.string()) # optional: return start & end times for each predicted chunk
]) 

output_schema = pa.schema([
    pa.field('text', pa.string()), # required: the output text corresponding to the audio input
    pa.field('chunks', pa.list_(pa.struct([('text', pa.string()), ('timestamp', pa.list_(pa.float32()))]))), # required (if `return_timestamps` is set), start & end times for each predicted chunk
])

PyTorch

Parameter | Description
Web Site | https://pytorch.org/
Supported Libraries |
  • torch==1.13.1
  • torchvision==0.14.1
Framework | Framework.PYTORCH aka pytorch
Supported File Types | pt or pth in TorchScript format
Runtime | onnx/flight

  • IMPORTANT NOTE: The PyTorch model must be in TorchScript format. Scripting (i.e. torch.jit.script()) is always recommended over tracing (i.e. torch.jit.trace()). From the PyTorch documentation: “Scripting preserves dynamic control flow and is valid for inputs of different sizes.” For more details, see TorchScript-based ONNX Exporter: Tracing vs Scripting.

During the model upload process, the Wallaroo instance will attempt to convert the model to a Native Wallaroo Runtime. If unsuccessful, it will create a Wallaroo Containerized Runtime for the model. See the model deployment section for details on how to configure pipeline resources based on the model’s runtime.

  • IMPORTANT CONFIGURATION NOTE: For PyTorch input schemas, the floats must be pyarrow.float32() for the PyTorch model to be converted to the Native Wallaroo Runtime during the upload process.
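
The following minimal sketch illustrates the scripting approach recommended above; TinyModel is a stand-in module and the file name is a placeholder.

import torch
import torch.nn as nn

class TinyModel(nn.Module):
    """A stand-in module purely for illustration."""
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x)

model = TinyModel().eval()

# Scripting preserves dynamic control flow, unlike tracing.
scripted = torch.jit.script(model)
scripted.save("model.pt")  # upload this file with Framework.PYTORCH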

Sci-kit Learn aka SKLearn

Parameter | Description
Web Site | https://scikit-learn.org/stable/index.html
Supported Libraries | scikit-learn==1.3.0
Framework | Framework.SKLEARN aka sklearn
Runtime | onnx / flight

During the model upload process, the Wallaroo instance will attempt to convert the model to a Native Wallaroo Runtime. If unsuccessful, it will create a Wallaroo Containerized Runtime for the model. See the model deployment section for details on how to configure pipeline resources based on the model’s runtime.

SKLearn Schema Inputs

SKLearn schema follows a different format than other models. To prevent inputs from being out of order, the inputs should be submitted in a single row in the order the model is trained to accept, with all of the data types being the same. For example, the following DataFrame has 4 columns, each column a float.

 | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm)
0 | 5.1 | 3.5 | 1.4 | 0.2
1 | 4.9 | 3.0 | 1.4 | 0.2

For submission to an SKLearn model, the data input schema will be a single array with 4 float values.

input_schema = pa.schema([
    pa.field('inputs', pa.list_(pa.float64(), list_size=4))
])

When submitting as an inference, the DataFrame is converted to rows with the column data expressed as a single array. The data must be in the same order as the model expects, which is why the data is submitted as a single array rather than JSON labeled columns: this ensures that the data is submitted in the exact order the model is trained to accept.

Original DataFrame:

 | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm)
0 | 5.1 | 3.5 | 1.4 | 0.2
1 | 4.9 | 3.0 | 1.4 | 0.2

Converted DataFrame:

 | inputs
0 | [5.1, 3.5, 1.4, 0.2]
1 | [4.9, 3.0, 1.4, 0.2]
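
The flattening described above takes only a couple of lines of pandas. This sketch uses the iris dataset from scikit-learn purely for illustration:

import pandas as pd
from sklearn.datasets import load_iris

# Collapse the four feature columns into one 'inputs' column of lists.
iris = load_iris(as_frame=True)["data"]
flattened = pd.DataFrame({"inputs": iris.values.tolist()})
print(flattened.head(2))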

SKLearn Schema Outputs

SKLearn outputs, such as predictions or probabilities, are labeled in the output schema when the model is uploaded to Wallaroo. For example, a model that outputs either 1 or 0 as its output would have the following output schema:

output_schema = pa.schema([
    pa.field('predictions', pa.int32())
])

When used in Wallaroo, the inference result is contained in the out metadata as out.predictions.

pipeline.infer(dataframe)
 | time | in.inputs | out.predictions | check_failures
0 | 2023-07-05 15:11:29.776 | [5.1, 3.5, 1.4, 0.2] | 0 | 0
1 | 2023-07-05 15:11:29.776 | [4.9, 3.0, 1.4, 0.2] | 0 | 0

TensorFlow Keras

Parameter | Description
Web Site | https://www.tensorflow.org/api_docs/python/tf/keras/Model
Supported Libraries |
  • tensorflow==2.9.3
  • keras==2.9.0
Framework | Framework.KERAS aka keras
Supported File Types | SavedModel format as .zip file and HDF5 format
Runtime | onnx/flight

During the model upload process, the Wallaroo instance will attempt to convert the model to a Native Wallaroo Runtime. If unsuccessful, it will create a Wallaroo Containerized Runtime for the model. See the model deployment section for details on how to configure pipeline resources based on the model’s runtime.

TensorFlow Keras SavedModel Format

TensorFlow Keras SavedModel models are uploaded as a .zip file of the SavedModel format. For example, the Aloha sample TensorFlow model is stored in the directory alohacnnlstm:

├── saved_model.pb
└── variables
    ├── variables.data-00000-of-00002
    ├── variables.data-00001-of-00002
    └── variables.index

This is compressed into the .zip file alohacnnlstm.zip with the following command:

zip -r alohacnnlstm.zip alohacnnlstm/

See the SavedModel guide for full details.

TensorFlow Keras H5 Format

Wallaroo supports the HDF5 (.h5) format for TensorFlow Keras models.
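
As a minimal sketch of producing that format, a Keras model saves directly to HDF5 when given a .h5 file name; the model below is a placeholder.

from tensorflow import keras

# A placeholder model purely for illustration.
model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])

model.save("model.h5")  # the .h5 extension selects the HDF5 format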

XGBoost

Parameter | Description
Web Site | https://xgboost.ai/
Supported Libraries |
  • scikit-learn==1.3.0
  • xgboost==1.7.4
Framework | Framework.XGBOOST aka xgboost
Supported File Types | pickle (XGB files are not supported.)
Runtime | onnx / flight

During the model upload process, the Wallaroo instance will attempt to convert the model to a Native Wallaroo Runtime. If unsuccessful, it will create a Wallaroo Containerized Runtime for the model. See the model deployment section for details on how to configure pipeline resources based on the model’s runtime.
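
A minimal sketch of producing a pickle artifact for upload; the training data and model settings are illustrative only.

import pickle

import xgboost as xgb
from sklearn.datasets import load_iris

# Train a small placeholder classifier, then serialize it with pickle
# (XGB-format files are not supported for upload).
X, y = load_iris(return_X_y=True)
model = xgb.XGBClassifier(n_estimators=10)
model.fit(X, y)

with open("model.pkl", "wb") as f:
    pickle.dump(model, f)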

XGBoost Schema Inputs

XGBoost schema follows a different format than other models. To prevent inputs from being out of order, the inputs should be submitted in a single row in the order the model is trained to accept, with all of the data types being the same. If a model is originally trained to accept inputs of different data types, it will need to be retrained to only accept one data type for each column - typically pa.float64() is a good choice.

For example, the following DataFrame has 4 columns, each column a float.

 | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm)
0 | 5.1 | 3.5 | 1.4 | 0.2
1 | 4.9 | 3.0 | 1.4 | 0.2

For submission to an XGBoost model, the data input schema will be a single array with 4 float values.

input_schema = pa.schema([
    pa.field('inputs', pa.list_(pa.float64(), list_size=4))
])

When submitting as an inference, the DataFrame is converted to rows with the column data expressed as a single array. The data must be in the same order as the model expects, which is why the data is submitted as a single array rather than JSON labeled columns: this ensures that the data is submitted in the exact order the model is trained to accept.

Original DataFrame:

 | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm)
0 | 5.1 | 3.5 | 1.4 | 0.2
1 | 4.9 | 3.0 | 1.4 | 0.2

Converted DataFrame:

 | inputs
0 | [5.1, 3.5, 1.4, 0.2]
1 | [4.9, 3.0, 1.4, 0.2]

XGBoost Schema Outputs

Outputs for XGBoost are labeled based on the trained model outputs. For this example, the output is simply a single output listed as output. In the Wallaroo inference result, it is grouped with the metadata out as out.output.

output_schema = pa.schema([
    pa.field('output', pa.int32())
])
pipeline.infer(dataframe)
 | time | in.inputs | out.output | check_failures
0 | 2023-07-05 15:11:29.776 | [5.1, 3.5, 1.4, 0.2] | 0 | 0
1 | 2023-07-05 15:11:29.776 | [4.9, 3.0, 1.4, 0.2] | 0 | 0

Arbitrary Python (BYOP)

Parameter | Description
Web Site | https://www.python.org/
Supported Libraries | python==3.8
Framework | Framework.CUSTOM aka custom
Runtime | Containerized aka flight

Arbitrary Python models, also known as Bring Your Own Predict (BYOP), allow for custom model deployments with supporting scripts and artifacts. These are used with pre-trained models (PyTorch, Tensorflow, etc) along with whatever supporting artifacts they require. Supporting artifacts can include other Python modules, model files, etc. These are zipped with all scripts, artifacts, and a requirements.txt file that indicates what Python libraries outside of those natively available in the Wallaroo platform need to be imported.

Contrast this with Wallaroo Python models - aka “Python steps”. These are standalone python scripts that use the python libraries natively supported by the Wallaroo platform. These are used for either simple model deployment (such as ARIMA Statsmodels), or data formatting such as the postprocessing steps. A Wallaroo Python model will be composed of one Python script that matches the Wallaroo requirements.

Arbitrary Python File Requirements

Arbitrary Python (BYOP) models are uploaded to Wallaroo via a ZIP file with the following components:

Artifact | Type | Description
Python scripts (aka .py files) with classes that extend mac.inference.Inference and mac.inference.creation.InferenceBuilder | Python Script | Extend the classes mac.inference.Inference and mac.inference.creation.InferenceBuilder. These are included with the Wallaroo SDK. Further details are in Arbitrary Python Script Requirements. Note that there are no specified naming requirements for the classes that extend mac.inference.Inference and mac.inference.creation.InferenceBuilder - any qualified class name is sufficient as long as these two classes are extended as defined below.
requirements.txt | Python requirements file | This sets the Python libraries used for the arbitrary python model. These libraries should be targeted for Python 3.8 compliance. These requirements and the versions of libraries should be exactly the same between creating the model and deploying it in Wallaroo. This ensures that the script and methods will function exactly the same as during the model creation process.
Other artifacts | Files | Other models, files, and other artifacts used in support of this model.

For example, if the arbitrary python model will be known as vgg_clustering, the contents may be in the following structure, with vgg_clustering as the storage directory:

vgg_clustering\
    feature_extractor.h5
    kmeans.pkl
    custom_inference.py
    requirements.txt

Note the inclusion of the custom_inference.py file. This file name is not required - any Python script or scripts that extend the classes listed above are sufficient. This Python script could have been named vgg_custom_model.py or any other name, as long as it extends the classes listed above.

The sample arbitrary python model file is created with the command zip -r vgg_clustering.zip vgg_clustering/.

Wallaroo Arbitrary Python uses the Wallaroo SDK mac module, included in the Wallaroo SDK 2023.2.1 and above. See the Wallaroo SDK Install Guides for instructions on installing the Wallaroo SDK.

Arbitrary Python Script Requirements

The entry point of the arbitrary python model is any python script that extends the following classes. These are included with the Wallaroo SDK. The required methods that must be overridden are specified in each section below.

  • mac.inference.Inference interface serves model inferences based on submitted input. Its purpose is to serve inferences for any supported arbitrary model framework (e.g. scikit-learn, Keras, etc.).

    classDiagram
        class Inference {
            <<Abstract>>
            +model Optional[Any]
            +expected_model_types()* Set
            +predict(input_data: InferenceData)*  InferenceData
            -raise_error_if_model_is_not_assigned() None
            -raise_error_if_model_is_wrong_type() None
        }
  • mac.inference.creation.InferenceBuilder builds a concrete Inference, i.e. instantiates an Inference object, loads the appropriate model and assigns the model to the Inference object.

    classDiagram
        class InferenceBuilder {
            +create(config InferenceConfig) * Inference
            -inference()* Any
        }

mac.inference.Inference

mac.inference.Inference Objects

Object | Type | Description
model (Required) | [Any] | One or more objects that match the expected_model_types. This can be a ML Model (for inference use), a string (for data conversion), etc. See Arbitrary Python Examples for examples.

mac.inference.Inference Methods

Method | Returns | Description
expected_model_types (Required) | Set | Returns a Set of models expected for the inference as defined by the developer. Typically this is a set of one. Wallaroo checks the expected model types to verify that the model submitted through the InferenceBuilder method matches what this Inference class expects.
_predict (input_data: mac.types.InferenceData) (Required) | mac.types.InferenceData | The entry point for the Wallaroo inference, with input and output parameters defined when the model is uploaded. The input InferenceData is a Dictionary of numpy arrays derived from the input_schema detailed when the model is uploaded, defined in PyArrow.Schema format. The output is a Dictionary of numpy arrays as defined by the output_schema, also defined in PyArrow.Schema format. The InferenceDataValidationError exception is raised when the input data does not match mac.types.InferenceData.
raise_error_if_model_is_not_assigned | N/A | Error raised when a model is not set to Inference.
raise_error_if_model_is_wrong_type | N/A | Error raised when the model does not match the expected_model_types.

In the following example, expected_model_types is defined for a KMeans model.

from typing import Any, Set

from sklearn.cluster import KMeans

import mac.inference

class SampleClass(mac.inference.Inference):
    @property
    def expected_model_types(self) -> Set[Any]:
        return {KMeans}
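
Building on that example, a minimal _predict sketch might look like the following. InferenceData is a dictionary of numpy arrays keyed by the schema field names; the keys inputs and predictions here are assumptions standing in for whatever the uploaded schemas actually define.

import numpy as np

from mac.types import InferenceData

class SampleClass(mac.inference.Inference):
    @property
    def expected_model_types(self) -> Set[Any]:
        return {KMeans}

    def _predict(self, input_data: InferenceData) -> InferenceData:
        # Keys are set by the input/output schemas given at upload time;
        # 'inputs' and 'predictions' are assumed names for this sketch.
        clusters = self.model.predict(input_data["inputs"])
        return {"predictions": np.asarray(clusters)}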

mac.inference.creation.InferenceBuilder

InferenceBuilder builds a concrete Inference, i.e. instantiates an Inference object, loads the appropriate model and assigns the model to the Inference.

classDiagram
    class InferenceBuilder {
        +create(config InferenceConfig) * Inference
        -inference()* Any
    }

Each model that is included requires its own InferenceBuilder. InferenceBuilder loads one model, then submits it to the Inference class when created. The Inference class checks the submitted model against its expected_model_types Set.

mac.inference.creation.InferenceBuilder Methods

Method | Returns | Description
create(config mac.config.inference.CustomInferenceConfig) (Required) | The custom Inference instance. | Creates an Inference subclass, then assigns a model and attributes. The CustomInferenceConfig is used to retrieve the config.model_path, which is a pathlib.Path object pointing to the folder where the model artifacts are saved. Every artifact loaded must be relative to config.model_path. This is set when the arbitrary python .zip file is uploaded and the environment for running it in Wallaroo is set. For example: loading the artifact vgg_clustering\feature_extractor.h5 would be set with config.model_path \ feature_extractor.h5. The model loaded must match an existing module. For our example, this is from sklearn.cluster import KMeans, and this must match the Inference expected_model_types.
inference | custom Inference instance. | Returns the instantiated custom Inference object created from the create method.
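
A matching builder might look like the following sketch. It reuses the hypothetical SampleClass above and the kmeans.pkl artifact from the earlier directory listing; loading with joblib is an assumption about how the artifact was serialized.

import joblib

from mac.config.inference import CustomInferenceConfig
from mac.inference.creation import InferenceBuilder

class SampleClassBuilder(InferenceBuilder):
    @property
    def inference(self) -> "SampleClass":
        # SampleClass is the hypothetical Inference subclass sketched above.
        return SampleClass()

    def create(self, config: CustomInferenceConfig) -> "SampleClass":
        # Every artifact must be loaded relative to config.model_path.
        inference = self.inference
        inference.model = joblib.load(config.model_path / "kmeans.pkl")
        return inference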

Arbitrary Python Runtime

Arbitrary Python models always run in the containerized model runtime.

MLFlow

Parameter | Description
Web Site | https://mlflow.org
Supported Libraries | mlflow==1.30.0
Runtime | Containerized aka mlflow

For models that do not fall under the supported model frameworks, organizations can use containerized MLFlow ML Models.

This guide details how to add ML Models from a model registry service into Wallaroo.

Wallaroo supports both public and private containerized model registries. See the Wallaroo Private Containerized Model Container Registry Guide for details on how to configure a Wallaroo instance with a private model registry.

Wallaroo users can register their trained MLFlow ML Models from a containerized model container registry into their Wallaroo instance and perform inferences with it through a Wallaroo pipeline.

As of this time, Wallaroo only supports MLFlow 1.30.0 containerized models. For information on how to containerize an MLFlow model, see the MLFlow Documentation.

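As a sketch, registering a containerized MLFlow model through the SDK might look like the following; the image reference and schemas are placeholders, and wl is assumed to be an established wallaroo.Client() connection.

import pyarrow as pa

# Placeholder schemas matching a simple 4-feature regression model.
input_schema = pa.schema([
    pa.field('inputs', pa.list_(pa.float64(), list_size=4))
])
output_schema = pa.schema([
    pa.field('predictions', pa.float64())
])

# The image URL below is an example only.
model = wl.register_model_image(
    name="mlflow-model-example",
    image="ghcr.io/example-org/mlflow-model-example:1.30.0"
).configure("mlflow", input_schema=input_schema, output_schema=output_schema)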

List Wallaroo Frameworks

Wallaroo frameworks are listed from the Wallaroo.Framework class. The following demonstrates listing all available supported frameworks.

from wallaroo.framework import Framework

[e.value for e in Framework]

    ['onnx',
    'tensorflow',
    'python',
    'keras',
    'sklearn',
    'pytorch',
    'xgboost',
    'hugging-face-feature-extraction',
    'hugging-face-image-classification',
    'hugging-face-image-segmentation',
    'hugging-face-image-to-text',
    'hugging-face-object-detection',
    'hugging-face-question-answering',
    'hugging-face-stable-diffusion-text-2-img',
    'hugging-face-summarization',
    'hugging-face-text-classification',
    'hugging-face-translation',
    'hugging-face-zero-shot-classification',
    'hugging-face-zero-shot-image-classification',
    'hugging-face-zero-shot-object-detection',
    'hugging-face-sentiment-analysis',
    'hugging-face-text-generation']

2.1.1 - Model Registry Service with Wallaroo SDK Demonstration

How to use the Wallaroo SDK with Model Registry Services

This tutorial can be downloaded as part of the Wallaroo Tutorials repository.

MLFlow Registry Model Upload Demonstration

Wallaroo users can register their trained machine learning models from a model registry into their Wallaroo instance and perform inferences with it through a Wallaroo pipeline.

This guide details how to add ML Models from a model registry service into a Wallaroo instance.

Artifact Requirements

Models are uploaded to the Wallaroo instance as a specific artifact - the “file” or other data that represents the model itself. This artifact must comply with the Wallaroo framework and version requirements or it will not be deployed.

This tutorial will:

  • Create a Wallaroo workspace and pipeline.
  • Show how to connect a Wallaroo Registry that connects to a Model Registry Service.
  • Use the registry connection details to upload a sample model to Wallaroo.
  • Perform a sample inference.

Prerequisites

  • A Wallaroo instance, version 2023.2.1 or above.
  • A Model (aka Artifact) Registry Service

Tutorial Steps

Import Libraries

We’ll start with importing the libraries we need for the tutorial.

import os
import wallaroo

Connect to the Wallaroo Instance through the User Interface

The next step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). For more information on Wallaroo Client settings, see the Client Connection guide.

wl = wallaroo.Client()

Connect to Model Registry

The Wallaroo Registry stores the URL and authentication token to the Model Registry service, with the assigned name. Note that in this demonstration all URLs and tokens are examples.

registry = wl.create_model_registry(name="JeffRegistry45", 
                                    token="dapi67c8c0b04606f730e78b7ae5e3221015-3", 
                                    url="https://sample.registry.service.azuredatabricks.net")
registry
Field | Value
Name | JeffRegistry45
URL | https://sample.registry.service.azuredatabricks.net
Workspaces | john.hummel@wallaroo.ai - Default Workspace
Created At | 2023-17-Jul 19:54:49
Updated At | 2023-17-Jul 19:54:49

List Model Registries

Registries associated with a workspace are listed with the Wallaroo.Client.list_model_registries() method.

# List all registries in this workspace
registries = wl.list_model_registries()
registries
name | registry url | created at | updated at
JeffRegistry45 | https://sample.registry.service.azuredatabricks.net | 2023-17-Jul 17:56:52 | 2023-17-Jul 17:56:52
JeffRegistry45 | https://sample.registry.service.azuredatabricks.net | 2023-17-Jul 19:54:49 | 2023-17-Jul 19:54:49

Create Workspace

For this demonstration, we will create a random Wallaroo workspace, then attach our registry to the workspace so it is accessible by other workspace users.

Add Registry to Workspace

Registries are assigned to a Wallaroo workspace with the Wallaroo.registry.add_registry_to_workspace method. This allows members of the workspace to access the registry connection. A registry can be associated with one or more workspaces.

Add Registry to Workspace Parameters

Parameter | Type | Description
workspace_id | Integer (Required) | The numerical identifier of the workspace.
# Make a random new workspace
import math
import random
num = math.floor(random.random()* 1000)
workspace_id = wl.create_workspace(f"test{num}").id()

registry.add_registry_to_workspace(workspace_id=workspace_id)
Field | Value
Name | JeffRegistry45
URL | https://sample.registry.service.azuredatabricks.net
Workspaces | test68, john.hummel@wallaroo.ai - Default Workspace
Created At | 2023-17-Jul 19:54:49
Updated At | 2023-17-Jul 19:54:49

Remove Registry from Workspace

Registries are removed from a Wallaroo workspace with the Registry remove_registry_from_workspace method.

Remove Registry from Workspace Parameters

Parameter | Type | Description
workspace_id | Integer (Required) | The numerical identifier of the workspace.
registry.remove_registry_from_workspace(workspace_id=workspace_id)
Field | Value
Name | JeffRegistry45
URL | https://sample.registry.service.azuredatabricks.net
Workspaces | john.hummel@wallaroo.ai - Default Workspace
Created At | 2023-17-Jul 19:54:49
Updated At | 2023-17-Jul 19:54:49

List Models in a Registry

A list of models available to the Wallaroo instance through the MLFlow Registry is retrieved with the Wallaroo.Registry.list_models() method.

registry_models = registry.list_models()
registry_models
Name | Registry User | Versions | Created At | Updated At
logreg1 | gib.bhojraj@wallaroo.ai | 1 | 2023-06-Jul 14:36:54 | 2023-06-Jul 14:36:56
sidekick-test | gib.bhojraj@wallaroo.ai | 1 | 2023-11-Jul 14:42:14 | 2023-11-Jul 14:42:14
testmodel | gib.bhojraj@wallaroo.ai | 1 | 2023-16-Jun 12:38:42 | 2023-06-Jul 15:03:41
testmodel2 | gib.bhojraj@wallaroo.ai | 1 | 2023-16-Jun 12:41:04 | 2023-29-Jun 18:08:33
verified-working | gib.bhojraj@wallaroo.ai | 1 | 2023-11-Jul 16:18:03 | 2023-11-Jul 16:57:54
wine_quality | gib.bhojraj@wallaroo.ai | 2 | 2023-16-Jun 13:05:53 | 2023-16-Jun 13:09:57

Select Model from Registry

Registry models are selected by listing them with the Wallaroo.Registry.list_models() method, then specifying the model to use.

single_registry_model = registry_models[4]
single_registry_model
Name | verified-working
Registry User | gib.bhojraj@wallaroo.ai
Versions | 1
Created At | 2023-11-Jul 16:18:03
Updated At | 2023-11-Jul 16:57:54

List Model Versions

The Registry Model versions attribute shows the complete list of versions for the particular model.

single_registry_model.versions()
Name | Version | Description
verified-working | 3 | None

List Model Version Artifacts

Artifacts belonging to a MLFlow registry model are listed with the Model Version list_artifacts() method. This returns all artifacts for the model.

single_registry_model.versions()[1].list_artifacts()
File Name | File Size | Full Path
MLmodel | 559B | https://sample.registry.service.azuredatabricks.net/api/2.0/dbfs/read?path=/databricks/mlflow-registry/9168792a16cb40a88de6959ef31e42a2/models/verified-working/MLmodel
conda.yaml | 182B | https://sample.registry.service.azuredatabricks.net/api/2.0/dbfs/read?path=/databricks/mlflow-registry/9168792a16cb40a88de6959ef31e42a2/models/verified-working/conda.yaml
model.pkl | 829B | https://sample.registry.service.azuredatabricks.net/api/2.0/dbfs/read?path=/databricks/mlflow-registry/9168792a16cb40a88de6959ef31e42a2/models/verified-working/model.pkl
python_env.yaml | 122B | https://sample.registry.service.azuredatabricks.net/api/2.0/dbfs/read?path=/databricks/mlflow-registry/9168792a16cb40a88de6959ef31e42a2/models/verified-working/python_env.yaml
requirements.txt | 73B | https://sample.registry.service.azuredatabricks.net/api/2.0/dbfs/read?path=/databricks/mlflow-registry/9168792a16cb40a88de6959ef31e42a2/models/verified-working/requirements.txt

Configure Data Schemas

To upload a ML Model to Wallaroo, the input and output schemas must be defined in pyarrow.lib.Schema format.

from wallaroo.framework import Framework
import pyarrow as pa

input_schema = pa.schema([
    pa.field('inputs', pa.list_(pa.float64(), list_size=4))
])

output_schema = pa.schema([
    pa.field('predictions', pa.int32()),
    pa.field('probabilities', pa.list_(pa.float64(), list_size=3))
])

Upload a Model from a Registry

Models are uploaded from a MLFlow Registry to the Wallaroo workspace with the Wallaroo.Registry.upload_model method.

Upload a Model from a Registry Parameters

Parameter | Type | Description
name | string (Required) | The name to assign the model once uploaded. Model names are unique within a workspace. Models assigned the same name as an existing model will be uploaded as a new model version.
path | string (Required) | The full path to the model artifact in the registry.
framework | string (Required) | The Wallaroo model Framework. See Model Uploads and Registrations Supported Frameworks.
input_schema | pyarrow.lib.Schema (Required for non-native runtimes) | The input schema in Apache Arrow schema format.
output_schema | pyarrow.lib.Schema (Required for non-native runtimes) | The output schema in Apache Arrow schema format.
model = registry.upload_model(
  name="verified-working", 
  path="https://sample.registry.service.azuredatabricks.net/api/2.0/dbfs/read?path=/databricks/mlflow-registry/9168792a16cb40a88de6959ef31e42a2/models/√erified-working/model.pkl", 
  framework=Framework.SKLEARN,
  input_schema=input_schema,
  output_schema=output_schema)
model
Name | verified-working
Version | cf194b65-65b2-4d42-a4e2-6ca6fa5bfc42
File Name | model.pkl
SHA | 5f4c25b0b564ab9fe0ea437424323501a460aa74463e81645a6419be67933ca4
Status | pending_conversion
Image Path | None
Updated At | 2023-17-Jul 17:57:23

Verify the Model Status

Once uploaded, the model will undergo conversion. The following will loop through the model status until it is ready. Once ready, it is available for deployment.

import time
while model.status() != "ready" and model.status() != "error":
    print(model.status())
    time.sleep(3)
print(model.status())
pending_conversion
pending_conversion
pending_conversion
pending_conversion
pending_conversion
pending_conversion
pending_conversion
pending_conversion
pending_conversion
pending_conversion
converting
converting
converting
converting
converting
converting
converting
converting
converting
converting
converting
converting
converting
converting
converting
ready

Model Runtime

Once uploaded and converted, the model runtime is derived. This determines whether to allocate resources to the pipeline’s native runtime environment or the containerized runtime environment. For more details, see the Wallaroo SDK Essentials Guide: Pipeline Deployment Configuration guide.

model.config().runtime()
'mlflow'

Deploy Pipeline

The model is uploaded and ready for use. We’ll add it as a step in our pipeline, then deploy the pipeline. For this example we allocate 0.5 CPUs to the native runtime environment and 1 CPU to the containerized runtime environment.

import os, json
from wallaroo.deployment_config import DeploymentConfigBuilder
deployment_config = DeploymentConfigBuilder().cpus(0.5).sidekick_cpus(model, 1).build()
pipeline = wl.build_pipeline("jefftest1")
pipeline = pipeline.add_model_step(model)
deployment = pipeline.deploy(deployment_config=deployment_config)
pipeline.status()
{'status': 'Running',
 'details': [],
 'engines': [{'ip': '10.244.3.148',
   'name': 'engine-86c7fc5c95-8kwh5',
   'status': 'Running',
   'reason': None,
   'details': [],
   'pipeline_statuses': {'pipelines': [{'id': 'jefftest1',
      'status': 'Running'}]},
   'model_statuses': {'models': [{'name': 'verified-working',
      'version': 'cf194b65-65b2-4d42-a4e2-6ca6fa5bfc42',
      'sha': '5f4c25b0b564ab9fe0ea437424323501a460aa74463e81645a6419be67933ca4',
      'status': 'Running'}]}}],
 'engine_lbs': [{'ip': '10.244.4.203',
   'name': 'engine-lb-584f54c899-tpv5b',
   'status': 'Running',
   'reason': None,
   'details': []}],
 'sidekicks': [{'ip': '10.244.0.225',
   'name': 'engine-sidekick-verified-working-43-74f957566d-9zdfh',
   'status': 'Running',
   'reason': None,
   'details': [],
   'statuses': '\n'}]}

Run Inference

A sample inference will be run. First the pandas DataFrame used for the inference is created, then the inference is run through the pipeline’s infer method.

import pandas as pd
from sklearn.datasets import load_iris

data = load_iris(as_frame=True)

X = data['data'].values
dataframe = pd.DataFrame({"inputs": data['data'][:2].values.tolist()})
dataframe
   inputs
0  [5.1, 3.5, 1.4, 0.2]
1  [4.9, 3.0, 1.4, 0.2]
deployment.infer(dataframe)
  | time | in.inputs | out.predictions | out.probabilities | check_failures
0 | 2023-07-17 17:59:18.840 | [5.1, 3.5, 1.4, 0.2] | 0 | [0.981814913291491, 0.018185072312411506, 1.43... | 0
1 | 2023-07-17 17:59:18.840 | [4.9, 3.0, 1.4, 0.2] | 0 | [0.9717552971628304, 0.02824467272952288, 3.01... | 0

Undeploy Pipelines

With the tutorial complete, the pipeline is undeployed to return the resources back to the cluster.

pipeline.undeploy()
name | jefftest1
created | 2023-07-17 17:59:05.922172+00:00
last_updated | 2023-07-17 17:59:06.684060+00:00
deployed | False
tags |
versions | c2cca319-fcad-47b2-9de0-ad5b2852d1a2, f1e6d1b5-96ee-46a1-bfdf-174310ff4270
steps | verified-working

2.2 - Wallaroo SDK Upload Tutorials: Arbitrary Python

How to upload different Arbitrary Python models to Wallaroo.

The following tutorials cover how to upload sample arbitrary python models into a Wallaroo instance.

Parameter | Description
Web Site | https://www.python.org/
Supported Libraries | python==3.8
Framework | Framework.CUSTOM aka custom
Runtime | Containerized aka flight

Arbitrary Python models, also known as Bring Your Own Predict (BYOP), allow for custom model deployments with supporting scripts and artifacts. These are used with pre-trained models (PyTorch, Tensorflow, etc.) along with whatever supporting artifacts they require. Supporting artifacts can include other Python modules, model files, etc. These are zipped with all scripts, artifacts, and a requirements.txt file that indicates what other Python modules need to be imported beyond those included in the typical Wallaroo platform.

Contrast this with Wallaroo Python models - aka “Python steps”. These are standalone python scripts that use the python libraries natively supported by the Wallaroo platform. These are used either for simple model deployment (such as ARIMA Statsmodels) or for data formatting, such as postprocessing steps. A Wallaroo Python model is composed of one Python script that matches the Wallaroo requirements.

Arbitrary Python File Requirements

Arbitrary Python (BYOP) models are uploaded to Wallaroo via a ZIP file with the following components:

Artifact | Type | Description
Python scripts (.py files) with classes that extend mac.inference.Inference and mac.inference.creation.InferenceBuilder | Python Script | Extend the classes mac.inference.Inference and mac.inference.creation.InferenceBuilder. These are included with the Wallaroo SDK. Further details are in Arbitrary Python Script Requirements. Note that there are no specified naming requirements for the classes that extend mac.inference.Inference and mac.inference.creation.InferenceBuilder - any qualified class name is sufficient as long as these two classes are extended as defined below.
requirements.txt | Python requirements file | This sets the Python libraries used for the arbitrary python model. These libraries should be targeted for Python 3.8 compliance. These requirements and the versions of libraries should be exactly the same between creating the model and deploying it in Wallaroo. This ensures that the script and methods will function exactly the same as during the model creation process.
Other artifacts | Files | Other models, files, and other artifacts used in support of this model.

For example, if the arbitrary Python model will be known as vgg_clustering, the contents may be in the following structure, with vgg_clustering as the storage directory:

vgg_clustering\
    feature_extractor.h5
    kmeans.pkl
    custom_inference.py
    requirements.txt

Note the inclusion of the custom_inference.py file. This file name is not required - any Python script or scripts that extend the classes listed above are sufficient. This Python script could have been named vgg_custom_model.py or any other name as long as it includes the extension of the classes listed above.

The sample arbitrary python model file is created with the command zip -r vgg_clustering.zip vgg_clustering/.

Wallaroo Arbitrary Python uses the Wallaroo SDK mac module, included in the Wallaroo SDK 2023.2.1 and above. See the Wallaroo SDK Install Guides for instructions on installing the Wallaroo SDK.

Arbitrary Python Script Requirements

The entry point of the arbitrary python model is any python script that extends the following classes. These are included with the Wallaroo SDK. The required methods that must be overridden are specified in each section below.

  • mac.inference.Inference interface serves model inferences based on submitted input. Its purpose is to serve inferences for any supported arbitrary model framework (e.g. scikit-learn, keras, etc.).

    classDiagram
        class Inference {
            <<Abstract>>
            +model Optional[Any]
            +expected_model_types()* Set
            +predict(input_data: InferenceData)*  InferenceData
            -raise_error_if_model_is_not_assigned() None
            -raise_error_if_model_is_wrong_type() None
        }
  • mac.inference.creation.InferenceBuilder builds a concrete Inference, i.e. instantiates an Inference object, loads the appropriate model, and assigns the model to the Inference object.

    classDiagram
        class InferenceBuilder {
            +create(config InferenceConfig) * Inference
            -inference()* Any
        }

mac.inference.Inference

mac.inference.Inference Objects

Object | Type | Description
model (Required) | [Any] | One or more objects that match the expected_model_types. This can be a ML Model (for inference use), a string (for data conversion), etc. See Arbitrary Python Examples for examples.

mac.inference.Inference Methods

Method | Returns | Description
expected_model_types (Required) | Set | Returns a Set of models expected for the inference as defined by the developer. Typically this is a set of one. Wallaroo checks the expected model types to verify that the model submitted through the InferenceBuilder method matches what this Inference class expects.
_predict (input_data: mac.types.InferenceData) (Required) | mac.types.InferenceData | The entry point for the Wallaroo inference, with the following input and output parameters that are defined when the model is uploaded.
  • mac.types.InferenceData: The input InferenceData is a Dictionary of numpy arrays derived from the input_schema detailed when the model is uploaded, defined in PyArrow.Schema format.
  • mac.types.InferenceData: The output is a Dictionary of numpy arrays as defined by the output parameters defined in PyArrow.Schema format.
The InferenceDataValidationError exception is raised when the input data does not match mac.types.InferenceData.
raise_error_if_model_is_not_assigned | N/A | Error raised when a model is not set to Inference.
raise_error_if_model_is_wrong_type | N/A | Error raised when the model does not match the expected_model_types.

In the following example, expected_model_types is defined for a KMeans model.

from typing import Any, Set

import mac.inference
from sklearn.cluster import KMeans

class SampleClass(mac.inference.Inference):
    @property
    def expected_model_types(self) -> Set[Any]:
        # a Set containing the single model type this Inference accepts
        return {KMeans}

mac.inference.creation.InferenceBuilder

InferenceBuilder builds a concrete Inference, i.e. instantiates an Inference object, loads the appropriate model and assigns the model to the Inference.

classDiagram
    class InferenceBuilder {
        +create(config InferenceConfig) * Inference
        -inference()* Any
    }

Each model that is included requires its own InferenceBuilder. InferenceBuilder loads one model, then submits it to the Inference class when created. The Inference class checks this class against its expected_model_types() Set.

mac.inference.creation.InferenceBuilder Methods

Method | Returns | Description
create(config mac.config.inference.CustomInferenceConfig) (Required) | The custom Inference instance. | Creates an Inference subclass, then assigns a model and attributes. The CustomInferenceConfig is used to retrieve the config.model_path, which is a pathlib.Path object pointing to the folder where the model artifacts are saved. Every artifact loaded must be relative to config.model_path. This is set when the arbitrary python .zip file is uploaded and the environment for running it in Wallaroo is set. For example: loading the artifact vgg_clustering\feature_extractor.h5 would be performed with config.model_path / "feature_extractor.h5". The model loaded must match an existing module. For our example, this is from sklearn.cluster import KMeans, and this must match the Inference expected_model_types.
inference | custom Inference instance | Returns the instantiated custom Inference object created from the create method.
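
Putting these pieces together, the following is a minimal sketch of a BYOP entry point. The mac classes and overridden methods are those documented above; the class names MyInference and MyInferenceBuilder and the model.pkl artifact name are illustrative placeholders, and the VGG16 tutorials below show a complete worked version.

import pickle
from typing import Any, Set

from mac.config.inference import CustomInferenceConfig
from mac.inference import Inference
from mac.inference.creation import InferenceBuilder
from mac.types import InferenceData
from sklearn.cluster import KMeans

class MyInference(Inference):
    """Minimal Inference subclass wrapping a trained KMeans model."""

    @property
    def expected_model_types(self) -> Set[Any]:
        return {KMeans}

    @Inference.model.setter  # type: ignore
    def model(self, model) -> None:
        # rejects any model that is not in expected_model_types
        self._raise_error_if_model_is_wrong_type(model)
        self._model = model

    def _predict(self, input_data: InferenceData) -> InferenceData:
        # input_data arrives as a dictionary of numpy arrays keyed by the
        # fields of the input_schema supplied at upload time
        return {"predictions": self.model.predict(input_data["inputs"])}

class MyInferenceBuilder(InferenceBuilder):
    """Loads the model artifact and assigns it to the Inference instance."""

    @property
    def inference(self) -> MyInference:
        return MyInference

    def create(self, config: CustomInferenceConfig) -> MyInference:
        inference = self.inference()
        # every artifact is loaded relative to config.model_path;
        # "model.pkl" is a placeholder artifact name for this sketch
        with open((config.model_path / "model.pkl").as_posix(), "rb") as fp:
            inference.model = pickle.load(fp)
        return inference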

Arbitrary Python Runtime

Arbitrary Python models always run in the containerized model runtime.

2.2.1 - Wallaroo SDK Upload Arbitrary Python Tutorial: Generate VGG16 Model

How to generate a VGG16 model for arbitrary python model deployment in Wallaroo.

This tutorial can be downloaded as part of the Wallaroo Tutorials repository.

Wallaroo SDK Upload Arbitrary Python Tutorial: Generate Model

This tutorial demonstrates how to use arbitrary python as an ML Model in Wallaroo. Arbitrary Python allows organizations to use Python scripts that require specific libraries and artifacts as models in the Wallaroo engine. This allows for highly flexible use of ML models with supporting scripts.

Tutorial Goals

This tutorial is split into two parts:

  • Wallaroo SDK Upload Arbitrary Python Tutorial: Generate Model: Train a dummy KMeans model for clustering images using a pre-trained VGG16 model on cifar10 as a feature extractor. The Python entry points used for Wallaroo deployment will be added and described.
    • A copy of the arbitrary Python model models/model-auto-conversion-BYOP-vgg16-clustering.zip is included in this tutorial, so this step can be skipped.
  • Arbitrary Python Tutorial Deploy Model in Wallaroo Upload and Deploy: Deploys the KMeans model in an arbitrary Python package in Wallaroo, and perform sample inferences. The file models/model-auto-conversion-BYOP-vgg16-clustering.zip is provided so users can go right to testing deployment.

Arbitrary Python Script Requirements

The entry point of the arbitrary python model is a Python script that must include the following.

  • class ImageClustering(Inference): The default inference class. This is used to perform the actual inferences. Wallaroo uses the _predict method to receive the inference data and call the appropriate functions for the inference.
    • def __init__: Used to initialize this class and load in any other classes or other required settings.
    • def expected_model_types: Used by Wallaroo to anticipate what model types are used by the script.
    • def model(self, model): Defines the model used for the inference. Accepts the model instance used in the inference.
      • self._raise_error_if_model_is_wrong_type(model): Returns the error if the wrong model type is used. This verifies that only the anticipated model type is used for the inference.
      • self._model = model: Sets the submitted model as the model for this class, provided _raise_error_if_model_is_wrong_type is not raised.
    • def _predict(self, input_data: InferenceData): This is the entry point for Wallaroo to perform the inference. This will receive the inference data, then perform whatever steps and return a dictionary of numpy arrays.
  • class ImageClusteringBuilder(InferenceBuilder): Loads the model and prepares it for inferencing.
    • def inference(self) -> ImageClustering: Sets the inference class being used for the inferences.
    • def create(self, config: CustomInferenceConfig) -> ImageClustering: Creates an inference subclass, assigning the model and any attributes required for it to function.

All other methods used for the functioning of these classes are optional, as long as they meet the requirements listed above.

Tutorial Prerequisites

  • A Wallaroo version 2023.2.1 or above instance.

References

VGG16 Model Training Steps

This process will train a dummy KMeans model for clustering images using a pre-trained VGG16 model on cifar10 as a feature extractor. This model consists of the following elements:

  • All elements are stored in the folder models/vgg16_clustering. This will be converted to the zip file model-auto-conversion-BYOP-vgg16-clustering.zip.
  • models/vgg16_clustering will contain the following:
    • All necessary model artifacts
    • One or multiple Python files implementing the classes Inference and InferenceBuilder. The implemented classes can have any naming they desire as long as they inherit from the appropriate base classes.
    • a requirements.txt file with all necessary pip requirements to successfully run the inference

Import Libraries

The first step is to import the libraries we’ll be using. These are included by default in the Wallaroo instance’s JupyterHub service.

import numpy as np
import pandas as pd
import json
import os
import pickle
import pyarrow as pa
import tensorflow as tf
import warnings
warnings.filterwarnings('ignore')

from sklearn.cluster import KMeans
from tensorflow.keras.datasets import cifar10
from tensorflow.keras import Model
from tensorflow.keras.layers import Flatten
2023-07-07 16:16:26.511340: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2023-07-07 16:16:26.511369: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.

Variables

We’ll use these variables in later steps rather than hard code them in. In this case, the directory where we’ll store our artifacts.

model_directory = './models/vgg16_clustering'

Load Data Set

In this section, we will load our sample data and shape it.

# Load and preprocess the CIFAR-10 dataset
(X_train, y_train), (X_test, y_test) = cifar10.load_data()

# Normalize the pixel values to be between 0 and 1
X_train = X_train / 255.0
X_test = X_test / 255.0
X_train.shape
(50000, 32, 32, 3)

Train KMeans with VGG16 as feature extractor

Now we will train our model.

pretrained_model = tf.keras.applications.VGG16(include_top=False, 
                                               weights='imagenet', 
                                               input_shape=(32, 32, 3)
                                               )
embedding_model = Model(inputs=pretrained_model.input, 
                        outputs=Flatten()(pretrained_model.output))
2023-07-07 16:16:30.207936: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2023-07-07 16:16:30.207966: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)
2023-07-07 16:16:30.207987: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (jupyter-john-2ehummel-40wallaroo-2eai): /proc/driver/nvidia/version does not exist
2023-07-07 16:16:30.208169: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
X_train_embeddings = embedding_model.predict(X_train[:100])
X_test_embeddings = embedding_model.predict(X_test[:100])
kmeans = KMeans(n_clusters=2, random_state=0).fit(X_train_embeddings)

Save Models

Let’s first create the directory where the model artifacts will be saved:

os.makedirs(model_directory, exist_ok=True)

And now save the two models:

with open(f'{model_directory}/kmeans.pkl', 'wb') as fp:
    pickle.dump(kmeans, fp)
embedding_model.save(f'{model_directory}/feature_extractor.h5')
WARNING:tensorflow:Compiled the loaded model, but the compiled metrics have yet to be built. `model.compile_metrics` will be empty until you train or evaluate the model.

All needed model artifacts have now been saved under our model directory.

Sample Arbitrary Python Script

The following shows an example of extending the Inference and InferenceBuilder classes for our specific model. This script is located in our model directory under ./models/vgg16_clustering.

"""This module features an example implementation of a custom Inference and its
corresponding InferenceBuilder."""

import pathlib
import pickle
from typing import Any, Set

import tensorflow as tf
from mac.config.inference import CustomInferenceConfig
from mac.inference import Inference
from mac.inference.creation import InferenceBuilder
from mac.types import InferenceData
from sklearn.cluster import KMeans

class ImageClustering(Inference):
    """Inference class for image clustering, that uses
    a pre-trained VGG16 model on cifar10 as a feature extractor
    and performs clustering on a trained KMeans model.

    Attributes:
        - feature_extractor: The embedding model we will use
        as a feature extractor (i.e. a trained VGG16).
        - expected_model_types: A set of model instance types that are expected by this inference.
        - model: The model on which the inference is calculated.
    """

    def __init__(self, feature_extractor: tf.keras.Model):
        self.feature_extractor = feature_extractor
        super().__init__()

    @property
    def expected_model_types(self) -> Set[Any]:
        return {KMeans}

    @Inference.model.setter  # type: ignore
    def model(self, model) -> None:
        """Sets the model on which the inference is calculated.

        :param model: A model instance on which the inference is calculated.

        :raises TypeError: If the model is not an instance of expected_model_types
            (i.e. KMeans).
        """
        self._raise_error_if_model_is_wrong_type(model) # this will make sure an error will be raised if the model is of wrong type
        self._model = model

    def _predict(self, input_data: InferenceData) -> InferenceData:
        """Calculates the inference on the given input data.
        This is the core function that each subclass needs to implement
        in order to calculate the inference.

        :param input_data: The input data on which the inference is calculated.
        It is of type InferenceData, meaning it comes as a dictionary of numpy
        arrays.

        :raises InferenceDataValidationError: If the input data is not valid.
        Ideally, every subclass should raise this error if the input data is not valid.

        :return: The output of the model, that is a dictionary of numpy arrays.
        """

        # input_data maps to the input_schema we have defined
        # with PyArrow, coming as a dictionary of numpy arrays
        inputs = input_data["images"]

        # Forward inputs to the models
        embeddings = self.feature_extractor(inputs)
        predictions = self.model.predict(embeddings.numpy())

        # Return predictions as dictionary of numpy arrays
        return {"predictions": predictions}

class ImageClusteringBuilder(InferenceBuilder):
    """InferenceBuilder subclass for ImageClustering, that loads
    a pre-trained VGG16 model on cifar10 as a feature extractor
    and a trained KMeans model, and creates an ImageClustering object."""

    @property
    def inference(self) -> ImageClustering:
        return ImageClustering

    def create(self, config: CustomInferenceConfig) -> ImageClustering:
        """Creates an Inference subclass and assigns a model and additionally
        needed attributes to it.

        :param config: Custom inference configuration. In particular, we're
        interested in `config.model_path` that is a pathlib.Path object
        pointing to the folder where the model artifacts are saved.
        Every artifact we need to load from this folder has to be
        relative to `config.model_path`.

        :return: A custom Inference instance.
        """
        feature_extractor = self._load_feature_extractor(
            config.model_path / "feature_extractor.h5"
        )
        inference = self.inference(feature_extractor)
        model = self._load_model(config.model_path / "kmeans.pkl")
        inference.model = model

        return inference

    def _load_feature_extractor(
        self, file_path: pathlib.Path
    ) -> tf.keras.Model:
        return tf.keras.models.load_model(file_path)

    def _load_model(self, file_path: pathlib.Path) -> KMeans:
        with open(file_path.as_posix(), "rb") as fp:
            model = pickle.load(fp)
        return model

Create Requirements File

As a last step we need to create a requirements.txt file and save it under our vgg16_clustering/ directory. The file should contain all the necessary pip requirements needed to run the inference. It will have this data inside.

tensorflow==2.8.0
scikit-learn==1.2.2

Zip model folder

Assuming we have stored the following files inside our model directory models/vgg16_clustering/:

  1. feature_extractor.h5
  2. kmeans.pkl
  3. custom_inference.py
  4. requirements.txt

Now we will zip the folder. This is performed with the zip command and the -r option to zip the contents of the entire directory.

zip -r model-auto-conversion-BYOP-vgg16-clustering.zip vgg16_clustering/

The arbitrary Python custom model can now be uploaded to the Wallaroo instance.
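
As a brief preview of the next tutorial, the upload itself uses the Wallaroo SDK upload_model method with Framework.CUSTOM, plus input and output schemas that must match the dictionary keys used by _predict. The following is a minimal sketch that assumes a connected Wallaroo client and the zip file generated above.

import pyarrow as pa
import wallaroo
from wallaroo.framework import Framework

wl = wallaroo.Client()

# the schema fields must match the dictionary keys used by _predict
input_schema = pa.schema([
    pa.field('images', pa.list_(
        pa.list_(
            pa.list_(pa.int64(), list_size=3),
            list_size=32
        ),
        list_size=32
    )),
])
output_schema = pa.schema([
    pa.field('predictions', pa.int64()),
])

model = wl.upload_model('vgg16-clustering',
                        './models/model-auto-conversion-BYOP-vgg16-clustering.zip',
                        framework=Framework.CUSTOM,
                        input_schema=input_schema,
                        output_schema=output_schema)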

2.2.2 - Wallaroo SDK Upload Arbitrary Python Tutorial: Deploy VGG16 Model

How to deploy a VGG16 model as an arbitrary python model in Wallaroo.

This tutorial can be downloaded as part of the Wallaroo Tutorials repository.

Arbitrary Python Tutorial Deploy Model in Wallaroo Upload and Deploy

This tutorial demonstrates how to use arbitrary python as an ML Model in Wallaroo. Arbitrary Python allows organizations to use Python scripts that require specific libraries and artifacts as models in the Wallaroo engine. This allows for highly flexible use of ML models with supporting scripts.

Tutorial Goals

This tutorial is split into two parts:

  • Wallaroo SDK Upload Arbitrary Python Tutorial: Generate Model: Train a dummy KMeans model for clustering images using a pre-trained VGG16 model on cifar10 as a feature extractor. The Python entry points used for Wallaroo deployment will be added and described.
    • A copy of the arbitrary Python model models/model-auto-conversion-BYOP-vgg16-clustering.zip is included in this tutorial, so this step can be skipped.
  • Arbitrary Python Tutorial Deploy Model in Wallaroo Upload and Deploy: Deploys the KMeans model in an arbitrary Python package in Wallaroo, and perform sample inferences. The file models/model-auto-conversion-BYOP-vgg16-clustering.zip is provided so users can go right to testing deployment.

Arbitrary Python Script Requirements

The entry point of the arbitrary python model is a Python script that must include the following.

  • class ImageClustering(Inference): The default inference class. This is used to perform the actual inferences. Wallaroo uses the _predict method to receive the inference data and call the appropriate functions for the inference.
    • def __init__: Used to initialize this class and load in any other classes or other required settings.
    • def expected_model_types: Used by Wallaroo to anticipate what model types are used by the script.
    • def model(self, model): Defines the model used for the inference. Accepts the model instance used in the inference.
      • self._raise_error_if_model_is_wrong_type(model): Returns the error if the wrong model type is used. This verifies that only the anticipated model type is used for the inference.
      • self._model = model: Sets the submitted model as the model for this class, provided _raise_error_if_model_is_wrong_type is not raised.
    • def _predict(self, input_data: InferenceData): This is the entry point for Wallaroo to perform the inference. This will receive the inference data, then perform whatever steps and return a dictionary of numpy arrays.
  • class ImageClusteringBuilder(InferenceBuilder): Loads the model and prepares it for inferencing.
    • def inference(self) -> ImageClustering: Sets the inference class being used for the inferences.
    • def create(self, config: CustomInferenceConfig) -> ImageClustering: Creates an inference subclass, assigning the model and any attributes required for it to function.

All other methods used for the functioning of these classes are optional, as long as they meet the requirements listed above.

Tutorial Prerequisites

  • A Wallaroo version 2023.2.1 or above instance.

References

Tutorial Steps

Import Libraries

The first step is to import the libraries we’ll be using. These are included by default in the Wallaroo instance’s JupyterHub service.

import numpy as np
import pandas as pd
import json
import os
import pickle
import pyarrow as pa
import tensorflow as tf
import wallaroo

from sklearn.cluster import KMeans
from tensorflow.keras.datasets import cifar10
from tensorflow.keras import Model
from tensorflow.keras.layers import Flatten
from wallaroo.pipeline   import Pipeline
from wallaroo.deployment_config import DeploymentConfigBuilder
from wallaroo.framework import Framework
from wallaroo.object import EntityNotFoundError

Open a Connection to Wallaroo

The next step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). If logging in externally, update the wallarooPrefix and wallarooSuffix variables with the proper DNS information. For more information on Wallaroo DNS settings, see the Wallaroo DNS Integration Guide.

wl = wallaroo.Client()

Set Variables and Helper Functions

We’ll set the name of our workspace, pipeline, models and files. Workspace names must be unique across the Wallaroo instance. For this, we’ll add in a randomly generated 4 character suffix to the workspace name to prevent collisions with other users’ workspaces. If running this tutorial, we recommend hard coding the workspace name so it will function in the same workspace each time it’s run.

We’ll set up some helper functions that will either use existing workspaces and pipelines, or create them if they do not already exist.

# import string
# import random

# # make a random 4 character suffix to prevent overwriting other user's workspaces
# suffix= ''.join(random.choice(string.ascii_lowercase) for i in range(4))

suffix=''

workspace_name = f'vgg16-clustering-workspace{suffix}'
pipeline_name = 'vgg16-clustering-pipeline'

model_name = 'vgg16-clustering'
model_file_name = './models/model-auto-conversion-BYOP-vgg16-clustering.zip'
def get_workspace(name):
    workspace = None
    for ws in wl.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = wl.create_workspace(name)
    return workspace

def get_pipeline(name):
    try:
        pipeline = wl.pipelines_by_name(name)[0]
    except EntityNotFoundError:
        pipeline = wl.build_pipeline(name)
    return pipeline

Create Workspace and Pipeline

We will now create the Wallaroo workspace to store our model and set it as the current workspace. Future commands will default to this workspace for pipeline creation, model uploads, etc. We’ll create our Wallaroo pipeline that is used to deploy our arbitrary Python model.

workspace = get_workspace(workspace_name)
wl.set_current_workspace(workspace)

pipeline = get_pipeline(pipeline_name)

Upload Arbitrary Python Model

Arbitrary Python models are uploaded to Wallaroo through the Wallaroo Client upload_model method.

Upload Arbitrary Python Model Parameters

The following parameters are required for Arbitrary Python models. Note that while some fields are considered as optional for the upload_model method, they are required for proper uploading of an Arbitrary Python model to Wallaroo.

Parameter | Type | Description
name | string (Required) | The name of the model. Model names are unique per workspace. Models that are uploaded with the same name are assigned as a new version of the model.
path | string (Required) | The path to the model file being uploaded.
framework | string (Upload Method Optional, Arbitrary Python model Required) | Set as Framework.CUSTOM.
input_schema | pyarrow.lib.Schema (Upload Method Optional, Arbitrary Python model Required) | The input schema in Apache Arrow schema format.
output_schema | pyarrow.lib.Schema (Upload Method Optional, Arbitrary Python model Required) | The output schema in Apache Arrow schema format.
convert_wait | bool (Upload Method Optional, Arbitrary Python model Optional) (Default: True) |
  • True: Waits in the script for the model conversion completion.
  • False: Proceeds with the script without waiting for the model conversion process to display complete.

Once the upload process starts, the model is containerized by the Wallaroo instance. This process may take up to 10 minutes.
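
When convert_wait is set to False, the upload call returns while containerization continues in the background. A minimal polling sketch, assuming model is the object returned by the upload_model call below, mirrors the status loop used in the registry tutorial earlier:

import time

# poll until the conversion either completes or fails
while model.status() not in ("ready", "error"):
    print(model.status())
    time.sleep(10)
print(model.status())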

Upload Arbitrary Python Model Return

The following is returned with a successful model upload and conversion.

Field | Type | Description
name | string | The name of the model.
version | string | The model version as a unique UUID.
file_name | string | The file name of the model as stored in Wallaroo.
image_path | string | The image used to deploy the model in the Wallaroo engine.
last_update_time | DateTime | When the model was last updated.

For our example, we’ll start with setting the input_schema and output_schema that is expected by our ImageClustering._predict() method.

input_schema = pa.schema([
    pa.field('images', pa.list_(
        pa.list_(
            pa.list_(
                pa.int64(),
                list_size=3
            ),
            list_size=32
        ),
        list_size=32
    )),
])

output_schema = pa.schema([
    pa.field('predictions', pa.int64()),
])

Upload Model

Now we’ll upload our model. The framework is Framework.CUSTOM for arbitrary Python models, and we’ll specify the input and output schemas for the upload.

model = wl.upload_model(model_name, 
                        model_file_name, 
                        framework=Framework.CUSTOM, 
                        input_schema=input_schema, 
                        output_schema=output_schema, 
                        convert_wait=True)
model
Waiting for model loading - this will take up to 10.0min.
Model is pending loading to a container runtime..
Model is attempting loading to a container runtime..........................successful

Ready

Name | vgg16-clustering
Version | 49e8e007-27ae-49da-8a69-6ed5c4aec890
File Name | model-auto-conversion-BYOP-vgg16-clustering.zip
SHA | 7bb3362b1768c92ea7e593451b2b8913d3b7616c19fd8d25b73fb6990f9283e0
Status | ready
Image Path | proxy.replicated.com/proxy/wallaroo/ghcr.io/wallaroolabs/mlflow-deploy:v2023.4.0-main-4005
Architecture | None
Updated At | 2023-20-Oct 15:40:13
model.config().runtime()
'flight'

Deploy Pipeline

The model is uploaded and ready for use. We’ll add it as a step in our pipeline, then deploy the pipeline. For this example we allocate 0.25 CPU and 4 Gi of RAM to the pipeline through the pipeline’s deployment configuration.

pipeline.add_model_step(model)
name | vgg16-clustering-pipeline
created | 2023-10-20 15:37:53.960266+00:00
last_updated | 2023-10-20 15:37:53.960266+00:00
deployed | (none)
tags |
versions | a7cc9967-3311-4540-93d6-9ea30fa48cd5
steps |
published | False
deployment_config = DeploymentConfigBuilder() \
    .cpus(0.25).memory('4Gi') \
    .build()

pipeline.deploy(deployment_config=deployment_config)
pipeline.status()
{'status': 'Error',
 'details': [],
 'engines': [{'ip': '10.244.3.130',
   'name': 'engine-6fcbccd45c-wrrfq',
   'status': 'Running',
   'reason': None,
   'details': [],
   'pipeline_statuses': {'pipelines': [{'id': 'vgg16-clustering-pipeline',
      'status': 'Running'}]},
   'model_statuses': {'models': [{'name': 'vgg16-clustering',
      'version': '49e8e007-27ae-49da-8a69-6ed5c4aec890',
      'sha': '7bb3362b1768c92ea7e593451b2b8913d3b7616c19fd8d25b73fb6990f9283e0',
      'status': 'Running'}]}}],
 'engine_lbs': [{'ip': '10.244.2.188',
   'name': 'engine-lb-584f54c899-ssbjw',
   'status': 'Running',
   'reason': None,
   'details': []}],
 'sidekicks': [{'ip': '10.244.3.129',
   'name': 'engine-sidekick-vgg16-clustering-55-7bcb767b67-ljl6k',
   'status': 'Running',
   'reason': None,
   'details': [],
   'statuses': None}]}

Run inference

Everything is in place - we’ll now run a sample inference with some toy data. In this case we’re randomly generating some values in the data shape the model expects, then submitting an inference request through our deployed pipeline.

input_data = {
        "images": [np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)] * 2,
}
dataframe = pd.DataFrame(input_data)
dataframe
   images
0  [[[181, 128, 112], [109, 189, 153], [127, 171,...
1  [[[181, 128, 112], [109, 189, 153], [127, 171,...
pipeline.infer(dataframe, timeout=10000)
  | time | in.images | out.predictions | check_failures
0 | 2023-10-20 15:41:48.657 | [181, 128, 112, 109, 189, 153, 127, 171, 206, ... | 1 | 0
1 | 2023-10-20 15:41:48.657 | [181, 128, 112, 109, 189, 153, 127, 171, 206, ... | 1 | 0

Undeploy Pipelines

The inference is successful, so we will undeploy the pipeline and return the resources back to the cluster.

pipeline.undeploy()

2.3 - Wallaroo SDK Upload Tutorials: Hugging Face

How to upload different Hugging Face models to Wallaroo.

The following tutorials cover how to upload sample Hugging Face models. The complete list of supported Hugging Face pipelines is as follows.

Parameter | Description
Web Site | https://huggingface.co/models
Supported Libraries |
  • transformers==4.34.1
  • diffusers==0.14.0
  • accelerate==0.23.0
  • torchvision==0.14.1
  • torch==1.13.1
Frameworks | The following Hugging Face pipelines are supported by Wallaroo.
  • Framework.HUGGING_FACE_FEATURE_EXTRACTION aka hugging-face-feature-extraction
  • Framework.HUGGING_FACE_IMAGE_CLASSIFICATION aka hugging-face-image-classification
  • Framework.HUGGING_FACE_IMAGE_SEGMENTATION aka hugging-face-image-segmentation
  • Framework.HUGGING_FACE_IMAGE_TO_TEXT aka hugging-face-image-to-text
  • Framework.HUGGING_FACE_OBJECT_DETECTION aka hugging-face-object-detection
  • Framework.HUGGING_FACE_QUESTION_ANSWERING aka hugging-face-question-answering
  • Framework.HUGGING_FACE_STABLE_DIFFUSION_TEXT_2_IMG aka hugging-face-stable-diffusion-text-2-img
  • Framework.HUGGING_FACE_SUMMARIZATION aka hugging-face-summarization
  • Framework.HUGGING_FACE_TEXT_CLASSIFICATION aka hugging-face-text-classification
  • Framework.HUGGING_FACE_TRANSLATION aka hugging-face-translation
  • Framework.HUGGING_FACE_ZERO_SHOT_CLASSIFICATION aka hugging-face-zero-shot-classification
  • Framework.HUGGING_FACE_ZERO_SHOT_IMAGE_CLASSIFICATION aka hugging-face-zero-shot-image-classification
  • Framework.HUGGING_FACE_ZERO_SHOT_OBJECT_DETECTION aka hugging-face-zero-shot-object-detection
  • Framework.HUGGING_FACE_SENTIMENT_ANALYSIS aka hugging-face-sentiment-analysis
  • Framework.HUGGING_FACE_TEXT_GENERATION aka hugging-face-text-generation
  • Framework.HUGGING_FACE_AUTOMATIC_SPEECH_RECOGNITION aka hugging-face-automatic-speech-recognition
Runtime | Containerized aka flight

During the model upload process, the Wallaroo instance will attempt to convert the model to a Native Wallaroo Runtime. If unsuccessful, it will create a Wallaroo Containerized Runtime for the model. See the model deployment section for details on how to configure pipeline resources based on the model’s runtime.
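
Because the final runtime is only known after conversion, one possible pattern is to branch the deployment configuration on the reported runtime. This is a sketch using only the deployment options shown in these tutorials; the resource values are illustrative, and model is assumed to be an uploaded model object.

from wallaroo.deployment_config import DeploymentConfigBuilder

# the runtime is derived during conversion, e.g. 'flight' or 'mlflow'
runtime = model.config().runtime()

if runtime in ('flight', 'mlflow'):
    # containerized runtime: allocate CPU to the model's sidekick container
    deployment_config = DeploymentConfigBuilder() \
        .cpus(0.25).memory('1Gi') \
        .sidekick_cpus(model, 1) \
        .build()
else:
    # native runtime: the model runs inside the engine itself
    deployment_config = DeploymentConfigBuilder() \
        .cpus(1).memory('1Gi') \
        .build()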

Hugging Face Schemas

Input and output schemas for each Hugging Face pipeline are defined below. Note that adding additional inputs not specified below will raise errors, except for the following:

  • Framework.HUGGING_FACE_IMAGE_TO_TEXT
  • Framework.HUGGING_FACE_TEXT_CLASSIFICATION
  • Framework.HUGGING_FACE_SUMMARIZATION
  • Framework.HUGGING_FACE_TRANSLATION

Additional inputs added to these Hugging Face pipelines will be passed as key/value pair arguments to the model’s generate method. If the argument is not required, the model will default to the values coded in the original Hugging Face model’s source code.

See the Hugging Face Pipeline documentation for more details on each pipeline and framework.

Wallaroo Framework: Framework.HUGGING_FACE_FEATURE_EXTRACTION

Schemas:

input_schema = pa.schema([
    pa.field('inputs', pa.string())
])
output_schema = pa.schema([
    pa.field('output', pa.list_(
        pa.list_(
            pa.float64(),
            list_size=128
        ),
    ))
])
Wallaroo Framework: Framework.HUGGING_FACE_IMAGE_CLASSIFICATION

Schemas:

input_schema = pa.schema([
    pa.field('inputs', pa.list_(
        pa.list_(
            pa.list_(
                pa.int64(),
                list_size=3
            ),
            list_size=100
        ),
        list_size=100
    )),
    pa.field('top_k', pa.int64()),
])

output_schema = pa.schema([
    pa.field('score', pa.list_(pa.float64(), list_size=2)),
    pa.field('label', pa.list_(pa.string(), list_size=2)),
])
Wallaroo Framework: Framework.HUGGING_FACE_IMAGE_SEGMENTATION

Schemas:

input_schema = pa.schema([
    pa.field('inputs', 
        pa.list_(
            pa.list_(
                pa.list_(
                    pa.int64(),
                    list_size=3
                ),
                list_size=100
            ),
        list_size=100
    )),
    pa.field('threshold', pa.float64()),
    pa.field('mask_threshold', pa.float64()),
    pa.field('overlap_mask_area_threshold', pa.float64()),
])

output_schema = pa.schema([
    pa.field('score', pa.list_(pa.float64())),
    pa.field('label', pa.list_(pa.string())),
    pa.field('mask', 
        pa.list_(
            pa.list_(
                pa.list_(
                    pa.int64(),
                    list_size=100
                ),
                list_size=100
            ),
    )),
])
Wallaroo Framework: Framework.HUGGING_FACE_IMAGE_TO_TEXT

Any parameter that is not part of the required inputs list will be forwarded to the model as a key/value pair to the underlying model’s generate method. If the additional input is not supported by the model, an error will be returned.

Schemas:

input_schema = pa.schema([
    pa.field('inputs', pa.list_( #required
        pa.list_(
            pa.list_(
                pa.int64(),
                list_size=3
            ),
            list_size=100
        ),
        list_size=100
    )),
    # pa.field('max_new_tokens', pa.int64()),  # optional
])

output_schema = pa.schema([
    pa.field('generated_text', pa.list_(pa.string())),
])
Wallaroo Framework: Framework.HUGGING_FACE_OBJECT_DETECTION

Schemas:

input_schema = pa.schema([
    pa.field('inputs', 
        pa.list_(
            pa.list_(
                pa.list_(
                    pa.int64(),
                    list_size=3
                ),
                list_size=100
            ),
        list_size=100
    )),
    pa.field('threshold', pa.float64()),
])

output_schema = pa.schema([
    pa.field('score', pa.list_(pa.float64())),
    pa.field('label', pa.list_(pa.string())),
    pa.field('box', 
        pa.list_( # dynamic output, i.e. dynamic number of boxes per input image, each sublist contains the 4 box coordinates 
            pa.list_(
                    pa.int64(),
                    list_size=4
                ),
            ),
    ),
])
Wallaroo Framework: Framework.HUGGING_FACE_QUESTION_ANSWERING

Schemas:

input_schema = pa.schema([
    pa.field('question', pa.string()),
    pa.field('context', pa.string()),
    pa.field('top_k', pa.int64()),
    pa.field('doc_stride', pa.int64()),
    pa.field('max_answer_len', pa.int64()),
    pa.field('max_seq_len', pa.int64()),
    pa.field('max_question_len', pa.int64()),
    pa.field('handle_impossible_answer', pa.bool_()),
    pa.field('align_to_words', pa.bool_()),
])

output_schema = pa.schema([
    pa.field('score', pa.float64()),
    pa.field('start', pa.int64()),
    pa.field('end', pa.int64()),
    pa.field('answer', pa.string()),
])
Wallaroo Framework: Framework.HUGGING_FACE_STABLE_DIFFUSION_TEXT_2_IMG

Schemas:

input_schema = pa.schema([
    pa.field('prompt', pa.string()),
    pa.field('height', pa.int64()),
    pa.field('width', pa.int64()),
    pa.field('num_inference_steps', pa.int64()), # optional
    pa.field('guidance_scale', pa.float64()), # optional
    pa.field('negative_prompt', pa.string()), # optional
    pa.field('num_images_per_prompt', pa.string()), # optional
    pa.field('eta', pa.float64()) # optional
])

output_schema = pa.schema([
    pa.field('images', pa.list_(
        pa.list_(
            pa.list_(
                pa.int64(),
                list_size=3
            ),
            list_size=128
        ),
        list_size=128
    )),
])
Wallaroo Framework: Framework.HUGGING_FACE_SUMMARIZATION

Any parameter that is not part of the required inputs list will be forwarded to the model as a key/value pair to the underlying model’s generate method. If the additional input is not supported by the model, an error will be returned.

Schemas:

input_schema = pa.schema([
    pa.field('inputs', pa.string()),
    pa.field('return_text', pa.bool_()),
    pa.field('return_tensors', pa.bool_()),
    pa.field('clean_up_tokenization_spaces', pa.bool_()),
    # pa.field('extra_field', pa.int64()), # every extra field you specify will be forwarded as a key/value pair
])

output_schema = pa.schema([
    pa.field('summary_text', pa.string()),
])
Wallaroo Framework: Framework.HUGGING_FACE_TEXT_CLASSIFICATION

Schemas:

input_schema = pa.schema([
    pa.field('inputs', pa.string()), # required
    pa.field('top_k', pa.int64()), # optional
    pa.field('function_to_apply', pa.string()), # optional
])

output_schema = pa.schema([
    pa.field('label', pa.list_(pa.string(), list_size=2)), # list with a number of items same as top_k, list_size can be skipped but may lead to worse performance
    pa.field('score', pa.list_(pa.float64(), list_size=2)), # list with a number of items same as top_k, list_size can be skipped but may lead to worse performance
])
Wallaroo Framework: Framework.HUGGING_FACE_TRANSLATION

Any parameter that is not part of the required inputs list will be forwarded to the model as a key/value pair to the underlying model’s generate method. If the additional input is not supported by the model, an error will be returned.

Schemas:

input_schema = pa.schema([
    pa.field('inputs', pa.string()), # required
    pa.field('return_tensors', pa.bool_()), # optional
    pa.field('return_text', pa.bool_()), # optional
    pa.field('clean_up_tokenization_spaces', pa.bool_()), # optional
    pa.field('src_lang', pa.string()), # optional
    pa.field('tgt_lang', pa.string()), # optional
    # pa.field('extra_field', pa.int64()), # every extra field you specify will be forwarded as a key/value pair
])

output_schema = pa.schema([
    pa.field('translation_text', pa.string()),
])
Wallaroo Framework: Framework.HUGGING_FACE_ZERO_SHOT_CLASSIFICATION

Schemas:

input_schema = pa.schema([
    pa.field('inputs', pa.string()), # required
    pa.field('candidate_labels', pa.list_(pa.string(), list_size=2)), # required
    pa.field('hypothesis_template', pa.string()), # optional
    pa.field('multi_label', pa.bool_()), # optional
])

output_schema = pa.schema([
    pa.field('sequence', pa.string()),
    pa.field('scores', pa.list_(pa.float64(), list_size=2)), # same as number of candidate labels, list_size can be skipped but may result in slightly worse performance
    pa.field('labels', pa.list_(pa.string(), list_size=2)), # same as number of candidate labels, list_size can be skipped but may result in slightly worse performance
])
Wallaroo Framework: Framework.HUGGING_FACE_ZERO_SHOT_IMAGE_CLASSIFICATION

Schemas:

input_schema = pa.schema([
    pa.field('inputs', # required
        pa.list_(
            pa.list_(
                pa.list_(
                    pa.int64(),
                    list_size=3
                ),
                list_size=100
            ),
        list_size=100
    )),
    pa.field('candidate_labels', pa.list_(pa.string(), list_size=2)), # required
    pa.field('hypothesis_template', pa.string()), # optional
]) 

output_schema = pa.schema([
    pa.field('score', pa.list_(pa.float64(), list_size=2)), # same as number of candidate labels
    pa.field('label', pa.list_(pa.string(), list_size=2)), # same as number of candidate labels
])
Wallaroo Framework: Framework.HUGGING_FACE_ZERO_SHOT_OBJECT_DETECTION

Schemas:

input_schema = pa.schema([
    pa.field('images', 
        pa.list_(
            pa.list_(
                pa.list_(
                    pa.int64(),
                    list_size=3
                ),
                list_size=640
            ),
        list_size=480
    )),
    pa.field('candidate_labels', pa.list_(pa.string(), list_size=3)),
    pa.field('threshold', pa.float64()),
    # pa.field('top_k', pa.int64()), # we want the model to return exactly the number of predictions, we shouldn't specify this
])

output_schema = pa.schema([
    pa.field('score', pa.list_(pa.float64())), # variable output, depending on detected objects
    pa.field('label', pa.list_(pa.string())), # variable output, depending on detected objects
    pa.field('box', 
        pa.list_( # dynamic output, i.e. dynamic number of boxes per input image, each sublist contains the 4 box coordinates 
            pa.list_(
                    pa.int64(),
                    list_size=4
                ),
            ),
    ),
])
Wallaroo Framework: Framework.HUGGING_FACE_SENTIMENT_ANALYSIS. Reference: Hugging Face Sentiment Analysis.
Wallaroo Framework: Framework.HUGGING_FACE_TEXT_GENERATION

Any parameter that is not part of the required inputs list will be forwarded to the model as a key/value pair to the underlying model’s generate method. If the additional input is not supported by the model, an error will be returned.

input_schema = pa.schema([
    pa.field('inputs', pa.string()),
    pa.field('return_tensors', pa.bool_()), # optional
    pa.field('return_text', pa.bool_()), # optional
    pa.field('return_full_text', pa.bool_()), # optional
    pa.field('clean_up_tokenization_spaces', pa.bool_()), # optional
    pa.field('prefix', pa.string()), # optional
    pa.field('handle_long_generation', pa.string()), # optional
    # pa.field('extra_field', pa.int64()), # every extra field you specify will be forwarded as a key/value pair
])

output_schema = pa.schema([
    pa.field('generated_text', pa.list_(pa.string(), list_size=1))
])
Wallaroo Framework: Framework.HUGGING_FACE_AUTOMATIC_SPEECH_RECOGNITION

Sample input and output schema.

input_schema = pa.schema([
    pa.field('inputs', pa.list_(pa.float32())), # required: the audio stored in numpy arrays of shape (num_samples,) and data type `float32`
    pa.field('return_timestamps', pa.string()) # optional: return start & end times for each predicted chunk
]) 

output_schema = pa.schema([
    pa.field('text', pa.string()), # required: the output text corresponding to the audio input
    pa.field('chunks', pa.list_(pa.struct([('text', pa.string()), ('timestamp', pa.list_(pa.float32()))]))), # required (if `return_timestamps` is set), start & end times for each predicted chunk
])

2.3.1 - Wallaroo API Upload Tutorial: Hugging Face Zero Shot Classification

How to upload a Hugging Face Zero Shot Classification model to Wallaroo via the MLOps API.

This tutorial can be downloaded as part of the Wallaroo Tutorials repository.

Wallaroo Model Upload via MLOps API: Hugging Face Zero Shot Classification

The following tutorial demonstrates how to upload a Hugging Face Zero Shot model to a Wallaroo instance.

Tutorial Goals

Demonstrate the following:

  • Upload a Hugging Face Zero Shot Model to a Wallaroo instance.
  • Create a pipeline and add the model as a pipeline step.
  • Perform a sample inference.

Prerequisites

  • A Wallaroo version 2023.2.1 or above instance

References

Tutorial Steps

Import Libraries

import json
import os
import requests
import base64

import wallaroo
from wallaroo.pipeline   import Pipeline
from wallaroo.deployment_config import DeploymentConfigBuilder
from wallaroo.framework import Framework

import pyarrow as pa
import numpy as np
import pandas as pd

Connect to Wallaroo

To perform the various Wallaroo MLOps API requests, we will use the Wallaroo SDK to generate the necessary tokens. For details on other methods of requesting and using authentication tokens with the Wallaroo MLOps API, see the Wallaroo API Connection Guide.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). For more information on Wallaroo Client settings, see the Client Connection guide.

wl = wallaroo.Client()

Variables

The following variables will be set for the rest of the tutorial to set the following:

  • Wallaroo Workspace
  • Wallaroo Pipeline
  • Wallaroo Model name and path
  • Wallaroo Model Framework
  • The DNS prefix and suffix for the Wallaroo instance.

To allow this tutorial to be run multiple times or by multiple users in the same Wallaroo instance, a random 4 character suffix will be added to the workspace, pipeline, and model names.

Verify that the DNS prefix and suffix match the Wallaroo instance used for this tutorial. See the DNS Integration Guide for more details.

import string
import random

# make a random 4 character suffix to prevent overwriting other user's workspaces
suffix= ''.join(random.choice(string.ascii_lowercase) for i in range(4))
workspace_name = f'hugging-face-zero-shot-api{suffix}'
pipeline_name = 'hugging-face-zero-shot'
model_name = 'zero-shot-classification'
model_file_name = "./models/model-auto-conversion_hugging-face_dummy-pipelines_zero-shot-classification-pipeline.zip"
framework = "hugging-face-zero-shot-classification"

wallarooPrefix = "YOUR PREFIX."
wallarooPrefix = "YOUR SUFFIX"

APIURL=f"https://{wallarooPrefix}api.{wallarooSuffix}"
APIURL
'https://doc-test.api.wallarooexample.ai'

Create the Workspace

In a production environment, the Wallaroo workspace that contains the pipeline and models would be created and deployed. We will quickly recreate those steps using the MLOps API.

Workspaces are created through the MLOps API with the /v1/api/workspaces/create command. This requires the workspace name be provided, and that the workspace not already exist in the Wallaroo instance.

Reference: MLOps API Create Workspace

# Retrieve the token
headers = wl.auth.auth_header()

# set Content-Type type
headers['Content-Type']='application/json'

# Create workspace
apiRequest = f"{APIURL}/v1/api/workspaces/create"

data = {
  "workspace_name": workspace_name
}

response = requests.post(apiRequest, json=data, headers=headers, verify=True).json()
display(response)
# Stored for future examples
workspaceId = response['workspace_id']
{'workspace_id': 9}

Upload the Model

  • Endpoint:
    • /v1/api/models/upload_and_convert
  • Headers:
    • Content-Type: multipart/form-data
  • Parameters
    • name (String Required): The model name.
    • visibility (String Required): Either public or private.
    • workspace_id (String Required): The numerical ID of the workspace to upload the model to.
    • conversion (String Required): The conversion parameters that include the following:
      • framework (String Required): The framework of the model being uploaded. See the list of supported models for more details.
      • python_version (String Required): The version of Python required for the model.
      • requirements (String Required): Required libraries. Can be [] if the requirements are default Wallaroo JupyterHub libraries.
      • input_schema (String Optional): The input schema from the Apache Arrow pyarrow.lib.Schema format, encoded with base64.b64encode. Only required for non-native runtime models.
      • output_schema (String Optional): The output schema from the Apache Arrow pyarrow.lib.Schema format, encoded with base64.b64encode. Only required for non-native runtime models.

Set the Schemas

The input and output schemas will be defined according to the Wallaroo Hugging Face schema requirements. The inputs are then base64 encoded for attachment in the API request.

input_schema = pa.schema([
    pa.field('inputs', pa.string()), # required
    pa.field('candidate_labels', pa.list_(pa.string(), list_size=2)), # required
    pa.field('hypothesis_template', pa.string()), # optional
    pa.field('multi_label', pa.bool_()), # optional
])

output_schema = pa.schema([
    pa.field('sequence', pa.string()),
    pa.field('scores', pa.list_(pa.float64(), list_size=2)), # same as number of candidate labels, list_size can be skipped but may result in slightly worse performance
    pa.field('labels', pa.list_(pa.string(), list_size=2)), # same as number of candidate labels, list_size can be skipped but may result in slightly worse performance
])
encoded_input_schema = base64.b64encode(
                bytes(input_schema.serialize())
            ).decode("utf8")
encoded_output_schema = base64.b64encode(
                bytes(output_schema.serialize())
            ).decode("utf8")

Build the Request

We will now build the request to include the required data. We will be using the workspaceId returned when we created our workspace in a previous step, specifying the input and output schemas, and the framework.

metadata = {
    "name": model_name,
    "visibility": "private",
    "workspace_id": workspaceId,
    "conversion": {
        "framework": framework,
        "python_version": "3.8",
        "requirements": []
    },
    "input_schema": encoded_input_schema,
    "output_schema": encoded_output_schema,
}

Upload Model API Request

Now we will make our upload and convert request. The returned model ID is stored for the next set of steps.

headers = wl.auth.auth_header()

files = {
    'metadata': (None, json.dumps(metadata), "application/json"),
    'file': (model_name, open(model_file_name,'rb'),'application/octet-stream')
}

response = requests.post(f'{APIURL}/v1/api/models/upload_and_convert', 
                         headers=headers, 
                         files=files)
print(response.json())
{'insert_models': {'returning': [{'models': [{'id': 9}]}]}}
# Get the model details

import time

# Retrieve the token
headers = wl.auth.auth_header()

# set Content-Type type
headers['Content-Type']='application/json'

apiRequest = f"{APIURL}/v1/api/models/list_versions"

# the model's numerical id, taken from the upload_and_convert response above
modelId = response.json()['insert_models']['returning'][0]['models'][0]['id']

data = {
  "model_id": model_name,
  "models_pk_id" : modelId
}

status = None

# poll the model versions until the uploaded version reports ready
while status != 'ready':
    response = requests.post(apiRequest, json=data, headers=headers, verify=True).json()
    # verify we have the right version
    model = next(model for model in response if model["id"] == modelId)
    display(model)
    status = model['status']
    if status != 'ready':
        time.sleep(3)
{'sha': '3dcc14dd925489d4f0a3960e90a7ab5917ab685ce955beca8924aa7bb9a69398',
 'models_pk_id': 7,
 'model_version': '284a2e7c-679a-40cc-9121-2747b81a0228',
 'owner_id': '""',
 'model_id': 'zero-shot-classification',
 'id': 7,
 'file_name': 'zero-shot-classification',
 'image_path': 'proxy.replicated.com/proxy/wallaroo/ghcr.io/wallaroolabs/mlflow-deploy:v2023.3.0-main-3509',
 'status': 'ready'}


Model Upload Complete

With that, the model upload is complete and can be deployed into a Wallaroo pipeline.

2.3.2 - Wallaroo SDK Upload Tutorial: Hugging Face Zero Shot Classification

How to upload a Hugging Face Zero Shot Classification model to Wallaroo via the Wallaroo SDK.

This tutorial can be downloaded as part of the Wallaroo Tutorials repository.

Wallaroo Model Upload via the Wallaroo SDK: Hugging Face Zero Shot Classification

The following tutorial demonstrates how to upload a Hugging Face Zero Shot model to a Wallaroo instance.

Tutorial Goals

Demonstrate the following:

  • Upload a Hugging Face Zero Shot Model to a Wallaroo instance.
  • Create a pipeline and add the model as a pipeline step.
  • Perform a sample inference.

Prerequisites

  • A Wallaroo version 2023.2.1 or above instance.

References

Tutorial Steps

Import Libraries

The first step is to import the libraries we’ll be using. These are included by default in the Wallaroo instance’s JupyterHub service.

import json
import os
os.environ["MODELS_ENABLED"] = "true"

import wallaroo
from wallaroo.pipeline import Pipeline
from wallaroo.deployment_config import DeploymentConfigBuilder
from wallaroo.framework import Framework
from wallaroo.object import EntityNotFoundError

import pyarrow as pa
import numpy as np
import pandas as pd

Open a Connection to Wallaroo

The next step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). If logging in externally, update the wallarooPrefix and wallarooSuffix variables with the proper DNS information. For more information on Wallaroo DNS settings, see the Wallaroo DNS Integration Guide.

wl = wallaroo.Client()

Set Variables and Helper Functions

We’ll set the names of our workspace, pipeline, models and files. Workspace names must be unique across the Wallaroo instance. For this, we’ll add a randomly generated 4-character suffix to the workspace name to prevent collisions with other users’ workspaces. If running this tutorial, we recommend hard coding the workspace name so it will function in the same workspace each time it’s run.

We’ll set up some helper functions that will either use existing workspaces and pipelines, or create them if they do not already exist.

def get_workspace(name):
    workspace = None
    for ws in wl.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = wl.create_workspace(name)
    return workspace

def get_pipeline(name):
    try:
        pipeline = wl.pipelines_by_name(name)[0]
    except EntityNotFoundError:
        pipeline = wl.build_pipeline(name)
    return pipeline

import string
import random

# make a random 4 character suffix to prevent overwriting other users' workspaces
suffix = ''.join(random.choice(string.ascii_lowercase) for i in range(4))

suffix = '' # override the random suffix with a fixed value for this example

workspace_name = f'hf-zero-shot-classification{suffix}'
pipeline_name = f'hf-zero-shot-classification'

model_name = 'hf-zero-shot-classification'
model_file_name = './models/model-auto-conversion_hugging-face_dummy-pipelines_zero-shot-classification-pipeline.zip'

Create Workspace and Pipeline

We will now create the Wallaroo workspace to store our model and set it as the current workspace. Future commands will default to this workspace for pipeline creation, model uploads, etc. We’ll create our Wallaroo pipeline to deploy our model.

workspace = get_workspace(workspace_name)
wl.set_current_workspace(workspace)

pipeline = get_pipeline(pipeline_name)

Configure Data Schemas

The following parameters are required for Hugging Face models. Note that while some fields are considered optional for the upload_model method, they are required for proper uploading of a Hugging Face model to Wallaroo.

Parameter | Type | Description
name | string (Required) | The name of the model. Model names are unique per workspace. Models that are uploaded with the same name are assigned as a new version of the model.
path | string (Required) | The path to the model file being uploaded.
framework | string (Upload Method Optional, Hugging Face model Required) | Set as the appropriate Hugging Face framework; for this tutorial, Framework.HUGGING_FACE_ZERO_SHOT_CLASSIFICATION.
input_schema | pyarrow.lib.Schema (Upload Method Optional, Hugging Face model Required) | The input schema in Apache Arrow schema format.
output_schema | pyarrow.lib.Schema (Upload Method Optional, Hugging Face model Required) | The output schema in Apache Arrow schema format.
convert_wait | bool (Upload Method Optional, Hugging Face model Optional) (Default: True) |
  • True: Waits in the script for the model conversion completion.
  • False: Proceeds with the script without waiting for the model conversion process to display complete.

The input and output schemas will be configured for the data inputs and outputs. More information on the available inputs can be found in the official 🤗 Hugging Face source code.

input_schema = pa.schema([
    pa.field('inputs', pa.string()), # required
    pa.field('candidate_labels', pa.list_(pa.string(), list_size=2)), # required
    pa.field('hypothesis_template', pa.string()), # optional
    pa.field('multi_label', pa.bool_()), # optional
])

output_schema = pa.schema([
    pa.field('sequence', pa.string()),
    pa.field('scores', pa.list_(pa.float64(), list_size=2)), # same as number of candidate labels; list_size can be skipped but may result in slightly worse performance
    pa.field('labels', pa.list_(pa.string(), list_size=2)), # same as number of candidate labels; list_size can be skipped but may result in slightly worse performance
])

Upload Model

The model will be uploaded with the framework set as Framework.HUGGING_FACE_ZERO_SHOT_CLASSIFICATION.

framework=Framework.HUGGING_FACE_ZERO_SHOT_CLASSIFICATION

model = wl.upload_model(model_name,
                        model_file_name,
                        framework=framework,
                        input_schema=input_schema,
                        output_schema=output_schema,
                        convert_wait=True)
model
Waiting for model loading - this will take up to 10.0min.
Model is pending loading to a container runtime..
Model is attempting loading to a container runtime................................................successful

Ready

Name | hf-zero-shot-classification
Version | 6953b047-bfea-424e-9496-348b1f57039f
File Name | model-auto-conversion_hugging-face_dummy-pipelines_zero-shot-classification-pipeline.zip
SHA | 3dcc14dd925489d4f0a3960e90a7ab5917ab685ce955beca8924aa7bb9a69398
Status | ready
Image Path | proxy.replicated.com/proxy/wallaroo/ghcr.io/wallaroolabs/mlflow-deploy:v2023.4.0-main-4005
Architecture | None
Updated At | 2023-20-Oct 15:50:58
model.config().runtime()
'flight'

Deploy Pipeline

The model is uploaded and ready for use. We’ll add it as a step in our pipeline, then deploy the pipeline. For this example we’re allocating 0.25 CPU and 1 Gi RAM to the pipeline through the pipeline’s deployment configuration.

deployment_config = DeploymentConfigBuilder() \
    .cpus(0.25).memory('1Gi') \
    .build()
pipeline = get_pipeline(pipeline_name)
# clear the pipeline if used previously
pipeline.undeploy()
pipeline.clear()
pipeline.add_model_step(model)

pipeline.deploy(deployment_config=deployment_config)
pipeline.status()

Run Inference

A sample inference will be run. First the pandas DataFrame used for the inference is created, then the inference run through the pipeline’s infer method.

input_data = {
        "inputs": ["this is a test", "this is another test"], # required
        "candidate_labels": [["english", "german"], ["english", "german"]], # required, and must match the list_size in the input schema
        "hypothesis_template": ["This example is {}.", "This example is {}."], # optional: using the defaults, similar to not passing this parameter
        "multi_label": [False, False], # optional: using the defaults, similar to not passing this parameter
}
dataframe = pd.DataFrame(input_data)
dataframe
 | inputs | candidate_labels | hypothesis_template | multi_label
0 | this is a test | [english, german] | This example is {}. | False
1 | this is another test | [english, german] | This example is {}. | False
%time
pipeline.infer(dataframe)
CPU times: user 2 µs, sys: 0 ns, total: 2 µs
Wall time: 5.48 µs
 | time | in.candidate_labels | in.hypothesis_template | in.inputs | in.multi_label | out.labels | out.scores | out.sequence | check_failures
0 | 2023-10-20 15:52:07.129 | [english, german] | This example is {}. | this is a test | False | [english, german] | [0.504054605960846, 0.49594545364379883] | this is a test | 0
1 | 2023-10-20 15:52:07.129 | [english, german] | This example is {}. | this is another test | False | [english, german] | [0.5037839412689209, 0.4962160289287567] | this is another test | 0

Undeploy Pipelines

With the tutorial complete, the pipeline is undeployed to return the resources back to the cluster.

pipeline.undeploy()
Waiting for undeployment - this will take up to 45s ........................................ ok
name | hf-zero-shot-classification
created | 2023-10-20 15:46:51.341680+00:00
last_updated | 2023-10-20 15:51:04.118540+00:00
deployed | False
tags |
versions | d63a6bd8-7bf0-4af1-a22a-ca19c72bde52, 66b81ce8-8ab5-4634-8e6b-5534f328ef34
steps | hf-zero-shot-classification
published | False

2.4 - Wallaroo SDK Upload Tutorials: Python Step Shape

How to upload different Python Step Shape models to Wallaroo.

The following tutorials cover how to upload sample Python Step Shape models into a Wallaroo instance.

Python scripts are uploaded to Wallaroo and treated like ML models in pipeline steps. These will be referred to as Python steps.

Python steps can include:

  • Preprocessing steps to prepare the received data to be handed to an ML model deployed as another pipeline step.
  • Postprocessing steps to take data output by an ML model as part of a pipeline step, and prepare the data to be received by some other data store or entity.
  • A model contained within a Python script.

In all of these, the requirements for uploading a Python script as an ML model in Wallaroo are the same.

Parameter | Description
Web Site | https://www.python.org/
Supported Libraries | python==3.8
Framework | Framework.PYTHON aka python
Runtime | Native aka python

Python models uploaded to Wallaroo are executed as a native runtime.

Note that Python models - aka “Python steps” - are standalone Python scripts that use the Python libraries natively supported by the Wallaroo platform. These are used for either simple model deployment (such as ARIMA Statsmodels), or data formatting such as the postprocessing steps. A Wallaroo Python model is composed of one Python script that matches the Wallaroo requirements.

This is contrasted with Arbitrary Python models, also known as Bring Your Own Predict (BYOP), which allow for custom model deployments with supporting scripts and artifacts. These are used with pre-trained models (PyTorch, Tensorflow, etc) along with whatever supporting artifacts they require. Supporting artifacts can include other Python modules, model files, etc. These are zipped with all scripts, artifacts, and a requirements.txt file that indicates what other Python modules need to be imported from outside of the typical Wallaroo platform.

Python Models Requirements

Python models uploaded to Wallaroo are Python scripts that must include the wallaroo_json method as the entry point for the Wallaroo engine to use it as a Pipeline step.

This method receives the results of the previous Pipeline step, and its return value will be used in the next Pipeline step.

If the Python model is the first step in the pipeline, then it will be receiving the inference request data (for example: a preprocessing step). If it is the last step in the pipeline, then it will be the data returned from the inference request.
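For illustration, a hypothetical preprocessing first step might scale the values in an incoming tensor column before they reach the model. This sketch is not part of the tutorial's sample models:

import pandas as pd

# hypothetical preprocessing step: scale each incoming `tensor` row before
# it is handed to the ML model in the next pipeline step
def wallaroo_json(data: pd.DataFrame):
    scaled = [[value / 100.0 for value in row] for row in data["tensor"]]
    return [{"tensor": row} for row in scaled]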

In the example below, the Python model is used as a postprocessing step for another ML model. The Python model expects to receive data from an ML model whose output is a DataFrame with the column dense_2. It then extracts the values of that column as a list, selects the first element, and returns a DataFrame with that element as the value of the column output.

import pandas as pd

# take the DataFrame output of the previous model step, extract the first
# `dense_2` value, and return it as the `output` field
def wallaroo_json(data: pd.DataFrame):
    print(data)
    return [{"output": [data["dense_2"].to_list()[0][0]]}]

In line with other Wallaroo inference results, the outputs of a Python step that returns a pandas DataFrame or Arrow Table will be listed in the out. metadata, with all inference outputs listed as out.{variable 1}, out.{variable 2}, etc. In the example above, this results in the output field appearing as the out.output field in the Wallaroo inference result.

 | time | in.tensor | out.output | check_failures
0 | 2023-06-20 20:23:28.395 | [0.6878518042, 0.1760734021, -0.869514083, 0.3.. | [12.886651039123535] | 0

2.4.1 - Python Model Shape Upload to Wallaroo with the Wallaroo SDK

How to upload a Python Step Shape model to Wallaroo via the SDK

This tutorial can be downloaded as part of the Wallaroo Tutorials repository.

Python Model Upload to Wallaroo

Python scripts can be deployed to Wallaroo as Python Models. These are treated like other models, and are used for:

  • ML Models: Models written entirely in Python script.
  • Data Formatting: Typically preprocessing or postprocessing modules that shape incoming data into what an ML model expects, or that take data output by an ML model and change it for other processes to accept.

Models are added to Wallaroo pipelines as pipeline steps, with the data from the previous step submitted to the next one. Python steps require the entry method wallaroo_json. These methods should be structured to receive and send pandas DataFrames as the inputs and outputs.

This allows inference requests to a Wallaroo pipeline to receive pandas DataFrames or Apache Arrow tables, and return the same for consistent results.
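For example, the same inference can be submitted as an Apache Arrow table. The sketch below assumes a deployed pipeline like the one built later in this tutorial:

import pandas as pd
import pyarrow as pa

# hypothetical input: one row with the column the Python step expects
dataframe = pd.DataFrame({"dense_2": [12.886651]})
arrow_table = pa.Table.from_pandas(dataframe)

# pipeline.infer accepts either a pandas DataFrame or an Arrow table and
# returns the results in the matching format
results = pipeline.infer(arrow_table)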

This tutorial will:

  • Create a Wallaroo workspace and pipeline.
  • Upload the sample Python model and ONNX model.
  • Demonstrate the outputs of the ONNX model to an inference request.
  • Demonstrate the functionality of the Python model in reshaping data after an inference request.
  • Use both the ONNX model and the Python model together as pipeline steps to perform an inference request and export the data for use.

Prerequisites

  • A Wallaroo version 2023.2.1 or above instance.

References

Tutorial Steps

Import Libraries

We’ll start with importing the libraries we need for the tutorial. The main libraries used are:

  • Wallaroo: To connect with the Wallaroo instance and perform the MLOps commands.
  • pyarrow: Used for formatting the data.
  • pandas: Used for pandas DataFrame tables.

import wallaroo
from wallaroo.object import EntityNotFoundError
from wallaroo.framework import Framework
from wallaroo.deployment_config import DeploymentConfigBuilder

import pandas as pd

import pyarrow as pa

Connect to the Wallaroo Instance through the User Interface

The next step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). For more information on Wallaroo Client settings, see the Client Connection guide.

# Login through local Wallaroo instance

wl = wallaroo.Client()

Set Variables and Helper Functions

We’ll set the names of our workspace, pipeline, models and files. Workspace names must be unique across the Wallaroo instance. For this, we’ll add a randomly generated 4-character suffix to the workspace name to prevent collisions with other users’ workspaces. If running this tutorial, we recommend hard coding the workspace name so it will function in the same workspace each time it’s run.

We’ll set up some helper functions that will either use existing workspaces and pipelines, or create them if they do not already exist.

import string
import random

# make a random 4 character suffix to prevent overwriting other users' workspaces
suffix = ''.join(random.choice(string.ascii_lowercase) for i in range(4))
suffix = 'jch' # override the random suffix with a fixed value for this example
workspace_name = f'python-demo{suffix}'
pipeline_name = f'python-step-demo-pipeline'

onnx_model_name = 'house-price-sample'
onnx_model_file_name = './models/house_price_keras.onnx'
python_model_name = 'python-step'
python_model_file_name = './models/step.py'

def get_workspace(name):
    workspace = None
    for ws in wl.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = wl.create_workspace(name)
    return workspace

def get_pipeline(name):
    try:
        pipeline = wl.pipelines_by_name(name)[0]
    except EntityNotFoundError:
        pipeline = wl.build_pipeline(name)
    return pipeline

Create a New Workspace

For our tutorial, we’ll create the workspace, set it as the current workspace, then the pipeline we’ll add our models to.

Create New Workspace References

workspace = get_workspace(workspace_name)

wl.set_current_workspace(workspace)

pipeline = get_pipeline(pipeline_name)
pipeline

Model Descriptions

We have two models we’ll be using.

  • ./models/house_price_keras.onnx: A ML model trained to forecast house prices based on inputs. This forecast is stored in the column dense_2.
  • ./models/step.py: A Python script that accepts the data from the house price model, and reformats the output. We’ll be using it as a post-processing step.

For the Python step, it contains the method wallaroo_json as the entry point used by Wallaroo when deployed as a pipeline step. Our sample script has the following:

# take a dataframe output of the house price model, and reformat the `dense_2`
# column as `output`
def wallaroo_json(data: pd.DataFrame):
    print(data)
    return [{"output": [data["dense_2"].to_list()[0][0]]}]

As seen from the description, all this function does is take the DataFrame output of the house price model and return a DataFrame with the first element of the dense_2 list as the value of the column output.

Upload Models

Both of these models will be uploaded to our current workspace using the method upload_model(name, path, framework).configure(framework, input_schema, output_schema).

  • For ./models/house_price_keras.onnx, we will specify it as Framework.ONNX. We do not need to specify the input and output schemas.
  • For ./models/step.py, we will set the input and output schemas in the required pyarrow.lib.Schema format.

Upload Model References

house_price_model = (wl.upload_model(onnx_model_name, 
                                    onnx_model_file_name, 
                                    framework=Framework.ONNX)
                                    .configure('onnx', 
                                               tensor_fields=["tensor"]
                                              )
                    )

input_schema = pa.schema([
    pa.field('dense_2', pa.list_(pa.float64()))
])
output_schema = pa.schema([
    pa.field('output', pa.list_(pa.float64()))
])

step = (wl.upload_model(python_model_name, 
                        python_model_file_name, 
                        framework=Framework.PYTHON)
                        .configure(
                            'python', 
                            input_schema=input_schema, 
                            output_schema=output_schema
                        )
        )

Pipeline Steps

With our models uploaded, we’ll perform different configurations of the pipeline steps.

First we’ll add just the house price model to the pipeline, deploy it, and submit a sample inference.

# used to restrict the resources needed for this demonstration
deployment_config = DeploymentConfigBuilder() \
    .cpus(0.25).memory('1Gi') \
    .build()
# clear the pipeline if this tutorial was run before
pipeline.undeploy()
pipeline.clear()

pipeline.add_model_step(house_price_model).deploy(deployment_config=deployment_config)
## sample inference data

data = pd.DataFrame.from_dict({"tensor": [[0.6878518042239091,
                                            0.17607340208535074,
                                            -0.8695140830357148,
                                            0.34638762962802144,
                                            -0.0916270832672289,
                                            -0.022063226781124278,
                                            -0.13969884765926363,
                                            1.002792335666138,
                                            -0.3067449033633758,
                                            0.9272000630461978,
                                            0.28326687982544635,
                                            0.35935375728372815,
                                            -0.682562654045523,
                                            0.532642794275658,
                                            -0.22705189652659302,
                                            0.5743846356405602,
                                            -0.18805086358065454
                                            ]]})

results = pipeline.infer(data)
display(results)

Inference with Pipeline Step

Our inference result has the results in the out.dense_2 column. We’ll clear the pipeline, then add the Python postprocessing step we’ve created as the only pipeline step. For our inference request, we’ll submit just the output of the house price model. Our result should be the first element in the array returned, in the out.output column.

pipeline.clear()
pipeline.add_model_step(step)

pipeline.deploy(deployment_config=deployment_config)
data = pd.DataFrame.from_dict({"dense_2": [12.886651]})
python_result = pipeline.infer(data)
display(python_result)

Putting Both Models Together

Now we’ll do one last pipeline deployment with 2 steps:

  • First, the house price model that outputs the inference result into dense_2.
  • Second, the Python step that accepts the output of the house price model and reshapes it into output.

import datetime
inference_start = datetime.datetime.now()

pipeline.undeploy()
pipeline.clear()
pipeline.add_model_step(house_price_model)
pipeline.add_model_step(step)
pipeline.deploy(deployment_config=deployment_config)
pipeline.status()
data = pd.DataFrame.from_dict({"tensor": [[0.6878518042239091,
                                            0.17607340208535074,
                                            -0.8695140830357148,
                                            0.34638762962802144,
                                            -0.0916270832672289,
                                            -0.022063226781124278,
                                            -0.13969884765926363,
                                            1.002792335666138,
                                            -0.3067449033633758,
                                            0.9272000630461978,
                                            0.28326687982544635,
                                            0.35935375728372815,
                                            -0.682562654045523,
                                            0.532642794275658,
                                            -0.22705189652659302,
                                            0.5743846356405602,
                                            -0.18805086358065454
                                        ]]})

results = pipeline.infer(data)
display(results)

Pipeline Logs

As the data was exported by the pipeline step as a pandas DataFrame, it will be reflected in the pipeline logs. We’ll retrieve the most recent log from our most recent inference.

inference_end = datetime.datetime.now()

pipeline.logs(start_datetime=inference_start, end_datetime=inference_end)

Undeploy the Pipeline

With our tutorial complete, we’ll undeploy the pipeline and return the resources back to the cluster.

This process demonstrated how to structure a postprocessing Python script as a Wallaroo pipeline step. This can be used for pre- or post-processing, Python-based models, and other use cases.

pipeline.undeploy()

2.5 - Wallaroo SDK Upload Tutorials: Pytorch

How to upload different Pytorch models to Wallaroo.

The following tutorials cover how to upload sample Pytorch models.

Parameter | Description
Web Site | https://pytorch.org/
Supported Libraries |
  • torch==1.13.1
  • torchvision==0.14.1
Framework | Framework.PYTORCH aka pytorch
Supported File Types | pt or pth in TorchScript format
Runtime | onnx / flight
  • IMPORTANT NOTE: The PyTorch model must be in TorchScript format. Scripting (i.e. torch.jit.script(), sketched below) is always recommended over tracing (i.e. torch.jit.trace()). From the PyTorch documentation: “Scripting preserves dynamic control flow and is valid for inputs of different sizes.” For more details, see TorchScript-based ONNX Exporter: Tracing vs Scripting.
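A minimal scripting sketch follows; TinyModel is a hypothetical module used only to illustrate exporting a TorchScript file for upload:

import torch
import torch.nn as nn

class TinyModel(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # data-dependent control flow like this is preserved by scripting,
        # but would be frozen to one branch by tracing
        if x.sum() > 0:
            return x * 2
        return x

scripted = torch.jit.script(TinyModel())  # compile with scripting
scripted.save("./models/tiny_model.pt")   # hypothetical path for the upload step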

During the model upload process, the Wallaroo instance will attempt to convert the model to a Native Wallaroo Runtime. If unsuccessful, it will create a Wallaroo Containerized Runtime for the model. See the model deployment section for details on how to configure pipeline resources based on the model’s runtime.

  • IMPORTANT CONFIGURATION NOTE: For PyTorch input schemas, the floats must be pyarrow.float32() for the PyTorch model to be converted to the Native Wallaroo Runtime during the upload process.
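For instance, the following sketch contrasts a schema that would allow the native runtime conversion with one that would not:

import pyarrow as pa

# pyarrow.float32() fields allow the PyTorch model to convert to the
# Native Wallaroo Runtime during upload
native_input_schema = pa.schema([
    pa.field('input', pa.list_(pa.float32(), list_size=10))
])

# pyarrow.float64() fields would prevent the native conversion, so the model
# would fall back to a Wallaroo Containerized Runtime
containerized_input_schema = pa.schema([
    pa.field('input', pa.list_(pa.float64(), list_size=10))
])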

2.5.1 - Wallaroo SDK Upload Tutorial: Pytorch Single IO

How to upload a Pytorch model with single input/output to Wallaroo

This tutorial can be downloaded as part of the Wallaroo Tutorials repository.

Wallaroo Model Upload via the Wallaroo SDK: Pytorch Single Input Output

The following tutorial demonstrates how to upload a Pytorch Single Input Output model to a Wallaroo instance.

Tutorial Goals

Demonstrate the following:

  • Upload a Pytorch Single Input Output to a Wallaroo instance.
  • Create a pipeline and add the model as a pipeline step.
  • Perform a sample inference.

Prerequisites

  • A Wallaroo version 2023.2.1 or above instance.

References

Tutorial Steps

Import Libraries

The first step is to import the libraries we’ll be using. These are included by default in the Wallaroo instance’s JupyterHub service.

import json
import os
import pickle

import wallaroo
from wallaroo.pipeline import Pipeline
from wallaroo.deployment_config import DeploymentConfigBuilder
from wallaroo.object import EntityNotFoundError
from wallaroo.framework import Framework

import pyarrow as pa
import numpy as np
import pandas as pd

Open a Connection to Wallaroo

The next step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). If logging in externally, update the wallarooPrefix and wallarooSuffix variables with the proper DNS information. For more information on Wallaroo DNS settings, see the Wallaroo DNS Integration Guide.

wl = wallaroo.Client()

Set Variables and Helper Functions

We’ll set the names of our workspace, pipeline, models and files. Workspace names must be unique across the Wallaroo instance. For this, we’ll add a randomly generated 4-character suffix to the workspace name to prevent collisions with other users’ workspaces. If running this tutorial, we recommend hard coding the workspace name so it will function in the same workspace each time it’s run.

We’ll set up some helper functions that will either use existing workspaces and pipelines, or create them if they do not already exist.

def get_workspace(name):
    workspace = None
    for ws in wl.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = wl.create_workspace(name)
    return workspace

def get_pipeline(name):
    try:
        pipeline = wl.pipelines_by_name(name)[0]
    except EntityNotFoundError:
        pipeline = wl.build_pipeline(name)
    return pipeline

import string
import random

# make a random 4 character suffix to prevent overwriting other users' workspaces
suffix = ''.join(random.choice(string.ascii_lowercase) for i in range(4))

suffix = '' # override the random suffix with a fixed value for this example
workspace_name = f'pytorch-single-io{suffix}'
pipeline_name = 'pytorch-single-io'

model_name = 'pytorch-single-io'
model_file_name = "./models/model-auto-conversion_pytorch_single_io_model.pt"

Create Workspace and Pipeline

We will now create the Wallaroo workspace to store our model and set it as the current workspace. Future commands will default to this workspace for pipeline creation, model uploads, etc. We’ll create our Wallaroo pipeline to deploy our model.

workspace = get_workspace(workspace_name)
wl.set_current_workspace(workspace)

pipeline = get_pipeline(pipeline_name)

Configure Data Schemas

The following parameters are required for PyTorch models. Note that while some fields are considered optional for the upload_model method, they are required for proper uploading of a PyTorch model to Wallaroo.

Parameter | Type | Description
name | string (Required) | The name of the model. Model names are unique per workspace. Models that are uploaded with the same name are assigned as a new version of the model.
path | string (Required) | The path to the model file being uploaded.
framework | string (Upload Method Optional, PyTorch model Required) | Set as Framework.PYTORCH.
input_schema | pyarrow.lib.Schema (Upload Method Optional, PyTorch model Required) | The input schema in Apache Arrow schema format. Note that float values must be pyarrow.float32().
output_schema | pyarrow.lib.Schema (Upload Method Optional, PyTorch model Required) | The output schema in Apache Arrow schema format. Note that float values must be pyarrow.float32().
convert_wait | bool (Upload Method Optional, PyTorch model Optional) (Default: True) |
  • True: Waits in the script for the model conversion completion.
  • False: Proceeds with the script without waiting for the model conversion process to display complete.

Once the upload process starts, the model is containerized by the Wallaroo instance. This process may take up to 10 minutes.

input_schema = pa.schema([
    pa.field('input', pa.list_(pa.float32(), list_size=10))
])
output_schema = pa.schema([
    pa.field('output', pa.list_(pa.float32(), list_size=1))
])

Upload Model

The model will be uploaded with the framework set as Framework.PYTORCH.

model = wl.upload_model(model_name, 
                        model_file_name,
                        framework=Framework.PYTORCH, 
                        input_schema=input_schema, 
                        output_schema=output_schema
                       )
model
Waiting for model loading - this will take up to 10.0min.
Model is pending loading to a native runtime..
Ready
Name | pytorch-single-io
Version | 7e8be792-3b25-4be0-b0b8-e687e501f8df
File Name | model-auto-conversion_pytorch_single_io_model.pt
SHA | 23bdbafc51c3df7ac84e5f8b2833c592d7da2b27715a7da3e45bf732ea85b8bb
Status | ready
Image Path | None
Architecture | None
Updated At | 2023-23-Oct 19:14:00
model.config().runtime()
'onnx'

Deploy Pipeline

The model is uploaded and ready for use. We’ll add it as a step in our pipeline, then deploy the pipeline. For this example we’re allocating 0.25 CPU and 1 Gi RAM to the pipeline through the pipeline’s deployment configuration.

deployment_config = DeploymentConfigBuilder() \
    .cpus(0.25).memory('1Gi') \
    .build()
# clear the pipeline if it was used before
pipeline.undeploy()
pipeline.clear()

pipeline.add_model_step(model)

pipeline.deploy(deployment_config=deployment_config)
 ok
Waiting for deployment - this will take up to 45s ................................... ok
name | pytorch-single-io
created | 2023-10-19 21:44:22.896716+00:00
last_updated | 2023-10-23 19:14:02.453910+00:00
deployed | True
tags |
versions | 8221fbd5-2dc6-4761-82a7-1744aa05d81a, 0f430725-0252-4585-80ea-ec0e2028316d, 64edbfa5-d1bb-44e2-beb2-c6ee87d1a4d9
steps | pytorch-single-io
published | False

Run Inference

A sample inference will be run. First the pandas DataFrame used for the inference is created, then the inference run through the pipeline’s infer method.

mock_inference_data = np.random.rand(10, 10)
mock_dataframe = pd.DataFrame({"input": mock_inference_data.tolist()})
pipeline.infer(mock_dataframe)
 | time | in.input | out.output | check_failures
0 | 2023-10-23 19:14:39.424 | [0.4474957532, 0.1724440529, 0.9187094437, 0.7... | [-0.07964326] | 0
1 | 2023-10-23 19:14:39.424 | [0.4042578849, 0.0492679987, 0.8344617226, 0.0... | [0.04861839] | 0
2 | 2023-10-23 19:14:39.424 | [0.5826077484, 0.1242290539, 0.8257548413, 0.2... | [-0.040291503] | 0
3 | 2023-10-23 19:14:39.424 | [0.5482710709, 0.8289840296, 0.4642728375, 0.8... | [-0.1577661] | 0
4 | 2023-10-23 19:14:39.424 | [0.7409150425, 0.093049813, 0.7461292107, 0.70... | [-0.07514663] | 0
5 | 2023-10-23 19:14:39.424 | [0.1703855753, 0.7154656468, 0.7717122989, 0.5... | [-0.027950048] | 0
6 | 2023-10-23 19:14:39.424 | [0.8858337747, 0.3015204777, 0.5810049274, 0.9... | [-0.20391503] | 0
7 | 2023-10-23 19:14:39.424 | [0.0857726256, 0.17188405, 0.1614009234, 0.095... | [-0.058097966] | 0
8 | 2023-10-23 19:14:39.424 | [0.0874623336, 0.1636057129, 0.5464519563, 0.0... | [-0.027925014] | 0
9 | 2023-10-23 19:14:39.424 | [0.4850890686, 0.7002499257, 0.6851349698, 0.0... | [0.088012084] | 0

Undeploy Pipelines

With the tutorial complete, the pipeline is undeployed to return the resources back to the cluster.

pipeline.undeploy()
Waiting for undeployment - this will take up to 45s .................................... ok
name | pytorch-single-io
created | 2023-10-19 21:44:22.896716+00:00
last_updated | 2023-10-23 19:14:02.453910+00:00
deployed | False
tags |
versions | 8221fbd5-2dc6-4761-82a7-1744aa05d81a, 0f430725-0252-4585-80ea-ec0e2028316d, 64edbfa5-d1bb-44e2-beb2-c6ee87d1a4d9
steps | pytorch-single-io
published | False

2.5.2 - Wallaroo SDK Upload Tutorial: Pytorch Multiple IO

How to upload a Pytorch model with Multiple input/output to Wallaroo

This tutorial can be downloaded as part of the Wallaroo Tutorials repository.

Wallaroo Model Upload via the Wallaroo SDK: Pytorch Multiple Input Output

The following tutorial demonstrates how to upload a Pytorch Multiple Input Output model to a Wallaroo instance.

Tutorial Goals

Demonstrate the following:

  • Upload a Pytorch Multiple Input Output to a Wallaroo instance.
  • Create a pipeline and add the model as a pipeline step.
  • Perform a sample inference.

Prerequisites

  • A Wallaroo version 2023.2.1 or above instance.

References

Tutorial Steps

Import Libraries

The first step is to import the libraries we’ll be using. These are included by default in the Wallaroo instance’s JupyterHub service.

import json
import os
import pickle

import wallaroo
from wallaroo.pipeline import Pipeline
from wallaroo.deployment_config import DeploymentConfigBuilder
from wallaroo.object import EntityNotFoundError
from wallaroo.framework import Framework

import pyarrow as pa
import numpy as np
import pandas as pd

Open a Connection to Wallaroo

The next step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). If logging in externally, update the wallarooPrefix and wallarooSuffix variables with the proper DNS information. For more information on Wallaroo DNS settings, see the Wallaroo DNS Integration Guide.


wl = wallaroo.Client()

Set Variables and Helper Functions

We’ll set the names of our workspace, pipeline, models and files. Workspace names must be unique across the Wallaroo instance. For this, we’ll add a randomly generated 4-character suffix to the workspace name to prevent collisions with other users’ workspaces. If running this tutorial, we recommend hard coding the workspace name so it will function in the same workspace each time it’s run.

We’ll set up some helper functions that will either use existing workspaces and pipelines, or create them if they do not already exist.

def get_workspace(name):
    workspace = None
    for ws in wl.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = wl.create_workspace(name)
    return workspace

def get_pipeline(name):
    try:
        pipeline = wl.pipelines_by_name(name)[0]
    except EntityNotFoundError:
        pipeline = wl.build_pipeline(name)
    return pipeline

import string
import random

# make a random 4 character suffix to prevent overwriting other users' workspaces
suffix = ''.join(random.choice(string.ascii_lowercase) for i in range(4))

suffix = '' # override the random suffix with a fixed value for this example

workspace_name = f'pytorch-multi-io{suffix}'
pipeline_name = f'pytorch-multi-io'

model_name = 'pytorch-multi-io'
model_file_name = "./models/model-auto-conversion_pytorch_multi_io_model.pt"

Create Workspace and Pipeline

We will now create the Wallaroo workspace to store our model and set it as the current workspace. Future commands will default to this workspace for pipeline creation, model uploads, etc. We’ll create our Wallaroo pipeline to deploy our model.

workspace = get_workspace(workspace_name)
wl.set_current_workspace(workspace)

pipeline = get_pipeline(pipeline_name)

Configure Data Schemas

The following parameters are required for PyTorch models. Note that while some fields are considered optional for the upload_model method, they are required for proper uploading of a PyTorch model to Wallaroo.

Parameter | Type | Description
name | string (Required) | The name of the model. Model names are unique per workspace. Models that are uploaded with the same name are assigned as a new version of the model.
path | string (Required) | The path to the model file being uploaded.
framework | string (Upload Method Optional, PyTorch model Required) | Set as Framework.PYTORCH.
input_schema | pyarrow.lib.Schema (Upload Method Optional, PyTorch model Required) | The input schema in Apache Arrow schema format. Note that float values must be pyarrow.float32().
output_schema | pyarrow.lib.Schema (Upload Method Optional, PyTorch model Required) | The output schema in Apache Arrow schema format. Note that float values must be pyarrow.float32().
convert_wait | bool (Upload Method Optional, PyTorch model Optional) (Default: True) |
  • True: Waits in the script for the model conversion completion.
  • False: Proceeds with the script without waiting for the model conversion process to display complete.

Once the upload process starts, the model is containerized by the Wallaroo instance. This process may take up to 10 minutes.

input_schema = pa.schema([
    pa.field('input_1', pa.list_(pa.float32(), list_size=10)),
    pa.field('input_2', pa.list_(pa.float32(), list_size=5))
])
output_schema = pa.schema([
    pa.field('output_1', pa.list_(pa.float32(), list_size=3)),
    pa.field('output_2', pa.list_(pa.float32(), list_size=2))
])

Upload Model

The model will be uploaded with the framework set as Framework.PYTORCH.

model = wl.upload_model(model_name, 
                        model_file_name, 
                        framework=Framework.PYTORCH, 
                        input_schema=input_schema, 
                        output_schema=output_schema
                       )
model
Waiting for model loading - this will take up to 10.0min.
Model is pending loading to a native runtime..
Ready
Name | pytorch-multi-io
Version | f8df148e-a006-42c5-ac99-796f115897b8
File Name | model-auto-conversion_pytorch_multi_io_model.pt
SHA | 792db9ee9f41aded3c1d4705f50ccdedd21cafb8b6232c03e4a849b6da1050a8
Status | ready
Image Path | None
Architecture | None
Updated At | 2023-23-Oct 19:08:41
model.config().runtime()
'onnx'

Deploy Pipeline

The model is uploaded and ready for use. We’ll add it as a step in our pipeline, then deploy the pipeline. For this example we’re allocating 0.25 CPU and 1 Gi RAM to the pipeline through the pipeline’s deployment configuration.

deployment_config = DeploymentConfigBuilder() \
    .cpus(0.25).memory('1Gi') \
    .build()
# clear the pipeline if it was used before
pipeline.clear()

pipeline.add_model_step(model)

pipeline.deploy(deployment_config=deployment_config)
pipeline.status()
Waiting for deployment - this will take up to 45s .................................. ok

{'status': 'Running',
 'details': [],
 'engines': [{'ip': '10.244.3.164',
   'name': 'engine-8549d6985f-qfgsp',
   'status': 'Running',
   'reason': None,
   'details': [],
   'pipeline_statuses': {'pipelines': [{'id': 'pytorch-multi-io',
      'status': 'Running'}]},
   'model_statuses': {'models': [{'name': 'pytorch-multi-io',
      'version': 'f8df148e-a006-42c5-ac99-796f115897b8',
      'sha': '792db9ee9f41aded3c1d4705f50ccdedd21cafb8b6232c03e4a849b6da1050a8',
      'status': 'Running'}]}}],
 'engine_lbs': [{'ip': '10.244.2.215',
   'name': 'engine-lb-584f54c899-n8t26',
   'status': 'Running',
   'reason': None,
   'details': []}],
 'sidekicks': []}

Run Inference

A sample inference will be run. First the pandas DataFrame used for the inference is created, then the inference run through the pipeline’s infer method.

mock_inference_data = [np.random.rand(10, 10), np.random.rand(10, 5)]
mock_dataframe = pd.DataFrame(
    {
        "input_1": mock_inference_data[0].tolist(),
        "input_2": mock_inference_data[1].tolist(),
    }
)
pipeline.infer(mock_dataframe)
 | time | in.input_1 | in.input_2 | out.output_1 | out.output_2 | check_failures
0 | 2023-10-23 19:09:20.035 | [0.5520269716, 0.353031825, 0.5972010785, 0.77... | [0.2850280321, 0.8368284642, 0.532692657, 0.53... | [-0.12692596, -0.048615545, 0.16396174] | [0.037088655, -0.07631089] | 0
1 | 2023-10-23 19:09:20.035 | [0.665654034, 0.2721328048, 0.611313055, 0.742... | [0.2229083231, 0.0462179945, 0.6249161412, 0.4... | [0.05110594, -0.0646694, 0.26961502] | [0.14888486, -0.011880934] | 0
2 | 2023-10-23 19:09:20.035 | [0.7482700902, 0.867766789, 0.7562958282, 0.14... | [0.5123290316, 0.619602395, 0.6079586226, 0.67... | [0.025596283, -0.04604797, 0.33752537] | [0.20393606, 0.020352483] | 0
3 | 2023-10-23 19:09:20.035 | [0.7405163748, 0.8286719088, 0.165741416, 0.89... | [0.3027072867, 0.3416387734, 0.2969483802, 0.1... | [0.007636443, -0.09208804, 0.23095742] | [0.09487003, 0.08022867] | 0
4 | 2023-10-23 19:09:20.035 | [0.1383739731, 0.1535340384, 0.5779792315, 0.0... | [0.3376164512, 0.969855208, 0.1542470748, 0.25... | [-0.14555773, 0.1504708, 0.07537982] | [0.049346283, -0.15848272] | 0
5 | 2023-10-23 19:09:20.035 | [0.6635943228, 0.1642769142, 0.5543163791, 0.4... | [0.822826152, 0.4580905361, 0.9199056247, 0.64... | [-0.018602632, -0.057865076, 0.31042594] | [-0.028693462, -0.06809359] | 0
6 | 2023-10-23 19:09:20.035 | [0.4162411861, 0.0306481021, 0.9126412322, 0.8... | [0.8228113533, 0.351936158, 0.5182544133, 0.74... | [0.030514084, -0.063929394, 0.24837358] | [-0.05689506, 0.047190517] | 0
7 | 2023-10-23 19:09:20.035 | [0.5263343174, 0.3983799972, 0.6999939214, 0.6... | [0.3230482528, 0.2033379246, 0.6991399275, 0.0... | [-0.008261979, 0.06275801, 0.16552104] | [0.15914191, 0.044295207] | 0
8 | 2023-10-23 19:09:20.035 | [0.5231957471, 0.2019007135, 0.7899041536, 0.2... | [0.633605919, 0.1939292389, 0.0518512061, 0.28... | [0.0014905855, 0.072615415, 0.21212187] | [0.16694427, -0.0021356493] | 0
9 | 2023-10-23 19:09:20.035 | [0.0287641209, 0.9260014076, 0.540519311, 0.10... | [0.9722058029, 0.8047130284, 0.671538585, 0.61... | [-0.07756777, -0.11604313, 0.25187707] | [-0.014747229, 0.03463669] | 0

Undeploy Pipelines

With the tutorial complete, the pipeline is undeployed to return the resources back to the cluster.

pipeline.undeploy()
Waiting for undeployment - this will take up to 45s .................................... ok
name | pytorch-multi-io
created | 2023-10-19 21:47:11.881873+00:00
last_updated | 2023-10-23 19:08:43.867731+00:00
deployed | False
tags |
versions | 75b823f2-7f60-4423-ae54-3a52c0de67c4, 39168af2-bd41-49f8-8f5e-b1aba608ee68, d543d8ac-ba93-4dd5-a8d4-8bd2b405eb18, c55a8525-f108-404a-b0da-2f78f2ea2e34, c0358f0b-22b1-431f-8eee-73eef2ce8bbc
steps | pytorch-multi-io
published | False

2.6 - Wallaroo SDK Upload Tutorials: SKLearn

How to upload different SKLearn models to Wallaroo.

The following tutorials cover how to upload sample SKLearn models.

Scikit-learn, aka SKLearn.

Parameter | Description
Web Site | https://scikit-learn.org/stable/index.html
Supported Libraries |
  • scikit-learn==1.3.0
Framework | Framework.SKLEARN aka sklearn
Runtime | onnx / flight

During the model upload process, the Wallaroo instance will attempt to convert the model to a Native Wallaroo Runtime. If unsuccessful, it will create a Wallaroo Containerized Runtime for the model. See the model deployment section for details on how to configure pipeline resources based on the model’s runtime.

SKLearn Schema Inputs

SKLearn schema follows a different format than other models. To prevent inputs from being out of order, the inputs should be submitted in a single row in the order the model is trained to accept, with all of the data types being the same. For example, the following DataFrame has 4 columns, each column a float.

 | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm)
0 | 5.1 | 3.5 | 1.4 | 0.2
1 | 4.9 | 3.0 | 1.4 | 0.2

For submission to an SKLearn model, the data input schema will be a single array with 4 float values.

input_schema = pa.schema([
    pa.field('inputs', pa.list_(pa.float64(), list_size=4))
])

When submitting an inference, the DataFrame is converted to rows with the column data expressed as a single array. The data must be in the same order as the model expects, which is why the data is submitted as a single array rather than JSON labeled columns: this ensures that the data is submitted in the exact order the model was trained to accept. A conversion sketch follows the converted example below.

Original DataFrame:

 | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm)
0 | 5.1 | 3.5 | 1.4 | 0.2
1 | 4.9 | 3.0 | 1.4 | 0.2

Converted DataFrame:

 | inputs
0 | [5.1, 3.5, 1.4, 0.2]
1 | [4.9, 3.0, 1.4, 0.2]
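One way to perform this conversion is sketched below; df is a hypothetical stand-in for the original four-column DataFrame, and the KMeans tutorial later in this section uses the same pattern:

import pandas as pd

# hypothetical original DataFrame with the four iris feature columns
df = pd.DataFrame({
    "sepal length (cm)": [5.1, 4.9],
    "sepal width (cm)": [3.5, 3.0],
    "petal length (cm)": [1.4, 1.4],
    "petal width (cm)": [0.2, 0.2],
})

# collapse each row into a single list so the column order the model was
# trained on is preserved
converted = pd.DataFrame({"inputs": df.values.tolist()})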

SKLearn Schema Outputs

SKLearn outputs that are meant to be predictions or probabilities are labeled in the output schema when the model is uploaded to Wallaroo. For example, a model that outputs either 1 or 0 would have the output schema as follows:

output_schema = pa.schema([
    pa.field('predictions', pa.int32())
])

When used in Wallaroo, the inference result is contained in the out metadata as out.predictions.

pipeline.infer(dataframe)
 | time | in.inputs | out.predictions | check_failures
0 | 2023-07-05 15:11:29.776 | [5.1, 3.5, 1.4, 0.2] | 0 | 0
1 | 2023-07-05 15:11:29.776 | [4.9, 3.0, 1.4, 0.2] | 0 | 0

2.6.1 - Wallaroo SDK Upload Tutorial: SKLearn Clustering Kmeans

How to upload a SKLearn Clustering Kmeans to Wallaroo

This tutorial can be downloaded as part of the Wallaroo Tutorials repository.

Wallaroo Model Upload via the Wallaroo SDK: SKLearn Clustering KMeans

The following tutorial demonstrates how to upload a SKLearn Clustering KMeans model to a Wallaroo instance.

Tutorial Goals

Demonstrate the following:

  • Upload a SKLearn Clustering KMeans model to a Wallaroo instance.
  • Create a pipeline and add the model as a pipeline step.
  • Perform a sample inference.

Prerequisites

  • A Wallaroo version 2023.2.1 or above instance.

References

Tutorial Steps

Import Libraries

The first step is to import the libraries we’ll be using. These are included by default in the Wallaroo instance’s JupyterHub service.

import json
import os
import pickle

import wallaroo
from wallaroo.pipeline import Pipeline
from wallaroo.deployment_config import DeploymentConfigBuilder
from wallaroo.object import EntityNotFoundError
from wallaroo.framework import Framework

os.environ["MODELS_ENABLED"] = "true"

import pyarrow as pa
import numpy as np
import pandas as pd

Open a Connection to Wallaroo

The next step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). If logging in externally, update the wallarooPrefix and wallarooSuffix variables with the proper DNS information. For more information on Wallaroo DNS settings, see the Wallaroo DNS Integration Guide.

wl = wallaroo.Client()

Set Variables and Helper Functions

We’ll set the names of our workspace, pipeline, models and files. Workspace names must be unique across the Wallaroo instance. For this, we’ll add a randomly generated 4-character suffix to the workspace name to prevent collisions with other users’ workspaces. If running this tutorial, we recommend hard coding the workspace name so it will function in the same workspace each time it’s run.

We’ll set up some helper functions that will either use existing workspaces and pipelines, or create them if they do not already exist.

def get_workspace(name):
    workspace = None
    for ws in wl.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = wl.create_workspace(name)
    return workspace

def get_pipeline(name):
    try:
        pipeline = wl.pipelines_by_name(name)[0]
    except EntityNotFoundError:
        pipeline = wl.build_pipeline(name)
    return pipeline

import string
import random

# make a random 4 character suffix to prevent overwriting other users' workspaces
suffix = ''.join(random.choice(string.ascii_lowercase) for i in range(4))

suffix = '' # override the random suffix with a fixed value for this example

workspace_name = f'sklearn-clustering-kmeans{suffix}'
pipeline_name = f'sklearn-clustering-kmeans'

model_name = 'sklearn-clustering-kmeans'
model_file_name = "models/model-auto-conversion_sklearn_kmeans.pkl"

Create Workspace and Pipeline

We will now create the Wallaroo workspace to store our model and set it as the current workspace. Future commands will default to this workspace for pipeline creation, model uploads, etc. We’ll create our Wallaroo pipeline to deploy our model.

workspace = get_workspace(workspace_name)
wl.set_current_workspace(workspace)

pipeline = get_pipeline(pipeline_name)

Configure Data Schemas

SKLearn models are uploaded to Wallaroo through the Wallaroo Client upload_model method.

Upload SKLearn Model Parameters

The following parameters are required for SKLearn models. Note that while some fields are considered optional for the upload_model method, they are required for proper uploading of a SKLearn model to Wallaroo.

Parameter | Type | Description
name | string (Required) | The name of the model. Model names are unique per workspace. Models that are uploaded with the same name are assigned as a new version of the model.
path | string (Required) | The path to the model file being uploaded.
framework | string (Upload Method Optional, SKLearn model Required) | Set as Framework.SKLEARN.
input_schema | pyarrow.lib.Schema (Upload Method Optional, SKLearn model Required) | The input schema in Apache Arrow schema format.
output_schema | pyarrow.lib.Schema (Upload Method Optional, SKLearn model Required) | The output schema in Apache Arrow schema format.
convert_wait | bool (Upload Method Optional, SKLearn model Optional) (Default: True) |
  • True: Waits in the script for the model conversion completion.
  • False: Proceeds with the script without waiting for the model conversion process to display complete.

Once the upload process starts, the model is containerized by the Wallaroo instance. This process may take up to 10 minutes.

input_schema = pa.schema([
    pa.field('inputs', pa.list_(pa.float64(), list_size=4))
])

output_schema = pa.schema([
    pa.field('predictions', pa.int32())
])

Upload Model

The model will be uploaded with the framework set as Framework.SKLEARN.

model = wl.upload_model(model_name, 
                        model_file_name, 
                        framework=Framework.SKLEARN, 
                        input_schema=input_schema, 
                        output_schema=output_schema,
                       )
model
Waiting for model loading - this will take up to 10.0min.
Model is pending loading to a native runtime..
Model is attempting loading to a native runtime..incompatible

Model is pending loading to a container runtime..
Model is attempting loading to a container runtime..............successful

Ready
Name | sklearn-clustering-kmeans
Version | 34e40a39-41a1-42b9-a13f-7d49f0c52830
File Name | model-auto-conversion_sklearn_kmeans.pkl
SHA | b378a614854619dd573ec65b9b4ac73d0b397d50a048e733d96b68c5fdbec896
Status | ready
Image Path | proxy.replicated.com/proxy/wallaroo/ghcr.io/wallaroolabs/mlflow-deploy:v2023.4.0-main-4005
Architecture | None
Updated At | 2023-20-Oct 21:04:46
model.config().runtime()
'flight'

Deploy Pipeline

The model is uploaded and ready for use. We’ll add it as a step in our pipeline, then deploy the pipeline. For this example we’re allocating 0.25 CPU and 1 Gi RAM to the pipeline through the pipeline’s deployment configuration.

deployment_config = DeploymentConfigBuilder() \
    .cpus(0.25).memory('1Gi') \
    .build()
# clear the pipeline if it was used before
pipeline.undeploy()
pipeline.clear()

pipeline.add_model_step(model)

pipeline.deploy(deployment_config=deployment_config)

Inference

SKLearn models must have all of the data in a single array to prevent columns from being read out of order when submitting in JSON. The following takes in the data, converts each row into a single inputs array, then performs the inference. From the output_schema we defined the output as predictions, which is displayed in our inference result output as out.predictions.

data = pd.read_json('./data/test-sklearn-kmeans.json')
display(data)

# move the column values to a single array input
mock_dataframe = pd.DataFrame({"inputs": data[:2].values.tolist()})
display(mock_dataframe)
 | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm)
0 | 5.1 | 3.5 | 1.4 | 0.2
1 | 4.9 | 3.0 | 1.4 | 0.2

 | inputs
0 | [5.1, 3.5, 1.4, 0.2]
1 | [4.9, 3.0, 1.4, 0.2]
result = pipeline.infer(mock_dataframe)
display(result)
 | time | in.inputs | out.predictions | check_failures
0 | 2023-10-20 21:08:04.496 | [5.1, 3.5, 1.4, 0.2] | 1 | 0
1 | 2023-10-20 21:08:04.496 | [4.9, 3.0, 1.4, 0.2] | 1 | 0

Undeploy Pipelines

With the tutorial complete, the pipeline is undeployed to return the resources back to the cluster.

pipeline.undeploy()
Waiting for undeployment - this will take up to 45s ..................................... ok
name | sklearn-clustering-kmeans
created | 2023-10-20 20:47:07.107730+00:00
last_updated | 2023-10-20 21:05:30.043676+00:00
deployed | False
tags |
versions | 7df8f5d9-01db-40ae-bd47-2a871db46058, afbe6d0e-ecb6-47e0-9982-2eb1228f82a2, 35562729-c690-4da2-a1fd-37a760b3909f, d9bf604a-0310-4783-ae52-c6606eb3e228, f33e9409-c562-404c-a09e-a0d6d8e63d1a, a4caed26-a5d7-4d08-9c84-11be2f2c02e8
steps | sklearn-clustering-kmeans
published | False

2.6.2 - Wallaroo SDK Upload Tutorial: SKLearn Clustering SVM

How to upload a SKLearn Clustering SVM to Wallaroo

This tutorial can be downloaded as part of the Wallaroo Tutorials repository.

Wallaroo Model Upload via the Wallaroo SDK: Sklearn Clustering SVM

The following tutorial demonstrates how to upload a SKLearn Clustering Support Vector Machine (SVM) model to a Wallaroo instance.

Tutorial Goals

Demonstrate the following:

  • Upload a Sklearn Clustering SVM model to a Wallaroo instance.
  • Create a pipeline and add the model as a pipeline step.
  • Perform a sample inference.

Prerequisites

  • A Wallaroo version 2023.2.1 or above instance.

References

Tutorial Steps

Import Libraries

The first step is to import the libraries we’ll be using. These are included by default in the Wallaroo instance’s JupyterHub service.

import json
import os
import pickle

import wallaroo
from wallaroo.pipeline import Pipeline
from wallaroo.deployment_config import DeploymentConfigBuilder
from wallaroo.object import EntityNotFoundError
from wallaroo.framework import Framework

import os
os.environ["MODELS_ENABLED"] = "true"

import pyarrow as pa
import numpy as np
import pandas as pd

Open a Connection to Wallaroo

The next step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). If logging in externally, update the wallarooPrefix and wallarooSuffix variables with the proper DNS information. For more information on Wallaroo DNS settings, see the Wallaroo DNS Integration Guide.

wl = wallaroo.Client()
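For reference, a minimal sketch of an external login, assuming placeholder wallarooPrefix and wallarooSuffix values taken from your instance’s DNS settings (the exact endpoint pattern varies by installation; the values below are illustrative, not real endpoints):

# Hedged example of an external SDK connection. Replace the prefix and
# suffix placeholders with your instance's DNS settings.
wallarooPrefix = "YOUR PREFIX"
wallarooSuffix = "YOUR SUFFIX"

wl = wallaroo.Client(api_endpoint=f"https://{wallarooPrefix}.api.{wallarooSuffix}",
                     auth_endpoint=f"https://{wallarooPrefix}.keycloak.{wallarooSuffix}",
                     auth_type="sso")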

Set Variables and Helper Functions

We’ll set the names of our workspace, pipeline, models and files. Workspace names must be unique across the Wallaroo instance. For this, we’ll append a randomly generated 4-character suffix to the workspace name to prevent collisions with other users’ workspaces. If running this tutorial repeatedly, we recommend hard coding the workspace name so it will use the same workspace each time it’s run.

We’ll set up some helper functions that will either use existing workspaces and pipelines, or create them if they do not already exist.

def get_workspace(name):
    workspace = None
    for ws in wl.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = wl.create_workspace(name)
    return workspace

def get_pipeline(name):
    try:
        pipeline = wl.pipelines_by_name(name)[0]
    except EntityNotFoundError:
        pipeline = wl.build_pipeline(name)
    return pipeline

import string
import random

# make a random 4 character suffix to prevent overwriting other users' workspaces
suffix = ''.join(random.choice(string.ascii_lowercase) for i in range(4))

# this tutorial uses a fixed name; remove this line to keep the random suffix
suffix = ''

workspace_name = f'sklearn-clustering-svm{suffix}'
pipeline_name = 'sklearn-clustering-svm'

model_name = 'sklearn-clustering-svm'
model_file_name = './models/model-auto-conversion_sklearn_svm_pipeline.pkl'

Create Workspace and Pipeline

We will now create the Wallaroo workspace to store our model and set it as the current workspace. Future commands will default to this workspace for pipeline creation, model uploads, etc. We’ll create our Wallaroo pipeline to deploy our model.

workspace = get_workspace(workspace_name)
wl.set_current_workspace(workspace)

pipeline = get_pipeline(pipeline_name)

Configure Data Schemas

SKLearn models are uploaded to Wallaroo through the Wallaroo Client upload_model method.

Upload SKLearn Model Parameters

The following parameters are required for SKLearn models. Note that while some fields are considered optional for the upload_model method, they are required for proper uploading of a SKLearn model to Wallaroo.

  • name (string, Required): The name of the model. Model names are unique per workspace. Models that are uploaded with the same name are assigned as a new version of the model.
  • path (string, Required): The path to the model file being uploaded.
  • framework (string, Upload Method Optional, SKLearn model Required): Set as Framework.SKLEARN.
  • input_schema (pyarrow.lib.Schema, Upload Method Optional, SKLearn model Required): The input schema in Apache Arrow schema format.
  • output_schema (pyarrow.lib.Schema, Upload Method Optional, SKLearn model Required): The output schema in Apache Arrow schema format.
  • convert_wait (bool, Upload Method Optional, SKLearn model Optional) (Default: True):
    • True: Waits in the script for the model conversion completion.
    • False: Proceeds with the script without waiting for the model conversion process to display complete.

Once the upload process starts, the model is containerized by the Wallaroo instance. This process may take up to 10 minutes.

input_schema = pa.schema([
    pa.field('inputs', pa.list_(pa.float64(), list_size=4))
])

output_schema = pa.schema([
    pa.field('predictions', pa.int32())
])
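If blocking the notebook during conversion is undesirable, the convert_wait parameter described above can be set to False. A minimal, optional sketch using the schemas just defined (this is the same upload call used in the next step, only with convert_wait added):

# Hedged sketch: start the upload and return immediately instead of waiting
# for the conversion to complete.
model = wl.upload_model(model_name,
                        model_file_name,
                        framework=Framework.SKLEARN,
                        input_schema=input_schema,
                        output_schema=output_schema,
                        convert_wait=False)

# Display the model later; its Status field shows ready once conversion completes.
model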

Upload Model

The model will be uploaded with the framework set as Framework.SKLEARN.

model = wl.upload_model(model_name, 
                        model_file_name, 
                        framework=Framework.SKLEARN, 
                        input_schema=input_schema, 
                        output_schema=output_schema)
model
Waiting for model loading - this will take up to 10.0min.
Model is pending loading to a native runtime..
Model is attempting loading to a native runtime..incompatible

Model is pending loading to a container runtime.
Model is attempting loading to a container runtime.........successful

Ready
Name           sklearn-clustering-svm
Version        9e9f6913-999d-4a19-894c-ecc372debcaf
File Name      model-auto-conversion_sklearn_svm_pipeline.pkl
SHA            c6eec69d96f7eeb3db034600dea6b12da1d2b832c39252ec4942d02f68f52f40
Status         ready
Image Path     proxy.replicated.com/proxy/wallaroo/ghcr.io/wallaroolabs/mlflow-deploy:v2023.4.0-main-4005
Architecture   None
Updated At     2023-20-Oct 21:10:50
model.config().runtime()
'flight'

Deploy Pipeline

The model is uploaded and ready for use. We’ll add it as a step in our pipeline, then deploy the pipeline. For this example we allocate 0.25 CPU and 1 Gi of RAM to the pipeline through the pipeline’s deployment configuration.

deployment_config = DeploymentConfigBuilder() \
    .cpus(0.25).memory('1Gi') \
    .build()
# clear the pipeline if it was used before
pipeline.undeploy()
pipeline.clear()

pipeline.add_model_step(model)

pipeline.deploy(deployment_config=deployment_config)
 ok
Waiting for deployment - this will take up to 45s ...........................................

*** An error occurred while deploying your pipeline.

Deployment failed. See status for details.
Status: {'status': 'Error', 'details': [], 'engines': [{'ip': '10.244.3.151', 'name': 'engine-8444db4dcb-md57f', 'status': 'Running', 'reason': None, 'details': [], 'pipeline_statuses': {'pipelines': [{'id': 'sklearn-clustering-svm', 'status': 'Running'}]}, 'model_statuses': {'models': [{'name': 'sklearn-clustering-svm', 'version': '9e9f6913-999d-4a19-894c-ecc372debcaf', 'sha': 'c6eec69d96f7eeb3db034600dea6b12da1d2b832c39252ec4942d02f68f52f40', 'status': 'Running'}]}}], 'engine_lbs': [{'ip': '10.244.2.207', 'name': 'engine-lb-584f54c899-nkj9h', 'status': 'Running', 'reason': None, 'details': []}], 'sidekicks': [{'ip': '10.244.3.150', 'name': 'engine-sidekick-sklearn-clustering-svm-71-59bc7cf755-vkq92', 'status': 'Running', 'reason': None, 'details': [], 'statuses': None}]}

Note that while the deployment wait timed out, the status output shows the engine, load balancer, and sidekick all in a Running state, so the pipeline came up and the tutorial proceeds with the inference below.

Inference

SKLearn models require each row’s data as a single array to prevent columns from being read out of order when submitted as JSON. The following reads in the data, converts each row into a single inputs array, then performs the inference. In the output_schema we defined the output as predictions, which is displayed in the inference result output as out.predictions.

data = pd.read_json('./data/test_cluster-svm.json')
display(data)

# move the column values to a single array input
dataframe = pd.DataFrame({"inputs": data[:2].values.tolist()})
display(dataframe)
   sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
0  5.1                3.5               1.4                0.2
1  4.9                3.0               1.4                0.2

   inputs
0  [5.1, 3.5, 1.4, 0.2]
1  [4.9, 3.0, 1.4, 0.2]
pipeline.infer(dataframe)
   time                     in.inputs             out.predictions  check_failures
0  2023-10-20 21:11:44.879  [5.1, 3.5, 1.4, 0.2]  0                0
1  2023-10-20 21:11:44.879  [4.9, 3.0, 1.4, 0.2]  0                0

Undeploy Pipelines

With the tutorial complete, the pipeline is undeployed to return the resources back to the cluster.

pipeline.undeploy()
Waiting for undeployment - this will take up to 45s ..................................... ok
name           sklearn-clustering-svm
created        2023-10-20 21:00:38.923149+00:00
last_updated   2023-10-20 21:10:52.412340+00:00
deployed       False
tags
versions       bf6b7315-b5c8-47fe-b088-ded36b18a3c6, 1758d8ac-6f86-4863-b194-1d7baf20fc1f, 8a5fbd0d-028a-478d-990d-4aa34a8e2b1b
steps          sklearn-clustering-svm
published      False

2.6.3 - Wallaroo SDK Upload Tutorial: SKLearn Linear Regression

How to upload a SKLearn Linear Regression model to Wallaroo

This tutorial can be downloaded as part of the Wallaroo Tutorials repository.

Wallaroo Model Upload via the Wallaroo SDK: SKLearn Linear Regression

The following tutorial demonstrates how to upload a SKLearn Linear Regression model to a Wallaroo instance.

Tutorial Goals

Demonstrate the following:

  • Upload a SKLearn Linear Regression model to a Wallaroo instance.
  • Create a pipeline and add the model as a pipeline step.
  • Perform a sample inference.

Prerequisites

  • A Wallaroo instance, version 2023.2.1 or above.

References

Tutorial Steps

Import Libraries

The first step is to import the libraries we’ll be using. These are included by default in the Wallaroo instance’s JupyterHub service.

import json
import os
import pickle

import wallaroo
from wallaroo.pipeline import Pipeline
from wallaroo.deployment_config import DeploymentConfigBuilder
from wallaroo.object import EntityNotFoundError
from wallaroo.framework import Framework

import os
os.environ["MODELS_ENABLED"] = "true"

import pyarrow as pa
import numpy as np
import pandas as pd

Open a Connection to Wallaroo

The next step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). If logging in externally, update the wallarooPrefix and wallarooSuffix variables with the proper DNS information. For more information on Wallaroo DNS settings, see the Wallaroo DNS Integration Guide.

wl = wallaroo.Client()

Set Variables and Helper Functions

We’ll set the names of our workspace, pipeline, models and files. Workspace names must be unique across the Wallaroo instance. For this, we’ll append a randomly generated 4-character suffix to the workspace name to prevent collisions with other users’ workspaces. If running this tutorial repeatedly, we recommend hard coding the workspace name so it will use the same workspace each time it’s run.

We’ll set up some helper functions that will either use existing workspaces and pipelines, or create them if they do not already exist.

def get_workspace(name):
    workspace = None
    for ws in wl.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = wl.create_workspace(name)
    return workspace

def get_pipeline(name):
    try:
        pipeline = wl.pipelines_by_name(name)[0]
    except EntityNotFoundError:
        pipeline = wl.build_pipeline(name)
    return pipeline

import string
import random

# make a random 4 character suffix to prevent overwriting other users' workspaces
suffix = ''.join(random.choice(string.ascii_lowercase) for i in range(4))

# this tutorial uses a fixed name; remove this line to keep the random suffix
suffix = ''

workspace_name = f'sklearn-linear-regression{suffix}'
pipeline_name = 'sklearn-linear-regression'

model_name = 'sklearn-linear-regression'
model_file_name = 'models/model-auto-conversion_sklearn_linreg_diabetes.pkl'

Create Workspace and Pipeline

We will now create the Wallaroo workspace to store our model and set it as the current workspace. Future commands will default to this workspace for pipeline creation, model uploads, etc. We’ll create our Wallaroo pipeline to deploy our model.

workspace = get_workspace(workspace_name)
wl.set_current_workspace(workspace)

pipeline = get_pipeline(pipeline_name)

Configure Data Schemas

SKLearn models are uploaded to Wallaroo through the Wallaroo Client upload_model method.

Upload SKLearn Model Parameters

The following parameters are required for SKLearn models. Note that while some fields are considered optional for the upload_model method, they are required for proper uploading of a SKLearn model to Wallaroo.

  • name (string, Required): The name of the model. Model names are unique per workspace. Models that are uploaded with the same name are assigned as a new version of the model.
  • path (string, Required): The path to the model file being uploaded.
  • framework (string, Upload Method Optional, SKLearn model Required): Set as Framework.SKLEARN.
  • input_schema (pyarrow.lib.Schema, Upload Method Optional, SKLearn model Required): The input schema in Apache Arrow schema format.
  • output_schema (pyarrow.lib.Schema, Upload Method Optional, SKLearn model Required): The output schema in Apache Arrow schema format.
  • convert_wait (bool, Upload Method Optional, SKLearn model Optional) (Default: True):
    • True: Waits in the script for the model conversion completion.
    • False: Proceeds with the script without waiting for the model conversion process to display complete.

Once the upload process starts, the model is containerized by the Wallaroo instance. This process may take up to 10 minutes.

input_schema = pa.schema([
    pa.field('inputs', pa.list_(pa.float64(), list_size=10))
])

output_schema = pa.schema([
    pa.field('predictions', pa.float32())
])

Upload Model

The model will be uploaded with the framework set as Framework.SKLEARN.

model = wl.upload_model(model_name, 
                        model_file_name, 
                        framework=Framework.SKLEARN, 
                        input_schema=input_schema, 
                        output_schema=output_schema)
model
Waiting for model loading - this will take up to 10.0min.
Model is pending loading to a native runtime..
Model is attempting loading to a native runtime..incompatible

Model is pending loading to a container runtime.
Model is attempting loading to a container runtime...........successful

Ready
Name           sklearn-linear-regression
Version        44d45548-b606-4c53-a741-072a8948b26c
File Name      model-auto-conversion_sklearn_linreg_diabetes.pkl
SHA            6a9085e2d65bf0379934651d2272d3c6c4e020e36030933d85df3a8d15135a45
Status         ready
Image Path     proxy.replicated.com/proxy/wallaroo/ghcr.io/wallaroolabs/mlflow-deploy:v2023.4.0-main-4005
Architecture   None
Updated At     2023-20-Oct 21:12:03
model.config().runtime()
'flight'

Deploy Pipeline

The model is uploaded and ready for use. We’ll add it as a step in our pipeline, then deploy the pipeline. For this example we allocate 0.25 CPU and 1 Gi of RAM to the pipeline through the pipeline’s deployment configuration.

deployment_config = DeploymentConfigBuilder() \
    .cpus(0.25).memory('1Gi') \
    .build()
# clear the pipeline if it was used before
pipeline.undeploy()
pipeline.clear()

pipeline.add_model_step(model)

pipeline.deploy(deployment_config=deployment_config)

Inference

SKLearn models require each row’s data as a single array to prevent columns from being read out of order when submitted as JSON. The following reads in the data, converts each row into a single inputs array, then performs the inference. In the output_schema we defined the output as predictions, which is displayed in the inference result output as out.predictions.

data = pd.read_json('data/test_linear_regression_data.json')
display(data)

# move the column values to a single array input
dataframe = pd.DataFrame({"inputs": data[:2].values.tolist()})
display(dataframe)
   age        sex        bmi        bp         s1         s2         s3         s4         s5         s6
0   0.038076   0.050680   0.061696   0.021872  -0.044223  -0.034821  -0.043401  -0.002592   0.019907  -0.017646
1  -0.001882  -0.044642  -0.051474  -0.026328  -0.008449  -0.019163   0.074412  -0.039493  -0.068332  -0.092204

   inputs
0  [0.0380759064, 0.0506801187, 0.0616962065, 0.0...
1  [-0.0018820165, -0.0446416365, -0.051474061200...
pipeline.infer(dataframe)
   time                     in.inputs                                          out.predictions  check_failures
0  2023-10-20 21:13:02.803  [0.0380759064, 0.0506801187, 0.0616962065, 0.0...  206.116677       0
1  2023-10-20 21:13:02.803  [-0.0018820165, -0.0446416365, -0.0514740612, ...  68.071033        0

Undeploy Pipelines

With the tutorial complete, the pipeline is undeployed to return the resources back to the cluster.

pipeline.undeploy()
Waiting for undeployment - this will take up to 45s ...................................... ok
name           sklearn-linear-regression
created        2023-10-20 21:10:45.329449+00:00
last_updated   2023-10-20 21:12:08.793882+00:00
deployed       False
tags
versions       96565b51-c90c-4d0d-bee8-8c1a777af534, 682a9e09-a139-4546-a469-969be071ffcb
steps          sklearn-linear-regression
published      False

2.6.4 - Wallaroo SDK Upload Tutorial: SKLearn Logistic Regression

How to upload a SKLearn Logistic Regression to Wallaroo

This tutorial can be downloaded as part of the Wallaroo Tutorials repository.

Wallaroo Model Upload via the Wallaroo SDK: SKLearn Logistic Regression

The following tutorial demonstrates how to upload a SKLearn Logistic Regression model to a Wallaroo instance.

Tutorial Goals

Demonstrate the following:

  • Upload a SKLearn Logistic Regression model to a Wallaroo instance.
  • Create a pipeline and add the model as a pipeline step.
  • Perform a sample inference.

Prerequisites

  • A Wallaroo instance, version 2023.2.1 or above.

References

Tutorial Steps

Import Libraries

The first step is to import the libraries we’ll be using. These are included by default in the Wallaroo instance’s JupyterHub service.

import json
import os
import pickle

import wallaroo
from wallaroo.pipeline import Pipeline
from wallaroo.deployment_config import DeploymentConfigBuilder
from wallaroo.object import EntityNotFoundError
from wallaroo.framework import Framework

import os
os.environ["MODELS_ENABLED"] = "true"

import pyarrow as pa
import numpy as np
import pandas as pd

Open a Connection to Wallaroo

The next step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). If logging in externally, update the wallarooPrefix and wallarooSuffix variables with the proper DNS information. For more information on Wallaroo DNS settings, see the Wallaroo DNS Integration Guide.

wl = wallaroo.Client()

Set Variables and Helper Functions

We’ll set the names of our workspace, pipeline, models and files. Workspace names must be unique across the Wallaroo instance. For this, we’ll append a randomly generated 4-character suffix to the workspace name to prevent collisions with other users’ workspaces. If running this tutorial repeatedly, we recommend hard coding the workspace name so it will use the same workspace each time it’s run.

We’ll set up some helper functions that will either use existing workspaces and pipelines, or create them if they do not already exist.

def get_workspace(name):
    workspace = None
    for ws in wl.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = wl.create_workspace(name)
    return workspace

def get_pipeline(name):
    try:
        pipeline = wl.pipelines_by_name(name)[0]
    except EntityNotFoundError:
        pipeline = wl.build_pipeline(name)
    return pipeline

import string
import random

# make a random 4 character suffix to prevent overwriting other users' workspaces
suffix = ''.join(random.choice(string.ascii_lowercase) for i in range(4))

# this tutorial uses a fixed name; remove this line to keep the random suffix
suffix = ''

workspace_name = f'sklearn-logistic-regression{suffix}'
pipeline_name = 'sklearn-logistic-regression'

model_name = 'sklearn-logistic-regression'
model_file_name = 'models/logreg.pkl'

Create Workspace and Pipeline

We will now create the Wallaroo workspace to store our model and set it as the current workspace. Future commands will default to this workspace for pipeline creation, model uploads, etc. We’ll create our Wallaroo pipeline to deploy our model.

workspace = get_workspace(workspace_name)
wl.set_current_workspace(workspace)

pipeline = get_pipeline(pipeline_name)

Configure Data Schemas

SKLearn models are uploaded to Wallaroo through the Wallaroo Client upload_model method.

Upload SKLearn Model Parameters

The following parameters are required for SKLearn models. Note that while some fields are considered optional for the upload_model method, they are required for proper uploading of a SKLearn model to Wallaroo.

  • name (string, Required): The name of the model. Model names are unique per workspace. Models that are uploaded with the same name are assigned as a new version of the model.
  • path (string, Required): The path to the model file being uploaded.
  • framework (string, Upload Method Optional, SKLearn model Required): Set as Framework.SKLEARN.
  • input_schema (pyarrow.lib.Schema, Upload Method Optional, SKLearn model Required): The input schema in Apache Arrow schema format.
  • output_schema (pyarrow.lib.Schema, Upload Method Optional, SKLearn model Required): The output schema in Apache Arrow schema format.
  • convert_wait (bool, Upload Method Optional, SKLearn model Optional) (Default: True):
    • True: Waits in the script for the model conversion completion.
    • False: Proceeds with the script without waiting for the model conversion process to display complete.

Once the upload process starts, the model is containerized by the Wallaroo instance. This process may take up to 10 minutes.

input_schema = pa.schema([
    pa.field('inputs', pa.list_(pa.float64(), list_size=4))
])

output_schema = pa.schema([
    pa.field('predictions', pa.int32()),
    pa.field('probabilities', pa.list_(pa.float64(), list_size=3))
])

Upload Model

The model will be uploaded with the framework set as Framework.SKLEARN.

model = wl.upload_model(model_name, 
                        model_file_name, 
                        framework=Framework.SKLEARN, 
                        input_schema=input_schema, 
                        output_schema=output_schema)
model
Waiting for model loading - this will take up to 10.0min.
Model is pending loading to a native runtime..
Model is attempting loading to a native runtime..incompatible

Model is pending loading to a container runtime.
Model is attempting loading to a container runtime...........successful

Ready
Name           sklearn-logistic-regression
Version        56627137-162a-4437-a417-5f7af8c4241d
File Name      logreg.pkl
SHA            9302df6cc64a2c0d12daa257657f07f9db0bb2072bb3fb92396500b21358e0b9
Status         ready
Image Path     proxy.replicated.com/proxy/wallaroo/ghcr.io/wallaroolabs/mlflow-deploy:v2023.4.0-main-4005
Architecture   None
Updated At     2023-20-Oct 21:11:53
model.config().runtime()
'flight'

Deploy Pipeline

The model is uploaded and ready for use. We’ll add it as a step in our pipeline, then deploy the pipeline. For this example we allocate 0.25 CPU and 1 Gi of RAM to the pipeline through the pipeline’s deployment configuration.

deployment_config = DeploymentConfigBuilder() \
    .cpus(0.25).memory('1Gi') \
    .build()
# clear the pipeline if it was used before
pipeline.undeploy()
pipeline.clear()

pipeline.add_model_step(model)

pipeline.deploy(deployment_config=deployment_config)

Inference

SKLearn models require each row’s data as a single array to prevent columns from being read out of order when submitted as JSON. The following reads in the data, converts each row into a single inputs array, then performs the inference. In the output_schema we defined the output as predictions, which is displayed in the inference result output as out.predictions.

data = pd.read_json('data/test_logreg_data.json')
display(data)

# move the column values to a single array input
dataframe = pd.DataFrame({"inputs": data[:2].values.tolist()})
display(dataframe)
   sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
0  5.1                3.5               1.4                0.2
1  4.9                3.0               1.4                0.2

   inputs
0  [5.1, 3.5, 1.4, 0.2]
1  [4.9, 3.0, 1.4, 0.2]
pipeline.infer(dataframe)
   time                     in.inputs             out.predictions  out.probabilities                                  check_failures
0  2023-10-20 21:12:53.100  [5.1, 3.5, 1.4, 0.2]  0                [0.9815821465852236, 0.018417838912958125, 1.4...  0
1  2023-10-20 21:12:53.100  [4.9, 3.0, 1.4, 0.2]  0                [0.9713374799347873, 0.028662489870060148, 3.0...  0

Undeploy Pipelines

With the tutorial complete, the pipeline is undeployed to return the resources back to the cluster.

pipeline.undeploy()
Waiting for undeployment - this will take up to 45s .................................... ok
name           sklearn-logistic-regression
created        2023-10-20 21:10:37.789707+00:00
last_updated   2023-10-20 21:11:58.541031+00:00
deployed       False
tags
versions       b8170bfd-fafb-44ce-a62c-c82c6da46f4c, 7b233ca1-b1bb-4448-a948-b8fbb1481b75
steps          sklearn-logistic-regression
published      False

2.6.5 - Wallaroo SDK Upload Tutorial: SKLearn SVM PCA

How to upload a SKLearn SVM PCA to Wallaroo

This tutorial can be downloaded as part of the Wallaroo Tutorials repository.

Wallaroo Model Upload via the Wallaroo SDK: Sklearn Clustering SVM PCA

The following tutorial demonstrates how to upload a SKLearn Clustering Support Vector Machine (SVM) with Principal Component Analysis (PCA) model to a Wallaroo instance.

Tutorial Goals

Demonstrate the following:

  • Upload a Sklearn Clustering SVM PCA model to a Wallaroo instance.
  • Create a pipeline and add the model as a pipeline step.
  • Perform a sample inference.

Prerequisites

  • A Wallaroo instance, version 2023.2.1 or above.

References

Tutorial Steps

Import Libraries

The first step is to import the libraries we’ll be using. These are included by default in the Wallaroo instance’s JupyterHub service.

import json
import os
import pickle

import wallaroo
from wallaroo.pipeline import Pipeline
from wallaroo.deployment_config import DeploymentConfigBuilder
from wallaroo.object import EntityNotFoundError
from wallaroo.framework import Framework

import os
os.environ["MODELS_ENABLED"] = "true"

import pyarrow as pa
import numpy as np
import pandas as pd

Open a Connection to Wallaroo

The next step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). If logging in externally, update the wallarooPrefix and wallarooSuffix variables with the proper DNS information. For more information on Wallaroo DNS settings, see the Wallaroo DNS Integration Guide.

wl = wallaroo.Client()

Set Variables and Helper Functions

We’ll set the names of our workspace, pipeline, models and files. Workspace names must be unique across the Wallaroo instance. For this, we’ll append a randomly generated 4-character suffix to the workspace name to prevent collisions with other users’ workspaces. If running this tutorial repeatedly, we recommend hard coding the workspace name so it will use the same workspace each time it’s run.

We’ll set up some helper functions that will either use existing workspaces and pipelines, or create them if they do not already exist.

def get_workspace(name):
    workspace = None
    for ws in wl.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = wl.create_workspace(name)
    return workspace

def get_pipeline(name):
    try:
        pipeline = wl.pipelines_by_name(name)[0]
    except EntityNotFoundError:
        pipeline = wl.build_pipeline(name)
    return pipeline

import string
import random

# make a random 4 character suffix to prevent overwriting other users' workspaces
suffix = ''.join(random.choice(string.ascii_lowercase) for i in range(4))

# this tutorial uses a fixed name; remove this line to keep the random suffix
suffix = ''

workspace_name = f'sklearn-clustering-svm-pca{suffix}'
pipeline_name = 'sklearn-clustering-svm-pca'

model_name = 'sklearn-clustering-svm-pca'
model_file_name = 'models/model-auto-conversion_sklearn_svm_pca_pipeline.pkl'

Create Workspace and Pipeline

We will now create the Wallaroo workspace to store our model and set it as the current workspace. Future commands will default to this workspace for pipeline creation, model uploads, etc. We’ll create our Wallaroo pipeline to deploy our model.

workspace = get_workspace(workspace_name)
wl.set_current_workspace(workspace)

pipeline = get_pipeline(pipeline_name)

Configure Data Schemas

SKLearn models are uploaded to Wallaroo through the Wallaroo Client upload_model method.

Upload SKLearn Model Parameters

The following parameters are required for SKLearn models. Note that while some fields are considered optional for the upload_model method, they are required for proper uploading of a SKLearn model to Wallaroo.

  • name (string, Required): The name of the model. Model names are unique per workspace. Models that are uploaded with the same name are assigned as a new version of the model.
  • path (string, Required): The path to the model file being uploaded.
  • framework (string, Upload Method Optional, SKLearn model Required): Set as Framework.SKLEARN.
  • input_schema (pyarrow.lib.Schema, Upload Method Optional, SKLearn model Required): The input schema in Apache Arrow schema format.
  • output_schema (pyarrow.lib.Schema, Upload Method Optional, SKLearn model Required): The output schema in Apache Arrow schema format.
  • convert_wait (bool, Upload Method Optional, SKLearn model Optional) (Default: True):
    • True: Waits in the script for the model conversion completion.
    • False: Proceeds with the script without waiting for the model conversion process to display complete.

Once the upload process starts, the model is containerized by the Wallaroo instance. This process may take up to 10 minutes.

input_schema = pa.schema([
    pa.field('inputs', pa.list_(pa.float64(), list_size=4))
])

output_schema = pa.schema([
    pa.field('predictions', pa.int32())
])

Upload Model

The model will be uploaded with the framework set as Framework.SKLEARN.

model = wl.upload_model(model_name, 
                        model_file_name, 
                        framework=Framework.SKLEARN, 
                        input_schema=input_schema, 
                        output_schema=output_schema)
model
Waiting for model loading - this will take up to 10.0min.
Model is pending loading to a native runtime..
Model is attempting loading to a native runtime..incompatible

Model is pending loading to a container runtime.
Model is attempting loading to a container runtime.............successful

Ready
Name           sklearn-clustering-svm-pca
Version        8568d7b1-1f50-48c8-839b-00b6d0dfb734
File Name      model-auto-conversion_sklearn_svm_pca_pipeline.pkl
SHA            524b05d22f13fa4ce5feaf07b86710b447f0c80a02601be86ee5b6bc748fe7fd
Status         ready
Image Path     proxy.replicated.com/proxy/wallaroo/ghcr.io/wallaroolabs/mlflow-deploy:v2023.4.0-main-4005
Architecture   None
Updated At     2023-20-Oct 21:06:26
model.config().runtime()
'flight'

Deploy Pipeline

The model is uploaded and ready for use. We’ll add it as a step in our pipeline, then deploy the pipeline. For this example we allocate 0.25 CPU and 1 Gi of RAM to the pipeline through the pipeline’s deployment configuration.

deployment_config = DeploymentConfigBuilder() \
    .cpus(0.25).memory('1Gi') \
    .build()
# clear the pipeline if it was used before
pipeline.undeploy()
pipeline.clear()

pipeline.add_model_step(model)

pipeline.deploy(deployment_config=deployment_config)

Inference

SKLearn models require each row’s data as a single array to prevent columns from being read out of order when submitted as JSON. The following reads in the data, converts each row into a single inputs array, then performs the inference. In the output_schema we defined the output as predictions, which is displayed in the inference result output as out.predictions.

data = pd.read_json('data/test-sklearn-kmeans.json')
display(data)

# move the column values to a single array input
dataframe = pd.DataFrame({"inputs": data[:2].values.tolist()})
display(dataframe)
   sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
0  5.1                3.5               1.4                0.2
1  4.9                3.0               1.4                0.2

   inputs
0  [5.1, 3.5, 1.4, 0.2]
1  [4.9, 3.0, 1.4, 0.2]
pipeline.infer(dataframe)
   time                     in.inputs             out.predictions  check_failures
0  2023-10-20 21:08:35.188  [5.1, 3.5, 1.4, 0.2]  0                0
1  2023-10-20 21:08:35.188  [4.9, 3.0, 1.4, 0.2]  0                0

Undeploy Pipelines

With the tutorial complete, the pipeline is undeployed to return the resources back to the cluster.

pipeline.undeploy()
Waiting for undeployment - this will take up to 45s ...................................... ok
name           sklearn-clustering-svm-pca
created        2023-10-20 20:49:31.399831+00:00
last_updated   2023-10-20 21:06:27.900292+00:00
deployed       False
tags
versions       4f174cb4-a314-4f62-9960-e1bb7dcb8782, 0b7b909b-10e1-4ae4-a2e9-6c07d020fe27, 984f5d05-2799-4d36-a0eb-deb076c1d1be, 76968e9f-774e-4730-9c84-3c4143b5e123
steps          sklearn-clustering-svm-pca
published      False

2.7 - Wallaroo SDK Upload Tutorials: Tensorflow

How to upload different Tensorflow models to Wallaroo.

The following tutorials cover how to upload sample Tensorflow models.

  • Web Site: https://www.tensorflow.org/
  • Supported Libraries: tensorflow==2.9.3
  • Framework: Framework.TENSORFLOW aka tensorflow
  • Runtime: Native aka tensorflow
  • Supported File Types: SavedModel format as .zip file

TensorFlow File Format

TensorFlow models are .zip files of the SavedModel format. For example, the Aloha sample TensorFlow model is stored in the directory alohacnnlstm:

├── saved_model.pb
└── variables
    ├── variables.data-00000-of-00002
    ├── variables.data-00001-of-00002
    └── variables.index

This is compressed into the .zip file alohacnnlstm.zip with the following command:

zip -r alohacnnlstm.zip alohacnnlstm/

ML models that meet the TensorFlow SavedModel format requirements will run as Wallaroo Native runtimes by default.

See the SavedModel guide for full details.
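As an aside, a trained TensorFlow model can be exported to the SavedModel layout shown above before zipping. A minimal sketch, assuming a hypothetical trained Keras model:

import tensorflow as tf

# Hypothetical model standing in for a real trained model.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Writes saved_model.pb and the variables/ subdirectory shown above.
tf.saved_model.save(model, "alohacnnlstm")

The resulting directory is then compressed with the zip -r command shown above before uploading.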

2.7.1 - Wallaroo SDK Upload Tutorial: Tensorflow Aloha

How to upload the Tensorflow Aloha model to Wallaroo

This tutorial and the assets can be downloaded as part of the Wallaroo Tutorials repository.

Wallaroo SDK Upload Tutorial: Tensorflow

In this notebook we will walk through uploading a Tensorflow model to a Wallaroo instance and performing sample inferences. For this example we will be using an open source model that uses an Aloha CNN LSTM model for classifying Domain names as being either legitimate or being used for nefarious purposes such as malware distribution.

Prerequisites

  • An installed Wallaroo instance.
  • The following Python libraries installed:
    • os
    • wallaroo: The Wallaroo SDK. Included with the Wallaroo JupyterHub service by default.
    • pandas: Pandas, mainly used for Pandas DataFrame
    • pyarrow: PyArrow for Apache Arrow support

Tutorial Goals

For our example, we will perform the following:

  • Create a workspace for our work.
  • Upload the Aloha model.
  • Create a pipeline that can ingest our submitted data, submit it to the model, and export the results.

All sample data and models are available through the Wallaroo Quick Start Guide Samples repository.

Connect to the Wallaroo Instance

The first step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). For more information on Wallaroo Client settings, see the Client Connection guide.

import wallaroo
from wallaroo.object import EntityNotFoundError
from wallaroo.framework import Framework

# to display dataframe tables
from IPython.display import display
# used to display dataframe information without truncating

import os
os.environ["MODELS_ENABLED"] = "true"

import pandas as pd
pd.set_option('display.max_colwidth', None)
import pyarrow as pa
# Login through local Wallaroo instance

wl = wallaroo.Client()

Create the Workspace

We will create a workspace to work in and call it the “alohaworkspace”, then set it as current workspace environment. We’ll also create our pipeline in advance as alohapipeline. The model name and the model file will be specified for use in later steps.

To allow this tutorial to be run multiple times or by multiple users in the same Wallaroo instance, a random 4-character suffix will be added to the workspace, pipeline, and model names.

import string
import random

# make a random 4 character suffix to verify uniqueness in tutorials
suffix = ''.join(random.choice(string.ascii_lowercase) for i in range(4))

# this tutorial uses fixed names; remove this line to keep the random suffix
suffix = ''

workspace_name = f'tensorflowuploadexampleworkspace{suffix}'
pipeline_name = f'tensorflowuploadexample{suffix}'
model_name = f'tensorflowuploadexample{suffix}'
model_file_name = './models/alohacnnlstm.zip'
def get_workspace(name):
    workspace = None
    for ws in wl.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = wl.create_workspace(name)
    return workspace

def get_pipeline(name):
    try:
        pipeline = wl.pipelines_by_name(name)[0]
    except EntityNotFoundError:
        pipeline = wl.build_pipeline(name)
    return pipeline
workspace = get_workspace(workspace_name)

wl.set_current_workspace(workspace)

aloha_pipeline = get_pipeline(pipeline_name)
aloha_pipeline
name           tensorflowuploadexample
created        2023-10-20 21:15:17.379310+00:00
last_updated   2023-10-20 21:15:17.379310+00:00
deployed       (none)
tags
versions       dce97e55-97d7-4bf9-8c75-8f9115e40e23
steps
published      False

We can verify that the workspace was created and is the current default workspace with the get_current_workspace() command.

wl.get_current_workspace()
{'name': 'tensorflowuploadexampleworkspace', 'id': 83, 'archived': False, 'created_by': '420ff140-d2a0-4425-9c12-cac5ed602472', 'created_at': '2023-10-20T21:15:17.207568+00:00', 'models': [], 'pipelines': [{'name': 'tensorflowuploadexample', 'create_time': datetime.datetime(2023, 10, 20, 21, 15, 17, 379310, tzinfo=tzutc()), 'definition': '[]'}]}

Upload the Models

Now we will upload our model. Note that for this example we are uploading the model from a .zip file. The Aloha model is a protobuf file trained for evaluating web pages, and we will configure it to use data in the tensorflow format.

The following parameters are required for TensorFlow models. TensorFlow models are native runtimes in Wallaroo, so the input_schema and output_schema parameters are optional.

  • name (string, Required): The name of the model. Model names are unique per workspace. Models that are uploaded with the same name are assigned as a new version of the model.
  • path (string, Required): The path to the model file being uploaded.
  • framework (string, Required): Set as Framework.TENSORFLOW.
  • input_schema (pyarrow.lib.Schema, Optional): The input schema in Apache Arrow schema format.
  • output_schema (pyarrow.lib.Schema, Optional): The output schema in Apache Arrow schema format.
  • convert_wait (bool, Optional) (Default: True): Not required for native runtimes.
    • True: Waits in the script for the model conversion completion.
    • False: Proceeds with the script without waiting for the model conversion process to display complete.

TensorFlow File Format

TensorFlow models are .zip file of the SavedModel format. For example, the Aloha sample TensorFlow model is stored in the directory alohacnnlstm:

├── saved_model.pb
└── variables
    ├── variables.data-00000-of-00002
    ├── variables.data-00001-of-00002
    └── variables.index

This is compressed into the .zip file alohacnnlstm.zip with the following command:

zip -r alohacnnlstm.zip alohacnnlstm/
model = wl.upload_model(model_name, model_file_name, Framework.TENSORFLOW).configure("tensorflow")
model.config().runtime()
'tensorflow'

Deploy a model

Now that we have a model that we want to use we will create a deployment for it.

We will tell the deployment we are using a tensorflow model and give the deployment name and the configuration we want for the deployment.

aloha_pipeline.add_model_step(model)
name           tensorflowuploadexample
created        2023-10-20 21:15:17.379310+00:00
last_updated   2023-10-20 21:15:17.379310+00:00
deployed       (none)
tags
versions       dce97e55-97d7-4bf9-8c75-8f9115e40e23
steps
published      False
aloha_pipeline.deploy()
Waiting for deployment - this will take up to 45s ........ ok
name           tensorflowuploadexample
created        2023-10-20 21:15:17.379310+00:00
last_updated   2023-10-20 21:15:17.786848+00:00
deployed       True
tags
versions       ceffdd27-2887-4a32-ba90-4eec496aafb1, dce97e55-97d7-4bf9-8c75-8f9115e40e23
steps          tensorflowuploadexample
published      False

We can verify that the pipeline is running and list what models are associated with it.

aloha_pipeline.status()
{'status': 'Running',
 'details': [],
 'engines': [{'ip': '10.244.0.33',
   'name': 'engine-69b5b9c587-cg8rx',
   'status': 'Running',
   'reason': None,
   'details': [],
   'pipeline_statuses': {'pipelines': [{'id': 'tensorflowuploadexample',
      'status': 'Running'}]},
   'model_statuses': {'models': [{'name': 'tensorflowuploadexample',
      'version': '5d6ca253-93b4-4f16-b117-6d116450b108',
      'sha': 'd71d9ffc61aaac58c2b1ed70a2db13d1416fb9d3f5b891e5e4e2e97180fe22f8',
      'status': 'Running'}]}}],
 'engine_lbs': [{'ip': '10.244.2.210',
   'name': 'engine-lb-584f54c899-z5pcp',
   'status': 'Running',
   'reason': None,
   'details': []}],
 'sidekicks': []}

Inferences

Infer 1 row

Now that the pipeline is deployed and our Aloha model is in place, we’ll perform a smoke test to verify the pipeline is up and running properly. We’ll construct a single encoded URL as a DataFrame, submit it through the pipeline’s infer method, and print the results back out.

The result should tell us that the tokenized URL is legitimate (0) or fraud (1). This sample data should return close to 1 in out.main.

smoke_test = pd.DataFrame.from_records(
    [
        {
            "text_input": [
                0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                28, 16, 32, 23, 29, 32, 30, 19, 26, 17
            ]
        }
    ]
)

result = aloha_pipeline.infer(smoke_test)
display(result.loc[:, ["time","out.main"]])
   time                     out.main
0  2023-10-20 21:15:26.393  [0.997564]

Undeploy Pipeline

When finished with our tests, we will undeploy the pipeline so we have the Kubernetes resources back for other tasks. Note that if the deployment variable is unchanged aloha_pipeline.deploy() will restart the inference engine in the same configuration as before.

aloha_pipeline.undeploy()
Waiting for undeployment - this will take up to 45s ..................................... ok
name           tensorflowuploadexample
created        2023-10-20 21:15:17.379310+00:00
last_updated   2023-10-20 21:15:17.786848+00:00
deployed       False
tags
versions       ceffdd27-2887-4a32-ba90-4eec496aafb1, dce97e55-97d7-4bf9-8c75-8f9115e40e23
steps          tensorflowuploadexample
published      False

2.8 - Wallaroo SDK Upload Tutorials: Tensorflow Keras

How to upload different Tensorflow Keras models to Wallaroo.

The following tutorials cover how to upload sample Tensorflow Keras models.

  • Web Site: https://www.tensorflow.org/api_docs/python/tf/keras/Model
  • Supported Libraries: tensorflow==2.9.3, keras==2.9.0
  • Framework: Framework.KERAS aka keras
  • Supported File Types: SavedModel format as .zip file and HDF5 format
  • Runtime: onnx/flight

During the model upload process, the Wallaroo instance will attempt to convert the model to a Native Wallaroo Runtime. If unsuccessful, it will create a Wallaroo Containerized Runtime for the model. See the model deployment section for details on how to configure pipeline resources based on the model’s runtime.
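After an upload completes, the runtime that was assigned can be checked from the model configuration, as the tutorials in this section do:

# 'onnx' indicates the model converted to a native runtime;
# 'flight' indicates it runs in a containerized runtime.
model.config().runtime()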

TensorFlow Keras SavedModel Format

TensorFlow Keras SavedModel models are .zip files of the SavedModel format. For example, the Aloha sample TensorFlow model is stored in the directory alohacnnlstm:

├── saved_model.pb
└── variables
    ├── variables.data-00000-of-00002
    ├── variables.data-00001-of-00002
    └── variables.index

This is compressed into the .zip file alohacnnlstm.zip with the following command:

zip -r alohacnnlstm.zip alohacnnlstm/

See the SavedModel guide for full details.

TensorFlow Keras H5 Format

Wallaroo supports the HDF5 (.h5) format for TensorFlow Keras models.
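A minimal sketch of exporting a Keras model to the H5 format, assuming a hypothetical trained model and file name:

import tensorflow as tf

# Hypothetical model standing in for a real trained model.
model = tf.keras.Sequential([tf.keras.layers.Dense(32, input_shape=(10,))])

# Keras infers the HDF5 format from the .h5 extension.
model.save("keras_sequential_model.h5")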

2.8.1 - Wallaroo SDK Upload Tutorial: Keras Sequential Single IO

How to upload a Keras Sequential Single IO model to Wallaroo via the Wallaroo SDK

This tutorial can be downloaded as part of the Wallaroo Tutorials repository.

Wallaroo Model Upload via the Wallaroo SDK: TensorFlow keras Sequential Single IO

The following tutorial demonstrates how to upload a TensorFlow keras Sequential Single IO model to a Wallaroo instance.

Tutorial Goals

Demonstrate the following:

  • Upload a TensorFlow keras Sequential Single IO to a Wallaroo instance.
  • Create a pipeline and add the model as a pipeline step.
  • Perform a sample inference.

Prerequisites

  • A Wallaroo instance, version 2023.2.1 or above.

References

Tutorial Steps

Import Libraries

The first step is to import the libraries we’ll be using. These are included by default in the Wallaroo instance’s JupyterHub service.

import json
import os
import pickle

import wallaroo
from wallaroo.pipeline   import Pipeline
from wallaroo.deployment_config import DeploymentConfigBuilder
from wallaroo.framework import Framework
from wallaroo.object import EntityNotFoundError

import os
os.environ["MODELS_ENABLED"] = "true"

import pyarrow as pa
import numpy as np
import pandas as pd

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

import datetime

Open a Connection to Wallaroo

The next step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). If logging in externally, update the wallarooPrefix and wallarooSuffix variables with the proper DNS information. For more information on Wallaroo DNS settings, see the Wallaroo DNS Integration Guide.

wl = wallaroo.Client()

Set Variables and Helper Functions

We’ll set the names of our workspace, pipeline, models and files. Workspace names must be unique across the Wallaroo instance. For this, we’ll append a randomly generated 4-character suffix to the workspace name to prevent collisions with other users’ workspaces. If running this tutorial repeatedly, we recommend hard coding the workspace name so it will use the same workspace each time it’s run.

We’ll set up some helper functions that will either use existing workspaces and pipelines, or create them if they do not already exist.

def get_workspace(name):
    workspace = None
    for ws in wl.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = wl.create_workspace(name)
    return workspace

def get_pipeline(name):
    try:
        pipeline = wl.pipelines_by_name(name)[0]
    except EntityNotFoundError:
        pipeline = wl.build_pipeline(name)
    return pipeline

import string
import random

# make a random 4 character suffix to prevent overwriting other users' workspaces
suffix = ''.join(random.choice(string.ascii_lowercase) for i in range(4))

# this tutorial uses a fixed name; remove this line to keep the random suffix
suffix = ''

workspace_name = f'keras-sequential-single-io{suffix}'
pipeline_name = 'keras-sequential-single-io'

model_name = 'keras-sequential-single-io'
model_file_name = 'models/model-auto-conversion_keras_single_io_keras_sequential_model.h5'

Create Workspace and Pipeline

We will now create the Wallaroo workspace to store our model and set it as the current workspace. Future commands will default to this workspace for pipeline creation, model uploads, etc. We’ll create our Wallaroo pipeline to deploy our model.

workspace = get_workspace(workspace_name)
wl.set_current_workspace(workspace)

pipeline = get_pipeline(pipeline_name)

Configure Data Schemas

The following parameters are required for TensorFlow Keras models. Note that while some fields are considered optional for the upload_model method, they are required for proper uploading of a TensorFlow Keras model to Wallaroo.

  • name (string, Required): The name of the model. Model names are unique per workspace. Models that are uploaded with the same name are assigned as a new version of the model.
  • path (string, Required): The path to the model file being uploaded.
  • framework (string, Upload Method Optional, TensorFlow Keras model Required): Set as Framework.KERAS.
  • input_schema (pyarrow.lib.Schema, Upload Method Optional, TensorFlow Keras model Required): The input schema in Apache Arrow schema format.
  • output_schema (pyarrow.lib.Schema, Upload Method Optional, TensorFlow Keras model Required): The output schema in Apache Arrow schema format.
  • convert_wait (bool, Upload Method Optional, TensorFlow Keras model Optional) (Default: True):
    • True: Waits in the script for the model conversion completion.
    • False: Proceeds with the script without waiting for the model conversion process to display complete.

Once the upload process starts, the model is containerized by the Wallaroo instance. This process may take up to 10 minutes.

input_schema = pa.schema([
    pa.field('input', pa.list_(pa.float64(), list_size=10))
])
output_schema = pa.schema([
    pa.field('output', pa.list_(pa.float64(), list_size=32))
])

Upload Model

The model will be uploaded with the framework set as Framework.KERAS.

framework=Framework.KERAS

model = wl.upload_model(model_name, 
                        model_file_name, 
                        framework=framework, 
                        input_schema=input_schema, 
                        output_schema=output_schema)
model
Waiting for model loading - this will take up to 10.0min.
Model is pending loading to a native runtime.
Model is pending loading to a container runtime..
Model is attempting loading to a container runtime.....................successful

Ready

Name           keras-sequential-single-io
Version        3aa90e63-f61b-4418-a70a-1631edfcb7e6
File Name      model-auto-conversion_keras_single_io_keras_sequential_model.h5
SHA            f7e49538e38bebe066ce8df97bac8be239ae8c7d2733e500c8cd633706ae95a8
Status         ready
Image Path     proxy.replicated.com/proxy/wallaroo/ghcr.io/wallaroolabs/mlflow-deploy:v2023.4.0-main-4005
Architecture   None
Updated At     2023-20-Oct 17:52:05
model.config().runtime()
'flight'

Deploy Pipeline

The model is uploaded and ready for use. We’ll add it as a step in our pipeline, then deploy the pipeline. For this example we allocate 0.25 CPU and 1 Gi of RAM to the pipeline through the pipeline’s deployment configuration.

deployment_config = DeploymentConfigBuilder() \
    .cpus(0.25).memory('1Gi') \
    .build()
# clear the pipeline if used in a previous tutorial
pipeline.undeploy()
pipeline.clear()
pipeline.add_model_step(model)

pipeline.deploy(deployment_config=deployment_config)
pipeline.status()

Run Inference

A sample inference will be run. First the pandas DataFrame used for the inference is created, then the inference is run through the pipeline’s infer method.

input_data = np.random.rand(10, 10)
mock_dataframe = pd.DataFrame({
    "input": input_data.tolist()
})
mock_dataframe
   input
0  [0.1991693928323468, 0.7669068394222905, 0.310...
1  [0.5845813884987883, 0.9974851461171536, 0.641...
2  [0.932340919861926, 0.5378812209722794, 0.1672...
3  [0.40557019984163867, 0.49709830678629907, 0.6...
4  [0.044116334060004925, 0.8634686667900255, 0.2...
5  [0.9516672781081118, 0.5804880585685775, 0.858...
6  [0.6956141885467453, 0.7382529966340766, 0.392...
7  [0.21926011010552227, 0.6843926552276196, 0.78...
8  [0.9941171972816318, 0.45451319048527616, 0.95...
9  [0.3712231387032263, 0.08633906733980612, 0.87...
pipeline.infer(mock_dataframe)
   time                     in.input                                           out.output                                         check_failures
0  2023-10-20 17:58:27.143  [0.1991693928, 0.7669068394, 0.3105885385, 0.2...  [0.028634641, 0.023895813, 0.040356454, 0.0244...  0
1  2023-10-20 17:58:27.143  [0.5845813885, 0.9974851461, 0.6415301842, 0.8...  [0.039276116, 0.01871082, 0.047833905, 0.01654...  0
2  2023-10-20 17:58:27.143  [0.9323409199, 0.537881221, 0.1672841103, 0.79...  [0.016344877, 0.039013684, 0.03176823, 0.03096...  0
3  2023-10-20 17:58:27.143  [0.4055701998, 0.4970983068, 0.6517088712, 0.2...  [0.03115139, 0.019364284, 0.03944681, 0.024314...  0
4  2023-10-20 17:58:27.143  [0.0441163341, 0.8634686668, 0.212879749, 0.62...  [0.024984468, 0.031172493, 0.06482109, 0.02364...  0
5  2023-10-20 17:58:27.143  [0.9516672781, 0.5804880586, 0.8585948355, 0.8...  [0.03174607, 0.024760708, 0.051566537, 0.02118...  0
6  2023-10-20 17:58:27.143  [0.6956141885, 0.7382529966, 0.3924137787, 0.9...  [0.031668007, 0.025940748, 0.06299812, 0.02237...  0
7  2023-10-20 17:58:27.143  [0.2192601101, 0.6843926552, 0.78983548, 0.939...  [0.03280156, 0.025590684, 0.0276087, 0.0241528...  0
8  2023-10-20 17:58:27.143  [0.9941171973, 0.4545131905, 0.95580094, 0.179...  [0.024180355, 0.02735067, 0.051267277, 0.03041...  0
9  2023-10-20 17:58:27.143  [0.3712231387, 0.0863390673, 0.8778721234, 0.0...  [0.02604158, 0.02733598, 0.04093318, 0.0291493...  0

Undeploy Pipelines

With the tutorial complete, the pipeline is undeployed to return the resources back to the cluster.

pipeline.undeploy()
Waiting for undeployment - this will take up to 45s .............

2.9 - Wallaroo SDK Upload Tutorials: XGBoost

How to upload different XGBoost models to Wallaroo.

The following tutorials cover how to upload sample XGBoost models.

  • Web Site: https://xgboost.ai/
  • Supported Libraries: scikit-learn==1.3.0, xgboost==1.7.4
  • Framework: Framework.XGBOOST aka xgboost
  • Supported File Types: pickle (XGB files are not supported.)
  • Runtime: onnx / flight

During the model upload process, the Wallaroo instance will attempt to convert the model to a Native Wallaroo Runtime. If unsuccessful, it will create a Wallaroo Containerized Runtime for the model. See the model deployment section for details on how to configure pipeline resources based on the model’s runtime.

XGBoost Schema Inputs

XGBoost schema follows a different format than other models. To prevent inputs from being out of order, the inputs should be submitted in a single row in the order the model is trained to accept, with all of the data types being the same. If a model is originally trained to accept inputs of different data types, it will need to be retrained to only accept one data type for each column - typically pa.float64() is a good choice.

For example, the following DataFrame has 4 columns, each column a float.

   sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
0  5.1                3.5               1.4                0.2
1  4.9                3.0               1.4                0.2

For submission to an XGBoost model, the data input schema will be a single array with 4 float values.

input_schema = pa.schema([
    pa.field('inputs', pa.list_(pa.float64(), list_size=4))
])

When submitting as an inference, the DataFrame is converted to rows with the column data expressed as a single array. The data must be in the same order as the model expects, which is why the data is submitted as a single array rather than JSON labeled columns: this ensures that the data is submitted in the exact order the model was trained to accept.

Original DataFrame:

 | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm)
0 | 5.1 | 3.5 | 1.4 | 0.2
1 | 4.9 | 3.0 | 1.4 | 0.2

Converted DataFrame:

 | inputs
0 | [5.1, 3.5, 1.4, 0.2]
1 | [4.9, 3.0, 1.4, 0.2]
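
A minimal sketch of this conversion with pandas, mirroring the pattern used in the tutorials below; df here stands for the original 4-column DataFrame:

# convert each row of the original DataFrame into a single ordered array
dataframe = pd.DataFrame({"inputs": df.values.tolist()})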

XGBoost Schema Outputs

Outputs for XGBoost are labeled based on the trained model outputs. For this example, the output is simply a single output listed as output. In the Wallaroo inference result, it is grouped under the out metadata as out.output.

output_schema = pa.schema([
    pa.field('output', pa.int32())
])
pipeline.infer(dataframe)
 | time | in.inputs | out.output | check_failures
0 | 2023-07-05 15:11:29.776 | [5.1, 3.5, 1.4, 0.2] | 0 | 0
1 | 2023-07-05 15:11:29.776 | [4.9, 3.0, 1.4, 0.2] | 0 | 0
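
As an optional pre-upload check (an addition to these tutorials), the converted column can be validated against the declared field type with pyarrow; rows that are not 4-element float lists will raise an error:

import pandas as pd
import pyarrow as pa

# hypothetical converted DataFrame in the single-array format shown above
dataframe = pd.DataFrame({"inputs": [[5.1, 3.5, 1.4, 0.2], [4.9, 3.0, 1.4, 0.2]]})

# pa.array raises if any row does not fit the fixed-size float list
pa.array(dataframe["inputs"], type=pa.list_(pa.float64(), list_size=4))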

2.9.1 - Wallaroo SDK Upload Tutorial: RF Regressor Tutorial

How to upload an XGBoost RF Regressor model to Wallaroo

This tutorial can be downloaded as part of the Wallaroo Tutorials repository.

Wallaroo Model Upload via the Wallaroo SDK: XGBoost RF Regressor

The following tutorial demonstrates how to upload an XGBoost RF Regressor model to a Wallaroo instance.

Tutorial Goals

Demonstrate the following:

  • Upload an XGBoost RF Regressor model to a Wallaroo instance.
  • Create a pipeline and add the model as a pipeline step.
  • Perform a sample inference.

Prerequisites

  • A Wallaroo instance version 2023.2.1 or above.

References

Tutorial Steps

Import Libraries

The first step is to import the libraries we’ll be using. These are included by default in the Wallaroo instance’s JupyterHub service.

import json
import os
import pickle

import wallaroo
from wallaroo.pipeline import Pipeline
from wallaroo.deployment_config import DeploymentConfigBuilder
from wallaroo.object import EntityNotFoundError
from wallaroo.framework import Framework

os.environ["MODELS_ENABLED"] = "true"

import pyarrow as pa
import numpy as np
import pandas as pd

Open a Connection to Wallaroo

The next step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). If logging in externally, update the wallarooPrefix and wallarooSuffix variables with the proper DNS information. For more information on Wallaroo DNS settings, see the Wallaroo DNS Integration Guide.

wl = wallaroo.Client()

Set Variables and Helper Functions

We’ll set the names of our workspace, pipeline, models and files. Workspace names must be unique across the Wallaroo instance. For this, we’ll append a randomly generated 4-character suffix to the workspace name to prevent collisions with other users’ workspaces. If running this tutorial, we recommend hard coding the workspace name so it will function in the same workspace each time it’s run.

We’ll set up some helper functions that will either use existing workspaces and pipelines, or create them if they do not already exist.

def get_workspace(name):
    workspace = None
    for ws in wl.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = wl.create_workspace(name)
    return workspace

def get_pipeline(name):
    try:
        pipeline = wl.pipelines_by_name(name)[0]
    except EntityNotFoundError:
        pipeline = wl.build_pipeline(name)
    return pipeline

import string
import random

# create a random 4-character suffix to prevent overwriting other users' workspaces
suffix = ''.join(random.choice(string.ascii_lowercase) for i in range(4))

# left blank here so the tutorial runs in the same hard-coded workspace each time
suffix = ''

workspace_name = f'xgboost-rf-regressor{suffix}'
pipeline_name = 'xgboost-rf-regressor'

model_name = 'xgboost-rf-regressor'
model_file_name = 'models/model-auto-conversion_xgboost_xgb_rf_regressor_diabetes.pkl'

Create Workspace and Pipeline

We will now create the Wallaroo workspace to store our model and set it as the current workspace. Future commands will default to this workspace for pipeline creation, model uploads, etc. We’ll create our Wallaroo pipeline to deploy our model.

workspace = get_workspace(workspace_name)
wl.set_current_workspace(workspace)

pipeline = get_pipeline(pipeline_name)

Configure Data Schemas

XGBoost models are uploaded to Wallaroo through the Wallaroo Client upload_model method.

Upload XGBoost Model Parameters

The following parameters are required for XGBoost models. Note that while some fields are considered optional for the upload_model method, they are required for proper uploading of an XGBoost model to Wallaroo.

Parameter | Type | Description
name | string (Required) | The name of the model. Model names are unique per workspace. Models that are uploaded with the same name are assigned as a new version of the model.
path | string (Required) | The path to the model file being uploaded.
framework | string (Upload Method Optional, XGBoost model Required) | Set as Framework.XGBOOST.
input_schema | pyarrow.lib.Schema (Upload Method Optional, XGBoost model Required) | The input schema in Apache Arrow schema format.
output_schema | pyarrow.lib.Schema (Upload Method Optional, XGBoost model Required) | The output schema in Apache Arrow schema format.
convert_wait | bool (Upload Method Optional, XGBoost model Optional) (Default: True) |
  • True: Waits in the script for the model conversion completion.
  • False: Proceeds with the script without waiting for the model conversion process to display complete.

Once the upload process starts, the model is containerized by the Wallaroo instance. This process may take up to 10 minutes.
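
For long conversions, convert_wait=False from the table above lets the script continue while the conversion proceeds in the background. A minimal sketch, using the variables defined later in this tutorial:

# non-blocking upload: the script continues while Wallaroo converts the model
model = wl.upload_model(model_name,
                        model_file_name,
                        framework=Framework.XGBOOST,
                        input_schema=input_schema,
                        output_schema=output_schema,
                        convert_wait=False)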

XGBoost Schema Inputs

XGBoost schema follows a different format than other models. To prevent inputs from being out of order, the inputs should be submitted in a single row in the order the model is trained to accept, with all of the data types being the same. If a model is originally trained to accept inputs of different data types, it will need to be retrained to only accept one data type for each column - typically pa.float64() is a good choice.

For example, the following DataFrame has 4 columns, each column a float.

 | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm)
0 | 5.1 | 3.5 | 1.4 | 0.2
1 | 4.9 | 3.0 | 1.4 | 0.2

For submission to an XGBoost model, the data input schema will be a single array with 4 float values.

input_schema = pa.schema([
    pa.field('inputs', pa.list_(pa.float64(), list_size=4))
])

When submitting as an inference, the DataFrame is converted to rows with the column data expressed as a single array. The data must be in the same order as the model expects, which is why the data is submitted as a single array rather than JSON labeled columns: this ensures that the data is submitted in the exact order the model was trained to accept.

Original DataFrame:

 | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm)
0 | 5.1 | 3.5 | 1.4 | 0.2
1 | 4.9 | 3.0 | 1.4 | 0.2

Converted DataFrame:

 | inputs
0 | [5.1, 3.5, 1.4, 0.2]
1 | [4.9, 3.0, 1.4, 0.2]

XGBoost Schema Outputs

Outputs for XGBoost are labeled based on the trained model outputs. For this example, the output is simply a single output listed as output. In the Wallaroo inference result, it is grouped under the out metadata as out.output.

output_schema = pa.schema([
    pa.field('output', pa.int32())
])
pipeline.infer(dataframe)
 | time | in.inputs | out.output | check_failures
0 | 2023-07-05 15:11:29.776 | [5.1, 3.5, 1.4, 0.2] | 0 | 0
1 | 2023-07-05 15:11:29.776 | [4.9, 3.0, 1.4, 0.2] | 0 | 0

For this tutorial, the input and output schemas are set as follows:

input_schema = pa.schema([
    pa.field('inputs', pa.list_(pa.float64(), list_size=10))
])

output_schema = pa.schema([
    pa.field('output', pa.float64())
])

Upload Model

The model will be uploaded with the framework set as Framework.XGBOOST.

model = wl.upload_model(model_name, 
                        model_file_name, 
                        framework=Framework.XGBOOST, 
                        input_schema=input_schema, 
                        output_schema=output_schema)
model
model.config().runtime()
'flight'

Deploy Pipeline

The model is uploaded and ready for use. We’ll add it as a step in our pipeline, then deploy the pipeline. For this example we allocate 0.25 CPU and 1 Gi RAM to the pipeline through the pipeline’s deployment configuration.

deployment_config = DeploymentConfigBuilder() \
    .cpus(0.25).memory('1Gi') \
    .build()
# clear the pipeline if it was used before
pipeline.undeploy()
pipeline.clear()

pipeline.add_model_step(model)

pipeline.deploy(deployment_config=deployment_config)
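
Deployment allocates the cluster resources, which can take a short time. Before inferring, a quick check with the SDK’s pipeline status() method (a minimal addition to this tutorial) can confirm the pipeline is running:

# confirm the pipeline reports a Running status before sending inferences
display(pipeline.status())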

Run Inference

A sample inference will be run. First the pandas DataFrame used for the inference is created, then the inference is run through the pipeline’s infer method.

data = pd.read_json('./data/test_xgb_rf-regressor.json')
display(data)

dataframe = pd.DataFrame({"inputs": data[:2].values.tolist()})
display(dataframe)

pipeline.infer(dataframe)
 | age | sex | bmi | bp | s1 | s2 | s3 | s4 | s5 | s6
0 | 0.038076 | 0.050680 | 0.061696 | 0.021872 | -0.044223 | -0.034821 | -0.043401 | -0.002592 | 0.019907 | -0.017646
1 | -0.001882 | -0.044642 | -0.051474 | -0.026328 | -0.008449 | -0.019163 | 0.074412 | -0.039493 | -0.068332 | -0.092204

 | inputs
0 | [0.0380759064, 0.0506801187, 0.0616962065, 0.0...
1 | [-0.0018820165, -0.0446416365, -0.051474061200...

 | time | in.inputs | out.output | check_failures
0 | 2023-10-20 21:25:41.653 | [0.0380759064, 0.0506801187, 0.0616962065, 0.0... | 166.61877 | 0
1 | 2023-10-20 21:25:41.653 | [-0.0018820165, -0.0446416365, -0.0514740612, ... | 76.18958 | 0

Undeploy Pipelines

With the tutorial complete, the pipeline is undeployed to return the resources back to the cluster.

pipeline.undeploy()
Waiting for undeployment - this will take up to 45s ......................................... ok
name | xgboost-rf-regressor
created | 2023-10-20 21:18:03.238552+00:00
last_updated | 2023-10-20 21:22:34.399963+00:00
deployed | False
tags |
versions | d7d97f0e-8078-401f-aec7-bb33fae69c83, 165e0dd6-5ca4-4bc7-a6ae-8941ce05321c
steps | xgboost-rf-regressor
published | False

2.9.2 - Wallaroo SDK Upload Tutorial: XGBoost Classification

How to upload an XGBoost Classification model to Wallaroo

This tutorial can be downloaded as part of the Wallaroo Tutorials repository.

Wallaroo Model Upload via the Wallaroo SDK: XGBoost Classification

The following tutorial demonstrates how to upload an XGBoost Classification model to a Wallaroo instance.

Tutorial Goals

Demonstrate the following:

  • Upload an XGBoost Classification model to a Wallaroo instance.
  • Create a pipeline and add the model as a pipeline step.
  • Perform a sample inference.

Prerequisites

  • A Wallaroo instance version 2023.2.1 or above.

References

Tutorial Steps

Import Libraries

The first step is to import the libraries we’ll be using. These are included by default in the Wallaroo instance’s JupyterHub service.

import json
import os
import pickle

import wallaroo
from wallaroo.pipeline import Pipeline
from wallaroo.deployment_config import DeploymentConfigBuilder
from wallaroo.object import EntityNotFoundError
from wallaroo.framework import Framework

os.environ["MODELS_ENABLED"] = "true"

import pyarrow as pa
import numpy as np
import pandas as pd

Open a Connection to Wallaroo

The next step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). If logging in externally, update the wallarooPrefix and wallarooSuffix variables with the proper DNS information. For more information on Wallaroo DNS settings, see the Wallaroo DNS Integration Guide.

wl = wallaroo.Client()

Set Variables and Helper Functions

We’ll set the names of our workspace, pipeline, models and files. Workspace names must be unique across the Wallaroo instance. For this, we’ll append a randomly generated 4-character suffix to the workspace name to prevent collisions with other users’ workspaces. If running this tutorial, we recommend hard coding the workspace name so it will function in the same workspace each time it’s run.

We’ll set up some helper functions that will either use existing workspaces and pipelines, or create them if they do not already exist.

def get_workspace(name):
    workspace = None
    for ws in wl.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = wl.create_workspace(name)
    return workspace

def get_pipeline(name):
    try:
        pipeline = wl.pipelines_by_name(name)[0]
    except EntityNotFoundError:
        pipeline = wl.build_pipeline(name)
    return pipeline

import string
import random

# create a random 4-character suffix to prevent overwriting other users' workspaces
suffix = ''.join(random.choice(string.ascii_lowercase) for i in range(4))

# left blank here so the tutorial runs in the same hard-coded workspace each time
suffix = ''

workspace_name = f'xgboost-classification{suffix}'
pipeline_name = 'xgboost-classification'

model_name = 'xgboost-classification'
model_file_name = './models/model-auto-conversion_xgboost_xgb_classification_iris.pkl'

Create Workspace and Pipeline

We will now create the Wallaroo workspace to store our model and set it as the current workspace. Future commands will default to this workspace for pipeline creation, model uploads, etc. We’ll create our Wallaroo pipeline to deploy our model.

workspace = get_workspace(workspace_name)
wl.set_current_workspace(workspace)

pipeline = get_pipeline(pipeline_name)

Configure Data Schemas

XGBoost models are uploaded to Wallaroo through the Wallaroo Client upload_model method.

Upload XGBoost Model Parameters

The following parameters are required for XGBoost models. Note that while some fields are considered optional for the upload_model method, they are required for proper uploading of an XGBoost model to Wallaroo.

Parameter | Type | Description
name | string (Required) | The name of the model. Model names are unique per workspace. Models that are uploaded with the same name are assigned as a new version of the model.
path | string (Required) | The path to the model file being uploaded.
framework | string (Upload Method Optional, XGBoost model Required) | Set as Framework.XGBOOST.
input_schema | pyarrow.lib.Schema (Upload Method Optional, XGBoost model Required) | The input schema in Apache Arrow schema format.
output_schema | pyarrow.lib.Schema (Upload Method Optional, XGBoost model Required) | The output schema in Apache Arrow schema format.
convert_wait | bool (Upload Method Optional, XGBoost model Optional) (Default: True) |
  • True: Waits in the script for the model conversion completion.
  • False: Proceeds with the script without waiting for the model conversion process to display complete.

Once the upload process starts, the model is containerized by the Wallaroo instance. This process may take up to 10 minutes.

XGBoost Schema Inputs

XGBoost schema follows a different format than other models. To prevent inputs from being out of order, the inputs should be submitted in a single row in the order the model is trained to accept, with all of the data types being the same. If a model is originally trained to accept inputs of different data types, it will need to be retrained to only accept one data type for each column - typically pa.float64() is a good choice.

For example, the following DataFrame has 4 columns, each column a float.

 | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm)
0 | 5.1 | 3.5 | 1.4 | 0.2
1 | 4.9 | 3.0 | 1.4 | 0.2

For submission to an XGBoost model, the data input schema will be a single array with 4 float values.

input_schema = pa.schema([
    pa.field('inputs', pa.list_(pa.float64(), list_size=4))
])

When submitting as an inference, the DataFrame is converted to rows with the column data expressed as a single array. The data must be in the same order as the model expects, which is why the data is submitted as a single array rather than JSON labeled columns: this ensures that the data is submitted in the exact order the model was trained to accept.

Original DataFrame:

 | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm)
0 | 5.1 | 3.5 | 1.4 | 0.2
1 | 4.9 | 3.0 | 1.4 | 0.2

Converted DataFrame:

 | inputs
0 | [5.1, 3.5, 1.4, 0.2]
1 | [4.9, 3.0, 1.4, 0.2]

XGBoost Schema Outputs

Outputs for XGBoost are labeled based on the trained model outputs. For this example, the output is simply a single output listed as output. In the Wallaroo inference result, it is grouped under the out metadata as out.output.

output_schema = pa.schema([
    pa.field('output', pa.int32())
])
pipeline.infer(dataframe)
 | time | in.inputs | out.output | check_failures
0 | 2023-07-05 15:11:29.776 | [5.1, 3.5, 1.4, 0.2] | 0 | 0
1 | 2023-07-05 15:11:29.776 | [4.9, 3.0, 1.4, 0.2] | 0 | 0

For this tutorial, the input and output schemas are set as follows:

input_schema = pa.schema([
    pa.field('inputs', pa.list_(pa.float64(), list_size=4))
])

output_schema = pa.schema([
    pa.field('output', pa.float64())
])

Upload Model

The model will be uploaded with the framework set as Framework.XGBOOST.

model = wl.upload_model(model_name, 
                        model_file_name, 
                        framework=Framework.XGBOOST, 
                        input_schema=input_schema, 
                        output_schema=output_schema)
model
Waiting for model loading - this will take up to 10.0min.
Model is pending loading to a native runtime..
Model is attempting loading to a native runtime..incompatible

Model is pending loading to a container runtime.
Model is attempting loading to a container runtime............successful

Ready
Name | xgboost-classification
Version | 7d116e96-1b77-44b4-b689-3355263ac042
File Name | model-auto-conversion_xgboost_xgb_classification_iris.pkl
SHA | 4a1844c460e8c8503207305fb807e3a28e788062588925021807c54ee80cc7f9
Status | ready
Image Path | proxy.replicated.com/proxy/wallaroo/ghcr.io/wallaroolabs/mlflow-deploy:v2023.4.0-main-4005
Architecture | None
Updated At | 2023-20-Oct 21:23:59
model.config().runtime()
'flight'

Deploy Pipeline

The model is uploaded and ready for use. We’ll add it as a step in our pipeline, then deploy the pipeline. For this example we allocate 0.25 CPU and 1 Gi RAM to the pipeline through the pipeline’s deployment configuration.

deployment_config = DeploymentConfigBuilder() \
    .cpus(0.25).memory('1Gi') \
    .build()
# clear the pipeline if it was used before
pipeline.undeploy()
pipeline.clear()

pipeline.add_model_step(model)

pipeline.deploy(deployment_config=deployment_config)

Run Inference

A sample inference will be run. First the pandas DataFrame used for the inference is created, then the inference is run through the pipeline’s infer method.

data = pd.read_json('data/test-xgboost-classification-data.json')
display(data)

dataframe = pd.DataFrame({"inputs": data[:2].values.tolist()})
display(dataframe)

results = pipeline.infer(dataframe)
display(results)
 | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm)
0 | 5.1 | 3.5 | 1.4 | 0.2
1 | 4.9 | 3.0 | 1.4 | 0.2

 | inputs
0 | [5.1, 3.5, 1.4, 0.2]
1 | [4.9, 3.0, 1.4, 0.2]

 | time | in.inputs | out.output | check_failures
0 | 2023-10-20 21:25:56.406 | [5.1, 3.5, 1.4, 0.2] | 0 | 0
1 | 2023-10-20 21:25:56.406 | [4.9, 3.0, 1.4, 0.2] | 0 | 0

Undeploy Pipelines

With the tutorial complete, the pipeline is undeployed to return the resources back to the cluster.

pipeline.undeploy()
Waiting for undeployment - this will take up to 45s ..................................... ok
name | xgboost-classification
created | 2023-10-20 21:17:45.439768+00:00
last_updated | 2023-10-20 21:25:04.938567+00:00
deployed | False
tags |
versions | c3659936-2f53-46f3-85b7-22563d396e95, 00973faf-6767-4584-9dbf-b0efbb55ae52
steps | xgboost-classification
published | False

2.9.3 - Wallaroo SDK Upload Tutorial: XGBoost Regressor

How to upload an XGBoost Regressor model to Wallaroo

This tutorial can be downloaded as part of the Wallaroo Tutorials repository.

Wallaroo Model Upload via the Wallaroo SDK: XGBoost Regressor

The following tutorial demonstrates how to upload an XGBoost Regressor model to a Wallaroo instance.

Tutorial Goals

Demonstrate the following:

  • Upload an XGBoost Regressor model to a Wallaroo instance.
  • Create a pipeline and add the model as a pipeline step.
  • Perform a sample inference.

Prerequisites

  • A Wallaroo instance version 2023.2.1 or above.

References

Tutorial Steps

Import Libraries

The first step is to import the libraries we’ll be using. These are included by default in the Wallaroo instance’s JupyterHub service.

import json
import os
import pickle

import wallaroo
from wallaroo.pipeline import Pipeline
from wallaroo.deployment_config import DeploymentConfigBuilder
from wallaroo.object import EntityNotFoundError
from wallaroo.framework import Framework

os.environ["MODELS_ENABLED"] = "true"

import pyarrow as pa
import numpy as np
import pandas as pd

Open a Connection to Wallaroo

The next step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). If logging in externally, update the wallarooPrefix and wallarooSuffix variables with the proper DNS information. For more information on Wallaroo DNS settings, see the Wallaroo DNS Integration Guide.

wl = wallaroo.Client()

Set Variables and Helper Functions

We’ll set the names of our workspace, pipeline, models and files. Workspace names must be unique across the Wallaroo instance. For this, we’ll append a randomly generated 4-character suffix to the workspace name to prevent collisions with other users’ workspaces. If running this tutorial, we recommend hard coding the workspace name so it will function in the same workspace each time it’s run.

We’ll set up some helper functions that will either use existing workspaces and pipelines, or create them if they do not already exist.

def get_workspace(name):
    workspace = None
    for ws in wl.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = wl.create_workspace(name)
    return workspace

def get_pipeline(name):
    try:
        pipeline = wl.pipelines_by_name(name)[0]
    except EntityNotFoundError:
        pipeline = wl.build_pipeline(name)
    return pipeline

import string
import random

# create a random 4-character suffix to prevent overwriting other users' workspaces
suffix = ''.join(random.choice(string.ascii_lowercase) for i in range(4))

# left blank here so the tutorial runs in the same hard-coded workspace each time
suffix = ''

workspace_name = f'xgboost-regressor{suffix}'
pipeline_name = 'xgboost-regressor'

model_name = 'xgboost-regressor'
model_file_name = 'models/model-auto-conversion_xgboost_xgb_regressor_diabetes.pkl'

Create Workspace and Pipeline

We will now create the Wallaroo workspace to store our model and set it as the current workspace. Future commands will default to this workspace for pipeline creation, model uploads, etc. We’ll create our Wallaroo pipeline to deploy our model.

workspace = get_workspace(workspace_name)
wl.set_current_workspace(workspace)

pipeline = get_pipeline(pipeline_name)

Configure Data Schemas

XGBoost models are uploaded to Wallaroo through the Wallaroo Client upload_model method.

Upload XGBoost Model Parameters

The following parameters are required for XGBoost models. Note that while some fields are considered optional for the upload_model method, they are required for proper uploading of an XGBoost model to Wallaroo.

Parameter | Type | Description
name | string (Required) | The name of the model. Model names are unique per workspace. Models that are uploaded with the same name are assigned as a new version of the model.
path | string (Required) | The path to the model file being uploaded.
framework | string (Upload Method Optional, XGBoost model Required) | Set as Framework.XGBOOST.
input_schema | pyarrow.lib.Schema (Upload Method Optional, XGBoost model Required) | The input schema in Apache Arrow schema format.
output_schema | pyarrow.lib.Schema (Upload Method Optional, XGBoost model Required) | The output schema in Apache Arrow schema format.
convert_wait | bool (Upload Method Optional, XGBoost model Optional) (Default: True) |
  • True: Waits in the script for the model conversion completion.
  • False: Proceeds with the script without waiting for the model conversion process to display complete.

Once the upload process starts, the model is containerized by the Wallaroo instance. This process may take up to 10 minutes.

XGBoost Schema Inputs

XGBoost schema follows a different format than other models. To prevent inputs from being out of order, the inputs should be submitted in a single row in the order the model is trained to accept, with all of the data types being the same. If a model is originally trained to accept inputs of different data types, it will need to be retrained to only accept one data type for each column - typically pa.float64() is a good choice.

For example, the following DataFrame has 4 columns, each column a float.

 | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm)
0 | 5.1 | 3.5 | 1.4 | 0.2
1 | 4.9 | 3.0 | 1.4 | 0.2

For submission to an XGBoost model, the data input schema will be a single array with 4 float values.

input_schema = pa.schema([
    pa.field('inputs', pa.list_(pa.float64(), list_size=4))
])

When submitting as an inference, the DataFrame is converted to rows with the column data expressed as a single array. The data must be in the same order as the model expects, which is why the data is submitted as a single array rather than JSON labeled columns: this ensures that the data is submitted in the exact order the model was trained to accept.

Original DataFrame:

 | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm)
0 | 5.1 | 3.5 | 1.4 | 0.2
1 | 4.9 | 3.0 | 1.4 | 0.2

Converted DataFrame:

 | inputs
0 | [5.1, 3.5, 1.4, 0.2]
1 | [4.9, 3.0, 1.4, 0.2]

XGBoost Schema Outputs

Outputs for XGBoost are labeled based on the trained model outputs. For this example, the output is simply a single output listed as output. In the Wallaroo inference result, it is grouped under the out metadata as out.output.

output_schema = pa.schema([
    pa.field('output', pa.int32())
])
pipeline.infer(dataframe)
 | time | in.inputs | out.output | check_failures
0 | 2023-07-05 15:11:29.776 | [5.1, 3.5, 1.4, 0.2] | 0 | 0
1 | 2023-07-05 15:11:29.776 | [4.9, 3.0, 1.4, 0.2] | 0 | 0

For this tutorial, the input and output schemas are set as follows:

input_schema = pa.schema([
    pa.field('inputs', pa.list_(pa.float64(), list_size=10))
])

output_schema = pa.schema([
    pa.field('output', pa.float64())
])

Upload Model

The model will be uploaded with the framework set as Framework.XGBOOST.

model = wl.upload_model(model_name, 
                        model_file_name, 
                        framework=Framework.XGBOOST, 
                        input_schema=input_schema, 
                        output_schema=output_schema)
model
Waiting for model loading - this will take up to 10.0min.
Model is pending loading to a native runtime...........
Model is pending loading to a container runtime......
Model is attempting loading to a container runtime................successful

Ready
Name | xgboost-regressor
Version | c7db517c-0e25-49ac-9189-da8dba06b929
File Name | model-auto-conversion_xgboost_xgb_regressor_diabetes.pkl
SHA | 17e2e4e635b287f1234ed7c59a8447faebf4d69d7974749113233d0007b08e29
Status | ready
Image Path | proxy.replicated.com/proxy/wallaroo/ghcr.io/wallaroolabs/mlflow-deploy:v2023.4.0-main-4005
Architecture | None
Updated At | 2023-20-Oct 21:20:47
model.config().runtime()
'flight'

Deploy Pipeline

The model is uploaded and ready for use. We’ll add it as a step in our pipeline, then deploy the pipeline. For this example we allocate 0.25 CPU and 1 Gi RAM to the pipeline through the pipeline’s deployment configuration.

deployment_config = DeploymentConfigBuilder() \
    .cpus(0.25).memory('1Gi') \
    .build()
# clear the pipeline if it was used before
pipeline.undeploy()
pipeline.clear()

pipeline.add_model_step(model)

pipeline.deploy(deployment_config=deployment_config)

Run Inference

A sample inference will be run. First the pandas DataFrame used for the inference is created, then the inference is run through the pipeline’s infer method.

data = pd.read_json('./data/test_xgb_regressor.json')
display(data)

dataframe = pd.DataFrame({"inputs": data[:2].values.tolist()})
display(dataframe)

results = pipeline.infer(dataframe)
display(results)
 | age | sex | bmi | bp | s1 | s2 | s3 | s4 | s5 | s6
0 | 0.038076 | 0.050680 | 0.061696 | 0.021872 | -0.044223 | -0.034821 | -0.043401 | -0.002592 | 0.019907 | -0.017646
1 | -0.001882 | -0.044642 | -0.051474 | -0.026328 | -0.008449 | -0.019163 | 0.074412 | -0.039493 | -0.068332 | -0.092204

 | inputs
0 | [0.0380759064, 0.0506801187, 0.0616962065, 0.0...
1 | [-0.0018820165, -0.0446416365, -0.051474061200...

 | time | in.inputs | out.output | check_failures
0 | 2023-10-20 21:21:57.555 | [0.0380759064, 0.0506801187, 0.0616962065, 0.0... | 151.00136 | 0
1 | 2023-10-20 21:21:57.555 | [-0.0018820165, -0.0446416365, -0.0514740612, ... | 74.99957 | 0

Undeploy Pipelines

With the tutorial complete, the pipeline is undeployed to return the resources back to the cluster.

pipeline.undeploy()
Waiting for undeployment - this will take up to 45s .................................... ok
name | xgboost-regressor
created | 2023-10-20 21:17:53.229541+00:00
last_updated | 2023-10-20 21:20:48.730760+00:00
deployed | False
tags |
versions | 09ccbdb0-20a6-4952-9976-f6cbaeeaf12b, 3a7a1aa3-0c22-4839-a1a4-9739e3e187e3
steps | xgboost-regressor
published | False

2.9.4 - Wallaroo SDK Upload Tutorial: XGBoost RF Classification

How to upload an XGBoost RF Classification model to Wallaroo

This tutorial can be downloaded as part of the Wallaroo Tutorials repository.

Wallaroo Model Upload via the Wallaroo SDK: XGBoost RF Classification

The following tutorial demonstrates how to upload an XGBoost RF Classification model to a Wallaroo instance.

Tutorial Goals

Demonstrate the following:

  • Upload an XGBoost RF Classification model to a Wallaroo instance.
  • Create a pipeline and add the model as a pipeline step.
  • Perform a sample inference.

Prerequisites

  • A Wallaroo instance version 2023.2.1 or above.

References

Tutorial Steps

Import Libraries

The first step is to import the libraries we’ll be using. These are included by default in the Wallaroo instance’s JupyterHub service.

import json
import os
import pickle

import wallaroo
from wallaroo.pipeline import Pipeline
from wallaroo.deployment_config import DeploymentConfigBuilder
from wallaroo.object import EntityNotFoundError
from wallaroo.framework import Framework

os.environ["MODELS_ENABLED"] = "true"

import pyarrow as pa
import numpy as np
import pandas as pd

Open a Connection to Wallaroo

The next step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). If logging in externally, update the wallarooPrefix and wallarooSuffix variables with the proper DNS information. For more information on Wallaroo DNS settings, see the Wallaroo DNS Integration Guide.

wl = wallaroo.Client()

Set Variables and Helper Functions

We’ll set the names of our workspace, pipeline, models and files. Workspace names must be unique across the Wallaroo instance. For this, we’ll append a randomly generated 4-character suffix to the workspace name to prevent collisions with other users’ workspaces. If running this tutorial, we recommend hard coding the workspace name so it will function in the same workspace each time it’s run.

We’ll set up some helper functions that will either use existing workspaces and pipelines, or create them if they do not already exist.

def get_workspace(name):
    workspace = None
    for ws in wl.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = wl.create_workspace(name)
    return workspace

def get_pipeline(name):
    try:
        pipeline = wl.pipelines_by_name(name)[0]
    except EntityNotFoundError:
        pipeline = wl.build_pipeline(name)
    return pipeline

import string
import random

# create a random 4-character suffix to prevent overwriting other users' workspaces
suffix = ''.join(random.choice(string.ascii_lowercase) for i in range(4))

# left blank here so the tutorial runs in the same hard-coded workspace each time
suffix = ''

workspace_name = f'xgboost-rf-classification{suffix}'
pipeline_name = 'xgboost-rf-classification'

model_name = 'xgboost-rf-classification'
model_file_name = './models/model-auto-conversion_xgboost_xgb_rf_classification_iris.pkl'

Create Workspace and Pipeline

We will now create the Wallaroo workspace to store our model and set it as the current workspace. Future commands will default to this workspace for pipeline creation, model uploads, etc. We’ll create our Wallaroo pipeline to deploy our model.

workspace = get_workspace(workspace_name)
wl.set_current_workspace(workspace)

pipeline = get_pipeline(pipeline_name)

Configure Data Schemas

XGBoost models are uploaded to Wallaroo through the Wallaroo Client upload_model method.

Upload XGBoost Model Parameters

The following parameters are required for XGBoost models. Note that while some fields are considered optional for the upload_model method, they are required for proper uploading of an XGBoost model to Wallaroo.

Parameter | Type | Description
name | string (Required) | The name of the model. Model names are unique per workspace. Models that are uploaded with the same name are assigned as a new version of the model.
path | string (Required) | The path to the model file being uploaded.
framework | string (Upload Method Optional, XGBoost model Required) | Set as Framework.XGBOOST.
input_schema | pyarrow.lib.Schema (Upload Method Optional, XGBoost model Required) | The input schema in Apache Arrow schema format.
output_schema | pyarrow.lib.Schema (Upload Method Optional, XGBoost model Required) | The output schema in Apache Arrow schema format.
convert_wait | bool (Upload Method Optional, XGBoost model Optional) (Default: True) |
  • True: Waits in the script for the model conversion completion.
  • False: Proceeds with the script without waiting for the model conversion process to display complete.

Once the upload process starts, the model is containerized by the Wallaroo instance. This process may take up to 10 minutes.

XGBoost Schema Inputs

XGBoost schema follows a different format than other models. To prevent inputs from being out of order, the inputs should be submitted in a single row in the order the model is trained to accept, with all of the data types being the same. If a model is originally trained to accept inputs of different data types, it will need to be retrained to only accept one data type for each column - typically pa.float64() is a good choice.

For example, the following DataFrame has 4 columns, each column a float.

 | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm)
0 | 5.1 | 3.5 | 1.4 | 0.2
1 | 4.9 | 3.0 | 1.4 | 0.2

For submission to an XGBoost model, the data input schema will be a single array with 4 float values.

input_schema = pa.schema([
    pa.field('inputs', pa.list_(pa.float64(), list_size=4))
])

When submitting as an inference, the DataFrame is converted to rows with the column data expressed as a single array. The data must be in the same order as the model expects, which is why the data is submitted as a single array rather than JSON labeled columns: this ensures that the data is submitted in the exact order the model was trained to accept.

Original DataFrame:

 | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm)
0 | 5.1 | 3.5 | 1.4 | 0.2
1 | 4.9 | 3.0 | 1.4 | 0.2

Converted DataFrame:

 | inputs
0 | [5.1, 3.5, 1.4, 0.2]
1 | [4.9, 3.0, 1.4, 0.2]

XGBoost Schema Outputs

Outputs for XGBoost are labeled based on the trained model outputs. For this example, the output is simply a single output listed as output. In the Wallaroo inference result, it is grouped under the out metadata as out.output.

output_schema = pa.schema([
    pa.field('output', pa.int32())
])
pipeline.infer(dataframe)
 | time | in.inputs | out.output | check_failures
0 | 2023-07-05 15:11:29.776 | [5.1, 3.5, 1.4, 0.2] | 0 | 0
1 | 2023-07-05 15:11:29.776 | [4.9, 3.0, 1.4, 0.2] | 0 | 0

For this tutorial, the input and output schemas are set as follows:

input_schema = pa.schema([
    pa.field('inputs', pa.list_(pa.float64(), list_size=4))
])

output_schema = pa.schema([
    pa.field('output', pa.float64())
])

Upload Model

The model will be uploaded with the framework set as Framework.XGBOOST.

model = wl.upload_model(model_name, 
                        model_file_name, 
                        framework=Framework.XGBOOST, 
                        input_schema=input_schema, 
                        output_schema=output_schema)
model
Waiting for model loading - this will take up to 10.0min.
Model is pending loading to a native runtime.............
Model is attempting loading to a native runtime..incompatible

Model is pending loading to a container runtime.

Ready
Name | xgboost-rf-classification
Version | 4462b788-99b7-4a02-b346-a4ab47293791
File Name | model-auto-conversion_xgboost_xgb_rf_classification_iris.pkl
SHA | 2aeb56c084a279770abdd26d14caba949159698c1a5d260d2aafe73090e6cb03
Status | ready
Image Path | proxy.replicated.com/proxy/wallaroo/ghcr.io/wallaroolabs/mlflow-deploy:v2023.4.0-main-4005
Architecture | None
Updated At | 2023-20-Oct 21:20:57
model.config().runtime()
'flight'

Deploy Pipeline

The model is uploaded and ready for use. We’ll add it as a step in our pipeline, then deploy the pipeline. For this example we allocate 0.25 CPU and 1 Gi RAM to the pipeline through the pipeline’s deployment configuration.

deployment_config = DeploymentConfigBuilder() \
    .cpus(0.25).memory('1Gi') \
    .build()
# clear the pipeline if it was used before
pipeline.undeploy()
pipeline.clear()

pipeline.add_model_step(model)

pipeline.deploy(deployment_config=deployment_config)

Run Inference

A sample inference will be run. First the pandas DataFrame used for the inference is created, then the inference is run through the pipeline’s infer method.

data = pd.read_json('./data/test-xgboost-rf-classification-data.json')
data

dataframe = pd.DataFrame({"inputs": data[:2].values.tolist()})
dataframe

pipeline.infer(dataframe)
 | time | in.inputs | out.output | check_failures
0 | 2023-10-20 21:25:28.029 | [5.1, 3.5, 1.4, 0.2] | 0 | 0
1 | 2023-10-20 21:25:28.029 | [4.9, 3.0, 1.4, 0.2] | 0 | 0

Undeploy Pipelines

With the tutorial complete, the pipeline is undeployed to return the resources back to the cluster.

pipeline.undeploy()
Waiting for undeployment - this will take up to 45s ........................................ ok
name | xgboost-rf-classification
created | 2023-10-20 21:17:57.183979+00:00
last_updated | 2023-10-20 21:22:13.552727+00:00
deployed | False
tags |
versions | cee7509c-3060-423d-933d-f2da5841bef7, a4bc3896-3d85-4dda-ad64-4171cbf2c0b8
steps | xgboost-rf-classification
published | False

2.10 - Model Conversion Tutorials

How to convert ML models into a Wallaroo compatible format.

Wallaroo pipelines support the ONNX standard. The following guides offer tips on converting an ML model to ONNX.

See the Wallaroo ML Models Upload and Registrations tutorials for guides on uploading different model frameworks into a Wallaroo instance.

These sample guides and their machine learning models can be downloaded from the Wallaroo Tutorials Repository.

2.10.1 - PyTorch to ONNX Outside Wallaroo

How to convert PyTorch ML models into the ONNX format.

This tutorial and the assets can be downloaded as part of the Wallaroo Tutorials repository.

How to Convert PyTorch to ONNX

The following tutorial is a brief example of how to convert a PyTorch ML model to ONNX. This allows organizations that have trained PyTorch models to convert them and use them with Wallaroo.

This tutorial assumes that you have a Wallaroo instance and are running this Notebook from the Wallaroo Jupyter Hub service. This sample code is based on the guide Convert your PyTorch model to ONNX.

This tutorial provides the following:

  • pytorchbikeshare.pt: a PyTorch regression model trained on bike share data. This model has a total of 58 inputs, and uses the class BikeShareRegressor.

Prerequisites

  • An installed Wallaroo instance.
  • The following Python libraries installed:

Conversion Process

Libraries

The first step is to import the libraries we will be using. For this example, the PyTorch torch library will be imported into this kernel.

# the PyTorch libraries
# Import into this kernel

import torch
import torch.nn as nn  # needed by the BikeShareRegressor class below
import torch.onnx

Load the Model

To load a PyTorch model into a variable, the model’s class has to be defined. For our example we are using the BikeShareRegressor class as defined below.

class BikeShareRegressor(torch.nn.Module):
    def __init__(self):
        super(BikeShareRegressor, self).__init__()

        # input_size, l1, l2, output_size, and dropout are the hyperparameters
        # used when the model was originally trained, and must be defined
        # before this class is instantiated
        self.net = nn.Sequential(nn.Linear(input_size, l1),
                                 torch.nn.ReLU(),
                                 torch.nn.Dropout(p=dropout),
                                 nn.BatchNorm1d(l1),
                                 nn.Linear(l1, l2),
                                 torch.nn.ReLU(),
                                 torch.nn.Dropout(p=dropout),
                                 nn.BatchNorm1d(l2),
                                 nn.Linear(l2, output_size))

    def forward(self, x):
        return self.net(x)

Now we will load the model into the variable model.

# load the Pytorch model
model = torch.load("./pytorch_bikesharingmodel.pt")

Convert_ONNX Inputs

Now we will define our method Convert_ONNX() which has the following inputs:

  • PyTorchModel: the PyTorch we are converting.

  • modelInputs: the model input or tuple for multiple inputs.

  • onnxPath: The location to save the onnx file.

  • opset_version: The ONNX version to export to.

  • input_names: Array of the model’s input names.

  • output_names: Array of the model’s output names.

  • dynamic_axes: Sets variable length axes in the format, replacing the batch_size as necessary:
    {'modelInput' : { 0 : 'batch_size'}, 'modelOutput' : {0 : 'batch_size'}}

  • export_params: Whether to store the trained parameter weight inside the model file. Defaults to True.

  • do_constant_folding: Sets whether to execute constant folding for optimization. Defaults to True.

#Function to Convert to ONNX 
def Convert_ONNX(): 

    # set the model to inference mode 
    model.eval() 

    # Export the model   
    torch.onnx.export(model,         # model being run 
         dummy_input,       # model input (or a tuple for multiple inputs) 
         pypath,       # where to save the model  
         export_params=True,  # store the trained parameter weights inside the model file 
         opset_version=onnx_opset_version,    # the ONNX version to export the model to (set below)
         do_constant_folding=True,  # whether to execute constant folding for optimization 
         input_names = ['modelInput'],   # the model's input names 
         output_names = ['modelOutput'], # the model's output names 
         dynamic_axes = {'modelInput' : {0 : 'batch_size'}, 'modelOutput' : {0 : 'batch_size'}} # variable length axes 
    ) 
    print(" ") 
    print('Model has been converted to ONNX') 

Convert the Model

We’ll now set our variables and run our conversion. For our example, the input_size is known to be 58, and the device value we’ll derive from torch.cuda. We’ll also set the ONNX version for exporting to 15.

pypath = "pytorchbikeshare.onnx"

input_size = 58

if torch.cuda.is_available():
    device = 'cuda'
else:
    device = 'cpu'

onnx_opset_version = 15

# Set up some dummy input tensor for the model
dummy_input = torch.randn(1, input_size, requires_grad=True).to(device)

Convert_ONNX()
================ Diagnostic Run torch.onnx.export version 2.0.0 ================
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================

Model has been converted to ONNX
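
As an optional check (an addition to this tutorial), the exported file can be validated with the onnx checker before uploading; a minimal sketch, assuming the onnx package is installed and the pypath variable from above:

import onnx

# load the exported file and verify the graph is well formed;
# check_model raises an exception if the model is malformed
onnx_model = onnx.load(pypath)
onnx.checker.check_model(onnx_model)
print("ONNX model check passed")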

Conclusion

And now our conversion is complete. Please feel free to use this sample code in your own projects.

2.10.2 - XGBoost Convert to ONNX

How to convert XGBoost to ONNX using the onnxmltools.convert library

This tutorial and the assets can be downloaded as part of the Wallaroo Tutorials repository.

How to Convert XGBoost to ONNX

The following tutorial is a brief example of how to convert a XGBoost ML model to the ONNX standard. This allows organizations that have trained XGBoost models to convert them and use them with Wallaroo.

This tutorial assumes that you have a Wallaroo instance and are running this Notebook from the Wallaroo Jupyter Hub service.

This tutorial provides the following:

Conversion Process

Libraries

The first step is to import the libraries we will be using.

import onnx
import pickle
from onnxmltools.convert import convert_xgboost

from skl2onnx.common.data_types import FloatTensorType, DoubleTensorType

Set Variables

The following variables are required to be known before the process can be started:

  • number of columns: The number of columns used by the model.
  • TARGET_OPSET: Verify that the TARGET_OPSET value used in the conversion process matches the current Wallaroo model upload requirements.
# set the number of columns
ncols = 18
TARGET_OPSET = 15

Load the XGBoost Model

Next we will load our model that has been saved in the pickle format and unpickle it.

# load the xgboost model
with open("housing_model_xgb.pkl", "rb") as f:
    xgboost_model = pickle.load(f)

Conversion Inputs

The convert_xgboost method has the following format and requires the following inputs:

convert_xgboost({XGBoost Model}, 
                {XGBoost Model Type},
                [
                    ('input', 
                    {Tensor Data Type}([None, {ncols}]))
                ],
                target_opset={TARGET_OPSET})
  1. XGBoost Model: The XGBoost model to convert.
  2. XGBoost Model Type: The type of XGBoost model. In this example it is a tree-based classifier.
  3. Tensor Data Type: Either FloatTensorType or DoubleTensorType from the skl2onnx.common.data_types library.
  4. ncols: Number of columns in the model.
  5. TARGET_OPSET: The target opset, which can be derived in code as shown in the sketch below.
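
A hedged sketch of deriving the target opset programmatically, assuming the onnx package is installed; the result is capped at the opset used in this tutorial (15) to stay within Wallaroo's model upload requirements:

from onnx.defs import onnx_opset_version

# the highest opset the installed onnx package supports,
# capped at the opset required by Wallaroo
TARGET_OPSET = min(onnx_opset_version(), 15)
print(TARGET_OPSET)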

Convert the Model

With all of our data in place we can now convert our XGBoost model to ONNX using the convert_xgboost method.

onnx_model_converted = convert_xgboost(xgboost_model, 'tree-based classifier',
                             [('input', FloatTensorType([None, ncols]))],
                             target_opset=TARGET_OPSET)

Save the Model

With the model converted to ONNX, we can now save it and use it in a Wallaroo pipeline.

onnx.save_model(onnx_model_converted, "housing_model_xgb.onnx")
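
As an optional smoke test (an addition to this tutorial, assuming the onnxruntime package is available), the saved model can be run against a dummy input before uploading:

import numpy as np
import onnxruntime as ort

# run one dummy inference against the converted model; the input name
# 'input' matches the name declared in convert_xgboost above
session = ort.InferenceSession("housing_model_xgb.onnx")
dummy = np.random.rand(1, ncols).astype(np.float32)
print(session.run(None, {"input": dummy}))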

3 - Wallaroo Observability

Tutorials focused on Observability

3.1 - Wallaroo Edge Computer Vision Observability

A demonstration on observability with computer vision deployed on edge devices.

This tutorial and the assets can be downloaded as part of the Wallaroo Tutorials repository.

Computer Vision for Object Detection in Retail

The following tutorial demonstrates using Wallaroo for observability of edge deployments of computer vision models.

Introduction

This tutorial focuses on the resnet50 computer vision model. By default, the model provides the following outputs when it receives an image converted to tensor values:

  • boxes: The bounding boxes for detected objects.
  • classes: The class of the detected object (bottle, coat, person, etc).
  • confidences: The confidence the model has that the detected object is the class.

For this demonstration, the model is modified with Wallaroo Bring Your Own Predict (BYOP) to add two additional fields:

  • avg_px_intensity: the average pixel intensity check examines the input to determine the average value of the Red/Green/Blue pixel values. This is used as a benchmark to determine whether two images are significantly different. For example, an all white photo would have an avg_px_intensity of 1, while an all black photo would have a value of 0.
  • avg_confidence: The average confidence of all detected objects.

This demonstration will use avg_confidence to demonstrate observability in our edge deployed computer vision model.
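
For intuition, a hypothetical sketch (not the BYOP model's actual code) of how the two extra fields could be computed from the normalized image tensor and the detector's confidences:

import numpy as np

def extra_fields(tensor: np.ndarray, confidences: np.ndarray) -> dict:
    # tensor: normalized (3, H, W) image with values in [0, 1] (assumed)
    # confidences: per-detection confidence scores (assumed)
    return {
        "avg_px_intensity": [float(np.mean(tensor))],
        "avg_confidence": [float(np.mean(confidences))],
    }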

Prerequisites

  • A Wallaroo Ops instance version 2023.4 or above with edge deployment enabled (see the Edge Deployment Registry Guide).
  • An x64 edge device with Docker installed, recommended with at least 8 cores.

In order for the Wallaroo tutorial notebooks to run properly, the required models must be available in the models directory.

To download the Wallaroo Computer Vision models, use the following link:

https://storage.googleapis.com/wallaroo-public-data/cv-demo-models/cv-retail-models.zip

Unzip the contents into the directory models.

The following Python libraries are required to run the tutorial:

onnx==1.12.0
onnxruntime==1.12.1
torchvision
torch
matplotlib==3.5.0
opencv-python
imutils
pytz
ipywidgets

To run this tutorial outside of a Wallaroo Ops center, the Wallaroo SDK can be installed via pip with:

pip install wallaroo==2023.4.1

References

Steps

Import Libraries

The following libraries are used to execute this tutorial. The local utils.py module provides additional helper methods for rendering the images into tensor fields and other useful tasks.

# preload needed libraries 

import wallaroo
from wallaroo.object import EntityNotFoundError
from wallaroo.framework import Framework
from IPython.display import display
from IPython.display import Image
import pandas as pd
import json
import datetime
import time
import cv2
import matplotlib.pyplot as plt
import string
import random
import pyarrow as pa
import sys
import asyncio
import numpy as np

import utils
pd.set_option('display.max_colwidth', None)


# api based inference request
import requests

Connect to the Wallaroo Instance

The first step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). For more information on Wallaroo Client settings, see the Client Connection guide.

The option request_timeout provides additional time for the Wallaroo model upload process to complete.

wl = wallaroo.Client(request_timeout=600)

Create Workspace

We will create a workspace to manage our pipeline and models. The following variables will set the name of our sample workspace then set it as the current workspace.

Workspace names must be unique. The following helper function will either create a new workspace, or retrieve an existing one with the same name. Verify that a pre-existing workspace has been shared with the targeted user.

Set the variable workspace_name to ensure a unique workspace name if required.

The workspace will then be set as the Current Workspace. Model uploads and pipeline creation will default to this workspace.

def get_workspace(name, client):
    workspace = None
    for ws in client.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = client.create_workspace(name)
    return workspace

workspace_name = "cv-retail-edge-observability"
model_name = "resnet-with-intensity"
model_file_name = "./models/model-with-pixel-intensity.zip"
pipeline_name = "retail-inv-tracker-edge-obs"

workspace = get_workspace(workspace_name, wl)
wl.set_current_workspace(workspace)
{'name': 'cv-retail-edge-observability', 'id': 8, 'archived': False, 'created_by': 'cc3619bb-2cec-4a44-9333-55a0dc6b3997', 'created_at': '2024-01-10T17:37:38.659309+00:00', 'models': [], 'pipelines': []}

Upload Model

The model is uploaded as a BYOP model, where the model, Python script and other artifacts are included in a .zip file. This requires the input and output schemas for the model specified in Apache Arrow Schema format.

input_schema = pa.schema([
    pa.field('tensor', pa.list_(
        pa.list_(
            pa.list_(
                pa.float32(), # images are normalized
                list_size=640
            ),
            list_size=480
        ),
        list_size=3
    )),
])

output_schema = pa.schema([
    pa.field('boxes', pa.list_(pa.list_(pa.float32(), list_size=4))),
    pa.field('classes', pa.list_(pa.int64())),
    pa.field('confidences', pa.list_(pa.float32())),
    pa.field('avg_px_intensity', pa.list_(pa.float32())),
    pa.field('avg_confidence', pa.list_(pa.float32())),
])

model = wl.upload_model(model_name, 
                        model_file_name, 
                        framework=Framework.CUSTOM,
                        input_schema=input_schema, 
                        output_schema=output_schema)
Waiting for model loading - this will take up to 10.0min.
Model is pending loading to a container runtime......
Model is attempting loading to a container runtime.............successful

Ready

Deploy Pipeline

Next we configure the hardware we want to use for deployment. If we plan on eventually deploying to edge, this is a good way to simulate edge hardware conditions. The BYOP model is deployed as a Wallaroo Containerized Runtime, so the hardware allocation is performed through the sidekick options.

deployment_config = wallaroo.DeploymentConfigBuilder() \
    .replica_count(1) \
    .cpus(1) \
    .memory("2Gi") \
    .sidekick_cpus(model, 1) \
    .sidekick_memory(model, '6Gi') \
    .build()

We create the pipeline with the wallaroo.client.build_pipeline method, and assign our model as a model pipeline step. Once complete, we will deploy the pipeline to allocate resources from the Kubernetes cluster hosting the Wallaroo Ops to the pipeline.

pipeline = wl.build_pipeline(pipeline_name)
pipeline.clear()
pipeline.add_model_step(model)
pipeline.deploy(deployment_config = deployment_config)
name | retail-inv-tracker-edge-obs
created | 2024-01-10 17:50:59.035145+00:00
last_updated | 2024-01-10 17:50:59.700103+00:00
deployed | True
arch | None
tags |
versions | d0c2ed1c-3691-49f1-8fc8-e6510e4c39f8, 773099d3-6d64-4a92-b5a7-f614a916965d
steps | resnet-with-intensity
published | False

Monitoring for Model Drift

For this example, we want to track the average confidence of object predictions and get alerted if we see a drop in confidence.

We will convert a set of images to pandas DataFrames with the images converted to tensor values. The first set of images, baseline_images, are well-formed images. The second set, blurred_images, are the same photos intentionally blurred for our observability demonstration.

baseline_images = [
    "./data/images/input/example/dairy_bottles.png",
    "./data/images/input/example/dairy_products.png",
    "./data/images/input/example/product_cheeses.png"
]
   
blurred_images = [
    "./data/images/input/example/blurred-dairy_bottles.png",
    "./data/images/input/example/blurred-dairy_products.png",
    "./data/images/input/example/blurred-product_cheeses.png"
]

baseline_images_list = utils.processImages(baseline_images)
blurred_images_list = utils.processImages(blurred_images)

The Wallaroo SDK is capable of using numpy arrays in a pandas DataFrame for inference requests. Our demonstration will focus on using API calls for inference requests, so we will flatten the numpy array and use that value for our inference inputs. The following examples show using a baseline and blurred image for inference requests and the sample outputs.

# baseline image
df_test = pd.DataFrame({'tensor': wallaroo.utils.flatten_np_array_columns(baseline_images_list[0], 'tensor')})
df_test
tensor
0[0.9372549, 0.9529412, 0.9490196, 0.94509804, 0.94509804, 0.94509804, 0.94509804, 0.94509804, 0.94509804, 0.94509804, 0.94509804, 0.94509804, 0.94509804, 0.94509804, 0.94509804, 0.94509804, 0.94509804, 0.94509804, 0.9490196, 0.9490196, 0.9529412, 0.9529412, 0.9490196, 0.9607843, 0.96862745, 0.9647059, 0.96862745, 0.9647059, 0.95686275, 0.9607843, 0.9647059, 0.9647059, 0.9607843, 0.9647059, 0.972549, 0.95686275, 0.9607843, 0.91764706, 0.95686275, 0.91764706, 0.8784314, 0.89411765, 0.84313726, 0.8784314, 0.8627451, 0.8509804, 0.9254902, 0.84705883, 0.96862745, 0.89411765, 0.81960785, 0.8509804, 0.92941177, 0.8666667, 0.8784314, 0.8666667, 0.9647059, 0.9764706, 0.98039216, 0.9764706, 0.972549, 0.972549, 0.972549, 0.972549, 0.972549, 0.972549, 0.98039216, 0.89411765, 0.48235294, 0.4627451, 0.43137255, 0.27058825, 0.25882354, 0.29411766, 0.34509805, 0.36862746, 0.4117647, 0.45490196, 0.4862745, 0.5254902, 0.56078434, 0.6039216, 0.64705884, 0.6862745, 0.72156864, 0.74509805, 0.7490196, 0.7882353, 0.8666667, 0.98039216, 0.9882353, 0.96862745, 0.9647059, 0.96862745, 0.972549, 0.9647059, 0.9607843, 0.9607843, 0.9607843, 0.9607843, ...]
result = pipeline.infer(df_test, 
                        dataset=['time', 'out.avg_confidence','out.avg_px_intensity','out.boxes','out.classes','out.confidences','check_failures','metadata'])
result
time | out.avg_confidence | out.avg_px_intensity | out.boxes | out.classes | out.confidences | metadata.partition
02024-01-10 17:53:50.655[0.35880384][0.425382][[2.1511102, 193.98323, 76.26535, 475.40292], [610.82245, 98.606316, 639.8868, 232.27054], [544.2867, 98.726524, 581.28845, 230.20497], [454.99344, 113.08567, 484.78464, 210.1282], [502.58884, 331.87665, 551.2269, 476.49182], [538.54254, 292.1205, 587.46545, 468.1288], [578.5417, 99.70756, 617.2247, 233.57082], [548.552, 191.84564, 577.30585, 238.47737], [459.83328, 344.29712, 505.42633, 456.7118], [483.47168, 110.56585, 514.0936, 205.00156], [262.1222, 190.36658, 323.4903, 405.20584], [511.6675, 104.53834, 547.01715, 228.23663], [75.39197, 205.62312, 168.49893, 453.44086], [362.50656, 173.16858, 398.66956, 371.8243], [490.42468, 337.627, 534.1234, 461.0242], [351.3856, 169.14897, 390.7583, 244.06992], [525.19824, 291.73895, 570.5553, 417.6439], [563.4224, 285.3889, 609.30853, 452.25943], [425.57935, 366.24915, 480.63535, 474.54], [154.538, 198.0377, 227.64284, 439.84412], [597.02893, 273.60458, 637.2067, 439.03214], [473.88763, 293.41992, 519.7537, 349.2304], [262.77597, 192.03581, 313.3096, 258.3466], [521.1493, 152.89026, 534.8596, 246.52365], [389.89633, 178.07867, 431.87555, 360.59323], [215.99901, 179.52965, 280.2847, 421.9092], [523.6454, 310.7387, 560.36487, 473.57968], [151.7131, 191.41077, 228.71013, 443.32187], [0.50784916, 14.856098, 504.5198, 405.7277], [443.83676, 340.1249, 532.83734, 475.77713], [472.37848, 329.13092, 494.03647, 352.5906], [572.41455, 286.26132, 601.8677, 384.5899], [532.7721, 189.89102, 551.9026, 241.76051], [564.0309, 105.75121, 597.0351, 225.32579], [551.25854, 287.16034, 590.9206, 405.71548], [70.46805, 0.39822236, 92.786545, 84.40113], [349.44534, 3.6184297, 392.61484, 98.43363], [64.40484, 215.14934, 104.09457, 436.50797], [615.1218, 269.46683, 633.30853, 306.03452], [238.31851, 0.73957217, 290.2898, 91.30623], [449.37347, 337.39554, 480.13208, 369.35126], [74.95623, 191.84235, 164.21284, 457.00143], [391.96646, 6.255016, 429.23056, 100.72327], [597.48663, 276.6981, 618.0615, 298.62778], [384.51172, 171.95828, 407.01276, 205.28722], [341.5734, 179.8058, 365.88345, 208.57889], [555.0278, 288.62695, 582.6162, 358.0912], [615.92035, 264.9265, 632.3316, 280.2552], [297.95154, 0.5227982, 347.18744, 95.13106], [311.64868, 203.67934, 369.6169, 392.58063], [163.10356, 0.0, 227.67468, 86.49685], [68.518974, 1.870926, 161.25877, 82.89816], [593.6093, 103.263596, 617.124, 200.96545], [263.3115, 200.12204, 275.699, 234.26517], [592.2285, 279.66064, 619.70496, 379.14764], [597.7549, 269.6543, 618.5473, 286.25214], [478.04306, 204.36165, 530.00745, 239.35196], [501.34528, 289.28003, 525.766, 333.00262], [462.4777, 336.92056, 491.32016, 358.18915], [254.40384, 203.89566, 273.66177, 237.13142], [307.96042, 154.70946, 440.4545, 386.88065], [195.53915, 187.13596, 367.6518, 404.91095], [77.52113, 2.9323518, 160.15236, 81.59642], [577.648, 283.4004, 601.4359, 307.41882], [516.63873, 129.74504, 540.40936, 242.17572], [543.2536, 253.38689, 631.3576, 466.62567], [271.1358, 45.97066, 640.0, 456.88235], [568.77203, 188.66449, 595.866, 235.05397], [400.997, 169.88531, 419.35645, 185.80077], [473.9581, 328.07736, 493.2316, 339.41437], [602.9851, 376.39505, 633.74243, 438.61758], [480.95963, 117.62996, 525.08826, 242.74872], [215.71178, 194.87447, 276.59906, 414.3832], [462.2966, 331.94638, 493.14175, 347.97647], [543.94977, 196.90623, 556.04315, 238.99141], [386.8205, 446.6134, 428.81064, 480.00003], [152.22516, 0.0, 226.98564, 85.44817], [616.91815, 246.15547, 630.8656, 273.48447], [576.33356, 100.67465, 601.9629, 183.61014], 
[260.99304, 202.73479, 319.40524, 304.172], [333.5814, 169.75925, 390.9659, 278.09406], [286.26767, 3.700516, 319.3962, 88.77158], [532.5545, 209.36183, 552.58417, 240.08965], [572.99915, 207.65208, 595.62415, 235.46635], [6.2744417, 0.5994095, 96.04993, 81.27411], [203.32298, 188.94266, 227.27435, 254.29417], [513.7689, 154.48575, 531.3786, 245.21616], [547.53815, 369.9145, 583.13916, 470.30206], [316.64532, 252.80664, 336.65454, 272.82834], [234.79965, 0.0003570557, 298.05707, 92.014496], [523.0663, 195.29694, 534.485, 242.23671], [467.7621, 202.35623, 536.2362, 245.53342], [146.07208, 0.9752747, 178.93895, 83.210175], [18.780663, 1.0261598, 249.98721, 88.49467], [600.9064, 270.41983, 625.3632, 297.94083], [556.40955, 287.84702, 580.2647, 322.64728], [80.6706, 161.22787, 638.6261, 480.00003], [455.98315, 334.6629, 480.16724, 349.97784], [613.9905, 267.98065, 635.95807, 375.1775], [345.75964, 2.459102, 395.91183, 99.68698]][44, 44, 44, 44, 44, 44, 44, 44, 44, 44, 44, 44, 44, 44, 44, 44, 44, 44, 44, 44, 44, 44, 44, 44, 44, 44, 44, 86, 82, 44, 44, 44, 44, 44, 44, 84, 84, 44, 44, 44, 44, 86, 84, 44, 44, 44, 44, 44, 84, 44, 44, 84, 44, 44, 44, 44, 51, 44, 44, 44, 44, 44, 44, 44, 44, 44, 82, 44, 44, 44, 44, 44, 86, 44, 44, 1, 84, 44, 44, 44, 44, 84, 47, 47, 84, 14, 44, 44, 53, 84, 47, 47, 44, 84, 44, 44, 82, 44, 44, 44][0.9965358, 0.9883404, 0.9700247, 0.9696426, 0.96478045, 0.96037567, 0.9542889, 0.9467539, 0.946524, 0.94484967, 0.93611854, 0.91653466, 0.9133634, 0.8874814, 0.84405905, 0.825526, 0.82326967, 0.81740034, 0.7956525, 0.78669065, 0.7731486, 0.75193685, 0.7360918, 0.7009186, 0.6932351, 0.65077204, 0.63243586, 0.57877576, 0.5023476, 0.50163734, 0.44628552, 0.42804396, 0.4253787, 0.39086252, 0.36836442, 0.3473236, 0.32950658, 0.3105372, 0.29076362, 0.28558296, 0.26680034, 0.26302803, 0.25444376, 0.24568668, 0.2353662, 0.23321979, 0.22612995, 0.22483191, 0.22332378, 0.21442991, 0.20122288, 0.19754867, 0.19439234, 0.19083925, 0.1871393, 0.17646024, 0.16628945, 0.16326219, 0.14825206, 0.13694529, 0.12920643, 0.12815322, 0.122357555, 0.121289656, 0.116281696, 0.11498632, 0.111848116, 0.11016138, 0.1095062, 0.1039151, 0.10385688, 0.097573474, 0.09632071, 0.09557622, 0.091599144, 0.09062039, 0.08262358, 0.08223499, 0.07993951, 0.07989185, 0.078758545, 0.078201495, 0.07737936, 0.07690251, 0.07593444, 0.07503418, 0.07482597, 0.068981, 0.06841128, 0.06764157, 0.065750405, 0.064908616, 0.061884128, 0.06010121, 0.0578873, 0.05717648, 0.056616478, 0.056017116, 0.05458274, 0.053669468]engine-fc59fccbc-bs2sg
# example of an inference from a bad photo

df_test = pd.DataFrame({'tensor': wallaroo.utils.flatten_np_array_columns(blurred_images_list[0], 'tensor')})
result = pipeline.infer(df_test, dataset=['time', 'out.avg_confidence','out.avg_px_intensity','out.boxes','out.classes','out.confidences','check_failures','metadata'])
result
time | out.avg_confidence | out.avg_px_intensity | out.boxes | out.classes | out.confidences | metadata.partition
02024-01-10 17:53:59.016[0.29440108][0.42540666][[1.104647, 203.48311, 81.29011, 472.4321], [67.34002, 195.64558, 163.41652, 470.1668], [218.88916, 180.21216, 281.3725, 422.2332], [156.47955, 189.82559, 227.1866, 443.35718], [393.81195, 172.34473, 434.30322, 363.96057], [266.56137, 201.46182, 326.2503, 406.3162], [542.941, 99.440956, 588.42365, 229.33481], [426.12668, 260.50723, 638.6193, 476.10742], [511.26102, 106.84715, 546.8103, 243.0127], [0.0, 68.56848, 482.48538, 472.53766], [347.34027, 0.0, 401.10968, 97.51968], [289.03827, 0.32189485, 347.78888, 93.458755], [91.05826, 183.34473, 207.86084, 469.46518], [613.1202, 102.11072, 639.4794, 228.7474], [369.04257, 177.80518, 419.34775, 371.53873], [512.19727, 92.89032, 548.08636, 239.37686], [458.50125, 115.80958, 485.57538, 236.75961], [571.35834, 102.115395, 620.06, 230.47636], [481.23752, 105.288246, 516.5597, 246.37486], [74.288246, 0.4324219, 162.55719, 80.09118], [566.6188, 102.72982, 623.63257, 226.31448], [14.5338335, 0.0, 410.35077, 100.371155], [67.72321, 186.76591, 144.67079, 272.91965], [171.88432, 1.3620621, 220.8489, 82.6873], [455.16003, 109.83146, 486.36246, 243.25917], [320.3717, 211.61632, 373.62762, 397.29614], [476.53476, 105.55374, 517.4519, 240.22443], [530.3071, 97.575066, 617.83466, 235.27464], [146.26923, 184.24777, 186.76619, 459.51907], [610.5376, 99.28521, 638.6954, 235.62247], [316.39325, 194.10446, 375.8869, 401.48578], [540.51245, 105.909325, 584.589, 239.12834], [460.5496, 313.47333, 536.9969, 447.4658], [222.15643, 206.45018, 282.35947, 423.1165], [80.06503, 0.0, 157.40846, 79.61287], [396.70865, 235.83214, 638.7461, 473.58328], [494.6364, 115.012085, 520.81445, 241.98145], [432.90045, 145.19109, 464.7877, 264.47726], [200.3818, 181.47552, 232.85869, 429.13736], [50.631256, 161.2574, 321.71106, 465.66733], [545.57556, 106.189095, 593.3653, 227.64984], [338.0726, 1.0913361, 413.84973, 101.5233], [364.8136, 178.95511, 410.21368, 373.15686], [392.6712, 173.77844, 434.40182, 370.18982], [361.36926, 175.07799, 397.51382, 371.78812], [158.44263, 182.24762, 228.91519, 445.61328], [282.683, 0.0, 348.24307, 92.91383], [0.0, 194.40187, 640.0, 474.7329], [276.38458, 260.773, 326.8054, 407.18048], [528.4028, 105.582886, 561.3014, 239.953], [506.40353, 115.89468, 526.7106, 233.26082], [20.692535, 4.8851624, 441.1723, 215.57448], [193.52037, 188.48592, 329.2185, 428.5391], [1.6791562, 122.02866, 481.69287, 463.82855], [255.57025, 0.0, 396.8555, 100.11973], [457.83475, 91.354, 534.8592, 250.44174], [313.2646, 156.99405, 445.05853, 389.01157], [344.55948, 0.0, 370.23212, 94.05032], [24.93765, 11.427448, 439.70956, 184.92136], [433.3421, 132.6041, 471.16473, 259.3983]][44, 44, 44, 44, 44, 44, 44, 61, 90, 82, 44, 84, 44, 47, 44, 44, 90, 44, 44, 84, 47, 84, 44, 84, 44, 84, 90, 44, 44, 44, 44, 90, 61, 84, 44, 67, 90, 44, 44, 44, 47, 84, 84, 84, 44, 86, 44, 67, 84, 90, 90, 82, 44, 78, 84, 44, 44, 44, 78, 84][0.99679935, 0.9928388, 0.95979476, 0.94534546, 0.76680815, 0.7245405, 0.6529537, 0.6196737, 0.61694986, 0.6146526, 0.52818304, 0.51962215, 0.51650614, 0.50039023, 0.48194215, 0.48113948, 0.4220569, 0.35743266, 0.3185851, 0.31218198, 0.3114053, 0.29015902, 0.2836629, 0.24364658, 0.23470096, 0.23113059, 0.20228004, 0.19990075, 0.19283496, 0.18304716, 0.17492934, 0.16523221, 0.1606256, 0.15927774, 0.14796422, 0.1388699, 0.1340389, 0.13308196, 0.11703869, 0.10279331, 0.10200763, 0.0987304, 0.09823867, 0.09219642, 0.09162199, 0.088787705, 0.08765345, 0.080090344, 0.07868707, 0.07560313, 0.07533865, 0.07433937, 0.07159829, 
0.069288105, 0.065867245, 0.06332389, 0.057103153, 0.05622299, 0.052092217, 0.05025773]engine-fc59fccbc-bs2sg

Inference Request via API

The following code performs the same inference request through the pipeline's inference URL. This demonstrates how the API inference result appears, which the later examples build on.

headers = wl.auth.auth_header()

headers['Content-Type'] = 'application/json; format=pandas-records'

deploy_url = pipeline._deployment._url()

response = requests.post(
                    deploy_url, 
                    headers=headers, 
                    data=df_test.to_json(orient="records")
                )

display(pd.DataFrame(response.json()))
time | out | metadata
01704909276772{'avg_confidence': [0.29440108], 'avg_px_intensity': [0.42540666], 'boxes': [[1.104647, 203.48311, 81.29011, 472.4321], [67.34002, 195.64558, 163.41652, 470.1668], [218.88916, 180.21216, 281.3725, 422.2332], [156.47955, 189.82559, 227.1866, 443.35718], [393.81195, 172.34473, 434.30322, 363.96057], [266.56137, 201.46182, 326.2503, 406.3162], [542.941, 99.440956, 588.42365, 229.33481], [426.12668, 260.50723, 638.6193, 476.10742], [511.26102, 106.84715, 546.8103, 243.0127], [0.0, 68.56848, 482.48538, 472.53766], [347.34027, 0.0, 401.10968, 97.51968], [289.03827, 0.32189485, 347.78888, 93.458755], [91.05826, 183.34473, 207.86084, 469.46518], [613.1202, 102.11072, 639.4794, 228.7474], [369.04257, 177.80518, 419.34775, 371.53873], [512.19727, 92.89032, 548.08636, 239.37686], [458.50125, 115.80958, 485.57538, 236.75961], [571.35834, 102.115395, 620.06, 230.47636], [481.23752, 105.288246, 516.5597, 246.37486], [74.288246, 0.4324219, 162.55719, 80.09118], [566.6188, 102.72982, 623.63257, 226.31448], [14.5338335, 0.0, 410.35077, 100.371155], [67.72321, 186.76591, 144.67079, 272.91965], [171.88432, 1.3620621, 220.8489, 82.6873], [455.16003, 109.83146, 486.36246, 243.25917], [320.3717, 211.61632, 373.62762, 397.29614], [476.53476, 105.55374, 517.4519, 240.22443], [530.3071, 97.575066, 617.83466, 235.27464], [146.26923, 184.24777, 186.76619, 459.51907], [610.5376, 99.28521, 638.6954, 235.62247], [316.39325, 194.10446, 375.8869, 401.48578], [540.51245, 105.909325, 584.589, 239.12834], [460.5496, 313.47333, 536.9969, 447.4658], [222.15643, 206.45018, 282.35947, 423.1165], [80.06503, 0.0, 157.40846, 79.61287], [396.70865, 235.83214, 638.7461, 473.58328], [494.6364, 115.012085, 520.81445, 241.98145], [432.90045, 145.19109, 464.7877, 264.47726], [200.3818, 181.47552, 232.85869, 429.13736], [50.631256, 161.2574, 321.71106, 465.66733], [545.57556, 106.189095, 593.3653, 227.64984], [338.0726, 1.0913361, 413.84973, 101.5233], [364.8136, 178.95511, 410.21368, 373.15686], [392.6712, 173.77844, 434.40182, 370.18982], [361.36926, 175.07799, 397.51382, 371.78812], [158.44263, 182.24762, 228.91519, 445.61328], [282.683, 0.0, 348.24307, 92.91383], [0.0, 194.40187, 640.0, 474.7329], [276.38458, 260.773, 326.8054, 407.18048], [528.4028, 105.582886, 561.3014, 239.953], [506.40353, 115.89468, 526.7106, 233.26082], [20.692535, 4.8851624, 441.1723, 215.57448], [193.52037, 188.48592, 329.2185, 428.5391], [1.6791562, 122.02866, 481.69287, 463.82855], [255.57025, 0.0, 396.8555, 100.11973], [457.83475, 91.354, 534.8592, 250.44174], [313.2646, 156.99405, 445.05853, 389.01157], [344.55948, 0.0, 370.23212, 94.05032], [24.93765, 11.427448, 439.70956, 184.92136], [433.3421, 132.6041, 471.16473, 259.3983]], 'classes': [44, 44, 44, 44, 44, 44, 44, 61, 90, 82, 44, 84, 44, 47, 44, 44, 90, 44, 44, 84, 47, 84, 44, 84, 44, 84, 90, 44, 44, 44, 44, 90, 61, 84, 44, 67, 90, 44, 44, 44, 47, 84, 84, 84, 44, 86, 44, 67, 84, 90, 90, 82, 44, 78, 84, 44, 44, 44, 78, 84], 'confidences': [0.99679935, 0.9928388, 0.95979476, 0.94534546, 0.76680815, 0.7245405, 0.6529537, 0.6196737, 0.61694986, 0.6146526, 0.52818304, 0.51962215, 0.51650614, 0.50039023, 0.48194215, 0.48113948, 0.4220569, 0.35743266, 0.3185851, 0.31218198, 0.3114053, 0.29015902, 0.2836629, 0.24364658, 0.23470096, 0.23113059, 0.20228004, 0.19990075, 0.19283496, 0.18304716, 0.17492934, 0.16523221, 0.1606256, 0.15927774, 0.14796422, 0.1388699, 0.1340389, 0.13308196, 0.11703869, 0.10279331, 0.10200763, 0.0987304, 0.09823867, 0.09219642, 0.09162199, 0.088787705, 0.08765345, 
0.080090344, 0.07868707, 0.07560313, 0.07533865, 0.07433937, 0.07159829, 0.069288105, 0.065867245, 0.06332389, 0.057103153, 0.05622299, 0.052092217, 0.05025773]}{'last_model': '{"model_name":"resnet-with-intensity","model_sha":"6d58039b1a02c5cce85646292965d29056deabdfcc6b18c34adf566922c212b0"}', 'pipeline_version': '', 'elapsed': [146497846, 6506643289], 'dropped': [], 'partition': 'engine-fc59fccbc-bs2sg'}
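Note how the API response is a list of records where the model outputs nest under the out key, as shown above. A short extraction example:

# pull the average confidence from the first record of the API response
first_result = response.json()[0]
print(first_result['out']['avg_confidence'])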

Store the Ops Pipeline Partition

Wallaroo pipeline logs include the metadata.partition field, which indicates what instance of the pipeline performed the inference. This partition name updates each time a new pipeline version is created; modifying the pipeline steps and other actions creates a new pipeline version.

ops_partition = result.loc[0, 'metadata.partition']
ops_partition
'engine-fc59fccbc-bs2sg'

API Inference Helper Functions

The following helper functions perform inferences through either the Wallaroo Ops pipeline or an edge-deployed version of the pipeline.

For this example, update the hostname testboy.local to the hostname of your deployed edge device.

import requests

def ops_pipeline_inference(df):
    df_flattened = pd.DataFrame({'tensor': wallaroo.utils.flatten_np_array_columns(df, 'tensor')})
    # api based inference request
    headers = wl.auth.auth_header()

    headers['Content-Type'] = 'application/json; format=pandas-records'

    deploy_url = pipeline._deployment._url()

    response = requests.post(
                        deploy_url, 
                        headers=headers, 
                        data=df_flattened.to_json(orient="records")
                    )

    display(pd.DataFrame(response.json()).loc[:, ['time', 'metadata']])

def edge_pipeline_inference(df):
    df_flattened = pd.DataFrame({'tensor': wallaroo.utils.flatten_np_array_columns(df, 'tensor')})
    # api based inference request; edge deployments do not require
    # the Wallaroo auth header, so wl.auth.auth_header() is omitted

    headers = {
        'Content-Type': 'application/json; format=pandas-records'
    }

    deploy_url = 'http://testboy.local:8080/pipelines/retail-inv-tracker-edge-obs'

    response = requests.post(
                        deploy_url, 
                        headers=headers, 
                        data=df_flattened.to_json(orient="records")
                    )

    display(pd.DataFrame(response.json()).loc[:, ['time', 'out', 'metadata']])

Edge Deployment

We can now deploy the pipeline to an edge device. This will require the following steps:

  • Publish the pipeline: Publishes the pipeline to the OCI registry.
  • Add Edge: Add the edge location to the pipeline publish.
  • Deploy Edge: Deploy the edge device with the edge location settings.
pup = pipeline.publish()
Waiting for pipeline publish... It may take up to 600 sec.
Pipeline is Publishing......Published.
display(pup)
ID: 5
Pipeline Version: c2dd2c7d-618d-497f-8515-3aa1469d7985
Status: Published
Engine URL: ghcr.io/wallaroolabs/doc-samples/engines/proxy/wallaroo/ghcr.io/wallaroolabs/standalone-mini:v2023.4.1-4351
Pipeline URL: ghcr.io/wallaroolabs/doc-samples/pipelines/retail-inv-tracker-edge-obs:c2dd2c7d-618d-497f-8515-3aa1469d7985
Helm Chart URL: oci://ghcr.io/wallaroolabs/doc-samples/charts/retail-inv-tracker-edge-obs
Helm Chart Reference: ghcr.io/wallaroolabs/doc-samples/charts@sha256:306df0921321c299d0b04c8e0499ab0678082a197da97676e280e5d752ab415b
Helm Chart Version: 0.0.1-c2dd2c7d-618d-497f-8515-3aa1469d7985
Engine Config: {'engine': {'resources': {'limits': {'cpu': 4.0, 'memory': '3Gi'}, 'requests': {'cpu': 4.0, 'memory': '3Gi'}, 'arch': 'x86', 'gpu': False}}, 'engineAux': {}, 'enginelb': {'resources': {'limits': {'cpu': 1.0, 'memory': '512Mi'}, 'requests': {'cpu': 0.2, 'memory': '512Mi'}, 'arch': 'x86', 'gpu': False}}}
User Images: []
Created By: john.hummel@wallaroo.ai
Created At: 2024-01-10 18:09:25.001491+00:00
Updated At: 2024-01-10 18:09:25.001491+00:00
Docker Run Variables: {}

Add Edge

The edge location is added with the publish.add_edge(name) method. This returns the OCI registry information and the EDGE_BUNDLE information. The EDGE_BUNDLE data is a base64 encoded set of parameters for the pipeline that the edge device is associated with.

Edge names must be unique. Update the edge name below if required.

edge_name = 'cv-observability-demo-sample'
edge_publish = pup.add_edge(edge_name)
display(edge_publish)
ID: 5
Pipeline Version: c2dd2c7d-618d-497f-8515-3aa1469d7985
Status: Published
Engine URL: ghcr.io/wallaroolabs/doc-samples/engines/proxy/wallaroo/ghcr.io/wallaroolabs/standalone-mini:v2023.4.1-4351
Pipeline URL: ghcr.io/wallaroolabs/doc-samples/pipelines/retail-inv-tracker-edge-obs:c2dd2c7d-618d-497f-8515-3aa1469d7985
Helm Chart URL: oci://ghcr.io/wallaroolabs/doc-samples/charts/retail-inv-tracker-edge-obs
Helm Chart Reference: ghcr.io/wallaroolabs/doc-samples/charts@sha256:306df0921321c299d0b04c8e0499ab0678082a197da97676e280e5d752ab415b
Helm Chart Version: 0.0.1-c2dd2c7d-618d-497f-8515-3aa1469d7985
Engine Config: {'engine': {'resources': {'limits': {'cpu': 4.0, 'memory': '3Gi'}, 'requests': {'cpu': 4.0, 'memory': '3Gi'}, 'arch': 'x86', 'gpu': False}}, 'engineAux': {}, 'enginelb': {'resources': {'limits': {'cpu': 1.0, 'memory': '512Mi'}, 'requests': {'cpu': 0.2, 'memory': '512Mi'}, 'arch': 'x86', 'gpu': False}}}
User Images: []
Created By: john.hummel@wallaroo.ai
Created At: 2024-01-10 18:09:25.001491+00:00
Updated At: 2024-01-10 18:09:25.001491+00:00
Docker Run Variables: {'EDGE_BUNDLE': 'abcde'}
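Since the EDGE_BUNDLE is plain base64, the deployment parameters it carries can be inspected directly:

import base64

# decode the EDGE_BUNDLE to review the edge deployment parameters
bundle = edge_publish.docker_run_variables['EDGE_BUNDLE']
print(base64.b64decode(bundle).decode('utf-8'))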

DevOps Deployment

The edge deployment is performed with docker run, docker compose, or helm installations. The following code generates the docker run command, with the following values provided by the DevOps engineer:

  • $REGISTRYURL
  • $REGISTRYUSERNAME
  • $REGISTRYPASSWORD

Before deploying, create the ./data directory that is used to store the authentication credentials.

# create docker run 

docker_command = f'''
docker run -p 8080:8080 \\
    -v ./data:/persist \\
    -e DEBUG=true \\
    -e OCI_REGISTRY=$REGISTRYURL \\
    -e EDGE_BUNDLE={edge_publish.docker_run_variables['EDGE_BUNDLE']} \\
    -e CONFIG_CPUS=6 \\
    -e OCI_USERNAME=$REGISTRYUSERNAME \\
    -e OCI_PASSWORD=$REGISTRYPASSWORD \\
    -e PIPELINE_URL={edge_publish.pipeline_url} \\
    {edge_publish.engine_url}
'''

print(docker_command)
docker run -p 8080:8080 \
    -v ./data:/persist \
    -e DEBUG=true \
    -e OCI_REGISTRY=$REGISTRYURL \
    -e EDGE_BUNDLE=ZXhwb3J0IEJVTkRMRV9WRVJTSU9OPTEKZXhwb3J0IEVER0VfTkFNRT1jdi1vYnNlcnZhYmlsaXR5LWRlbW8tc2FtcGxlCmV4cG9ydCBKT0lOX1RPS0VOPWQxN2E0NzJlLTYzNjQtNDcxMi05MWUyLWRhNzUzMjQ1MzNiYwpleHBvcnQgT1BTQ0VOVEVSX0hPU1Q9ZG9jLXRlc3QuZWRnZS53YWxsYXJvb2NvbW11bml0eS5uaW5qYQpleHBvcnQgUElQRUxJTkVfVVJMPWdoY3IuaW8vd2FsbGFyb29sYWJzL2RvYy1zYW1wbGVzL3BpcGVsaW5lcy9yZXRhaWwtaW52LXRyYWNrZXItZWRnZS1vYnM6YzJkZDJjN2QtNjE4ZC00OTdmLTg1MTUtM2FhMTQ2OWQ3OTg1CmV4cG9ydCBXT1JLU1BBQ0VfSUQ9OA== \
    -e CONFIG_CPUS=6 \
    -e OCI_USERNAME=$REGISTRYUSERNAME \
    -e OCI_PASSWORD=$REGISTRYPASSWORD \
    -e PIPELINE_URL=ghcr.io/wallaroolabs/doc-samples/pipelines/retail-inv-tracker-edge-obs:c2dd2c7d-618d-497f-8515-3aa1469d7985 \
    ghcr.io/wallaroolabs/doc-samples/engines/proxy/wallaroo/ghcr.io/wallaroolabs/standalone-mini:v2023.4.1-4351
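The same publish values can drive a docker compose deployment. The following sketch renders a hypothetical docker-compose.yaml with the same f-string pattern used above; the service name and file layout are illustrative, not prescribed by Wallaroo:

# sketch: render a docker compose file from the same publish values
compose_yaml = f'''
services:
  engine:
    image: {edge_publish.engine_url}
    ports:
      - "8080:8080"
    volumes:
      - ./data:/persist
    environment:
      DEBUG: "true"
      OCI_REGISTRY: $REGISTRYURL
      OCI_USERNAME: $REGISTRYUSERNAME
      OCI_PASSWORD: $REGISTRYPASSWORD
      EDGE_BUNDLE: {edge_publish.docker_run_variables['EDGE_BUNDLE']}
      PIPELINE_URL: {edge_publish.pipeline_url}
      CONFIG_CPUS: 6
'''

with open('docker-compose.yaml', 'w') as f:
    f.write(compose_yaml)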

Verify Logs

Before we perform inferences on the edge deployment, we'll collect the pipeline logs and display the current partitions. These should only include the partition of the Wallaroo Ops pipeline.

logs = pipeline.logs(dataset=['time', 'metadata'])

ops_locations = pd.unique(logs['metadata.partition']).tolist()
display(ops_locations)
ops_location = ops_locations[0]
Warning: The inference log is above the allowable limit and the following columns may have been suppressed for various rows in the logs: ['in.tensor']. To review the dropped columns for an individual inference’s suppressed data, include dataset=["metadata"] in the log request.

['engine-fc59fccbc-bs2sg']

Drift Detection Example

The following uses our baseline and blurred data to create observability values. We start by running a set of baseline image inferences through our deployed Ops pipeline, storing the start and end dates.

baseline_start = datetime.datetime.now(datetime.timezone.utc)

for i in range(10):
    for good_image in baseline_images_list:
        ops_pipeline_inference(good_image)

time.sleep(10)

baseline_end = datetime.datetime.now(datetime.timezone.utc)

Build Assay with Baseline

We will use the baseline values to create our assay, specifying the start and end dates for the baseline. From there we will run an interactive assay to view the current values against the baseline. Since the window contains only baseline values, the scores should fall within the baseline.

For our assay window, we will set the locations to both the Ops pipeline and the edge deployed pipeline. This gathers the pipeline log values from both partitions for the assay.

# create baseline from the recorded start and end dates

assay_name_from_dates = "average confidence drift detection example"
step_name = "resnet-with-intensity"
assay_builder_from_dates = wl.build_assay(assay_name_from_dates, 
                               pipeline, 
                               step_name, 
                               iopath="output avg_confidence 0", 
                               baseline_start=baseline_start,
                               baseline_end=baseline_end)
# run the assay up until the end of the baseline period

assay_builder_from_dates = assay_builder_from_dates.add_run_until(baseline_end)

# View 1 minute intervals
# ops and edge locations
(assay_builder_from_dates
    .window_builder()
    .add_width(minutes=1)
    .add_interval(minutes=1)
    .add_start(baseline_start)
    .add_location_filter([ops_partition, edge_name])
)
<wallaroo.assay_config.WindowBuilder at 0x14850dd00>
assay_config = assay_builder_from_dates.build()
assay_results = assay_config.interactive_run()

print(f"Generated {len(assay_results)} analyses")
assay_results.chart_scores()
Generated 6 analyses
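Beyond the chart, the analyses can be examined row by row. Assuming the SDK's to_dataframe helper on the assay results list, each window's score can be reviewed directly:

# pull the interactive assay analyses into a DataFrame for review
display(assay_results.to_dataframe())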

Set Observability Data

The next set of inferences sends all blurred image data to the edge-deployed pipeline and all well-formed images to the Ops pipeline. By the end, we will be able to view the assay results and show the blurred images driving more and more scores outside of the baseline.

Every so often we will rerun the interactive assay to show the updated results.

assay_window_start = datetime.datetime.now(datetime.timezone.utc)

for i in range(1):
    for bad_image in blurred_images_list:
        edge_pipeline_inference(bad_image)
for i in range(9):
    for good_image in baseline_images_list:
        ops_pipeline_inference(good_image)
# assay window from dates

assay_window_end = datetime.datetime.now(datetime.timezone.utc)

assay_builder_from_dates = assay_builder_from_dates.add_run_until(assay_window_end)

# View 6 minute intervals
# ops and edge locations combined
(assay_builder_from_dates
    .window_builder()
    .add_width(minutes=6)
    .add_interval(minutes=6)
    .add_start(assay_window_start)
    .add_location_filter([ops_partition, edge_name])
)
<wallaroo.assay_config.WindowBuilder at 0x14850dd00>
assay_config = assay_builder_from_dates.build()
assay_results = assay_config.interactive_run()

print(f"Generated {len(assay_results)} analyses")
assay_results.chart_scores()
Generated 1 analyses
for i in range(8):
    for good_image in baseline_images_list:
        ops_pipeline_inference(good_image)
for i in range(2):
    for bad_image in blurred_images_list:
        edge_pipeline_inference(bad_image)
for i in range(7):
    print(i)
    for good_image in baseline_images_list:
        ops_pipeline_inference(good_image)
for i in range(3):
    print(i)
    for bad_image in blurred_images_list:
        edge_pipeline_inference(bad_image)
for i in range(6):
    print(i)
    for good_image in baseline_images_list:
        ops_pipeline_inference(good_image)
for i in range(4):
    print(i)
    for bad_image in blurred_images_list:
        edge_pipeline_inference(bad_image)
# assay window from dates

assay_window_end = datetime.datetime.now(datetime.timezone.utc)

assay_builder_from_dates = assay_builder_from_dates.add_run_until(assay_window_end)

# View 7 minute intervals
# ops and edge locations combined
(assay_builder_from_dates
    .window_builder()
    .add_width(minutes=7)
    .add_interval(minutes=7)
    .add_start(assay_window_start)
    .add_location_filter([ops_partition, edge_name])
)
<wallaroo.assay_config.WindowBuilder at 0x14850dd00>
assay_config = assay_builder_from_dates.build()
assay_results = assay_config.interactive_run()

print(f"Generated {len(assay_results)} analyses")
assay_results.chart_scores()
Generated 7 analyses
for i in range(5):
    print(i)
    for good_image in baseline_images_list:
        ops_pipeline_inference(good_image)
for i in range(5):
    print(i)
    for bad_image in blurred_images_list:
        edge_pipeline_inference(bad_image)
for i in range(4):
    print(i)
    for good_image in baseline_images_list:
        ops_pipeline_inference(good_image)
for i in range(6):
    print(i)
    for bad_image in blurred_images_list:
        edge_pipeline_inference(bad_image)
for i in range(3):
    print(i)
    for good_image in baseline_images_list:
        ops_pipeline_inference(good_image)
for i in range(7):
    print(i)
    for bad_image in blurred_images_list:
        edge_pipeline_inference(bad_image)
for i in range(2):
    print(i)
    for good_image in baseline_images_list:
        ops_pipeline_inference(good_image)
for i in range(8):
    print(i)
    for bad_image in blurred_images_list:
        edge_pipeline_inference(bad_image)
for i in range(1):
    print(i)
    for good_image in baseline_images_list:
        ops_pipeline_inference(good_image)
for i in range(9):
    print(i)
    for bad_image in blurred_images_list:
        edge_pipeline_inference(bad_image)
for i in range(10):
    print(i)
    for bad_image in blurred_images_list:
        edge_pipeline_inference(bad_image)
# assay window from dates

assay_window_end = datetime.datetime.now(datetime.timezone.utc)

assay_builder_from_dates = assay_builder_from_dates.add_run_until(assay_window_end)

# ops and edge locations combined
(assay_builder_from_dates
    .window_builder()
    .add_width(minutes=6)
    .add_interval(minutes=6)
    .add_start(assay_window_start)
    .add_location_filter([ops_partition, edge_name])
)
<wallaroo.assay_config.WindowBuilder at 0x14850dd00>
assay_config = assay_builder_from_dates.build()
assay_results = assay_config.interactive_run()

print(f"Generated {len(assay_results)} analyses")
assay_results.chart_scores()
Generated 16 analyses

If we isolate to just the Ops center pipeline, we see a different result.

# assay window from dates

assay_window_end = datetime.datetime.now(datetime.timezone.utc)

assay_builder_from_dates = assay_builder_from_dates.add_run_until(assay_window_end)

# ops location only
(assay_builder_from_dates
    .window_builder()
    .add_width(minutes=6)
    .add_interval(minutes=6)
    .add_start(assay_window_start)
    .add_location_filter([ops_partition])
)
<wallaroo.assay_config.WindowBuilder at 0x14850dd00>
assay_config = assay_builder_from_dates.build()
assay_results = assay_config.interactive_run()

print(f"Generated {len(assay_results)} analyses")
assay_results.chart_scores()
Generated 13 analyses

If we isolate to only the edge location, we see where the out of baseline scores are coming from.

# assay window from dates

assay_window_end = datetime.datetime.now(datetime.timezone.utc)

assay_builder_from_dates = assay_builder_from_dates.add_run_until(assay_window_end)

# edge location only
(assay_builder_from_dates
    .window_builder()
    .add_width(minutes=6)
    .add_interval(minutes=6)
    .add_start(assay_window_start)
    .add_location_filter([edge_name])
)
<wallaroo.assay_config.WindowBuilder at 0x14850dd00>
assay_config = assay_builder_from_dates.build()
assay_results = assay_config.interactive_run()

print(f"Generated {len(assay_results)} analyses")
assay_results.chart_scores()
Generated 12 analyses

With the demonstration complete, we can shut down the edge deployed pipeline and undeploy the pipeline in the Ops center.

pipeline.undeploy()
name: retail-inv-tracker-edge-obs
created: 2024-01-10 17:50:59.035145+00:00
last_updated: 2024-01-10 18:09:22.128239+00:00
deployed: False
arch: None
tags:
versions: c2dd2c7d-618d-497f-8515-3aa1469d7985, bee8f4f6-b1d3-40e7-9fc5-221db3ff1b87, 79321f4e-ca2d-4be1-8590-8c3be5d8953c, 64c06b04-d8fb-4651-ae5e-886c12a30e94, d0c2ed1c-3691-49f1-8fc8-e6510e4c39f8, 773099d3-6d64-4a92-b5a7-f614a916965d
steps: resnet-with-intensity
published: True

3.2 - Wallaroo Model Observability: Anomaly Detection with CCFraud

How to detect anomalous model inputs or outputs using the CCFraud model as an example.

The following tutorials are available from the Wallaroo Tutorials Repository.

Wallaroo Model Observability: Anomaly Detection with CCFraud

The following tutorial demonstrates the use case of detecting anomalies: inference input or output data that does not pass expected validations.

Wallaroo provides validations to detect anomalous data from inference inputs and outputs. Validations are added to a Wallaroo pipeline with the wallaroo.pipeline.add_validations method.

Adding validations takes the format:

pipeline.add_validations(
    validation_name_01 = polars.col(in|out.{column_name}) EXPRESSION,
    validation_name_02 = polars.col(in|out.{column_name}) EXPRESSION
    ...{additional rules}
)
  • validation_name: The user provided name of the validation. The names must match Python variable naming requirements.
    • IMPORTANT NOTE: Using the name count as a validation name is not allowed: any validation rule named count is dropped from the request and a warning is returned.
  • polars.col(in|out.{column_name}): Specifies the input or output for a specific field aka “column” in an inference result. Wallaroo inference requests are in the format in.{field_name} for inputs, and out.{field_name} for outputs.
  • EXPRESSION: The expression to validate. When the expression returns True, an anomaly is detected.

The polars library version 0.18.5 is used to create the validation rules and is installed by default with the Wallaroo SDK. It provides a powerful range of comparison expressions for organizations tracking anomalous data from their ML models.
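To see how such an expression evaluates on its own, the following standalone polars sketch applies the same pattern outside of a pipeline; the sample values are hypothetical:

import polars as pl

# hypothetical inference results mimicking the out.dense_1 output field
df = pl.DataFrame({"out.dense_1": [[0.25], [0.981199]]})

# True indicates an anomaly: the second row exceeds the 0.9 threshold
display(df.select((pl.col("out.dense_1").list.get(0) > 0.9).alias("fraud")))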

When validations are added to a pipeline, inference request outputs return the following fields:

Field | Type | Description
anomaly.count | Integer | The total of all validations that returned True.
anomaly.{validation name} | Bool | The output of the validation {validation_name}.

When a validation returns True, an anomaly is detected.

For example, adding the validation fraud to the following pipeline returns an anomaly.count of 1 whenever fraud returns True, which occurs when the output field dense_1 at index 0 is greater than 0.9.

sample_pipeline = wallaroo.client.build_pipeline("sample-pipeline")
sample_pipeline.add_model_step(ccfraud_model)

# add the validation
sample_pipeline.add_validations(
    fraud=pl.col("out.dense_1").list.get(0) > 0.9,
    )

# deploy the pipeline
sample_pipeline.deploy()

# sample inference
display(sample_pipeline.infer_from_file("dev_high_fraud.json", data_format='pandas-records'))
time | in.tensor | out.dense_1 | anomaly.count | anomaly.fraud
0 | 2024-02-02 16:05:42.152 | [1.0678324729, 18.1555563975, -1.6589551058, 5…] | [0.981199] | 1 | True

Detecting Anom