Model Drift Detection for Edge Deployments: Demonstration

How to detect model drift in Wallaroo Run Anywhere deployments using the house price model as an example.

This tutorial and the assets can be downloaded as part of the Wallaroo Tutorials repository.

The Model Insights feature lets you monitor how the environment that your model operates within may be changing in ways that affect its predictions, so that you can intervene (retrain) in an efficient and timely manner. Changes in the inputs, known as data drift, can occur due to errors in the data processing pipeline or to changes in the environment, such as shifts in user preference or behavior.

Wallaroo Run Anywhere allows models to be deployed to edge and other locations and have their inference result logs uploaded to the Wallaroo Ops center. Wallaroo assays extend model drift detection to the inference results from one or more deployment locations, comparing the results from any single location or combination of locations against an established baseline.

This notebook demonstrates Wallaroo Run Anywhere model drift observability with Wallaroo assays. It walks through the process of:

  • Preparation: This notebook focuses on setting up the conditions for model edge deployments to different locations. This includes:
    • Setting up a workspace, pipeline, and model for deriving the price of a house based on inputs.
    • Performing a sample set of inferences to verify the model deployment.
    • Publishing the deployed model to an Open Container Initiative (OCI) Registry, and using that publish to deploy the model to two different edge locations.
  • Model Drift by Location:
    • Perform inference requests on each of the model edge deployments.
    • Perform the steps in creating an assay:
      • Build an assay baseline with a specified location for inference results.
      • Preview the assay and show different assay configurations based on selecting the inference data from the Wallaroo Ops model deployment versus the edge deployment.
      • Create the assay.
      • View assay results.

This notebook focuses on Model Drift by Location.

Goal

Model insights monitors the output of the house price model over a designated time window and compares it to an expected baseline distribution. We measure the performance of model deployments in different locations and compare that to the baseline to detect model drift.
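The assay previews later in this notebook use quantile bins, density aggregation, and the Population Stability Index (PSI) metric. The following is a minimal sketch of that style of baseline-versus-window comparison on hypothetical house-price samples; it illustrates the technique only, and is not the Wallaroo implementation.

```python
import numpy as np

def psi_score(baseline, window, n_bins=5, eps=1e-4):
    """Population Stability Index between a baseline and a window sample.

    Bin edges are quantiles of the baseline (the assay's default Quantile
    bin mode); bin densities are compared bin by bin. `eps` guards log(0).
    """
    edges = np.quantile(baseline, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch left/right outliers
    b_counts, _ = np.histogram(baseline, bins=edges)
    w_counts, _ = np.histogram(window, bins=edges)
    b = np.maximum(b_counts / len(baseline), eps)
    w = np.maximum(w_counts / len(window), eps)
    return float(np.sum((w - b) * np.log(w / b)))

# hypothetical house-price samples
rng = np.random.default_rng(0)
baseline = rng.normal(500_000, 100_000, 500)
same = rng.normal(500_000, 100_000, 500)       # similar distribution
shifted = rng.normal(1_500_000, 200_000, 500)  # large house values

print(psi_score(baseline, same))     # small score: no drift
print(psi_score(baseline, shifted))  # large score: drift detected
```

A window drawn from the same distribution as the baseline scores near zero, while a window of much larger house values scores high, which is exactly the behavior exercised with the two edge locations below.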

Resources

This tutorial provides the following:

  • Models:
    • models/rf_model.onnx: The champion model that has been used in this environment for some time.
    • Various inputs:
      • smallinputs.df.json: A set of house inputs that tends to generate low house price values.
      • biginputs.df.json: A set of house inputs that tends to generate high house price values.

Prerequisites

  • A deployed Wallaroo instance with Edge Registry Services and Edge Observability enabled.
  • The following Python libraries installed:
    • wallaroo: The Wallaroo SDK. Included with the Wallaroo JupyterHub service by default.
    • pandas: Pandas, mainly used for Pandas DataFrames.
  • An x64 Docker deployment to deploy the model on an edge location.
  • The notebook “Wallaroo Run Anywhere Model Drift Observability with Assays: Preparation” has been run, and the model edge deployments executed.

Steps

  • Deploying a sample ML model used to determine house prices based on a set of input parameters.
  • Build an assay baseline from a set of baseline start and end dates, and an assay baseline from a numpy array.
  • Preview the assay and show different assay configurations.
  • Upload the assay.
  • View assay results.
  • Pause and resume the assay.

This notebook requires the notebook “Wallaroo Run Anywhere Model Drift Observability with Assays: Preparation” has been run, and the model edge deployments executed. The name of the workspaces, pipelines, and edge locations in this notebook must match the same ones in “Wallaroo Run Anywhere Model Drift Observability with Assays: Preparation”.

Import Libraries

The first step is to import our libraries and set the variables used throughout this tutorial.

import wallaroo
from wallaroo.object import EntityNotFoundError
from wallaroo.framework import Framework

# used to display DataFrame information without truncating
from IPython.display import display
import pandas as pd
pd.set_option('display.max_colwidth', None)

import datetime
import time
import json

workspace_name = 'run-anywhere-assay-demonstration-tutorial'
main_pipeline_name = 'assay-demonstration-tutorial'
model_name_control = 'house-price-estimator'
model_file_name_control = './models/rf_model.onnx'

# Set the names of the assays
assay_name = "ops assay example"
edge_assay_name = "edge assay example"
combined_assay_name = "combined assay example"

# ignoring warnings for demonstration
import warnings
warnings.filterwarnings('ignore')

Connect to the Wallaroo Instance

The first step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). For more information on Wallaroo Client settings, see the Client Connection guide.

wl = wallaroo.Client()

Retrieve Workspace and Pipeline

For our example, we will retrieve the same workspace and pipeline that were used to create the edge locations. This requires that the preparation notebook is run first and the same workspace and pipeline names are used.

workspace = wl.get_workspace(name=workspace_name, create_if_not_exist=True)

wl.set_current_workspace(workspace)
{'name': 'run-anywhere-assay-demonstration-tutorial', 'id': 10, 'archived': False, 'created_by': '07256c6a-1f1e-4cc8-bff8-94c9fb7cb843', 'created_at': '2024-04-19T18:44:04.24582+00:00', 'models': [{'name': 'house-price-estimator', 'versions': 1, 'owner_id': '""', 'last_update_time': datetime.datetime(2024, 4, 19, 18, 44, 5, 522119, tzinfo=tzutc()), 'created_at': datetime.datetime(2024, 4, 19, 18, 44, 5, 522119, tzinfo=tzutc())}], 'pipelines': [{'name': 'assay-demonstration-tutorial', 'create_time': datetime.datetime(2024, 4, 19, 18, 44, 5, 950549, tzinfo=tzutc()), 'definition': '[]'}]}

List Pipeline Edges

The pipeline published in the notebook "Wallaroo Run Anywhere Model Drift Observability with Assays: Preparation" created two edge locations.

We start by retrieving the pipeline, then verifying the pipeline publishes with the wallaroo.pipeline.publishes() method.

We then list the edges with the wallaroo.pipeline.list_edges() method. This verifies the names of our edge locations, which are used later in the model drift detection by location methods.

mainpipeline = wl.get_pipeline(main_pipeline_name)

# list the publishes

display(mainpipeline.publishes())

# get the edges

display(mainpipeline.list_edges())
| id | pipeline_version_name | engine_url | pipeline_url | created_by | created_at | updated_at |
|---|---|---|---|---|---|---|
| 1 | 2c70d1a3-1430-421d-b00e-7d5d8f93413a | ghcr.io/wallaroolabs/doc-samples/engines/proxy/wallaroo/ghcr.io/wallaroolabs/fitzroy-mini:v2024.1.0-main-4963 | ghcr.io/wallaroolabs/doc-samples/pipelines/assay-demonstration-tutorial:2c70d1a3-1430-421d-b00e-7d5d8f93413a | john.hansarick@wallaroo.ai | 2024-19-Apr 18:44:59 | 2024-19-Apr 18:44:59 |
| ID | Name | Tags | SPIFFE ID |
|---|---|---|---|
| 3ca6f55b-c756-4d8e-a956-c4a9cf5bdde8 | houseprice-edge-demonstration-01 | [] | wallaroo.ai/ns/deployments/edge/3ca6f55b-c756-4d8e-a956-c4a9cf5bdde8 |
| ce998c28-86ec-42a7-ae7a-35da3da588f1 | houseprice-edge-demonstration-02 | [] | wallaroo.ai/ns/deployments/edge/ce998c28-86ec-42a7-ae7a-35da3da588f1 |

Generate Historical Data via Edge Inferences

We will perform sample inferences on our edge locations. This historical inference data is used later in the drift detection by location examples.

For these examples, the edge locations are on the hostname HOSTNAME. Change this hostname to the host name of your edge deployments.

We will submit two sets of inferences:

  • A normal set of inferences used to generate the baseline, which are unlikely to trigger an assay alert when compared against the baseline. These are run through the location houseprice-edge-demonstration-01.
  • A set of inferences that return large house values and are likely to trigger an assay alert when compared against the baseline. These are run through the location houseprice-edge-demonstration-02.

Each of these will use the inference endpoint /infer. For more details, see How to Publish and Deploy AI Workloads for Edge and Multicloud Model Deployments.

assay_baseline_start = datetime.datetime.now()

time.sleep(65)

small_houses_inputs = pd.read_json('./data/smallinputs.df.json')
baseline_size = 500

# These inputs will be random samples of small priced houses.  Around 500 is a good number
small_houses = small_houses_inputs.sample(baseline_size, replace=True).reset_index(drop=True)
# small_houses.to_dict(orient="records")
data = small_houses.to_dict(orient="records")

!curl -X POST HOSTNAME:8080/infer \
    -H "Content-Type: application/json; format=pandas-records" \
    --data '{json.dumps(data)}' > results01.df.json

assay_baseline_end = datetime.datetime.now()

time.sleep(65)

# set the start of the assay window period
assay_window_start = datetime.datetime.now()

# generate a set of normal house values
!curl -X POST HOSTNAME:8080/infer \
    -H "Content-Type: application/json; format=pandas-records" \
    --data @./data/normal-inputs.df.json > results01.df.json
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  316k  100  246k  100 71563   536k   152k --:--:-- --:--:-- --:--:--  689k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  589k  100  480k  100  108k   962k   217k --:--:-- --:--:-- --:--:-- 1180k
# set of values for the second edge location

time.sleep(65)

# generate a set of normal house values
!curl -X POST HOSTNAME:8081/infer \
    -H "Content-Type: application/json; format=pandas-records" \
    --data @./data/normal-inputs.df.json > results02.df.json

time.sleep(65)
# generate a set of large house values that will trigger an assay alert based on our baseline
large_houses_inputs = pd.read_json('./data/biginputs.df.json')
baseline_size = 500

# These inputs will be random samples of large priced houses.  Around 500 is a good number
large_houses = large_houses_inputs.sample(baseline_size, replace=True).reset_index(drop=True)
data = large_houses.to_dict(orient="records")

!curl -X POST HOSTNAME:8081/infer \
    -H "Content-Type: application/json; format=pandas-records" \
    --data '{json.dumps(data)}' > results02.df.json
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  589k  100  480k  100  108k   735k   189k --:--:-- --:--:-- --:--:--  924k 100  108k   808k   182k --:--:-- --:--:-- --:--:--  990k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  319k  100  248k  100 73182   933k   268k --:--:-- --:--:-- --:--:-- 1206k
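The curl calls above can also be issued from Python. Below is a sketch using only the standard library; EDGE_HOST is a placeholder for your edge deployment's hostname and port, and the single "tensor" input column is illustrative rather than the model's full input schema.

```python
import json
import urllib.request
import pandas as pd

# hypothetical edge host; replace with your deployment's hostname and port
EDGE_HOST = "HOSTNAME:8080"

def pandas_records_request(df: pd.DataFrame, host: str) -> urllib.request.Request:
    """Build an /infer POST with a pandas-records JSON body,
    matching the curl calls above."""
    body = json.dumps(df.to_dict(orient="records")).encode("utf-8")
    return urllib.request.Request(
        url=f"http://{host}/infer",
        data=body,
        headers={"Content-Type": "application/json; format=pandas-records"},
        method="POST",
    )

# illustrative single-row input
sample = pd.DataFrame([{"tensor": [4.0, 2.5, 2900.0]}])
req = pandas_records_request(sample, EDGE_HOST)
# urllib.request.urlopen(req) submits the inference once the edge is reachable
```

The request body is the same records-oriented JSON that `--data @./data/normal-inputs.df.json` supplies to curl.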

Model Insights via the Wallaroo SDK

Assays generated through the Wallaroo SDK can be previewed, configured, and uploaded to the Wallaroo Ops instance. The following is a condensed version of this process. For full details see the Wallaroo SDK Essentials Guide: Assays Management guide.

Model drift detection with assays using the Wallaroo SDK follows this general process.

  • Define the Baseline: A baseline is formed from either historical inference data for a specific model in a pipeline, or from a pre-determined array of data.
  • Assay Preview: Once the baseline is formed, we preview the assay and adjust the different configuration options until we have the best method of detecting environment or model drift.
  • Create Assay: With the previews and configuration complete, we upload the assay. The assay will perform an analysis on a regular schedule based on the configuration.
  • Get Assay Results: Retrieve the analyses and use them to detect model drift and possible sources.
  • Pause/Resume Assay: Pause or restart an assay as needed.

Define the Baseline

Assay baselines are defined with the wallaroo.client.build_assay method. Through this process we define the baseline from either a range of dates or pre-generated values.

wallaroo.client.build_assay takes the following parameters:

| Parameter | Type | Description |
|---|---|---|
| assay_name | String (Required) | The name of the assay. Assay names must be unique across the Wallaroo instance. |
| pipeline | wallaroo.pipeline.Pipeline (Required) | The pipeline the assay is monitoring. |
| model_name | String (Optional) | The name of the model to monitor. This field should only be used to track the inputs/outputs for a specific model step in a pipeline. If no model_name is included, then the parameters must be passed as named parameters, not positional ones. |
| iopath | String (Required) | The input/output data for the model being tracked in the format input/output field index. Only one value is tracked for any assay. For example, to track the output of the model’s field house_value at index 0, the iopath is 'output house_value 0'. |
| baseline_start | datetime.datetime (Optional) | The start time for the inferences to use as the baseline. Must be included with baseline_end. Cannot be included with baseline_data. |
| baseline_end | datetime.datetime (Optional) | The end time of the baseline window. Assay windows start immediately after the baseline window and are run at regular intervals continuously until the assay is deactivated or deleted. Must be included with baseline_start. Cannot be included with baseline_data. |
| baseline_data | numpy.array (Optional) | The baseline data in numpy array format. Cannot be included with either baseline_start or baseline_end. |

Note that model_name is an optional parameter when parameters are named. For example:

assay_builder_from_dates = wl.build_assay(assay_name="assays from date baseline", 
                                          pipeline=mainpipeline, 
                                          iopath="output variable 0",
                                          baseline_start=assay_baseline_start, 
                                          baseline_end=assay_baseline_end)

or:

assay_builder_from_dates = wl.build_assay("assays from date baseline", 
                                          mainpipeline, 
                                          None, ## since we are using positional parameters, `None` must be included for the model parameter
                                          "output variable 0",
                                          assay_baseline_start, 
                                          assay_baseline_end)

Baselines are created in one of two mutually exclusive methods:

  • Date Range: The baseline_start and baseline_end parameters retrieve the inference requests and results for the pipeline between the start and end period. This data is summarized and used to create the baseline. For our examples, we’re using the variables assay_baseline_start and assay_baseline_end to represent a range of dates, with assay_baseline_start being set before assay_baseline_end.
  • Numpy Values: The baseline_data sets the baseline from a provided numpy array. This allows assay baselines to be created without first performing inferences in Wallaroo.
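As a sketch of the second method, a numpy-array baseline can be assembled from known-good output values. The values below are hypothetical, and the build_assay call is shown commented out because it requires a live Wallaroo connection:

```python
import numpy as np

# hypothetical expected house-price outputs for the baseline
baseline_values = np.array([450_000.0, 525_000.0, 610_000.0, 480_000.0, 395_000.0])

# With a live connection, this array is passed in place of a date range:
# assay_builder_from_numpy = wl.build_assay(
#     assay_name="assays from numpy baseline",
#     pipeline=mainpipeline,
#     iopath="output variable 0",
#     baseline_data=baseline_values,
# )

print(baseline_values.mean())
```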

Define the Baseline by Location Example

This example shows the assay defined from the date ranges from the inferences performed earlier.

By default, all locations are included in the assay location filters. For our example we use wallaroo.assay_config.WindowBuilder.add_location_filter to specify location_01 for our baseline comparison results.

# edge locations

location_01 = "houseprice-edge-demonstration-01"
location_02 = "houseprice-edge-demonstration-02"
# Build the assay, based on the start and end of our baseline time, 
# and tracking the output variable index 0

display(assay_baseline_start)
display(assay_baseline_end)

assay_baseline_from_dates = wl.build_assay(assay_name="assays from date baseline", 
                                          pipeline=mainpipeline, 
                                          iopath="output variable 0",
                                          baseline_start=assay_baseline_start, 
                                          baseline_end=assay_baseline_end)

# set the location to the edge location
assay_baseline_from_dates.window_builder().add_location_filter([location_01])

# create the baseline from the dates
assay_baseline_run_from_dates = assay_baseline_from_dates.build().interactive_baseline_run()
datetime.datetime(2024, 4, 19, 12, 57, 12, 837401)

datetime.datetime(2024, 4, 19, 12, 58, 18, 555032)

Baseline DataFrame

The method wallaroo.assay_config.AssayBuilder.baseline_dataframe returns a DataFrame of the assay baseline generated from the provided parameters. This includes:

  • metadata: The inference metadata with the model information, inference time, and other related factors.
  • in data: Each input field assigned with the label in.{input field name}.
  • out data: Each output field assigned with the label out.{output field name}.

Note that for assays generated from numpy values, there is only the out data based on the supplied baseline data.

In the following example, the baseline DataFrame is retrieved. Note that the partition is set to houseprice-edge-demonstration-01, the location specified when creating the baseline.

display(assay_baseline_from_dates.baseline_dataframe())
timemetadatainput_tensor_0input_tensor_1input_tensor_2input_tensor_3input_tensor_4input_tensor_5input_tensor_6input_tensor_7...input_tensor_9input_tensor_10input_tensor_11input_tensor_12input_tensor_13input_tensor_14input_tensor_15input_tensor_16input_tensor_17output_variable_0
01713553098362{'last_model': '{"model_name":"house-price-estimator","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}', 'pipeline_version': '2c70d1a3-1430-421d-b00e-7d5d8f93413a', 'elapsed': [4734876, 1569242], 'dropped': [], 'partition': 'houseprice-edge-demonstration-01'}4.02.502350.06958.02.00.00.03.0...2350.00.047.332100-122.1719972480.06395.016.00.00.0461279.12500
11713553098362{'last_model': '{"model_name":"house-price-estimator","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}', 'pipeline_version': '2c70d1a3-1430-421d-b00e-7d5d8f93413a', 'elapsed': [4734876, 1569242], 'dropped': [], 'partition': 'houseprice-edge-demonstration-01'}3.02.501350.0941.03.00.00.03.0...1350.00.047.626499-122.3639981640.01369.08.00.00.0684577.18750
21713553098362{'last_model': '{"model_name":"house-price-estimator","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}', 'pipeline_version': '2c70d1a3-1430-421d-b00e-7d5d8f93413a', 'elapsed': [4734876, 1569242], 'dropped': [], 'partition': 'houseprice-edge-demonstration-01'}3.02.752600.012860.01.00.00.03.0...1350.01250.047.695000-121.9179992260.012954.049.00.00.0703282.62500
31713553098362{'last_model': '{"model_name":"house-price-estimator","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}', 'pipeline_version': '2c70d1a3-1430-421d-b00e-7d5d8f93413a', 'elapsed': [4734876, 1569242], 'dropped': [], 'partition': 'houseprice-edge-demonstration-01'}4.02.501980.05909.02.00.00.03.0...1980.00.047.391300-122.1849982550.05487.011.00.00.0324875.06250
41713553098362{'last_model': '{"model_name":"house-price-estimator","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}', 'pipeline_version': '2c70d1a3-1430-421d-b00e-7d5d8f93413a', 'elapsed': [4734876, 1569242], 'dropped': [], 'partition': 'houseprice-edge-demonstration-01'}3.02.501750.07208.02.00.00.03.0...1750.00.047.431499-122.1920012050.07524.020.00.00.0311909.59375
..................................................................
4951713553098362{'last_model': '{"model_name":"house-price-estimator","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}', 'pipeline_version': '2c70d1a3-1430-421d-b00e-7d5d8f93413a', 'elapsed': [4734876, 1569242], 'dropped': [], 'partition': 'houseprice-edge-demonstration-01'}5.02.002300.07897.02.50.00.04.0...2300.00.047.755600-122.3560032030.07902.059.00.00.0523576.25000
4961713553098362{'last_model': '{"model_name":"house-price-estimator","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}', 'pipeline_version': '2c70d1a3-1430-421d-b00e-7d5d8f93413a', 'elapsed': [4734876, 1569242], 'dropped': [], 'partition': 'houseprice-edge-demonstration-01'}2.01.00870.04600.01.00.00.04.0...870.00.047.527401-122.378998930.04600.072.00.00.0313906.53125
4971713553098362{'last_model': '{"model_name":"house-price-estimator","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}', 'pipeline_version': '2c70d1a3-1430-421d-b00e-7d5d8f93413a', 'elapsed': [4734876, 1569242], 'dropped': [], 'partition': 'houseprice-edge-demonstration-01'}3.01.752350.020820.01.00.00.04.0...1800.0550.047.609501-122.0589982040.010800.036.00.00.0700294.18750
4981713553098362{'last_model': '{"model_name":"house-price-estimator","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}', 'pipeline_version': '2c70d1a3-1430-421d-b00e-7d5d8f93413a', 'elapsed': [4734876, 1569242], 'dropped': [], 'partition': 'houseprice-edge-demonstration-01'}4.02.501820.03899.02.00.00.03.0...1820.00.047.735001-121.9850011820.03899.016.00.00.0437177.84375
4991713553098362{'last_model': '{"model_name":"house-price-estimator","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}', 'pipeline_version': '2c70d1a3-1430-421d-b00e-7d5d8f93413a', 'elapsed': [4734876, 1569242], 'dropped': [], 'partition': 'houseprice-edge-demonstration-01'}3.02.501660.02890.02.00.00.03.0...1660.00.047.543400-122.2929991540.02890.014.00.00.0544392.12500

500 rows × 21 columns
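Since each row carries its location in the metadata column's partition field, the frame can be sliced by location with ordinary pandas. A sketch on toy rows (hypothetical values; the real frame comes from baseline_dataframe()):

```python
import pandas as pd

# toy stand-in for assay_baseline_from_dates.baseline_dataframe()
frame = pd.DataFrame([
    {"metadata": {"partition": "houseprice-edge-demonstration-01"},
     "output_variable_0": 461279.125},
    {"metadata": {"partition": "houseprice-edge-demonstration-02"},
     "output_variable_0": 1514079.8},
])

def rows_for_location(df: pd.DataFrame, location: str) -> pd.DataFrame:
    """Keep only rows whose inference metadata partition matches `location`."""
    mask = df["metadata"].apply(lambda m: m.get("partition") == location)
    return df[mask]

print(rows_for_location(frame, "houseprice-edge-demonstration-01"))
```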

Baseline Stats

The method wallaroo.assay.AssayAnalysis.baseline_stats() returns a pandas.core.frame.DataFrame of the baseline stats.

The baseline stats are displayed in the sample below.

assay_baseline_run_from_dates.baseline_stats()
| | Baseline |
|---|---|
| count | 500 |
| min | 236238.65625 |
| max | 1489624.5 |
| mean | 527769.114156 |
| median | 450867.5625 |
| std | 235592.519303 |
| start | 2024-04-19T18:57:12.837401+00:00 |
| end | 2024-04-19T18:58:18.554401+00:00 |

Baseline Bins

The method wallaroo.assay.AssayAnalysis.baseline_bins returns a simple DataFrame with the edge/bin data for a baseline.

assay_baseline_run_from_dates.baseline_bins()
| | b_edges | b_edge_names | b_aggregated_values | b_aggregation |
|---|---|---|---|---|
| 0 | 2.362387e+05 | left_outlier | 0.000 | Aggregation.DENSITY |
| 1 | 3.338782e+05 | q_20 | 0.202 | Aggregation.DENSITY |
| 2 | 4.371778e+05 | q_40 | 0.212 | Aggregation.DENSITY |
| 3 | 5.384368e+05 | q_60 | 0.192 | Aggregation.DENSITY |
| 4 | 7.050134e+05 | q_80 | 0.194 | Aggregation.DENSITY |
| 5 | 1.489624e+06 | q_100 | 0.200 | Aggregation.DENSITY |
| 6 | inf | right_outlier | 0.000 | Aggregation.DENSITY |

Baseline Histogram Chart

The method wallaroo.assay_config.AssayBuilder.baseline_histogram returns a histogram chart of the assay baseline generated from the provided parameters.

assay_baseline_from_dates.baseline_histogram()

Assay Preview

Now that the baseline is defined, we look at different configuration options and view how the assay baseline and results change. Once we determine which configuration gives us the best method of detecting model drift, we can create the assay.

The following examples show different methods of previewing the assay, then how to configure the assay by collecting data from different locations.

Analysis List Chart Scores

Analysis List scores show the assay scores for each assay result interval in one chart. Values that are outside of the alert threshold are colored red, while scores within the alert threshold are green.

Assay chart scores are displayed with the method wallaroo.assay.AssayAnalysisList.chart_scores(title: Optional[str] = None), with ability to display an optional title with the chart.

The following example shows retrieving the assay results and displaying the chart scores for each window interval for location_01.

# Create the assay baseline
assay_baseline = wl.build_assay(assay_name="assays from date baseline", 
                                          pipeline=mainpipeline, 
                                          iopath="output variable 0",
                                          baseline_start=assay_baseline_start, 
                                          baseline_end=assay_baseline_end)

# Set the assay parameters

# set the location to the edge location
assay_baseline.window_builder().add_location_filter([location_01])

# The end date to gather inference results
assay_baseline.add_run_until(datetime.datetime.now())

# Set the interval and window to one minute each, set the start date for gathering inference results
assay_baseline.window_builder().add_width(minutes=1).add_interval(minutes=1).add_start(assay_window_start)

# build the assay configuration
assay_config = assay_baseline.build()

# perform an interactive run and collect inference data
assay_results = assay_config.interactive_run()

# Preview the assay analyses
assay_results.chart_scores()
The next example repeats the preview with both locations included in the location filter.

# Create the assay baseline
assay_baseline = wl.build_assay(assay_name="assays from date baseline", 
                                          pipeline=mainpipeline, 
                                          iopath="output variable 0",
                                          baseline_start=assay_baseline_start, 
                                          baseline_end=assay_baseline_end)

# Set the assay parameters

# set the location to the edge location
assay_baseline.window_builder().add_location_filter([location_01, location_02])

# The end date to gather inference results
assay_baseline.add_run_until(datetime.datetime.now())

# Set the interval and window to one minute each, set the start date for gathering inference results
assay_baseline.window_builder().add_width(minutes=1).add_interval(minutes=1).add_start(assay_window_start)

# build the assay configuration
assay_config = assay_baseline.build()

# perform an interactive run and collect inference data
assay_results = assay_config.interactive_run()

# Preview the assay analyses
assay_results.chart_scores()

Analysis Chart

The method wallaroo.assay.AssayAnalysis.chart() displays a comparison between the baseline and an interval of inference data.

This is in contrast to the Chart Scores, which list all of the inference data split into intervals, while the Analysis Chart shows the breakdown of one set of inference data against the baseline.

Each chart corresponds to one score from the Analysis List Chart Scores and one element of the Analysis List DataFrame.

The following fields are included.

| Field | Type | Description |
|---|---|---|
| baseline mean | Float | The mean of the baseline values. |
| window mean | Float | The mean of the window values. |
| baseline median | Float | The median of the baseline values. |
| window median | Float | The median of the window values. |
| bin_mode | String | The binning mode used for the assay. |
| aggregation | String | The aggregation mode used for the assay. |
| metric | String | The metric mode used for the assay. |
| weighted | Bool | Whether the bins were manually weighted. |
| score | Float | The score from the assay window. |
| scores | List(Float) | The score from each assay window bin. |
| index | Integer/None | The window index. Interactive assay runs are None. |

# Create the assay baseline
assay_baseline = wl.build_assay(assay_name="assays from date baseline", 
                                          pipeline=mainpipeline, 
                                          iopath="output variable 0",
                                          baseline_start=assay_baseline_start, 
                                          baseline_end=assay_baseline_end)

# Set the assay parameters

# set the location to the edge location
assay_baseline.window_builder().add_location_filter([location_01])

# The end date to gather inference results
assay_baseline.add_run_until(datetime.datetime.now())

# Set the interval and window to one minute each, set the start date for gathering inference results
assay_baseline.window_builder().add_width(minutes=1).add_interval(minutes=1).add_start(assay_window_start)

# build the assay configuration
assay_config = assay_baseline.build()

# perform an interactive run and collect inference data
assay_results = assay_config.interactive_run()

# Preview the assay analyses
assay_results[0].chart()
baseline mean = 527769.11415625
window mean = 539489.484203125
baseline median = 450867.5625
window median = 451046.9375
bin_mode = Quantile
aggregation = Density
metric = PSI
weighted = False
score = 0.004311757135528007
scores = [0.0, 0.0001809182290241255, 0.0011007381890103568, 0.000612816677154065, 4.603670902398101e-05, 2.010067170700292e-05, 0.002351146659608475]
index = None
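A useful property of the PSI output above: the overall score is the sum of the per-bin scores. Checking with the values printed above:

```python
import math

# per-bin scores from the analysis output above
scores = [0.0, 0.0001809182290241255, 0.0011007381890103568,
          0.000612816677154065, 4.603670902398101e-05,
          2.010067170700292e-05, 0.002351146659608475]

total = sum(scores)
# total is within floating-point rounding of the reported
# score, 0.004311757135528007
print(total)
```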

Analysis List DataFrame

wallaroo.assay.AssayAnalysisList.to_dataframe() returns a DataFrame showing the assay results for each window aka individual analysis. This DataFrame contains the following fields:

| Field | Type | Description |
|---|---|---|
| assay_id | Integer/None | The assay id. Only provided from uploaded and executed assays. |
| name | String/None | The name of the assay. Only provided from uploaded and executed assays. |
| iopath | String/None | The iopath of the assay. Only provided from uploaded and executed assays. |
| score | Float | The assay score. |
| start | DateTime | The DateTime start of the assay window. |
| min | Float | The minimum value in the assay window. |
| max | Float | The maximum value in the assay window. |
| mean | Float | The mean value in the assay window. |
| median | Float | The median value in the assay window. |
| std | Float | The standard deviation value in the assay window. |
| warning_threshold | Float/None | The warning threshold of the assay window. |
| alert_threshold | Float/None | The alert threshold of the assay window. |
| status | String | The assay window status. Values are: OK (the score is within accepted thresholds), Warning (the score has triggered the warning_threshold, if it exists, but not the alert_threshold), and Alert (the score has triggered the alert_threshold). |
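The status values can be read as a simple threshold check. A hypothetical helper illustrating the mapping (the exact boundary behavior at the threshold is an assumption, not taken from the Wallaroo source):

```python
from typing import Optional

def assay_status(score: float,
                 warning_threshold: Optional[float],
                 alert_threshold: Optional[float]) -> str:
    """Map an assay window score to Ok / Warning / Alert per the table above."""
    if alert_threshold is not None and score >= alert_threshold:
        return "Alert"
    if warning_threshold is not None and score >= warning_threshold:
        return "Warning"
    return "Ok"

# the window below reports score 0.004312 with alert_threshold 0.25
print(assay_status(0.004312, warning_threshold=None, alert_threshold=0.25))
```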

For this example, the assay analysis list DataFrame is listed.

From this tutorial, we should have two windows of data to look at, each one minute apart.

```python
# Create the assay baseline
assay_baseline = wl.build_assay(assay_name="assays from date baseline",
                                pipeline=mainpipeline,
                                iopath="output variable 0",
                                baseline_start=assay_baseline_start,
                                baseline_end=assay_baseline_end)

# Set the assay parameters

# set the location to the edge location
assay_baseline.window_builder().add_location_filter([location_01])

# The end date to gather inference results
assay_baseline.add_run_until(datetime.datetime.now())

# Set the interval and window to one minute each, set the start date for gathering inference results
assay_baseline.window_builder().add_width(minutes=1).add_interval(minutes=1).add_start(assay_window_start)

# build the assay configuration
assay_config = assay_baseline.build()

# perform an interactive run and collect inference data
assay_results = assay_config.interactive_run()

# Preview the assay analyses
assay_results.to_dataframe()
```
| | id | assay_id | assay_name | iopath | pipeline_id | pipeline_name | score | start | min | max | mean | median | std | warning_threshold | alert_threshold | status |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | None | None | assays from date baseline | None | | | 0.004312 | 2024-04-19T18:59:23.561523+00:00 | 236238.65625 | 2016006.0 | 539489.484203 | 451046.9375 | 264051.044244 | None | 0.25 | Ok |

Configure Assays

Before creating the assay, adjust its configuration and preview the results until the configuration that best detects drift is found.

Location Filter

This tutorial focuses on the assay configuration method wallaroo.assay_config.WindowBuilder.add_location_filter.

Location Filter Parameters

add_location_filter takes the following parameters.

| Parameter | Type | Description |
|---|---|---|
| locations | List(String) | The list of model deployment locations for the assay. |

Location Filter Example

By default, the locations parameter includes all locations that are part of the pipeline. This is the behavior when no location filter is set: inference data from every location is included in the assay results.

For our examples, we show different locations and how the assay changes. For the first example, we set the location to location_01, which was used to create the baseline and received inferences that were unlikely to trigger a model drift alert.

```python
# Create the assay baseline
assay_baseline = wl.build_assay(assay_name="assays from date baseline",
                                pipeline=mainpipeline,
                                iopath="output variable 0",
                                baseline_start=assay_baseline_start,
                                baseline_end=assay_baseline_end)

# Set the assay parameters

# set the location to the edge location
assay_baseline.window_builder().add_location_filter([location_01])

# The end date to gather inference results
assay_baseline.add_run_until(datetime.datetime.now())

# Set the interval and window to one minute each, set the start date for gathering inference results
assay_baseline.window_builder().add_width(minutes=1).add_interval(minutes=1).add_start(assay_window_start)

# build the assay configuration
assay_config = assay_baseline.build()

# perform an interactive run and collect inference data
assay_results = assay_config.interactive_run()

# Preview the assay analyses
assay_results.chart_scores()
assay_results.to_dataframe()
```
| | id | assay_id | assay_name | iopath | pipeline_id | pipeline_name | score | start | min | max | mean | median | std | warning_threshold | alert_threshold | status |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | None | None | assays from date baseline | None | | | 0.004312 | 2024-04-19T18:59:23.561523+00:00 | 236238.65625 | 2016006.0 | 539489.484203 | 451046.9375 | 264051.044244 | None | 0.25 | Ok |

Now we will set the location to location_02, which received a set of inferences likely to trigger a model drift alert.

```python
# Create the assay baseline
assay_baseline = wl.build_assay(assay_name="assays from date baseline",
                                pipeline=mainpipeline,
                                iopath="output variable 0",
                                baseline_start=assay_baseline_start,
                                baseline_end=assay_baseline_end)

# Set the assay parameters

# set the location to the edge location
assay_baseline.window_builder().add_location_filter([location_02])

# The end date to gather inference results
assay_baseline.add_run_until(datetime.datetime.now())

# Set the interval and window to one minute each, set the start date for gathering inference results
assay_baseline.window_builder().add_width(minutes=1).add_interval(minutes=1).add_start(assay_window_start)

# build the assay configuration
assay_config = assay_baseline.build()

# perform an interactive run and collect inference data
assay_results = assay_config.interactive_run()

# Preview the assay analyses
assay_results.chart_scores()
assay_results.to_dataframe()
```
| | id | assay_id | assay_name | iopath | pipeline_id | pipeline_name | score | start | min | max | mean | median | std | warning_threshold | alert_threshold | status |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | None | None | assays from date baseline | None | | | 0.004312 | 2024-04-19T19:00:23.561523+00:00 | 2.362387e+05 | 2016006.0 | 5.394895e+05 | 4.510469e+05 | 264051.044244 | None | 0.25 | Ok |
| 1 | None | None | assays from date baseline | None | | | 8.869115 | 2024-04-19T19:01:23.561523+00:00 | 1.514080e+06 | 2016006.0 | 1.888635e+06 | 1.946437e+06 | 157648.930624 | None | 0.25 | Alert |
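To surface only the windows that breached the alert threshold, the preview DataFrame can be filtered on its status column. The sketch below uses a hand-built DataFrame that mirrors the columns and values above, since the live assay_results object requires a Wallaroo connection:

```python
import pandas as pd

# Hand-built stand-in for assay_results.to_dataframe(); values mirror the preview above
df = pd.DataFrame({
    "assay_name": ["assays from date baseline"] * 2,
    "score": [0.004312, 8.869115],
    "start": ["2024-04-19T19:00:23+00:00", "2024-04-19T19:01:23+00:00"],
    "alert_threshold": [0.25, 0.25],
    "status": ["Ok", "Alert"],
})

# Keep only the windows that triggered an alert
alerts = df[df["status"] == "Alert"]
print(alerts[["start", "score"]])
```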

The next example includes both location_01 and location_02. Since the inference sets were performed at distinct times, the model insights score for each location is visible in the chart.

```python
# Create the assay baseline
assay_baseline = wl.build_assay(assay_name="assays from date baseline",
                                pipeline=mainpipeline,
                                iopath="output variable 0",
                                baseline_start=assay_baseline_start,
                                baseline_end=assay_baseline_end)

# Set the assay parameters

# set the locations to both edge locations
assay_baseline.window_builder().add_location_filter([location_01, location_02])

# The end date to gather inference results
assay_baseline.add_run_until(datetime.datetime.now())

# Set the interval and window to one minute each, set the start date for gathering inference results
assay_baseline.window_builder().add_width(minutes=1).add_interval(minutes=1).add_start(assay_window_start)

# build the assay configuration
assay_config = assay_baseline.build()

# perform an interactive run and collect inference data
assay_results = assay_config.interactive_run()

# Preview the assay analyses
assay_results.chart_scores()
assay_results.to_dataframe()
```
| | id | assay_id | assay_name | iopath | pipeline_id | pipeline_name | score | start | min | max | mean | median | std | warning_threshold | alert_threshold | status |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | None | None | assays from date baseline | None | | | 0.004312 | 2024-04-19T18:59:23.561523+00:00 | 2.362387e+05 | 2016006.0 | 5.394895e+05 | 4.510469e+05 | 264051.044244 | None | 0.25 | Ok |
| 1 | None | None | assays from date baseline | None | | | 0.004312 | 2024-04-19T19:00:23.561523+00:00 | 2.362387e+05 | 2016006.0 | 5.394895e+05 | 4.510469e+05 | 264051.044244 | None | 0.25 | Ok |
| 2 | None | None | assays from date baseline | None | | | 8.869115 | 2024-04-19T19:01:23.561523+00:00 | 1.514080e+06 | 2016006.0 | 1.888635e+06 | 1.946437e+06 | 157648.930624 | None | 0.25 | Alert |

Create Assay

With the assay previewed and configuration options determined, we officially create it by uploading it to the Wallaroo instance.

Once it is uploaded, the assay runs an analysis based on the window width, interval, and the other settings configured.

Assays are uploaded with the wallaroo.assay_config.upload() method. This stores the assay in the Wallaroo database with the configurations applied and returns the assay id. Note that assay names must be unique across the Wallaroo instance; attempting to upload an assay with the same name as an existing one returns an error.

Typically we would call wallaroo.assay_config.upload() immediately after configuring the assay. For the example below, we perform the complete configuration in one cell to show all of the configuration steps at once before creating the assay, and narrow the location to location_01. By default, all locations associated with a pipeline are included in the assay results unless the add_location_filter method is applied to specify location(s).

```python
# Create the assay baseline
assay_baseline = wl.build_assay(assay_name="assays from date baseline",
                                pipeline=mainpipeline,
                                iopath="output variable 0",
                                baseline_start=assay_baseline_start,
                                baseline_end=assay_baseline_end)

# Set the assay parameters

# set the location to the edge location
assay_baseline.window_builder().add_location_filter([location_01])

# The end date to gather inference results
assay_baseline.add_run_until(datetime.datetime.now())

# Set the interval and window to one minute each, set the start date for gathering inference results
assay_baseline.window_builder().add_width(minutes=1).add_interval(minutes=1).add_start(assay_window_start)

# upload the assay and return its id
assay_id = assay_baseline.upload()
```

The assay is now visible through the Wallaroo UI by selecting the workspace, then the pipeline, then Insights. The following is an example of another assay in the Wallaroo Dashboard.

Get Assay Info

Assay information is retrieved with the wallaroo.client.get_assay_info() method, which takes the following parameters.

| Parameter | Type | Description |
|---|---|---|
| assay_id | Integer (Required) | The numerical id of the assay. |

This returns the following:

| Parameter | Type | Description |
|---|---|---|
| id | Integer | The numerical id of the assay. |
| name | String | The name of the assay. |
| active | Boolean | `True`: The assay is active and generates analyses based on its configuration. `False`: The assay is disabled and will not generate new analyses. |
| status | String | The status of the assay (for example, `created`). |
| pipeline_name | String | The name of the pipeline the assay references. |
| last_run | DateTime | The date and time the assay last ran. |
| next_run | DateTime | The date and time the assay analysis will next run. |
| alert_threshold | Float | The alert threshold setting for the assay. |
| baseline | Dict | The baseline and settings as set from the assay configuration. |
| iopath | String | The iopath setting for the assay. |
| metric | String | The metric setting for the assay. |
| num_bins | Integer | The number of bins for the assay. |
| bin_weights | List/None | The bin weights used, if any. |
| bin_mode | String | The binning mode used. |

```python
display(wl.get_assay_info(assay_id))
```

| | id | name | active | status | pipeline_name | last_run | next_run | alert_threshold | baseline | iopath | metric | num_bins | bin_weights | bin_mode |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 2 | assays from date baseline | True | created | assay-demonstration-tutorial | None | 2024-04-19T19:18:12.465292+00:00 | 0.25 | Start:2024-04-19T18:57:12.837401+00:00, End:2024-04-19T18:58:18.554401+00:00 | output variable 0 | PSI | 5 | None | Quantile |
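The assay above scores each window with the PSI (Population Stability Index) metric over 5 quantile bins. The following is a self-contained sketch of that calculation, a simplified illustration rather than the exact Wallaroo implementation; the `eps` smoothing constant and the open outer bins are assumptions.

```python
import numpy as np

def psi_quantile(baseline, window, num_bins=5, eps=1e-4):
    """Population Stability Index with bin edges taken from baseline quantiles."""
    baseline = np.asarray(baseline, dtype=float)
    window = np.asarray(window, dtype=float)
    # Interior bin edges at baseline quantiles (bin_mode=Quantile, num_bins=5 as above)
    edges = np.quantile(baseline, np.linspace(0, 1, num_bins + 1))[1:-1]
    # Count observations per bin, then convert to smoothed proportions
    p = np.bincount(np.searchsorted(edges, baseline), minlength=num_bins)
    q = np.bincount(np.searchsorted(edges, window), minlength=num_bins)
    p = np.clip(p / p.sum(), eps, None)
    q = np.clip(q / q.sum(), eps, None)
    return float(np.sum((q - p) * np.log(q / p)))

rng = np.random.default_rng(42)
baseline = rng.normal(500_000, 100_000, 2000)   # house prices near the baseline mean
steady = rng.normal(500_000, 100_000, 500)      # same distribution: low PSI
drifted = rng.normal(1_900_000, 150_000, 500)   # shifted prices: high PSI
print(psi_quantile(baseline, steady) < 0.25)
print(psi_quantile(baseline, drifted) > 0.25)
```

A window drawn from the baseline distribution stays well under the 0.25 alert threshold, while the shifted window concentrates in the top bin and produces a large score, matching the Ok and Alert rows seen in the earlier previews.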