Wallaroo Model Observability: Anomaly Detection with CCFraud

How to detect anomalous model inputs or outputs using the CCFraud model as an example.

The following tutorial is available from the Wallaroo Tutorials Repository.

Wallaroo Model Observability: Anomaly Detection with CCFraud

The following tutorial demonstrates the use case of detecting anomalies: inference input or output data that falls outside the ranges defined by validation rules.

Wallaroo provides validations to detect anomalous data from inference inputs and outputs. Validations are added to a Wallaroo pipeline with the wallaroo.pipeline.add_validations method.

Adding validations takes the format:

pipeline.add_validations(
    validation_name_01 = polars.col(in|out.{column_name}) EXPRESSION,
    validation_name_02 = polars.col(in|out.{column_name}) EXPRESSION
    ...{additional rules}
)
  • validation_name: The user-provided name of the validation. The names must match Python variable naming requirements.
    • IMPORTANT NOTE: Using the name count as a validation name returns a warning. Any validation rule named count is dropped upon request and a warning is returned.
  • polars.col(in|out.{column_name}): Specifies the input or output for a specific field aka “column” in an inference result. Wallaroo inference requests are in the format in.{field_name} for inputs, and out.{field_name} for outputs.
  • EXPRESSION: The expression to validate. When the expression returns True, an anomaly is detected.

The polars library version 0.18.5 is used to create the validation rules. It is installed by default with the Wallaroo SDK, and provides a powerful range of comparisons for organizations tracking anomalous data from their ML models.
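
For instance, polars expressions can combine comparisons, absolute values, and boolean logic. The following sketch illustrates the kind of rules that can be passed to add_validations; the pipeline and the field names (score, amount) are hypothetical and only demonstrate the expression syntax.

import polars as pl

# Hypothetical validations illustrating the range of polars expressions available.
# "score" and "amount" are example field names, not fields of the ccfraud model.
pipeline.add_validations(
    # flag negative model scores
    negative_score=pl.col("out.score").list.get(0) < 0,
    # flag inputs with an unusually large magnitude
    large_amount=pl.col("in.amount").abs() > 10_000,
    # flag scores that fall outside an accepted band
    out_of_band=(pl.col("out.score").list.get(0) < 0.01) | (pl.col("out.score").list.get(0) > 0.99),
)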

When validations are added to a pipeline, inference request outputs return the following fields:

Field | Type | Description
anomaly.count | Integer | The total of all validations that returned True.
anomaly.{validation name} | Bool | The output of the validation {validation_name}.

When a validation returns True, an anomaly is detected.

For example, adding the validation fraud to the following pipeline returns anomaly.count of 1 when the validation fraud returns True. The validation fraud returns True when the output field dense_1 at index 0 is greater than 0.9.

sample_pipeline = wallaroo.client.build_pipeline("sample-pipeline")
sample_pipeline.add_model_step(ccfraud_model)

# add the validation
sample_pipeline.add_validations(
    fraud=pl.col("out.dense_1").list.get(0) > 0.9,
    )

# deploy the pipeline
sample_pipeline.deploy()

# sample inference
display(sample_pipeline.infer_from_file("dev_high_fraud.json", data_format='pandas-records'))
  | time | in.tensor | out.dense_1 | anomaly.count | anomaly.fraud
0 | 2024-02-02 16:05:42.152 | [1.0678324729, 18.1555563975, -1.6589551058, 5…] | [0.981199] | 1 | True

Detecting Anomalies from Inference Request Results

When an inference request is submitted to a Wallaroo pipeline with validations, the following fields are output:

Field | Type | Description
anomaly.count | Integer | The total of all validations that returned True.
anomaly.{validation name} | Bool | The output of each pipeline validation {validation_name}.

For example, adding the validation fraud to the following pipeline returns anomaly.count of 1 when the validation fraud returns True.

sample_pipeline = wallaroo.client.build_pipeline("sample-pipeline")
sample_pipeline.add_model_step(ccfraud_model)

# add the validation
sample_pipeline.add_validations(
    fraud=pl.col("out.dense_1").list.get(0) > 0.9,
    )

# deploy the pipeline
sample_pipeline.deploy()

# sample inference
display(sample_pipeline.infer_from_file("dev_high_fraud.json", data_format='pandas-records'))
  | time | in.tensor | out.dense_1 | anomaly.count | anomaly.fraud
0 | 2024-02-02 16:05:42.152 | [1.0678324729, 18.1555563975, -1.6589551058, 5…] | [0.981199] | 1 | True
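
Because the inference result is returned as a pandas DataFrame, the anomaly fields can be filtered with ordinary pandas operations. A minimal sketch, assuming the result of the call above is stored in a variable:

# capture the inference result rather than only displaying it
results = sample_pipeline.infer_from_file("dev_high_fraud.json", data_format='pandas-records')

# rows where at least one validation flagged an anomaly
flagged = results[results["anomaly.count"] > 0]

# rows flagged by the fraud validation specifically
fraud_rows = results[results["anomaly.fraud"] == True]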

Anomaly Detection Demonstration

The following demonstrates how to:

  • Upload a ccfraud ML model trained to detect the likelihood of a transaction being fraudulent. This outputs the field dense_1 as a float where the closer to 1, the higher the likelihood that the transaction is fraudulent.
  • Add the ccfraud model as a pipeline step.
  • Add the validation fraud to detect when the output of dense_1 at index 0 is greater than 0.9.
  • Deploy the pipeline and perform sample inferences on it.
  • Perform sample inferences to show when the fraud validation returns True and False.
  • Perform sample inferences with different datasets to show how to enable or disable certain fields from displaying in the inference results.

Prerequisites

  • Wallaroo version 2023.4.1 and above.
  • polars version 0.18.5. This is installed by default with the Wallaroo SDK.
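
To confirm an environment meets these prerequisites, the installed versions can be checked directly. A quick sketch:

import polars as pl
import wallaroo

# confirm the polars version bundled with the Wallaroo SDK and the SDK version itself
print(pl.__version__)        # expected: 0.18.5
print(wallaroo.__version__)  # expected: 2023.4.1 or above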

Tutorial Steps

Load Libraries

The first step is to import the libraries used in this notebook.

import wallaroo
wallaroo.__version__
'2024.1.0+e151d6731'

Connect to the Wallaroo Instance through the User Interface

The next step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). For more information on Wallaroo Client settings, see the Client Connection guide.

wl = wallaroo.Client()

Create a New Workspace

We’ll use the SDK below to create our workspace, then assign it as our current workspace. The current workspace is where the Wallaroo SDK uploads models, creates pipelines, and so on. We’ll also set up variables for our models and pipelines up front, so there is one spot to change names to whatever fits your organization’s standards best.

Before starting, verify that the workspace name is unique in your Wallaroo instance.
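
One simple way to check is to list the names of the existing workspaces before creating a new one. A minimal sketch, assuming the wl client created above and the SDK's list_workspaces helper:

# list existing workspace names to confirm the new name is not already in use
existing_names = [workspace.name() for workspace in wl.list_workspaces()]
print(existing_names)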

workspace_name = 'validation-ccfraud-demonstration-jch'
pipeline_name = 'ccfraud-validation-demo'
model_name = 'ccfraud'
model_file_name = './models/ccfraud.onnx'
workspace = wl.get_workspace(name=workspace_name, create_if_not_exist=True)
wl.set_current_workspace(workspace)
{'name': 'validation-ccfraud-demonstration-jch', 'id': 28, 'archived': False, 'created_by': '65124b18-8382-49af-b3c8-ada3b9df3330', 'created_at': '2024-04-16T20:58:29.117008+00:00', 'models': [], 'pipelines': []}

Upload the Model

Upload the model to the Wallaroo workspace with the wallaroo.client.upload_model method. Our ccfraud ML model is an ONNX model that runs in the Wallaroo Default Runtime, so all we need is the model name, the model file path, and the framework type wallaroo.framework.Framework.ONNX.

ccfraud_model = (wl.upload_model(model_name, 
                                 model_file_name, 
                                 framework=wallaroo.framework.Framework.ONNX)
                )

Build the Pipeline

Pipelines are built with the wallaroo.client.build_pipeline method, which takes the pipeline name. This creates the pipeline in our default workspace. Note that if an existing pipeline with the same name is in this workspace, this method retrieves that pipeline for this SDK session.

Once the pipeline is created, we add the ccfraud model as our pipeline step.

sample_pipeline = wl.build_pipeline(pipeline_name)
sample_pipeline = sample_pipeline.add_model_step(ccfraud_model)

Add Validation

Now we add our validation to our new pipeline. We will give it the following configuration.

  • Validation Name: fraud
  • Validation Field: out.dense_1
  • Validation Field Index: 0
  • Validation Expression: Values greater than 0.9.

The polars library is required for creating the validation. We will import the polars library, then add our validation to the pipeline.

  • IMPORTANT NOTE: Validation names must be unique per pipeline. If a validation of the same name is added, both are included in the pipeline validations, but only the most recent validation with the same name is displayed with the inference results. Anomalies detected by multiple validations of the same name are added to the anomaly.count inference result field.

import polars as pl

sample_pipeline = sample_pipeline.add_validations(
    fraud=pl.col("out.dense_1").list.get(0) > 0.9
)

Display Pipeline And Validation Steps

The method wallaroo.pipeline.steps() shows the current pipeline steps. The added validations appear in the Check field. This is shown here for demonstration purposes to confirm the validation was added to the pipeline.

sample_pipeline.steps()
[{'ModelInference': {'models': [{'name': 'ccfraud', 'version': 'f4a34330-f858-444c-ad25-ab84430a3ad4', 'sha': 'bc85ce596945f876256f41515c7501c399fd97ebcb9ab3dd41bf03f8937b4507'}]}},
 {'Check': {'tree': ['{"Alias":[{"BinaryExpr":{"left":{"Function":{"input":[{"Column":"out.dense_1"},{"Literal":{"Int32":0}}],"function":{"ListExpr":"Get"},"options":{"collect_groups":"ApplyFlat","fmt_str":"","input_wildcard_expansion":false,"auto_explode":true,"cast_to_supertypes":false,"allow_rename":false,"pass_name_to_apply":false,"changes_length":false,"check_lengths":true,"allow_group_aware":true}}},"op":"Gt","right":{"Literal":{"Float64":0.9}}}},"fraud"]}']}}]

Deploy Pipeline

With the pipeline steps set and the validations created, we deploy the pipeline. Because of its small size, we will only allocate 0.1 CPU from the cluster for the pipeline’s use.

deploy_config = wallaroo.deployment_config.DeploymentConfigBuilder() \
    .cpus(0.1)\
    .build()

sample_pipeline.deploy(deployment_config=deploy_config)
name | ccfraud-validation-demo
created | 2024-04-16 20:58:30.684829+00:00
last_updated | 2024-04-16 20:58:31.187374+00:00
deployed | True
arch | x86
accel | none
tags |
versions | c49d725c-8b61-40e0-9677-7be4784056d0, 37a28acf-973c-4787-b574-96da190bb520
steps | ccfraud
published | False
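
Before submitting inferences, it can be helpful to confirm that the deployment has finished. A short sketch using the pipeline status call; the exact fields in the returned dictionary may vary by version:

# confirm the pipeline deployment is running before sending inference requests
status = sample_pipeline.status()
print(status['status'])  # e.g. 'Running' once the deployment is ready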

Sample Inferences

Two sample inferences are performed with the method wallaroo.pipeline.infer_from_file that takes either a pandas Record JSON file or an Apache Arrow table as the input.

For our demonstration, we will use the following pandas Record JSON files with the following sample data:

  • ./data/dev_smoke_test.pandas.json: A sample inference that generates a low (lower than 0.01) likelihood of fraud.
  • ./data/dev_high_fraud.json: A sample inference that generates a high (higher than 0.90) likelihood of fraud.

The inference request returns a pandas DataFrame.

Each of the inference outputs will include the following fields:

Field | Type | Description
time | DateTime | The DateTime of the inference request.
in.{input_field_name} | Input Dependent | Each input field submitted is labeled as in.{input_field_name} in the inference request result. For our example, this is tensor, so the input field in the returned inference request is in.tensor.
out.{model_output_field_name} | Output Dependent | Each field output by the ML model is labeled as out.{model_output_field_name} in the inference request result. For our example, the ccfraud model returns dense_1 as its output field, so the output field in the returned inference request is out.dense_1.
anomaly.count | Integer | The total number of validations that returned True.
anomaly.{validation_name} | Bool | Each validation added to the pipeline is returned as anomaly.{validation_name}: True if the validation detected an anomaly, False if it did not. For our example, anomaly.fraud is returned.

sample_pipeline.infer_from_file("./data/dev_smoke_test.pandas.json")
  | time | in.dense_input | out.dense_1 | anomaly.count | anomaly.fraud
0 | 2024-04-16 20:58:47.163 | [1.0678324729, 0.2177810266, -1.7115145262, 0.... | [0.0014974177] | 0 | False

sample_pipeline.infer_from_file("./data/dev_high_fraud.json")
  | time | in.dense_input | out.dense_1 | anomaly.count | anomaly.fraud
0 | 2024-04-16 20:58:47.479 | [1.0678324729, 18.1555563975, -1.6589551058, 5... | [0.981199] | 1 | True
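
The same request can also be loaded into memory and submitted with wallaroo.pipeline.infer, which accepts a pandas DataFrame. A minimal sketch using the high fraud sample file from above:

import pandas as pd

# load the pandas Record JSON file manually and submit it with infer()
high_fraud_df = pd.read_json("./data/dev_high_fraud.json", orient="records")
results = sample_pipeline.infer(high_fraud_df)
display(results)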

Other Validation Examples

The following are additional examples of validations.

Multiple Validations

The following uses multiple validations to check for anomalies. We still use fraud, which detects outputs that are greater than 0.9. The second validation too_low triggers an anomaly when out.dense_1 is under 0.001.

After the validations are added, the pipeline is redeployed to “set” them.

sample_pipeline = sample_pipeline.add_validations(
    too_low=pl.col("out.dense_1").list.get(0) < 0.001
)

deploy_config = wallaroo.deployment_config.DeploymentConfigBuilder() \
    .cpus(0.1)\
    .build()
sample_pipeline.undeploy()
sample_pipeline.deploy(deployment_config=deploy_config)
name | ccfraud-validation-demo
created | 2024-04-16 20:58:30.684829+00:00
last_updated | 2024-04-16 20:59:32.960008+00:00
deployed | True
arch | x86
accel | none
tags |
versions | 71c3a179-8029-4e63-8fe6-7425f9773ccb, c49d725c-8b61-40e0-9677-7be4784056d0, 37a28acf-973c-4787-b574-96da190bb520
steps | ccfraud
published | False

sample_pipeline.infer_from_file("./data/dev_smoke_test.pandas.json")
  | time | in.dense_input | out.dense_1 | anomaly.count | anomaly.fraud | anomaly.too_low
0 | 2024-04-16 20:59:47.432 | [1.0678324729, 0.2177810266, -1.7115145262, 0.... | [0.0014974177] | 0 | False | False

sample_pipeline.infer_from_file("./data/dev_high_fraud.json")
  | time | in.dense_input | out.dense_1 | anomaly.count | anomaly.fraud | anomaly.too_low
0 | 2024-04-16 20:59:47.621 | [1.0678324729, 18.1555563975, -1.6589551058, 5... | [0.981199] | 1 | True | False

Compound Validations

The following combines multiple field checks into a single validation. For this, we will check for values of out.dense_1 that are between 0.001 and 0.9.

Each expression is wrapped in parentheses () and joined with the & operator. For example:

  • Expression 1: pl.col("out.dense_1").list.get(0) < 0.9
  • Expression 2: pl.col("out.dense_1").list.get(0) > 0.001
  • Compound Expression: (pl.col("out.dense_1").list.get(0) < 0.9) & (pl.col("out.dense_1").list.get(0) > 0.001)
sample_pipeline = sample_pipeline.add_validations(
    in_between_2=(pl.col("out.dense_1").list.get(0) < 0.9) & (pl.col("out.dense_1").list.get(0) > 0.001)
)

deploy_config = wallaroo.deployment_config.DeploymentConfigBuilder() \
    .cpus(0.1)\
    .build()
sample_pipeline.undeploy()
sample_pipeline.deploy(deployment_config=deploy_config)
name | ccfraud-validation-demo
created | 2024-04-16 20:58:30.684829+00:00
last_updated | 2024-04-16 21:00:26.057970+00:00
deployed | True
arch | x86
accel | none
tags |
versions | 26c0e80c-3fdf-4f0e-ac39-351f7689168c, 71c3a179-8029-4e63-8fe6-7425f9773ccb, c49d725c-8b61-40e0-9677-7be4784056d0, 37a28acf-973c-4787-b574-96da190bb520
steps | ccfraud
published | False

results = sample_pipeline.infer_from_file("./data/cc_data_1k.df.json")

results.loc[results['anomaly.in_between_2'] == True] 
  | time | in.dense_input | out.dense_1 | anomaly.count | anomaly.fraud | anomaly.in_between_2 | anomaly.too_low
4 | 2024-04-16 21:00:41.347 | [0.5817662108, 0.097881551, 0.1546819424, 0.47... | [0.0010916889] | 1 | False | True | False
7 | 2024-04-16 21:00:41.347 | [1.0379636346, -0.152987302, -1.0912561862, -0... | [0.0011294782] | 1 | False | True | False
8 | 2024-04-16 21:00:41.347 | [0.1517283662, 0.6589966337, -0.3323713647, 0.... | [0.0018743575] | 1 | False | True | False
9 | 2024-04-16 21:00:41.347 | [-0.1683100246, 0.7070470317, 0.1875234948, -0... | [0.0011520088] | 1 | False | True | False
10 | 2024-04-16 21:00:41.347 | [0.6066235674, 0.0631839305, -0.0802961973, 0.... | [0.0016568303] | 1 | False | True | False
... | ... | ... | ... | ... | ... | ... | ...
982 | 2024-04-16 21:00:41.347 | [-0.0932906169, 0.2837744937, -0.061094265, 0.... | [0.0010192394] | 1 | False | True | False
983 | 2024-04-16 21:00:41.347 | [0.0991458877, 0.5813808183, -0.3863062246, -0... | [0.0020678043] | 1 | False | True | False
992 | 2024-04-16 21:00:41.347 | [1.0458395446, 0.2492453605, -1.5260449285, 0.... | [0.0013128221] | 1 | False | True | False
998 | 2024-04-16 21:00:41.347 | [1.0046377125, 0.0343666504, -1.3512533246, 0.... | [0.0011070371] | 1 | False | True | False
1000 | 2024-04-16 21:00:41.347 | [0.6118805301, 0.1726081102, 0.4310545502, 0.5... | [0.0012498498] | 1 | False | True | False

179 rows × 7 columns
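
Because each anomaly.{validation_name} column is boolean, per-validation totals can be summarized with ordinary pandas operations. A quick sketch over the results DataFrame above:

# number of rows flagged by each validation
print(results[["anomaly.fraud", "anomaly.too_low", "anomaly.in_between_2"]].sum())

# number of rows where at least one validation flagged an anomaly
print((results["anomaly.count"] > 0).sum())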

Specify Dataset

Wallaroo inference requests allow datasets to be excluded or included with the dataset_exclude and dataset parameters.

Parameter | Type | Description
dataset_exclude | List(String) | The list of datasets to exclude. Values include:
  • metadata: Returns inference time per model, last model used, and other parameters.
  • anomaly: The anomaly results of all validations added to the pipeline.
dataset | List(String) | The list of datasets and fields to include.

For our example, we will exclude the anomaly dataset, but include the datasets 'time', 'in', 'out', 'anomaly.count'. Note that while we exclude anomaly, we override that by setting the anomaly field 'anomaly.count' in our dataset parameter.

sample_pipeline.infer_from_file("./data/dev_high_fraud.json", 
                                dataset_exclude=['anomaly'], 
                                dataset=['time', 'in', 'out', 'anomaly.count']
                                )
  | time | in.dense_input | out.dense_1 | anomaly.count
0 | 2024-04-16 21:00:42.227 | [1.0678324729, 18.1555563975, -1.6589551058, 5... | [0.981199] | 1
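
The dataset parameter can also be used on its own to narrow the returned fields, for example returning only the time, the model outputs, and the full anomaly dataset. A short sketch; the field selection here is illustrative:

# return only the time, model outputs, and the anomaly dataset
sample_pipeline.infer_from_file("./data/dev_high_fraud.json",
                                dataset=['time', 'out', 'anomaly'])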

Undeploy the Pipeline

With the demonstration complete, we undeploy the pipeline and return the resources back to the cluster.

sample_pipeline.undeploy()
name | ccfraud-validation-demo
created | 2024-04-16 20:58:30.684829+00:00
last_updated | 2024-04-16 21:00:26.057970+00:00
deployed | False
arch | x86
accel | none
tags |
versions | 26c0e80c-3fdf-4f0e-ac39-351f7689168c, 71c3a179-8029-4e63-8fe6-7425f9773ccb, c49d725c-8b61-40e0-9677-7be4784056d0, 37a28acf-973c-4787-b574-96da190bb520
steps | ccfraud
published | False