Inference Logs Tutorial

How to retrieve inference logs as pandas DataFrames or Apache Arrow tables, and save inference logs to files.

This tutorial and the assets can be downloaded as part of the Wallaroo Tutorials repository.

Pipeline Log Tutorial

This tutorial will demonstrate how to:

  1. Select or create a workspace and pipeline, upload the control model, then upload the additional models used for A/B testing and shadow deployment.
  2. Add a pipeline step with the champion model, then deploy the pipeline and perform sample inferences.
  3. Display the various log types for a standard deployed pipeline. These include:
    1. Logs displayed via the Wallaroo SDK filtered by date and time.
    2. Logs exported to files via the Wallaroo SDK.
    3. Logs displayed via the Wallaroo MLOps API filtered by date and time.
  4. Replace the champion model pipeline step with a shadow deploy step that compares the champion model against two challengers.
  5. Perform sample inferences with the shadow deploy step, then display the log files for a shadow deployed pipeline.
  6. Swap out the shadow deploy step for an A/B testing pipeline step.
  7. Perform sample inferences with the A/B testing step, then display the log files for an A/B tested pipeline.
  8. Undeploy the pipeline.

This tutorial provides the following:

  • Models:
    • models/rf_model.onnx: The champion model that has been used in this environment for some time.
    • models/xgb_model.onnx and models/gbr_model.onnx: Rival models that will be tested against the champion.
  • Data:
    • data/xtest-1.df.json and data/xtest-1k.df.json: DataFrame JSON inference inputs with 1 input and 1,000 inputs.
    • data/xtest-1k.arrow: Apache Arrow inference inputs with 1,000 inputs.

Prerequisites

  • A deployed Wallaroo instance
  • The following Python libraries installed:
    • wallaroo: The Wallaroo SDK. Included with the Wallaroo JupyterHub service by default.
    • pandas: Pandas, mainly used for Pandas DataFrame
    • pyarrow: Pyarrow for Apache Arrow support

Initial Steps

Import libraries

The first step is to import the libraries needed for this notebook.

import wallaroo
from wallaroo.object import EntityNotFoundError

import pyarrow as pa
import requests

# used to display DataFrame information without truncating
from IPython.display import display
import pandas as pd
pd.set_option('display.max_colwidth', None)
pd.set_option('display.max_columns', None)

import datetime

import os

Connect to the Wallaroo Instance

The next step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the JupyterHub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). For more information on Wallaroo Client settings, see the Client Connection guide.

# Login through local Wallaroo instance

wl = wallaroo.Client()

Create Workspace

We will create a workspace to manage our pipeline and models. The following variables will set the name of our sample workspace then set it as the current workspace.

workspace_name = 'logworkspace'
main_pipeline_name = 'logpipeline-test'
model_name_control = 'logcontrol'
model_file_name_control = './models/rf_model.onnx'
workspace = wl.get_workspace(name=workspace_name, create_if_not_exist=True)

wl.set_current_workspace(workspace)
{'name': 'logworkspace', 'id': 10, 'archived': False, 'created_by': 'ee2bee02-d1f1-4302-916b-08a9d2bf88b1', 'created_at': '2025-02-06T17:24:30.687557+00:00', 'models': [], 'pipelines': []}

Standard Pipeline

Upload The Champion Model

For our example, we will upload the champion model that has been trained to derive house prices from a variety of inputs. The model file is rf_model.onnx, and is uploaded with the name logcontrol.

housing_model_control = (wl.upload_model(model_name_control, 
                                         model_file_name_control, 
                                         framework=wallaroo.framework.Framework.ONNX)
                                         .configure(tensor_fields=["tensor"])
                        )

Build the Pipeline

This pipeline is made to be an example of an existing situation where a model is deployed and being used for inferences in a production environment. We’ll name the pipeline logpipeline-test, set logcontrol as a pipeline step, then run a few sample inferences.

mainpipeline = wl.build_pipeline(main_pipeline_name)
# in case this pipeline was run before
mainpipeline.clear()
mainpipeline.add_model_step(housing_model_control)

deploy_config = wallaroo.deployment_config.DeploymentConfigBuilder() \
    .cpus(0.25)\
    .build()

mainpipeline.deploy(deployment_config=deploy_config)
name | logpipeline-test
created | 2025-02-06 17:24:35.075026+00:00
last_updated | 2025-02-06 17:24:35.536089+00:00
deployed | True
workspace_id | 10
workspace_name | logworkspace
arch | x86
accel | none
tags |
versions | 4352a1e8-909e-4e87-a4c7-a36f1546ebd7, 3658e031-941e-4207-882a-5881f9db1184
steps | logcontrol
published | False

Testing

We’ll use two inferences as a quick sample test: one with a house that should be valued at around \$700k, the other with a house valued at around \$1.5 million. We’ll also save the start and end periods for these events for the log retrieval steps later.

dataframe_start = datetime.datetime.now()

normal_input = pd.DataFrame.from_records({"tensor": [
            [
                4.0, 
                2.5, 
                2900.0, 
                5505.0, 
                2.0, 
                0.0, 
                0.0, 
                3.0, 
                8.0, 
                2900.0, 
                0.0, 
                47.6063, 
                -122.02, 
                2970.0, 
                5251.0, 
                12.0, 
                0.0, 
                0.0
            ]
        ]
    }
)
result = mainpipeline.infer(normal_input)
display(result)
time | in.tensor | out.variable | anomaly.count
0 | 2025-02-06 17:25:00.487 | [4.0, 2.5, 2900.0, 5505.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2900.0, 0.0, 47.6063, -122.02, 2970.0, 5251.0, 12.0, 0.0, 0.0] | [718013.7] | 0
large_house_input = pd.DataFrame.from_records(
    {
        'tensor': [
            [
                4.0, 
                3.0, 
                3710.0, 
                20000.0, 
                2.0, 
                0.0, 
                2.0, 
                5.0, 
                10.0, 
                2760.0, 
                950.0, 
                47.6696, 
                -122.261, 
                3970.0, 
                20000.0, 
                79.0, 
                0.0, 
                0.0
            ]
        ]
    }
)
large_house_result = mainpipeline.infer(large_house_input)
display(large_house_result)

import time
time.sleep(10)
dataframe_end = datetime.datetime.now()
time | in.tensor | out.variable | anomaly.count
0 | 2025-02-06 17:25:06.352 | [4.0, 3.0, 3710.0, 20000.0, 2.0, 0.0, 2.0, 5.0, 10.0, 2760.0, 950.0, 47.6696, -122.261, 3970.0, 20000.0, 79.0, 0.0, 0.0] | [1514079.4] | 0

As one last sample, we’ll run 1,000 inferences at once and show a few of the results. For this example we’ll use an Apache Arrow table, which has a smaller file size compared to a pandas DataFrame JSON file. The inference result is returned as an Arrow table, which we’ll convert into a pandas DataFrame to display the first 20 results.

batch_inferences = mainpipeline.infer_from_file('./data/xtest-1k.arrow')

large_inference_result = batch_inferences.to_pandas()
display(large_inference_result.head(20))
time | in.tensor | out.variable | anomaly.count
0 | 2025-02-06 17:25:16.671 | [4.0, 2.5, 2900.0, 5505.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2900.0, 0.0, 47.6063, -122.02, 2970.0, 5251.0, 12.0, 0.0, 0.0] | [718013.75] | 0
1 | 2025-02-06 17:25:16.671 | [2.0, 2.5, 2170.0, 6361.0, 1.0, 0.0, 2.0, 3.0, 8.0, 2170.0, 0.0, 47.7109, -122.017, 2310.0, 7419.0, 6.0, 0.0, 0.0] | [615094.56] | 0
2 | 2025-02-06 17:25:16.671 | [3.0, 2.5, 1300.0, 812.0, 2.0, 0.0, 0.0, 3.0, 8.0, 880.0, 420.0, 47.5893, -122.317, 1300.0, 824.0, 6.0, 0.0, 0.0] | [448627.72] | 0
3 | 2025-02-06 17:25:16.671 | [4.0, 2.5, 2500.0, 8540.0, 2.0, 0.0, 0.0, 3.0, 9.0, 2500.0, 0.0, 47.5759, -121.994, 2560.0, 8475.0, 24.0, 0.0, 0.0] | [758714.2] | 0
4 | 2025-02-06 17:25:16.671 | [3.0, 1.75, 2200.0, 11520.0, 1.0, 0.0, 0.0, 4.0, 7.0, 2200.0, 0.0, 47.7659, -122.341, 1690.0, 8038.0, 62.0, 0.0, 0.0] | [513264.7] | 0
5 | 2025-02-06 17:25:16.671 | [3.0, 2.0, 2140.0, 4923.0, 1.0, 0.0, 0.0, 4.0, 8.0, 1070.0, 1070.0, 47.6902, -122.339, 1470.0, 4923.0, 86.0, 0.0, 0.0] | [668288.0] | 0
6 | 2025-02-06 17:25:16.671 | [4.0, 3.5, 3590.0, 5334.0, 2.0, 0.0, 2.0, 3.0, 9.0, 3140.0, 450.0, 47.6763, -122.267, 2100.0, 6250.0, 9.0, 0.0, 0.0] | [1004846.5] | 0
7 | 2025-02-06 17:25:16.671 | [3.0, 2.0, 1280.0, 960.0, 2.0, 0.0, 0.0, 3.0, 9.0, 1040.0, 240.0, 47.602, -122.311, 1280.0, 1173.0, 0.0, 0.0, 0.0] | [684577.2] | 0
8 | 2025-02-06 17:25:16.671 | [4.0, 2.5, 2820.0, 15000.0, 2.0, 0.0, 0.0, 4.0, 9.0, 2820.0, 0.0, 47.7255, -122.101, 2440.0, 15000.0, 29.0, 0.0, 0.0] | [727898.1] | 0
9 | 2025-02-06 17:25:16.671 | [3.0, 2.25, 1790.0, 11393.0, 1.0, 0.0, 0.0, 3.0, 8.0, 1790.0, 0.0, 47.6297, -122.099, 2290.0, 11894.0, 36.0, 0.0, 0.0] | [559631.1] | 0
10 | 2025-02-06 17:25:16.671 | [3.0, 1.5, 1010.0, 7683.0, 1.5, 0.0, 0.0, 5.0, 7.0, 1010.0, 0.0, 47.72, -122.318, 1550.0, 7271.0, 61.0, 0.0, 0.0] | [340764.53] | 0
11 | 2025-02-06 17:25:16.671 | [3.0, 2.0, 1270.0, 1323.0, 3.0, 0.0, 0.0, 3.0, 8.0, 1270.0, 0.0, 47.6934, -122.342, 1330.0, 1323.0, 8.0, 0.0, 0.0] | [442168.06] | 0
12 | 2025-02-06 17:25:16.671 | [4.0, 1.75, 2070.0, 9120.0, 1.0, 0.0, 0.0, 4.0, 7.0, 1250.0, 820.0, 47.6045, -122.123, 1650.0, 8400.0, 57.0, 0.0, 0.0] | [630865.6] | 0
13 | 2025-02-06 17:25:16.671 | [4.0, 1.0, 1620.0, 4080.0, 1.5, 0.0, 0.0, 3.0, 7.0, 1620.0, 0.0, 47.6696, -122.324, 1760.0, 4080.0, 91.0, 0.0, 0.0] | [559631.1] | 0
14 | 2025-02-06 17:25:16.671 | [4.0, 3.25, 3990.0, 9786.0, 2.0, 0.0, 0.0, 3.0, 9.0, 3990.0, 0.0, 47.6784, -122.026, 3920.0, 8200.0, 10.0, 0.0, 0.0] | [909441.1] | 0
15 | 2025-02-06 17:25:16.671 | [4.0, 2.0, 1780.0, 19843.0, 1.0, 0.0, 0.0, 3.0, 7.0, 1780.0, 0.0, 47.4414, -122.154, 2210.0, 13500.0, 52.0, 0.0, 0.0] | [313096.0] | 0
16 | 2025-02-06 17:25:16.671 | [4.0, 2.5, 2130.0, 6003.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2130.0, 0.0, 47.4518, -122.12, 1940.0, 4529.0, 11.0, 0.0, 0.0] | [404040.8] | 0
17 | 2025-02-06 17:25:16.671 | [3.0, 1.75, 1660.0, 10440.0, 1.0, 0.0, 0.0, 3.0, 7.0, 1040.0, 620.0, 47.4448, -121.77, 1240.0, 10380.0, 36.0, 0.0, 0.0] | [292859.5] | 0
18 | 2025-02-06 17:25:16.671 | [3.0, 2.5, 2110.0, 4118.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2110.0, 0.0, 47.3878, -122.153, 2110.0, 4044.0, 25.0, 0.0, 0.0] | [338357.88] | 0
19 | 2025-02-06 17:25:16.671 | [4.0, 2.25, 2200.0, 11250.0, 1.5, 0.0, 0.0, 5.0, 7.0, 1300.0, 900.0, 47.6845, -122.201, 2320.0, 10814.0, 94.0, 0.0, 0.0] | [682284.6] | 0

Standard Pipeline Logs

Pipeline logs with standard pipeline steps are retrieved with either:

  • The Pipeline logs method, which returns the logs as either a pandas DataFrame or an Apache Arrow table.
  • The Pipeline export_logs method, which saves the logs as either a pandas DataFrame JSON file or an Apache Arrow table file.

For full details, see the Wallaroo Documentation Pipeline Log Management guide.

Pipeline Log Method

The Pipeline logs method includes the following parameters. For a complete list, see the Wallaroo SDK Essentials Guide: Pipeline Log Management.

Parameter | Type | Description
limit | Int (Optional) | Limits how many log records to display. Defaults to 100. If there are more pipeline logs than are being displayed, the warning message Pipeline log record limit exceeded is displayed. For example, if 100 log records were requested and there are a total of 1,000, the warning message will be displayed.
start_datetime and end_datetime | DateTime (Optional) | Limits logs to all logs between the start_datetime and end_datetime parameters. Both parameters must be provided; submitting a logs() request with only start_datetime or end_datetime raises an exception. If start_datetime and end_datetime are provided as parameters, the records are returned in chronological order, with the oldest record displayed first.
dataset | List (Optional) | The datasets to be returned. The datasets available are:
  • *: Default. This translates to ["time", "in", "out", "anomaly"].
  • time: The DateTime of the inference request.
  • in: All inputs listed as in.{variable_name}.
  • out: All outputs listed as out.{variable_name}.
  • anomaly: Flags whether an anomaly was detected. 0 indicates no validations were triggered; 1 or greater indicates an anomaly was detected. Each validation is displayed in the returned logs as part of the anomaly dataset as anomaly.{validation_name}. For more information on anomaly detection, see Wallaroo SDK Essentials Guide: Anomaly Detection.
  • meta: Returns metadata. IMPORTANT NOTE: See Metadata Requests Restrictions for specifications on how this dataset can be used with other datasets.
    • Returns in the metadata.elapsed field:
      • A list of times in nanoseconds for:
        • The time to serialize the input.
        • How long each step took.
    • Returns in the metadata.last_model field:
      • A dict with the last model in the pipeline as:
        • model_name: The name of the model in the pipeline step.
        • model_sha: The sha hash of the model in the pipeline step.
    • Returns in the metadata.pipeline_version field:
      • The pipeline version as a UUID value.
  • metadata.elapsed: IMPORTANT NOTE: See Metadata Requests Restrictions for specifications on how this dataset can be used with other datasets.
    • Returns in the metadata.elapsed field:
      • A list of times in nanoseconds for:
        • The time to serialize the input.
        • How long each step took.
arrow | Boolean (Optional) | Defaults to False. If arrow=True, the logs are returned as an Apache Arrow table. If arrow=False, the logs are returned as a pandas DataFrame.
Pipeline Log Warnings

If the total number of logs exceeds either the set limit or 10 MB in file size, the following warning is returned:

Warning: There are more logs available. Please set a larger limit or request a file using export_logs.

If the total size of the logs requested, either through the limit or through the start_datetime and end_datetime parameters, is greater than 10 MB, the following warning is displayed:

Warning: Pipeline log size limit exceeded. Only displaying 509 log messages. Please request a file using export_logs.

The following examples demonstrate displaying the logs, then displaying the logs between the dataframe_start and dataframe_end periods with a limit of 50 entries, then the logs retrieved again as an Apache Arrow table.

# pipeline log retrieval - reverse chronological order

regular_logs = mainpipeline.logs()

display("Standard Logs")
display(len(regular_logs))
display(regular_logs)

# Display metadata

metadatalogs = mainpipeline.logs(dataset=["time", "out.variable", "metadata"])
display("Metadata Logs")
# Only showing the pipeline version for space reasons
display(metadatalogs.loc[:, ["time", "out.variable", "metadata.pipeline_version"]])

# Display logs restricted by date and limit 

display("Logs restricted by date")
arrow_logs = mainpipeline.logs(start_datetime=dataframe_start, end_datetime=dataframe_end, limit=50)

display(len(arrow_logs))
display(arrow_logs)

# pipeline log retrieval as an Apache Arrow table
display(mainpipeline.logs(arrow=True))
Warning: There are more logs available. Please set a larger limit or request a file using export_logs.
'Standard Logs'
100
time | in.tensor | out.variable | anomaly.count
0 | 2025-02-06 17:25:16.671 | [3.0, 2.0, 2005.0, 7000.0, 1.0, 0.0, 0.0, 3.0, 7.0, 1605.0, 400.0, 47.6039, -122.298, 1750.0, 4500.0, 34.0, 0.0, 0.0] | [581003.0] | 0
1 | 2025-02-06 17:25:16.671 | [3.0, 1.75, 2910.0, 37461.0, 1.0, 0.0, 0.0, 4.0, 7.0, 1530.0, 1380.0, 47.7015, -122.164, 2520.0, 18295.0, 47.0, 0.0, 0.0] | [706823.56] | 0
2 | 2025-02-06 17:25:16.671 | [4.0, 3.25, 2910.0, 1880.0, 2.0, 0.0, 3.0, 5.0, 9.0, 1830.0, 1080.0, 47.616, -122.282, 3100.0, 8200.0, 100.0, 0.0, 0.0] | [1060847.5] | 0
3 | 2025-02-06 17:25:16.671 | [4.0, 1.75, 2700.0, 7875.0, 1.5, 0.0, 0.0, 4.0, 8.0, 2700.0, 0.0, 47.454, -122.144, 2220.0, 7875.0, 46.0, 0.0, 0.0] | [441960.38] | 0
4 | 2025-02-06 17:25:16.671 | [3.0, 2.5, 2900.0, 23550.0, 1.0, 0.0, 0.0, 3.0, 10.0, 1490.0, 1410.0, 47.5708, -122.153, 2900.0, 19604.0, 27.0, 0.0, 0.0] | [827411.0] | 0
... | ... | ... | ...
95 | 2025-02-06 17:25:16.671 | [2.0, 1.5, 1070.0, 1236.0, 2.0, 0.0, 0.0, 3.0, 8.0, 1000.0, 70.0, 47.5619, -122.382, 1170.0, 1888.0, 10.0, 0.0, 0.0] | [435628.56] | 0
96 | 2025-02-06 17:25:16.671 | [3.0, 2.5, 2830.0, 6000.0, 1.0, 0.0, 3.0, 3.0, 9.0, 1730.0, 1100.0, 47.5751, -122.378, 2040.0, 5300.0, 60.0, 0.0, 0.0] | [981676.6] | 0
97 | 2025-02-06 17:25:16.671 | [4.0, 1.75, 1720.0, 8750.0, 1.0, 0.0, 0.0, 3.0, 7.0, 860.0, 860.0, 47.726, -122.21, 1790.0, 8750.0, 43.0, 0.0, 0.0] | [437177.84] | 0
98 | 2025-02-06 17:25:16.671 | [4.0, 2.25, 4470.0, 60373.0, 2.0, 0.0, 0.0, 3.0, 11.0, 4470.0, 0.0, 47.7289, -122.127, 3210.0, 40450.0, 26.0, 0.0, 0.0] | [1208638.0] | 0
99 | 2025-02-06 17:25:16.671 | [3.0, 1.0, 1150.0, 3000.0, 1.0, 0.0, 0.0, 5.0, 6.0, 1150.0, 0.0, 47.6867, -122.345, 1460.0, 3200.0, 108.0, 0.0, 0.0] | [448627.72] | 0

100 rows × 4 columns

Warning: There are more logs available. Please set a larger limit or request a file using export_logs.
'Metadata Logs'
time | out.variable | metadata.pipeline_version
0 | 2025-02-06 17:25:16.671 | [581003.0] | 4352a1e8-909e-4e87-a4c7-a36f1546ebd7
1 | 2025-02-06 17:25:16.671 | [706823.56] | 4352a1e8-909e-4e87-a4c7-a36f1546ebd7
2 | 2025-02-06 17:25:16.671 | [1060847.5] | 4352a1e8-909e-4e87-a4c7-a36f1546ebd7
3 | 2025-02-06 17:25:16.671 | [441960.38] | 4352a1e8-909e-4e87-a4c7-a36f1546ebd7
4 | 2025-02-06 17:25:16.671 | [827411.0] | 4352a1e8-909e-4e87-a4c7-a36f1546ebd7
... | ... | ...
95 | 2025-02-06 17:25:16.671 | [435628.56] | 4352a1e8-909e-4e87-a4c7-a36f1546ebd7
96 | 2025-02-06 17:25:16.671 | [981676.6] | 4352a1e8-909e-4e87-a4c7-a36f1546ebd7
97 | 2025-02-06 17:25:16.671 | [437177.84] | 4352a1e8-909e-4e87-a4c7-a36f1546ebd7
98 | 2025-02-06 17:25:16.671 | [1208638.0] | 4352a1e8-909e-4e87-a4c7-a36f1546ebd7
99 | 2025-02-06 17:25:16.671 | [448627.72] | 4352a1e8-909e-4e87-a4c7-a36f1546ebd7

100 rows × 3 columns

'Logs restricted by date'
2
time | in.tensor | out.variable | anomaly.count
0 | 2025-02-06 17:25:00.487 | [4.0, 2.5, 2900.0, 5505.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2900.0, 0.0, 47.6063, -122.02, 2970.0, 5251.0, 12.0, 0.0, 0.0] | [718013.7] | 0
1 | 2025-02-06 17:25:06.352 | [4.0, 3.0, 3710.0, 20000.0, 2.0, 0.0, 2.0, 5.0, 10.0, 2760.0, 950.0, 47.6696, -122.261, 3970.0, 20000.0, 79.0, 0.0, 0.0] | [1514079.4] | 0
Warning: There are more logs available. Please set a larger limit or request a file using export_logs.
pyarrow.Table
time: timestamp[ms]
in.tensor: list<item: float>
  child 0, item: float
out.variable: list<inner: float not null> not null
  child 0, inner: float not null
anomaly.count: uint32 not null
----
time: [[2025-02-06 17:25:16.671,2025-02-06 17:25:16.671,2025-02-06 17:25:16.671,2025-02-06 17:25:16.671,2025-02-06 17:25:16.671,...,2025-02-06 17:25:16.671,2025-02-06 17:25:16.671,2025-02-06 17:25:16.671,2025-02-06 17:25:16.671,2025-02-06 17:25:16.671]]
in.tensor: [[[3,2,2005,7000,1,...,1750,4500,34,0,0],[3,1.75,2910,37461,1,...,2520,18295,47,0,0],...,[4,2.25,4470,60373,2,...,3210,40450,26,0,0],[3,1,1150,3000,1,...,1460,3200,108,0,0]]]
out.variable: [[[581003],[706823.56],...,[1208638],[448627.72]]]
anomaly.count: [[0,0,0,0,0,...,0,0,0,0,0]]

Standard Pipeline Steps Log Requests

Affected pipeline steps:

  • add_model_step
  • replace_with_model_step

For log file requests, the following metadata dataset requests for standard pipeline steps are available:

  • metadata

These must be paired with specific columns; the * wildcard is not available when paired with metadata. The available columns are:

  • in: All input fields.
  • out: All output fields.
  • time: The DateTime the inference request was made.
  • in.{input_fields}: Any input fields (tensor, etc.)
  • out.{output_fields}: Any output fields (out.house_price, out.variable, etc.)
  • anomaly.count: Any anomalies detected from validations.
  • anomaly.{validation}: The validation that triggered the anomaly detection and whether it is True (indicating an anomaly was detected) or False.

The following requests the metadata, and displays the output variable and last model from the metadata.

# Display metadata

metadatalogs = mainpipeline.logs(dataset=['time', "out","metadata"])
display("Metadata Logs")
display(metadatalogs.loc[:, ['time', 'out.variable', 'metadata.last_model']])
Warning: There are more logs available. Please set a larger limit or request a file using export_logs.
'Metadata Logs'
time | out.variable | metadata.last_model
0 | 2025-02-06 17:25:16.671 | [581003.0] | {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
1 | 2025-02-06 17:25:16.671 | [706823.56] | {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
2 | 2025-02-06 17:25:16.671 | [1060847.5] | {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
3 | 2025-02-06 17:25:16.671 | [441960.38] | {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
4 | 2025-02-06 17:25:16.671 | [827411.0] | {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
... | ... | ...
95 | 2025-02-06 17:25:16.671 | [435628.56] | {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
96 | 2025-02-06 17:25:16.671 | [981676.6] | {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
97 | 2025-02-06 17:25:16.671 | [437177.84] | {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
98 | 2025-02-06 17:25:16.671 | [1208638.0] | {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
99 | 2025-02-06 17:25:16.671 | [448627.72] | {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}

100 rows × 3 columns

Pipeline Limits

In a previous step we performed 1,000 inferences in a single batch. If we attempt to pull a large number of log records at once, we may run into the log size limit for this pipeline and receive a warning message like the following, indicating that the pipeline log size limits were exceeded and we should use export_logs instead.

Warning: Pipeline log size limit exceeded. Only displaying 1000 log messages (of 10000 requested). Please request a file using export_logs.

logs = mainpipeline.logs(limit=10000)
display(logs)
Pipeline log schema has changed over the logs requested 1000 newest records retrieved successfully, newest record seen was at <datetime>. Please request additional records separately
time | in.tensor | out.variable | anomaly.count
0 | 2025-02-06 17:25:16.671 | [3.0, 2.0, 2005.0, 7000.0, 1.0, 0.0, 0.0, 3.0, 7.0, 1605.0, 400.0, 47.6039, -122.298, 1750.0, 4500.0, 34.0, 0.0, 0.0] | [581003.0] | 0
1 | 2025-02-06 17:25:16.671 | [3.0, 1.75, 2910.0, 37461.0, 1.0, 0.0, 0.0, 4.0, 7.0, 1530.0, 1380.0, 47.7015, -122.164, 2520.0, 18295.0, 47.0, 0.0, 0.0] | [706823.56] | 0
2 | 2025-02-06 17:25:16.671 | [4.0, 3.25, 2910.0, 1880.0, 2.0, 0.0, 3.0, 5.0, 9.0, 1830.0, 1080.0, 47.616, -122.282, 3100.0, 8200.0, 100.0, 0.0, 0.0] | [1060847.5] | 0
3 | 2025-02-06 17:25:16.671 | [4.0, 1.75, 2700.0, 7875.0, 1.5, 0.0, 0.0, 4.0, 8.0, 2700.0, 0.0, 47.454, -122.144, 2220.0, 7875.0, 46.0, 0.0, 0.0] | [441960.38] | 0
4 | 2025-02-06 17:25:16.671 | [3.0, 2.5, 2900.0, 23550.0, 1.0, 0.0, 0.0, 3.0, 10.0, 1490.0, 1410.0, 47.5708, -122.153, 2900.0, 19604.0, 27.0, 0.0, 0.0] | [827411.0] | 0
... | ... | ... | ...
995 | 2025-02-06 17:25:16.671 | [3.0, 1.75, 2200.0, 11520.0, 1.0, 0.0, 0.0, 4.0, 7.0, 2200.0, 0.0, 47.7659, -122.341, 1690.0, 8038.0, 62.0, 0.0, 0.0] | [513264.7] | 0
996 | 2025-02-06 17:25:16.671 | [4.0, 2.5, 2500.0, 8540.0, 2.0, 0.0, 0.0, 3.0, 9.0, 2500.0, 0.0, 47.5759, -121.994, 2560.0, 8475.0, 24.0, 0.0, 0.0] | [758714.2] | 0
997 | 2025-02-06 17:25:16.671 | [3.0, 2.5, 1300.0, 812.0, 2.0, 0.0, 0.0, 3.0, 8.0, 880.0, 420.0, 47.5893, -122.317, 1300.0, 824.0, 6.0, 0.0, 0.0] | [448627.72] | 0
998 | 2025-02-06 17:25:16.671 | [2.0, 2.5, 2170.0, 6361.0, 1.0, 0.0, 2.0, 3.0, 8.0, 2170.0, 0.0, 47.7109, -122.017, 2310.0, 7419.0, 6.0, 0.0, 0.0] | [615094.56] | 0
999 | 2025-02-06 17:25:16.671 | [4.0, 2.5, 2900.0, 5505.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2900.0, 0.0, 47.6063, -122.02, 2970.0, 5251.0, 12.0, 0.0, 0.0] | [718013.75] | 0

1000 rows × 4 columns

Pipeline export_logs Method

The Pipeline export_logs method saves the pipeline log records as either a pandas DataFrame JSON file or an Apache Arrow table file. For a complete list of options, see the Wallaroo SDK Essentials Guide: Pipeline Log Management.

The export_logs method takes the following parameters:

Parameter | Type | Description
directory | String (Optional) (Default: logs) | The directory, relative to the current working directory, that the log files are exported to.
data_size_limit | String (Optional) (Default: 100MB) | The maximum size for the exported data in bytes. Note that the file size is approximate to the request; a request of 10MiB may return 10.3MB of data. The fields are in the format "{size as number} {unit value}", and can include a space, so "10 MiB" and "10MiB" are the same. The accepted unit values are:
  • KiB (for KiloBytes)
  • MiB (for MegaBytes)
  • GiB (for GigaBytes)
  • TiB (for TeraBytes)
file_prefix | String (Optional) (Default: the name of the pipeline) | The name of the exported files. By default, this is the name of the pipeline, and the files are segmented by pipeline version between the limits or the start and end period. For example: logpipeline-1.json, etc.
limit | Int (Optional) | Limits how many log records to export. Defaults to 100. If there are more pipeline logs than are being exported, the warning message Pipeline log record limit exceeded is displayed. For example, if 100 log records were requested and there are a total of 1,000, the warning message will be displayed.
start and end | DateTime (Optional) | Limits logs to all logs between the start and end DateTime parameters. Both parameters must be provided; submitting an export_logs() request with only start or end raises an exception. If start and end are provided as parameters, the records are returned in chronological order, with the oldest record displayed first.
dataset | List (Optional) | The datasets to be returned. The datasets available are:
  • *: Default. This translates to ["time", "in", "out", "anomaly"].
  • time: The DateTime of the inference request.
  • in: All inputs listed as in.{variable_name}.
  • out: All outputs listed as out.{variable_name}.
  • anomaly: Flags whether an anomaly was detected. 0 indicates no validations were triggered; 1 or greater indicates an anomaly was detected. Each validation is displayed in the returned logs as part of the anomaly dataset as anomaly.{validation_name}. For more information on anomaly detection, see Wallaroo SDK Essentials Guide: Anomaly Detection.
  • meta: Returns metadata. IMPORTANT NOTE: See Metadata Requests Restrictions for specifications on how this dataset can be used with other datasets.
    • Returns in the metadata.elapsed field:
      • A list of times in nanoseconds for:
        • The time to serialize the input.
        • How long each step took.
    • Returns in the metadata.last_model field:
      • A dict with the last model in the pipeline as:
        • model_name: The name of the model in the pipeline step.
        • model_sha: The sha hash of the model in the pipeline step.
    • Returns in the metadata.pipeline_version field:
      • The pipeline version as a UUID value.
  • metadata.elapsed: IMPORTANT NOTE: See Metadata Requests Restrictions for specifications on how this dataset can be used with other datasets.
    • Returns in the metadata.elapsed field:
      • A list of times in nanoseconds for:
        • The time to serialize the input.
        • How long each step took.
arrow | Boolean (Optional) | Defaults to False. If arrow=True, the logs are exported as an Apache Arrow table file. If arrow=False, the logs are exported as a pandas DataFrame JSON file.

The following examples demonstrate saving a DataFrame version of the mainpipeline logs, then an Arrow version.

# Save the DataFrame version of the log file

mainpipeline.export_logs()
display(os.listdir('./logs'))

mainpipeline.export_logs(arrow=True)
display(os.listdir('./logs'))
Warning: There are more logs available. Please set a larger limit to export more data.
['logpipeline-1.arrow',
 'logpipeline-test-2.json',
 'logpipeline-1.json',
 'logpipeline-test-1.arrow',
 'logpipeline-test-2.arrow',
 'logpipeline-test-1.json']
Warning: There are more logs available. Please set a larger limit to export more data.
['logpipeline-1.arrow',
 'logpipeline-test-2.json',
 'logpipeline-1.json',
 'logpipeline-test-1.arrow',
 'logpipeline-test-2.arrow',
 'logpipeline-test-1.json']
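
The remaining export_logs parameters combine in the same way. The following is a minimal sketch, assuming the dataframe_start and dataframe_end values saved earlier in this notebook; the directory name, file prefix, and size limit are illustrative values, not part of the original example:

# Export only the logs from the earlier two-inference window as Arrow files,
# capping each exported file at roughly 10 MiB (illustrative values)
mainpipeline.export_logs(directory="./logs-windowed",
                         file_prefix="logpipeline-window",
                         data_size_limit="10 MiB",
                         start=dataframe_start,
                         end=dataframe_end,
                         arrow=True)
display(os.listdir('./logs-windowed'))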

Pipeline Logs via the Wallaroo MLOps API

Pipeline logs are retrieved through the Wallaroo MLOps API with the following request.

  • REQUEST URL
    • v1/api/pipelines/get_logs
  • Headers
    • Accept:
      • application/json; format=pandas-records: For the logs returned as pandas DataFrame
      • application/vnd.apache.arrow.file: for the logs returned as Apache Arrow
  • PARAMETERS
    • pipeline_name (String Required): The name of the pipeline.
    • workspace_id (Integer Required): The numerical identifier of the workspace.
    • cursor (String Optional): Cursor returned with a previous page of results from a pipeline log request, used to retrieve the next page of information.
    • order (String Optional Default: Desc): The order for log inserts returned. Valid values are:
      • Asc: In chronological order of inserts.
      • Desc: In reverse chronological order of inserts.
    • page_size (Integer Optional Default: 1000): Max records per page.
    • start_time (String Optional): The start time of the period to retrieve logs for in RFC 3339 format for DateTime. Must be combined with end_time.
    • end_time (String Optional): The end time of the period to retrieve logs for in RFC 3339 format for DateTime. Must be combined with start_time.
  • RETURNS
    • The logs are returned by default as 'application/json; format=pandas-records' format. To request the logs as Apache Arrow tables, set the submission header Accept to application/vnd.apache.arrow.file.
    • Headers:
      • x-iteration-cursor: Used to retrieve the next page of results. This is not included if x-iteration-status is All.
      • x-iteration-status: Informs whether there are more records available outside of this log request parameters.
        • All: This page includes all logs available from this request. If x-iteration-status is All, then x-iteration-cursor is not provided.
        • SchemaChange: A change in the log schema caused by actions such as pipeline version, etc.
        • RecordLimited: The number of records exceeded the page size; more records can be requested as the next page. There may be more records available to retrieve, or the record limit was reached for this request even if no more records are available in the next cursor request.
        • ByteLimited: The records exceeded the pipeline log size limit, which is around 100K.
# retrieve the authorization token
headers = wl.auth.auth_header()

url = f"{wl.api_endpoint}/v1/api/pipelines/get_logs"

# Standard log retrieval

# get the workspace id
workspace_id = workspace.id()

data = {
    'pipeline_name': mainpipeline.name(),
    'workspace_id': workspace_id
}

response = requests.post(url, headers=headers, json=data)
standard_logs = pd.DataFrame.from_records(response.json())

display(len(standard_logs))
display(standard_logs.head(5).loc[:, ["time", "in", "out"]])
cursor = response.headers['x-iteration-cursor']
2
time | in | out
0 | 1738862700487 | {'tensor': [4.0, 2.5, 2900.0, 5505.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2900.0, 0.0, 47.6063, -122.02, 2970.0, 5251.0, 12.0, 0.0, 0.0]} | {'variable': [718013.7]}
1 | 1738862706352 | {'tensor': [4.0, 3.0, 3710.0, 20000.0, 2.0, 0.0, 2.0, 5.0, 10.0, 2760.0, 950.0, 47.6696, -122.261, 3970.0, 20000.0, 79.0, 0.0, 0.0]} | {'variable': [1514079.4]}
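
The MLOps API returns the time column as Unix epoch milliseconds rather than the DateTime display used by the SDK. A quick conversion with standard pandas (a small addition to the example above) restores it:

# convert the epoch-millisecond 'time' column to pandas datetimes
standard_logs['time'] = pd.to_datetime(standard_logs['time'], unit='ms')
display(standard_logs.head(5).loc[:, ["time", "in", "out"]])
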
# Get next page of results as an arrow table

# retrieve the authorization token
headers = wl.auth.auth_header()
headers['Accept']="application/vnd.apache.arrow.file"

url = f"{wl.api_endpoint}/v1/api/pipelines/get_logs"

# Next page of log retrieval using the cursor

data = {
    'pipeline_name': mainpipeline.name(),
    'workspace_id': workspace_id,
    'cursor': cursor
}

response = requests.post(url, headers=headers, json=data)

# Arrow table is retrieved 
with pa.ipc.open_file(response.content) as reader:
    arrow_table = reader.read_all()

# convert to a pandas DataFrame and display the first 5 rows
display(arrow_table.to_pandas().head(5).loc[:,["time", "out"]])
time | out
0 | 1738862716671 | {'variable': [718013.75]}
1 | 1738862716671 | {'variable': [615094.56]}
2 | 1738862716671 | {'variable': [448627.72]}
3 | 1738862716671 | {'variable': [758714.2]}
4 | 1738862716671 | {'variable': [513264.7]}
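
Putting the cursor and the x-iteration-status header together, the following sketch pages through every available log record. The fetch_all_logs helper is hypothetical, built only from the request parameters and headers documented above, and assumes the wl client, pipeline, and workspace_id from the previous cells:

# Hypothetical helper: page through all pipeline logs via the MLOps API,
# following x-iteration-cursor until x-iteration-status reports 'All'.
def fetch_all_logs(wl, pipeline_name, workspace_id, page_size=1000):
    url = f"{wl.api_endpoint}/v1/api/pipelines/get_logs"
    pages = []
    cursor = None
    while True:
        headers = wl.auth.auth_header()  # refresh the authorization token each page
        data = {
            'pipeline_name': pipeline_name,
            'workspace_id': workspace_id,
            'page_size': page_size
        }
        if cursor is not None:
            data['cursor'] = cursor
        response = requests.post(url, headers=headers, json=data)
        pages.append(pd.DataFrame.from_records(response.json()))
        # 'All' means this page completes the request; otherwise follow the cursor
        if response.headers.get('x-iteration-status') == 'All':
            break
        cursor = response.headers.get('x-iteration-cursor')
        if cursor is None:
            break
    return pd.concat(pages, ignore_index=True)

all_logs = fetch_all_logs(wl, mainpipeline.name(), workspace_id)
display(len(all_logs))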

Shadow Deploy Pipelines

Let’s assume that after analyzing the assay information we want to test two challenger models against our control. We do that with the Shadow Deploy pipeline step.

In Shadow Deploy, the pipeline step is added with the add_shadow_deploy method, with the champion model listed first, then an array of challenger models after. All inference data is fed to all models, with the champion results displayed in the out.variable column, and the shadow results in the format out_{model name}.variable. For example, since we named our challenger models logcontrolchallenger01 and logcontrolchallenger02, the columns out_logcontrolchallenger01.variable and out_logcontrolchallenger02.variable have the shadow deployed model results.

For this example, we will remove the previous pipeline step, then replace it with a shadow deploy step with rf_model.onnx as our champion, and models xgb_model.onnx and gbr_model.onnx as the challengers. We’ll deploy the pipeline and prepare it for sample inferences.

# Upload the challenger models

model_name_challenger01 = 'logcontrolchallenger01'
model_file_name_challenger01 = './models/xgb_model.onnx'

model_name_challenger02 = 'logcontrolchallenger02'
model_file_name_challenger02 = './models/gbr_model.onnx'

housing_model_challenger01 = (wl.upload_model(model_name_challenger01, 
                                              model_file_name_challenger01, 
                                              framework=wallaroo.framework.Framework.ONNX)
                                              .configure(tensor_fields=["tensor"])
                            )
housing_model_challenger02 = (wl.upload_model(model_name_challenger02, 
                                              model_file_name_challenger02, 
                                              framework=wallaroo.framework.Framework.ONNX)
                                              .configure(tensor_fields=["tensor"])
                            )
# Undeploy the pipeline
mainpipeline.undeploy()

mainpipeline.clear()

# Add the new shadow deploy step with our challenger models
mainpipeline.add_shadow_deploy(housing_model_control, [housing_model_challenger01, housing_model_challenger02])

# Deploy the pipeline with the new shadow step
deploy_config = wallaroo.deployment_config.DeploymentConfigBuilder() \
    .cpus(0.25)\
    .build()

mainpipeline.deploy(deployment_config=deploy_config, wait_for_status=False)
Deployment initiated for logpipeline-test. Please check pipeline status.
name | logpipeline-test
created | 2025-02-06 17:24:35.075026+00:00
last_updated | 2025-02-06 17:27:37.330367+00:00
deployed | True
workspace_id | 10
workspace_name | logworkspace
arch | x86
accel | none
tags |
versions | 5934ec36-57fe-490c-9ae0-56ef4fdf2bdb, 4352a1e8-909e-4e87-a4c7-a36f1546ebd7, 3658e031-941e-4207-882a-5881f9db1184
steps | logcontrol
published | False
# wait for the pipeline status = Running
import time
time.sleep(15)

while mainpipeline.status()['status'] != 'Running':
    time.sleep(15)
    print("Waiting for deployment.")
    mainpipeline.status()['status']
mainpipeline.status()['status']
'Running'

Shadow Deploy Sample Inference

We’ll now use our same sample data for an inference to our shadow deployed pipeline, then display the first 20 results with just the comparative outputs.

# setting time period to ensure logs are different
#time.sleep(30)
shadow_date_start = datetime.datetime.now(datetime.timezone.utc)

shadow_result = mainpipeline.infer_from_file('./data/xtest-1k.arrow')

shadow_outputs =  shadow_result.to_pandas()
display(shadow_outputs.loc[0:20,['out.variable','out_logcontrolchallenger01.variable','out_logcontrolchallenger02.variable']])

#time.sleep(30)
shadow_date_end = datetime.datetime.now(datetime.timezone.utc)
out.variable | out_logcontrolchallenger01.variable | out_logcontrolchallenger02.variable
0 | [718013.75] | [659806.0] | [704901.9]
1 | [615094.56] | [732883.5] | [695994.44]
2 | [448627.72] | [419508.84] | [416164.8]
3 | [758714.2] | [634028.8] | [655277.2]
4 | [513264.7] | [427209.44] | [426854.66]
5 | [668288.0] | [615501.9] | [632556.1]
6 | [1004846.5] | [1139732.5] | [1100465.2]
7 | [684577.2] | [498328.88] | [528278.06]
8 | [727898.1] | [722664.4] | [659439.94]
9 | [559631.1] | [525746.44] | [534331.44]
10 | [340764.53] | [376337.1] | [377187.2]
11 | [442168.06] | [382053.12] | [403964.3]
12 | [630865.6] | [505608.97] | [528991.3]
13 | [559631.1] | [603260.5] | [612201.75]
14 | [909441.1] | [969585.4] | [893874.7]
15 | [313096.0] | [313633.75] | [318054.94]
16 | [404040.8] | [360413.56] | [357816.75]
17 | [292859.5] | [316674.94] | [294034.7]
18 | [338357.88] | [299907.44] | [323254.3]
19 | [682284.6] | [811896.75] | [770916.7]
20 | [583765.94] | [573618.5] | [549141.4]

Shadow Deploy Logs

Pipelines with a shadow deployed step include the shadow inference result in the same format as the inference result: inference results from shadow deployed models are displayed as out_{model name}.{output variable}.

# display logs with shadow deployed steps

display(mainpipeline.logs(start_datetime=shadow_date_start, end_datetime=shadow_date_end).loc[:, ["time", "out.variable", "out_logcontrolchallenger01.variable", "out_logcontrolchallenger02.variable"]])
Warning: There are more logs available. Please set a larger limit or request a file using export_logs.
time | out.variable | out_logcontrolchallenger01.variable | out_logcontrolchallenger02.variable
0 | 2025-02-06 17:37:28.226 | [718013.75] | [659806.0] | [704901.9]
1 | 2025-02-06 17:37:28.226 | [615094.56] | [732883.5] | [695994.44]
2 | 2025-02-06 17:37:28.226 | [448627.72] | [419508.84] | [416164.8]
3 | 2025-02-06 17:37:28.226 | [758714.2] | [634028.8] | [655277.2]
4 | 2025-02-06 17:37:28.226 | [513264.7] | [427209.44] | [426854.66]
... | ... | ... | ...
995 | 2025-02-06 17:37:28.226 | [827411.0] | [743487.94] | [787589.25]
996 | 2025-02-06 17:37:28.226 | [441960.38] | [381577.16] | [411258.3]
997 | 2025-02-06 17:37:28.226 | [1060847.5] | [1520770.0] | [1491293.8]
998 | 2025-02-06 17:37:28.226 | [706823.56] | [663008.75] | [594914.2]
999 | 2025-02-06 17:37:28.226 | [581003.0] | [573391.1] | [596933.5]

1000 rows × 4 columns

For log file requests, the following metadata dataset requests for testing pipeline steps are available:

  • metadata

These must be paired with specific columns; the * wildcard is not available when paired with metadata. The available columns are:

  • in: All input fields.
  • out: All output fields.
  • time: The DateTime the inference request was made.
  • in.{input_fields}: Any input fields (tensor, etc.).
  • out.{output_fields}: Any output fields matching the specific output_field (out.house_price, out.variable, etc.).
  • out_{model_name}.{output_fields}: Output fields from the shadow deployed challenger models matching the specific output field (out_logcontrolchallenger01.variable, etc.).
  • anomaly.count: Any anomalies detected from validations.
  • anomaly.{validation}: The validation that triggered the anomaly detection and whether it is True (indicating an anomaly was detected) or False.

The following example retrieves the logs from a pipeline with shadow deployed models, and displays the specific shadow deployed model outputs and the metadata.elapsed field.

metadatalogs = mainpipeline.logs(dataset=["time",
                                          "out_logcontrolchallenger01.variable", 
                                          "out_logcontrolchallenger02.variable", 
                                          "metadata",
                                          'anomaly.count'
                                          ],
                                start_datetime=shadow_date_start, 
                                end_datetime=shadow_date_end
                                )

display(metadatalogs.loc[:, ['out_logcontrolchallenger01.variable',	
                             'out_logcontrolchallenger02.variable', 
                             'metadata.elapsed',
                             'anomaly.count'
                             ]
                        ])
Warning: There are more logs available. Please set a larger limit or request a file using export_logs.
out_logcontrolchallenger01.variable | out_logcontrolchallenger02.variable | metadata.elapsed | anomaly.count
0 | [659806.0] | [704901.9] | [95820, 19670] | 0
1 | [732883.5] | [695994.44] | [95820, 19670] | 0
2 | [419508.84] | [416164.8] | [95820, 19670] | 0
3 | [634028.8] | [655277.2] | [95820, 19670] | 0
4 | [427209.44] | [426854.66] | [95820, 19670] | 0
... | ... | ... | ...
995 | [743487.94] | [787589.25] | [95820, 19670] | 0
996 | [381577.16] | [411258.3] | [95820, 19670] | 0
997 | [1520770.0] | [1491293.8] | [95820, 19670] | 0
998 | [663008.75] | [594914.2] | [95820, 19670] | 0
999 | [573391.1] | [596933.5] | [95820, 19670] | 0

1000 rows × 4 columns

The following demonstrates exporting the shadow deployed logs to the directory shadow.

# Save shadow deployed log files as pandas DataFrame

mainpipeline.export_logs(directory="shadow", file_prefix="shadowdeploylogs")
display(os.listdir('./shadow'))
Warning: There are more logs available. Please set a larger limit to export more data.
['shadowdeploylogs-1.json', 'shadowdeploylogs-2.json']

Shadow Deploy Logs via the MLOps API

The following demonstrates retrieving the shadow deploy logs via the MLOps API.

# Retrieve the shadow deploy logs for the specific date/time period in ascending order

# retrieve the authorization token
headers = wl.auth.auth_header()

url = f"{wl.api_endpoint}/v1/api/pipelines/get_logs"

# Standard log retrieval

data = {
    'pipeline_name': mainpipeline.name(),
    'workspace_id': workspace_id,
    'order': 'asc',
    'start_time': f'{shadow_date_start.isoformat()}',
    'end_time': f'{shadow_date_end.isoformat()}'
}

response = requests.post(url, headers=headers, json=data)

standard_logs = pd.DataFrame.from_records(response.json())

display(standard_logs.head(5).loc[:, ["time", "out", "out_logcontrolchallenger01", "out_logcontrolchallenger02"]])
time | out | out_logcontrolchallenger01 | out_logcontrolchallenger02
0 | 1738863448226 | {'variable': [718013.75]} | {'variable': [659806.0]} | {'variable': [704901.9]}
1 | 1738863448226 | {'variable': [615094.56]} | {'variable': [732883.5]} | {'variable': [695994.44]}
2 | 1738863448226 | {'variable': [448627.72]} | {'variable': [419508.84]} | {'variable': [416164.8]}
3 | 1738863448226 | {'variable': [758714.2]} | {'variable': [634028.8]} | {'variable': [655277.2]}
4 | 1738863448226 | {'variable': [513264.7]} | {'variable': [427209.44]} | {'variable': [426854.66]}

A/B Testing Pipeline

A/B testing allows inference requests to be split between a control model and one or more challenger models. For full details, see the Pipeline Management Guide: A/B Testing.

When the inference results and log entries are displayed, they include the column out._model_split which displays:

Field | Type | Description
name | String | The model name used for the inference.
version | String | The version of the model.
sha | String | The sha hash of the model version.

For this example, the shadow deployed step will be removed and replaced with an A/B Testing step with the ratio 1:1:1, so the control and each of the challenger models will be split randomly between inference requests. A set of sample inferences will be run, then the pipeline logs displayed.

For reference, the add_random_split method takes a list of (weight, model) tuples, plus an optional meta key (here session_id) so that requests sharing the same key value are routed to the same model:

pipeline = (wl.build_pipeline("randomsplitpipeline-demo")
            .add_random_split([(2, control), (1, challenger)], "session_id"))

mainpipeline.undeploy()

# remove the shadow deploy steps
mainpipeline.clear()

# Add the a/b test step to the pipeline
mainpipeline.add_random_split([(1, housing_model_control), (1, housing_model_challenger01), (1, housing_model_challenger02)], "session_id")

deploy_config = wallaroo.deployment_config.DeploymentConfigBuilder() \
    .cpus(0.25)\
    .build()

mainpipeline.deploy(deployment_config=deploy_config, wait_for_status=False)
Deployment initiated for logpipeline-test. Please check pipeline status.
name | logpipeline-test
created | 2025-02-06 17:24:35.075026+00:00
last_updated | 2025-02-06 17:52:52.339715+00:00
deployed | True
workspace_id | 10
workspace_name | logworkspace
arch | x86
accel | none
tags |
versions | e0618d64-dbba-42d7-955e-be35ef0b9520, 849d31fc-9a21-443c-92a7-1067a869f4b5, 5934ec36-57fe-490c-9ae0-56ef4fdf2bdb, 4352a1e8-909e-4e87-a4c7-a36f1546ebd7, 3658e031-941e-4207-882a-5881f9db1184
steps | logcontrol
published | False
# wait for the pipeline status = Running
import time
time.sleep(15)

while mainpipeline.status()['status'] != 'Running':
    time.sleep(15)
    print("Waiting for deployment.")
    mainpipeline.status()['status']
mainpipeline.status()['status']
'Running'
# Perform sample inferences of 20 rows and display the results
ab_date_start = datetime.datetime.now(datetime.timezone.utc)
abtesting_inputs = pd.read_json('./data/xtest-1k.df.json')

for index, row in abtesting_inputs.sample(20).iterrows():
    display(mainpipeline.infer(row.to_frame('tensor').reset_index()).loc[:,["out._model_split", "out.variable"]])

ab_date_end = datetime.datetime.now(datetime.timezone.utc)
out._model_split | out.variable
0[{"model_version":{"name":"logcontrolchallenger01","visibility":"private","workspace_id":10,"conversion":{"arch":"x86","accel":"none","python_version":"3.8","requirements":[],"framework":"onnx"},"id":8,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"b85ba125-b7d3-42af-9f60-d65dcbb82260","sha":"31e92d6ccb27b041a324a7ac22cf95d9d6cc3aa7e8263a229f7c4aec4938657c","file_name":"xgb_model.onnx","size":171121},"created_on_version":"2024.4.0","created_by":"john.hansarick@wallaroo.ai","created_at":null,"deployed":false},"config":{"id":13,"model_version_id":8,"runtime":"onnx","filter_threshold":null,"tensor_fields":["tensor"],"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"sidekick_uri":null}}][363876.13]
out._model_split | out.variable
0[{"model_version":{"name":"logcontrolchallenger01","visibility":"private","workspace_id":10,"conversion":{"arch":"x86","accel":"none","python_version":"3.8","requirements":[],"framework":"onnx"},"id":8,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"b85ba125-b7d3-42af-9f60-d65dcbb82260","sha":"31e92d6ccb27b041a324a7ac22cf95d9d6cc3aa7e8263a229f7c4aec4938657c","file_name":"xgb_model.onnx","size":171121},"created_on_version":"2024.4.0","created_by":"john.hansarick@wallaroo.ai","created_at":null,"deployed":false},"config":{"id":13,"model_version_id":8,"runtime":"onnx","filter_threshold":null,"tensor_fields":["tensor"],"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"sidekick_uri":null}}][294952.13]
out._model_split | out.variable
0[{"model_version":{"name":"logcontrolchallenger01","visibility":"private","workspace_id":10,"conversion":{"arch":"x86","accel":"none","python_version":"3.8","requirements":[],"framework":"onnx"},"id":8,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"b85ba125-b7d3-42af-9f60-d65dcbb82260","sha":"31e92d6ccb27b041a324a7ac22cf95d9d6cc3aa7e8263a229f7c4aec4938657c","file_name":"xgb_model.onnx","size":171121},"created_on_version":"2024.4.0","created_by":"john.hansarick@wallaroo.ai","created_at":null,"deployed":false},"config":{"id":13,"model_version_id":8,"runtime":"onnx","filter_threshold":null,"tensor_fields":["tensor"],"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"sidekick_uri":null}}][211170.63]
out._model_split | out.variable
0[{"model_version":{"name":"logcontrol","visibility":"private","workspace_id":10,"conversion":{"arch":"x86","accel":"none","python_version":"3.8","requirements":[],"framework":"onnx"},"id":7,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"9432a030-38e4-4838-a51a-7783a46f13fa","sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6","file_name":"rf_model.onnx","size":225818},"created_on_version":"2024.4.0","created_by":"john.hansarick@wallaroo.ai","created_at":null,"deployed":false},"config":{"id":11,"model_version_id":7,"runtime":"onnx","filter_threshold":null,"tensor_fields":["tensor"],"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"sidekick_uri":null}}][1169642.9]
out._model_split | out.variable
0[{"model_version":{"name":"logcontrol","visibility":"private","workspace_id":10,"conversion":{"arch":"x86","accel":"none","python_version":"3.8","requirements":[],"framework":"onnx"},"id":7,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"9432a030-38e4-4838-a51a-7783a46f13fa","sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6","file_name":"rf_model.onnx","size":225818},"created_on_version":"2024.4.0","created_by":"john.hansarick@wallaroo.ai","created_at":null,"deployed":false},"config":{"id":11,"model_version_id":7,"runtime":"onnx","filter_threshold":null,"tensor_fields":["tensor"],"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"sidekick_uri":null}}][400536.06]
out._model_split | out.variable
0[{"model_version":{"name":"logcontrolchallenger01","visibility":"private","workspace_id":10,"conversion":{"arch":"x86","accel":"none","python_version":"3.8","requirements":[],"framework":"onnx"},"id":8,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"b85ba125-b7d3-42af-9f60-d65dcbb82260","sha":"31e92d6ccb27b041a324a7ac22cf95d9d6cc3aa7e8263a229f7c4aec4938657c","file_name":"xgb_model.onnx","size":171121},"created_on_version":"2024.4.0","created_by":"john.hansarick@wallaroo.ai","created_at":null,"deployed":false},"config":{"id":13,"model_version_id":8,"runtime":"onnx","filter_threshold":null,"tensor_fields":["tensor"],"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"sidekick_uri":null}}][806113.75]
out._model_split | out.variable
0[{"model_version":{"name":"logcontrolchallenger01","visibility":"private","workspace_id":10,"conversion":{"arch":"x86","accel":"none","python_version":"3.8","requirements":[],"framework":"onnx"},"id":8,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"b85ba125-b7d3-42af-9f60-d65dcbb82260","sha":"31e92d6ccb27b041a324a7ac22cf95d9d6cc3aa7e8263a229f7c4aec4938657c","file_name":"xgb_model.onnx","size":171121},"created_on_version":"2024.4.0","created_by":"john.hansarick@wallaroo.ai","created_at":null,"deployed":false},"config":{"id":13,"model_version_id":8,"runtime":"onnx","filter_threshold":null,"tensor_fields":["tensor"],"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"sidekick_uri":null}}][334268.3]
out._model_split | out.variable
0[{"model_version":{"name":"logcontrolchallenger02","visibility":"private","workspace_id":10,"conversion":{"arch":"x86","accel":"none","python_version":"3.8","requirements":[],"framework":"onnx"},"id":9,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"481bf716-3a92-4da3-bc5e-67d3cc140915","sha":"ed6065a79d841f7e96307bb20d5ef22840f15da0b587efb51425c7ad60589d6a","file_name":"gbr_model.onnx","size":214380},"created_on_version":"2024.4.0","created_by":"john.hansarick@wallaroo.ai","created_at":null,"deployed":false},"config":{"id":15,"model_version_id":9,"runtime":"onnx","filter_threshold":null,"tensor_fields":["tensor"],"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"sidekick_uri":null}}][460745.84]
out._model_split | out.variable
0[{"model_version":{"name":"logcontrol","visibility":"private","workspace_id":10,"conversion":{"arch":"x86","accel":"none","python_version":"3.8","requirements":[],"framework":"onnx"},"id":7,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"9432a030-38e4-4838-a51a-7783a46f13fa","sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6","file_name":"rf_model.onnx","size":225818},"created_on_version":"2024.4.0","created_by":"john.hansarick@wallaroo.ai","created_at":null,"deployed":false},"config":{"id":11,"model_version_id":7,"runtime":"onnx","filter_threshold":null,"tensor_fields":["tensor"],"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"sidekick_uri":null}}][448627.8]
out._model_split | out.variable
0[{"model_version":{"name":"logcontrolchallenger01","visibility":"private","workspace_id":10,"conversion":{"arch":"x86","accel":"none","python_version":"3.8","requirements":[],"framework":"onnx"},"id":8,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"b85ba125-b7d3-42af-9f60-d65dcbb82260","sha":"31e92d6ccb27b041a324a7ac22cf95d9d6cc3aa7e8263a229f7c4aec4938657c","file_name":"xgb_model.onnx","size":171121},"created_on_version":"2024.4.0","created_by":"john.hansarick@wallaroo.ai","created_at":null,"deployed":false},"config":{"id":13,"model_version_id":8,"runtime":"onnx","filter_threshold":null,"tensor_fields":["tensor"],"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"sidekick_uri":null}}][482136.63]
out._model_split | out.variable
0[{"model_version":{"name":"logcontrolchallenger01","visibility":"private","workspace_id":10,"conversion":{"arch":"x86","accel":"none","python_version":"3.8","requirements":[],"framework":"onnx"},"id":8,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"b85ba125-b7d3-42af-9f60-d65dcbb82260","sha":"31e92d6ccb27b041a324a7ac22cf95d9d6cc3aa7e8263a229f7c4aec4938657c","file_name":"xgb_model.onnx","size":171121},"created_on_version":"2024.4.0","created_by":"john.hansarick@wallaroo.ai","created_at":null,"deployed":false},"config":{"id":13,"model_version_id":8,"runtime":"onnx","filter_threshold":null,"tensor_fields":["tensor"],"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"sidekick_uri":null}}][330116.7]
out._model_split | out.variable
0[{"model_version":{"name":"logcontrol","visibility":"private","workspace_id":10,"conversion":{"arch":"x86","accel":"none","python_version":"3.8","requirements":[],"framework":"onnx"},"id":7,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"9432a030-38e4-4838-a51a-7783a46f13fa","sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6","file_name":"rf_model.onnx","size":225818},"created_on_version":"2024.4.0","created_by":"john.hansarick@wallaroo.ai","created_at":null,"deployed":false},"config":{"id":11,"model_version_id":7,"runtime":"onnx","filter_threshold":null,"tensor_fields":["tensor"],"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"sidekick_uri":null}}][1048372.44]
out._model_split | out.variable
0[{"model_version":{"name":"logcontrolchallenger01","visibility":"private","workspace_id":10,"conversion":{"arch":"x86","accel":"none","python_version":"3.8","requirements":[],"framework":"onnx"},"id":8,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"b85ba125-b7d3-42af-9f60-d65dcbb82260","sha":"31e92d6ccb27b041a324a7ac22cf95d9d6cc3aa7e8263a229f7c4aec4938657c","file_name":"xgb_model.onnx","size":171121},"created_on_version":"2024.4.0","created_by":"john.hansarick@wallaroo.ai","created_at":null,"deployed":false},"config":{"id":13,"model_version_id":8,"runtime":"onnx","filter_threshold":null,"tensor_fields":["tensor"],"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"sidekick_uri":null}}][535831.6]
out._model_splitout.variable
0[{"model_version":{"name":"logcontrol","visibility":"private","workspace_id":10,"conversion":{"arch":"x86","accel":"none","python_version":"3.8","requirements":[],"framework":"onnx"},"id":7,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"9432a030-38e4-4838-a51a-7783a46f13fa","sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6","file_name":"rf_model.onnx","size":225818},"created_on_version":"2024.4.0","created_by":"john.hansarick@wallaroo.ai","created_at":null,"deployed":false},"config":{"id":11,"model_version_id":7,"runtime":"onnx","filter_threshold":null,"tensor_fields":["tensor"],"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"sidekick_uri":null}}][732735.94]
out._model_splitout.variable
0[{"model_version":{"name":"logcontrolchallenger02","visibility":"private","workspace_id":10,"conversion":{"arch":"x86","accel":"none","python_version":"3.8","requirements":[],"framework":"onnx"},"id":9,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"481bf716-3a92-4da3-bc5e-67d3cc140915","sha":"ed6065a79d841f7e96307bb20d5ef22840f15da0b587efb51425c7ad60589d6a","file_name":"gbr_model.onnx","size":214380},"created_on_version":"2024.4.0","created_by":"john.hansarick@wallaroo.ai","created_at":null,"deployed":false},"config":{"id":15,"model_version_id":9,"runtime":"onnx","filter_threshold":null,"tensor_fields":["tensor"],"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"sidekick_uri":null}}][536182.75]
out._model_splitout.variable
0[{"model_version":{"name":"logcontrol","visibility":"private","workspace_id":10,"conversion":{"arch":"x86","accel":"none","python_version":"3.8","requirements":[],"framework":"onnx"},"id":7,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"9432a030-38e4-4838-a51a-7783a46f13fa","sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6","file_name":"rf_model.onnx","size":225818},"created_on_version":"2024.4.0","created_by":"john.hansarick@wallaroo.ai","created_at":null,"deployed":false},"config":{"id":11,"model_version_id":7,"runtime":"onnx","filter_threshold":null,"tensor_fields":["tensor"],"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"sidekick_uri":null}}][290987.34]
out._model_splitout.variable
0[{"model_version":{"name":"logcontrolchallenger01","visibility":"private","workspace_id":10,"conversion":{"arch":"x86","accel":"none","python_version":"3.8","requirements":[],"framework":"onnx"},"id":8,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"b85ba125-b7d3-42af-9f60-d65dcbb82260","sha":"31e92d6ccb27b041a324a7ac22cf95d9d6cc3aa7e8263a229f7c4aec4938657c","file_name":"xgb_model.onnx","size":171121},"created_on_version":"2024.4.0","created_by":"john.hansarick@wallaroo.ai","created_at":null,"deployed":false},"config":{"id":13,"model_version_id":8,"runtime":"onnx","filter_threshold":null,"tensor_fields":["tensor"],"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"sidekick_uri":null}}][526152.9]
out._model_splitout.variable
0[{"model_version":{"name":"logcontrolchallenger02","visibility":"private","workspace_id":10,"conversion":{"arch":"x86","accel":"none","python_version":"3.8","requirements":[],"framework":"onnx"},"id":9,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"481bf716-3a92-4da3-bc5e-67d3cc140915","sha":"ed6065a79d841f7e96307bb20d5ef22840f15da0b587efb51425c7ad60589d6a","file_name":"gbr_model.onnx","size":214380},"created_on_version":"2024.4.0","created_by":"john.hansarick@wallaroo.ai","created_at":null,"deployed":false},"config":{"id":15,"model_version_id":9,"runtime":"onnx","filter_threshold":null,"tensor_fields":["tensor"],"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"sidekick_uri":null}}][1629122.8]
out._model_splitout.variable
0[{"model_version":{"name":"logcontrolchallenger01","visibility":"private","workspace_id":10,"conversion":{"arch":"x86","accel":"none","python_version":"3.8","requirements":[],"framework":"onnx"},"id":8,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"b85ba125-b7d3-42af-9f60-d65dcbb82260","sha":"31e92d6ccb27b041a324a7ac22cf95d9d6cc3aa7e8263a229f7c4aec4938657c","file_name":"xgb_model.onnx","size":171121},"created_on_version":"2024.4.0","created_by":"john.hansarick@wallaroo.ai","created_at":null,"deployed":false},"config":{"id":13,"model_version_id":8,"runtime":"onnx","filter_threshold":null,"tensor_fields":["tensor"],"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"sidekick_uri":null}}][743430.4]
out._model_splitout.variable
0[{"model_version":{"name":"logcontrolchallenger02","visibility":"private","workspace_id":10,"conversion":{"arch":"x86","accel":"none","python_version":"3.8","requirements":[],"framework":"onnx"},"id":9,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"481bf716-3a92-4da3-bc5e-67d3cc140915","sha":"ed6065a79d841f7e96307bb20d5ef22840f15da0b587efb51425c7ad60589d6a","file_name":"gbr_model.onnx","size":214380},"created_on_version":"2024.4.0","created_by":"john.hansarick@wallaroo.ai","created_at":null,"deployed":false},"config":{"id":15,"model_version_id":9,"runtime":"onnx","filter_threshold":null,"tensor_fields":["tensor"],"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"sidekick_uri":null}}][591678.8]
# Get the logs with the A/B testing information

metadatalogs = mainpipeline.logs(dataset=["time",
                                          "out", 
                                          "metadata"
                                          ]
                                )

display(metadatalogs.loc[:, ['out.variable', 'metadata.last_model']])
Warning: There are more logs available. Please set a larger limit or request a file using export_logs.
    out.variable  metadata.last_model
0   [581003.0]    {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
1   [706823.56]   {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
2   [1060847.5]   {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
3   [441960.38]   {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
4   [827411.0]    {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
...  ...  ...
95  [435628.56]   {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
96  [981676.6]    {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
97  [437177.84]   {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
98  [1208638.0]   {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
99  [448627.72]   {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}

100 rows × 2 columns
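
Since metadata.last_model is a JSON string, the split of inferences across the A/B models can be tallied by parsing out the model name. The following is a minimal sketch using pandas and Python's json module against the metadatalogs DataFrame retrieved above:

import json

# Parse the model name from each metadata.last_model JSON string
# and count how many inferences each model served.
model_counts = metadatalogs['metadata.last_model'].apply(
    lambda entry: json.loads(entry)['model_name']
).value_counts()

display(model_counts)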

# Save the A/B testing log files

mainpipeline.export_logs(directory="abtesting", 
                         file_prefix="abtests", 
                         start_datetime=ab_date_start, 
                         end_datetime=ab_date_end)
display(os.listdir('./abtesting'))
['abtests-1.json']

The following exports the metadata with the log files.

# Save the A/B testing log files with the metadata dataset included

mainpipeline.export_logs(directory="abtesting-metadata", 
                         file_prefix="abtests", 
                         start_datetime=ab_date_start, 
                         end_datetime=ab_date_end,
                         dataset=["time", "out", "metadata"])
display(os.listdir('./abtesting-metadata'))
['abtests-1.json']
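
The exported files can be read back into a DataFrame for offline analysis. A minimal sketch, assuming the export is newline-delimited JSON records (adjust the read options if the file layout differs):

# Read an exported log file back into a pandas DataFrame.
exported_logs = pd.read_json('./abtesting-metadata/abtests-1.json',
                             orient='records',
                             lines=True)
display(exported_logs.head(5))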

A/B Testing Logs via the MLOps API

The following demonstrates retrieving the A/B testing deployment logs via the Wallaroo MLOps API. For brevity, only the first 5 log entries are shown.

# Retrieve the A/B testing logs from the date/time range in ascending order

# retrieve the authorization token
headers = wl.auth.auth_header()

url = f"{wl.api_endpoint}/v1/api/pipelines/get_logs"

# Standard log retrieval

data = {
    'pipeline_name': mainpipeline.name(),
    'workspace_id': workspace_id,
    'order': 'asc',
    'start_time': f'{ab_date_start.isoformat()}',
    'end_time': f'{ab_date_end.isoformat()}'
}

response = requests.post(url, headers=headers, json=data)

standard_logs = pd.DataFrame.from_records(response.json())

display(standard_logs.head(5))
   time  in  out  anomaly  metadata
0  1738864533095  {'index': 'tensor', 'tensor': [3.0, 2.0, 1310.0, 9855.0, 1.0, 0.0, 0.0, 3.0, 7.0, 1310.0, 0.0, 47.7296, -122.241, 1310.0, 8370.0, 52.0, 0.0, 0.0]}  {'_model_split': ['{"model_version":{"name":"logcontrolchallenger01","visibility":"private","workspace_id":10,"conversion":{"arch":"x86","accel":"none","python_version":"3.8","requirements":[],"framework":"onnx"},"id":8,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"b85ba125-b7d3-42af-9f60-d65dcbb82260","sha":"31e92d6ccb27b041a324a7ac22cf95d9d6cc3aa7e8263a229f7c4aec4938657c","file_name":"xgb_model.onnx","size":171121},"created_on_version":"2024.4.0","created_by":"john.hansarick@wallaroo.ai","created_at":null,"deployed":false},"config":{"id":13,"model_version_id":8,"runtime":"onnx","filter_threshold":null,"tensor_fields":["tensor"],"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"sidekick_uri":null}}'], 'variable': [363876.13]}  {'count': 0}  {'last_model': '{"model_name":"logcontrolchallenger01","model_sha":"31e92d6ccb27b041a324a7ac22cf95d9d6cc3aa7e8263a229f7c4aec4938657c"}', 'pipeline_version': 'e0618d64-dbba-42d7-955e-be35ef0b9520', 'elapsed': [8190, 899660], 'dropped': [], 'partition': 'engine-58bbfd85b-crq5t'}
1  1738864533292  {'index': 'tensor', 'tensor': [3.0, 2.0, 1770.0, 7251.0, 1.0, 0.0, 0.0, 4.0, 8.0, 1770.0, 0.0, 47.4087, -122.17, 2560.0, 7210.0, 24.0, 0.0, 0.0]}  {'_model_split': ['{"model_version":{"name":"logcontrolchallenger01","visibility":"private","workspace_id":10,"conversion":{"arch":"x86","accel":"none","python_version":"3.8","requirements":[],"framework":"onnx"},"id":8,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"b85ba125-b7d3-42af-9f60-d65dcbb82260","sha":"31e92d6ccb27b041a324a7ac22cf95d9d6cc3aa7e8263a229f7c4aec4938657c","file_name":"xgb_model.onnx","size":171121},"created_on_version":"2024.4.0","created_by":"john.hansarick@wallaroo.ai","created_at":null,"deployed":false},"config":{"id":13,"model_version_id":8,"runtime":"onnx","filter_threshold":null,"tensor_fields":["tensor"],"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"sidekick_uri":null}}'], 'variable': [294952.13]}  {'count': 0}  {'last_model': '{"model_name":"logcontrolchallenger01","model_sha":"31e92d6ccb27b041a324a7ac22cf95d9d6cc3aa7e8263a229f7c4aec4938657c"}', 'pipeline_version': 'e0618d64-dbba-42d7-955e-be35ef0b9520', 'elapsed': [9830, 469410], 'dropped': [], 'partition': 'engine-58bbfd85b-crq5t'}
2  1738864533494  {'index': 'tensor', 'tensor': [3.0, 1.75, 1200.0, 9266.0, 1.0, 0.0, 0.0, 4.0, 7.0, 1200.0, 0.0, 47.314, -122.208, 1200.0, 9266.0, 54.0, 0.0, 0.0]}  {'_model_split': ['{"model_version":{"name":"logcontrolchallenger01","visibility":"private","workspace_id":10,"conversion":{"arch":"x86","accel":"none","python_version":"3.8","requirements":[],"framework":"onnx"},"id":8,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"b85ba125-b7d3-42af-9f60-d65dcbb82260","sha":"31e92d6ccb27b041a324a7ac22cf95d9d6cc3aa7e8263a229f7c4aec4938657c","file_name":"xgb_model.onnx","size":171121},"created_on_version":"2024.4.0","created_by":"john.hansarick@wallaroo.ai","created_at":null,"deployed":false},"config":{"id":13,"model_version_id":8,"runtime":"onnx","filter_threshold":null,"tensor_fields":["tensor"],"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"sidekick_uri":null}}'], 'variable': [211170.63]}  {'count': 0}  {'last_model': '{"model_name":"logcontrolchallenger01","model_sha":"31e92d6ccb27b041a324a7ac22cf95d9d6cc3aa7e8263a229f7c4aec4938657c"}', 'pipeline_version': 'e0618d64-dbba-42d7-955e-be35ef0b9520', 'elapsed': [11050, 522370], 'dropped': [], 'partition': 'engine-58bbfd85b-crq5t'}
3  1738864533679  {'index': 'tensor', 'tensor': [4.0, 3.5, 3770.0, 8501.0, 2.0, 0.0, 0.0, 3.0, 10.0, 3770.0, 0.0, 47.6744, -122.196, 1520.0, 9660.0, 6.0, 0.0, 0.0]}  {'_model_split': ['{"model_version":{"name":"logcontrol","visibility":"private","workspace_id":10,"conversion":{"arch":"x86","accel":"none","python_version":"3.8","requirements":[],"framework":"onnx"},"id":7,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"9432a030-38e4-4838-a51a-7783a46f13fa","sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6","file_name":"rf_model.onnx","size":225818},"created_on_version":"2024.4.0","created_by":"john.hansarick@wallaroo.ai","created_at":null,"deployed":false},"config":{"id":11,"model_version_id":7,"runtime":"onnx","filter_threshold":null,"tensor_fields":["tensor"],"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"sidekick_uri":null}}'], 'variable': [1169642.9]}  {'count': 0}  {'last_model': '{"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}', 'pipeline_version': 'e0618d64-dbba-42d7-955e-be35ef0b9520', 'elapsed': [10500, 746210], 'dropped': [], 'partition': 'engine-58bbfd85b-crq5t'}
4  1738864533893  {'index': 'tensor', 'tensor': [4.0, 2.5, 2070.0, 2992.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2070.0, 0.0, 47.4496, -122.12, 1900.0, 2957.0, 13.0, 0.0, 0.0]}  {'_model_split': ['{"model_version":{"name":"logcontrol","visibility":"private","workspace_id":10,"conversion":{"arch":"x86","accel":"none","python_version":"3.8","requirements":[],"framework":"onnx"},"id":7,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"9432a030-38e4-4838-a51a-7783a46f13fa","sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6","file_name":"rf_model.onnx","size":225818},"created_on_version":"2024.4.0","created_by":"john.hansarick@wallaroo.ai","created_at":null,"deployed":false},"config":{"id":11,"model_version_id":7,"runtime":"onnx","filter_threshold":null,"tensor_fields":["tensor"],"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"sidekick_uri":null}}'], 'variable': [400536.06]}  {'count': 0}  {'last_model': '{"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}', 'pipeline_version': 'e0618d64-dbba-42d7-955e-be35ef0b9520', 'elapsed': [7980, 335380], 'dropped': [], 'partition': 'engine-58bbfd85b-crq5t'}
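
Because the API response nests the in, out, anomaly, and metadata fields as dictionaries, pandas' json_normalize is a convenient way to flatten them into dotted columns. A minimal sketch against the response retrieved above:

# Flatten the nested response records into dotted columns such as
# anomaly.count and metadata.last_model.
flat_logs = pd.json_normalize(response.json())

display(flat_logs.loc[:, ['time', 'anomaly.count', 'metadata.last_model']].head(5))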

Anomaly Detection Logs

Wallaroo provides validations to detect anomalous data from inference inputs and outputs. Validations are added to a Wallaroo pipeline with the wallaroo.pipeline.add_validations method.

Adding validations takes the format:

pipeline.add_validations(
    validation_name_01 = polars.col(in|out.{column_name}) EXPRESSION,
    validation_name_02 = polars.col(in|out.{column_name}) EXPRESSION
    ...{additional rules}
)
  • validation_name: The user-provided name of the validation. Names must meet Python variable naming requirements.
    • IMPORTANT NOTE: Using the name count as a validation name returns an error. Any validation rule named count is dropped from the request and an error is returned.
  • polars.col(in|out.{column_name}): Specifies the input or output field (aka “column”) in an inference result. Wallaroo inference requests use the format in.{field_name} for inputs and out.{field_name} for outputs.
  • EXPRESSION: The expression to validate. When the expression returns True, an anomaly is detected.

Validation rules are created with the polars library version 0.18.5, which is installed by default with the Wallaroo SDK. polars provides a powerful range of comparison expressions for tracking anomalous data from ML models.
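
As a minimal illustration of the kinds of expressions polars supports (the field name out.variable and the thresholds here are only examples):

import polars as pl

# Each expression returns True when the inference result is anomalous.
example_validations = dict(
    too_high=pl.col("out.variable").list.get(0) > 1000000.0,  # upper bound check
    too_low=pl.col("out.variable").list.get(0) < 50000.0,     # lower bound check
)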

When validations are added to a pipeline, inference request outputs return the following fields:

Field                      Type     Description
anomaly.count              Integer  The total of all validations that returned True.
anomaly.{validation name}  Bool     The output of the validation {validation_name}.

When a validation returns True, an anomaly is detected.

For example, the pipeline below adds the validation fraud, which returns True when the output field dense_1 at index 0 is greater than 0.9; each inference that triggers it returns an anomaly.count of 1.

import polars as pl

sample_pipeline = wl.build_pipeline("sample-pipeline")
sample_pipeline.add_model_step(model)

# add the validation
sample_pipeline.add_validations(
    fraud=pl.col("out.dense_1").list.get(0) > 0.9,
)

# deploy the pipeline
sample_pipeline.deploy()

# sample inference
display(sample_pipeline.infer_from_file("dev_high_fraud.json", data_format='pandas-records'))
   time                     in.tensor                                          out.dense_1  anomaly.count  anomaly.fraud
0  2024-02-02 16:05:42.152  [1.0678324729, 18.1555563975, -1.6589551058, 5…]  [0.981199]  1  True

Anomaly Detection Inference Requests Example

For this example, we create the validation rule too_high which detects houses with a value greater than 1,000,000 and show the output for houses that trigger that validation.

For these examples we’ll create a new pipeline to ensure the logs are “clean” for the samples.

import polars as pl

mainpipeline.undeploy()
newpipeline = wl.build_pipeline("logpipeline-anomaly-example")
newpipeline.clear()
newpipeline.add_model_step(housing_model_control)
newpipeline.add_validations(
    too_high=pl.col("out.variable").list.get(0) > 1000000.0
)

deploy_config = wallaroo.deployment_config.DeploymentConfigBuilder() \
    .cpus(0.25)\
    .build()

newpipeline.deploy(deployment_config=deploy_config, wait_for_status=False)
Deployment initiated for logpipeline-anomaly-example. Please check pipeline status.
name            logpipeline-anomaly-example
created         2025-02-06 17:51:52.522344+00:00
last_updated    2025-02-06 17:58:15.038807+00:00
deployed        True
workspace_id    10
workspace_name  logworkspace
arch            x86
accel           none
tags
versions        e990ea38-796d-4b6f-b9ed-9f888c5a1f36, 5cc88396-a0da-45fb-be7d-6b65042488cb, faf7bb1b-8b33-4aeb-93eb-b2fed3bff8af, c60d3dba-9a69-498c-b3d0-d4cbf4c6e4ca
steps           logcontrol
published       False
# wait for the pipeline status to be Running
import time
time.sleep(15)

while newpipeline.status()['status'] != 'Running':
    print("Waiting for deployment.")
    time.sleep(15)

newpipeline.status()['status']
'Running'
# sample inferences
import datetime
import time
import pytz

inference_start = datetime.datetime.now(datetime.timezone.utc)

# adding sleep to ensure log distinction
time.sleep(15)

results = newpipeline.infer_from_file('./data/test-1000.df.json')

inference_end = datetime.datetime.now(datetime.timezone.utc)

# first 20 results
display(results.head(20))

# only results that trigger the anomaly too_high
results.loc[results['anomaly.too_high'] == True]
    time                     in.tensor  out.variable  anomaly.count  anomaly.too_high
0   2025-02-06 18:01:53.863  [4.0, 2.5, 2900.0, 5505.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2900.0, 0.0, 47.6063, -122.02, 2970.0, 5251.0, 12.0, 0.0, 0.0]  [718013.75]  0  False
1   2025-02-06 18:01:53.863  [2.0, 2.5, 2170.0, 6361.0, 1.0, 0.0, 2.0, 3.0, 8.0, 2170.0, 0.0, 47.7109, -122.017, 2310.0, 7419.0, 6.0, 0.0, 0.0]  [615094.56]  0  False
2   2025-02-06 18:01:53.863  [3.0, 2.5, 1300.0, 812.0, 2.0, 0.0, 0.0, 3.0, 8.0, 880.0, 420.0, 47.5893, -122.317, 1300.0, 824.0, 6.0, 0.0, 0.0]  [448627.72]  0  False
3   2025-02-06 18:01:53.863  [4.0, 2.5, 2500.0, 8540.0, 2.0, 0.0, 0.0, 3.0, 9.0, 2500.0, 0.0, 47.5759, -121.994, 2560.0, 8475.0, 24.0, 0.0, 0.0]  [758714.2]  0  False
4   2025-02-06 18:01:53.863  [3.0, 1.75, 2200.0, 11520.0, 1.0, 0.0, 0.0, 4.0, 7.0, 2200.0, 0.0, 47.7659, -122.341, 1690.0, 8038.0, 62.0, 0.0, 0.0]  [513264.7]  0  False
5   2025-02-06 18:01:53.863  [3.0, 2.0, 2140.0, 4923.0, 1.0, 0.0, 0.0, 4.0, 8.0, 1070.0, 1070.0, 47.6902, -122.339, 1470.0, 4923.0, 86.0, 0.0, 0.0]  [668288.0]  0  False
6   2025-02-06 18:01:53.863  [4.0, 3.5, 3590.0, 5334.0, 2.0, 0.0, 2.0, 3.0, 9.0, 3140.0, 450.0, 47.6763, -122.267, 2100.0, 6250.0, 9.0, 0.0, 0.0]  [1004846.5]  1  True
7   2025-02-06 18:01:53.863  [3.0, 2.0, 1280.0, 960.0, 2.0, 0.0, 0.0, 3.0, 9.0, 1040.0, 240.0, 47.602, -122.311, 1280.0, 1173.0, 0.0, 0.0, 0.0]  [684577.2]  0  False
8   2025-02-06 18:01:53.863  [4.0, 2.5, 2820.0, 15000.0, 2.0, 0.0, 0.0, 4.0, 9.0, 2820.0, 0.0, 47.7255, -122.101, 2440.0, 15000.0, 29.0, 0.0, 0.0]  [727898.1]  0  False
9   2025-02-06 18:01:53.863  [3.0, 2.25, 1790.0, 11393.0, 1.0, 0.0, 0.0, 3.0, 8.0, 1790.0, 0.0, 47.6297, -122.099, 2290.0, 11894.0, 36.0, 0.0, 0.0]  [559631.1]  0  False
10  2025-02-06 18:01:53.863  [3.0, 1.5, 1010.0, 7683.0, 1.5, 0.0, 0.0, 5.0, 7.0, 1010.0, 0.0, 47.72, -122.318, 1550.0, 7271.0, 61.0, 0.0, 0.0]  [340764.53]  0  False
11  2025-02-06 18:01:53.863  [3.0, 2.0, 1270.0, 1323.0, 3.0, 0.0, 0.0, 3.0, 8.0, 1270.0, 0.0, 47.6934, -122.342, 1330.0, 1323.0, 8.0, 0.0, 0.0]  [442168.06]  0  False
12  2025-02-06 18:01:53.863  [4.0, 1.75, 2070.0, 9120.0, 1.0, 0.0, 0.0, 4.0, 7.0, 1250.0, 820.0, 47.6045, -122.123, 1650.0, 8400.0, 57.0, 0.0, 0.0]  [630865.6]  0  False
13  2025-02-06 18:01:53.863  [4.0, 1.0, 1620.0, 4080.0, 1.5, 0.0, 0.0, 3.0, 7.0, 1620.0, 0.0, 47.6696, -122.324, 1760.0, 4080.0, 91.0, 0.0, 0.0]  [559631.1]  0  False
14  2025-02-06 18:01:53.863  [4.0, 3.25, 3990.0, 9786.0, 2.0, 0.0, 0.0, 3.0, 9.0, 3990.0, 0.0, 47.6784, -122.026, 3920.0, 8200.0, 10.0, 0.0, 0.0]  [909441.1]  0  False
15  2025-02-06 18:01:53.863  [4.0, 2.0, 1780.0, 19843.0, 1.0, 0.0, 0.0, 3.0, 7.0, 1780.0, 0.0, 47.4414, -122.154, 2210.0, 13500.0, 52.0, 0.0, 0.0]  [313096.0]  0  False
16  2025-02-06 18:01:53.863  [4.0, 2.5, 2130.0, 6003.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2130.0, 0.0, 47.4518, -122.12, 1940.0, 4529.0, 11.0, 0.0, 0.0]  [404040.8]  0  False
17  2025-02-06 18:01:53.863  [3.0, 1.75, 1660.0, 10440.0, 1.0, 0.0, 0.0, 3.0, 7.0, 1040.0, 620.0, 47.4448, -121.77, 1240.0, 10380.0, 36.0, 0.0, 0.0]  [292859.5]  0  False
18  2025-02-06 18:01:53.863  [3.0, 2.5, 2110.0, 4118.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2110.0, 0.0, 47.3878, -122.153, 2110.0, 4044.0, 25.0, 0.0, 0.0]  [338357.88]  0  False
19  2025-02-06 18:01:53.863  [4.0, 2.25, 2200.0, 11250.0, 1.5, 0.0, 0.0, 5.0, 7.0, 1300.0, 900.0, 47.6845, -122.201, 2320.0, 10814.0, 94.0, 0.0, 0.0]  [682284.6]  0  False
     time                     in.tensor  out.variable  anomaly.count  anomaly.too_high
6    2025-02-06 18:01:53.863  [4.0, 3.5, 3590.0, 5334.0, 2.0, 0.0, 2.0, 3.0, 9.0, 3140.0, 450.0, 47.6763, -122.267, 2100.0, 6250.0, 9.0, 0.0, 0.0]  [1004846.5]  1  True
30   2025-02-06 18:01:53.863  [4.0, 3.0, 3710.0, 20000.0, 2.0, 0.0, 2.0, 5.0, 10.0, 2760.0, 950.0, 47.6696, -122.261, 3970.0, 20000.0, 79.0, 0.0, 0.0]  [1514079.8]  1  True
40   2025-02-06 18:01:53.863  [4.0, 4.5, 5120.0, 41327.0, 2.0, 0.0, 0.0, 3.0, 10.0, 3290.0, 1830.0, 47.7009, -122.059, 3360.0, 82764.0, 6.0, 0.0, 0.0]  [1204324.8]  1  True
63   2025-02-06 18:01:53.863  [4.0, 3.0, 4040.0, 19700.0, 2.0, 0.0, 0.0, 3.0, 11.0, 4040.0, 0.0, 47.7205, -122.127, 3930.0, 21887.0, 27.0, 0.0, 0.0]  [1028923.06]  1  True
110  2025-02-06 18:01:53.863  [4.0, 2.5, 3470.0, 20445.0, 2.0, 0.0, 0.0, 4.0, 10.0, 3470.0, 0.0, 47.547, -122.219, 3360.0, 21950.0, 51.0, 0.0, 0.0]  [1412215.3]  1  True
130  2025-02-06 18:01:53.863  [4.0, 2.75, 2620.0, 13777.0, 1.5, 0.0, 2.0, 4.0, 9.0, 1720.0, 900.0, 47.58, -122.285, 3530.0, 9287.0, 88.0, 0.0, 0.0]  [1223839.1]  1  True
133  2025-02-06 18:01:53.863  [5.0, 2.25, 3320.0, 13138.0, 1.0, 0.0, 2.0, 4.0, 9.0, 1900.0, 1420.0, 47.759, -122.269, 2820.0, 13138.0, 51.0, 0.0, 0.0]  [1108000.1]  1  True
154  2025-02-06 18:01:53.863  [4.0, 2.75, 3800.0, 9606.0, 2.0, 0.0, 0.0, 3.0, 9.0, 3800.0, 0.0, 47.7368, -122.208, 3400.0, 9677.0, 6.0, 0.0, 0.0]  [1039781.25]  1  True
160  2025-02-06 18:01:53.863  [5.0, 3.5, 4150.0, 13232.0, 2.0, 0.0, 0.0, 3.0, 11.0, 4150.0, 0.0, 47.3417, -122.182, 3840.0, 15121.0, 9.0, 0.0, 0.0]  [1042119.1]  1  True
210  2025-02-06 18:01:53.863  [4.0, 3.5, 4300.0, 70407.0, 2.0, 0.0, 0.0, 3.0, 10.0, 2710.0, 1590.0, 47.4472, -122.092, 3520.0, 26727.0, 22.0, 0.0, 0.0]  [1115275.0]  1  True
239  2025-02-06 18:01:53.863  [4.0, 3.25, 5010.0, 49222.0, 2.0, 0.0, 0.0, 5.0, 9.0, 3710.0, 1300.0, 47.5489, -122.092, 3140.0, 54014.0, 36.0, 0.0, 0.0]  [1092274.1]  1  True
248  2025-02-06 18:01:53.863  [4.0, 3.75, 4410.0, 8112.0, 3.0, 0.0, 4.0, 3.0, 11.0, 3570.0, 840.0, 47.5888, -122.392, 2770.0, 5750.0, 12.0, 0.0, 0.0]  [1967344.1]  1  True
255  2025-02-06 18:01:53.863  [4.0, 3.0, 4750.0, 21701.0, 1.5, 0.0, 0.0, 5.0, 11.0, 4750.0, 0.0, 47.6454, -122.218, 3120.0, 18551.0, 38.0, 0.0, 0.0]  [2002393.5]  1  True
271  2025-02-06 18:01:53.863  [5.0, 3.25, 5790.0, 13726.0, 2.0, 0.0, 3.0, 3.0, 10.0, 4430.0, 1360.0, 47.5388, -122.114, 5790.0, 13726.0, 0.0, 0.0, 0.0]  [1189654.4]  1  True
281  2025-02-06 18:01:53.863  [3.0, 3.0, 3570.0, 6250.0, 2.0, 0.0, 2.0, 3.0, 10.0, 2710.0, 860.0, 47.5624, -122.399, 2550.0, 7596.0, 30.0, 0.0, 0.0]  [1124493.3]  1  True
282  2025-02-06 18:01:53.863  [3.0, 2.75, 3170.0, 34850.0, 1.0, 0.0, 0.0, 5.0, 9.0, 3170.0, 0.0, 47.6611, -122.169, 3920.0, 36740.0, 58.0, 0.0, 0.0]  [1227073.8]  1  True
283  2025-02-06 18:01:53.863  [4.0, 2.75, 3260.0, 19542.0, 1.0, 0.0, 0.0, 4.0, 10.0, 2170.0, 1090.0, 47.6245, -122.236, 3480.0, 19863.0, 46.0, 0.0, 0.0]  [1364650.3]  1  True
285  2025-02-06 18:01:53.863  [4.0, 2.75, 4020.0, 18745.0, 2.0, 0.0, 4.0, 4.0, 10.0, 2830.0, 1190.0, 47.6042, -122.21, 3150.0, 20897.0, 26.0, 0.0, 0.0]  [1322835.9]  1  True
323  2025-02-06 18:01:53.863  [3.0, 3.0, 2480.0, 5500.0, 2.0, 0.0, 3.0, 3.0, 10.0, 1730.0, 750.0, 47.6466, -122.404, 2950.0, 5670.0, 64.0, 1.0, 55.0]  [1100884.1]  1  True
351  2025-02-06 18:01:53.863  [5.0, 4.0, 4660.0, 9900.0, 2.0, 0.0, 2.0, 4.0, 9.0, 2600.0, 2060.0, 47.5135, -122.2, 3380.0, 9900.0, 35.0, 0.0, 0.0]  [1058105.0]  1  True
360  2025-02-06 18:01:53.863  [4.0, 3.5, 3770.0, 8501.0, 2.0, 0.0, 0.0, 3.0, 10.0, 3770.0, 0.0, 47.6744, -122.196, 1520.0, 9660.0, 6.0, 0.0, 0.0]  [1169643.0]  1  True
398  2025-02-06 18:01:53.863  [3.0, 2.25, 2390.0, 7875.0, 1.0, 0.0, 1.0, 3.0, 10.0, 1980.0, 410.0, 47.6515, -122.278, 3720.0, 9075.0, 66.0, 0.0, 0.0]  [1364149.9]  1  True
414  2025-02-06 18:01:53.863  [5.0, 3.5, 5430.0, 10327.0, 2.0, 0.0, 2.0, 3.0, 10.0, 4010.0, 1420.0, 47.5476, -122.116, 4340.0, 10324.0, 7.0, 0.0, 0.0]  [1207858.6]  1  True
443  2025-02-06 18:01:53.863  [5.0, 4.0, 4360.0, 8030.0, 2.0, 0.0, 0.0, 3.0, 10.0, 4360.0, 0.0, 47.5923, -121.973, 3570.0, 6185.0, 0.0, 0.0, 0.0]  [1160512.8]  1  True
497  2025-02-06 18:01:53.863  [4.0, 2.5, 4090.0, 11225.0, 2.0, 0.0, 0.0, 3.0, 10.0, 4090.0, 0.0, 47.581, -121.971, 3510.0, 8762.0, 9.0, 0.0, 0.0]  [1048372.4]  1  True
513  2025-02-06 18:01:53.863  [4.0, 3.25, 3320.0, 8587.0, 3.0, 0.0, 0.0, 3.0, 11.0, 2950.0, 370.0, 47.691, -122.337, 1860.0, 5668.0, 6.0, 0.0, 0.0]  [1130661.0]  1  True
520  2025-02-06 18:01:53.863  [5.0, 3.75, 4170.0, 8142.0, 2.0, 0.0, 2.0, 3.0, 10.0, 4170.0, 0.0, 47.5354, -122.181, 3030.0, 7980.0, 9.0, 0.0, 0.0]  [1098628.8]  1  True
530  2025-02-06 18:01:53.863  [4.0, 4.25, 3500.0, 8750.0, 1.0, 0.0, 4.0, 5.0, 9.0, 2140.0, 1360.0, 47.7222, -122.367, 3110.0, 8750.0, 63.0, 0.0, 0.0]  [1140733.8]  1  True
535  2025-02-06 18:01:53.863  [4.0, 3.5, 4460.0, 16271.0, 2.0, 0.0, 2.0, 3.0, 11.0, 4460.0, 0.0, 47.5862, -121.97, 4540.0, 17122.0, 13.0, 0.0, 0.0]  [1208638.0]  1  True
556  2025-02-06 18:01:53.863  [4.0, 3.5, 4285.0, 9567.0, 2.0, 0.0, 1.0, 5.0, 10.0, 3485.0, 800.0, 47.6434, -122.409, 2960.0, 6902.0, 68.0, 0.0, 0.0]  [1886959.4]  1  True
623  2025-02-06 18:01:53.863  [4.0, 3.25, 4240.0, 25639.0, 2.0, 0.0, 3.0, 3.0, 10.0, 3550.0, 690.0, 47.3241, -122.378, 3590.0, 24967.0, 25.0, 0.0, 0.0]  [1156651.3]  1  True
624  2025-02-06 18:01:53.863  [4.0, 3.5, 3440.0, 9776.0, 2.0, 0.0, 0.0, 3.0, 10.0, 3440.0, 0.0, 47.5374, -122.216, 2400.0, 11000.0, 9.0, 0.0, 0.0]  [1124493.3]  1  True
634  2025-02-06 18:01:53.863  [4.0, 3.25, 4700.0, 38412.0, 2.0, 0.0, 0.0, 3.0, 10.0, 3420.0, 1280.0, 47.6445, -122.167, 3640.0, 35571.0, 36.0, 0.0, 0.0]  [1164589.4]  1  True
651  2025-02-06 18:01:53.863  [3.0, 3.0, 3920.0, 13085.0, 2.0, 1.0, 4.0, 4.0, 11.0, 3920.0, 0.0, 47.5716, -122.204, 3450.0, 13287.0, 18.0, 0.0, 0.0]  [1452224.5]  1  True
658  2025-02-06 18:01:53.863  [3.0, 3.25, 3230.0, 7800.0, 2.0, 0.0, 3.0, 3.0, 10.0, 3230.0, 0.0, 47.6348, -122.403, 3030.0, 6600.0, 9.0, 0.0, 0.0]  [1077279.3]  1  True
671  2025-02-06 18:01:53.863  [3.0, 3.5, 3080.0, 6495.0, 2.0, 0.0, 3.0, 3.0, 11.0, 2530.0, 550.0, 47.6321, -122.393, 4120.0, 8620.0, 18.0, 1.0, 10.0]  [1122811.8]  1  True
685  2025-02-06 18:01:53.863  [4.0, 2.5, 4200.0, 35267.0, 2.0, 0.0, 0.0, 3.0, 11.0, 4200.0, 0.0, 47.7108, -122.071, 3540.0, 22234.0, 24.0, 0.0, 0.0]  [1181336.0]  1  True
686  2025-02-06 18:01:53.863  [4.0, 3.25, 4160.0, 47480.0, 2.0, 0.0, 0.0, 3.0, 10.0, 4160.0, 0.0, 47.7266, -122.115, 3400.0, 40428.0, 19.0, 0.0, 0.0]  [1082353.3]  1  True
698  2025-02-06 18:01:53.863  [4.0, 4.5, 5770.0, 10050.0, 1.0, 0.0, 3.0, 5.0, 9.0, 3160.0, 2610.0, 47.677, -122.275, 2950.0, 6700.0, 65.0, 0.0, 0.0]  [1689843.3]  1  True
711  2025-02-06 18:01:53.863  [3.0, 2.5, 5403.0, 24069.0, 2.0, 1.0, 4.0, 4.0, 12.0, 5403.0, 0.0, 47.4169, -122.348, 3980.0, 104374.0, 39.0, 0.0, 0.0]  [1946437.3]  1  True
720  2025-02-06 18:01:53.863  [5.0, 3.0, 3420.0, 18129.0, 2.0, 0.0, 0.0, 3.0, 9.0, 2540.0, 880.0, 47.5333, -122.217, 3750.0, 16316.0, 62.0, 1.0, 53.0]  [1325961.0]  1  True
722  2025-02-06 18:01:53.863  [3.0, 3.25, 4560.0, 13363.0, 1.0, 0.0, 4.0, 3.0, 11.0, 2760.0, 1800.0, 47.6205, -122.214, 4060.0, 13362.0, 20.0, 0.0, 0.0]  [2005883.1]  1  True
726  2025-02-06 18:01:53.863  [5.0, 3.5, 4200.0, 5400.0, 2.0, 0.0, 0.0, 3.0, 9.0, 3140.0, 1060.0, 47.7077, -122.12, 3300.0, 5564.0, 2.0, 0.0, 0.0]  [1052898.0]  1  True
737  2025-02-06 18:01:53.863  [4.0, 3.25, 2980.0, 7000.0, 2.0, 0.0, 3.0, 3.0, 10.0, 2140.0, 840.0, 47.5933, -122.292, 2200.0, 4800.0, 114.0, 1.0, 114.0]  [1156206.5]  1  True
740  2025-02-06 18:01:53.863  [4.0, 4.5, 6380.0, 88714.0, 2.0, 0.0, 0.0, 3.0, 12.0, 6380.0, 0.0, 47.5592, -122.015, 3040.0, 7113.0, 8.0, 0.0, 0.0]  [1355747.1]  1  True
782  2025-02-06 18:01:53.863  [5.0, 4.25, 4860.0, 9453.0, 1.5, 0.0, 1.0, 5.0, 10.0, 3100.0, 1760.0, 47.6196, -122.286, 3150.0, 8557.0, 109.0, 0.0, 0.0]  [1910823.8]  1  True
798  2025-02-06 18:01:53.863  [4.0, 2.5, 2790.0, 5450.0, 2.0, 0.0, 0.0, 3.0, 10.0, 1930.0, 860.0, 47.6453, -122.303, 2320.0, 5450.0, 89.0, 1.0, 75.0]  [1097757.4]  1  True
818  2025-02-06 18:01:53.863  [4.0, 4.0, 4620.0, 130208.0, 2.0, 0.0, 0.0, 3.0, 10.0, 4620.0, 0.0, 47.5885, -121.939, 4620.0, 131007.0, 1.0, 0.0, 0.0]  [1164589.4]  1  True
827  2025-02-06 18:01:53.863  [4.0, 2.5, 3340.0, 10422.0, 2.0, 0.0, 0.0, 3.0, 10.0, 3340.0, 0.0, 47.6515, -122.197, 1770.0, 9490.0, 18.0, 0.0, 0.0]  [1103101.4]  1  True
828  2025-02-06 18:01:53.863  [5.0, 3.5, 3760.0, 10207.0, 2.0, 0.0, 0.0, 3.0, 10.0, 3150.0, 610.0, 47.5605, -122.225, 3550.0, 12118.0, 46.0, 0.0, 0.0]  [1489624.5]  1  True
901  2025-02-06 18:01:53.863  [4.0, 2.25, 4470.0, 60373.0, 2.0, 0.0, 0.0, 3.0, 11.0, 4470.0, 0.0, 47.7289, -122.127, 3210.0, 40450.0, 26.0, 0.0, 0.0]  [1208638.0]  1  True
912  2025-02-06 18:01:53.863  [3.0, 2.25, 2960.0, 8330.0, 1.0, 0.0, 3.0, 4.0, 10.0, 2260.0, 700.0, 47.7035, -122.385, 2960.0, 8840.0, 62.0, 0.0, 0.0]  [1178314.0]  1  True
919  2025-02-06 18:01:53.863  [4.0, 3.25, 5180.0, 19850.0, 2.0, 0.0, 3.0, 3.0, 12.0, 3540.0, 1640.0, 47.562, -122.162, 3160.0, 9750.0, 9.0, 0.0, 0.0]  [1295531.3]  1  True
941  2025-02-06 18:01:53.863  [4.0, 3.75, 3770.0, 4000.0, 2.5, 0.0, 0.0, 5.0, 9.0, 2890.0, 880.0, 47.6157, -122.287, 2800.0, 5000.0, 98.0, 0.0, 0.0]  [1182821.0]  1  True
965  2025-02-06 18:01:53.863  [6.0, 4.0, 5310.0, 12741.0, 2.0, 0.0, 2.0, 3.0, 10.0, 3600.0, 1710.0, 47.5696, -122.213, 4190.0, 12632.0, 48.0, 0.0, 0.0]  [2016006.0]  1  True
973  2025-02-06 18:01:53.863  [5.0, 2.0, 3540.0, 9970.0, 2.0, 0.0, 3.0, 3.0, 9.0, 3540.0, 0.0, 47.7108, -122.277, 2280.0, 7195.0, 44.0, 0.0, 0.0]  [1085835.8]  1  True
997  2025-02-06 18:01:53.863  [4.0, 3.25, 2910.0, 1880.0, 2.0, 0.0, 3.0, 5.0, 9.0, 1830.0, 1080.0, 47.616, -122.282, 3100.0, 8200.0, 100.0, 0.0, 0.0]  [1060847.5]  1  True
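
A quick sanity check on the results above is to count how many of the inferences triggered the validation; a one-line sketch:

# Number of inferences where the too_high validation returned True
display(int(results['anomaly.too_high'].sum()))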

Anomaly Detection Logs

Pipeline logs retrieved with wallaroo.pipeline.logs include the anomaly dataset.

anomaly_logs = newpipeline.logs(limit=1000)
display(anomaly_logs.loc[anomaly_logs['anomaly.too_high'] == True])
Warning: There are more logs available. Please set a larger limit or request a file using export_logs.
     time                     in.tensor  out.variable  anomaly.count  anomaly.too_high
2    2025-02-06 18:01:53.863  [4.0, 3.25, 2910.0, 1880.0, 2.0, 0.0, 3.0, 5.0, 9.0, 1830.0, 1080.0, 47.616, -122.282, 3100.0, 8200.0, 100.0, 0.0, 0.0]  [1060847.5]  1  True
26   2025-02-06 18:01:53.863  [5.0, 2.0, 3540.0, 9970.0, 2.0, 0.0, 3.0, 3.0, 9.0, 3540.0, 0.0, 47.7108, -122.277, 2280.0, 7195.0, 44.0, 0.0, 0.0]  [1085835.8]  1  True
34   2025-02-06 18:01:53.863  [6.0, 4.0, 5310.0, 12741.0, 2.0, 0.0, 2.0, 3.0, 10.0, 3600.0, 1710.0, 47.5696, -122.213, 4190.0, 12632.0, 48.0, 0.0, 0.0]  [2016006.0]  1  True
58   2025-02-06 18:01:53.863  [4.0, 3.75, 3770.0, 4000.0, 2.5, 0.0, 0.0, 5.0, 9.0, 2890.0, 880.0, 47.6157, -122.287, 2800.0, 5000.0, 98.0, 0.0, 0.0]  [1182821.0]  1  True
80   2025-02-06 18:01:53.863  [4.0, 3.25, 5180.0, 19850.0, 2.0, 0.0, 3.0, 3.0, 12.0, 3540.0, 1640.0, 47.562, -122.162, 3160.0, 9750.0, 9.0, 0.0, 0.0]  [1295531.2]  1  True
87   2025-02-06 18:01:53.863  [3.0, 2.25, 2960.0, 8330.0, 1.0, 0.0, 3.0, 4.0, 10.0, 2260.0, 700.0, 47.7035, -122.385, 2960.0, 8840.0, 62.0, 0.0, 0.0]  [1178314.0]  1  True
98   2025-02-06 18:01:53.863  [4.0, 2.25, 4470.0, 60373.0, 2.0, 0.0, 0.0, 3.0, 11.0, 4470.0, 0.0, 47.7289, -122.127, 3210.0, 40450.0, 26.0, 0.0, 0.0]  [1208638.0]  1  True
171  2025-02-06 18:01:53.863  [5.0, 3.5, 3760.0, 10207.0, 2.0, 0.0, 0.0, 3.0, 10.0, 3150.0, 610.0, 47.5605, -122.225, 3550.0, 12118.0, 46.0, 0.0, 0.0]  [1489624.5]  1  True
172  2025-02-06 18:01:53.863  [4.0, 2.5, 3340.0, 10422.0, 2.0, 0.0, 0.0, 3.0, 10.0, 3340.0, 0.0, 47.6515, -122.197, 1770.0, 9490.0, 18.0, 0.0, 0.0]  [1103101.4]  1  True
181  2025-02-06 18:01:53.863  [4.0, 4.0, 4620.0, 130208.0, 2.0, 0.0, 0.0, 3.0, 10.0, 4620.0, 0.0, 47.5885, -121.939, 4620.0, 131007.0, 1.0, 0.0, 0.0]  [1164589.4]  1  True
201  2025-02-06 18:01:53.863  [4.0, 2.5, 2790.0, 5450.0, 2.0, 0.0, 0.0, 3.0, 10.0, 1930.0, 860.0, 47.6453, -122.303, 2320.0, 5450.0, 89.0, 1.0, 75.0]  [1097757.4]  1  True
217  2025-02-06 18:01:53.863  [5.0, 4.25, 4860.0, 9453.0, 1.5, 0.0, 1.0, 5.0, 10.0, 3100.0, 1760.0, 47.6196, -122.286, 3150.0, 8557.0, 109.0, 0.0, 0.0]  [1910823.8]  1  True
259  2025-02-06 18:01:53.863  [4.0, 4.5, 6380.0, 88714.0, 2.0, 0.0, 0.0, 3.0, 12.0, 6380.0, 0.0, 47.5592, -122.015, 3040.0, 7113.0, 8.0, 0.0, 0.0]  [1355747.1]  1  True
262  2025-02-06 18:01:53.863  [4.0, 3.25, 2980.0, 7000.0, 2.0, 0.0, 3.0, 3.0, 10.0, 2140.0, 840.0, 47.5933, -122.292, 2200.0, 4800.0, 114.0, 1.0, 114.0]  [1156206.5]  1  True
273  2025-02-06 18:01:53.863  [5.0, 3.5, 4200.0, 5400.0, 2.0, 0.0, 0.0, 3.0, 9.0, 3140.0, 1060.0, 47.7077, -122.12, 3300.0, 5564.0, 2.0, 0.0, 0.0]  [1052898.0]  1  True
277  2025-02-06 18:01:53.863  [3.0, 3.25, 4560.0, 13363.0, 1.0, 0.0, 4.0, 3.0, 11.0, 2760.0, 1800.0, 47.6205, -122.214, 4060.0, 13362.0, 20.0, 0.0, 0.0]  [2005883.1]  1  True
279  2025-02-06 18:01:53.863  [5.0, 3.0, 3420.0, 18129.0, 2.0, 0.0, 0.0, 3.0, 9.0, 2540.0, 880.0, 47.5333, -122.217, 3750.0, 16316.0, 62.0, 1.0, 53.0]  [1325961.0]  1  True
288  2025-02-06 18:01:53.863  [3.0, 2.5, 5403.0, 24069.0, 2.0, 1.0, 4.0, 4.0, 12.0, 5403.0, 0.0, 47.4169, -122.348, 3980.0, 104374.0, 39.0, 0.0, 0.0]  [1946437.2]  1  True
301  2025-02-06 18:01:53.863  [4.0, 4.5, 5770.0, 10050.0, 1.0, 0.0, 3.0, 5.0, 9.0, 3160.0, 2610.0, 47.677, -122.275, 2950.0, 6700.0, 65.0, 0.0, 0.0]  [1689843.2]  1  True
313  2025-02-06 18:01:53.863  [4.0, 3.25, 4160.0, 47480.0, 2.0, 0.0, 0.0, 3.0, 10.0, 4160.0, 0.0, 47.7266, -122.115, 3400.0, 40428.0, 19.0, 0.0, 0.0]  [1082353.2]  1  True
314  2025-02-06 18:01:53.863  [4.0, 2.5, 4200.0, 35267.0, 2.0, 0.0, 0.0, 3.0, 11.0, 4200.0, 0.0, 47.7108, -122.071, 3540.0, 22234.0, 24.0, 0.0, 0.0]  [1181336.0]  1  True
328  2025-02-06 18:01:53.863  [3.0, 3.5, 3080.0, 6495.0, 2.0, 0.0, 3.0, 3.0, 11.0, 2530.0, 550.0, 47.6321, -122.393, 4120.0, 8620.0, 18.0, 1.0, 10.0]  [1122811.8]  1  True
341  2025-02-06 18:01:53.863  [3.0, 3.25, 3230.0, 7800.0, 2.0, 0.0, 3.0, 3.0, 10.0, 3230.0, 0.0, 47.6348, -122.403, 3030.0, 6600.0, 9.0, 0.0, 0.0]  [1077279.2]  1  True
348  2025-02-06 18:01:53.863  [3.0, 3.0, 3920.0, 13085.0, 2.0, 1.0, 4.0, 4.0, 11.0, 3920.0, 0.0, 47.5716, -122.204, 3450.0, 13287.0, 18.0, 0.0, 0.0]  [1452224.5]  1  True
365  2025-02-06 18:01:53.863  [4.0, 3.25, 4700.0, 38412.0, 2.0, 0.0, 0.0, 3.0, 10.0, 3420.0, 1280.0, 47.6445, -122.167, 3640.0, 35571.0, 36.0, 0.0, 0.0]  [1164589.4]  1  True
375  2025-02-06 18:01:53.863  [4.0, 3.5, 3440.0, 9776.0, 2.0, 0.0, 0.0, 3.0, 10.0, 3440.0, 0.0, 47.5374, -122.216, 2400.0, 11000.0, 9.0, 0.0, 0.0]  [1124493.2]  1  True
376  2025-02-06 18:01:53.863  [4.0, 3.25, 4240.0, 25639.0, 2.0, 0.0, 3.0, 3.0, 10.0, 3550.0, 690.0, 47.3241, -122.378, 3590.0, 24967.0, 25.0, 0.0, 0.0]  [1156651.2]  1  True
443  2025-02-06 18:01:53.863  [4.0, 3.5, 4285.0, 9567.0, 2.0, 0.0, 1.0, 5.0, 10.0, 3485.0, 800.0, 47.6434, -122.409, 2960.0, 6902.0, 68.0, 0.0, 0.0]  [1886959.4]  1  True
464  2025-02-06 18:01:53.863  [4.0, 3.5, 4460.0, 16271.0, 2.0, 0.0, 2.0, 3.0, 11.0, 4460.0, 0.0, 47.5862, -121.97, 4540.0, 17122.0, 13.0, 0.0, 0.0]  [1208638.0]  1  True
469  2025-02-06 18:01:53.863  [4.0, 4.25, 3500.0, 8750.0, 1.0, 0.0, 4.0, 5.0, 9.0, 2140.0, 1360.0, 47.7222, -122.367, 3110.0, 8750.0, 63.0, 0.0, 0.0]  [1140733.8]  1  True
479  2025-02-06 18:01:53.863  [5.0, 3.75, 4170.0, 8142.0, 2.0, 0.0, 2.0, 3.0, 10.0, 4170.0, 0.0, 47.5354, -122.181, 3030.0, 7980.0, 9.0, 0.0, 0.0]  [1098628.8]  1  True
486  2025-02-06 18:01:53.863  [4.0, 3.25, 3320.0, 8587.0, 3.0, 0.0, 0.0, 3.0, 11.0, 2950.0, 370.0, 47.691, -122.337, 1860.0, 5668.0, 6.0, 0.0, 0.0]  [1130661.0]  1  True
502  2025-02-06 18:01:53.863  [4.0, 2.5, 4090.0, 11225.0, 2.0, 0.0, 0.0, 3.0, 10.0, 4090.0, 0.0, 47.581, -121.971, 3510.0, 8762.0, 9.0, 0.0, 0.0]  [1048372.4]  1  True
556  2025-02-06 18:01:53.863  [5.0, 4.0, 4360.0, 8030.0, 2.0, 0.0, 0.0, 3.0, 10.0, 4360.0, 0.0, 47.5923, -121.973, 3570.0, 6185.0, 0.0, 0.0, 0.0]  [1160512.8]  1  True
585  2025-02-06 18:01:53.863  [5.0, 3.5, 5430.0, 10327.0, 2.0, 0.0, 2.0, 3.0, 10.0, 4010.0, 1420.0, 47.5476, -122.116, 4340.0, 10324.0, 7.0, 0.0, 0.0]  [1207858.6]  1  True
601  2025-02-06 18:01:53.863  [3.0, 2.25, 2390.0, 7875.0, 1.0, 0.0, 1.0, 3.0, 10.0, 1980.0, 410.0, 47.6515, -122.278, 3720.0, 9075.0, 66.0, 0.0, 0.0]  [1364149.9]  1  True
639  2025-02-06 18:01:53.863  [4.0, 3.5, 3770.0, 8501.0, 2.0, 0.0, 0.0, 3.0, 10.0, 3770.0, 0.0, 47.6744, -122.196, 1520.0, 9660.0, 6.0, 0.0, 0.0]  [1169643.0]  1  True
648  2025-02-06 18:01:53.863  [5.0, 4.0, 4660.0, 9900.0, 2.0, 0.0, 2.0, 4.0, 9.0, 2600.0, 2060.0, 47.5135, -122.2, 3380.0, 9900.0, 35.0, 0.0, 0.0]  [1058105.0]  1  True
676  2025-02-06 18:01:53.863  [3.0, 3.0, 2480.0, 5500.0, 2.0, 0.0, 3.0, 3.0, 10.0, 1730.0, 750.0, 47.6466, -122.404, 2950.0, 5670.0, 64.0, 1.0, 55.0]  [1100884.1]  1  True
714  2025-02-06 18:01:53.863  [4.0, 2.75, 4020.0, 18745.0, 2.0, 0.0, 4.0, 4.0, 10.0, 2830.0, 1190.0, 47.6042, -122.21, 3150.0, 20897.0, 26.0, 0.0, 0.0]  [1322835.9]  1  True
716  2025-02-06 18:01:53.863  [4.0, 2.75, 3260.0, 19542.0, 1.0, 0.0, 0.0, 4.0, 10.0, 2170.0, 1090.0, 47.6245, -122.236, 3480.0, 19863.0, 46.0, 0.0, 0.0]  [1364650.2]  1  True
717  2025-02-06 18:01:53.863  [3.0, 2.75, 3170.0, 34850.0, 1.0, 0.0, 0.0, 5.0, 9.0, 3170.0, 0.0, 47.6611, -122.169, 3920.0, 36740.0, 58.0, 0.0, 0.0]  [1227073.8]  1  True
718  2025-02-06 18:01:53.863  [3.0, 3.0, 3570.0, 6250.0, 2.0, 0.0, 2.0, 3.0, 10.0, 2710.0, 860.0, 47.5624, -122.399, 2550.0, 7596.0, 30.0, 0.0, 0.0]  [1124493.2]  1  True
728  2025-02-06 18:01:53.863  [5.0, 3.25, 5790.0, 13726.0, 2.0, 0.0, 3.0, 3.0, 10.0, 4430.0, 1360.0, 47.5388, -122.114, 5790.0, 13726.0, 0.0, 0.0, 0.0]  [1189654.4]  1  True
744  2025-02-06 18:01:53.863  [4.0, 3.0, 4750.0, 21701.0, 1.5, 0.0, 0.0, 5.0, 11.0, 4750.0, 0.0, 47.6454, -122.218, 3120.0, 18551.0, 38.0, 0.0, 0.0]  [2002393.5]  1  True
751  2025-02-06 18:01:53.863  [4.0, 3.75, 4410.0, 8112.0, 3.0, 0.0, 4.0, 3.0, 11.0, 3570.0, 840.0, 47.5888, -122.392, 2770.0, 5750.0, 12.0, 0.0, 0.0]  [1967344.1]  1  True
760  2025-02-06 18:01:53.863  [4.0, 3.25, 5010.0, 49222.0, 2.0, 0.0, 0.0, 5.0, 9.0, 3710.0, 1300.0, 47.5489, -122.092, 3140.0, 54014.0, 36.0, 0.0, 0.0]  [1092274.1]  1  True
789  2025-02-06 18:01:53.863  [4.0, 3.5, 4300.0, 70407.0, 2.0, 0.0, 0.0, 3.0, 10.0, 2710.0, 1590.0, 47.4472, -122.092, 3520.0, 26727.0, 22.0, 0.0, 0.0]  [1115275.0]  1  True
839  2025-02-06 18:01:53.863  [5.0, 3.5, 4150.0, 13232.0, 2.0, 0.0, 0.0, 3.0, 11.0, 4150.0, 0.0, 47.3417, -122.182, 3840.0, 15121.0, 9.0, 0.0, 0.0]  [1042119.1]  1  True
845  2025-02-06 18:01:53.863  [4.0, 2.75, 3800.0, 9606.0, 2.0, 0.0, 0.0, 3.0, 9.0, 3800.0, 0.0, 47.7368, -122.208, 3400.0, 9677.0, 6.0, 0.0, 0.0]  [1039781.25]  1  True
866  2025-02-06 18:01:53.863  [5.0, 2.25, 3320.0, 13138.0, 1.0, 0.0, 2.0, 4.0, 9.0, 1900.0, 1420.0, 47.759, -122.269, 2820.0, 13138.0, 51.0, 0.0, 0.0]  [1108000.1]  1  True
869  2025-02-06 18:01:53.863  [4.0, 2.75, 2620.0, 13777.0, 1.5, 0.0, 2.0, 4.0, 9.0, 1720.0, 900.0, 47.58, -122.285, 3530.0, 9287.0, 88.0, 0.0, 0.0]  [1223839.1]  1  True
889  2025-02-06 18:01:53.863  [4.0, 2.5, 3470.0, 20445.0, 2.0, 0.0, 0.0, 4.0, 10.0, 3470.0, 0.0, 47.547, -122.219, 3360.0, 21950.0, 51.0, 0.0, 0.0]  [1412215.2]  1  True
936  2025-02-06 18:01:53.863  [4.0, 3.0, 4040.0, 19700.0, 2.0, 0.0, 0.0, 3.0, 11.0, 4040.0, 0.0, 47.7205, -122.127, 3930.0, 21887.0, 27.0, 0.0, 0.0]  [1028923.06]  1  True
959  2025-02-06 18:01:53.863  [4.0, 4.5, 5120.0, 41327.0, 2.0, 0.0, 0.0, 3.0, 10.0, 3290.0, 1830.0, 47.7009, -122.059, 3360.0, 82764.0, 6.0, 0.0, 0.0]  [1204324.8]  1  True
969  2025-02-06 18:01:53.863  [4.0, 3.0, 3710.0, 20000.0, 2.0, 0.0, 2.0, 5.0, 10.0, 2760.0, 950.0, 47.6696, -122.261, 3970.0, 20000.0, 79.0, 0.0, 0.0]  [1514079.8]  1  True
993  2025-02-06 18:01:53.863  [4.0, 3.5, 3590.0, 5334.0, 2.0, 0.0, 2.0, 3.0, 9.0, 3140.0, 450.0, 47.6763, -122.267, 2100.0, 6250.0, 9.0, 0.0, 0.0]  [1004846.5]  1  True
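
As with the earlier examples, these anomaly logs can also be saved to files with export_logs; a minimal sketch reusing the inference_start and inference_end timestamps captured above (the directory and file prefix are only examples):

# Export the anomaly example pipeline logs to a local directory
newpipeline.export_logs(directory="anomaly-logs",
                        file_prefix="anomalies",
                        start_datetime=inference_start,
                        end_datetime=inference_end)

display(os.listdir('./anomaly-logs'))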

Undeploy Main Pipeline

With the examples and tutorial complete, we will undeploy the main pipeline and return its resources to the Wallaroo instance.

mainpipeline.undeploy()
name            logpipeline-test
created         2025-02-06 17:24:35.075026+00:00
last_updated    2025-02-06 17:52:52.339715+00:00
deployed        False
workspace_id    10
workspace_name  logworkspace
arch            x86
accel           none
tags
versions        e0618d64-dbba-42d7-955e-be35ef0b9520, 849d31fc-9a21-443c-92a7-1067a869f4b5, 5934ec36-57fe-490c-9ae0-56ef4fdf2bdb, 4352a1e8-909e-4e87-a4c7-a36f1546ebd7, 3658e031-941e-4207-882a-5881f9db1184
steps           logcontrol
published       False