Pipeline Logs Tutorial

How to retrieve pipeline logs as pandas DataFrames or Apache Arrow tables, and save them to files.

This tutorial and the assets can be downloaded as part of the Wallaroo Tutorials repository.

Pipeline Log Tutorial

This tutorial demonstrates how to retrieve, filter, and export Wallaroo pipeline logs. It will demonstrate how to:

  1. Select or create a workspace and pipeline, upload the control model, then upload additional models for A/B testing and shadow deploy.
  2. Add a pipeline step with the champion model, then deploy the pipeline and perform sample inferences.
  3. Display the various log types for a standard deployed pipeline.
  4. Swap out the pipeline step with the champion model with a shadow deploy step that compares the champion model against two competitors.
  5. Perform sample inferences with a shadow deployed step, then display the log files for a shadow deployed pipeline.
  6. Swap out the shadow deployed pipeline step with an A/B pipeline step.
  7. Perform sample inferences with an A/B pipeline step, then display the log files for an A/B pipeline step.
  8. Undeploy the pipeline.

This tutorial provides the following:

  • Models:
    • models/rf_model.onnx: The champion model that has been used in this environment for some time.
    • models/xgb_model.onnx and models/gbr_model.onnx: Rival models that will be tested against the champion.
  • Data:
    • data/xtest-1.df.json and data/xtest-1k.df.json: DataFrame JSON inference inputs with 1 input and 1,000 inputs, respectively.
    • data/xtest-1k.arrow: Apache Arrow inference inputs with 1,000 inputs.

Prerequisites

  • A deployed Wallaroo instance
  • The following Python libraries installed:
    • wallaroo: The Wallaroo SDK. Included with the Wallaroo JupyterHub service by default.
    • pandas: Mainly used for pandas DataFrame support.
    • pyarrow: PyArrow, for Apache Arrow support.
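
A quick way to confirm these libraries are available is to check their installed versions. The following is a minimal sketch using only the Python standard library; the package names are assumed to match their pip distribution names.

# verify the required libraries are installed (sketch)
import importlib.metadata as metadata

for package in ("wallaroo", "pandas", "pyarrow"):
    try:
        print(f"{package}: {metadata.version(package)}")
    except metadata.PackageNotFoundError:
        print(f"{package}: NOT INSTALLED")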

Initial Steps

Import libraries

The first step is to import the libraries needed for this notebook.

import wallaroo
from wallaroo.object import EntityNotFoundError

import pyarrow as pa

import pandas as pd

# used to display DataFrame information without truncating
from IPython.display import display
pd.set_option('display.max_colwidth', None)

import datetime
import os

Connect to the Wallaroo Instance

With the libraries imported, the next step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). For more information on Wallaroo Client settings, see the Client Connection guide.

# Login through local Wallaroo instance

wl = wallaroo.Client()

Create Workspace

We will create a workspace to manage our pipeline and models. The following variables will set the name of our sample workspace then set it as the current workspace.

workspace_name = 'logworkspace'
main_pipeline_name = 'logpipeline-test'
model_name_control = 'logcontrol'
model_file_name_control = './models/rf_model.onnx'
def get_workspace(name):
    workspace = None
    for ws in wl.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = wl.create_workspace(name)
    return workspace

workspace = get_workspace(workspace_name)

wl.set_current_workspace(workspace)
{'name': 'logworkspace', 'id': 7, 'archived': False, 'created_by': '218d4494-6ef6-4781-be30-9cbea0aa1695', 'created_at': '2023-10-26T16:30:41.605212+00:00', 'models': [], 'pipelines': []}

Standard Pipeline

Upload The Champion Model

For our example, we will upload the champion model that has been trained to derive house prices from a variety of inputs. The model file is rf_model.onnx, and is uploaded with the name logcontrol.

housing_model_control = (wl.upload_model(model_name_control, 
                                         model_file_name_control, 
                                         framework=wallaroo.framework.Framework.ONNX)
                                         .configure(tensor_fields=["tensor"])
                        )

Build the Pipeline

This pipeline is made to be an example of an existing situation where a model is deployed and being used for inferences in a production environment. We’ll call it logpipeline-test, set logcontrol as a pipeline step, then run a few sample inferences.

mainpipeline = wl.build_pipeline(main_pipeline_name)

# undeploy and clear the pipeline in case it was run before
mainpipeline.undeploy()
mainpipeline.clear()

mainpipeline.add_model_step(housing_model_control).deploy()
name          logpipeline-test
created       2023-10-26 16:50:50.930771+00:00
last_updated  2023-10-26 16:50:51.945682+00:00
deployed      True
tags
versions      9aa799f3-bae4-49d5-93ec-82ae91a34ea0, 951f1a15-5aa3-478b-ac64-a1d11a070ab4
steps         logcontrol
published     False

Testing

We’ll use two inferences as a quick sample test: one with a house that should be valued around $700k, the other with a house valued around $1.5 million. We’ll also save the start and end periods for these events for the log functionality later.

dataframe_start = datetime.datetime.now()

normal_input = pd.DataFrame.from_records({"tensor": [
            [
                4.0, 
                2.5, 
                2900.0, 
                5505.0, 
                2.0, 
                0.0, 
                0.0, 
                3.0, 
                8.0, 
                2900.0, 
                0.0, 
                47.6063, 
                -122.02, 
                2970.0, 
                5251.0, 
                12.0, 
                0.0, 
                0.0
            ]
        ]
    }
)
result = mainpipeline.infer(normal_input)
display(result)
   time  in.tensor  out.variable  check_failures
0  2023-10-26 16:49:34.750  [4.0, 2.5, 2900.0, 5505.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2900.0, 0.0, 47.6063, -122.02, 2970.0, 5251.0, 12.0, 0.0, 0.0]  [718013.7]  0
large_house_input = pd.DataFrame.from_records(
    {
        'tensor': [
            [
                4.0, 
                3.0, 
                3710.0, 
                20000.0, 
                2.0, 
                0.0, 
                2.0, 
                5.0, 
                10.0, 
                2760.0, 
                950.0, 
                47.6696, 
                -122.261, 
                3970.0, 
                20000.0, 
                79.0, 
                0.0, 
                0.0
            ]
        ]
    }
)
large_house_result = mainpipeline.infer(large_house_input)
display(large_house_result)

import time
time.sleep(10)
dataframe_end = datetime.datetime.now()
   time  in.tensor  out.variable  check_failures
0  2023-10-26 16:49:35.392  [4.0, 3.0, 3710.0, 20000.0, 2.0, 0.0, 2.0, 5.0, 10.0, 2760.0, 950.0, 47.6696, -122.261, 3970.0, 20000.0, 79.0, 0.0, 0.0]  [1514079.4]  0

As one last sample, we’ll run through roughly 1,000 inferences at once and show a few of the results. For this example we’ll use an Apache Arrow table, which has a smaller file size compared to uploading a pandas DataFrame JSON file. The inference result is returned as an arrow table, which we’ll convert into a pandas DataFrame to display the first 20 results.

batch_inferences = mainpipeline.infer_from_file('./data/xtest-1k.arrow')

large_inference_result = batch_inferences.to_pandas()
display(large_inference_result.head(20))
    time  in.tensor  out.variable  check_failures
0   2023-10-26 16:51:09.710  [4.0, 2.5, 2900.0, 5505.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2900.0, 0.0, 47.6063, -122.02, 2970.0, 5251.0, 12.0, 0.0, 0.0]  [718013.75]  0
1   2023-10-26 16:51:09.710  [2.0, 2.5, 2170.0, 6361.0, 1.0, 0.0, 2.0, 3.0, 8.0, 2170.0, 0.0, 47.7109, -122.017, 2310.0, 7419.0, 6.0, 0.0, 0.0]  [615094.56]  0
2   2023-10-26 16:51:09.710  [3.0, 2.5, 1300.0, 812.0, 2.0, 0.0, 0.0, 3.0, 8.0, 880.0, 420.0, 47.5893, -122.317, 1300.0, 824.0, 6.0, 0.0, 0.0]  [448627.72]  0
3   2023-10-26 16:51:09.710  [4.0, 2.5, 2500.0, 8540.0, 2.0, 0.0, 0.0, 3.0, 9.0, 2500.0, 0.0, 47.5759, -121.994, 2560.0, 8475.0, 24.0, 0.0, 0.0]  [758714.2]  0
4   2023-10-26 16:51:09.710  [3.0, 1.75, 2200.0, 11520.0, 1.0, 0.0, 0.0, 4.0, 7.0, 2200.0, 0.0, 47.7659, -122.341, 1690.0, 8038.0, 62.0, 0.0, 0.0]  [513264.7]  0
5   2023-10-26 16:51:09.710  [3.0, 2.0, 2140.0, 4923.0, 1.0, 0.0, 0.0, 4.0, 8.0, 1070.0, 1070.0, 47.6902, -122.339, 1470.0, 4923.0, 86.0, 0.0, 0.0]  [668288.0]  0
6   2023-10-26 16:51:09.710  [4.0, 3.5, 3590.0, 5334.0, 2.0, 0.0, 2.0, 3.0, 9.0, 3140.0, 450.0, 47.6763, -122.267, 2100.0, 6250.0, 9.0, 0.0, 0.0]  [1004846.5]  0
7   2023-10-26 16:51:09.710  [3.0, 2.0, 1280.0, 960.0, 2.0, 0.0, 0.0, 3.0, 9.0, 1040.0, 240.0, 47.602, -122.311, 1280.0, 1173.0, 0.0, 0.0, 0.0]  [684577.2]  0
8   2023-10-26 16:51:09.710  [4.0, 2.5, 2820.0, 15000.0, 2.0, 0.0, 0.0, 4.0, 9.0, 2820.0, 0.0, 47.7255, -122.101, 2440.0, 15000.0, 29.0, 0.0, 0.0]  [727898.1]  0
9   2023-10-26 16:51:09.710  [3.0, 2.25, 1790.0, 11393.0, 1.0, 0.0, 0.0, 3.0, 8.0, 1790.0, 0.0, 47.6297, -122.099, 2290.0, 11894.0, 36.0, 0.0, 0.0]  [559631.1]  0
10  2023-10-26 16:51:09.710  [3.0, 1.5, 1010.0, 7683.0, 1.5, 0.0, 0.0, 5.0, 7.0, 1010.0, 0.0, 47.72, -122.318, 1550.0, 7271.0, 61.0, 0.0, 0.0]  [340764.53]  0
11  2023-10-26 16:51:09.710  [3.0, 2.0, 1270.0, 1323.0, 3.0, 0.0, 0.0, 3.0, 8.0, 1270.0, 0.0, 47.6934, -122.342, 1330.0, 1323.0, 8.0, 0.0, 0.0]  [442168.06]  0
12  2023-10-26 16:51:09.710  [4.0, 1.75, 2070.0, 9120.0, 1.0, 0.0, 0.0, 4.0, 7.0, 1250.0, 820.0, 47.6045, -122.123, 1650.0, 8400.0, 57.0, 0.0, 0.0]  [630865.6]  0
13  2023-10-26 16:51:09.710  [4.0, 1.0, 1620.0, 4080.0, 1.5, 0.0, 0.0, 3.0, 7.0, 1620.0, 0.0, 47.6696, -122.324, 1760.0, 4080.0, 91.0, 0.0, 0.0]  [559631.1]  0
14  2023-10-26 16:51:09.710  [4.0, 3.25, 3990.0, 9786.0, 2.0, 0.0, 0.0, 3.0, 9.0, 3990.0, 0.0, 47.6784, -122.026, 3920.0, 8200.0, 10.0, 0.0, 0.0]  [909441.1]  0
15  2023-10-26 16:51:09.710  [4.0, 2.0, 1780.0, 19843.0, 1.0, 0.0, 0.0, 3.0, 7.0, 1780.0, 0.0, 47.4414, -122.154, 2210.0, 13500.0, 52.0, 0.0, 0.0]  [313096.0]  0
16  2023-10-26 16:51:09.710  [4.0, 2.5, 2130.0, 6003.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2130.0, 0.0, 47.4518, -122.12, 1940.0, 4529.0, 11.0, 0.0, 0.0]  [404040.8]  0
17  2023-10-26 16:51:09.710  [3.0, 1.75, 1660.0, 10440.0, 1.0, 0.0, 0.0, 3.0, 7.0, 1040.0, 620.0, 47.4448, -121.77, 1240.0, 10380.0, 36.0, 0.0, 0.0]  [292859.5]  0
18  2023-10-26 16:51:09.710  [3.0, 2.5, 2110.0, 4118.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2110.0, 0.0, 47.3878, -122.153, 2110.0, 4044.0, 25.0, 0.0, 0.0]  [338357.88]  0
19  2023-10-26 16:51:09.710  [4.0, 2.25, 2200.0, 11250.0, 1.5, 0.0, 0.0, 5.0, 7.0, 1300.0, 900.0, 47.6845, -122.201, 2320.0, 10814.0, 94.0, 0.0, 0.0]  [682284.6]  0
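
The Arrow file used above ships with the tutorial. If your inference inputs start as a pandas DataFrame, the following is a minimal sketch of producing an equivalent Arrow file with pyarrow; the output file name is illustrative, and the schema is inferred from the DataFrame, so the column names and types must match the pipeline's input schema.

import pandas as pd
import pyarrow as pa

df = pd.read_json('./data/xtest-1k.df.json')

# infer an Arrow schema from the DataFrame and write an Arrow IPC file
table = pa.Table.from_pandas(df, preserve_index=False)
with pa.OSFile('./data/xtest-1k-converted.arrow', 'wb') as sink:
    with pa.ipc.new_file(sink, table.schema) as writer:
        writer.write_table(table)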

Standard Pipeline Logs

Pipeline logs with standard pipeline steps are retrieved with either:

  • The Pipeline logs method, which returns the logs as either a pandas DataFrame or an Apache Arrow table.
  • The Pipeline export_logs method, which saves the logs as either a pandas DataFrame JSON file or an Apache Arrow table file.

For full details, see the Wallaroo Documentation Pipeline Log Management guide.

Pipeline Log Method

The Pipeline logs method includes the following parameters. For a complete list, see the Wallaroo SDK Essentials Guide: Pipeline Log Management.

Parameter  Type  Description
limit  Int (Optional)  Limits how many log records to display. Defaults to 100. If there are more pipeline logs than are being displayed, the warning message Pipeline log record limit exceeded is displayed. For example, if 100 log records were requested and there are a total of 1,000, the warning message is displayed.
start_datetime and end_datetime  DateTime (Optional)  Limits logs to all logs between the start_datetime and end_datetime parameters. Both parameters must be provided; submitting a logs() request with only start_datetime or end_datetime generates an exception. When start_datetime and end_datetime are provided, the records are returned in chronological order, with the oldest record displayed first.
dataset  List (Optional)  The datasets to be returned. The datasets available are:
  • *: Default. This translates to ["time", "in", "out", "check_failures"].
  • time: The DateTime of the inference request.
  • in: All inputs, listed as in.{variable_name}.
  • out: All outputs, listed as out.{variable_name}.
  • check_failures: Flags whether an Anomaly or Validation Check was triggered. 0 indicates no checks were triggered; 1 or greater indicates a check was triggered.
  • meta: Returns metadata. IMPORTANT NOTE: See Metadata Requests Restrictions for specifications on how this dataset can be used with other datasets.
    • Returns in the metadata.elapsed field:
      • A list of times in nanoseconds for:
        • The time to serialize the input.
        • How long each step took.
    • Returns in the metadata.last_model field:
      • A dict with each Python step as:
        • model_name: The name of the model in the pipeline step.
        • model_sha: The sha hash of the model in the pipeline step.
    • Returns in the metadata.pipeline_version field:
      • The pipeline version as a UUID value.
  • metadata.elapsed: IMPORTANT NOTE: See Metadata Requests Restrictions for specifications on how this dataset can be used with other datasets.
    • Returns in the metadata.elapsed field:
      • A list of times in nanoseconds for:
        • The time to serialize the input.
        • How long each step took.
arrow  Boolean (Optional)  Defaults to False. If arrow is set to True, then the logs are returned as an Apache Arrow table. If arrow=False, then the logs are returned as a pandas DataFrame.

Pipeline Log Warnings

If the total number of logs available exceeds either the set limit or 10 MB in file size, the following warning is returned:

Warning: There are more logs available. Please set a larger limit or request a file using export_logs.

If the total number of logs requested, either through the limit or through the start_datetime and end_datetime parameters, is greater than 10 MB in size, the following warning is displayed:

Warning: Pipeline log size limit exceeded. Only displaying 509 log messages. Please request a file using export_logs.

The following examples demonstrate displaying the logs, then displaying the logs between the dataframe_start and dataframe_end periods with a limit of 50 entries, then again retrieving the logs as an Arrow table.

# pipeline log retrieval - reverse chronological order

regular_logs = mainpipeline.logs()

display("Standard Logs")
display(len(regular_logs))
display(regular_logs)

# Display metadata

metadatalogs = mainpipeline.logs(dataset=["time", "out.variable", "metadata"])
display("Metadata Logs")
# Only showing the pipeline version for space reasons
display(metadatalogs.loc[:, ["time", "out.variable", "metadata.pipeline_version"]])

# Display logs restricted by date and limit 

display("Logs restricted by date")
arrow_logs = mainpipeline.logs(start_datetime=dataframe_start, end_datetime=dataframe_end, limit=50)

display(len(arrow_logs))
display(arrow_logs)

# pipeline log retrieval as an Apache Arrow table
display(mainpipeline.logs(arrow=True))
Warning: There are more logs available. Please set a larger limit or request a file using export_logs.

'Standard Logs'

100

    time  in.tensor  out.variable  check_failures
0   2023-10-26 16:32:19.282  [3.0, 2.0, 2005.0, 7000.0, 1.0, 0.0, 0.0, 3.0, 7.0, 1605.0, 400.0, 47.6039, -122.298, 1750.0, 4500.0, 34.0, 0.0, 0.0]  [581003.0]  0
1   2023-10-26 16:32:19.282  [3.0, 1.75, 2910.0, 37461.0, 1.0, 0.0, 0.0, 4.0, 7.0, 1530.0, 1380.0, 47.7015, -122.164, 2520.0, 18295.0, 47.0, 0.0, 0.0]  [706823.56]  0
2   2023-10-26 16:32:19.282  [4.0, 3.25, 2910.0, 1880.0, 2.0, 0.0, 3.0, 5.0, 9.0, 1830.0, 1080.0, 47.616, -122.282, 3100.0, 8200.0, 100.0, 0.0, 0.0]  [1060847.5]  0
3   2023-10-26 16:32:19.282  [4.0, 1.75, 2700.0, 7875.0, 1.5, 0.0, 0.0, 4.0, 8.0, 2700.0, 0.0, 47.454, -122.144, 2220.0, 7875.0, 46.0, 0.0, 0.0]  [441960.38]  0
4   2023-10-26 16:32:19.282  [3.0, 2.5, 2900.0, 23550.0, 1.0, 0.0, 0.0, 3.0, 10.0, 1490.0, 1410.0, 47.5708, -122.153, 2900.0, 19604.0, 27.0, 0.0, 0.0]  [827411.0]  0
...  ...  ...  ...  ...
95  2023-10-26 16:32:19.282  [2.0, 1.5, 1070.0, 1236.0, 2.0, 0.0, 0.0, 3.0, 8.0, 1000.0, 70.0, 47.5619, -122.382, 1170.0, 1888.0, 10.0, 0.0, 0.0]  [435628.56]  0
96  2023-10-26 16:32:19.282  [3.0, 2.5, 2830.0, 6000.0, 1.0, 0.0, 3.0, 3.0, 9.0, 1730.0, 1100.0, 47.5751, -122.378, 2040.0, 5300.0, 60.0, 0.0, 0.0]  [981676.6]  0
97  2023-10-26 16:32:19.282  [4.0, 1.75, 1720.0, 8750.0, 1.0, 0.0, 0.0, 3.0, 7.0, 860.0, 860.0, 47.726, -122.21, 1790.0, 8750.0, 43.0, 0.0, 0.0]  [437177.84]  0
98  2023-10-26 16:32:19.282  [4.0, 2.25, 4470.0, 60373.0, 2.0, 0.0, 0.0, 3.0, 11.0, 4470.0, 0.0, 47.7289, -122.127, 3210.0, 40450.0, 26.0, 0.0, 0.0]  [1208638.0]  0
99  2023-10-26 16:32:19.282  [3.0, 1.0, 1150.0, 3000.0, 1.0, 0.0, 0.0, 5.0, 6.0, 1150.0, 0.0, 47.6867, -122.345, 1460.0, 3200.0, 108.0, 0.0, 0.0]  [448627.72]  0

100 rows × 4 columns

Warning: There are more logs available. Please set a larger limit or request a file using export_logs.

'Metadata Logs'

    time  out.variable  metadata.pipeline_version
0   2023-10-26 16:32:19.282  [581003.0]  23b11216-7b04-4e2e-862d-26308055f3cc
1   2023-10-26 16:32:19.282  [706823.56]  23b11216-7b04-4e2e-862d-26308055f3cc
2   2023-10-26 16:32:19.282  [1060847.5]  23b11216-7b04-4e2e-862d-26308055f3cc
3   2023-10-26 16:32:19.282  [441960.38]  23b11216-7b04-4e2e-862d-26308055f3cc
4   2023-10-26 16:32:19.282  [827411.0]  23b11216-7b04-4e2e-862d-26308055f3cc
...  ...  ...  ...
95  2023-10-26 16:32:19.282  [435628.56]  23b11216-7b04-4e2e-862d-26308055f3cc
96  2023-10-26 16:32:19.282  [981676.6]  23b11216-7b04-4e2e-862d-26308055f3cc
97  2023-10-26 16:32:19.282  [437177.84]  23b11216-7b04-4e2e-862d-26308055f3cc
98  2023-10-26 16:32:19.282  [1208638.0]  23b11216-7b04-4e2e-862d-26308055f3cc
99  2023-10-26 16:32:19.282  [448627.72]  23b11216-7b04-4e2e-862d-26308055f3cc

100 rows × 3 columns

'Logs restricted by date'

2

   time  in.tensor  out.variable  check_failures
0  2023-10-26 16:30:58.396  [4.0, 2.5, 2900.0, 5505.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2900.0, 0.0, 47.6063, -122.02, 2970.0, 5251.0, 12.0, 0.0, 0.0]  [718013.7]  0
1  2023-10-26 16:32:08.690  [4.0, 3.0, 3710.0, 20000.0, 2.0, 0.0, 2.0, 5.0, 10.0, 2760.0, 950.0, 47.6696, -122.261, 3970.0, 20000.0, 79.0, 0.0, 0.0]  [1514079.4]  0
    Warning: There are more logs available. Please set a larger limit or request a file using export_logs.

    pyarrow.Table
    time: timestamp[ms]
    in.tensor: list<item: float> not null
      child 0, item: float
    out.variable: list<inner: float not null> not null
      child 0, inner: float not null
    check_failures: int8
    ----
    time: [[2023-10-26 16:32:19.282,2023-10-26 16:32:19.282,2023-10-26 16:32:19.282,2023-10-26 16:32:19.282,2023-10-26 16:32:19.282,...,2023-10-26 16:32:19.282,2023-10-26 16:32:19.282,2023-10-26 16:32:19.282,2023-10-26 16:32:19.282,2023-10-26 16:32:19.282]]
    in.tensor: [[[3,2,2005,7000,1,...,1750,4500,34,0,0],[3,1.75,2910,37461,1,...,2520,18295,47,0,0],...,[4,2.25,4470,60373,2,...,3210,40450,26,0,0],[3,1,1150,3000,1,...,1460,3200,108,0,0]]]
    out.variable: [[[581003],[706823.56],...,[1208638],[448627.72]]]
    check_failures: [[0,0,0,0,0,...,0,0,0,0,0]]
result = mainpipeline.infer(normal_input, dataset=["*", "metadata.pipeline_version"])
display(result)
   time  in.tensor  out.variable  check_failures  metadata.pipeline_version
0  2023-10-26 16:47:45.922  [4.0, 2.5, 2900.0, 5505.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2900.0, 0.0, 47.6063, -122.02, 2970.0, 5251.0, 12.0, 0.0, 0.0]  [718013.7]  0  23b11216-7b04-4e2e-862d-26308055f3cc

The following section covers the metadata log requests available for standard pipeline steps.

Standard Pipeline Steps Log Requests

Affected pipeline steps:

  • add_model_step
  • replace_with_model_step

For log file requests, the following metadata dataset requests for standard pipeline steps are available:

  • metadata

These must be paired with specific columns. * is not available when paired with metadata.

  • in: All input fields.
  • out: All output fields.
  • time: The DateTime the inference request was made.
  • in.{input_fields}: Any input fields (tensor, etc.)
  • out.{output_fields}: Any output fields (out.house_price, out.variable, etc.)
  • check_failures: Any validation check failures.

The following requests the metadata, and displays the output variable and last model from the metadata.

# Display metadata

metadatalogs = mainpipeline.logs(dataset=["out","metadata"])
display("Metadata Logs")
display(metadatalogs.loc[:, ['out.variable', 'metadata.last_model']])
Warning: There are more logs available. Please set a larger limit or request a file using export_logs.

'Metadata Logs'

    out.variable  metadata.last_model
0   [581003.0]  {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
1   [706823.56]  {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
2   [1060847.5]  {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
3   [441960.38]  {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
4   [827411.0]  {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
...  ...  ...
95  [435628.56]  {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
96  [981676.6]  {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
97  [437177.84]  {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
98  [1208638.0]  {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
99  [448627.72]  {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}

100 rows × 2 columns

Pipeline Limits

In a previous step we performed roughly 1,000 inferences at once. If we attempt to pull all of their logs in a single request with a high limit, we’ll likely run into the size limit for this pipeline and receive a warning message indicating that the pipeline size limits were exceeded and we should use export_logs instead, for example:

Warning: Pipeline log size limit exceeded. Only displaying 1000 log messages (of 10000 requested). Please request a file using export_logs.

logs = mainpipeline.logs(limit=10000)
display(logs)
Warning: Pipeline log size limit exceeded. Please request logs using export_logs
     time  in.tensor  out.variable  check_failures
0    2023-10-26 16:51:09.710  [3.0, 2.0, 2005.0, 7000.0, 1.0, 0.0, 0.0, 3.0, 7.0, 1605.0, 400.0, 47.6039, -122.298, 1750.0, 4500.0, 34.0, 0.0, 0.0]  [581003.0]  0
1    2023-10-26 16:51:09.710  [3.0, 1.75, 2910.0, 37461.0, 1.0, 0.0, 0.0, 4.0, 7.0, 1530.0, 1380.0, 47.7015, -122.164, 2520.0, 18295.0, 47.0, 0.0, 0.0]  [706823.56]  0
2    2023-10-26 16:51:09.710  [4.0, 3.25, 2910.0, 1880.0, 2.0, 0.0, 3.0, 5.0, 9.0, 1830.0, 1080.0, 47.616, -122.282, 3100.0, 8200.0, 100.0, 0.0, 0.0]  [1060847.5]  0
3    2023-10-26 16:51:09.710  [4.0, 1.75, 2700.0, 7875.0, 1.5, 0.0, 0.0, 4.0, 8.0, 2700.0, 0.0, 47.454, -122.144, 2220.0, 7875.0, 46.0, 0.0, 0.0]  [441960.38]  0
4    2023-10-26 16:51:09.710  [3.0, 2.5, 2900.0, 23550.0, 1.0, 0.0, 0.0, 3.0, 10.0, 1490.0, 1410.0, 47.5708, -122.153, 2900.0, 19604.0, 27.0, 0.0, 0.0]  [827411.0]  0
...  ...  ...  ...  ...
661  2023-10-26 16:51:09.710  [5.0, 3.25, 3160.0, 10587.0, 1.0, 0.0, 0.0, 5.0, 7.0, 2190.0, 970.0, 47.7238, -122.165, 2200.0, 7761.0, 55.0, 0.0, 0.0]  [573403.1]  0
662  2023-10-26 16:51:09.710  [3.0, 2.5, 2210.0, 7620.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2210.0, 0.0, 47.6938, -122.13, 1920.0, 7440.0, 20.0, 0.0, 0.0]  [677870.9]  0
663  2023-10-26 16:51:09.710  [3.0, 1.75, 1960.0, 8136.0, 1.0, 0.0, 0.0, 3.0, 7.0, 980.0, 980.0, 47.5208, -122.364, 1070.0, 7480.0, 66.0, 0.0, 0.0]  [365436.25]  0
664  2023-10-26 16:51:09.710  [3.0, 2.0, 1260.0, 8092.0, 1.0, 0.0, 0.0, 3.0, 7.0, 1260.0, 0.0, 47.3635, -122.054, 1950.0, 8092.0, 28.0, 0.0, 0.0]  [253958.75]  0
665  2023-10-26 16:51:09.710  [4.0, 2.5, 2650.0, 18295.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2650.0, 0.0, 47.6075, -122.154, 2230.0, 19856.0, 28.0, 0.0, 0.0]  [706407.4]  0

666 rows × 4 columns

Pipeline export_logs Method

The Pipeline method export_logs saves the pipeline log records to either a DataFrame JSON file or an Apache Arrow table file. For a complete list of options, see the Wallaroo SDK Essentials Guide: Pipeline Log Management.

The export_logs method takes the following parameters:

Parameter  Type  Description
directory  String (Optional) (Default: logs)  Logs are exported to the directory directory, relative to the current working directory.
data_size_limit  String (Optional) (Default: 100MB)  The maximum size for the exported data in bytes. Note that file size is approximate to the request; a request of 10MiB may return 10.3MB of data. The fields are in the format "{size as number} {unit value}", and can include a space, so "10 MiB" and "10MiB" are the same. The accepted unit values are:
  • KiB (kibibytes)
  • MiB (mebibytes)
  • GiB (gibibytes)
  • TiB (tebibytes)
file_prefix  String (Optional) (Default: the name of the pipeline)  The name of the exported files. By default, this is the name of the pipeline, and files are segmented by pipeline version within the limit or the start and end period. For example: logpipeline-1.json, etc.
limit  Int (Optional)  Limits how many log records to export. Defaults to 100. If there are more pipeline logs than are being exported, the warning message Pipeline log record limit exceeded is displayed. For example, if 100 log records were requested and there are a total of 1,000, the warning message is displayed.
start_datetime and end_datetime  DateTime (Optional)  Limits logs to all logs between the start_datetime and end_datetime parameters. Both parameters must be provided; submitting an export_logs() request with only start_datetime or end_datetime generates an exception. When start_datetime and end_datetime are provided, the records are returned in chronological order, with the oldest record displayed first.
dataset  List (Optional)  The datasets to be returned. The datasets available are:
  • *: Default. This translates to ["time", "in", "out", "check_failures"].
  • time: The DateTime of the inference request.
  • in: All inputs, listed as in.{variable_name}.
  • out: All outputs, listed as out.{variable_name}.
  • check_failures: Flags whether an Anomaly or Validation Check was triggered. 0 indicates no checks were triggered; 1 or greater indicates a check was triggered.
  • meta: Returns metadata. IMPORTANT NOTE: See Metadata Requests Restrictions for specifications on how this dataset can be used with other datasets.
    • Returns in the metadata.elapsed field:
      • A list of times in nanoseconds for:
        • The time to serialize the input.
        • How long each step took.
    • Returns in the metadata.last_model field:
      • A dict with each Python step as:
        • model_name: The name of the model in the pipeline step.
        • model_sha: The sha hash of the model in the pipeline step.
    • Returns in the metadata.pipeline_version field:
      • The pipeline version as a UUID value.
  • metadata.elapsed: IMPORTANT NOTE: See Metadata Requests Restrictions for specifications on how this dataset can be used with other datasets.
    • Returns in the metadata.elapsed field:
      • A list of times in nanoseconds for:
        • The time to serialize the input.
        • How long each step took.
arrow  Boolean (Optional)  Defaults to False. If arrow is set to True, then the logs are saved as an Apache Arrow table file. If arrow=False, then the logs are saved as a pandas DataFrame JSON file.
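
As a hedged sketch combining these parameters (the directory and file_prefix values below are illustrative):

# export up to roughly 50 MiB of logs from the sample period as DataFrame JSON
mainpipeline.export_logs(directory="logs-sample",
                         file_prefix="logpipeline-sample",
                         data_size_limit="50 MiB",
                         start_datetime=dataframe_start,
                         end_datetime=dataframe_end)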

The following examples demonstrate saving a DataFrame version of the mainpipeline logs, then an Arrow version.

# Save the DataFrame version of the log file

mainpipeline.export_logs()
display(os.listdir('./logs'))

mainpipeline.export_logs(arrow=True)
display(os.listdir('./logs'))
Warning: There are more logs available. Please set a larger limit to export more data.

['logpipeline-1.json']

Warning: There are more logs available. Please set a larger limit to export more data.

['logpipeline-1.arrow', 'logpipeline-1.json']

Shadow Deploy Pipelines

Let’s assume that after analyzing the assay information we want to test two challenger models against our control. We do that with the Shadow Deploy pipeline step.

In Shadow Deploy, the pipeline step is added with the add_shadow_deploy method, with the champion model listed first, then an array of challenger models after. All inference data is fed to all models, with the champion results displayed in the out.variable column, and the shadow results in the format out_{model name}.variable. For example, since we named our challenger models housingchallenger01 and housingchallenger02, the columns out_housingchallenger01.variable and out_housingchallenger02.variable have the shadow deployed model results.

For this example, we will remove the previous pipeline step, then replace it with a shadow deploy step with rf_model.onnx as our champion, and models xgb_model.onnx and gbr_model.onnx as the challengers. We’ll deploy the pipeline and prepare it for sample inferences.

# Upload the challenger models

model_name_challenger01 = 'logcontrolchallenger01'
model_file_name_challenger01 = './models/xgb_model.onnx'

model_name_challenger02 = 'logcontrolchallenger02'
model_file_name_challenger02 = './models/gbr_model.onnx'

housing_model_challenger01 = (wl.upload_model(model_name_challenger01, 
                                              model_file_name_challenger01, 
                                              framework=wallaroo.framework.Framework.ONNX)
                                              .configure(tensor_fields=["tensor"])
                            )
housing_model_challenger02 = (wl.upload_model(model_name_challenger02, 
                                              model_file_name_challenger02, 
                                              framework=wallaroo.framework.Framework.ONNX)
                                              .configure(tensor_fields=["tensor"])
                            )
# Undeploy the pipeline
mainpipeline.undeploy()

mainpipeline.clear()

# Add the new shadow deploy step with our challenger models
mainpipeline.add_shadow_deploy(housing_model_control, [housing_model_challenger01, housing_model_challenger02])

# Deploy the pipeline with the new shadow step
mainpipeline.deploy()
name          logpipeline-test
created       2023-10-26 16:50:50.930771+00:00
last_updated  2023-10-26 17:04:50.279469+00:00
deployed      True
tags
versions      b487e3d5-3cff-42de-b4d2-40eab6ad37c3, 9aa799f3-bae4-49d5-93ec-82ae91a34ea0, 951f1a15-5aa3-478b-ac64-a1d11a070ab4
steps         logcontrol
published     False

Shadow Deploy Sample Inference

We’ll now use our same sample data for an inference to our shadow deployed pipeline, then display the first 20 results with just the comparative outputs.

shadow_date_start = datetime.datetime.now()

shadow_result = mainpipeline.infer_from_file('./data/xtest-1k.arrow')

shadow_outputs =  shadow_result.to_pandas()
display(shadow_outputs.loc[0:20,['out.variable','out_logcontrolchallenger01.variable','out_logcontrolchallenger02.variable']])

shadow_date_end = datetime.datetime.now()
    out.variable  out_logcontrolchallenger01.variable  out_logcontrolchallenger02.variable
0   [718013.75]  [659806.0]  [704901.9]
1   [615094.56]  [732883.5]  [695994.44]
2   [448627.72]  [419508.84]  [416164.8]
3   [758714.2]  [634028.8]  [655277.2]
4   [513264.7]  [427209.44]  [426854.66]
5   [668288.0]  [615501.9]  [632556.1]
6   [1004846.5]  [1139732.5]  [1100465.2]
7   [684577.2]  [498328.88]  [528278.06]
8   [727898.1]  [722664.4]  [659439.94]
9   [559631.1]  [525746.44]  [534331.44]
10  [340764.53]  [376337.1]  [377187.2]
11  [442168.06]  [382053.12]  [403964.3]
12  [630865.6]  [505608.97]  [528991.3]
13  [559631.1]  [603260.5]  [612201.75]
14  [909441.1]  [969585.4]  [893874.7]
15  [313096.0]  [313633.75]  [318054.94]
16  [404040.8]  [360413.56]  [357816.75]
17  [292859.5]  [316674.94]  [294034.7]
18  [338357.88]  [299907.44]  [323254.3]
19  [682284.6]  [811896.75]  [770916.7]
20  [583765.94]  [573618.5]  [549141.4]
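
With the champion and challenger predictions side by side, a quick comparison is straightforward. The following is a minimal sketch; the column names come from the output above, and the comparison itself is not part of the Wallaroo SDK.

# flatten the single-element prediction lists into plain floats
comparison = pd.DataFrame({
    'champion': shadow_outputs['out.variable'].apply(lambda v: v[0]),
    'challenger01': shadow_outputs['out_logcontrolchallenger01.variable'].apply(lambda v: v[0]),
    'challenger02': shadow_outputs['out_logcontrolchallenger02.variable'].apply(lambda v: v[0]),
})

# mean absolute difference of each challenger from the champion
display((comparison['challenger01'] - comparison['champion']).abs().mean())
display((comparison['challenger02'] - comparison['champion']).abs().mean())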

Shadow Deploy Logs

Pipelines with a shadow deployed step include the shadow inference result in the same format as the inference result: inference results from shadow deployed models are displayed as out_{model name}.{output variable}.

# display logs with shadow deployed steps

display(mainpipeline.logs(start_datetime=shadow_date_start, end_datetime=shadow_date_end).loc[:, ["time", "out.variable", "out_logcontrolchallenger01.variable", "out_logcontrolchallenger02.variable"]])
Warning: Pipeline log size limit exceeded. Please request logs using export_logs
     time  out.variable  out_logcontrolchallenger01.variable  out_logcontrolchallenger02.variable
0    2023-10-26 17:05:08.349  [718013.75]  [659806.0]  [704901.9]
1    2023-10-26 17:05:08.349  [615094.56]  [732883.5]  [695994.44]
2    2023-10-26 17:05:08.349  [448627.72]  [419508.84]  [416164.8]
3    2023-10-26 17:05:08.349  [758714.2]  [634028.8]  [655277.2]
4    2023-10-26 17:05:08.349  [513264.7]  [427209.44]  [426854.66]
...  ...  ...  ...  ...
663  2023-10-26 17:05:08.349  [642519.75]  [390891.06]  [481425.8]
664  2023-10-26 17:05:08.349  [301714.75]  [406503.62]  [374509.53]
665  2023-10-26 17:05:08.349  [448627.72]  [473771.0]  [478128.03]
666  2023-10-26 17:05:08.349  [544392.1]  [428174.9]  [442408.25]
667  2023-10-26 17:05:08.349  [944006.75]  [902058.6]  [866622.25]

668 rows × 4 columns

For log file requests, the following metadata dataset requests for testing pipeline steps are available:

  • metadata

These must be paired with specific columns. * is not available when paired with metadata.

  • in: All input fields.
  • out: All output fields.
  • time: The DateTime the inference request was made.
  • in.{input_fields}: Any input fields (tensor, etc.).
  • out.{output_fields}: Any output fields matching the specific output_field (out.house_price, out.variable, etc.).
  • out_{model_name}.{output_fields}: Output fields from shadow deployed challenger steps, matching the specific output field (out_logcontrolchallenger01.variable, etc.).
  • check_failures: Any validation check failures.

The following examples retrieve the logs from a pipeline with shadow deployed models, displaying the specific shadow deployed model outputs, then the metadata.elapsed field.

# display logs with shadow deployed steps

display(mainpipeline.logs(start_datetime=shadow_date_start, end_datetime=shadow_date_end).loc[:, ["time", 
                                                                                                  "out.variable", 
                                                                                                  "out_logcontrolchallenger01.variable", 
                                                                                                  "out_logcontrolchallenger02.variable"
                                                                                                  ]
                                                                                        ])
Warning: Pipeline log size limit exceeded. Please request logs using export_logs
     time  out.variable  out_logcontrolchallenger01.variable  out_logcontrolchallenger02.variable
0    2023-10-26 17:07:56.576  [718013.75]  [659806.0]  [704901.9]
1    2023-10-26 17:07:56.576  [615094.56]  [732883.5]  [695994.44]
2    2023-10-26 17:07:56.576  [448627.72]  [419508.84]  [416164.8]
3    2023-10-26 17:07:56.576  [758714.2]  [634028.8]  [655277.2]
4    2023-10-26 17:07:56.576  [513264.7]  [427209.44]  [426854.66]
...  ...  ...  ...  ...
663  2023-10-26 17:07:56.576  [642519.75]  [390891.06]  [481425.8]
664  2023-10-26 17:07:56.576  [301714.75]  [406503.62]  [374509.53]
665  2023-10-26 17:07:56.576  [448627.72]  [473771.0]  [478128.03]
666  2023-10-26 17:07:56.576  [544392.1]  [428174.9]  [442408.25]
667  2023-10-26 17:07:56.576  [944006.75]  [902058.6]  [866622.25]

668 rows × 4 columns

metadatalogs = mainpipeline.logs(dataset=["time",
                                          "out_logcontrolchallenger01.variable", 
                                          "out_logcontrolchallenger02.variable", 
                                          "metadata"
                                          ],
                                start_datetime=shadow_date_start, 
                                end_datetime=shadow_date_end
                                )

display(metadatalogs.loc[:, ['out_logcontrolchallenger01.variable',	
                             'out_logcontrolchallenger02.variable', 
                             'metadata.elapsed'
                             ]
                        ])
Warning: Pipeline log size limit exceeded. Please request logs using export_logs
     out_logcontrolchallenger01.variable  out_logcontrolchallenger02.variable  metadata.elapsed
0    [659806.0]  [704901.9]  [302804, 26900]
1    [732883.5]  [695994.44]  [302804, 26900]
2    [419508.84]  [416164.8]  [302804, 26900]
3    [634028.8]  [655277.2]  [302804, 26900]
4    [427209.44]  [426854.66]  [302804, 26900]
...  ...  ...  ...
663  [390891.06]  [481425.8]  [302804, 26900]
664  [406503.62]  [374509.53]  [302804, 26900]
665  [473771.0]  [478128.03]  [302804, 26900]
666  [428174.9]  [442408.25]  [302804, 26900]
667  [902058.6]  [866622.25]  [302804, 26900]

668 rows × 3 columns
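
The metadata.elapsed values are reported in nanoseconds. The following is a minimal sketch converting each entry to milliseconds for readability, assuming each value is a list of per-step times as shown above.

# convert elapsed times from nanoseconds to milliseconds
elapsed_ms = metadatalogs['metadata.elapsed'].apply(
    lambda steps: [ns / 1_000_000 for ns in steps]
)
display(elapsed_ms.head())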

The following demonstrates exporting the shadow deployed logs to the directory shadow.

# Save shadow deployed log files as pandas DataFrame

mainpipeline.export_logs(directory="shadow", file_prefix="shadowdeploylogs")
display(os.listdir('./shadow'))
Warning: There are more logs available. Please set a larger limit to export more data.

['shadowdeploylogs-1.json']

A/B Testing Pipeline

A/B testing allows inference requests to be split between a control model and one or more challenger models. For full details, see the Pipeline Management Guide: A/B Testing.

When the inference results and log entries are displayed, they include the column out._model_split which displays:

Field  Type  Description
name  String  The model name used for the inference.
version  String  The version of the model.
sha  String  The sha hash of the model version.

For this example, the shadow deployed step will be removed and replaced with an A/B Testing step with the ratio 1:1:1, so the control and each of the challenger models will be split randomly between inference requests. A set of sample inferences will be run, then the pipeline logs displayed.

The add_random_split method takes a list of (weight, model) tuples, plus an optional hash key, for example:

pipeline = (wl.build_pipeline("randomsplitpipeline-demo")
            .add_random_split([(2, control), (1, challenger)], "session_id"))

mainpipeline.undeploy()

# remove the shadow deploy steps
mainpipeline.clear()

# Add the A/B testing step to the pipeline
mainpipeline.add_random_split([(1, housing_model_control), (1, housing_model_challenger01), (1, housing_model_challenger02)], "session_id")

mainpipeline.deploy()

# Perform sample inferences of 20 rows and display the results
ab_date_start = datetime.datetime.now()
abtesting_inputs = pd.read_json('./data/xtest-1k.df.json')

for index, row in abtesting_inputs.sample(20).iterrows():
    display(mainpipeline.infer(row.to_frame('tensor').reset_index()).loc[:,["out._model_split", "out.variable"]])

ab_date_end = datetime.datetime.now()
   out._model_split  out.variable
0  [{"name":"logcontrol","version":"096065f1-2d21-4c8b-b499-0c813bc09c04","sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}]  [921695.4]
   out._model_split  out.variable
0  [{"name":"logcontrolchallenger01","version":"470363b5-6353-47ae-8c66-1631b783f562","sha":"31e92d6ccb27b041a324a7ac22cf95d9d6cc3aa7e8263a229f7c4aec4938657c"}]  [455095.16]
   out._model_split  out.variable
0  [{"name":"logcontrolchallenger01","version":"470363b5-6353-47ae-8c66-1631b783f562","sha":"31e92d6ccb27b041a324a7ac22cf95d9d6cc3aa7e8263a229f7c4aec4938657c"}]  [286100.3]
   out._model_split  out.variable
0  [{"name":"logcontrolchallenger02","version":"87a39cd8-1ff5-41a7-b66b-373d49fd2452","sha":"ed6065a79d841f7e96307bb20d5ef22840f15da0b587efb51425c7ad60589d6a"}]  [360863.16]
   out._model_split  out.variable
0  [{"name":"logcontrolchallenger02","version":"87a39cd8-1ff5-41a7-b66b-373d49fd2452","sha":"ed6065a79d841f7e96307bb20d5ef22840f15da0b587efb51425c7ad60589d6a"}]  [356897.44]
   out._model_split  out.variable
0  [{"name":"logcontrolchallenger02","version":"87a39cd8-1ff5-41a7-b66b-373d49fd2452","sha":"ed6065a79d841f7e96307bb20d5ef22840f15da0b587efb51425c7ad60589d6a"}]  [732620.3]
   out._model_split  out.variable
0  [{"name":"logcontrolchallenger01","version":"470363b5-6353-47ae-8c66-1631b783f562","sha":"31e92d6ccb27b041a324a7ac22cf95d9d6cc3aa7e8263a229f7c4aec4938657c"}]  [729163.06]
   out._model_split  out.variable
0  [{"name":"logcontrolchallenger02","version":"87a39cd8-1ff5-41a7-b66b-373d49fd2452","sha":"ed6065a79d841f7e96307bb20d5ef22840f15da0b587efb51425c7ad60589d6a"}]  [725518.4]
   out._model_split  out.variable
0  [{"name":"logcontrol","version":"096065f1-2d21-4c8b-b499-0c813bc09c04","sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}]  [448627.8]
   out._model_split  out.variable
0  [{"name":"logcontrolchallenger01","version":"470363b5-6353-47ae-8c66-1631b783f562","sha":"31e92d6ccb27b041a324a7ac22cf95d9d6cc3aa7e8263a229f7c4aec4938657c"}]  [628291.1]
   out._model_split  out.variable
0  [{"name":"logcontrol","version":"096065f1-2d21-4c8b-b499-0c813bc09c04","sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}]  [340764.53]
   out._model_split  out.variable
0  [{"name":"logcontrolchallenger01","version":"470363b5-6353-47ae-8c66-1631b783f562","sha":"31e92d6ccb27b041a324a7ac22cf95d9d6cc3aa7e8263a229f7c4aec4938657c"}]  [491306.7]
   out._model_split  out.variable
0  [{"name":"logcontrol","version":"096065f1-2d21-4c8b-b499-0c813bc09c04","sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}]  [673288.6]
   out._model_split  out.variable
0  [{"name":"logcontrol","version":"096065f1-2d21-4c8b-b499-0c813bc09c04","sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}]  [464811.13]
   out._model_split  out.variable
0  [{"name":"logcontrolchallenger02","version":"87a39cd8-1ff5-41a7-b66b-373d49fd2452","sha":"ed6065a79d841f7e96307bb20d5ef22840f15da0b587efb51425c7ad60589d6a"}]  [268304.4]
   out._model_split  out.variable
0  [{"name":"logcontrol","version":"096065f1-2d21-4c8b-b499-0c813bc09c04","sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}]  [1182820.6]
   out._model_split  out.variable
0  [{"name":"logcontrolchallenger01","version":"470363b5-6353-47ae-8c66-1631b783f562","sha":"31e92d6ccb27b041a324a7ac22cf95d9d6cc3aa7e8263a229f7c4aec4938657c"}]  [271387.47]
   out._model_split  out.variable
0  [{"name":"logcontrol","version":"096065f1-2d21-4c8b-b499-0c813bc09c04","sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}]  [637377.0]
   out._model_split  out.variable
0  [{"name":"logcontrol","version":"096065f1-2d21-4c8b-b499-0c813bc09c04","sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}]  [403520.16]
   out._model_split  out.variable
0  [{"name":"logcontrolchallenger01","version":"470363b5-6353-47ae-8c66-1631b783f562","sha":"31e92d6ccb27b041a324a7ac22cf95d9d6cc3aa7e8263a229f7c4aec4938657c"}]  [1165000.1]
# Get the logs with the A/B testing information

metadatalogs = mainpipeline.logs(dataset=["time",
                                          "out", 
                                          "metadata"
                                          ]
                                )

display(metadatalogs.loc[:, ['out.variable', 'metadata.last_model']])
Warning: There are more logs available. Please set a larger limit or request a file using export_logs.
    out.variable  metadata.last_model
0   [581003.0]  {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
1   [706823.56]  {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
2   [1060847.5]  {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
3   [441960.38]  {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
4   [827411.0]  {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
...  ...  ...
95  [435628.56]  {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
96  [981676.6]  {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
97  [437177.84]  {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
98  [1208638.0]  {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}
99  [448627.72]  {"model_name":"logcontrol","model_sha":"e22a0831aafd9917f3cc87a15ed267797f80e2afa12ad7d8810ca58f173b8cc6"}

100 rows × 2 columns
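
The out._model_split column is returned as a list containing a JSON string. The following is a hedged sketch of tallying which model served each inference during the test window; json is from the Python standard library, and the column format is assumed to match the inference results shown above.

import json

# count how many logged inferences each model served in the random split
ab_logs = mainpipeline.logs(start_datetime=ab_date_start, end_datetime=ab_date_end)
model_counts = (ab_logs['out._model_split']
                .apply(lambda split: json.loads(split[0])['name'])
                .value_counts())
display(model_counts)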

# Save a/b testing log files as DataFrame

mainpipeline.export_logs(directory="abtesting", 
                         file_prefix="abtests", 
                         start_datetime=ab_date_start, 
                         end_datetime=ab_date_end)
display(os.listdir('./abtesting'))
['abtests-1.json']

The following exports the metadata with the log files.

# Save a/b testing log files as DataFrame

mainpipeline.export_logs(directory="abtesting-metadata", 
                         file_prefix="abtests", 
                         start_datetime=ab_date_start, 
                         end_datetime=ab_date_end,
                         dataset=["time", "out", "metadata"])
display(os.listdir('./abtesting-metadata'))
['abtests-1.json']

Undeploy Main Pipeline

With the examples and tutorial complete, we will undeploy the main pipeline and return the resources back to the Wallaroo instance.

mainpipeline.undeploy()
name          logpipeline-test
created       2023-10-26 16:50:50.930771+00:00
last_updated  2023-10-26 17:46:09.340797+00:00
deployed      False
tags
versions      7351b1ab-3038-48e6-81ef-0378a8afc36d, b487e3d5-3cff-42de-b4d2-40eab6ad37c3, 9aa799f3-bae4-49d5-93ec-82ae91a34ea0, 951f1a15-5aa3-478b-ac64-a1d11a070ab4
steps         logcontrol
published     False