Wallaroo ML Workload Orchestration Google BigQuery with House Price Model Tutorial

A tutorial on using the ML Workload Orchestration with BigQuery and the house price model.

This can be downloaded as part of the Wallaroo Tutorials repository.

Wallaroo ML Workload Orchestration House Price Model Tutorial

This tutorial provides a quick set of methods and examples regarding Wallaroo Connections and Wallaroo ML Workload Orchestration. For full details, see the Wallaroo Documentation site.

Wallaroo provides Data Connections and ML Workload Orchestrations that give organizations a method of creating and managing automated tasks that can run either on demand or on a regular schedule.

Definitions

  • Orchestration: A set of instructions written as a Python script with a requirements file. Orchestrations are uploaded to the Wallaroo instance as a .zip file.
  • Task: An implementation of an orchestration. Tasks are run either once when requested, on a repeating schedule, or as a service.
  • Connection: Connection information for a data source, defined by MLOps engineers and used by other Wallaroo users. Usually paired with orchestrations.

This tutorial will focus on using Google BigQuery as the data source.

Tutorial Goals

The tutorial will demonstrate the following:

  1. Create a Wallaroo connection to retrieve information from a Google BigQuery source table.
  2. Create a Wallaroo connection to store inference results into a Google BigQuery destination table.
  3. Upload a Wallaroo ML Workload Orchestration that supports BigQuery connections with the connection details.
  4. Run the orchestration once as a Run Once Task and verify that the inference request succeeded and the inference results were saved to the external data store.
  5. Schedule the orchestration as a Scheduled Task and verify that the inference request succeeded and the inference results were saved to the external data store.

Prerequisites

  • An installed Wallaroo instance.
  • The following Python libraries installed. These are included by default in a Wallaroo instance’s JupyterHub service.
    • os
    • wallaroo: The Wallaroo SDK. Included with the Wallaroo JupyterHub service by default.
    • pandas: Pandas, mainly used for Pandas DataFrame
    • pyarrow: PyArrow for Apache Arrow support
  • The following Python libraries. These are not included in a Wallaroo instance’s JupyterHub service.
    • google-cloud-bigquery: The Google BigQuery client library.
    • google-auth: Google authentication, used here to build service account credentials.
    • db-dtypes: Pandas data types used by the BigQuery client when returning DataFrames.
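
If these are not present in the environment, they can be installed from a notebook cell. A minimal sketch, assuming the standard PyPI package names for the imports used below:

%pip install google-cloud-bigquery google-auth db-dtypes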

Tutorial Resources

  • Models:
    • models/rf_model.onnx: A model that predicts house price values.
  • Data:
    • data/xtest-1.df.json and data/xtest-1k.df.json: DataFrame JSON inference inputs with 1 input and 1,000 inputs, respectively.
    • data/xtest-1k.arrow: Apache Arrow inference inputs with 1,000 inputs.
    • Sample inference inputs that can be imported into Google BigQuery.
      • data/xtest-1k.df.json: Random sample housing prices.
      • data/smallinputs.df.json: Sample housing prices that return results lower than $1.5 million.
      • data/biginputs.df.json: Sample housing prices that return results higher than $1.5 million.
    • SQL queries to create the inputs/outputs tables with schema.
      • ./resources/create_inputs_table.sql: Inputs table with schema.
      • ./resources/create_outputs_table.sql: Outputs table with schema.
      • ./resources/housrpricesga_inputs.avro: Avro container of inputs table.

Initial Steps

For this tutorial, we’ll create a workspace, upload our sample model, and deploy a pipeline. We’ll perform some quick sample inferences to verify that everything is working.

Load Libraries

Here we’ll import the various libraries we’ll use for the tutorial.

import wallaroo
from wallaroo.object import EntityNotFoundError, RequiredAttributeMissing

# to display dataframe tables
from IPython.display import display
# used to display dataframe information without truncating
import pandas as pd
pd.set_option('display.max_colwidth', None)
pd.set_option('display.max_columns', None)
import pyarrow as pa

import time
import json

# for Big Query connections
from google.cloud import bigquery
from google.oauth2 import service_account
import db_dtypes

Connect to the Wallaroo Instance

The first step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the JupyterHub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection in a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). For more information on Wallaroo Client settings, see the Client Connection guide.

# Login through local Wallaroo instance

wl = wallaroo.Client()

Variable Declaration

The following variables will be used for our BigQuery testing.

We’ll use two connections:

  • bigquery_input_connection: The connection that will draw inference input data from a BigQuery table.
  • bigquery_output_connection: The connection that will upload inference results into a BigQuery table.

Note that for the connection arguments, we’ll retrieve the information from the files ./bigquery_service_account_input_key.json and ./bigquery_service_account_output_key.json, which include the service account key (SAK) information as well as the dataset and table used.

Field | Included in SAK
type | ✅
project_id | ✅
private_key_id | ✅
private_key | ✅
client_email | ✅
auth_uri | ✅
token_uri | ✅
auth_provider_x509_cert_url | ✅
client_x509_cert_url | ✅
dataset | 🚫
table | 🚫
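
For reference, here is a hypothetical layout for one of these key files, sketched as a Python dict. Every value is a placeholder, not a working credential; the first nine fields follow the standard Google Cloud service account key format, while dataset and table are additions for this tutorial.

# Hypothetical contents of ./bigquery_service_account_input_key.json
# (placeholder values only)
sample_service_account_key = {
    "type": "service_account",
    "project_id": "my-gcp-project",
    "private_key_id": "0123456789abcdef",
    "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
    "client_email": "my-service-account@my-gcp-project.iam.gserviceaccount.com",
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://oauth2.googleapis.com/token",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
    "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/my-service-account%40my-gcp-project.iam.gserviceaccount.com",
    "dataset": "my_dataset",   # not part of the SAK; added for this tutorial
    "table": "my_table"        # not part of the SAK; added for this tutorial
}
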
# Setting variables for later steps

workspace_name = 'bigqueryworkspace'
pipeline_name = 'bigquerypipeline'
model_name = 'bigquerymodel'
model_file_name = './models/rf_model.onnx'

bigquery_connection_input_name = 'bigqueryhouseinputs'
bigquery_connection_input_type = "BIGQUERY"
bigquery_connection_input_argument = json.load(open("./bigquery_service_account_input_key.json"))

bigquery_connection_output_name = 'bigqueryhouseoutputs'
bigquery_connection_output_type = "BIGQUERY"
bigquery_connection_output_argument = json.load(open("./bigquery_service_account_output_key.json"))

Create the Workspace and Pipeline

We’ll now create our workspace and pipeline for the tutorial. If this tutorial has been run previously, then this will retrieve the existing ones with the assumption they’re for use with this tutorial.

We’ll set the retrieved workspace as the current workspace in the SDK, so all commands will default to that workspace.

workspace = wl.get_workspace(name=workspace_name, create_if_not_exist=True)
wl.set_current_workspace(workspace)

pipeline = wl.build_pipeline(pipeline_name)

Upload the Model and Deploy Pipeline

We’ll upload our model into our sample workspace, then add it as a pipeline step before deploying the pipeline so it’s ready to accept inference requests.

# Upload the model

housing_model_control = (wl.upload_model(model_name, 
                                         model_file_name, 
                                         framework=wallaroo.framework.Framework.ONNX)
                                         .configure(tensor_fields=["tensor"])
                        )

# Add the model as a pipeline step

pipeline.add_model_step(housing_model_control)
name: bigquerypipeline
created: 2024-04-17 17:09:17.626798+00:00
last_updated: 2024-04-17 17:33:00.819639+00:00
deployed: True
arch: x86
accel: none
tags:
versions: d1b689bc-4b15-4e95-a97d-fea7dab28dea, 260fcd02-3de6-4cf1-bd6f-912ab53999b5, 85946c55-83d9-49ad-989c-519957ad5fb8, c044370d-cd77-468e-a6a6-4ac0ed7f1b8e, b7348932-cfca-42a6-99fe-efc88273771e
steps: bigquerymodel
published: False
# deploy the pipeline to set the pipeline steps
pipeline.deploy()
name: bigquerypipeline
created: 2024-04-17 17:09:17.626798+00:00
last_updated: 2024-04-17 17:34:09.277972+00:00
deployed: True
arch: x86
accel: none
tags:
versions: 2c749d44-0dd0-4e2c-9d1f-de6dcb073af7, d1b689bc-4b15-4e95-a97d-fea7dab28dea, 260fcd02-3de6-4cf1-bd6f-912ab53999b5, 85946c55-83d9-49ad-989c-519957ad5fb8, c044370d-cd77-468e-a6a6-4ac0ed7f1b8e, b7348932-cfca-42a6-99fe-efc88273771e
steps: bigquerymodel
published: False

Create Connections

We will create the data source connections via the Wallaroo client command create_connection, which takes the following parameters.

  • name (string, Required): The name of the connection. This must be unique - if submitting the name of an existing connection it will return an error.
  • type (string, Required): The user defined type of connection.
  • details (Dict, Required): User defined configuration details for the data connection. These can be {'username': 'dataperson', 'password': 'datapassword', 'port': 3339}, or {'token': 'abcde123==', 'host': 'example.com', 'port': 1234}, or other user defined combinations.

  • IMPORTANT NOTE: Data connection names must be unique. Attempting to create a data connection with the same name as an existing data connection will result in an error.
connection_input = wl.create_connection(bigquery_connection_input_name, bigquery_connection_input_type, bigquery_connection_input_argument)
connection_output = wl.create_connection(bigquery_connection_output_name, bigquery_connection_output_type, bigquery_connection_output_argument)
wl.list_connections()
name | connection type | details | created at | linked workspaces
mitochondria_image_source | HTTP | ***** | 2024-04-16T17:51:37.014995+00:00 | []
external_inference_connection | HTTP | ***** | 2024-04-17T15:37:00.958462+00:00 | ['simpleorchestrationworkspace']
external_inference_connection_sample | HTTP | ***** | 2024-04-17T15:49:26.695606+00:00 | ['simpleorchestrationworkspace']
houseprice_arrow_table | HTTPFILE | ***** | 2024-04-17T16:12:57.815962+00:00 | ['orchestrationworkspace']
bigqueryhouseinputs | BIGQUERY | ***** | 2024-04-17T17:34:12.065733+00:00 | []
bigqueryhouseoutputs | BIGQUERY | ***** | 2024-04-17T17:34:12.207738+00:00 | []

Get Connection by Name

The Wallaroo client method get_connection(name) retrieves the connection that matches the name parameter. We’ll retrieve our connections and store them as big_query_input_connection and big_query_output_connection.

big_query_input_connection = wl.get_connection(name=bigquery_connection_input_name)
big_query_output_connection = wl.get_connection(name=bigquery_connection_output_name)
display(big_query_input_connection)
display(big_query_output_connection)
Field | Value
Name | bigqueryhouseinputs
Connection Type | BIGQUERY
Details | *****
Created At | 2024-04-17T17:34:12.065733+00:00
Linked Workspaces | []

Field | Value
Name | bigqueryhouseoutputs
Connection Type | BIGQUERY
Details | *****
Created At | 2024-04-17T17:34:12.207738+00:00
Linked Workspaces | []

Add Connection to Workspace

The Workspace method add_connection(name) adds a Data Connection to a workspace, and takes the following parameter.

  • name (string, Required): The name of the Data Connection.

We’ll add both connections to our sample workspace, then list the connections available to the workspace to confirm.
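
A minimal sketch of those two steps, assuming the add_connection and list_connections methods on the Workspace object:

workspace.add_connection(bigquery_connection_input_name)
workspace.add_connection(bigquery_connection_output_name)
workspace.list_connections()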

BigQuery Connection Inference Example

We can test the BigQuery connection with a simple inference to our deployed pipeline. We’ll request the data, format the table into a pandas DataFrame, then submit it for an inference request.

Create Google Credentials

From our BigQuery connection details, we’ll create the credentials for our BigQuery clients.

bigquery_input_credentials = service_account.Credentials.from_service_account_info(
    big_query_input_connection.details())

bigquery_output_credentials = service_account.Credentials.from_service_account_info(
    big_query_output_connection.details())

Connect to Google BigQuery

We can now generate a client from our connection details, specifying the project that was included in each connection’s details.

bigqueryinputclient = bigquery.Client(
    credentials=bigquery_input_credentials, 
    project=big_query_input_connection.details()['project_id']
)
bigqueryoutputclient = bigquery.Client(
    credentials=bigquery_output_credentials, 
    project=big_query_output_connection.details()['project_id']
)

Query Data

Now we’ll create our query and retrieve information from our dataset and table as defined in the file bigquery_service_account_input_key.json. The table is expected to be in the format of the file ./data/xtest-1k.df.json.

inference_dataframe_input = bigqueryinputclient.query(
        f"""
        SELECT tensor
        FROM {big_query_input_connection.details()['dataset']}.{big_query_input_connection.details()['table']}"""
    ).to_dataframe()
inference_dataframe_input.head(5)
tensor
0  [4.0, 2.5, 2900.0, 5505.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2900.0, 0.0, 47.6063, -122.02, 2970.0, 5251.0, 12.0, 0.0, 0.0]
1  [2.0, 2.5, 2170.0, 6361.0, 1.0, 0.0, 2.0, 3.0, 8.0, 2170.0, 0.0, 47.7109, -122.017, 2310.0, 7419.0, 6.0, 0.0, 0.0]
2  [3.0, 2.5, 1300.0, 812.0, 2.0, 0.0, 0.0, 3.0, 8.0, 880.0, 420.0, 47.5893, -122.317, 1300.0, 824.0, 6.0, 0.0, 0.0]
3  [4.0, 2.5, 2500.0, 8540.0, 2.0, 0.0, 0.0, 3.0, 9.0, 2500.0, 0.0, 47.5759, -121.994, 2560.0, 8475.0, 24.0, 0.0, 0.0]
4  [3.0, 1.75, 2200.0, 11520.0, 1.0, 0.0, 0.0, 4.0, 7.0, 2200.0, 0.0, 47.7659, -122.341, 1690.0, 8038.0, 62.0, 0.0, 0.0]

Sample Inference

With our data retrieved, we’ll perform an inference and display the results.

result = pipeline.infer(inference_dataframe_input)
display(result.head(5))
time | in.tensor | out.variable | anomaly.count
0 | 2024-04-17 17:43:04.437 | [4.0, 2.5, 2900.0, 5505.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2900.0, 0.0, 47.6063, -122.02, 2970.0, 5251.0, 12.0, 0.0, 0.0] | [718013.75] | 0
1 | 2024-04-17 17:43:04.437 | [2.0, 2.5, 2170.0, 6361.0, 1.0, 0.0, 2.0, 3.0, 8.0, 2170.0, 0.0, 47.7109, -122.017, 2310.0, 7419.0, 6.0, 0.0, 0.0] | [615094.56] | 0
2 | 2024-04-17 17:43:04.437 | [3.0, 2.5, 1300.0, 812.0, 2.0, 0.0, 0.0, 3.0, 8.0, 880.0, 420.0, 47.5893, -122.317, 1300.0, 824.0, 6.0, 0.0, 0.0] | [448627.72] | 0
3 | 2024-04-17 17:43:04.437 | [4.0, 2.5, 2500.0, 8540.0, 2.0, 0.0, 0.0, 3.0, 9.0, 2500.0, 0.0, 47.5759, -121.994, 2560.0, 8475.0, 24.0, 0.0, 0.0] | [758714.2] | 0
4 | 2024-04-17 17:43:04.437 | [3.0, 1.75, 2200.0, 11520.0, 1.0, 0.0, 0.0, 4.0, 7.0, 2200.0, 0.0, 47.7659, -122.341, 1690.0, 8038.0, 62.0, 0.0, 0.0] | [513264.7] | 0

Upload the Results

With the query complete, we’ll upload the results back to the BigQuery dataset. The insert_rows_from_dataframe method returns one list of insertion errors per chunk of rows, so the [[], []] output below indicates that all rows were inserted successfully.

output_table = bigqueryoutputclient.get_table(f"{big_query_output_connection.details()['dataset']}.{big_query_output_connection.details()['table']}")

# BigQuery column names can't contain periods, so the Wallaroo inference
# result columns are renamed to use underscores before the upload
bigqueryoutputclient.insert_rows_from_dataframe(
    output_table, 
    dataframe=result.rename(columns={"in.tensor":"in_tensor", "out.variable":"out_variable", 'anomaly.count':'anomaly_count'})
)
[[], []]

Wallaroo ML Workload Orchestration Example

With the pipeline deployed and our connections set, we will now generate our ML Workload Orchestration. See the Wallaroo ML Workload Orchestrations guide for full details.

Orchestrations are uploaded to the Wallaroo instance as a ZIP file with the following requirements:

  • User Code (Required): Python script as .py files. If main.py exists, then that will be used as the task entrypoint. Otherwise, the first main.py found in any subdirectory will be used as the entrypoint.
  • Python Library Requirements (Optional): A requirements.txt file in the requirements file format, listing any dependencies to be provided in the task environment. The Wallaroo SDK will already be present and should not be included in the requirements.txt. Multiple requirements.txt files are not allowed.
  • Other artifacts: Other artifacts such as files, data, or code to support the orchestration.

For our example, our orchestration will:

  1. Use the bigquery_remote_inference to open a connection to the input and output tables.
  2. Deploy the pipeline.
  3. Perform an inference with the input data.
  4. Save the inference results to the output table.
  5. Undeploy the pipeline.

This sample script is stored in bigquery_remote_inference/main.py with a requirements.txt file listing the specific libraries for the Google BigQuery connection, and packaged into the orchestration as ./bigquery_remote_inference/bigquery_remote_inference.zip. We’ll display the steps in uploading the orchestration to the Wallaroo instance.

Note that the orchestration handles deploying and undeploying the pipeline itself; it assumes the pipeline, model, and connections created above already exist in the Wallaroo instance.
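
As a reference, the following is a minimal sketch of a main.py implementing the five steps above, built from the same calls used elsewhere in this tutorial. It is illustrative only; the packaged script may differ in its details, and the requirements.txt is assumed to list google-cloud-bigquery, google-auth, and db-dtypes.

import wallaroo
from google.cloud import bigquery
from google.oauth2 import service_account

wl = wallaroo.Client()

# 1. Open the input and output connections registered earlier.
input_connection = wl.get_connection("bigqueryhouseinputs")
output_connection = wl.get_connection("bigqueryhouseoutputs")

input_client = bigquery.Client(
    credentials=service_account.Credentials.from_service_account_info(input_connection.details()),
    project=input_connection.details()['project_id']
)
output_client = bigquery.Client(
    credentials=service_account.Credentials.from_service_account_info(output_connection.details()),
    project=output_connection.details()['project_id']
)

# 2. Deploy the pipeline.
workspace = wl.get_workspace(name="bigqueryworkspace", create_if_not_exist=True)
wl.set_current_workspace(workspace)
pipeline = wl.build_pipeline("bigquerypipeline")
pipeline.deploy()

# 3. Perform an inference with the input data.
input_df = input_client.query(
    f"SELECT tensor FROM {input_connection.details()['dataset']}.{input_connection.details()['table']}"
).to_dataframe()
result = pipeline.infer(input_df)

# 4. Save the inference results to the output table, renaming the columns
#    since BigQuery column names can't contain periods.
output_table = output_client.get_table(
    f"{output_connection.details()['dataset']}.{output_connection.details()['table']}"
)
output_client.insert_rows_from_dataframe(
    output_table,
    dataframe=result.rename(columns={"in.tensor": "in_tensor", "out.variable": "out_variable", "anomaly.count": "anomaly_count"})
)

# 5. Undeploy the pipeline.
pipeline.undeploy()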

Upload the Orchestration

Orchestrations are uploaded with the Wallaroo client upload_orchestration(path) method with the following parameters.

  • path (string, Required): The path to the ZIP file to be uploaded.

Once uploaded, the orchestration will be packaged and any requirements will be downloaded and installed.

For this example, the orchestration ./bigquery_remote_inference/bigquery_remote_inference.zip will be uploaded and saved to the variable orchestration. Then we will loop until the uploaded orchestration’s status displays ready.

orchestration = wl.upload_orchestration(path="./bigquery_remote_inference/bigquery_remote_inference.zip")

while orchestration.status() != 'ready' and orchestration.status() != 'error':
    print(orchestration.status())
    time.sleep(5)
pending_packaging
pending_packaging
packaging
packaging
packaging
packaging
packaging
packaging
packaging
packaging
packaging
wl.list_orchestrations()
id | name | status | filename | sha | created at | updated at
e1cd47ba-9028-4ed2-a09e-a836f4a0c71f | None | ready | bigquery_remote_inference.zip | 3e5f6d...864483 | 2024-17-Apr 17:48:18 | 2024-17-Apr 17:49:13

Task Management Tutorial

Once an Orchestration has the status ready, it can be run as a task. Tasks have two run options.

  • Once: orchestration.run_once(name, json_args, timeout). The task runs once and exits.
  • Scheduled: orchestration.run_scheduled(name, schedule, timeout, json_args). The user provides the schedule; the task runs and exits whenever the schedule dictates.

Run Task Once

We’ll generate a Run Once Task from our orchestration and execute it.

Tasks are generated and run once with the Orchestration run_once(name, json_args, timeout) method. Any arguments for the orchestration are passed in as a Dict. If there are no arguments, then an empty set {} is passed.

We’ll display the last 5 rows of our BigQuery output table, then start the task that will perform the same inference we did above.

task_inference_results = bigqueryoutputclient.query(
        f"""
        SELECT *
        FROM {big_query_output_connection.details()['dataset']}.{big_query_output_connection.details()['table']}
        ORDER BY time DESC
        LIMIT 5
        """
    ).to_dataframe()

display(task_inference_results)
time | in_tensor | out_variable | anomaly_count
0 | 2024-04-17 17:43:04.437 | [3.0, 0.75, 920.0, 20412.0, 1.0, 1.0, 2.0, 5.0, 6.0, 920.0, 0.0, 47.4781, -122.49, 1162.0, 54705.0, 64.0, 0.0, 0.0] | [338418.8] | 0
1 | 2024-04-17 17:43:04.437 | [2.0, 1.0, 850.0, 5000.0, 1.0, 0.0, 0.0, 3.0, 6.0, 850.0, 0.0, 47.3817, -122.314, 1160.0, 5000.0, 39.0, 0.0, 0.0] | [236238.66] | 0
2 | 2024-04-17 17:43:04.437 | [3.0, 2.5, 1570.0, 1433.0, 3.0, 0.0, 0.0, 3.0, 8.0, 1570.0, 0.0, 47.6858, -122.336, 1570.0, 2652.0, 4.0, 0.0, 0.0] | [557391.25] | 0
3 | 2024-04-17 17:43:04.437 | [4.0, 2.5, 2800.0, 246114.0, 2.0, 0.0, 0.0, 3.0, 9.0, 2800.0, 0.0, 47.6586, -121.962, 2750.0, 60351.0, 15.0, 0.0, 0.0] | [765468.75] | 0
4 | 2024-04-17 17:43:04.437 | [3.0, 2.5, 2390.0, 15669.0, 2.0, 0.0, 0.0, 3.0, 9.0, 2390.0, 0.0, 47.7446, -122.193, 2640.0, 12500.0, 24.0, 0.0, 0.0] | [741973.6] | 0
# Example: run once

import datetime
task_start = datetime.datetime.now()

task = orchestration.run_once(name="big query single run", json_args={})
task
Field | Value
ID | e2951a1c-d272-4968-94a8-d2acf8a18669
Name | big query single run
Last Run Status | unknown
Type | Temporary Run
Active | True
Schedule | -
Created At | 2024-17-Apr 17:50:37
Updated At | 2024-17-Apr 17:50:37

Task Status

The list of tasks in the Wallaroo instance is retrieved through the Wallaroo Client list_tasks() method. This returns a list of tasks with the following fields.

  • id (string): The UUID identifier for the task.
  • last run status (string): The last reported status of the task. Values are:
    • pending: The task has not been started.
    • started: The task has been scheduled to execute.
    • pending_kill: The task kill command has been issued and the task is scheduled to be stopped.
  • type (string): The type of the task. Values are:
    • Temporary Run: The task runs once, then stops.
    • Scheduled Run: The task repeats on a cron-like schedule.
  • schedule (string): The schedule for the task. For a run once task, the schedule will be -.
  • created at (DateTime): The date and time the task was started.
  • updated at (DateTime): The date and time the task was updated.

For this example, we’ll poll the status of the previously created task until it reaches the status started.

while task.status() != "started":
    display(task.status())
    time.sleep(5)
'pending'
wl.list_tasks()
id | name | last run status | type | active | schedule | created at | updated at
e2951a1c-d272-4968-94a8-d2acf8a18669 | big query single run | running | Temporary Run | True | - | 2024-17-Apr 17:50:37 | 2024-17-Apr 17:50:42

Task Results

We can view the inferences from our logs and verify that new entries were added from our task. We’ll query the last 5 rows of our inference output table after a wait of 60 seconds.

time.sleep(60)

task_inference_results = bigqueryoutputclient.query(
        f"""
        SELECT *
        FROM {big_query_output_connection.details()['dataset']}.{big_query_output_connection.details()['table']}
        ORDER BY time DESC LIMIT 5"""
    ).to_dataframe()

display(task_inference_results)
time | in_tensor | out_variable | anomaly_count
0 | 2024-04-17 17:50:49.032 | [4.0, 2.5, 2500.0, 8540.0, 2.0, 0.0, 0.0, 3.0, 9.0, 2500.0, 0.0, 47.5759, -121.994, 2560.0, 8475.0, 24.0, 0.0, 0.0] | [758714.2] | 0
1 | 2024-04-17 17:50:49.032 | [4.0, 2.5, 2900.0, 5505.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2900.0, 0.0, 47.6063, -122.02, 2970.0, 5251.0, 12.0, 0.0, 0.0] | [718013.75] | 0
2 | 2024-04-17 17:50:49.032 | [2.0, 2.5, 2170.0, 6361.0, 1.0, 0.0, 2.0, 3.0, 8.0, 2170.0, 0.0, 47.7109, -122.017, 2310.0, 7419.0, 6.0, 0.0, 0.0] | [615094.56] | 0
3 | 2024-04-17 17:50:49.032 | [3.0, 1.75, 2200.0, 11520.0, 1.0, 0.0, 0.0, 4.0, 7.0, 2200.0, 0.0, 47.7659, -122.341, 1690.0, 8038.0, 62.0, 0.0, 0.0] | [513264.7] | 0
4 | 2024-04-17 17:50:49.032 | [3.0, 2.5, 1300.0, 812.0, 2.0, 0.0, 0.0, 3.0, 8.0, 880.0, 420.0, 47.5893, -122.317, 1300.0, 824.0, 6.0, 0.0, 0.0] | [448627.72] | 0

Scheduled Run Task Example

The other method of using tasks is as a scheduled run through the Orchestration run_scheduled(name, schedule, timeout, json_args) method. This sets up a task to run on a regular schedule, as defined by the schedule parameter in the cron service format. For example:

schedule="42 * * * *"

Runs on the 42nd minute of every hour.

The following schedule runs every day at 12 noon from February 1 to February 15, 2024, and then ends.

schedule="0 0 12 1-15 2 2024"

For our example, we will create a scheduled task to run every 5 minutes, display the inference results, then use the task’s kill() method to keep the task from running any further.

It is recommended that orchestrations that have pipeline deploy or undeploy commands be scheduled no less than 5 minutes apart to prevent colliding with other tasks that use the same pipeline.

task_inference_results = bigqueryoutputclient.query(
        f"""
        SELECT *
        FROM {big_query_output_connection.details()['dataset']}.{big_query_output_connection.details()['table']}
        ORDER BY time DESC LIMIT 5"""
    ).to_dataframe()

display(task_inference_results.tail(5))

scheduled_task = orchestration.run_scheduled(name="simple_inference_schedule", schedule="*/5 * * * *", timeout=120, json_args={})
time | in_tensor | out_variable | anomaly_count
0 | 2024-04-17 17:50:49.032 | [4.0, 2.5, 2500.0, 8540.0, 2.0, 0.0, 0.0, 3.0, 9.0, 2500.0, 0.0, 47.5759, -121.994, 2560.0, 8475.0, 24.0, 0.0, 0.0] | [758714.2] | 0
1 | 2024-04-17 17:50:49.032 | [4.0, 2.5, 2900.0, 5505.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2900.0, 0.0, 47.6063, -122.02, 2970.0, 5251.0, 12.0, 0.0, 0.0] | [718013.75] | 0
2 | 2024-04-17 17:50:49.032 | [2.0, 2.5, 2170.0, 6361.0, 1.0, 0.0, 2.0, 3.0, 8.0, 2170.0, 0.0, 47.7109, -122.017, 2310.0, 7419.0, 6.0, 0.0, 0.0] | [615094.56] | 0
3 | 2024-04-17 17:50:49.032 | [3.0, 1.75, 2200.0, 11520.0, 1.0, 0.0, 0.0, 4.0, 7.0, 2200.0, 0.0, 47.7659, -122.341, 1690.0, 8038.0, 62.0, 0.0, 0.0] | [513264.7] | 0
4 | 2024-04-17 17:50:49.032 | [3.0, 2.5, 1300.0, 812.0, 2.0, 0.0, 0.0, 3.0, 8.0, 880.0, 420.0, 47.5893, -122.317, 1300.0, 824.0, 6.0, 0.0, 0.0] | [448627.72] | 0
while scheduled_task.status() != "started":
    display(scheduled_task.status())
    time.sleep(5)
# wait 420 seconds to give the scheduled event time to finish
time.sleep(420)
task_inference_results = bigqueryoutputclient.query(
        f"""
        SELECT *
        FROM {big_query_output_connection.details()['dataset']}.{big_query_output_connection.details()['table']}
        ORDER BY time DESC LIMIT 5"""
    ).to_dataframe()

display(task_inference_results.tail(5))
time | in_tensor | out_variable | anomaly_count
0 | 2024-04-17 18:00:38.347 | [4.0, 2.5, 2500.0, 8540.0, 2.0, 0.0, 0.0, 3.0, 9.0, 2500.0, 0.0, 47.5759, -121.994, 2560.0, 8475.0, 24.0, 0.0, 0.0] | [758714.2] | 0
1 | 2024-04-17 18:00:38.347 | [4.0, 2.5, 2900.0, 5505.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2900.0, 0.0, 47.6063, -122.02, 2970.0, 5251.0, 12.0, 0.0, 0.0] | [718013.75] | 0
2 | 2024-04-17 18:00:38.347 | [2.0, 2.5, 2170.0, 6361.0, 1.0, 0.0, 2.0, 3.0, 8.0, 2170.0, 0.0, 47.7109, -122.017, 2310.0, 7419.0, 6.0, 0.0, 0.0] | [615094.56] | 0
3 | 2024-04-17 18:00:38.347 | [3.0, 1.75, 2200.0, 11520.0, 1.0, 0.0, 0.0, 4.0, 7.0, 2200.0, 0.0, 47.7659, -122.341, 1690.0, 8038.0, 62.0, 0.0, 0.0] | [513264.7] | 0
4 | 2024-04-17 18:00:38.347 | [3.0, 2.5, 1300.0, 812.0, 2.0, 0.0, 0.0, 3.0, 8.0, 880.0, 420.0, 47.5893, -122.317, 1300.0, 824.0, 6.0, 0.0, 0.0] | [448627.72] | 0

Kill Task

With our testing complete, we will kill the scheduled task so it will not run again. Issuing the kill command places the task in pending_kill status until it is stopped.

scheduled_task.kill()

<ArbexStatus.PENDING_KILL: 'pending_kill'>

Cleanup

With the tutorial complete, we can undeploy the pipeline and return the resources back to the Wallaroo instance.

pipeline.undeploy()
name: bigquerypipeline
created: 2024-04-17 17:09:17.626798+00:00
last_updated: 2024-04-17 17:34:09.277972+00:00
deployed: False
arch: x86
accel: none
tags:
versions: 2c749d44-0dd0-4e2c-9d1f-de6dcb073af7, d1b689bc-4b15-4e95-a97d-fea7dab28dea, 260fcd02-3de6-4cf1-bd6f-912ab53999b5, 85946c55-83d9-49ad-989c-519957ad5fb8, c044370d-cd77-468e-a6a6-4ac0ed7f1b8e, b7348932-cfca-42a6-99fe-efc88273771e
steps: bigquerymodel
published: False