Wallaroo ML Workload Orchestration Comprehensive Tutorial

A tutorial on using ML Workload Orchestration, with examples of Wallaroo connections and ML Workload Orchestrations.

This can be downloaded as part of the Wallaroo Tutorials repository.

This tutorial provides a complete set of methods and examples regarding Wallaroo Connections and Wallaroo ML Workload Orchestration.

Wallaroo provides data connections, orchestrations, and tasks to give organizations a method of creating and managing automated tasks that can either be run on demand, on a regular schedule, or as a service that responds to requests.

Object | Description
Orchestration | A set of instructions written as a Python script with a requirements file. Orchestrations are uploaded to the Wallaroo instance.
Task | An implementation of an orchestration. Tasks are run either once when requested, on a repeating schedule, or as a service.
Connection | Definitions set by MLOps engineers that are used by other Wallaroo users for connection information to a data source. Usually paired with orchestrations.

A typical flow in the orchestration, task and connection life cycle is as follows (a compact SDK sketch appears after this list):

  1. (Optional) A connection is defined with information such as username, connection URL, tokens, etc.
  2. One or more connections are applied to a workspace for users to implement in their code or orchestrations.
  3. An orchestration is created to perform some set instructions. For example:
    1. Deploy a pipeline, request data from an external service, store the results in an external database, then undeploy the pipeline.
    2. Download an ML model, then replace a current pipeline step with the new version.
    3. Collect log files from a deployed pipeline once every hour and submit them to a Kafka or other service.
  4. A task is created that specifies the orchestration to perform and the schedule:
    1. Run once.
    2. Run on a schedule (based on cron like settings).
    3. Run as a service that executes whenever requested.
  5. Once the use for a task is complete, it is killed and its schedule or service removed.
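
The following is a compact sketch of this life cycle using the SDK methods demonstrated throughout this tutorial; all names, paths, and argument values here are placeholders.

import wallaroo

wl = wallaroo.Client()

# 1-2. Define a connection and apply it to a workspace.
wl.create_connection("my_connection", "HTTPFILE",
                     {'host': 'https://example.com/data.arrow'})
workspace = wl.get_workspace(name="my_workspace", create_if_not_exist=True)
workspace.add_connection("my_connection")

# 3. Upload an orchestration: a ZIP file with a main.py and requirements.txt.
orchestration = wl.upload_orchestration(name="my orchestration",
                                        path="./my_orchestration.zip")

# 4. Create a task from the orchestration - here, on a cron-style schedule.
task = orchestration.run_scheduled(name="my task",
                                   schedule="*/5 * * * *",
                                   timeout=600,
                                   json_args={})

# 5. Kill the task once its use is complete.
task.kill()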

Tutorial Goals

The tutorial will demonstrate the following:

  1. Create a simple connection to retrieve an Apache Arrow table file from a GitHub registry.
  2. Create an orchestration that retrieves the Apache Arrow table file from the location defined by the connection, deploys a pipeline, performs an inference, then undeploys the pipeline.
  3. Implement the orchestration as a task that runs every minute.
  4. Display the logs from the pipeline after 5 minutes to verify the task is running.

Tutorial Required Libraries

The following libraries are required for this tutorial, and included by default in a Wallaroo instance’s JupyterHub service.

  • IMPORTANT NOTE: These libraries are already installed in the Wallaroo JupyterHub service. Do not uninstall and reinstall the Wallaroo SDK with the command below.

  • wallaroo: The Wallaroo SDK.

  • pandas: The pandas data analysis library.

  • pyarrow: The Apache Arrow Python library.

The specific versions used are set in the file ./resources/requirements.txt. These libraries are installed with pip or conda commands. For example, from the root of this tutorials folder:

pip install -r ./resources/requirements.txt

Initialization

The first step is to connect to a Wallaroo instance. We’ll load the libraries and set our client connection settings.

Workspace, Model and Pipeline Setup

For this tutorial, we’ll create a workspace, upload our sample model and deploy a pipeline. We’ll perform some quick sample inferences to verify that everything is working.

import wallaroo
from wallaroo.object import EntityNotFoundError

# to display dataframe tables
from IPython.display import display
# used to display dataframe information without truncating
import pandas as pd
pd.set_option('display.max_colwidth', None)
import pyarrow as pa

import requests

Connect to the Wallaroo Instance

The first step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). For more information on Wallaroo Client settings, see the Client Connection guide.

# Login through local Wallaroo instance

wl = wallaroo.Client()
# Setting variables for later steps

workspace_name = 'orchestrationworkspace'
pipeline_name = 'orchestrationpipeline'
model_name = 'orchestrationmodel'
model_file_name = './models/rf_model.onnx'
connection_name = 'houseprice_arrow_table'

Create the Workspace and Pipeline

We’ll now create our workspace and pipeline for the tutorial. If this tutorial has been run previously, then this will retrieve the existing ones with the assumption they’re for use with this tutorial.

We’ll set the retrieved workspace as the current workspace in the SDK, so all commands will default to that workspace.

workspace = wl.get_workspace(name=workspace_name, create_if_not_exist=True)
wl.set_current_workspace(workspace)

pipeline = wl.build_pipeline(pipeline_name)

Upload the Model and Deploy Pipeline

We’ll upload our model into our sample workspace, then add it as a pipeline step before deploying the pipeline so it’s ready to accept inference requests.

# Upload the model

housing_model_control = (wl.upload_model(model_name, 
                                         model_file_name, 
                                         framework=wallaroo.framework.Framework.ONNX)
                                         .configure(tensor_fields=["tensor"])
                        )

# Add the model as a pipeline step

pipeline.add_model_step(housing_model_control)
name | orchestrationpipeline
created | 2024-04-17 16:12:26.901337+00:00
last_updated | 2024-04-17 16:12:26.901337+00:00
deployed | (none)
arch | None
accel | None
tags |
versions | cbf93219-d40f-44e5-9269-21af5749964d
steps |
published | False
#deploy the pipeline
pipeline.deploy()
name | orchestrationpipeline
created | 2024-04-17 16:12:26.901337+00:00
last_updated | 2024-04-17 16:12:29.335369+00:00
deployed | True
arch | x86
accel | none
tags |
versions | 4df71b1e-fcc6-4373-aa18-83ffb7fe28a8, cbf93219-d40f-44e5-9269-21af5749964d
steps | orchestrationmodel
published | False

Sample Inferences

We’ll perform some quick sample inferences using an Apache Arrow table as the input. Once that’s finished, we’ll undeploy the pipeline and return the resources to the Wallaroo instance.

# sample inferences

batch_inferences = pipeline.infer_from_file('./data/xtest-1k.arrow')

large_inference_result =  batch_inferences.to_pandas()
display(large_inference_result.head(20))
 | time | in.tensor | out.variable | anomaly.count
0 | 2024-04-17 16:12:53.124 | [4.0, 2.5, 2900.0, 5505.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2900.0, 0.0, 47.6063, -122.02, 2970.0, 5251.0, 12.0, 0.0, 0.0] | [718013.75] | 0
1 | 2024-04-17 16:12:53.124 | [2.0, 2.5, 2170.0, 6361.0, 1.0, 0.0, 2.0, 3.0, 8.0, 2170.0, 0.0, 47.7109, -122.017, 2310.0, 7419.0, 6.0, 0.0, 0.0] | [615094.56] | 0
2 | 2024-04-17 16:12:53.124 | [3.0, 2.5, 1300.0, 812.0, 2.0, 0.0, 0.0, 3.0, 8.0, 880.0, 420.0, 47.5893, -122.317, 1300.0, 824.0, 6.0, 0.0, 0.0] | [448627.72] | 0
3 | 2024-04-17 16:12:53.124 | [4.0, 2.5, 2500.0, 8540.0, 2.0, 0.0, 0.0, 3.0, 9.0, 2500.0, 0.0, 47.5759, -121.994, 2560.0, 8475.0, 24.0, 0.0, 0.0] | [758714.2] | 0
4 | 2024-04-17 16:12:53.124 | [3.0, 1.75, 2200.0, 11520.0, 1.0, 0.0, 0.0, 4.0, 7.0, 2200.0, 0.0, 47.7659, -122.341, 1690.0, 8038.0, 62.0, 0.0, 0.0] | [513264.7] | 0
5 | 2024-04-17 16:12:53.124 | [3.0, 2.0, 2140.0, 4923.0, 1.0, 0.0, 0.0, 4.0, 8.0, 1070.0, 1070.0, 47.6902, -122.339, 1470.0, 4923.0, 86.0, 0.0, 0.0] | [668288.0] | 0
6 | 2024-04-17 16:12:53.124 | [4.0, 3.5, 3590.0, 5334.0, 2.0, 0.0, 2.0, 3.0, 9.0, 3140.0, 450.0, 47.6763, -122.267, 2100.0, 6250.0, 9.0, 0.0, 0.0] | [1004846.5] | 0
7 | 2024-04-17 16:12:53.124 | [3.0, 2.0, 1280.0, 960.0, 2.0, 0.0, 0.0, 3.0, 9.0, 1040.0, 240.0, 47.602, -122.311, 1280.0, 1173.0, 0.0, 0.0, 0.0] | [684577.2] | 0
8 | 2024-04-17 16:12:53.124 | [4.0, 2.5, 2820.0, 15000.0, 2.0, 0.0, 0.0, 4.0, 9.0, 2820.0, 0.0, 47.7255, -122.101, 2440.0, 15000.0, 29.0, 0.0, 0.0] | [727898.1] | 0
9 | 2024-04-17 16:12:53.124 | [3.0, 2.25, 1790.0, 11393.0, 1.0, 0.0, 0.0, 3.0, 8.0, 1790.0, 0.0, 47.6297, -122.099, 2290.0, 11894.0, 36.0, 0.0, 0.0] | [559631.1] | 0
10 | 2024-04-17 16:12:53.124 | [3.0, 1.5, 1010.0, 7683.0, 1.5, 0.0, 0.0, 5.0, 7.0, 1010.0, 0.0, 47.72, -122.318, 1550.0, 7271.0, 61.0, 0.0, 0.0] | [340764.53] | 0
11 | 2024-04-17 16:12:53.124 | [3.0, 2.0, 1270.0, 1323.0, 3.0, 0.0, 0.0, 3.0, 8.0, 1270.0, 0.0, 47.6934, -122.342, 1330.0, 1323.0, 8.0, 0.0, 0.0] | [442168.06] | 0
12 | 2024-04-17 16:12:53.124 | [4.0, 1.75, 2070.0, 9120.0, 1.0, 0.0, 0.0, 4.0, 7.0, 1250.0, 820.0, 47.6045, -122.123, 1650.0, 8400.0, 57.0, 0.0, 0.0] | [630865.6] | 0
13 | 2024-04-17 16:12:53.124 | [4.0, 1.0, 1620.0, 4080.0, 1.5, 0.0, 0.0, 3.0, 7.0, 1620.0, 0.0, 47.6696, -122.324, 1760.0, 4080.0, 91.0, 0.0, 0.0] | [559631.1] | 0
14 | 2024-04-17 16:12:53.124 | [4.0, 3.25, 3990.0, 9786.0, 2.0, 0.0, 0.0, 3.0, 9.0, 3990.0, 0.0, 47.6784, -122.026, 3920.0, 8200.0, 10.0, 0.0, 0.0] | [909441.1] | 0
15 | 2024-04-17 16:12:53.124 | [4.0, 2.0, 1780.0, 19843.0, 1.0, 0.0, 0.0, 3.0, 7.0, 1780.0, 0.0, 47.4414, -122.154, 2210.0, 13500.0, 52.0, 0.0, 0.0] | [313096.0] | 0
16 | 2024-04-17 16:12:53.124 | [4.0, 2.5, 2130.0, 6003.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2130.0, 0.0, 47.4518, -122.12, 1940.0, 4529.0, 11.0, 0.0, 0.0] | [404040.8] | 0
17 | 2024-04-17 16:12:53.124 | [3.0, 1.75, 1660.0, 10440.0, 1.0, 0.0, 0.0, 3.0, 7.0, 1040.0, 620.0, 47.4448, -121.77, 1240.0, 10380.0, 36.0, 0.0, 0.0] | [292859.5] | 0
18 | 2024-04-17 16:12:53.124 | [3.0, 2.5, 2110.0, 4118.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2110.0, 0.0, 47.3878, -122.153, 2110.0, 4044.0, 25.0, 0.0, 0.0] | [338357.88] | 0
19 | 2024-04-17 16:12:53.124 | [4.0, 2.25, 2200.0, 11250.0, 1.5, 0.0, 0.0, 5.0, 7.0, 1300.0, 900.0, 47.6845, -122.201, 2320.0, 10814.0, 94.0, 0.0, 0.0] | [682284.6] | 0

Create Wallaroo Connection

Connections are created at the Wallaroo instance level, typically by an MLOps or DevOps engineer, then applied to a workspace.

For this section:

  1. We will create a sample connection that just has a URL to the same Arrow table file we used in the previous step.
  2. We’ll apply the data connection to the workspace above.
  3. For a quick demonstration, we’ll use the connection to retrieve the Arrow table file and use it for a quick sample inference.

Create Connection

Connections are created with the Wallaroo client command create_connection with the following parameters.

Parameter | Type | Description
name | string (Required) | The name of the connection. This must be unique - if submitting the name of an existing connection it will return an error.
type | string (Required) | The user defined type of connection.
details | Dict (Required) | User defined configuration details for the data connection. These can be {'username':'dataperson', 'password':'datapassword', 'port': 3339}, or {'token':'abcde123==', 'host':'example.com', 'port': 1234}, or other user defined combinations.

We’ll create the connection named houseprice_arrow_table, set it to the type HTTPFILE, and provide the details as 'host':'https://github.com/WallarooLabs/Wallaroo_Tutorials/raw/main/wallaroo-testing-tutorials/houseprice-saga/data/xtest-1k.arrow' - the location for our sample Arrow table inference input.

wl.create_connection(connection_name, 
                  "HTTPFILE", 
                  {'host':'https://github.com/WallarooLabs/Wallaroo_Tutorials/raw/main/wallaroo-testing-tutorials/houseprice-saga/data/xtest-1k.arrow'}
                  )
Field | Value
Name | houseprice_arrow_table
Connection Type | HTTPFILE
Details | *****
Created At | 2024-04-17T16:12:53.381200+00:00
Linked Workspaces | []

List Data Connections

The Wallaroo Client list_connections() method lists all connections for the Wallaroo instance.

wl.list_connections()
name | connection type | details | created at | linked workspaces
mitochondria_image_source | HTTP | ***** | 2024-04-16T17:51:37.014995+00:00 | []
external_inference_connection | HTTP | ***** | 2024-04-17T15:37:00.958462+00:00 | ['simpleorchestrationworkspace']
external_inference_connection_sample | HTTP | ***** | 2024-04-17T15:49:26.695606+00:00 | ['simpleorchestrationworkspace']
houseprice_arrow_table | HTTPFILE | ***** | 2024-04-17T16:12:53.381200+00:00 | []

Add Connection to Workspace

The method Workspace add_connection(connection_name) adds a Data Connection to a workspace, and takes the following parameters.

Parameter | Type | Description
name | string (Required) | The name of the Data Connection

We’ll add this connection to our sample workspace.

workspace.add_connection(connection_name)

Get Connection

Connections are retrieved by the Wallaroo Client get_connection(name) method.

connection = wl.get_connection(connection_name)

Connection Details

The Connection method details() retrieves the connection details as a dict.

display(connection.details())
{'host': 'https://github.com/WallarooLabs/Wallaroo_Tutorials/raw/main/wallaroo-testing-tutorials/houseprice-saga/data/xtest-1k.arrow'}

Using a Connection Example

For this example, we’ll use the connection to retrieve the Apache Arrow file it references, read it into an Apache Arrow table, then use that table for a sample inference.

# Deploy the pipeline 
pipeline.deploy()

# Retrieve the file
# set accept as apache arrow table
headers = {
    'Accept': 'application/vnd.apache.arrow.file'
}

response = requests.get(
                    connection.details()['host'], 
                    headers=headers
                )

# Arrow table is retrieved 
with pa.ipc.open_file(response.content) as reader:
    arrow_table = reader.read_all()

results = pipeline.infer(arrow_table)

result_table = results.to_pandas()
display(result_table.head(20))
 | time | in.tensor | out.variable | anomaly.count
0 | 2024-04-17 16:12:56.870 | [4.0, 2.5, 2900.0, 5505.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2900.0, 0.0, 47.6063, -122.02, 2970.0, 5251.0, 12.0, 0.0, 0.0] | [718013.75] | 0
1 | 2024-04-17 16:12:56.870 | [2.0, 2.5, 2170.0, 6361.0, 1.0, 0.0, 2.0, 3.0, 8.0, 2170.0, 0.0, 47.7109, -122.017, 2310.0, 7419.0, 6.0, 0.0, 0.0] | [615094.56] | 0
2 | 2024-04-17 16:12:56.870 | [3.0, 2.5, 1300.0, 812.0, 2.0, 0.0, 0.0, 3.0, 8.0, 880.0, 420.0, 47.5893, -122.317, 1300.0, 824.0, 6.0, 0.0, 0.0] | [448627.72] | 0
3 | 2024-04-17 16:12:56.870 | [4.0, 2.5, 2500.0, 8540.0, 2.0, 0.0, 0.0, 3.0, 9.0, 2500.0, 0.0, 47.5759, -121.994, 2560.0, 8475.0, 24.0, 0.0, 0.0] | [758714.2] | 0
4 | 2024-04-17 16:12:56.870 | [3.0, 1.75, 2200.0, 11520.0, 1.0, 0.0, 0.0, 4.0, 7.0, 2200.0, 0.0, 47.7659, -122.341, 1690.0, 8038.0, 62.0, 0.0, 0.0] | [513264.7] | 0
5 | 2024-04-17 16:12:56.870 | [3.0, 2.0, 2140.0, 4923.0, 1.0, 0.0, 0.0, 4.0, 8.0, 1070.0, 1070.0, 47.6902, -122.339, 1470.0, 4923.0, 86.0, 0.0, 0.0] | [668288.0] | 0
6 | 2024-04-17 16:12:56.870 | [4.0, 3.5, 3590.0, 5334.0, 2.0, 0.0, 2.0, 3.0, 9.0, 3140.0, 450.0, 47.6763, -122.267, 2100.0, 6250.0, 9.0, 0.0, 0.0] | [1004846.5] | 0
7 | 2024-04-17 16:12:56.870 | [3.0, 2.0, 1280.0, 960.0, 2.0, 0.0, 0.0, 3.0, 9.0, 1040.0, 240.0, 47.602, -122.311, 1280.0, 1173.0, 0.0, 0.0, 0.0] | [684577.2] | 0
8 | 2024-04-17 16:12:56.870 | [4.0, 2.5, 2820.0, 15000.0, 2.0, 0.0, 0.0, 4.0, 9.0, 2820.0, 0.0, 47.7255, -122.101, 2440.0, 15000.0, 29.0, 0.0, 0.0] | [727898.1] | 0
9 | 2024-04-17 16:12:56.870 | [3.0, 2.25, 1790.0, 11393.0, 1.0, 0.0, 0.0, 3.0, 8.0, 1790.0, 0.0, 47.6297, -122.099, 2290.0, 11894.0, 36.0, 0.0, 0.0] | [559631.1] | 0
10 | 2024-04-17 16:12:56.870 | [3.0, 1.5, 1010.0, 7683.0, 1.5, 0.0, 0.0, 5.0, 7.0, 1010.0, 0.0, 47.72, -122.318, 1550.0, 7271.0, 61.0, 0.0, 0.0] | [340764.53] | 0
11 | 2024-04-17 16:12:56.870 | [3.0, 2.0, 1270.0, 1323.0, 3.0, 0.0, 0.0, 3.0, 8.0, 1270.0, 0.0, 47.6934, -122.342, 1330.0, 1323.0, 8.0, 0.0, 0.0] | [442168.06] | 0
12 | 2024-04-17 16:12:56.870 | [4.0, 1.75, 2070.0, 9120.0, 1.0, 0.0, 0.0, 4.0, 7.0, 1250.0, 820.0, 47.6045, -122.123, 1650.0, 8400.0, 57.0, 0.0, 0.0] | [630865.6] | 0
13 | 2024-04-17 16:12:56.870 | [4.0, 1.0, 1620.0, 4080.0, 1.5, 0.0, 0.0, 3.0, 7.0, 1620.0, 0.0, 47.6696, -122.324, 1760.0, 4080.0, 91.0, 0.0, 0.0] | [559631.1] | 0
14 | 2024-04-17 16:12:56.870 | [4.0, 3.25, 3990.0, 9786.0, 2.0, 0.0, 0.0, 3.0, 9.0, 3990.0, 0.0, 47.6784, -122.026, 3920.0, 8200.0, 10.0, 0.0, 0.0] | [909441.1] | 0
15 | 2024-04-17 16:12:56.870 | [4.0, 2.0, 1780.0, 19843.0, 1.0, 0.0, 0.0, 3.0, 7.0, 1780.0, 0.0, 47.4414, -122.154, 2210.0, 13500.0, 52.0, 0.0, 0.0] | [313096.0] | 0
16 | 2024-04-17 16:12:56.870 | [4.0, 2.5, 2130.0, 6003.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2130.0, 0.0, 47.4518, -122.12, 1940.0, 4529.0, 11.0, 0.0, 0.0] | [404040.8] | 0
17 | 2024-04-17 16:12:56.870 | [3.0, 1.75, 1660.0, 10440.0, 1.0, 0.0, 0.0, 3.0, 7.0, 1040.0, 620.0, 47.4448, -121.77, 1240.0, 10380.0, 36.0, 0.0, 0.0] | [292859.5] | 0
18 | 2024-04-17 16:12:56.870 | [3.0, 2.5, 2110.0, 4118.0, 2.0, 0.0, 0.0, 3.0, 8.0, 2110.0, 0.0, 47.3878, -122.153, 2110.0, 4044.0, 25.0, 0.0, 0.0] | [338357.88] | 0
19 | 2024-04-17 16:12:56.870 | [4.0, 2.25, 2200.0, 11250.0, 1.5, 0.0, 0.0, 5.0, 7.0, 1300.0, 900.0, 47.6845, -122.201, 2320.0, 10814.0, 94.0, 0.0, 0.0] | [682284.6] | 0

Remove Connection from Workspace

The Workspace method remove_connection(connection_name) removes the connection from the workspace, but does not delete the connection from the Wallaroo instance. This method takes the following parameters.

Parameter | Type | Description
name | String (Required) | The name of the connection to be removed

The previous connection will be removed from the workspace, then the workspace connections displayed to verify it has been removed.

workspace.remove_connection(connection_name)

display(workspace.list_connections())

(no connections)

Delete Connection

The Connection method delete_connection() removes the connection from the Wallaroo instance, along with its attachments to any workspaces it was connected to.

connection.delete_connection()

wl.list_connections()
name | connection type | details | created at | linked workspaces
mitochondria_image_source | HTTP | ***** | 2024-04-16T17:51:37.014995+00:00 | []
external_inference_connection | HTTP | ***** | 2024-04-17T15:37:00.958462+00:00 | ['simpleorchestrationworkspace']
external_inference_connection_sample | HTTP | ***** | 2024-04-17T15:49:26.695606+00:00 | ['simpleorchestrationworkspace']

Orchestration Tutorial

The next series of examples will build on what we just did. So far we have:

  • Deployed a pipeline, performed sample inferences with a local Apache Arrow file, displayed the results, then undeployed the pipeline.
  • Deployed a pipeline, used a Wallaroo connection’s details to retrieve a remote Apache Arrow file, performed inferences and displayed the results, then undeployed the pipeline.

For the orchestration tutorial, we’ll do the same thing, only packaged into a separate Python script that we upload to the Wallaroo instance; we’ll then create a task from that orchestration and perform our sample inferences again.

Orchestration Requirements

Orchestrations are uploaded to the Wallaroo instance as a ZIP file with the following requirements:

  • The ZIP file should not contain any directories - only files at the top level.
Parameter | Type | Description
User Code | Python scripts as .py files (Required) | Python scripts for the orchestration to run. If the file main.py exists, that will be the entrypoint. Otherwise, if only one .py file exists, then that will be the entrypoint.
Python Library Requirements | requirements.txt file (Required) | A requirements.txt file in the pip requirements file format. This must be in the root of the ZIP file, and there can be only one requirements.txt file for the orchestration.
Other artifacts | | Other artifacts such as files, data, or code to support the orchestration.

Zip Instructions

In a terminal with the zip command, assemble artifacts as above and then create the archive. The zip command is included by default with the Wallaroo JupyterHub service.

zip commands take the following format, with {zipfilename}.zip as the zip file to save the artifacts to, and each file thereafter as the files to add to the archive.

zip {zipfilename}.zip file1 file2 file3 ...

For example, the following command will add the files main.py and requirements.txt into the file hello.zip.

$ zip hello.zip main.py requirements.txt 
  adding: main.py (deflated 47%)
  adding: requirements.txt (deflated 52%)

Orchestration Recommendations

The following recommendations will make using Wallaroo orchestrations easier:

  • The version of Python used should match the same version as in the Wallaroo JupyterHub service.
  • The version of the Wallaroo SDK should match that of the Wallaroo instance. For a 2023.2 Wallaroo instance, use Wallaroo SDK version 2023.2.
  • Specify the version of pip dependencies.
  • The wallaroo.Client constructor auth_type argument is ignored. Using wallaroo.Client() is sufficient.
  • The following methods will assist with orchestrations:
    • wallaroo.in_task() : Returns True if the code is running within an Orchestrator task.
    • wallaroo.task_args(): Returns a Dict of invocation-specific arguments passed to the run_ calls.
  • Use print commands so outputs are saved to the task’s log files.
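
As an example, a minimal sketch of how these helpers might be used at the top of an orchestration script; the argument names and defaults here are illustrative.

import wallaroo

wl = wallaroo.Client()

if wallaroo.in_task():
    # Running inside an Orchestrator task: read the invocation arguments.
    arguments = wl.task_args()
    pipeline_name = arguments.get("pipeline_name", "orchestrationpipeline")
else:
    # Running interactively (e.g. in JupyterHub): fall back to a default.
    pipeline_name = "orchestrationpipeline"

# print output is written to the task's log files.
print(f"Target pipeline: {pipeline_name}")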

Example requirements.txt file

dbt-bigquery==1.4.3
dbt-core==1.4.5
dbt-extractor==0.4.1
dbt-postgres==1.4.5
google-api-core==2.8.2
google-auth==2.11.0
google-auth-oauthlib==0.4.6
google-cloud-bigquery==3.3.2
google-cloud-bigquery-storage==2.15.0
google-cloud-core==2.3.2
google-cloud-storage==2.5.0
google-crc32c==1.5.0
google-pasta==0.2.0
google-resumable-media==2.3.3
googleapis-common-protos==1.56.4

Sample Orchestrator

The following orchestrator artifacts are in the directory ./remote_inference, which includes the file main.py with the following code:

import wallaroo
from wallaroo.object import EntityNotFoundError
import pandas as pd
import pyarrow as pa
import requests

wl = wallaroo.Client()

# Setting variables for later steps

# get the arguments
arguments = wl.task_args()

if "workspace_name" in arguments:
    workspace_name = arguments['workspace_name']
else:
    workspace_name="orchestrationworkspace"

if "pipeline_name" in arguments:
    pipeline_name = arguments['pipeline_name']
else:
    pipeline_name="orchestrationpipeline"

if "connection_name" in arguments:
    connection_name = arguments['connection_name']
else:
    connection_name = "houseprice_arrow_table"

print(f"Getting the workspace {workspace_name}")
workspace = wl.get_workspace(name=workspace_name, create_if_not_exist=True)
wl.set_current_workspace(workspace)

print(f"Getting the pipeline {pipeline_name}")
pipeline = wl.build_pipeline(pipeline_name)
pipeline.deploy()
# Get the connection - assuming it will be the only one

inference_source_connection = wl.get_connection(name=connection_name)

print(f"Getting arrow table file")
# Retrieve the file
# set accept as apache arrow table
headers = {
    'Accept': 'application/vnd.apache.arrow.file'
}

response = requests.get(
                    inference_source_connection.details()['host'], 
                    headers=headers
                )

# Arrow table is retrieved 
with pa.ipc.open_file(response.content) as reader:
    arrow_table = reader.read_all()

print("Inference time.  Displaying results after.")
# Perform the inference
result = pipeline.infer(arrow_table)
print(result)

pipeline.undeploy()

This is saved to the file ./remote_inference/remote_inference.zip.

Preparing the Wallaroo Instance

To prepare the Wallaroo instance, we’ll once again create the Wallaroo connection houseprice_arrow_table and apply it to the workspace.

wl.create_connection(connection_name, 
                  "HTTPFILE", 
                  {'host':'https://github.com/WallarooLabs/Wallaroo_Tutorials/raw/main/wallaroo-testing-tutorials/houseprice-saga/data/xtest-1k.arrow'}
                  )

workspace.add_connection(connection_name)

Upload the Orchestration

Orchestrations are uploaded with the Wallaroo client upload_orchestration(path) method with the following parameters.

ParameterTypeDescription
pathstring (Required)The path to the ZIP file to be uploaded.

Once uploaded, the orchestration will be packaged for deployment and any requirements will be downloaded and installed.

For this example, the orchestration ./remote_inference/remote_inference.zip will be uploaded and saved to the variable orchestration.

orchestration = wl.upload_orchestration(name="comprehensive sample", 
                                        path="./remote_inference/remote_inference.zip")

Orchestration Status

The Orchestration method status() displays the current status of the uploaded orchestration.

Status | Description
pending_packaging | The orchestration is uploaded, but packaging hasn’t started yet.
packaging | The orchestration is being packaged for use with the Wallaroo instance.
ready | The orchestration is ready for use.

For this example, the status of the orchestration will be displayed then looped until it has reached status ready.

import time

while orchestration.status() != 'ready':
    print(orchestration.status())
    time.sleep(5)
pending_packaging
packaging
packaging
packaging
packaging
packaging
packaging
packaging
packaging
packaging
packaging

List Orchestrations

Orchestrations are listed with the Wallaroo Client list_orchestrations() which returns a list of available orchestrations.

wl.list_orchestrations()
id | name | status | filename | sha | created at | updated at
8879d2f2-61a1-4564-a1a4-f90304ff69f4 | comprehensive sample | ready | remote_inference.zip | a3cecf...ddff2e | 2024-17-Apr 16:14:41 | 2024-17-Apr 16:15:43

Task Management Tutorial

Once an Orchestration has the status ready, it can be run as a task. Tasks have the following run options.

Type | SDK Call | How triggered | Purpose
Once | orchestration.run_once(name, json_args, timeout) | Task runs once and exits. | Single batch, experimentation.
Scheduled | orchestration.run_scheduled() | User provides the schedule. Task runs and exits whenever the schedule dictates. | Recurrent batch ETL.

Task Run Once

Tasks are generated and run once with the Orchestration run_once(name, json_args, timeout) method. Any arguments for the orchestration are passed in as a Dict. If there are no arguments, then an empty Dict {} is passed.

For our example, we will pass the workspace, pipeline, and connection into our task.

# Example: run once

import datetime
task_start = datetime.datetime.now()

task = orchestration.run_once(name="house price run once 2", json_args={"workspace_name": workspace_name, 
                                                                           "pipeline_name":pipeline_name,
                                                                           "connection_name": connection_name
                                                                           }
                            )
task
Field | Value
ID | 2db258ba-265d-4728-98ef-2cf8982ffe47
Name | house price run once 2
Last Run Status | unknown
Type | Temporary Run
Active | True
Schedule | -
Created At | 2024-17-Apr 16:15:45
Updated At | 2024-17-Apr 16:15:45

List Tasks

The list of tasks in the Wallaroo instance is retrieved through the Wallaroo Client list_tasks() method that accepts the following parameters.

Parameter | Type | Description
killed | Boolean (Optional, default: False) | Returns tasks depending on whether they have been issued the kill command. False returns all tasks whether killed or not. True only returns killed tasks.

This returns a list of tasks with the following fields, in reverse chronological order by updated at.

Parameter | Type | Description
id | string | The UUID identifier for the task.
last run status | string | The last reported status of the task. Values are:
  • unknown: The task has not been started or is being prepared.
  • ready: The task is scheduled to execute.
  • running: The task has started.
  • failure: The task failed.
  • success: The task completed.
type | string | The type of the task. Values are:
  • Temporary Run: The task runs once then stops.
  • Scheduled Run: The task repeats on a cron-like schedule.
  • Service Run: The task runs as a service and executes when its service port is activated.
active | Boolean | True: The task is scheduled or running. False: The task has completed or has been issued the kill command.
schedule | string | The cron style schedule for the task. If the task is not a scheduled one, then the schedule will be -.
created at | DateTime | The date and time the task was started.
updated at | DateTime | The date and time the task was updated.
wl.list_tasks()
id | name | last run status | type | active | schedule | created at | updated at
2db258ba-265d-4728-98ef-2cf8982ffe47 | house price run once 2 | success | Temporary Run | True | - | 2024-17-Apr 16:15:45 | 2024-17-Apr 16:15:51

Task Status

The status of the task is returned with the Task status() method. Tasks can have the following statuses.

  • pending: The task has not been started or is being prepared.
  • started: The task has started to execute.
while task.status() != "started":
    display(task.status())
    time.sleep(5)

Task Last Runs History

The history of a task, where each execution of the task is known as a task run, is retrieved with the Task last_runs() method, which takes the following arguments.

Parameter | Type | Description
status | String (Optional, default: all) | Filters the task history by status. If all, returns all statuses. Status values are:
  • running: The task has started.
  • failure: The task failed.
  • success: The task completed.
limit | Integer (Optional) | Limits the number of task runs returned.

This returns the following in reverse chronological order by updated at.

Parameter | Type | Description
task id | string | Task id in UUID format.
pod id | string | Pod id in UUID format.
status | string | Status of the task run. Status values are:
  • running: The task has started.
  • failure: The task failed.
  • success: The task completed.
created at | DateTime | Date and time the task run was created.
updated at | DateTime | Date and time the task run was updated.
task.last_runs()
task id | pod id | status | created at | updated at
2db258ba-265d-4728-98ef-2cf8982ffe47 | 262ca816-42a6-45ab-9e91-6c00ce32b294 | success | 2024-17-Apr 16:15:47 | 2024-17-Apr 16:15:47
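
The optional parameters narrow the history; for example, a sketch that retrieves at most one failed task run (an empty list if none exist):

# Retrieve at most one task run with status "failure".
failed_runs = task.last_runs(status="failure", limit=1)
display(failed_runs)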

Task Run Logs

The output of a task is displayed with the Task Run logs() method that takes the following parameters.

Parameter | Type | Description
limit | Integer (Optional) | Limits the lines returned from the task run log. The limit parameter is based on the log tail - starting from the last line of the log file, then working up until the limit of lines is reached. This is useful for viewing final outputs, exceptions, etc.

The Task Run logs() returns the log entries as a string list, with each entry as an item in the list.

  • IMPORTANT NOTE: It may take around a minute for task run logs to be integrated into the Wallaroo log database.
# give time for the task to complete and the log files entered
time.sleep(60)
recent_run = task.last_runs()[0]
display(recent_run.logs())
2024-17-Apr 16:16:33 Getting the workspace orchestrationworkspace
2024-17-Apr 16:16:33 Getting the pipeline orchestrationpipeline
2024-17-Apr 16:16:33 Getting arrow table file
2024-17-Apr 16:16:33 Inference time.  Displaying results after.
2024-17-Apr 16:16:33 pyarrow.Table
2024-17-Apr 16:16:33 time: timestamp[ms]
2024-17-Apr 16:16:33 in.tensor: list<item: float> not null
2024-17-Apr 16:16:33   child 0, item: float
2024-17-Apr 16:16:33 out.variable: list<inner: float> not null
2024-17-Apr 16:16:33   child 0, inner: float not null
2024-17-Apr 16:16:33 anomaly.count: uint32 not null
2024-17-Apr 16:16:33 ----
2024-17-Apr 16:16:33 time: [[2024-04-17 16:15:55.984,2024-04-17 16:15:55.984,2024-04-17 16:15:55.984,2024-04-17 16:15:55.984,2024-04-17 16:15:55.984,...,2024-04-17 16:15:55.984,2024-04-17 16:15:55.984,2024-04-17 16:15:55.984,2024-04-17 16:15:55.984,2024-04-17 16:15:55.984]]
2024-17-Apr 16:16:33 in.tensor: [[[4,2.5,2900,5505,2,...,2970,5251,12,0,0],[2,2.5,2170,6361,1,...,2310,7419,6,0,0],...,[3,1.75,2910,37461,1,...,2520,18295,47,0,0],[3,2,2005,7000,1,...,1750,4500,34,0,0]]]
2024-17-Apr 16:16:33 out.variable: [[[718013.75],[615094.56],...,[706823.56],[581003]]]
2024-17-Apr 16:16:33 anomaly.count: [[0,0,0,0,0,...,0,0,0,0,0]]
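
When only the end of a long log is needed, the limit parameter returns just the log tail; a short sketch:

# Display only the last 5 lines of the most recent task run's log.
recent_run = task.last_runs()[0]
display(recent_run.logs(limit=5))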

Failed Task Logs

We can create a task that fails and show it in the last_runs list, then retrieve the logs to display why it failed.

# Example: run once

import datetime
task_start = datetime.datetime.now()

taskfail = orchestration.run_once(name="house price run once 2", json_args={"workspace_name": "bob", 
                                                                           "pipeline_name":"does not exist",
                                                                           "connection_name": connection_name
                                                                           }
                            )

while taskfail.status() != "started":
    display(taskfail.status())
    time.sleep(5)
'pending'
# sleep to give time for the task to complete
time.sleep(60)
taskfail.last_runs()
task id | pod id | status | created at | updated at
f8c0e1ae-1a3d-4d1e-9aea-d950f9343bd6 | f1ada6c8-dd55-4562-9fc7-e6acdca7d0d6 | failure | 2024-17-Apr 16:18:15 | 2024-17-Apr 16:18:15
# sleep to give time for the logs to be available
time.sleep(60)
taskfaillogs = taskfail.last_runs()[0].logs()
display(taskfaillogs)
2024-17-Apr 16:18:22 Getting the workspace bob
2024-17-Apr 16:18:22 Traceback (most recent call last):
2024-17-Apr 16:18:22   File "/home/jovyan/main.py", line 30, in <module>
2024-17-Apr 16:18:22     workspace = wl.get_workspace(workspace_name)
2024-17-Apr 16:18:22   File "/home/jovyan/venv/lib/python3.9/site-packages/wallaroo/client.py", line 2390, in get_workspace
2024-17-Apr 16:18:22     return Workspace.get_workspace(
2024-17-Apr 16:18:22   File "/home/jovyan/venv/lib/python3.9/site-packages/wallaroo/workspace.py", line 231, in get_workspace
2024-17-Apr 16:18:22     raise Exception(
2024-17-Apr 16:18:22 Exception: Error: Workspace with name bob does not exist. If you would like to create one, send in the request with `create_if_not_exist` flag set to True.

Task Results

We can view the inferences from our logs and verify that new entries were added from our task. In our case, we’ll assume the task once started takes about 1 minute to run (deploy the pipeline, run the inference, undeploy the pipeline). We’ll add in a wait of 1 minute, then display the logs during the time period the task was running.

task_end = datetime.datetime.now()
display(task_end)

pipeline.logs(start_datetime = task_start, end_datetime = task_end)
datetime.datetime(2024, 4, 17, 10, 20, 20, 795804)

Scheduled Tasks

Scheduled tasks are run with the Orchestration run_scheduled method. We’ll set it up to run every 5 minutes, then check the results.

It is recommended that orchestrations that have pipeline deploy or undeploy commands be spaced no less than 5 minutes apart to prevent colliding with other tasks that use the same pipeline.
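
The schedule uses standard five-field cron syntax: minute, hour, day of month, month, and day of week. A few illustrative values:

schedule_every_5_minutes = "*/5 * * * *"  # at every 5th minute
schedule_hourly = "0 * * * *"             # at the top of every hour
schedule_daily_2am = "0 2 * * *"          # every day at 2:00 AM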

task_start = datetime.datetime.now()
schedule = "*/5 * * * *"
task_scheduled = orchestration.run_scheduled(name="schedule example", 
                                             timeout=600, 
                                             schedule=schedule, 
                                             json_args={"workspace_name": workspace_name, 
                                                        "pipeline_name": pipeline_name,
                                                        "connection_name": connection_name
                                            })
while task_scheduled.status() != "started":
    display(task_scheduled.status())
    time.sleep(5)
task_scheduled
'started'
Field | Value
ID | 9e022d46-1998-4614-8912-953798b9893a
Name | schedule example
Last Run Status | unknown
Type | Scheduled Run
Active | True
Schedule | */5 * * * *
Created At | 2024-17-Apr 16:20:21
Updated At | 2024-17-Apr 16:20:21
# allow time for the scheduled task to run (every 5 minutes) and complete
time.sleep(420)
recent_run = task_scheduled.last_runs()[0]
display(recent_run.logs())
2024-17-Apr 16:25:59 Getting the workspace orchestrationworkspace
2024-17-Apr 16:25:59 Getting the pipeline orchestrationpipeline
2024-17-Apr 16:25:59 Getting arrow table file
2024-17-Apr 16:25:59 Inference time.  Displaying results after.
2024-17-Apr 16:25:59 pyarrow.Table
2024-17-Apr 16:25:59 time: timestamp[ms]
2024-17-Apr 16:25:59 in.tensor: list<item: float> not null
2024-17-Apr 16:25:59   child 0, item: float
2024-17-Apr 16:25:59 out.variable: list<inner: float> not null
2024-17-Apr 16:25:59   child 0, inner: float not null
2024-17-Apr 16:25:59 anomaly.count: uint32 not null
2024-17-Apr 16:25:59 ----
2024-17-Apr 16:25:59 time: [[2024-04-17 16:25:22.405,2024-04-17 16:25:22.405,2024-04-17 16:25:22.405,2024-04-17 16:25:22.405,2024-04-17 16:25:22.405,...,2024-04-17 16:25:22.405,2024-04-17 16:25:22.405,2024-04-17 16:25:22.405,2024-04-17 16:25:22.405,2024-04-17 16:25:22.405]]
2024-17-Apr 16:25:59 in.tensor: [[[4,2.5,2900,5505,2,...,2970,5251,12,0,0],[2,2.5,2170,6361,1,...,2310,7419,6,0,0],...,[3,1.75,2910,37461,1,...,2520,18295,47,0,0],[3,2,2005,7000,1,...,1750,4500,34,0,0]]]
2024-17-Apr 16:25:59 out.variable: [[[718013.75],[615094.56],...,[706823.56],[581003]]]
2024-17-Apr 16:25:59 anomaly.count: [[0,0,0,0,0,...,0,0,0,0,0]]

Kill a Task

Killing a task removes its schedule or removes it from service. Tasks are killed with the Task kill() method, which returns a message with the status of the kill procedure.

If necessary, all tasks can be killed through the following script.

  • IMPORTANT NOTE: This command will kill all running tasks - scheduled or otherwise. Only use this if required.
# Kill all tasks
for t in wl.list_tasks(): t.kill()

When listed with the Wallaroo client list_tasks(killed=True) method, the field active displays False for tasks that have been killed, and True for tasks that have either completed running or are still scheduled to run.

task_scheduled.kill()

<ArbexStatus.PENDING_KILL: 'pending_kill'>

wl.list_tasks()
id | name | last run status | type | active | schedule | created at | updated at
f8c0e1ae-1a3d-4d1e-9aea-d950f9343bd6 | house price run once 2 | failure | Temporary Run | True | - | 2024-17-Apr 16:18:13 | 2024-17-Apr 16:18:18
2db258ba-265d-4728-98ef-2cf8982ffe47 | house price run once 2 | success | Temporary Run | True | - | 2024-17-Apr 16:15:45 | 2024-17-Apr 16:15:51
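
To verify which tasks were killed, pass the killed parameter described above:

# Display only tasks that have been issued the kill command.
wl.list_tasks(killed=True)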

Cleaning Up

With the tutorial complete we will undeploy the pipeline and ensure the resources are returned back to the Wallaroo instance.

pipeline.undeploy()
name | orchestrationpipeline
created | 2024-04-17 16:12:26.901337+00:00
last_updated | 2024-04-17 16:12:54.533539+00:00
deployed | False
arch | x86
accel | none
tags |
versions | a15ebfce-d271-44ef-ba5b-25ddff8f45d8, 4df71b1e-fcc6-4373-aa18-83ffb7fe28a8, cbf93219-d40f-44e5-9269-21af5749964d
steps | orchestrationmodel
published | False