Wallaroo Glossary
Definitions for Wallaroo terms and concepts.
Table of Contents
A
Term | Definition |
---|---|
Assays | An assay in Wallaroo is a series of built-in automated validation checks used for data analysis. Data scientists define and use assays to troubleshoot models in Wallaroo by generating drift detection reports and alerts on a given model’s data inputs or inference results. Wallaroo assays consist of the following: |
Assay Values
Attribute | Default | Description |
---|---|---|
Name | | The name of the assay. Assay names must be unique. |
Baseline Data | | Data that is known to be “typical” (typically distributed) and can be used to determine whether the distribution of new data has changed. |
Schedule | Every 24 hours at 1 AM | Configures the start time and frequency of the analysis. New assays run a new analysis every 24 hours, starting at the end of the baseline period. This schedule can be configured through the SDK. |
Group Results | Daily | How the results are grouped: Daily (Default), Every Minute, Weekly, or Monthly. |
Metric | PSI | Population Stability Index (PSI) is an entropy-based measure of the difference between distributions. Maximum Difference of Bins measures the maximum difference between the baseline and current distributions (as estimated using the bins). Sum of the difference of bins sums up the difference of occurrences in each bin between the baseline and current distributions. |
Threshold | 0.1 | Threshold for deciding whether the difference between distributions is similar (small) or different (large), as evaluated by the metric. The default of 0.1 is generally a good threshold when using PSI as the metric. |
Number of Bins | 5 | Number of bins used to partition the baseline data. By default, the binning scheme is percentile (quantile) based. The binning scheme can be configured (see Bin Mode, below). Note that the total number of bins includes the set number plus the left_outlier and right_outlier bins, so the total is the set number + 2. |
Bin Mode | Quantile | Specify the Binning Scheme. Available options are: Quantile binning defines the bins using percentile ranges (each bin holds the same percentage of the baseline data). Equal binning defines the bins using equally spaced data value ranges, like a histogram. Custom allows users to set the range of values for each bin, with the Left Outlier always starting at Min (below the minimum values detected from the baseline) and the Right Outlier always ending at Max (above the maximum values detected from the baseline). |
Bin Weight | Equally Weighted | The weight applied to each bin. The bin weights can be either set to Equally Weighted (the default) where each bin is weighted equally, or Custom where the bin weights can be adjusted depending on which are considered more important for detecting model drift. |
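The PSI metric and quantile binning described in the table above can be sketched in plain NumPy. This is an illustrative computation under the table's defaults (5 quantile bins plus left_outlier and right_outlier bins, 0.1 threshold), not Wallaroo's internal implementation:

```python
import numpy as np

def psi(baseline, current, num_bins=5):
    """Population Stability Index between two samples, using quantile
    bins derived from the baseline plus left/right outlier bins
    (num_bins + 2 bins total, as described in the table above)."""
    # Quantile bin edges from the baseline; the infinite outer edges
    # form the left_outlier and right_outlier bins.
    edges = np.quantile(baseline, np.linspace(0, 1, num_bins + 1))
    edges = np.concatenate(([-np.inf], edges, [np.inf]))

    def proportions(x):
        counts, _ = np.histogram(x, bins=edges)
        p = counts / counts.sum()
        return np.clip(p, 1e-6, None)  # avoid log(0) for empty bins

    p, q = proportions(baseline), proportions(current)
    return float(np.sum((q - p) * np.log(q / p)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)
same = rng.normal(0.0, 1.0, 10_000)     # same distribution as baseline
shifted = rng.normal(1.0, 1.0, 10_000)  # mean shifted by one std dev

print(psi(baseline, same))     # small: below the 0.1 threshold
print(psi(baseline, shifted))  # large: drift, well above 0.1
```

With a matching distribution the PSI stays near zero; shifting the mean by one standard deviation pushes it well past the 0.1 default threshold, which is what an assay alert would flag.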
D
Term | Definition |
---|---|
Data Connectors And Connections | Data Connectors encapsulate details of a data source or sink scoped to a specific Wallaroo workspace. This allows customers to specify the external data stores the Wallaroo platform uses to ingest data for running models or stream data from inference results and logs to external data stores and other services. Data connectors are managed in Wallaroo through pipeline orchestration. |
E
Term | Definition |
---|---|
Engine Replicas | Engine replicas are instances of the Wallaroo inference engine dynamically created to allocate compute resources for running inferences on deployed models. |
L
Term | Definition |
---|---|
Logs | Wallaroo provides Pipeline Logs as part of its architecture. These are records of inference requests and their results. These logs include the input data, the output results, any shadow outputs, check failures, and other details. |
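As an illustration, pipeline log records shaped like the fields described above (input data, output results, check failures) can be filtered for failed validation checks. The dictionary keys below are assumptions for this sketch, not the documented Wallaroo log schema:

```python
# Hypothetical log records; field names are illustrative assumptions,
# not the exact Wallaroo pipeline log schema.
logs = [
    {"input": [1.2, 3.4], "output": [0.91], "check_failures": 0},
    {"input": [5.6, 7.8], "output": [0.12], "check_failures": 1},
    {"input": [0.1, 0.2], "output": [0.55], "check_failures": 0},
]

# Pull out the records whose validation checks failed for review.
failed = [record for record in logs if record["check_failures"] > 0]
print(len(failed))  # 1
```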
M
Term | Definition |
---|---|
MLOps APIs | MLOps APIs are a set of endpoints that allow external systems (CI/CD, ML platforms, etc.) to interact with the Wallaroo platform programmatically and perform the necessary model operations. MLOps APIs support user management, workspace management, model upload, pipeline deployment, model version management, pipeline version management, pipeline inferencing, model serving, generating inference logs, and generating model monitoring assays. |
ML Workload Orchestration | ML Workload orchestration allows data scientists and ML Engineers to automate and scale production ML workflows in Wallaroo to ensure a tight feedback loop and continuous tuning of models from training to production. Wallaroo platform users (data scientists or ML Engineers) have the ability to deploy, automate and scale recurring batch production ML workloads that can ingest data from predefined data sources to run inferences in Wallaroo, chain pipelines, and send inference results to predefined destinations to analyze model insights and assess business outcomes. |
Model | A model or Machine Learning (ML) model is an algorithm developed using historical datasets (also known as training data) to generate a specific set of insights. Trained models can operate on future datasets (non-training sets) and offer predictions (also known as inferences). Inferences help inform decisions based on the similarity between historical data and future data.
Model in Wallaroo refers to the resulting object from converting the model file artifact. For example, a model file (e.g., a .zip or .onnx file) would typically be produced by training a model outside of Wallaroo. Uploading the model file so that it can run in a given Wallaroo runtime (ONNX, TensorFlow, etc.) results in a Wallaroo model object. Model artifacts imported to Wallaroo may include other files related to a given model, such as preprocessing files, postprocessing files, training sets, notebooks, etc. |
Model Artifacts | Artifacts or model artifacts are specific files and elements used or generated during model training to develop, test, and track the algorithm from early experimentation to a fully trained model. Artifacts are intended to represent everything an AI team needs to run and track a model from development to production.
As models transition from the development stage to the production stage, it is important to keep track of model artifacts. This not only guarantees a smooth transition from development to production, but also enables the AI teams developing the models to continuously optimize and tune them by leveraging production insights. |
Model Serving | Model serving is the process of integrating an ML model with the operations that consume its predictions to make a decision. In Wallaroo, model serving is managed through ML pipelines, which expose an integration endpoint (also called an inference endpoint) for consuming a model's predictions/inferences. |
Model Version | Model version refers to the version of the model object in Wallaroo. In Wallaroo, a model version update happens when a new model file (artifact) is uploaded against the same model object name. |
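Model serving over an inference endpoint, as described above, typically means POSTing JSON to the pipeline's endpoint. The payload shape and URL below are illustrative assumptions for this sketch, not the documented Wallaroo request schema:

```python
import json

# Build an inference request payload; the "inputs" field name is an
# assumption for illustration, not the exact Wallaroo schema.
payload = json.dumps([{"inputs": [1.0, 2.5, 0.3]}])

# A client would POST this to the pipeline's inference endpoint,
# for example (hypothetical URL, not executed here):
# requests.post("https://<wallaroo-host>/pipelines/<name>/infer",
#               data=payload,
#               headers={"Content-Type": "application/json"})

decoded = json.loads(payload)
print(decoded[0]["inputs"])  # [1.0, 2.5, 0.3]
```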
P
Term | Definition |
---|---|
Deployment Pipeline | A pipeline, or Wallaroo Deployment Pipeline, is the mechanism for deploying, serving, monitoring, and updating ML models behind a single endpoint in a deployment, in a decentralized multi-region cloud, or on an edge device. A deployment pipeline contains all the artifacts required for a specific ML process (e.g., trained model, data pre-processing and post-processing scripts) and can be configured to run on different hardware architectures (x86, ARM, and GPU). |
Pipeline Step | A pipeline step is one stage of processing in an ML pipeline. Most commonly a step is a model, but steps can also be data processing and transformation algorithms that prepare incoming data for running inferences in the model or prepare outgoing data for consumption by other services. |
Product User Persona | Product user personas in Wallaroo align with specific job titles and responsibilities within a Wallaroo customer’s organization. Wallaroo supports the following personas:
A fourth persona, called the analyst, is being considered to complement the role of the ML/Data scientist by focusing on aligning model analytics to business analytics. |
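The chaining of pipeline steps described under Pipeline Step can be sketched as ordinary function composition: each step transforms the data before handing it to the next. The step names below are illustrative, not the Wallaroo SDK API:

```python
# Conceptual sketch of a pipeline as an ordered list of steps:
# pre-processing, then the model, then post-processing. The function
# names are illustrative placeholders, not Wallaroo API calls.
def preprocess(x):
    # Scale raw inputs before inference.
    return [v / 100.0 for v in x]

def model(x):
    # Stand-in for a trained model producing a prediction.
    return [sum(x)]

def postprocess(y):
    # Round scores for downstream consumers.
    return [round(v, 2) for v in y]

steps = [preprocess, model, postprocess]

def run_pipeline(data, steps):
    # Each step's output becomes the next step's input.
    for step in steps:
        data = step(data)
    return data

print(run_pipeline([25, 50, 125], steps))  # [2.0]
```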
U
Term | Definition |
---|---|
User Type | User types in Wallaroo are split into two categories: |
W
Term | Definition |
---|---|
Wallaroo Admin Console | The Wallaroo Admin Console is the interface for Wallaroo administrators to manage Wallaroo platform configurations in the cloud environment in which Wallaroo has been installed. The administration console also covers managing installations and version updates for the Wallaroo platform. |
Wallaroo Engine | The Wallaroo Engine is a distributed computing and orchestration framework written in Rust that ensures the necessary underlying computational resources are utilized efficiently to perform deployment, inference, management, and observability on ML models in Wallaroo. The Wallaroo Engine offers a set of runtime environments that allow running models developed in the most common ML frameworks (TensorFlow, scikit-learn, XGBoost, PyTorch, Hugging Face, etc.) with optimal performance and reduced infrastructure overhead. |
Workspace | Workspaces are used to segment groups of models and pipelines into separate environments. This allows different users to either manage as workspace owners or have access as workspace collaborators to each workspace, controlling the models and pipelines assigned to the workspace. For more information, see the Workspace Management Guide. |