Wallaroo Glossary

Definitions for Wallaroo terms and concepts.


Assays
An assay in Wallaroo is a series of built-in automated validation checks used for data analysis. Data scientists define and use assays to troubleshoot models in Wallaroo by generating drift detection reports and alerts on a given model’s data inputs or inference results. Wallaroo assays consist of the following:
  • baseline: A set of data within expected values.
  • window: An assay window sets the interval of time between assay runs, and the width - the time period of the data to analyze and compare against the baseline. Typically the window is set to 24-hour intervals with a width of 24 hours.
  • threshold: The amount of variance allowed between the established baseline and the analyzed data.

Assay Values

  • Name: The name of the assay. Assay names must be unique.
  • Baseline Data: Data that is known to be “typical” (typically distributed) and can be used to determine whether the distribution of new data has changed.
  • Schedule (default: every 24 hours at 1 AM): Configures the start time and frequency of when the new analysis will run. New assays are configured to run a new analysis every 24 hours starting at the end of the baseline period. This period can be configured through the SDK.
  • Group Results (default: Daily): How the results are grouped: Daily, Every Minute, Weekly, or Monthly.
  • Metric (default: PSI): The measure of the difference between distributions:
    ◦ Population Stability Index (PSI) is an entropy-based measure of the difference between distributions.
    ◦ Maximum Difference of Bins measures the maximum difference between the baseline and current distributions (as estimated using the bins).
    ◦ Sum of the Difference of Bins sums up the difference of occurrences in each bin between the baseline and current distributions.
  • Threshold (default: 0.1): Threshold for deciding whether the difference between distributions, as evaluated by the metric, is small (similar) or large (different). The default of 0.1 is generally a good threshold when using PSI as the metric.
  • Number of Bins (default: 5): Number of bins used to partition the baseline data. By default, the binning scheme is percentile (quantile) based. The binning scheme can be configured (see Bin Mode, below). Note that the total number of bins includes the set number plus the left_outlier and the right_outlier bins, so the total number of bins is the set number + 2.
  • Bin Mode (default: Quantile): Specifies the binning scheme:
    ◦ Quantile binning defines the bins using percentile ranges (each bin holds the same percentage of the baseline data).
    ◦ Equal binning defines the bins using equally spaced data value ranges, like a histogram.
    ◦ Custom allows users to set the range of values for each bin, with the Left Outlier always starting at Min (below the minimum value detected in the baseline) and the Right Outlier always ending at Max (above the maximum value detected in the baseline).
  • Bin Weight (default: Equally Weighted): The weight applied to each bin. Bin weights can be either Equally Weighted, where each bin is weighted equally, or Custom, where the bin weights can be adjusted depending on which bins are considered more important for detecting model drift.
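To make the PSI metric, quantile binning, and the 0.1 threshold concrete, here is a minimal, self-contained sketch of the calculation. It is illustrative only, not the Wallaroo implementation: function names are hypothetical, and the epsilon floor for empty bins is an assumption made so the logarithm is always defined.

```python
import math
import random

def quantile_edges(baseline, n_bins=5):
    # Interior bin edges at baseline percentiles (quantile binning), plus the
    # baseline min and max so values beyond them fall into the left_outlier
    # and right_outlier bins -- n_bins + 2 bins in total.
    s = sorted(baseline)
    interior = [s[int(len(s) * i / n_bins)] for i in range(1, n_bins)]
    return [s[0]] + interior + [s[-1]]

def psi(baseline, window, n_bins=5):
    """Population Stability Index between a baseline sample and a window
    sample, using quantile bins derived from the baseline."""
    edges = quantile_edges(baseline, n_bins)

    def proportions(data):
        counts = [0] * (len(edges) + 1)
        for x in data:
            counts[sum(1 for e in edges if x > e)] += 1
        # Floor each proportion so empty bins do not produce log(0).
        return [max(c / len(data), 1e-6) for c in counts]

    b = proportions(baseline)
    w = proportions(window)
    return sum((wi - bi) * math.log(wi / bi) for bi, wi in zip(b, w))

random.seed(42)
baseline = [random.gauss(0.0, 1.0) for _ in range(2000)]
similar  = [random.gauss(0.0, 1.0) for _ in range(2000)]  # same distribution
drifted  = [random.gauss(1.0, 1.0) for _ in range(2000)]  # mean shifted by 1

print(f"similar window PSI: {psi(baseline, similar):.4f}")  # below 0.1
print(f"drifted window PSI: {psi(baseline, drifted):.4f}")  # above 0.1
```

The shifted window lands well above the 0.1 default threshold while the same-distribution window stays below it, which is exactly the comparison an assay analysis performs against its baseline.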


Data Connectors and Connections
Data Connectors encapsulate details of a data source or sink scoped to a specific Wallaroo workspace. This allows customers to specify the external data stores the Wallaroo platform uses to ingest data for running models, or to stream data from inference results and logs to external data stores and other services. Data connectors are managed in Wallaroo through pipeline orchestration.


Engine Replicas
Engine replicas are instances of the Wallaroo inference engine created dynamically to allocate compute resources for running inferences on deployed models.


Logs
Wallaroo provides Pipeline Logs as part of its architecture. These are records of inference requests and their results. These logs include the input data, the output results, any shadow outputs, check failures, and other details.


MLOps APIs
MLOps APIs are a set of endpoints that allow external systems (CI/CD, ML platforms, etc.) to interact with the Wallaroo platform programmatically and perform the necessary model operations. MLOps APIs support user management, workspace management, model upload, pipeline deployment, model version management, pipeline version management, pipeline inferencing, model serving, inference log generation, and model monitoring assay generation.
ML Workload Orchestration
ML workload orchestration allows data scientists and ML engineers to automate and scale production ML workflows in Wallaroo, ensuring a tight feedback loop and continuous tuning of models from training to production. Wallaroo platform users (data scientists or ML engineers) can deploy, automate, and scale recurring batch production ML workloads that ingest data from predefined data sources, run inferences in Wallaroo, chain pipelines, and send inference results to predefined destinations for analyzing model insights and assessing business outcomes.
Model
A model, or Machine Learning (ML) model, is an algorithm developed using historical datasets (also known as training data) to generate a specific set of insights. Trained models can operate on future datasets (non-training sets) and offer predictions (also known as inferences). Inferences help inform decisions based on the similarity between historical data and future data.
Some examples of using a ML model are:
  • Approving credit card transaction based on fraud predictions.
  • Recommending a specific therapy to a patient based on diagnosis predictions.
  • Recommending a specific product to purchase in an e-commerce experience based on the consumer’s likelihood to be interested in it, their predicted shopping budget, and the projected revenue from this consumer.

A model in Wallaroo refers to the object that results from converting a model file artifact. For example, a model file (e.g., a .zip or .onnx file) is typically produced by training a model outside of Wallaroo. Uploading the model file so it can run in a given Wallaroo runtime (ONNX, TensorFlow, etc.) results in a Wallaroo model object. Model artifacts imported to Wallaroo may include other files related to a given model, such as preprocessing files, postprocessing files, training sets, notebooks, etc.
Model Artifacts
Artifacts, or model artifacts, are specific files and elements used or generated during model training to develop, test, and track the algorithm from early experimentation to a fully trained model. Artifacts are intended to represent everything an AI team would need to run and track a model from development to production.
Artifacts typically include:
  • Test datasets
  • Model worksheets/notebooks
  • Model test results
  • Model files generated from training a model (.onnx files, .zip files, etc.)
  • Pre-processing methods to prepare the data for consumption by the model.
  • Post-processing methods to format the data for use by external services.

As models transition from the development stage to the production stage, it is important to keep track of model artifacts. This not only guarantees a smooth transition from development to production, but also enables the AI teams developing the models to continuously optimize and tune their models using production insights.
Model Serving
Model serving is the process of integrating an ML model with operations that consume its predictions to make a decision. In Wallaroo, model serving is managed through ML pipelines, which expose an integration endpoint (also called an inference endpoint) to consume the predictions/inferences from a model.
Model Version
A model version refers to the version of the model object in Wallaroo. In Wallaroo, a model version update happens when a new model file (artifact) is uploaded against the same model object name.


Deployment Pipeline
A pipeline, or Wallaroo Deployment Pipeline, is the mechanism for deploying, serving, monitoring, and updating ML models behind a single endpoint in a centralized cloud, in a decentralized multi-region cloud, or on an edge device. A deployment pipeline contains all the artifacts required for a specific ML process (e.g., a trained model and data pre-processing and post-processing scripts) and can be configured to run on different types of hardware architectures (x86, ARM, and GPU).
Pipeline Step
A pipeline step is one stage of processing in an ML pipeline. Most commonly a step is a model, but steps can also include data processing and transformation algorithms that prepare incoming data for running inferences in the model, or format outgoing data to be consumed by other services.
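The pipeline-step idea above can be sketched as a chain of functions, where each step transforms the data and hands the result to the next. This is a conceptual illustration only; the step names and the plain-function representation are hypothetical, not the Wallaroo SDK API.

```python
from typing import Callable, List

# Each step takes the output of the previous step as its input.
Step = Callable[[list], list]

def preprocess(rows):
    # Data-processing step: scale raw feature values into [0, 1].
    return [[x / 100.0 for x in row] for row in rows]

def model(rows):
    # Stand-in "model" step: sums the features as the prediction.
    return [sum(row) for row in rows]

def postprocess(preds):
    # Post-processing step: format outputs for a downstream consumer.
    return [{"prediction": round(p, 3)} for p in preds]

def run_pipeline(steps: List[Step], data):
    # Run the steps in order, threading the data through each stage.
    for step in steps:
        data = step(data)
    return data

print(run_pipeline([preprocess, model, postprocess], [[50, 25], [10, 90]]))
# → [{'prediction': 0.75}, {'prediction': 1.0}]
```

In a real deployment pipeline the middle step would be an uploaded model version, while the pre- and post-processing steps play the same roles as the functions sketched here.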
Product User Persona
Product user personas in Wallaroo align with specific job titles and responsibilities within a Wallaroo customer’s organization. Wallaroo supports the following personas:
  • The Wallaroo platform admin
  • The data/ML scientists
  • The ML Engineer

A fourth persona, the analyst, is being considered to complement the role of the ML/data scientist by focusing on aligning model analytics to business analytics.


User Type
User types in Wallaroo are split into two categories:
  • Platform User (default): Users who work within their assigned workspaces and collaborate with other users as needed in the context of a workspace in Wallaroo.
  • Platform Admin: Users who set up the Wallaroo instance and have access to all workspaces in Wallaroo regardless of their membership.


Wallaroo Admin Console
The Wallaroo Admin Console is the interface for Wallaroo administrators to manage Wallaroo platform configurations in the cloud environment in which Wallaroo has been installed. The administration console also covers managing installations and version updates for the Wallaroo platform.
Wallaroo Engine
The Wallaroo Engine is a distributed computing and orchestration framework written in Rust that ensures the necessary underlying computational resources are utilized efficiently for deployment, inference, management, and observability of ML models in Wallaroo. The Wallaroo Engine offers a set of runtime environments that allow running models developed in the most common ML frameworks in the market (TensorFlow, scikit-learn, XGBoost, PyTorch, Hugging Face, etc.) with optimal performance and reduced infrastructure overhead.
Workspace
Workspaces are used to segment groups of models and pipelines into separate environments. This allows different users to either manage a workspace as workspace owners or access it as workspace collaborators, controlling the models and pipelines assigned to that workspace. For more information, see the Workspace Management Guide.