Perform inference with a single model.
Replaces the step at the given index with a model step.
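The add-versus-replace semantics can be illustrated with a minimal in-memory sketch; the class and method names here are illustrative assumptions, not a confirmed SDK API:

```python
# Illustrative sketch of "append a step" vs. "replace the step at an index"
# semantics for a step pipeline. Names are assumptions, not a real SDK API.
class Pipeline:
    def __init__(self):
        self.steps = []

    def add_model_step(self, model):
        # Append a step that performs inference with a single model.
        self.steps.append(("model", model))
        return self

    def replace_with_model_step(self, index, model):
        # Replace the step at the given index with a model step.
        self.steps[index] = ("model", model)
        return self

p = Pipeline().add_model_step("model_a")
p.replace_with_model_step(0, "model_b")
print(p.steps)  # -> [('model', 'model_b')]
```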
Perform inference on the same input data for any number of models.
Replaces the step at the given index with a multi-model step.
Run audit logging on a specified slice of model outputs. The slice must be in Python-like format; slice forms such as start:end and :end are supported.
Replaces the step at the given index with an audit step.
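The slice handling can be sketched as a small helper that interprets a Python-like slice string against a list of outputs; the function name is illustrative, not a confirmed API:

```python
# Sketch of interpreting a Python-like slice string ("start:end", ":end",
# or "start:") against a list of model outputs. Helper name is illustrative.
def audit_slice(outputs, spec):
    start_s, _, end_s = spec.partition(":")
    start = int(start_s) if start_s else None
    end = int(end_s) if end_s else None
    return outputs[start:end]

print(audit_slice(["a", "b", "c", "d"], "1:3"))  # -> ['b', 'c']
print(audit_slice(["a", "b", "c", "d"], ":2"))   # -> ['a', 'b']
```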
Select only the model output with the given index from an array of model outputs.
Replaces the step at the given index with a select step.
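The selection itself is simple indexing; this tiny sketch (illustrative name, not a confirmed API) shows the behavior:

```python
# Sketch of a select step: keep only the output at the given index
# from an array of model outputs. Function name is illustrative.
def select_output(outputs, index):
    return outputs[index]

print(select_output(["out_a", "out_b", "out_c"], 1))  # -> out_b
```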
Split traffic based on the value at a given meta_key in the input data, routing to the appropriate model. If the resulting value is a key in options, the corresponding model is used; otherwise, the default model is used for inference.
Replaces the step at the given index with a key-split step.
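The routing rule amounts to a dictionary lookup with a fallback; this sketch uses illustrative names, not a confirmed API:

```python
# Sketch of key-split routing: look up the value at meta_key in the input
# data and pick the matching model from options, falling back to the
# default model. Names are illustrative, not a real SDK API.
def key_split(input_data, meta_key, options, default):
    value = input_data.get(meta_key)
    return options.get(value, default)

models = {"card": "fraud_model_cards", "wire": "fraud_model_wires"}
print(key_split({"card_type": "wire"}, "card_type", models, "fraud_model_default"))
# -> fraud_model_wires
```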
Routes inputs to a single model, randomly chosen from a list of weighted models.
Each model receives inputs in approximate proportion to its assigned weight. For example, with two models having weights 1 and 1, each will receive roughly equal amounts of inference inputs. If the weights were changed to 1 and 2, the models would instead receive roughly 33% and 67% of inputs, respectively.
When choosing a model, a random number between 0.0 and 1.0 is generated, the cumulative weights are mapped onto that range, and the random value then selects the model. For example, in the two-model equal-weight case, a random value of 0.4 would route to the first model and 0.6 to the second.
To support consistent assignment of an input to a model, a hash_key can be specified. The value at this key, when present in the input data, must be between 0.0 and 1.0 and will be used instead of a random number for model selection.
Replaces the step at the given index with a random-split step.
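The weighted selection described above can be sketched as follows; function and key names are illustrative assumptions, not a confirmed API:

```python
# Sketch of weighted random routing with an optional hash_key. Weights are
# normalized onto [0.0, 1.0); a value stored at hash_key in the input (when
# present) replaces the random draw, so the same input always routes to the
# same model. Names are illustrative, not a real SDK API.
import random

def random_split(input_data, weighted_models, hash_key=None):
    # weighted_models: list of (weight, model) pairs
    total = sum(w for w, _ in weighted_models)
    draw = input_data.get(hash_key) if hash_key else None
    if draw is None:
        draw = random.random()
    cumulative = 0.0
    for weight, model in weighted_models:
        cumulative += weight / total
        if draw < cumulative:
            return model
    return weighted_models[-1][1]

models = [(1, "model_a"), (1, "model_b")]
print(random_split({"session_hash": 0.4}, models, hash_key="session_hash"))  # -> model_a
print(random_split({"session_hash": 0.6}, models, hash_key="session_hash"))  # -> model_b
```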
Create a "shadow deployment" experiment pipeline. The
model and all
challengers are run for each input. The result data for
all models is logged, but the output of the
champion is the only
This is particularly useful for "burn-in" testing a new model with real-world data without displacing the currently proven model.
This is currently implemented as three steps: a multi-model step, an audit step, and a select step. To remove or replace this step, you need to remove or replace all three. You can remove steps using pipeline.remove_step.
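The three-step composition (multi-model, audit, select) can be sketched in miniature; the function name and logging mechanism here are illustrative assumptions:

```python
# Sketch of a shadow deployment as multi-model -> audit -> select. Each
# model is treated as a callable; names are illustrative, not a real SDK API.
def shadow_deploy(input_data, champion, challengers, log):
    models = [champion] + challengers
    outputs = [m(input_data) for m in models]  # multi-model step: run all
    log.extend(outputs)                        # audit step: log all results
    return outputs[0]                          # select step: champion only

log = []
result = shadow_deploy(3, lambda x: x + 1, [lambda x: x * 2], log)
print(result, log)  # -> 4 [4, 6]
```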
Adds a validation with the given name. All validations are run on all outputs, and all failures are logged.
Replaces the step at the given index with a validation step.
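The run-all-and-log-failures behavior can be sketched as below; the structure and names are illustrative, not a confirmed API:

```python
# Sketch of running named validations over all outputs and collecting every
# failure for logging. Names are illustrative, not a real SDK API.
def run_validations(outputs, validations):
    failures = []
    for name, check in validations.items():
        for output in outputs:
            if not check(output):
                failures.append((name, output))
    return failures

checks = {"non_negative": lambda o: o >= 0}
print(run_validations([1.5, -0.2, 3.0], checks))  # -> [('non_negative', -0.2)]
```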
Replaces the step at the given index with the specified alert step.
Remove all steps from the pipeline. This might be desirable if replacing models, for example.