A/B Tests and Shadow Deployment
Wallaroo provides model optimization through pipeline testing steps. These steps give organizations a mechanism to compare multiple ML models against each other on performance, accuracy, or other factors. Once a champion model is selected, a pipeline step can be “hot swapped” to replace an existing model in a pipeline with the updated one without undeploying the pipeline. This allows production model deployments to be updated without pausing existing processes.
Once a pipeline has been created, or during its creation process, a pipeline step can be added. The pipeline step refers to the model that will perform an inference on the data submitted to it. Each time a step is added, it is added to the pipeline’s models array.
Most steps use a single ML model. Pipeline testing steps allow multiple models to be applied to the same step so their results can be compared simultaneously.
A/B Testing
A/B testing is a method for testing competing ML models against performance, accuracy, or other useful benchmarks. Different models are added to the same pipeline step as follows:
- Control or Champion model: The model currently used for inferences.
- Challenger model(s): The model or set of models compared to the champion model.
A/B testing splits the inference requests between the champion model and one or more challengers through the add_random_split method. This method distributes the inferences submitted to the step across the models according to randomly weighted selection.
Each model receives inputs that are approximately proportional to the weight it is assigned. For example, with two models having weights 1 and 1, each will receive roughly equal amounts of inference inputs. If the weights were changed to 1 and 2, the models would receive roughly 33% and 66% respectively instead.
When choosing the model to use, a random number between 0.0 and 1.0 is generated. The weighted inputs are mapped to that range, and the random number is then used to select the model. For example, in the two-model equal-weight case, a random key of 0.4 would route to the first model, and 0.6 would route to the second.
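The following is an illustrative sketch of this weighted selection logic in plain Python. It is not the Wallaroo implementation; the pick_model function and the model names are hypothetical and only show how cumulative weights map onto the 0.0–1.0 range.
import random

def pick_model(weighted_models):
    # weighted_models: list of (weight, model_name) tuples.
    # Map the cumulative weights onto the 0.0-1.0 range, then use a random
    # key to choose which model handles the request.
    total = sum(weight for weight, _ in weighted_models)
    key = random.random()
    cumulative = 0.0
    for weight, model_name in weighted_models:
        cumulative += weight / total
        if key < cumulative:
            return model_name
    return weighted_models[-1][1]  # guard against floating point rounding

# With equal weights, a key of 0.4 selects the first model and 0.6 the second.
print(pick_model([(1, "aloha-control"), (1, "aloha-challenger")]))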
Add Random Split
A random split step can be added to a pipeline through the add_random_split method.
The following parameters are used when adding a random split step to a pipeline:
Parameter | Type | Description |
---|---|---|
champion_weight | Float (Required) | The weight for the champion model. |
champion_model | Wallaroo.Model (Required) | The uploaded champion model. |
challenger_weight | Float (Required) | The weight of the challenger model. |
challenger_model | Wallaroo.Model (Required) | The uploaded challenger model. |
hash_key | String (Optional) | A key used instead of a random number for model selection. This must be between 0.0 and 1.0. |
Note that multiple challenger models with different weights can be added to the random split step.
add_random_split([(champion_weight, champion_model), (challenger_weight, challenger_model), (challenger_weight2, challenger_model2),...], hash_key)
In this example, a pipeline will be built with a 2:1 weighted ratio between the champion and a single challenger model.
pipeline = (wl.build_pipeline("randomsplitpipeline-demo")
.add_random_split([(2, control), (1, challenger)]))
The results for a series of single inferences are displayed to show the random weighted split between the two models in action:
results = []
results.append(experiment_pipeline.infer_from_file("data/data-1.json"))
results.append(experiment_pipeline.infer_from_file("data/data-1.json"))
results.append(experiment_pipeline.infer_from_file("data/data-1.json"))
results.append(experiment_pipeline.infer_from_file("data/data-1.json"))
results.append(experiment_pipeline.infer_from_file("data/data-1.json"))
for result in results:
    print(result[0].model())
    print(result[0].data())
('aloha-control', 'ff81f634-8fb4-4a62-b873-93b02eb86ab4')
[array([[0.00151959]]), array([[0.98291481]]), array([[0.01209957]]), array([[4.75912966e-05]]), array([[2.02893716e-05]]), array([[0.00031977]]), array([[0.01102928]]), array([[0.99756402]]), array([[0.01034162]]), array([[0.00803896]]), array([[0.01615506]]), array([[0.00623623]]), array([[0.00099858]]), array([[1.79337805e-26]]), array([[1.38899512e-27]])]
('aloha-control', 'ff81f634-8fb4-4a62-b873-93b02eb86ab4')
[array([[0.00151959]]), array([[0.98291481]]), array([[0.01209957]]), array([[4.75912966e-05]]), array([[2.02893716e-05]]), array([[0.00031977]]), array([[0.01102928]]), array([[0.99756402]]), array([[0.01034162]]), array([[0.00803896]]), array([[0.01615506]]), array([[0.00623623]]), array([[0.00099858]]), array([[1.79337805e-26]]), array([[1.38899512e-27]])]
('aloha-challenger', '87fdfe08-170e-4231-a0b9-543728d6fc57')
[array([[0.00151959]]), array([[0.98291481]]), array([[0.01209957]]), array([[4.75912966e-05]]), array([[2.02893716e-05]]), array([[0.00031977]]), array([[0.01102928]]), array([[0.99756402]]), array([[0.01034162]]), array([[0.00803896]]), array([[0.01615506]]), array([[0.00623623]]), array([[0.00099858]]), array([[1.79337805e-26]]), array([[1.38899512e-27]])]
('aloha-challenger', '87fdfe08-170e-4231-a0b9-543728d6fc57')
[array([[0.00151959]]), array([[0.98291481]]), array([[0.01209957]]), array([[4.75912966e-05]]), array([[2.02893716e-05]]), array([[0.00031977]]), array([[0.01102928]]), array([[0.99756402]]), array([[0.01034162]]), array([[0.00803896]]), array([[0.01615506]]), array([[0.00623623]]), array([[0.00099858]]), array([[1.79337805e-26]]), array([[1.38899512e-27]])]
('aloha-challenger', '87fdfe08-170e-4231-a0b9-543728d6fc57')
[array([[0.00151959]]), array([[0.98291481]]), array([[0.01209957]]), array([[4.75912966e-05]]), array([[2.02893716e-05]]), array([[0.00031977]]), array([[0.01102928]]), array([[0.99756402]]), array([[0.01034162]]), array([[0.00803896]]), array([[0.01615506]]), array([[0.00623623]]), array([[0.00099858]]), array([[1.79337805e-26]]), array([[1.38899512e-27]])]
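As a quick check of the split, the routed models can be tallied from the results above. This is a minimal sketch, assuming model() returns the (name, version) tuple shown in the output:
from collections import Counter

# Count how many of the five inferences were routed to each model.
counts = Counter(result[0].model()[0] for result in results)
print(counts)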
Replace With Random Split
If a pipeline already has steps as detailed in Add a Step to a Pipeline, a step can be replaced with a random split through the replace_with_random_split method.
The following parameters are used when replacing a pipeline step with a random split step:
Parameter | Type | Description |
---|---|---|
index | Integer (Required) | The pipeline step being replaced. |
champion_weight | Float (Required) | The weight for the champion model. |
champion_model | Wallaroo.Model (Required) | The uploaded champion model. |
challenger_weight | Float (Required) | The weight of the challenger model. |
challenger_model | Wallaroo.Model (Required) | The uploaded challenger model. |
hash_key | String (Optional) | A key used instead of a random number for model selection. This must be between 0.0 and 1.0. |
Note that one or more challenger models can be added to the random split step:
replace_with_random_split(index, [(champion_weight, champion_model), (challenger_weight, challenger_model), (challenger_weight2, challenger_model2),...], hash_key)
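For example, the following hedged sketch replaces the first pipeline step (index 0) with the same 2:1 split used earlier, reusing the control and challenger models from the add_random_split example:
# Replace step 0 with a weighted random split, then redeploy the pipeline.
pipeline.replace_with_random_split(0, [(2, control), (1, challenger)])
pipeline.deploy()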
A/B Testing Logs
A/B testing log entries contain the model used for each inference in the column out._model_split.
logs = experiment_pipeline.logs(limit=5)
display(logs.loc[:,['time', 'out._model_split', 'out.main']])
time | out._model_split | out.main | |
---|---|---|---|
0 | 2023-03-03 19:08:35.653 | [{"name":"aloha-control","version":"89389786-0c17-4214-938c-aa22dd28359f","sha":"fd998cd5e4964bbbb4f8d29d245a8ac67df81b62be767afbceb96a03d1a01520"}] | [0.9999754] |
1 | 2023-03-03 19:08:35.702 | [{"name":"aloha-challenger","version":"3acd3835-be72-42c4-bcae-84368f416998","sha":"223d26869d24976942f53ccb40b432e8b7c39f9ffcf1f719f3929d7595bceaf3"}] | [0.9999727] |
2 | 2023-03-03 19:08:35.753 | [{"name":"aloha-challenger","version":"3acd3835-be72-42c4-bcae-84368f416998","sha":"223d26869d24976942f53ccb40b432e8b7c39f9ffcf1f719f3929d7595bceaf3"}] | [0.6606688] |
3 | 2023-03-03 19:08:35.799 | [{"name":"aloha-control","version":"89389786-0c17-4214-938c-aa22dd28359f","sha":"fd998cd5e4964bbbb4f8d29d245a8ac67df81b62be767afbceb96a03d1a01520"}] | [0.9998954] |
4 | 2023-03-03 19:08:35.846 | [{"name":"aloha-control","version":"89389786-0c17-4214-938c-aa22dd28359f","sha":"fd998cd5e4964bbbb4f8d29d245a8ac67df81b62be767afbceb96a03d1a01520"}] | [0.99999803] |
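To summarize how the traffic was split, the model names can be tallied from the out._model_split column. This is a minimal sketch, not part of the Wallaroo SDK, and it assumes each entry holds the model metadata either as parsed objects or as a JSON string:
import json

def split_model_name(entry):
    # Accept either a JSON string or a list holding a dict or a JSON string.
    if isinstance(entry, str):
        entry = json.loads(entry)
    first = entry[0]
    if isinstance(first, str):
        first = json.loads(first)
    return first["name"]

# Fetch a larger sample of recent log entries and count inferences per model.
split_logs = experiment_pipeline.logs(limit=100)
print(split_logs["out._model_split"].apply(split_model_name).value_counts())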
Pipeline Shadow Deployments
Wallaroo provides a method of testing the same data against two different models or sets of models at the same time through shadow deployments, otherwise known as parallel deployments or A/B tests. This allows data to be submitted to a pipeline with inferences running on several different sets of models. Typically this is performed with a model that is known to provide accurate results - the champion - and a model or set of models being tested to see if it provides more accurate or faster responses depending on the criteria - the challenger(s). Multiple challengers can be tested against a single champion to determine which is “better” based on the organization’s criteria.
As described in the Wallaroo blog post The What, Why, and How of Model A/B Testing:
In data science, A/B tests can also be used to choose between two models in production, by measuring which model performs better in the real world. In this formulation, the control is often an existing model that is currently in production, sometimes called the champion. The treatment is a new model being considered to replace the old one. This new model is sometimes called the challenger….
Keep in mind that in machine learning, the terms experiments and trials also often refer to the process of finding a training configuration that works best for the problem at hand (this is sometimes called hyperparameter optimization).
When a shadow deployment is created, only the inference from the champion is returned in the InferenceResult object's data attribute, while the result data for the shadow deployments is stored in the InferenceResult object's shadow_data attribute.
Create Shadow Deployment
Create a parallel or shadow deployment for a pipeline with the pipeline.add_shadow_deploy(champion, challengers[]) method, where champion is a Wallaroo Model object and challengers[] is a list of one or more Wallaroo Model objects.
Each inference request sent to the pipeline is sent to all of the models. The prediction from the champion is returned by the pipeline, while the predictions from the challengers are not part of the standard output but are stored in the shadow_data attribute and in the logs for later comparison.
In this example, a shadow deployment is created with the champion versus two challenger models.
champion = wl.upload_model(champion_model_name, champion_model_file).configure()
model2 = wl.upload_model(shadow_model_01_name, shadow_model_01_file).configure()
model3 = wl.upload_model(shadow_model_02_name, shadow_model_02_file).configure()
pipeline.add_shadow_deploy(champion, [model2, model3])
pipeline.deploy()
name | cc-shadow |
created | 2022-08-04 20:06:55.102203+00:00 |
last_updated | 2022-08-04 20:37:28.785947+00:00 |
deployed | True |
tags | |
steps | ccfraud-lstm |
An alternate method is the pipeline.replace_with_shadow_deploy(index, champion, challengers[]) method, where index is the pipeline step to replace.
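For example, this hedged sketch replaces the first pipeline step (index 0) with the shadow deployment built above:
# Replace step 0 with the champion and two challengers, then redeploy.
pipeline.replace_with_shadow_deploy(0, champion, [model2, model3])
pipeline.deploy()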
Shadow Deploy Outputs
Model outputs are listed by column based on the model's output fields, prefixed with the term out. For the default model, the column is out.{variable_name}, while the shadow deployed models are in the format out_{model name}.variable, where {model name} is the name of the shadow deployed model.
sample_data_file = './smoke_test.df.json'
response = pipeline.infer_from_file(sample_data_file)
time | in.tensor | out.dense_1 | anomaly.count | out_ccfraudrf.variable | out_ccfraudxgb.variable | |
---|---|---|---|---|---|---|
0 | 2023-03-03 17:35:28.859 | [1.0678324729, 0.2177810266, -1.7115145262, 0.682285721, 1.0138553067, -0.4335000013, 0.7395859437, -0.2882839595, -0.447262688, 0.5146124988, 0.3791316964, 0.5190619748, -0.4904593222, 1.1656456469, -0.9776307444, -0.6322198963, -0.6891477694, 0.1783317857, 0.1397992467, -0.3554220649, 0.4394217877, 1.4588397512, -0.3886829615, 0.4353492889, 1.7420053483, -0.4434654615, -0.1515747891, -0.2668451725, -1.4549617756] | [0.0014974177] | 0 | [1.0] | [0.0005066991] |
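The champion and shadow predictions can then be compared side by side. This is a minimal sketch, assuming the DataFrame response and the column names shown above:
# Select just the champion and shadow model output columns for comparison.
shadow_columns = ["out.dense_1", "out_ccfraudrf.variable", "out_ccfraudxgb.variable"]
display(response.loc[:, shadow_columns])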
Retrieve Shadow Deployment Logs
Shadow deploy results are included in the output of the Pipeline.logs() method. The output columns use the same naming as above: for the default model this is out.dense_1 in this example, while the shadow deployed models are in the format out_{model name}.variable, where {model name} is the name of the shadow deployed model.
logs = pipeline.logs()
display(logs)
time | in.tensor | out.dense_1 | anomaly.count | out_ccfraudrf.variable | out_ccfraudxgb.variable | |
---|---|---|---|---|---|---|
0 | 2023-03-03 17:35:28.859 | [1.0678324729, 0.2177810266, -1.7115145262, 0.682285721, 1.0138553067, -0.4335000013, 0.7395859437, -0.2882839595, -0.447262688, 0.5146124988, 0.3791316964, 0.5190619748, -0.4904593222, 1.1656456469, -0.9776307444, -0.6322198963, -0.6891477694, 0.1783317857, 0.1397992467, -0.3554220649, 0.4394217877, 1.4588397512, -0.3886829615, 0.4353492889, 1.7420053483, -0.4434654615, -0.1515747891, -0.2668451725, -1.4549617756] | [0.0014974177] | 0 | [1.0] | [0.0005066991] |
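The same columns can be used to flag logged inferences where a shadow model disagrees with the champion. This is a minimal sketch, assuming the list-valued cells and column names shown above; the threshold value is arbitrary:
# Extract the first (and only) value from each list-valued cell, then keep
# rows where the challenger differs from the champion by more than the threshold.
threshold = 0.5
champion_scores = logs["out.dense_1"].apply(lambda v: v[0])
challenger_scores = logs["out_ccfraudxgb.variable"].apply(lambda v: v[0])
disagreements = logs[(champion_scores - challenger_scores).abs() > threshold]
display(disagreements)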
Remove a Pipeline Step
To remove a step from the pipeline, use the Pipeline remove_step(index) method, where index is the array index of the step in the pipeline's steps.
In the following example, the pipeline imdb_pipeline will have the step with the model smodel-o removed.
imdb_pipeline.status
<bound method Pipeline.status of {'name': 'imdb-pipeline', 'create_time': datetime.datetime(2022, 3, 30, 21, 21, 31, 127756, tzinfo=tzutc()), 'definition': "[{'ModelInference': {'models': [{'name': 'embedder-o', 'version': '1c16d21d-fe4c-4081-98bc-65fefa465f7d', 'sha': 'd083fd87fa84451904f71ab8b9adfa88580beb92ca77c046800f79780a20b7e4'}]}}, {'ModelInference': {'models': [{'name': 'smodel-o', 'version': '8d311ba3-c336-48d3-99cd-85d95baa6f19', 'sha': '3473ea8700fbf1a1a8bfb112554a0dde8aab36758030dcde94a9357a83fd5650'}]}}]"}>
imdb_pipeline.remove_step(1)
{'name': 'imdb-pipeline', 'create_time': datetime.datetime(2022, 3, 30, 21, 21, 31, 127756, tzinfo=tzutc()), 'definition': "[{'ModelInference': {'models': [{'name': 'embedder-o', 'version': '1c16d21d-fe4c-4081-98bc-65fefa465f7d', 'sha': 'd083fd87fa84451904f71ab8b9adfa88580beb92ca77c046800f79780a20b7e4'}]}}]"}
Clear All Pipeline Steps
The Pipeline clear() method removes all pipeline steps from a pipeline. Note that pipeline steps are not saved until the pipeline is deployed.
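For example, a minimal sketch of clearing and rebuilding a pipeline, reusing the shadow deploy models from the earlier example:
# Remove every step, add a fresh shadow deploy step, then redeploy.
pipeline.clear()
pipeline.add_shadow_deploy(champion, [model2, model3])
pipeline.deploy()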