Demand Curve Quick Start Guide

The Demand Curve Quick Start Guide demonstrates how to use Wallaroo to chart a demand curve based on submitted data. This example uses a model plus preprocessing and postprocessing steps.

This tutorial and the assets can be downloaded as part of the Wallaroo Tutorials repository.

Demand Curve Pipeline Tutorial

This worksheet demonstrates a Wallaroo pipeline with data preprocessing, a model, and data postprocessing.

The model is a “demand curve” that predicts the expected number of units of a product that will be sold to a customer as a function of unit price and facts about the customer. Such models can be used for price optimization or sales volume forecasting. This is purely a “toy” demonstration, but is useful for detailing the process of working with models and pipelines.
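To make the idea concrete, a toy demand curve can be sketched as a constant-elasticity model. This is purely an illustration of the concept; the `toy_demand` function, its parameters, and its form are assumptions for this sketch, not the model shipped in this tutorial:

```python
def toy_demand(price, base_units=50.0, elasticity=1.5):
    """Illustrative constant-elasticity demand curve: the expected
    number of units sold falls as the unit price rises."""
    return base_units * price ** (-elasticity)

# Raising the price lowers the expected units sold.
print(toy_demand(1.0))  # 50.0
print(toy_demand(4.0))  # 6.25
```

A real demand curve model would also condition on facts about the customer, as the ONNX model in this tutorial does.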

Data preprocessing is required to create the features used by the model. Simple postprocessing prevents nonsensical estimates (e.g. negative units sold).
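The postprocessing logic can be sketched as a simple clamp. The actual step used in this tutorial is packaged in `./models/postprocess_dc_byop.zip`, so this `postprocess` function is only an assumed illustration of the idea:

```python
import numpy as np

def postprocess(predictions):
    """Zero out negative unit estimates, since selling a negative
    number of units is nonsensical."""
    return np.clip(np.asarray(predictions, dtype=float), 0.0, None)

# Negative estimates are clamped to 0.0; valid values pass through.
print(postprocess([6.68, -2.3, 0.0]))
```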

Prerequisites

  • An installed Wallaroo instance.
  • The following Python libraries installed:
    • wallaroo: The Wallaroo SDK. Included with the Wallaroo JupyterHub service by default.
    • pandas, numpy, and pyarrow for data handling.
import json
import wallaroo
from wallaroo.object import EntityNotFoundError
import pandas
import numpy
import conversion
import pyarrow as pa

# used to display dataframe information without truncating
from IPython.display import display
import pandas as pd
pd.set_option('display.max_colwidth', None)

# ignoring warnings for demonstration
import warnings
warnings.filterwarnings('ignore')

Connect to the Wallaroo Instance

The first step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.

This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.

If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). For more information on Wallaroo Client settings, see the Client Connection guide.

# Login through local Wallaroo instance

wl = wallaroo.Client()

Now that the Wallaroo client has been initialized, we can create the workspace and call it demandcurveworkspace, then set it as our current workspace. We’ll also create our pipeline so it’s ready when we add our models to it.

We’ll set some variables and methods to create our workspace, pipelines and models. Note that as of the July 2022 release of Wallaroo, workspace names must be unique. Pipelines with the same name will be created as a new version when built.

workspace_name = 'demandcurveworkspace'
pipeline_name = 'demandcurvepipeline'
model_name = 'demandcurvemodel'
model_file_name = './models/demand_curve_v1.onnx'
def get_workspace(name, client):
    # return the named workspace if it exists, otherwise create it
    workspace = None
    for ws in client.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = client.create_workspace(name)
    return workspace
workspace = get_workspace(workspace_name, wl)

wl.set_current_workspace(workspace)

demandcurve_pipeline = wl.build_pipeline(pipeline_name)
demandcurve_pipeline
name: demandcurvepipeline
created: 2024-03-13 19:22:55.459643+00:00
last_updated: 2024-03-13 19:23:42.945824+00:00
deployed: (none)
arch: None
tags:
versions: cf5f73c5-9725-49e7-a55c-530737653959, 5be24e45-8175-4b73-a091-d397d7bc5514
steps:
published: False

With our workspace established, we’ll upload three models:

  • ./models/preprocess_dc_byop.zip: A preprocess model step that formats the data into a tensor that the model can inference from.
  • ./models/demand_curve_v1.onnx: Our demand_curve model. We’ll store the upload configuration into demand_curve_model.
  • ./models/postprocess_dc_byop.zip: A postprocess model step that will zero out any negative values and return the output variable as “prediction”.

Note that the order we upload our models isn’t important - we’ll be establishing the actual process of moving data from one model to the next when we set up our pipeline.

demand_curve_model = wl.upload_model(model_name, 
                                     model_file_name, 
                                     framework=wallaroo.framework.Framework.ONNX).configure(tensor_fields=["tensor"])                   
input_schema = pa.schema([
    pa.field('Date', pa.string()),
    pa.field('cust_known', pa.bool_()),
    pa.field('StockCode', pa.int32()),
    pa.field('UnitPrice', pa.float32()),
    pa.field('UnitsSold', pa.int32())
])

output_schema = pa.schema([
    pa.field('tensor', pa.list_(pa.float64()))
])

preprocess_step = wl.upload_model('curve-preprocess', 
                                  './models/preprocess_dc_byop.zip', 
                                  framework=wallaroo.framework.Framework.CUSTOM, 
                                  input_schema=input_schema, 
                                  output_schema=output_schema)
Waiting for model loading - this will take up to 10.0min.
Model is pending loading to a container runtime..
Model is attempting loading to a container runtime........successful

Ready

input_schema = pa.schema([
    pa.field('variable', pa.list_(pa.float64()))
])

output_schema = pa.schema([
    pa.field('prediction', pa.list_(pa.float64()))
])

postprocess_step = wl.upload_model('curve-postprocess', 
                                   './models/postprocess_dc_byop.zip', 
                                   framework=wallaroo.framework.Framework.CUSTOM, 
                                   input_schema=input_schema, 
                                   output_schema=output_schema)
Waiting for model loading - this will take up to 10.0min.
Model is pending loading to a container runtime..
Model is attempting loading to a container runtime........successful

Ready

With our models uploaded, we’re going to create our own pipeline and give it three steps:

  • The preprocess step to put the data into a tensor format.
  • Then we apply the data to our demand_curve_model.
  • And finally, we prepare our data for output with the postprocess step.
# now make a pipeline
demandcurve_pipeline.undeploy()
demandcurve_pipeline.clear()
demandcurve_pipeline.add_model_step(preprocess_step)
demandcurve_pipeline.add_model_step(demand_curve_model)
demandcurve_pipeline.add_model_step(postprocess_step)
name: demandcurvepipeline
created: 2024-03-13 19:22:55.459643+00:00
last_updated: 2024-03-13 19:23:42.945824+00:00
deployed: (none)
arch: None
tags:
versions: cf5f73c5-9725-49e7-a55c-530737653959, 5be24e45-8175-4b73-a091-d397d7bc5514
steps:
published: False

And with that - let’s deploy our model pipeline. This usually takes about 45 seconds for the deployment to finish.

deploy_config = wallaroo.DeploymentConfigBuilder().replica_count(1).cpus(1).memory("1Gi").build()
demandcurve_pipeline.deploy(deployment_config=deploy_config)
Waiting for deployment - this will take up to 45s ............. ok
name: demandcurvepipeline
created: 2024-03-13 19:22:55.459643+00:00
last_updated: 2024-03-13 19:25:24.927731+00:00
deployed: True
arch: None
tags:
versions: c2baa959-f50c-468e-875e-b3d14972d400, cf5f73c5-9725-49e7-a55c-530737653959, 5be24e45-8175-4b73-a091-d397d7bc5514
steps: curve-preprocess
published: False

We can check the status of our pipeline to make sure everything was set up correctly:

demandcurve_pipeline.status()
{'status': 'Running',
 'details': [],
 'engines': [{'ip': '10.28.0.139',
   'name': 'engine-7b76dd59dd-kjvhm',
   'status': 'Running',
   'reason': None,
   'details': [],
   'pipeline_statuses': {'pipelines': [{'id': 'demandcurvepipeline',
      'status': 'Running'}]},
   'model_statuses': {'models': [{'name': 'curve-postprocess',
      'version': '9fd8b767-943e-477a-8a7f-f0424eb7a438',
      'sha': 'cf4cb335761e2bd5f238bd13f70e777f1fcc1eb31837ebea9cf3eb55c8faeb2f',
      'status': 'Running'},
     {'name': 'demandcurvemodel',
      'version': 'df1251a5-3fa3-4aff-9633-5a5577f40a3f',
      'sha': '2820b42c9e778ae259918315f25afc8685ecab9967bad0a3d241e6191b414a0d',
      'status': 'Running'},
     {'name': 'curve-preprocess',
      'version': 'e82604bb-5e6e-45a7-9147-6e7eb209b8ef',
      'sha': '22d6886115cbf667cfb7dbd394730625e09d0f8a8ff853848a7edebdb3c26f01',
      'status': 'Running'}]}}],
 'engine_lbs': [{'ip': '10.28.2.108',
   'name': 'engine-lb-d7cc8fc9c-2l2l2',
   'status': 'Running',
   'reason': None,
   'details': []}],
 'sidekicks': [{'ip': '10.28.0.138',
   'name': 'engine-sidekick-curve-preprocess-6-86cdc949f7-2khcl',
   'status': 'Running',
   'reason': None,
   'details': [],
   'statuses': '\n'},
  {'ip': '10.28.2.107',
   'name': 'engine-sidekick-curve-postprocess-7-55c99ff755-c5w25',
   'status': 'Running',
   'reason': None,
   'details': [],
   'statuses': '\n'}]}

Everything is ready. Let’s feed our pipeline some data. We have some information prepared in the daily_purchases.csv file. We’ll start with just one row to make sure that everything is working correctly.

# read in some purchase data
purchases = pandas.read_csv('daily_purchases.csv')

# start with a one-row data frame for testing
subsamp_raw = purchases.iloc[0:1,: ]
subsamp_raw
   Date        cust_known  StockCode  UnitPrice  UnitsSold
0  2010-12-01  False       21928      4.21       1
result = demandcurve_pipeline.infer(subsamp_raw)
display(result)
   time                     in.Date     in.StockCode  in.UnitPrice  in.UnitsSold  in.cust_known  out.prediction       anomaly.count
0  2024-03-13 19:25:39.152  2010-12-01  21928         4.21          1             False          [6.680255142999893]  0

We can see from the out.prediction field that the demand curve model predicts approximately 6.68 units sold for our sample data. We can isolate that value by selecting just the prediction output below.

display(result.loc[0, 'out.prediction'])
[6.680255142999893]

Bulk Inference

The initial test went perfectly. Now let’s send some more data through our pipeline. We’ll draw 10 random rows from the file, perform an inference on them, and then display the results.

ix = numpy.random.choice(purchases.shape[0], size=10, replace=False)
converted = conversion.pandas_to_dict(purchases.iloc[ix,: ])
test_df = pd.DataFrame(converted['query'], columns=converted['colnames'])
display(test_df)

output = demandcurve_pipeline.infer(test_df)
display(output)
   Date        cust_known  StockCode  UnitPrice  UnitsSold
0  2011-02-15  True        85099C     1.95       15
1  2011-04-14  True        23201      2.08       20
2  2011-04-20  True        85099B     2.08       110
3  2011-08-19  True        21931      2.08       20
4  2011-09-21  True        22386      2.08       23
5  2011-09-12  True        85099C     2.08       10
6  2011-11-30  False       21929      2.08       10
7  2011-01-21  True        85099C     1.95       6
8  2011-11-25  True        23202      2.08       2
9  2011-09-29  True        22386      2.08       52
   time                     in.Date     in.StockCode  in.UnitPrice  in.UnitsSold  in.cust_known  out.prediction        anomaly.count
0  2024-03-13 19:25:39.258  2011-02-15  85099C        1.95          15            True           [40.57067616108544]   0
1  2024-03-13 19:25:39.258  2011-04-14  23201         2.08          20            True           [33.125327529877765]  0
2  2024-03-13 19:25:39.258  2011-04-20  85099B        2.08          110           True           [33.125327529877765]  0
3  2024-03-13 19:25:39.258  2011-08-19  21931         2.08          20            True           [33.125327529877765]  0
4  2024-03-13 19:25:39.258  2011-09-21  22386         2.08          23            True           [33.125327529877765]  0
5  2024-03-13 19:25:39.258  2011-09-12  85099C        2.08          10            True           [33.125327529877765]  0
6  2024-03-13 19:25:39.258  2011-11-30  21929         2.08          10            False          [9.110871233285868]   0
7  2024-03-13 19:25:39.258  2011-01-21  85099C        1.95          6             True           [40.57067616108544]   0
8  2024-03-13 19:25:39.258  2011-11-25  23202         2.08          2             True           [33.125327529877765]  0
9  2024-03-13 19:25:39.258  2011-09-29  22386         2.08          52            True           [33.125327529877765]  0

Undeploy the Pipeline

Once we’ve finished with our demand curve demo, we’ll undeploy the pipeline and give the resources back to our Kubernetes cluster.

demandcurve_pipeline.undeploy()
Waiting for undeployment - this will take up to 45s .................................... ok
name: demandcurvepipeline
created: 2024-03-13 19:22:55.459643+00:00
last_updated: 2024-03-13 19:25:24.927731+00:00
deployed: False
arch: None
tags:
versions: c2baa959-f50c-468e-875e-b3d14972d400, cf5f73c5-9725-49e7-a55c-530737653959, 5be24e45-8175-4b73-a091-d397d7bc5514
steps: curve-preprocess
published: False
publishedFalse

Thank you for being a part of this demonstration. If you have additional questions, please feel free to contact us at Wallaroo.