Wallaroo requires at least 8 cores with 16 GB of RAM per node, and recommends at least 16 cores in total to enable all services. With fewer than 16 cores, some services must be disabled to allow basic functionality, as detailed in the table below.
Note that even with these services disabled, Wallaroo performance may still be affected by the models, pipelines, and data used. The larger the models and the more steps in a pipeline, the more resources Wallaroo requires to operate efficiently.
| Service | < 8 cores | 8 cores / 48 GB | 16 cores / 48 GB | 32 cores / 48 GB | Description |
|---|---|---|---|---|---|
| Inference | ✔ | ✔ | ✔ | ✔ | The Wallaroo inference engine that performs inference requests from deployed pipelines. |
| Dashboard | ✘ | ✔ | ✔ | ✔ | The graphical user interface for configuring workspaces, deploying pipelines, tracking metrics, and other uses. |
| JupyterHub/Lab | | | | | The JupyterHub service for running Python scripts, Jupyter Notebooks, and other related tasks within the Wallaroo instance. |
| - Single Lab | ✘ | ✔ | ✔ | ✔ | |
| - Multiple Labs | ✘ | ✘ | ✔ | ✔ | |
| Prometheus | ✘ | ✔ | ✔ | ✔ | Used for collecting and reporting on metrics. Typical metrics are values such as CPU utilization and memory usage. |
| - Alerting | ✘ | ✘ | ✔ | ✔ | |
| - Model Validation | ✘ | ✘ | ✔ | ✔ | |
| - Dashboard Graphs | ✘ | ✔ | ✔ | ✔ | |
| Plateau | ✘ | ✘ | ✔ | ✔ | A Wallaroo-developed service for storing inference logs at high speed. This is not a long-term storage service; organizations are encouraged to store logs in a long-term solution if required. |
| - Model Insights | ✘ | ✘ | ✔ | ✔ | |
| - Python API | | | | | |
| Model Conversion | ✘ | ✔ | ✔ | ✔ | Converts models into a native runtime for use with the Wallaroo inference engine. |
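Before selecting a configuration, it can help to confirm how many cores and how much memory the target nodes actually provide. The following is a minimal sketch, assuming kubectl access to the target cluster; the column labels are arbitrary names chosen for this example, not Wallaroo settings:

```bash
# List each node's allocatable CPU and memory to determine which column
# of the table above applies to the target cluster.
kubectl get nodes -o custom-columns='NODE:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory'
```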
To install Wallaroo with minimum services, a configuration file is used as part of the kots-based installation. For full details on the Wallaroo installation process, see the Wallaroo Install Guides.
This guide is broken up into two segments:
- Wallaroo Installation with less than 16 Cores: installing Wallaroo with at least 8 cores but fewer than 16 cores.
- Wallaroo Installation with less than 8 Cores: installing Wallaroo with fewer than 8 cores.
Wallaroo Installation with less than 16 Cores
To install Wallaroo with at least 8 cores but fewer than 16 cores, the following services must be disabled:
- Model Insights
- Plateau
The following configuration settings can be used during the installation procedure to disable these services. A sample file, wallaroo-install-8-cores.yaml, containing these settings is shown below:
```yaml
apiVersion: kots.io/v1beta1
kind: ConfigValues
metadata:
  name: wallaroo
spec:
  values:
    dashboard_enabled:
      value: "1"
    enable_model_insights:
      value: "0"
    model_conversion_enabled:
      value: "1"
    plateau_enabled:
      value: "0"
```
The configuration file can be applied via the `--config-values={CONFIG YAML FILE}` option. For example:
```bash
kubectl kots install "wallaroo/ce" \
  -n wallaroo \
  --config-values=wallaroo-install-8-cores.yaml
```
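If Wallaroo is already installed and only individual settings need to change, recent versions of the kots CLI also provide a set config subcommand. The sketch below is an assumption about that workflow rather than part of the documented install procedure; the application slug shown (wallaroo-ce) is hypothetical, so confirm the real slug with kubectl kots get apps -n wallaroo before running it:

```bash
# Hypothetical example: disable Plateau on an existing installation.
# The app slug "wallaroo-ce" is an assumption; use the slug reported by
# `kubectl kots get apps -n wallaroo`, and a kots CLI version that
# includes the `set config` subcommand.
kubectl kots set config wallaroo-ce plateau_enabled="0" -n wallaroo
```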
Wallaroo Installation with less than 8 Cores
To install Wallaroo with fewer than 8 cores, all services except Inference must be disabled.
The following configuration settings can be used during the installation procedure to disable these services. A sample file, wallaroo-install-less-8-cores.yaml, containing these settings is shown below:
```yaml
apiVersion: kots.io/v1beta1
kind: ConfigValues
metadata:
  name: wallaroo
spec:
  values:
    alert_manager_enabled:
      value: "0"
    arbitrary_execution_enabled:
      value: "0"
    dashboard_enabled:
      value: "0"
    enable_grafana:
      value: "0"
    enable_model_insights:
      value: "0"
    explainability_enabled:
      value: "0"
    model_conversion_enabled:
      value: "0"
    plateau_enabled:
      value: "0"
    jupyter_mode:
      value: none
```
The configuration file can be applied via the `--config-values={CONFIG YAML FILE}` option. For example:
```bash
kubectl kots install "wallaroo/ce" \
  -n wallaroo \
  --config-values=wallaroo-install-less-8-cores.yaml
```
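After either installation completes, one way to sanity-check the result is to list the pods in the installation namespace; pods for the disabled services (for example, the dashboard, JupyterHub, and Plateau) should be absent. Pod names vary by release, so treat this as a rough check rather than an exact expected output:

```bash
# Sanity check: only the inference-related and core platform pods should
# remain when the optional services are disabled.
kubectl get pods -n wallaroo
```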