2024.1 Product Release Notes
We are pleased to announce the following product improvements in our 2024.1 release:
- Edge model in-line updates: In-line updates for edge and multicloud model deployments are now available. Organizations can replace model versions, entire models, or sets of model steps, and roll out those replacements to model deployments across multiple edge locations with ease.
- Edge observability with Low/No Connectivity: Observability for multicloud and edge location deployments is provided for conditions where connectivity between the edge deployment and the Wallaroo Ops center is sporadic or absent.
- In-Line Platform Upgrades: Starting from this version, upgrading from a previous version to a new version of Wallaroo is easier than ever with in-line platform upgrades. With a few clicks and a short wait time, organizations can upgrade to new versions of Wallaroo and get right back to managing and deploying ML models with new features.
- User Administration Enhancements: The Platform Admin Dashboard provides a centralized location for users with the `admin` role to create, update, and manage users. Admin users can now navigate to any platform workspace and take action as needed within those workspaces.
- Oracle Cloud Infrastructure (OCI) Installation Support Beta: Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) support is offered as a beta release. Please contact your Wallaroo Support representative for more information. See the Wallaroo Install Guides for details on how to install Wallaroo in your environment.
- Model Acceleration Integration: Models uploaded to Wallaroo are assigned an AI Model Accelerator. This setting is inherited by model deployments and edge model deployments.
- Model Deployment on ARM Enhancements: Models uploaded to Wallaroo are assigned either the `X86/X64` infrastructure by default, or `ARM` processor support. These infrastructure assignments are inherited by model deployments and edge model deployments.
- Enhanced Anomaly Detection: Wallaroo supports user-defined validations using polars expressions on model inference inputs and outputs to determine whether data falls outside expected norms. This provides greater flexibility for models with multiple input and output fields and for more complex anomaly detection expressions.
- Model Deployment Autoscaling: Model deployment configurations support auto-scaling containerized models based on CPU utilization. This provides organizations the ability to reduce costs by scaling resources up and down as required.
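As a rough sketch of what such a deployment configuration might look like in the Wallaroo SDK (the replica bounds and CPU threshold here are illustrative assumptions, and `pipeline` is assumed to be an existing pipeline object; consult the SDK reference for the authoritative builder methods):

```python
import wallaroo

# Scale between 1 and 5 replicas, adding replicas when average CPU
# utilization across replicas exceeds roughly 75%.
deploy_config = (
    wallaroo.DeploymentConfigBuilder()
    .replica_autoscale_min_max(minimum=1, maximum=5)
    .autoscale_cpu_utilization(75)
    .build()
)

# `pipeline` is assumed to be an existing Wallaroo pipeline:
# pipeline.deploy(deployment_config=deploy_config)
```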
- Python Model Enhancements: Python models used as pipeline steps for pre- and post-processing of data, or models deployed as Python scripts such as ARIMA, are enhanced to expand the available Python libraries and increase the flexibility of Python models.
- XGBoost Native Support: XGBoost support is enhanced to performantly support a wider set of XGBoost models. XGBoost models are no longer required to be converted to ONNX in order to run in a performant runtime.
- SDK Assay Improvements: Model drift observability is enhanced with new Wallaroo SDK methods to create baselines and track model drift with assays.
- New SDK Helper Functions for workspace, pipeline, and model management: The Wallaroo SDK enhancements bring new methods to create and access workspaces, pipelines, and models, improving quality of life for SDK users.
- Parallel Infer with DataFrames: For large datasets and increased performance on multi-replica model deployments, the Wallaroo SDK `parallel_infer` method supports pandas DataFrames and Apache Arrow tables.