Inference on ARM Architecture
How to deploy ML models with ARM processors and infrastructure.
Wallaroo deploys models and performs inference on them in any environment (edge or multicloud) and on any hardware. Inference in these environments is monitored for drift, deployed models are updated as new versions or entirely new sets of models become available, and deployments run with or without GPUs.
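As a minimal sketch of that workflow, the Python example below uploads an ONNX model targeted at ARM processors with the Wallaroo SDK and deploys it to a pipeline. The model name and file path are placeholders, and the `arch` upload parameter is assumed to be available in your SDK version.

```python
import wallaroo
from wallaroo.framework import Framework
from wallaroo.engine_config import Architecture

# Connect to the Wallaroo instance.
wl = wallaroo.Client()

# Upload an ONNX model targeted at ARM processors.
# The model name and file path are placeholders; the `arch`
# parameter is assumed to be supported by this SDK version.
model = wl.upload_model(
    "my-arm-model",
    "./models/my-model.onnx",
    framework=Framework.ONNX,
    arch=Architecture.ARM,
)

# Build a pipeline with the model as its only step and deploy it.
pipeline = wl.build_pipeline("arm-inference-pipeline")
pipeline.add_model_step(model)
pipeline.deploy()
```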
The following AI accelerators and processor architectures are supported.
| Accelerator | ARM Support | X64/X86 Support | Description |
|---|---|---|---|
| None | ✓ | ✓ | The default acceleration, used for all scenarios and architectures. |
| AIO | ✓ | ✗ | AIO acceleration for Ampere Optimized trained models; only available with ARM processors. |
| Jetson | ✓ | ✗ | NVIDIA Jetson acceleration, used for edge deployments with ARM processors. |
| CUDA | ✓ | ✓ | NVIDIA CUDA acceleration, supported on both ARM and X64/X86 processors. Intended for deployments with GPUs. |
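For example, the sketch below selects one of the accelerators above at model upload time. It assumes the SDK's `Acceleration` enum exposes the options in the table; the model name, file path, and `accel` parameter are illustrative assumptions.

```python
import wallaroo
from wallaroo.framework import Framework
from wallaroo.engine_config import Architecture, Acceleration

wl = wallaroo.Client()

# Upload a model targeting ARM processors with AIO acceleration.
# Swap Acceleration.AIO for Acceleration.Jetson or Acceleration.CUDA
# to match the hardware in the table above. The model name, file
# path, and `accel` parameter are assumptions for illustration.
model = wl.upload_model(
    "aio-accelerated-model",
    "./models/my-model.onnx",
    framework=Framework.ONNX,
    arch=Architecture.ARM,
    accel=Acceleration.AIO,
)
```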
The following guides describe how to:

- Deploy ML models with ARM processors and infrastructure.
- Package models to run on GPUs.
- Package models to run with hardware accelerators.