Inference
How to perform inferences on deployed models in edge and multicloud environments.
Edge and Multi-cloud Inference Anywhere provides the ability to deploy models and perform inferences on them in any environment (edge or multicloud) and on any supported hardware, with or without GPUs. Inferences in these environments are observed for drift detection, and the deployed models are updated when new versions or entirely new sets of models are created.
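As a minimal sketch, an inference against an edge-deployed pipeline's HTTP endpoint might look like the following. The host, port, pipeline name, and input schema are hypothetical placeholders, and the endpoint path and payload format may differ depending on how the deployment is configured.

```python
import json
import requests

# Hypothetical edge deployment host and pipeline name; replace with your own values.
EDGE_HOST = "http://edge-device.example.com:8080"
PIPELINE = "house-price-estimator"

# Input rows in pandas-records style JSON; the actual field names and shapes
# depend on the model's input schema.
payload = [
    {"tensor": [4.0, 2.5, 2900.0, 5505.0, 2.0, 0.0, 0.0, 3.0]}
]

# Submit the inference request to the deployed pipeline's HTTP endpoint.
response = requests.post(
    f"{EDGE_HOST}/pipelines/{PIPELINE}",
    headers={"Content-Type": "application/json; format=pandas-records"},
    data=json.dumps(payload),
    timeout=30,
)
response.raise_for_status()
print(response.json())
```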
The following hardware and AI Accelerators are supported.
Accelerator | ARM Support | X64/X86 Support | Intel GPU | Nvidia GPU | Description |
---|---|---|---|---|---|
None | N/A | N/A | N/A | N/A | The default acceleration, used for all scenarios and architectures. |
AIO | √ | X | X | X | AIO acceleration for Ampere Optimized trained models, only available with ARM processors. |
Jetson | √ | X | X | √ | Nvidia Jetson acceleration, used for edge deployments with ARM processors. |
CUDA | √ | √ | X | √ | Nvidia CUDA acceleration, supported by both ARM and X64/X86 processors. Intended for deployments with Nvidia GPUs. |
OpenVINO | X | √ | √ | X | Intel OpenVINO acceleration, compatible with X64/X86 architectures. Aimed at edge and multicloud deployments with or without Intel GPUs. |
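The accelerator is typically selected when the model is uploaded or deployed. The sketch below assumes the Wallaroo SDK exposes an `Acceleration` setting on model upload; the module paths, the `accel` parameter name, and the enum values shown here are assumptions and may differ between SDK versions, so consult the SDK reference for the exact signature.

```python
import wallaroo
from wallaroo.framework import Framework
from wallaroo.engine_config import Acceleration  # assumed module path

wl = wallaroo.Client()

# Upload the model and request CUDA acceleration for Nvidia GPU deployments.
# The `accel` parameter name is an assumption; verify it against your
# installed SDK version before use.
model = wl.upload_model(
    "resnet50-classifier",          # hypothetical model name
    "./models/resnet50.onnx",       # hypothetical local model file
    framework=Framework.ONNX,
    accel=Acceleration.CUDA,
)
```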
The following guides describe how to:
- Perform inferences on deployed models in edge and multicloud environments.
- Observe edge and multicloud deployed models for performance, model drift, and related issues.
- Update and manage edge and multicloud models.
- Run the Wallaroo Inference Server on diverse hardware architectures and their associated acceleration libraries.