Serve

How to perform inferences and view inference logs.

Deployed ML models provide endpoints for performing inference requests. The results of these requests are available through the Wallaroo pipeline logs.

The following guides detail how to perform inference requests through a Wallaroo-deployed ML model and how to retrieve the logs of those inference requests.
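The overall workflow can be sketched as: send input data to a deployed pipeline, receive scored results, then retrieve the history of those requests from the pipeline logs. The snippet below illustrates that shape with a stand-in object, since a live Wallaroo cluster is not available here; the `infer()` and `logs()` method names mirror the Wallaroo SDK, but this class is a hypothetical illustration, not the SDK itself.

```python
import pandas as pd

# Hypothetical stand-in for a deployed pipeline. In Wallaroo, the real
# object is obtained from the SDK after deploying a pipeline; here we only
# model the infer -> logs flow for illustration.
class DeployedPipeline:
    def __init__(self):
        self._logs = []

    def infer(self, df: pd.DataFrame) -> pd.DataFrame:
        # The deployed model scores each input row (placeholder model:
        # doubles the "feature" column) and returns the results.
        result = df.assign(prediction=df["feature"] * 2)
        # Each inference request and its result are recorded in the logs.
        self._logs.append(result)
        return result

    def logs(self) -> pd.DataFrame:
        # Pipeline logs expose the history of inference requests.
        return pd.concat(self._logs, ignore_index=True)

pipeline = DeployedPipeline()
out = pipeline.infer(pd.DataFrame({"feature": [1.0, 2.5]}))
print(out["prediction"].tolist())  # [2.0, 5.0]
print(len(pipeline.logs()))        # 2 rows logged so far
```

The guides below cover the real SDK calls and log retrieval options in detail.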


Inference

How to perform inferences through a Wallaroo-deployed ML model.

Inference Logs

How to retrieve inference logs.