Module wallaroo.conductor

The Conductor client serves as a common basis and collection point for the various parts of the SDK.

Classes

class Conductor (rest_api_host='http://rest-api.wallaroo:3030', graphql_api_host='http://graphql-api.wallaroo:8080')

This part of the SDK serves as a common basis and collection point for the various other parts.

Create an instance of the conductor. Currently the SDK, and thus the conductor object, is expected to be running inside Kubernetes, in the default 'wallaroo' namespace. If that is not the case, different URLs pointing at the load balancers (services) in the appropriate namespace can be specified.

The conductor client creates various clients for the other parts of the SDK.
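
For example, a minimal sketch of both the default in-cluster construction and an explicit override (the alternate hosts below are hypothetical placeholders):

    from wallaroo.conductor import Conductor

    # Defaults assume the SDK is running inside Kubernetes in the
    # 'wallaroo' namespace.
    conductor = Conductor()

    # Otherwise, point at the load balancers (services) in the
    # appropriate namespace; these hosts are placeholders.
    conductor = Conductor(
        rest_api_host='http://rest-api.my-namespace:3030',
        graphql_api_host='http://graphql-api.my-namespace:8080',
    )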

Methods

def deploy_model(self, deployment_name: str, model_name: str, model_version: str, model_path: str, model_type: str = 'onnx', deployed: bool = True, cpu: int = 1) ‑> dict

One-step function to upload and deploy a model.

NOTE: This function returns immediately, but the operation may take several minutes to complete.
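
For example, a usage sketch assuming a conductor client created as above; the deployment name, model name, version, and path are placeholders:

    # Upload and deploy in one call. This returns immediately;
    # the deployment itself may take several minutes to complete.
    result = conductor.deploy_model(
        deployment_name='ccfraud-deployment',
        model_name='ccfraud',
        model_version='v1',
        model_path='./models/ccfraud.onnx',
        model_type='onnx',
        deployed=True,
        cpu=1,
    )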

def get_audit_logs(self, model_id: str = None, pipeline_id: str = None, start_time: datetime.datetime = None, end_time: datetime.datetime = None, limit: int = 1000)

Get audit logs for all engines. The results can be scoped down via the optional parameters.

For large volumes of records, we recommend paginating by time and using a reasonable limit.

:param start_time: fetch all logs after this time, inclusive
:param end_time: fetch all logs before this time, exclusive
:param model_id: only fetch logs tagged with this model id
:param pipeline_id: only fetch logs tagged with this pipeline id
:param limit: hard cap on the number of records to fetch
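
For example, a sketch of time-based pagination, assuming a conductor client as above; the window bounds and model id are placeholders:

    from datetime import datetime, timedelta

    # Fetch logs one hour at a time; start_time is inclusive and
    # end_time is exclusive, so consecutive windows do not overlap.
    window_start = datetime(2021, 6, 1)
    window_end = window_start + timedelta(hours=1)
    logs = conductor.get_audit_logs(
        model_id='ccfraud',
        start_time=window_start,
        end_time=window_end,
        limit=1000,
    )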

def inference_file(self, deployment_id: str, model_id: str, data_path: str) ‑> dict

Convenience function to submit the JSON contents of a file to the models endpoint for inference.
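
A short sketch, with hypothetical ids and file path:

    # Submit the JSON contents of a file to a deployed model.
    result = conductor.inference_file(
        deployment_id='ccfraud-deployment',
        model_id='ccfraud',
        data_path='./data/high_fraud.json',
    )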

def inference_tensor(self, deployment_id: str, model_id: str, data: dict) ‑> dict

Submit a tensor to the models endpoint for inference and return the result as a dict. The data should be in the form {"tensor": [1.0, 2.0]}, with any additional metadata included as needed.
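
For example, with placeholder ids:

    # Submit a tensor dict directly; the result comes back as a dict.
    result = conductor.inference_tensor(
        deployment_id='ccfraud-deployment',
        model_id='ccfraud',
        data={"tensor": [1.0, 2.0]},
    )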

def pipeline_inference_file(self, deployment_id: str, pipeline_id: str, data_path: str) ‑> dict

Convenience function to submit the JSON contents of a file to the pipeline endpoint for inference.
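
A brief sketch mirroring inference_file but against a pipeline; the ids and path are placeholders:

    # Submit the JSON contents of a file to a deployed pipeline.
    result = conductor.pipeline_inference_file(
        deployment_id='ccfraud-deployment',
        pipeline_id='ccfraud-pipeline',
        data_path='./data/high_fraud.json',
    )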

def pipeline_inference_tensor(self, deployment_id: str, pipeline_id: str, data: dict) ‑> dict

Submit a tensor to the pipeline endpoint for inference and return the result as a dict. The data should be in the form {"tensor": [1.0, 2.0]}, with any additional metadata included as needed.
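
And the tensor variant, again with placeholder ids:

    # Submit a tensor dict to a deployed pipeline.
    result = conductor.pipeline_inference_tensor(
        deployment_id='ccfraud-deployment',
        pipeline_id='ccfraud-pipeline',
        data={"tensor": [1.0, 2.0]},
    )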