wallaroo.client
Client handle to a Wallaroo platform instance.
Objects of this class serve as the entrypoint to Wallaroo platform functionality.
Create a Client handle.
Parameters
- Optional[str] api_endpoint: Host/port of the platform API endpoint. If not provided, the value of the WALLAROO_URL environment variable will be used.
- Optional[str] auth_endpoint: Host/port of the platform Keycloak instance. If not provided, the value of the WALLAROO_AUTH_URL environment variable will be used.
- Optional[int] request_timeout: Maximum timeout of web requests, in seconds.
- Optional[str] auth_type: Authentication type to use. Can be one of: "none", "sso", "user_password".
- Optional[bool] interactive: If True, some calls will print additional human-readable information; if False, they will not. If not provided, interactive defaults to True when running inside Jupyter and False otherwise.
- str time_format: Preferred strftime format string for displaying timestamps in a human context.
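For illustration, a minimal connection sketch; the endpoint URLs below are placeholders, and both can be omitted to fall back to the environment variables.
import wallaroo

# Placeholder endpoints; real values depend on your installation.
wl = wallaroo.Client(
    api_endpoint="https://wallaroo.example.com",
    auth_endpoint="https://keycloak.wallaroo.example.com",
    auth_type="sso",
    request_timeout=120,
)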
Calculates the auth values from defaults, parameters, or environment variables. Made static so it can be tested without reaching out to SSO, etc.
List all deployments (active or not) on the platform.
Returns
A list of all deployments on the platform.
Search for pipelines. All parameters are optional, in which case the result is the same as list_pipelines(). All times are strings to be parsed by datetime.isoformat.
Example:
myclient.search_pipelines(created_end='2022-04-19 13:17:59+00:00', search_term="foo")
Parameters
- str search_term: Will be matched against tags and model names. Example: "footag123".
- bool deployed: Whether the pipeline has been deployed.
- str created_start: Pipeline was created at or after this time.
- str created_end: Pipeline was created at or before this time.
- str updated_start: Pipeline was updated at or after this time.
- str updated_end: Pipeline was updated at or before this time.
Returns
A list of pipelines matching the search criteria.
Search for pipeline versions. All parameters are optional. All times are strings to be parsed by datetime.isoformat.
Example:
myclient.search_pipeline_versions(created_end='2022-04-19 13:17:59+00:00', search_term="foo")
Parameters
- str search_term: Will be matched against tags and model names. Example: "footag123".
- bool deployed: Whether the pipeline has been deployed.
- str created_start: Pipeline was created at or after this time.
- str created_end: Pipeline was created at or before this time.
- str updated_start: Pipeline was updated at or after this time.
- str updated_end: Pipeline was updated at or before this time.
Returns
A list of pipeline versions matching the search criteria.
Search models owned by you. Example:
client.search_my_models(search_term="my_model")
Parameters
- search_term: Searches the following metadata: names, shas, versions, file names, and tags
- uploaded_time_start: Inclusive start of the upload time window.
- uploaded_time_end: Inclusive end of the upload time window.
Returns
ModelVersionList
Search model versions owned by you. Example:
client.search_my_model_versions(search_term="my_model")
Parameters
- search_term: Searches the following metadata: names, shas, versions, file names, and tags
- uploaded_time_start: Inclusive start of the upload time window.
- uploaded_time_end: Inclusive end of the upload time window.
Returns
ModelVersionList
Search all models you have access to.
Parameters
- search_term: Searches the following metadata: names, shas, versions, file names, and tags
- uploaded_time_start: Inclusive start of the upload time window.
- uploaded_time_end: Inclusive end of the upload time window.
Returns
ModelVersionList
Search all model versions you have access to. Example:
client.search_model_versions(search_term="my_model")
Parameters
- search_term: Searches the following metadata: names, shas, versions, file names, and tags
- uploaded_time_start: Inclusive start of the upload time window.
- uploaded_time_end: Inclusive end of the upload time window.
Returns
ModelVersionList
Deactivates an existing user of the platform.
Deactivated users cannot log into the platform and do not count towards the number of user seats allotted by the license.
Models and pipelines owned by a deactivated user are not removed from the platform.
Parameters
- str email: The email address of the user to deactivate.
Returns
None
Activates an existing user of the platform that had been previously deactivated.
Activated users can log into the platform.
Parameters
- str email: The email address of the user to activate.
Returns
None
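As a sketch covering both calls; the email address is a placeholder, and the method names deactivate_user / activate_user are assumed here.
# Free a license seat without removing the user's models or pipelines,
# then restore access later.
wl.deactivate_user("former.employee@example.com")
wl.activate_user("former.employee@example.com")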
Upload a model defined by a file as a new model variant.
Parameters
- name: str The name of the model of which this is a variant. Names must be ASCII alpha-numeric characters or dash (-) only.
- path: Union[str, pathlib.Path] Path of the model file to upload.
- framework: Optional[Framework] The model framework. Use a value from the Framework enum, e.g. Framework.PYTORCH or Framework.TENSORFLOW.
- input_schema: Optional[pa.Schema] Input schema, required for frameworks other than ONNX, TensorFlow, and Python.
- output_schema: Optional[pa.Schema] Output schema, required for frameworks other than ONNX, TensorFlow, and Python.
- convert_wait: Optional[bool] Defaults to True. If True, the method returns only after model conversion has finished; if False, it returns immediately.
Returns
The created Model.
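A hedged sketch of an upload with explicit schemas; the model name, file path, and the SKLEARN framework value are placeholders and assumptions.
import pyarrow as pa
from wallaroo.framework import Framework

# Placeholder schemas describing the model's input and output.
input_schema = pa.schema([pa.field("inputs", pa.list_(pa.float64(), 4))])
output_schema = pa.schema([pa.field("predictions", pa.int64())])

model = wl.upload_model(
    name="my-classifier",                # placeholder model name
    path="./models/classifier.pkl",      # placeholder path
    framework=Framework.SKLEARN,         # assumed Framework enum member
    input_schema=input_schema,
    output_schema=output_schema,
    convert_wait=True,                   # block until conversion finishes
)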
Registers an MLflow model as a new model.
Parameters
- str model_name: The name of the model of which this is a variant. Names must be ASCII alpha-numeric characters or dash (-) only.
- str image: Image name of the MLflow model to register.
Returns
The created Model.
Retrieves a model by name and optionally version from the current workspace.
Parameters
- name: The name of the model.
- version: The version of the model. If not provided, the latest version is returned.
Returns
The requested model.
Raises
- Exception: If the model with the given name does not exist.
- Exception: If the model with the given version does not exist.
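For example; the model name is a placeholder and get_model is the assumed method name.
# Fetch the latest version of a model from the current workspace;
# pass version=... to pin a specific model version instead.
model = wl.get_model(name="my-classifier")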
Fetch a Model by name.
Parameters
- str model_class: Name of the model class.
- str model_name: Name of the variant within the specified model class.
Returns
The Model with the corresponding model and variant name.
Fetch a Model version by name.
Parameters
- str model_class: Name of the model class.
- str model_name: Name of the variant within the specified model class.
Returns
The model version with the corresponding model and variant name.
Fetch a Deployment by name.
Parameters
- str deployment_name: Name of the deployment.
Returns
The Deployment with the corresponding name.
Fetch Pipelines by name.
Parameters
- str pipeline_name: Name of the pipeline.
Returns
The Pipeline with the corresponding name.
Retrieves a pipeline by name and optional version from the current workspace.
Parameters
- name: The name of the pipeline to retrieve.
- version: The version of the pipeline to retrieve. Defaults to None.
Returns
Pipeline: The requested pipeline.
Raises
- Exception: If the pipeline with the given name is not found in the workspace.
- Exception: If the pipeline with the given version is not found in the workspace.
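For example; the pipeline name is a placeholder and get_pipeline is the assumed method name.
# Fetch the latest version of a pipeline from the current workspace;
# pass version=... to retrieve a specific version instead.
pipeline = wl.get_pipeline(name="my-pipeline")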
Starts building a pipeline with the given pipeline_name, returning a PipelineConfigBuilder.
When completed, the pipeline can be uploaded with .upload().
Parameters
- pipeline_name string: Name of the pipeline, must be composed of ASCII alpha-numeric characters plus dash (-).
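A minimal sketch of the build-and-deploy flow, assuming a previously uploaded model object and that add_model_step and deploy behave as in the SDK examples.
# Build a single-step pipeline around an uploaded model, then deploy it.
pipeline = wl.build_pipeline("my-pipeline")   # placeholder pipeline name
pipeline.add_model_step(model)
pipeline = pipeline.deploy()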
Creates a new PipelineVariant of a "value-split experiment" type.
Parameters
- str name: Name of the Pipeline
- meta_key str: Inference input key on which to redirect inputs to experiment models.
- default_model ModelConfig: Model to which inferences are sent by default.
- challenger_models List[Tuple[Any, ModelConfig]]: A list of (meta_key value, model) pairs. If the inference data referred to by meta_key equals one of the keys in this list, that inference is redirected to the corresponding model instead of the default model.
Cleans up the inference result and log data from the engine / Plateau for display (UX) purposes.
Get logs for the given topic.
Parameters
- topic: str The topic to get logs for.
- limit: Optional[int] The maximum number of logs to return.
- start_datetime: Optional[datetime] The start time to get logs for.
- end_datetime: Optional[datetime] The end time to get logs for.
- dataset: Optional[List[str]] By default this is set to ["*"], which returns ["time", "in", "out", "anomaly"]. Another available option: ["metadata"].
- dataset_exclude: Optional[List[str]] If set, allows user to exclude parts of dataset.
- dataset_separator: Optional[Union[Sequence[str], str]] If set to ".", return dataset will be flattened.
- directory: Optional[str] If set, logs will be exported to a file in the given directory.
- file_prefix: Optional[str] Prefix to name the exported file. Required if directory is set.
- data_size_limit: Optional[str] The maximum size of the exported data in MB. Size includes all files within the provided directory. By default, the data_size_limit will be set to 100MB.
- arrow: Optional[bool] If True, return logs as an Arrow Table; otherwise return a pandas DataFrame.
Returns
Tuple[Union[pa.Table, pd.DataFrame, LogEntries], str] The logs and status.
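A sketch of a time-windowed log query; the topic name is a placeholder and get_logs is the assumed method name.
from datetime import datetime, timedelta, timezone

end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)

# Fetch up to 100 log records from the last hour as a pandas DataFrame.
logs, status = wl.get_logs(
    topic="pipeline-logs.my-pipeline",   # placeholder topic
    start_datetime=start,
    end_datetime=end,
    limit=100,
    arrow=False,
)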
Gets logs from Plateau for a particular time window without attempting to convert them to Inference LogEntries. Logs can be returned as strings or with the JSON parsed into lists and dicts.
Parameters
- topic str: The name of the topic to query
- start Optional[datetime]: The start of the time window
- end Optional[datetime]: The end of the time window
- limit int: The number of records to retrieve. Note retrieving many records may be a performance bottleneck.
- parse bool: Whether to attempt to parse the string as a JSON object.
- verbose bool: Prints out info to help diagnose issues.
Gets logs from Plateau for a particular time window and filters them for the model specified.
Parameters
- pipeline_name str: The name/pipeline_id of the pipeline to query
- topic str: The name of the topic to query
- start Optional[datetime]: The start of the time window
- end Optional[datetime]: The end of the time window
- model_id: The name of a specific model to filter on, if any.
- limit int: The number of records to retrieve. Note retrieving many records may be a performance bottleneck.
- verbose bool: Prints out info to help diagnose issues.
Gets the assay results for a particular time window, parses them, and returns a List of AssayAnalysis.
Parameters
- assay_id: int The id of the assay we are looking for.
- start: datetime The start of the time window. If timezone info not set, uses UTC timezone by default.
- end: datetime The end of the time window. If timezone info not set, uses UTC timezone by default.
Creates an AssayBuilder that can be used to configure and create Assays.
Parameters
- assay_name: str Human-friendly name for the assay.
- pipeline: Pipeline The pipeline this assay will work on
- iopath: str The path to the input or output of the model that this assay will monitor.
- model_name: Optional[str] The name of the model to use for the assay.
- baseline_start: Optional[datetime] The start time for the inferences to use as the baseline
- baseline_end: Optional[datetime] The end time of the baseline window. Assay windows start immediately after the baseline window and are run at regular intervals continuously until the assay is deactivated or deleted.
- baseline_data: Optional[np.ndarray] Use this to load existing baseline data at assay creation time.
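A sketch of configuring and creating an assay; the assay name, the iopath string, and the builder's .build() call are assumptions for illustration.
from datetime import datetime, timedelta, timezone

baseline_end = datetime.now(timezone.utc)
baseline_start = baseline_end - timedelta(days=1)

# Configure an assay over a one-day baseline, then persist it.
builder = wl.build_assay(
    assay_name="output-drift",           # placeholder name
    pipeline=pipeline,
    iopath="output predictions 0",       # placeholder iopath
    baseline_start=baseline_start,
    baseline_end=baseline_end,
)
assay_id = wl.create_assay(builder.build())   # assumes the builder exposes build()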
Creates an assay in the database.
Parameters
- config AssayConfig: The configuration for the assay to create.
Returns
The identifier (int) for the assay that was created.
Get information about a specific assay.
Parameters
- assay_id: int The identifier for the assay to retrieve.
Returns
The assay with the given identifier
Sets the state of an assay to active or inactive.
Parameters
- assay_id: int The id of the assay to set the active state of.
- active: bool The active state to set the assay to. Default is True.
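For example, continuing from the assay_id above; the parameter names are taken from the descriptions here and otherwise assumed.
from datetime import datetime, timedelta, timezone

end = datetime.now(timezone.utc)
start = end - timedelta(days=1)

# Pause the assay, inspect the last day of results, then re-enable it.
wl.set_assay_active(assay_id, active=False)
results = wl.get_assay_results(assay_id, start=start, end=end)
wl.set_assay_active(assay_id, active=True)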
Create a new workspace with the current user as its first owner.
Parameters
- str workspace_name: Name of the workspace, must be composed of ASCII alpha-numeric characters plus dash (-)
List all workspaces on the platform which this user has permission to see.
Returns
A list of all workspaces on the platform.
Get a workspace by name. If the workspace does not exist, create it.
Parameters
- name: The name of the workspace to get.
- create_if_not_exist: If True, create a new workspace if a workspace with the given name does not already exist. Defaults to False.
Returns
The workspace with the given name.
Set the current workspace. Any calls involving pipelines or models will use the given workspace from then on.
Return the current workspace. See also set_current_workspace.
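A sketch tying the workspace calls together; the workspace name is a placeholder.
# Fetch or create a workspace, then make it the active context for all
# subsequent model and pipeline calls.
ws = wl.get_workspace(name="ml-team", create_if_not_exist=True)
wl.set_current_workspace(ws)
print(wl.get_current_workspace())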
List all Orchestrations in the current workspace.
Returns
A List containing all Orchestrations in the current workspace.
Upload a file to be packaged and used as an Orchestration.
The uploaded artifact must be a ZIP file which contains:
- User code. If main.py exists, then that will be used as the task entrypoint. Otherwise, the first main.py found in any subdirectory will be used as the entrypoint.
- Optional: A standard Python requirements.txt for any dependencies to be provided in the task environment. The Wallaroo SDK will already be present and should not be mentioned. Multiple requirements.txt files are not allowed.
- Optional: Any other artifacts desired for runtime, including data or code.
Parameters
- Optional[str] path: The path to the file on your filesystem that will be uploaded as an Orchestration.
- Optional[bytes] bytes_buffer: The raw bytes to upload to be used as the Orchestration. Cannot be used with the path param.
- Optional[str] name: An optional descriptive name for this Orchestration.
- Optional[str] file_name: An optional filename to describe your Orchestration when using the bytes_buffer param. Ignored when path is used.
Returns
The Orchestration that was uploaded.
Raises
- OrchestrationUploadFailed: If a server-side error prevented the upload from succeeding.
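For example; the ZIP path and name are placeholders, and upload_orchestration is the assumed method name.
# Upload a packaged orchestration from a local ZIP file containing
# main.py and an optional requirements.txt.
orch = wl.upload_orchestration(
    path="./orchestrations/daily_inference.zip",
    name="daily-inference",
)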
List all Tasks in the current Workspace.
Returns
A List containing Task objects.
Retrieve a Task by its ID.
Parameters
- str task_id: The ID of the Task to retrieve.
Returns
A Task object.
Determines if this code is inside an orchestration task.
Returns
True if running in a task.
When running inside a task (see in_task()), obtain arguments passed to the task.
Returns
Dict of the arguments
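A sketch of how a task's main.py might read its arguments; the argument key and default value are placeholders, and task_args is the assumed method name.
import wallaroo

wl = wallaroo.Client()

# Inside an orchestration task, read the launch arguments; fall back to a
# default when running the same script locally.
if wl.in_task():
    args = wl.task_args()
    pipeline_name = args.get("pipeline_name", "my-pipeline")
else:
    pipeline_name = "my-pipeline"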
Creates a Connection with the given name, type, and type-specific details.
Returns
Connection to an external data source.
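A hedged sketch; the connection type string, the details dict, and the parameter names other than name are assumptions for illustration.
# Register an external data source that tasks and users can look up by name.
connection = wl.create_connection(
    name="house-price-data",
    connection_type="HTTP",                                   # assumed type label
    details={"url": "https://data.example.com/houses.json"},  # placeholder details
)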
Create a Model Registry connection in this workspace that can be reused across workspaces.
Parameters
- name str: A descriptive name for this registry.
- token str: A Bearer token necessary for accessing this Registry.
- url str: The root URL for this registry. It should NOT include /api/2.0/mlflow as part of it.
- workspace_id int: The ID of the workspace to attach this registry to, i.e. client.get_current_workspace().id().
Returns
A ModelRegistry object.
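For example; the registry name, token, and URL are placeholders, and create_model_registry is the assumed method name.
# Attach an MLflow-compatible registry to the current workspace. Note the
# URL is the registry root only, without /api/2.0/mlflow.
registry = wl.create_model_registry(
    name="databricks-registry",
    token="<bearer-token>",
    url="https://adb-1234567890.11.azuredatabricks.net",
    workspace_id=wl.get_current_workspace().id(),
)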