Wallaroo MLOps API Essentials Guide: Model Management
Model Naming Requirements
Model names map onto Kubernetes objects, and must be DNS compliant. Model names must consist of lowercase ASCII alphanumeric characters or dashes (-) only; periods (.) and underscores (_) are not allowed.
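As a quick check before upload, the following minimal sketch verifies that a proposed model name uses only the allowed characters; the valid_model_name helper is hypothetical and not part of the Wallaroo SDK.
import re
def valid_model_name(name: str) -> bool:
    # Hypothetical helper: True only if the name is lowercase ASCII
    # alphanumeric characters or dashes (-); '.' and '_' are rejected.
    return re.fullmatch(r"[a-z0-9-]+", name) is not None
valid_model_name("api-sample-model")   # True
valid_model_name("api_sample.model")   # False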
This tutorial and the assets are available as part of the Wallaroo Tutorials repository.
Wallaroo MLOps API Model Management Tutorial
This tutorial focuses on using the Wallaroo MLOps API for model management. For ease of use, these examples use the Wallaroo SDK to provide the authentication credentials. See the Wallaroo API Guide for full details on using the Wallaroo MLOps API.
References
The following references are available for more information about Wallaroo and the Wallaroo MLOps API:
- Wallaroo Documentation Site: The Wallaroo Documentation Site.
- Wallaroo MLOps API Documentation from a Wallaroo instance: A Swagger UI based documentation is available from your Wallaroo instance at https://{Wallaroo Domain}/v1/api/docs. For example, if the Wallaroo Domain is example.wallaroo.ai, the Wallaroo MLOps API Documentation is at https://example.wallaroo.ai/v1/api/docs. Note that the . is part of the prefix.
IMPORTANT NOTE: The Wallaroo MLOps API is provided as an early access feature. Future iterations may adjust the methods and returns to provide a better user experience. Please refer to this guide for updates.
Prerequisites
- An installed Wallaroo instance.
- The following Python libraries installed:
  - requests
  - json
  - wallaroo: The Wallaroo SDK. Included with the Wallaroo JupyterHub service by default.
Connection Steps
Import Libraries
For these examples, we will rely on the wallaroo SDK and requests library for making connections to our sample Wallaroo Ops instance.
pyarrow is the Apache Arrow library used for data schemas in Wallaroo, while base64 is used to convert data schemas to base64 format for model uploads.
import wallaroo
import requests
import json
import pyarrow as pa
import base64
Connect to the Wallaroo Instance
The next step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.
This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.
If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). For more information on Wallaroo Client settings, see the Client Connection guide.
# Login through local Wallaroo instance
wl = wallaroo.Client()
Retrieve API Service URL
The Wallaroo SDK provides the API endpoint through the wallaroo.client.api_endpoint variable. This is derived from the Wallaroo Ops DNS settings.
The method wallaroo.client.auth.auth_header() retrieves the HTTP authorization headers for the API connection.
Both of these are used to authenticate the Wallaroo MLOps API calls in the following examples.
display(wl.api_endpoint)
display(wl.auth.auth_header())
'https://autoscale-uat-gcp.wallaroo.dev'
{'Authorization': 'Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJoVUcyQ1puTTZpa0EtQlNRVFNsVkJnaEd0dk45QXItN0g2R3NLcHlrY0ZjIn0.eyJleHAiOjE3Njc3MjIwMDAsImlhdCI6MTc2NzcxNDgwMCwianRpIjoiNjRlMDIxNzktZmEzNy00MDVhLTg5NTItNTA4YTA4Y2QwMGMzIiwiaXNzIjoiaHR0cHM6Ly9hdXRvc2NhbGUtdWF0LWdjcC53YWxsYXJvby5kZXYvYXV0aC9yZWFsbXMvbWFzdGVyIiwiYXVkIjpbIm1hc3Rlci1yZWFsbSIsImFjY291bnQiXSwic3ViIjoiN2Q2MDM4NTgtODhlMC00NzJlLThmNzEtZTQxMDk0YWZkN2VjIiwidHlwIjoiQmVhcmVyIiwiYXpwIjoic2RrLWNsaWVudCIsInNlc3Npb25fc3RhdGUiOiI1N2Q3YzFjZi03ZGFkLTRiMGUtYjUxMy01NzgxMjA0MDZmY2EiLCJhY3IiOiIxIiwicmVhbG1fYWNjZXNzIjp7InJvbGVzIjpbImNyZWF0ZS1yZWFsbSIsImRlZmF1bHQtcm9sZXMtbWFzdGVyIiwib2ZmbGluZV9hY2Nlc3MiLCJhZG1pbiIsInVtYV9hdXRob3JpemF0aW9uIl19LCJyZXNvdXJjZV9hY2Nlc3MiOnsibWFzdGVyLXJlYWxtIjp7InJvbGVzIjpbInZpZXctaWRlbnRpdHktcHJvdmlkZXJzIiwidmlldy1yZWFsbSIsIm1hbmFnZS1pZGVudGl0eS1wcm92aWRlcnMiLCJpbXBlcnNvbmF0aW9uIiwiY3JlYXRlLWNsaWVudCIsIm1hbmFnZS11c2VycyIsInF1ZXJ5LXJlYWxtcyIsInZpZXctYXV0aG9yaXphdGlvbiIsInF1ZXJ5LWNsaWVudHMiLCJxdWVyeS11c2VycyIsIm1hbmFnZS1ldmVudHMiLCJtYW5hZ2UtcmVhbG0iLCJ2aWV3LWV2ZW50cyIsInZpZXctdXNlcnMiLCJ2aWV3LWNsaWVudHMiLCJtYW5hZ2UtYXV0aG9yaXphdGlvbiIsIm1hbmFnZS1jbGllbnRzIiwicXVlcnktZ3JvdXBzIl19LCJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6ImVtYWlsIG9wZW5pZCBwcm9maWxlIiwic2lkIjoiNTdkN2MxY2YtN2RhZC00YjBlLWI1MTMtNTc4MTIwNDA2ZmNhIiwiZW1haWxfdmVyaWZpZWQiOmZhbHNlLCJodHRwczovL2hhc3VyYS5pby9qd3QvY2xhaW1zIjp7IngtaGFzdXJhLXVzZXItaWQiOiI3ZDYwMzg1OC04OGUwLTQ3MmUtOGY3MS1lNDEwOTRhZmQ3ZWMiLCJ4LWhhc3VyYS11c2VyLWVtYWlsIjoiam9obi5odW1tZWxAd2FsbGFyb28uYWkiLCJ4LWhhc3VyYS1kZWZhdWx0LXJvbGUiOiJhZG1pbl91c2VyIiwieC1oYXN1cmEtYWxsb3dlZC1yb2xlcyI6WyJ1c2VyIiwiYWRtaW5fdXNlciJdLCJ4LWhhc3VyYS11c2VyLWdyb3VwcyI6Int9In0sIm5hbWUiOiJKb2huIEhhbnNhcmljayIsInByZWZlcnJlZF91c2VybmFtZSI6ImpvaG4uaHVtbWVsQHdhbGxhcm9vLmFpIiwiZ2l2ZW5fbmFtZSI6IkpvaG4iLCJmYW1pbHlfbmFtZSI6IkhhbnNhcmljayIsImVtYWlsIjoiam9obi5odW1tZWxAd2FsbGFyb28uYWkifQ.ix3iHG7s8VJVxhU6EnSjh3Kmmx2AX5yiaDHSt4Cvnz5renlbDyA6iIITg1wd1I7Ub2xzMQe03eVjc3erFob_Xg_12dhL4JLrkXJGwQJgfzN-UqA4Zr1b0YgiiAaIxFKWc75_AtHmCiK9R8mVSqjrCQzjyfuQgtQypcRUfkHvS26k0Ps3Ov1J9V1qlvh9Sdhvc84vjLBxiNtvBXQInjIj-PgjMyVQ0HR4cyMgiSY5u4Hr9wMXN2gMvGQHlZBMER8VoOwuV5Z-EyhjIfYbkc1f9bfzdySJ9zoPh5e2xD0ZkxBc8zyFLbdor8f8HQ0JMyADh-6ydTW48b28gEcW3RzsUQ'}
Models
The Wallaroo MLOps API allows users to upload different types of ML models and frameworks into Wallaroo.
Upload Model to Workspace
- Endpoint: /v1/api/models/upload_and_convert
- Content-Type: multipart/form-data
Models uploaded through this method that are not Wallaroo Native Runtimes (ONNX, TensorFlow, and Python scripts) are containerized within the Wallaroo instance and then run by the Wallaroo engine. See Wallaroo MLOps API Essentials Guide: Pipeline Management for details on pipeline configurations and deployments.
Upload Model to Workspace Parameters
| Field | Type | Description |
|---|---|---|
| name | String (Required) | The model name. |
| visibility | String (Required) | Either public or private. |
| workspace_id | String (Required) | The numerical ID of the workspace to upload the model to. |
| conversion | String (Required) | The conversion parameters, which include the following: |
| conversion.framework | String (Required) | The framework of the model being uploaded. See the list of supported models for more details. |
| conversion.python_version | String (Required) | The version of Python required for the model. |
| conversion.requirements | String (Required) | Required libraries. Can be [] if the requirements are the default Wallaroo JupyterHub libraries. |
| input_schema | String (Optional) | The input schema in Apache Arrow pyarrow.lib.Schema format, encoded with base64.b64encode. Only required for non-native (containerized) Wallaroo runtime models. |
| output_schema | String (Optional) | The output schema in Apache Arrow pyarrow.lib.Schema format, encoded with base64.b64encode. Only required for non-native (containerized) Wallaroo runtime models. |
Files are uploaded in the multipart/form-data format with two parts:
- metadata: Contains the parameters listed above as application/json.
- file: The binary model file (ONNX, .zip, etc) as Content-Type application/octet-stream.
Upload Model to Workspace Returns
| Field | Type | Description |
|---|---|---|
| insert_models.returning | List[models] | The uploaded model's details. |
| id | Integer | The numerical id of the uploaded model version. |
Upload Model to Workspace Examples
The following example shows uploading an ONNX model to a Wallaroo instance. Note that the input_schema and output_schema encoded details are not required.
This example uses the current workspace id retrieved through the Wallaroo SDK. Modify this code block based on your Wallaroo Ops instance.
Upload the model via the Requests library.
workspace = wl.get_current_workspace()
workspace.id()
108
When using the requests library, the files data must be in the following format:
'file': (model_file_name, open(model_file_path, 'rb'), "application/octet-stream")
For example, if the file name is test.onnx, and the path is models/test.onnx, the command would be:
'file': ('test.onnx', open('models/test.onnx', 'rb'), "application/octet-stream")
Three model aspects to track:
- The model name: The name unique to the workspace; models uploaded to the same workspace with the same model name are saved as new model versions.
- The model file name: The file name for the specific model (onnx, zip, etc).
- The model file path: The path to the model file location.
# Retrieve the token
headers = wl.auth.auth_header()
endpoint = f"{wl.api_endpoint}/v1/api/models/upload_and_convert"
display(endpoint)
workspace_id = workspace.id()
framework='onnx'
model_name = "api-sample-model"
model_file_name = 'ccfraud.onnx'
model_file_path = './models/ccfraud.onnx'
metadata = {
"name": model_name,
"visibility": "public",
"workspace_id": workspace_id,
"conversion": {
"framework": framework,
"python_version": "3.8",
"requirements": []
}
}
files = {
"metadata": (None, json.dumps(metadata), "application/json"),
'file': (model_file_name, open(model_file_path, 'rb'), "application/octet-stream")
}
response = requests.post(endpoint, files=files, headers=headers).json()
display(f"Uploaded Model Name: {model_name}.")
display(f"Sample model file: ./models/ccfraud.onnx")
display(response)
'https://autoscale-uat-gcp.wallaroo.dev/v1/api/models/upload_and_convert'
'Uploaded Model Name: api-sample-model.'
'Sample model file: ./models/ccfraud.onnx'
{'insert_models': {'returning': [{'models': [{'id': 1215}]}]}}
model_version_id = response['insert_models']['returning'][0]['models'][0]['id']
model_version_id
1215
Upload ONNX model via curl.
metadata = {
"name": model_name,
"visibility": "public",
"workspace_id": workspace_id,
"conversion": {
"framework": framework,
"python_version": "3.8",
"requirements": []
}
}
# save metadata to a file
with open("data/onnx_file_upload.json", "w") as outfile:
json.dump(metadata, outfile)
!curl -H 'Authorization: {wl.auth.auth_header()["Authorization"]}' \
-F 'metadata={json.dumps(metadata)};type=application/json' \
-F 'file=@{model_file_path};type=application/octet-stream' \
{wl.api_endpoint}/v1/api/models/upload_and_convert
HTTP/1.1 202 Accepted
content-type: application/json
content-length: 58
date: Tue, 06 Jan 2026 16:48:12 GMT
x-envoy-upstream-service-time: 315
server: opscenter-https
{"insert_models":{"returning":[{"models":[{"id":1216}]}]}}
The following example shows uploading a PyTorch model to a Wallaroo instance. Note that the input_schema and output_schema encoded details are required.
Upload PyTorch via Requests.
input_schema = pa.schema([
pa.field('input_1', pa.list_(pa.float32(), list_size=10)),
pa.field('input_2', pa.list_(pa.float32(), list_size=5))
])
output_schema = pa.schema([
pa.field('output_1', pa.list_(pa.float32(), list_size=3)),
pa.field('output_2', pa.list_(pa.float32(), list_size=2))
])
encoded_input_schema = base64.b64encode(
bytes(input_schema.serialize())
).decode("utf8")
encoded_output_schema = base64.b64encode(
bytes(output_schema.serialize())
).decode("utf8")
framework = 'pytorch'
model_name = 'api-upload-pytorch-multi-io'
model_file_name = 'model-auto-conversion_pytorch_multi_io_model.pt'
model_file_path = './models/model-auto-conversion_pytorch_multi_io_model.pt'
metadata = {
"name": model_name,
"visibility": "private",
"workspace_id": workspace_id,
"conversion": {
"framework": framework,
"python_version": "3.8",
"requirements": []
},
"input_schema": encoded_input_schema,
"output_schema": encoded_output_schema,
}
headers = wl.auth.auth_header()
files = {
'metadata': (None, json.dumps(metadata), "application/json"),
'file': (model_file_name, open(model_file_path,'rb'),'application/octet-stream')
}
response = requests.post(endpoint, files=files, headers=headers).json()
display(f"Uploaded Model Name: {model_name}.")
display(f"Sample model file: ./models/model-auto-conversion_pytorch_multi_io_model.pt")
display(response)
'Uploaded Model Name: api-upload-pytorch-multi-io.'
'Sample model file: ./models/model-auto-conversion_pytorch_multi_io_model.pt'
{'insert_models': {'returning': [{'models': [{'id': 1217}]}]}}
Upload PyTorch via curl.
!curl -H 'Authorization: {wl.auth.auth_header()["Authorization"]}' \
-F 'metadata={json.dumps(metadata)};type=application/json' \
-F 'file=@{model_file_path};type=application/octet-stream' \
{wl.api_endpoint}/v1/api/models/upload_and_convert
{"insert_models":{"returning":[{"models":[{"id":1218}]}]}}
List Models in Workspace
- Endpoint: /v1/api/models/list
Returns a list of models added to a specific workspace.
List Models in Workspace Parameters
| Field | Type | Description |
|---|---|---|
| workspace_id | Integer (REQUIRED) | The numerical id of the workspace whose models to list. |
List Models in Workspace Returns
| Field | Type | Description |
|---|---|---|
| models | List[models] | List of models in the workspace. |
| id | Integer | The numerical id of the model. |
| owner_id | String | Identifier of the model owner. |
| created_at | String | DateTime of the model's creation. |
| updated_at | String | DateTime of the model's last update. |
List Models in Workspace Examples
Display the models for the workspace. This example uses the workspace_id retrieved earlier through the Wallaroo SDK. Adjust the script for your own use.
List models in workspace via Requests.
# Retrieve the token
headers = wl.auth.auth_header()
endpoint = f"{wl.api_endpoint}/v1/api/models/list"
data = {
"workspace_id": workspace_id
}
response = requests.post(endpoint,
json=data,
headers=headers,
verify=True).json()
display(response)
{'models': [{'model': {'id': 839,
'name': 'yolov8n-openvino',
'owner_id': '""',
'created_at': '2025-07-14T20:43:07.104895+00:00',
'updated_at': '2025-07-14T20:48:48.255132+00:00',
'workspace_id': 108},
'model_versions': [{'model_version': {'name': 'yolov8n-openvino',
'visibility': 'private',
'workspace_id': 108,
'conversion': {'arch': 'x86',
'accel': 'openvino',
'python_version': '3.8',
'requirements': [],
'framework': 'onnx',
'framework_config': None},
'id': 838,
'image_path': None,
'status': 'ready',
'task_id': None,
'file_info': {'version': '87ca2f41-5645-4040-82bc-6fc431f85ece',
'sha': '3ed5cd199e0e6e419bd3d474cf74f2e378aacbf586e40f24d1f8c89c2c476a08',
'file_name': 'yolov8n.onnx',
'size': 12823491},
'created_on_version': '2025.1.2',
'created_by': 'john.hummel@wallaroo.ai',
'created_at': '2025-07-14T20:48:48.255132+00:00',
'deployed': False,
'error_summary': None},
'config': {'id': 1259,
'model_version_id': 838,
'runtime': 'onnx',
'filter_threshold': None,
'tensor_fields': None,
'input_schema': None,
'output_schema': None,
'batch_config': None,
'dynamic_batching_config': None,
'continuous_batching_config': None,
'queue_depth': None,
'sidekick_uri': None,
'openai': None}},
{'model_version': {'name': 'yolov8n-openvino',
'visibility': 'private',
'workspace_id': 108,
'conversion': {'arch': 'x86',
'accel': 'none',
'python_version': '3.8',
'requirements': [],
'framework': 'onnx',
'framework_config': None},
'id': 837,
'image_path': None,
'status': 'ready',
'task_id': None,
'file_info': {'version': '36a26352-e497-46ed-a693-b01a0324da55',
'sha': '3ed5cd199e0e6e419bd3d474cf74f2e378aacbf586e40f24d1f8c89c2c476a08',
'file_name': 'yolov8n.onnx',
'size': 12823491},
'created_on_version': '2025.1.2',
'created_by': 'john.hummel@wallaroo.ai',
'created_at': '2025-07-14T20:43:07.104895+00:00',
'deployed': False,
'error_summary': None},
'config': {'id': 1257,
'model_version_id': 837,
'runtime': 'onnx',
'filter_threshold': None,
'tensor_fields': None,
'input_schema': None,
'output_schema': None,
'batch_config': None,
'dynamic_batching_config': None,
'continuous_batching_config': None,
'queue_depth': None,
'sidekick_uri': None,
'openai': None}}],
'workspace': {'id': 108,
'name': 'john.hummel@wallaroo.ai - Default Workspace',
'created_by': 'john.hummel@wallaroo.ai',
'archived': False,
'created_at': '2024-10-25T21:54:29.172083+00:00',
'group_id': None}},
...]}
List models in workspace via curl.
!curl {wl.api_endpoint}/v1/api/models/list \
-H "Authorization: {wl.auth.auth_header()['Authorization']}" \
-H "Content-Type: application/json" \
--data '{{"workspace_id": {workspace_id}}}'
{"models":[{"model":{"id":839,"name":"yolov8n-openvino","owner_id":"\"\"","created_at":"2025-07-14T20:43:07.104895+00:00","updated_at":"2025-07-14T20:48:48.255132+00:00","workspace_id":108},"model_versions":[{"model_version":{"name":"yolov8n-openvino","visibility":"private","workspace_id":108,"conversion":{"arch":"x86","accel":"openvino","python_version":"3.8","requirements":[],"framework":"onnx","framework_config":null},"id":838,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"87ca2f41-5645-4040-82bc-6fc431f85ece","sha":"3ed5cd199e0e6e419bd3d474cf74f2e378aacbf586e40f24d1f8c89c2c476a08","file_name":"yolov8n.onnx","size":12823491},"created_on_version":"2025.1.2","created_by":"john.hummel@wallaroo.ai","created_at":"2025-07-14T20:48:48.255132+00:00","deployed":false,"error_summary":null},"config":{"id":1259,"model_version_id":838,"runtime":"onnx","filter_threshold":null,"tensor_fields":null,"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"continuous_batching_config":null,"queue_depth":null,"sidekick_uri":null,"openai":null}},{"model_version":{"name":"yolov8n-openvino","visibility":"private","workspace_id":108,"conversion":{"arch":"x86","accel":"none","python_version":"3.8","requirements":[],"framework":"onnx","framework_config":null},"id":837,"image_path":null,"status":"ready","task_id":null,"file_info":{"version":"36a26352-e497-46ed-a693-b01a0324da55","sha":"3ed5cd199e0e6e419bd3d474cf74f2e378aacbf586e40f24d1f8c89c2c476a08","file_name":"yolov8n.onnx","size":12823491},"created_on_version":"2025.1.2","created_by":"john.hummel@wallaroo.ai","created_at":"2025-07-14T20:43:07.104895+00:00","deployed":false,"error_summary":null},"config":{"id":1257,"model_version_id":837,"runtime":"onnx","filter_threshold":null,"tensor_fields":null,"input_schema":null,"output_schema":null,"batch_config":null,"dynamic_batching_config":null,"continuous_batching_config":null,"queue_depth":null,"sidekick_uri":null,"openai":null}}],"workspace":{"id":108,"name":"john.hummel@wallaroo.ai - Default Workspace","created_by":"john.hummel@wallaroo.ai","archived":false,"created_at":"2024-10-25T21:54:29.172083+00:00","group_id":null}},...]}
Get Model Version Details
- Endpoint: /v1/api/models/get_version_by_id
Returns the details of a single model version, including its configuration.
Get Model Version Details Parameters
| Field | Type | Description |
|---|---|---|
| model_version_id | Integer (REQUIRED) | The numerical id of the model version. |
Get Model Version Details Examples
Submit the model version id returned from the Upload Model to Workspace step to retrieve its details.
# Retrieve the token
headers = wl.auth.auth_header()
endpoint = f"{wl.api_endpoint}/v1/api/models/get_version_by_id"
data = {
"model_version_id": model_version_id
}
response = requests.post(endpoint, json=data, headers=headers, verify=True).json()
display(response)
{'model_version': {'model_version': {'name': 'api-sample-model',
'visibility': 'public',
'workspace_id': 108,
'conversion': {'python_version': '3.8',
'requirements': [],
'framework': 'onnx',
'framework_config': None},
'id': 1215,
'image_path': None,
'status': 'attempting_load_container',
'task_id': '3f04e6b6-b0f2-41c9-8acf-6509e194ae80',
'file_info': {'version': 'b935b572-04d5-4071-a174-985be94c724d',
'sha': 'bc85ce596945f876256f41515c7501c399fd97ebcb9ab3dd41bf03f8937b4507',
'file_name': 'ccfraud.onnx',
'size': 1928},
'created_on_version': '2025.2.1',
'created_by': 'john.hummel@wallaroo.ai',
'created_at': '2026-01-06T16:34:30.953078+00:00',
'deployed': False,
'error_summary': None},
'config': {'id': 1788,
'model_version_id': 1215,
'runtime': 'onnx',
'filter_threshold': None,
'tensor_fields': None,
'input_schema': None,
'output_schema': None,
'batch_config': None,
'dynamic_batching_config': None,
'continuous_batching_config': None,
'queue_depth': None,
'sidekick_uri': None,
'openai': None}}}
Get Model Versions
- Endpoint: /v1/api/models/list_versions
Retrieves all versions of a model based on either the name of the model (model_id) or the models_pk_id.
Get Model Versions Parameters
| Field | Type | Description |
|---|---|---|
| model_id | String (REQUIRED) | The model name. |
| models_pk_id | Integer (REQUIRED) | The model’s numerical id. |
Get Model Versions Returns
| Field | Type | Description |
|---|---|---|
| Unnamed | List[models] | A list of model versions for the requested model. |
| sha | String | The sha hash of the model version. |
| models_pk_id | Integer | The pk id of the model. |
| model_version | String | The UUID identifier of the model version. |
| owner_id | String | The Keycloak user id of the model's owner. |
| model_id | String | The name of the model. |
| id | Integer | The integer id of the model version. |
| file_name | String | The filename used when uploading the model. |
| image_path | String | The image path of the model. |
| status | String | The status of the model version. |
Retrieve the versions for a previously uploaded model. This assumes models have already been uploaded to the workspace, as in the Upload Model to Workspace step above.
Retrieve model versions via Requests.
## List model versions
# Retrieve the token
headers = wl.auth.auth_header()
endpoint = f"{wl.api_endpoint}/v1/api/models/list_versions"
data = {
"model_id": "api-sample-model",
"models_pk_id": model_version_id
}
response = requests.post(endpoint, json=data, headers=headers, verify=True).json()
display(response)
[{'sha': 'bc85ce596945f876256f41515c7501c399fd97ebcb9ab3dd41bf03f8937b4507',
'models_pk_id': 1140,
'model_version': 'c1074d11-510c-4f72-8e39-43cd437cf827',
'owner_id': '13443b0a-de12-406b-a718-10bd26decce6',
'model_id': 'api-sample-model',
'id': 1137,
'file_name': 'keras_ccfraud.onnx',
'image_path': None,
'status': 'ready'},
{'sha': 'bc85ce596945f876256f41515c7501c399fd97ebcb9ab3dd41bf03f8937b4507',
'models_pk_id': 1218,
'model_version': '1d7191d4-5996-4750-b209-3a3a757bd4dd',
'owner_id': '7d603858-88e0-472e-8f71-e41094afd7ec',
'model_id': 'api-sample-model',
'id': 1216,
'file_name': 'ccfraud.onnx',
'image_path': None,
'status': 'attempting_load_container'},
{'sha': 'bc85ce596945f876256f41515c7501c399fd97ebcb9ab3dd41bf03f8937b4507',
'models_pk_id': 1218,
'model_version': 'b935b572-04d5-4071-a174-985be94c724d',
'owner_id': '7d603858-88e0-472e-8f71-e41094afd7ec',
'model_id': 'api-sample-model',
'id': 1215,
'file_name': 'ccfraud.onnx',
'image_path': None,
'status': 'attempting_load_container'},
{'sha': 'bc85ce596945f876256f41515c7501c399fd97ebcb9ab3dd41bf03f8937b4507',
'models_pk_id': 1215,
'model_version': 'fa08ab20-2dbf-431e-8ad0-90630ff21a5c',
'owner_id': '7d603858-88e0-472e-8f71-e41094afd7ec',
'model_id': 'ccfraud',
'id': 1212,
'file_name': 'ccfraud.onnx',
'image_path': None,
'status': 'attempting_load_container'}]
Retrieve model versions via curl.
!curl {wl.api_endpoint}/v1/api/models/list_versions \
-H "Authorization: {wl.auth.auth_header()['Authorization']}" \
-H "Content-Type: application/json" \
-d '{json.dumps(data)}'
[{"sha":"bc85ce596945f876256f41515c7501c399fd97ebcb9ab3dd41bf03f8937b4507","models_pk_id":1140,"model_version":"c1074d11-510c-4f72-8e39-43cd437cf827","owner_id":"13443b0a-de12-406b-a718-10bd26decce6","model_id":"api-sample-model","id":1137,"file_name":"keras_ccfraud.onnx","image_path":null,"status":"ready"},{"sha":"bc85ce596945f876256f41515c7501c399fd97ebcb9ab3dd41bf03f8937b4507","models_pk_id":1218,"model_version":"1d7191d4-5996-4750-b209-3a3a757bd4dd","owner_id":"7d603858-88e0-472e-8f71-e41094afd7ec","model_id":"api-sample-model","id":1216,"file_name":"ccfraud.onnx","image_path":null,"status":"attempting_load_container"},{"sha":"bc85ce596945f876256f41515c7501c399fd97ebcb9ab3dd41bf03f8937b4507","models_pk_id":1218,"model_version":"b935b572-04d5-4071-a174-985be94c724d","owner_id":"7d603858-88e0-472e-8f71-e41094afd7ec","model_id":"api-sample-model","id":1215,"file_name":"ccfraud.onnx","image_path":null,"status":"attempting_load_container"},{"sha":"bc85ce596945f876256f41515c7501c399fd97ebcb9ab3dd41bf03f8937b4507","models_pk_id":1215,"model_version":"fa08ab20-2dbf-431e-8ad0-90630ff21a5c","owner_id":"7d603858-88e0-472e-8f71-e41094afd7ec","model_id":"ccfraud","id":1212,"file_name":"ccfraud.onnx","image_path":null,"status":"attempting_load_container"}]
Get Model Configuration by Id
- Endpoint: /v1/api/models/get_config_by_id
Returns the model’s configuration details.
Get Model Configuration by Id Parameters
| Field | Type | Description |
|---|---|---|
| model_id | Integer (Required) | The numerical value of the model’s id. |
Get Model Configuration by Id Returns
| Field | Type | Description |
|---|---|---|
| model_config | Object | The model configuration details, including the configuration id, runtime, tensor_fields, and filter_threshold (see the examples below). |
Get Model Configuration by Id Examples
Submit the model version id returned from the Upload Model to Workspace step as the model_id value to retrieve its configuration details.
Retrieve model configuration via Requests.
## Get model config by id
# Retrieve the token
headers = wl.auth.auth_header()
endpoint = f"{wl.api_endpoint}/v1/api/models/get_config_by_id"
data = {
"model_id": model_version_id
}
response = requests.post(endpoint, json=data, headers=headers, verify=True).json()
response
{'model_config': {'id': 1788,
'runtime': 'onnx',
'tensor_fields': None,
'filter_threshold': None}}
Retrieve model configuration via curl.
!curl {wl.api_endpoint}/v1/api/models/get_config_by_id \
-H "Authorization: {wl.auth.auth_header()['Authorization']}" \
-H "Content-Type: application/json" \
-d '{json.dumps(data)}'
{"model_config":{"id":1788,"runtime":"onnx","tensor_fields":null,"filter_threshold":null}}