Deploy on GPU
The following procedure demonstrates how to upload and deploy an LLM with GPUs. Most of these are Hugging Face LLMs packaged as Wallaroo BYOP framework models.
These upload and deploy instructions have been tested with and apply to the following LLM models:
- Llama
- IBM-Granite
For access to these sample models and a demonstration on using LLMs with Wallaroo:
- Contact your Wallaroo Support Representative OR
- Schedule Your Wallaroo.AI Demo Today
Upload the LLM Model
LLM models are uploaded to Wallaroo via one of two methods:
- The Wallaroo SDK wallaroo.client.Client.upload_model method.
- The Wallaroo MLOps API POST /v1/api/models/upload_and_convert endpoint.
Upload LLM via the Wallaroo SDK
Models are uploaded with the Wallaroo SDK via the wallaroo.client.Client.upload_model method.
SDK Upload Model Parameters
wallaroo.client.Client.upload_model has the following parameters.
Parameter | Type | Description |
---|---|---|
name | string (Required) | The name of the model. Model names are unique per workspace. Models that are uploaded with the same name are assigned as a new version of the model. |
path | string (Required) | The path to the model file being uploaded. |
framework | string (Required) | The framework of the model from wallaroo.framework. |
input_schema | pyarrow.lib.Schema | The input schema in Apache Arrow schema format. |
output_schema | pyarrow.lib.Schema | The output schema in Apache Arrow schema format. |
convert_wait | bool (Optional) | When True (the default), the method waits until the model conversion completes before returning. When False, the method returns immediately while the conversion continues in the background. |
arch | wallaroo.engine_config.Architecture (Optional) | The architecture the model is deployed to. If a model is intended for deployment to an ARM architecture, it must be specified during this step. Values include X86 (the default) and ARM. |
accel | wallaroo.engine_config.Acceleration (Optional) | The AI hardware accelerator used. If a model is intended for use with a hardware accelerator (for example, CUDA for NVIDIA GPUs), it should be assigned at this step. |
SDK Upload Model Returns
wallaroo.client.Client.upload_model returns the model version. The model version refers to the version of the model object in Wallaroo. In Wallaroo, a model version update happens when we upload a new model file (artifact) against the same model object name.
SDK Upload Model Example
The following example demonstrates uploading an LLM using the Wallaroo SDK.
import wallaroo

# connect to Wallaroo
wl = wallaroo.Client()

# upload the model
model = wl.upload_model(
    name = model_name,
    path = file_path,
    input_schema = input_schema,
    output_schema = output_schema,
    framework = framework
)
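For GPU deployments, the optional arch and accel parameters described above can be set during the same upload call. The following is a minimal sketch under the assumption that the LLM is a BYOP package (llama_byop.zip, as in the MLOps API example below) and that the NVIDIA CUDA accelerator value from wallaroo.engine_config.Acceleration applies to the target hardware; adjust the names and values for your environment.

import wallaroo
import pyarrow as pa
from wallaroo.framework import Framework
from wallaroo.engine_config import Architecture, Acceleration

# connect to Wallaroo
wl = wallaroo.Client()

# input and output schemas for a text-generation LLM
input_schema = pa.schema([
    pa.field("text", pa.string())
])
output_schema = pa.schema([
    pa.field("generated_text", pa.string())
])

# upload the BYOP LLM, targeting x86 hosts with NVIDIA GPU acceleration
# (the file name and accelerator value are illustrative assumptions)
model = wl.upload_model(
    name = "sample-llm",
    path = "llama_byop.zip",
    framework = Framework.CUSTOM,
    input_schema = input_schema,
    output_schema = output_schema,
    arch = Architecture.X86,
    accel = Acceleration.CUDA
)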
Upload LLM via the Wallaroo MLOps API
The method wallaroo.client.Client.generate_upload_model_api_command generates a curl script for uploading models to Wallaroo via the Wallaroo MLOps API. The generated curl script is based on the Wallaroo SDK user’s current workspace. This is useful for environments that do not have the Wallaroo SDK installed, or for uploading very large models (10 gigabytes or more).
The command assumes that other upload parameters are set to default. For details on uploading models via the Wallaroo MLOps API, see Wallaroo MLOps API Essentials Guide: Model Upload and Registrations.
This method takes the following parameters:
Parameter | Type | Description |
---|---|---|
base_url | String (Required) | The Wallaroo domain name. For example: wallaroo.example.com . |
name | String (Required) | The name to assign the model at upload. This must match DNS naming conventions. |
path | String (Required) | Path to the ML or LLM model file. |
framework | String (Required) | The framework from wallaroo.framework.Framework. For a complete list, see Wallaroo Supported Models. |
input_schema | String (Required) | The model’s input schema in PyArrow.Schema format. |
output_schema | String (Required) | The model’s output schema in PyArrow.Schema format. |
This outputs a curl command in the following format (indentations added for emphasis). The sections marked with {} represent the variable names that are injected into the script from the above parameters or from the current SDK session:

{Current Workspace['id']}
: The value of the id for the current workspace.

{Bearer Token}
: The bearer token used to authenticate to the Wallaroo MLOps API.
curl --progress-bar -X POST \
    -H "Content-Type: multipart/form-data" \
    -H "Authorization: Bearer {Bearer Token}" \
    -F "metadata={"name": {name}, "visibility": "private", "workspace_id": {Current Workspace['id']}, "conversion": {"arch": "x86", "accel": "none", "framework": "custom", "python_version": "3.8", "requirements": []}, \
    "input_schema": "{base64 version of input_schema}", \
    "output_schema": "{base64 version of the output_schema}"};type=application/json" \
    -F "file=@{path};type=application/octet-stream" \
    https://{base_url}/v1/api/models/upload_and_convert
Once generated, users can use the script to upload the model via the Wallaroo MLOps API.
The following example shows setting the parameters above and generating the model upload API command.
import wallaroo
import pyarrow as pa
from wallaroo.framework import Framework

# connect to Wallaroo
wl = wallaroo.Client()

# set the input and output schemas
input_schema = pa.schema([
    pa.field("text", pa.string())
])
output_schema = pa.schema([
    pa.field("generated_text", pa.string())
])

# generate the model upload API command
wl.generate_upload_model_api_command(
    base_url='https://example.wallaroo.ai/',
    name='sample_model_name',
    path='llama_byop.zip',
    framework=Framework.CUSTOM,
    input_schema=input_schema,
    output_schema=output_schema)
The output of this command is:
curl --progress-bar -X POST -H "Content-Type: multipart/form-data" -H "Authorization: Bearer abc123" -F "metadata={"name": "sample_model_name", "visibility": "private", "workspace_id": 20, "conversion": {"arch": "x86", "accel": "none", "framework": "custom", "python_version": "3.8", "requirements": []}, "input_schema": "/////3AAAAAQAAAAAAAKAAwABgAFAAgACgAAAAABBAAMAAAACAAIAAAABAAIAAAABAAAAAEAAAAUAAAAEAAUAAgABgAHAAwAAAAQABAAAAAAAAEFEAAAABwAAAAEAAAAAAAAAAQAAAB0ZXh0AAAAAAQABAAEAAAA", "output_schema": "/////3gAAAAQAAAAAAAKAAwABgAFAAgACgAAAAABBAAMAAAACAAIAAAABAAIAAAABAAAAAEAAAAUAAAAEAAUAAgABgAHAAwAAAAQABAAAAAAAAEFEAAAACQAAAAEAAAAAAAAAA4AAABnZW5lcmF0ZWRfdGV4dAAABAAEAAQAAAA="};type=application/json" -F "file=@llama_byop.zip;type=application/octet-stream" https://example.wallaroo.ai/v1/api/models/upload_and_convert
LLM Deploy
LLMs are deployed via the Wallaroo SDK through the following process:
- After the model is uploaded, get the LLM model reference from Wallaroo.
- Create or use an existing Wallaroo pipeline and assign the LLM as a pipeline model step.
- Set the deployment configuration to assign resources to the LLM deployment, including the number of CPUs, the amount of RAM, etc.
- Deploy the LLM with the deployment configuration.
Retrieve LLM
LLMs previously uploaded to Wallaroo can be retrieved without re-uploading the LLM via the Wallaroo SDK method wallaroo.client.Client.get_model(name: String, version: String), which takes the following parameters:

name
: The name of the model.

version
: (Optional) The version of the model to retrieve. If no version is specified, the most recent version is returned.

The method wallaroo.client.Client.get_model retrieves the most recent model version in the current workspace that matches the provided model name unless a specific version is requested. For more details on managing ML models in Wallaroo, see Manage Models.
The following demonstrates retrieving an uploaded LLM and storing it in the variable llm_model.
import wallaroo
# connect with the Wallaroo client
wl = wallaroo.Client()
llm_model = wl.get_model(name=model_name)
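Where a deployment must pin to a specific model version rather than the most recent one, the version parameter from the signature above can be supplied as well. A minimal sketch; the version string shown is a hypothetical placeholder:

# retrieve a specific model version instead of the most recent one
# (the version string below is a hypothetical placeholder)
llm_model = wl.get_model(name=model_name, version="7dbae7b4-20d0-40f7-a3f5-eeabdd77f418")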
Create the Wallaroo Pipeline and Add Model Step
LLMs are deployed via Wallaroo pipelines. Wallaroo pipelines are created in the current user’s workspace with the Wallaroo SDK method wallaroo.client.Client.build_pipeline(pipeline_name: String). This creates a pipeline in the user’s current workspace with the provided pipeline_name, and returns a wallaroo.pipeline.Pipeline object, which can be saved to a variable for other commands.

Pipeline names are unique within a workspace; using the build_pipeline method within a workspace where another pipeline with the same name exists will connect to the existing pipeline.

Once the pipeline reference is stored to a variable, LLMs are added to the pipeline as a pipeline step with the method wallaroo.pipeline.Pipeline.add_model_step(model_version: wallaroo.model_version.ModelVersion). We demonstrated retrieving the LLM model version in the Retrieve LLM step above.
This example demonstrates creating a pipeline and adding a model version as a pipeline step. For more details on managing Wallaroo pipelines for model deployment, see the Model Deploy guide.
# create the pipeline
llm_pipeline = wl.build_pipeline('sample-llm-pipeline')
# add the LLM as a pipeline model step
llm_pipeline.add_model_step(llm_model)
Set the Deployment Configuration and Deploy the Model
Before deploying the LLM, a deployment configuration is created. This sets how the cluster’s resources are allocated for the LLM’s exclusive use.
- Pipeline deployment configurations are created through the wallaroo.deployment_config.DeploymentConfigBuilder() class.
- Various options, including the number of CPUs, RAM, and other resources, are set for the Wallaroo Native Runtime and the Wallaroo Containerized Runtime.
- Typically, LLMs are deployed in the Wallaroo Containerized Runtime, which is configured through the DeploymentConfigBuilder’s sidekick options.
LLMs deployed with GPUs must include the following parameters:
sidekick_gpus(model: wallaroo.model.Model, core_count: int)
: Sets the number of GPUs allocated to the LLM.

deployment_label(label: string)
: The deployment label that matches the nodepool with the GPU nodes. This ensures that the LLM is deployed in the correct nodepool with the required hardware. For examples on setting up a nodepool with GPUs for LLM deployment, see Large Language Models Infrastructure Requirements.
Once the configuration options are set, the deployment configuration is finalized with the wallaroo.deployment_config.DeploymentConfigBuilder().build() method.
The following options are available for deployment configurations for LLM deployments. For more details on deployment configurations, see Deployment Configuration guide.
Method | Parameters | Description |
---|---|---|
replica_count | (count: int) | The number of replicas to deploy. This allows multiple deployments of the same models to be deployed to increase inferences through parallelization. |
replica_autoscale_min_max | (maximum: int, minimum: int = 0) | Allows replicas to be scaled from the minimum (default 0) up to the maximum number of replicas. This allows deployments to spin up additional replicas as more resources are required, then spin them back down to save on resources and costs. |
autoscale_cpu_utilization | (cpu_utilization_percentage: int) | Sets the average CPU percentage metric for when to load or unload another replica. |
cpus | (core_count: float) | Sets the number or fraction of CPUs to use for the deployment, for example: 0.25, 1, 1.5, etc. The units are similar to the Kubernetes CPU definitions. |
gpus | (core_count: int) | Sets the number of GPUs to allocate for native runtimes. GPUs are only allocated in whole units, not as fractions. Organizations should be aware of the total number of GPUs available to the cluster, and monitor which deployment configurations have GPUs allocated to ensure they do not run out. If there are not enough GPUs to allocate to a deployment configuration, an error message is returned during deployment. If gpus is called, then deployment_label must also be called and match the GPU nodepool for the Kubernetes cluster hosting the Wallaroo instance. |
memory | (memory_spec: str) | Sets the amount of RAM to allocate to the deployment. The memory_spec string is in the format “{size as number}{unit value}”. The accepted unit values follow the Kubernetes memory resource units, for example “Gi” for gibibytes (e.g. “2Gi”). |
deployment_label | (label: string) | Label used to match the nodepool label used for the deployment. Required if gpus are set and must match the GPU nodepool label. See Create GPU Nodepools for Kubernetes Clusters for details on setting up GPU nodepools for Wallaroo. |
sidekick_cpus | (model: wallaroo.model.Model, core_count: float) | Sets the number of CPUs to be used for the model’s sidekick container. Only affects image-based models (e.g. MLFlow models) in a deployment. The model parameter is the model to apply the resources to; core_count is the number of CPU cores to use. |
sidekick_memory | (model: wallaroo.model.Model, memory_spec: str) | Sets the memory available to the model’s sidekick container. The model parameter is the model to apply the resources to; memory_spec is the amount of memory in the same format as the memory method. |
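The following sketch combines several of these options into an autoscaling GPU deployment configuration. It assumes the llm_model and deployment_label variables from the surrounding examples; the replica counts and resource amounts are placeholder values to adjust for the target cluster.

from wallaroo.deployment_config import DeploymentConfigBuilder

# scale between 1 and 3 replicas, adding a replica when the average CPU
# utilization across replicas exceeds 75%
autoscale_config = DeploymentConfigBuilder() \
    .replica_autoscale_min_max(minimum=1, maximum=3) \
    .autoscale_cpu_utilization(75) \
    .cpus(0.5).memory('2Gi') \
    .sidekick_cpus(llm_model, 4) \
    .sidekick_memory(llm_model, '40Gi') \
    .sidekick_gpus(llm_model, 1) \
    .deployment_label(deployment_label) \
    .build()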
Once the deployment configuration is set, the LLM is deployed via the wallaroo.pipeline.Pipeline.deploy(deployment_config: Optional[wallaroo.deployment_config.DeploymentConfig]) method. This allocates resources from the cluster for the LLM’s deployment based on the DeploymentConfig settings. If the resources set in the deployment configuration are not available at deployment, an error is returned.
The following example shows setting the deployment configuration for an LLM deployed on x86 architecture with a single GPU, then deploying a pipeline with this deployment configuration.
from wallaroo.deployment_config import DeploymentConfigBuilder

# set the deployment config with the following:
# Wallaroo Native Runtime: 0.5 cpu, 2 Gi RAM
# Wallaroo Containerized Runtime where the LLM is deployed: 2 CPUs, 40 Gi RAM, and 1 GPU
# deployment_label matches the label of the GPU nodepool
deployment_config = DeploymentConfigBuilder() \
    .cpus(0.5).memory('2Gi') \
    .sidekick_cpus(llm_model, 2) \
    .sidekick_memory(llm_model, '40Gi') \
    .sidekick_gpus(llm_model, 1) \
    .deployment_label(deployment_label) \
    .build()

llm_pipeline.deploy(deployment_config)
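After deploy returns, it can be useful to confirm that the requested GPU resources were allocated before sending inference requests. A minimal sketch, assuming the pipeline status method available through the Wallaroo SDK:

# check that the deployment reached the Running state before inferencing;
# if the GPU resources could not be allocated, the status reports the error
status = llm_pipeline.status()
print(status['status'])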