AI Workloads on ARM: Large Language Model Hugging Face Summarizer Tutorial
This tutorial and the assets can be downloaded as part of the Wallaroo Tutorials repository.
Run Anywhere for ARM Architecture Tutorial: Hugging Face Summarization Model
Wallaroo Run Anywhere provides model deployment on any device, in any cloud, and on any architecture. Models uploaded to Wallaroo are set to their target architecture.
Organizations can deploy uploaded models to clusters that have nodes with the provisioned architecture. The following architectures are supported:
- X86: The standard X86 architecture.
- ARM: For more details on cloud providers and their ARM offerings, see Create ARM Nodepools for Kubernetes Clusters.
Model Architecture Inheritance
The model’s deployment configuration inherits its architecture. Models automatically deploy in the target architecture provided nodepools with the architecture are available. For information on setting up nodepools with specific architectures, see Infrastructure Configuration Guides.
That deployment configuration is carried over to the model’s publication in an Open Container Initiative (OCI) Registry, which allows edge model deployments on X64 and ARM architectures. More details on deploying models on edge devices are available in the Wallaroo Run Anywhere Guides.
The deployment configuration can be overridden for model deployment either in the Wallaroo Ops instance or on edge devices.
This tutorial demonstrates deploying a Hugging Face summarization model to ARM edge locations through the following steps.
- Upload a model with the architecture set to ARM.
- Create a pipeline with the uploaded model as a model step.
- Publish the pipeline to an Open Container Initiative (OCI) Registry for both X64 and ARM deployments.
Goal
Demonstrate publishing a pipeline with model steps to various architectures.
Resources
This tutorial provides the following:
- Models:
  - models/model-auto-conversion_hugging-face_complex-pipelines_hf-summarisation-bart-large-samsun.zip: This model should be downloaded and placed into the ./models folder before beginning this demonstration. Download: model-auto-conversion_hugging-face_complex-pipelines_hf-summarisation-bart-large-samsun.zip (1.4 GB)
Prerequisites
- A deployed Wallaroo instance with Edge Registry Services and Edge Observability enabled.
- The following Python libraries installed:
- An ARM Docker deployment to deploy the model on an edge location.
Steps
- Upload the model with the targeted architecture set to ARM.
- Create the pipeline and add the model as a model step.
- Deploy the model in the targeted architecture and perform sample inferences.
- Publish the pipeline to an OCI registry.
- Deploy the model from the pipeline publish to the edge deployment with ARM architecture.
- Perform sample inferences on the ARM edge model deployment.
Import Libraries
The first step is to import our libraries and set the variables used throughout this tutorial.
import wallaroo
from wallaroo.object import EntityNotFoundError
from wallaroo.framework import Framework
from wallaroo.engine_config import Architecture
import pyarrow as pa
import datetime
import time
# used to display DataFrame information without truncating
from IPython.display import display
import pandas as pd
pd.set_option('display.max_colwidth', None)
pd.set_option('display.max_columns', None)
# ignoring warnings for demonstration
import warnings
warnings.filterwarnings('ignore')
workspace_name = 'run-anywhere-architecture-hf-summarizer-demonstration-tutorial'
arm_pipeline_name = 'architecture-demonstration-arm'
model_name_arm = 'hf-summarizer-arm'
model_file_name = './models/hf_summarization.zip'
Connect to the Wallaroo Instance
The first step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.
This is accomplished using the wallaroo.Client()
command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection into a variable that can be referenced later.
If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client()
. For more information on Wallaroo Client settings, see the Client Connection guide.
# Login through local Wallaroo instance
wl = wallaroo.Client()
Create Workspace
We will create a workspace to manage our pipeline and models. The following variables will set the name of our sample workspace then set it as the current workspace.
Workspace, pipeline, and model names should be unique to each user, so we’ll add a randomly generated suffix so multiple people can run this tutorial in a Wallaroo instance without affecting each other.
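One way to generate such a suffix is sketched below. The `name_suffix` helper is hypothetical and not part of the tutorial's own code, which uses a fixed workspace name; adapt as needed.

```python
import random
import string

# Hypothetical helper (not from the tutorial's code): generate a short random
# lowercase suffix so names do not collide between users of the same instance.
def name_suffix(length: int = 4) -> str:
    return ''.join(random.choices(string.ascii_lowercase, k=length))

workspace_name = f'run-anywhere-architecture-hf-summarizer-{name_suffix()}'
print(workspace_name)
```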
workspace = wl.get_workspace(name=workspace_name, create_if_not_exist=True)
wl.set_current_workspace(workspace)
{'name': 'run-anywhere-architecture-hf-summarizer-demonstration-tutorial', 'id': 56, 'archived': False, 'created_by': 'eed2002f-769f-4cbd-a189-8ca1e9bf496c', 'created_at': '2024-04-22T15:06:57.981462+00:00', 'models': [], 'pipelines': []}
Upload Models and Set Target Architecture to ARM
For our example, we will upload the Hugging Face Summarizer model. The model file is ./models/model-auto-conversion_hugging-face_complex-pipelines_hf-summarisation-bart-large-samsun.zip, and it is uploaded with the name hf-summarizer-arm.
Models are uploaded to Wallaroo via the wallaroo.client.upload_model
method which takes the following arguments:
Parameter | Type | Description |
---|---|---|
path | String (Required) | The file path to the model. |
framework | wallaroo.framework.Framework (Required) | The model’s framework. See Wallaroo SDK Essentials Guide: Model Uploads and Registrations for supported model frameworks. |
input_schema | pyarrow.lib.Schema (Optional) | The model’s input schema. Only required for non-Native Wallaroo frameworks. See Wallaroo SDK Essentials Guide: Model Uploads and Registrations for more details. |
output_schema | pyarrow.lib.Schema (Optional) | The model’s output schema. Only required for non-Native Wallaroo frameworks. See Wallaroo SDK Essentials Guide: Model Uploads and Registrations for more details. |
convert_wait | bool (Optional) | Whether to wait in the SDK session for the auto-packaging process to complete for non-native Wallaroo frameworks. |
arch | wallaroo.engine_config.Architecture (Optional) | The targeted architecture for the model. Options are X86 (the default) and ARM. |
Verify the ML model is downloaded from model-auto-conversion_hugging-face_complex-pipelines_hf-summarisation-bart-large-samsun.zip (1.4 GB) and placed into the ./models
directory.
For this example, the arch setting is set to ARM.
input_schema = pa.schema([
pa.field('inputs', pa.string()),
pa.field('return_text', pa.bool_()),
pa.field('return_tensors', pa.bool_()),
pa.field('clean_up_tokenization_spaces', pa.bool_()),
# pa.field('generate_kwargs', pa.map_(pa.string(), pa.null())), # dictionaries are not currently supported by the engine
])
output_schema = pa.schema([
pa.field('summary_text', pa.string()),
])
model_name_arm = f'hf-summarizer-arm'
model_file_name = './models/hf_summarization.zip'
model_arm = wl.upload_model(model_name_arm,
model_file_name,
framework=wallaroo.framework.Framework.HUGGING_FACE_SUMMARIZATION,
input_schema=input_schema,
output_schema=output_schema,
arch=Architecture.ARM
)
Waiting for model loading - this will take up to 10.0min.
Model is pending loading to a container runtime..
Model is attempting loading to a container runtime......................successful
Ready
display(model_arm)
Name | hf-summarizer-arm |
Version | e0ba943b-1c8b-4294-b941-0829c0bad71c |
File Name | hf_summarization.zip |
SHA | ee71d066a83708e7ca4a3c07caf33fdc528bb000039b6ca2ef77fa2428dc6268 |
Status | ready |
Image Path | proxy.replicated.com/proxy/wallaroo/ghcr.io/wallaroolabs/mac-deploy:v2024.1.0-main-4963 |
Architecture | arm |
Acceleration | none |
Updated At | 2024-22-Apr 15:11:38 |
Build Pipeline
We build the pipeline with the wallaroo.client.build_pipeline(pipeline_name) command, and set the model as a model step in the pipeline.
pipeline_arm = wl.build_pipeline('architecture-demonstration-arm')
_ = pipeline_arm.add_model_step(model_arm)
Deploy Pipeline
For the pipeline deployment example, we specify the number of CPUs and the memory. Because the Hugging Face model is deployed to the Wallaroo Containerized Runtime, we specify its settings via the sidekick_cpus and sidekick_memory settings. Note that the architecture is not specified, as it is inherited from the model. Once deployed, we show the pipeline’s settings and confirm the arch setting is arm, as inherited from the model.
from wallaroo.deployment_config import DeploymentConfigBuilder
deployment_config = DeploymentConfigBuilder() \
.cpus(0.25).memory('1Gi') \
.sidekick_cpus(model_arm, 4) \
.sidekick_memory(model_arm, "8Gi") \
.build()
pipeline_arm.deploy(deployment_config=deployment_config)
display(pipeline_arm)
name | architecture-demonstration-arm |
---|---|
created | 2024-04-22 15:11:39.028390+00:00 |
last_updated | 2024-04-22 15:13:12.770301+00:00 |
deployed | True |
arch | arm |
accel | none |
tags | |
versions | 792a6a0c-ef1f-4d6d-ab31-12e9b740a1ef, 660ecf06-b10b-4ee5-a398-134241fb4d56, ff211455-8516-4b65-a3aa-a3946eafe612, 217ac602-e275-4381-b551-8f96a6ff190e, 027136e0-d8dc-4ed3-8e77-69e66ea65b94, c5b0861c-42e1-491d-b90a-970887ae0c02 |
steps | hf-summarizer-arm |
published | False |
pipeline_arm.undeploy()
Waiting for undeployment - this will take up to 45s .................................... ok
name | architecture-demonstration-arm |
---|---|
created | 2024-04-22 15:11:39.028390+00:00 |
last_updated | 2024-04-22 15:13:12.770301+00:00 |
deployed | False |
arch | arm |
accel | none |
tags | |
versions | 792a6a0c-ef1f-4d6d-ab31-12e9b740a1ef, 660ecf06-b10b-4ee5-a398-134241fb4d56, ff211455-8516-4b65-a3aa-a3946eafe612, 217ac602-e275-4381-b551-8f96a6ff190e, 027136e0-d8dc-4ed3-8e77-69e66ea65b94, c5b0861c-42e1-491d-b90a-970887ae0c02 |
steps | hf-summarizer-arm |
published | False |
Pipeline Publish for ARM Architecture via the Wallaroo SDK
We now publish our pipeline to the OCI registry.
Publish Pipeline for ARM
Publishing the pipeline uses the pipeline wallaroo.pipeline.Pipeline.publish()
command. This requires that the Wallaroo Ops instance have Edge Registry Services enabled.
When publishing, we specify the pipeline deployment configuration through the wallaroo.DeploymentConfigBuilder. As with the deployment, we do not specify the architecture, as it is inherited from the model.
The following publishes the pipeline to the OCI registry and displays the container details. For more information, see Wallaroo SDK Essentials Guide: Pipeline Edge Publication.
# default deployment configuration
assay_pub_arm = pipeline_arm.publish(deployment_config=wallaroo.DeploymentConfigBuilder().build())
assay_pub_arm
Waiting for pipeline publish... It may take up to 600 sec.
Pipeline is publishing................. Published.
ID | 7 | |
Pipeline Name | architecture-demonstration-arm | |
Pipeline Version | 1fde089a-509a-4908-95b2-3a3e5694b91e | |
Status | Published | |
Engine URL | sample.registry.example.com/uat/engines/proxy/wallaroo/ghcr.io/wallaroolabs/fitzroy-mini-aarch64:v2024.1.0-main-4963 | |
Pipeline URL | sample.registry.example.com/uat/pipelines/architecture-demonstration-arm:1fde089a-509a-4908-95b2-3a3e5694b91e | |
Helm Chart URL | oci://sample.registry.example.com/uat/charts/architecture-demonstration-arm | |
Helm Chart Reference | sample.registry.example.com/uat/charts@sha256:e97c8f5ce871bcd1bcddc449c1b830515c41663a9a2dd4c98bf2ab655f92f5ba | |
Helm Chart Version | 0.0.1-1fde089a-509a-4908-95b2-3a3e5694b91e | |
Engine Config | {'engine': {'resources': {'limits': {'cpu': 1.0, 'memory': '512Mi'}, 'requests': {'cpu': 1.0, 'memory': '512Mi'}, 'accel': 'none', 'arch': 'arm', 'gpu': False}}, 'engineAux': {'autoscale': {'type': 'none'}, 'images': {}}} | |
User Images | [] | |
Created By | john.hummel@wallaroo.ai | |
Created At | 2024-04-22 15:17:05.578374+00:00 | |
Updated At | 2024-04-22 15:17:05.578374+00:00 | |
Replaces | ||
Docker Run Command |
Note: Please set the EDGE_PORT , OCI_USERNAME , and OCI_PASSWORD environment variables. | |
Helm Install Command |
Note: Please set the HELM_INSTALL_NAME , HELM_INSTALL_NAMESPACE ,
OCI_USERNAME , and OCI_PASSWORD environment variables. |
For details on performing inference requests through an edge deployed model, see Edge Deployment Endpoints.
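As a rough illustration, the sketch below builds a pandas-records JSON payload for an edge inference request against the published pipeline. The host, port, and endpoint path shown are assumptions to verify against your own publish output and the Edge Deployment Endpoints guide; only the payload construction runs locally, and the actual HTTP request is left commented out.

```python
import json

# Hypothetical edge host and port; substitute your deployment's values.
EDGE_HOST = 'localhost'
EDGE_PORT = 8080
PIPELINE_NAME = 'architecture-demonstration-arm'

# pandas-records payload matching the summarizer's input schema.
payload = json.dumps([{
    'inputs': 'Wallaroo supports deploying models to ARM edge devices.',
    'return_text': True,
    'return_tensors': False,
    'clean_up_tokenization_spaces': False,
}])

# Assumed endpoint shape; confirm against your publish output.
url = f'http://{EDGE_HOST}:{EDGE_PORT}/pipelines/{PIPELINE_NAME}'
print(url)
# The request could then be sent with, for example:
# requests.post(url, data=payload,
#               headers={'Content-Type': 'application/json; format=pandas-records'})
```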