CLIP ViT-B/32 Transformer Demonstration with Wallaroo
This tutorial and the assets can be downloaded as part of the Wallaroo Tutorials repository.
The following tutorial demonstrates deploying and performing sample inferences with the Hugging Face CLIP ViT-B/32 Transformer model.
Prerequisites
This tutorial requires Wallaroo version 2023.2.1 or above. The model file clip-vit-base-patch-32.zip must be downloaded and placed in the ./models directory. It is available from the following URL:
https://storage.googleapis.com/wallaroo-public-data/hf-clip-vit-b32/clip-vit-base-patch-32.zip
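A minimal download sketch, assuming network access to the public bucket (the target path follows this tutorial's ./models convention):
import os
import requests
model_url = "https://storage.googleapis.com/wallaroo-public-data/hf-clip-vit-b32/clip-vit-base-patch-32.zip"
os.makedirs("./models", exist_ok=True)
# stream the archive to disk rather than holding it all in memory
with requests.get(model_url, stream=True) as response:
    response.raise_for_status()
    with open("./models/clip-vit-base-patch-32.zip", "wb") as f:
        for chunk in response.iter_content(chunk_size=1 << 20):
            f.write(chunk)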
If performing this tutorial from outside the Wallaroo JupyterHub environment, install the Wallaroo SDK.
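The SDK is published on PyPI; a minimal install, assuming pip and network access (run from a notebook cell, or drop the leading ! in a terminal):
!pip install wallaroo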
Steps
Imports
The first step is to import the libraries used for the example.
import json
import os
import requests
import wallaroo
from wallaroo.pipeline import Pipeline
from wallaroo.deployment_config import DeploymentConfigBuilder
from wallaroo.framework import Framework
from wallaroo.object import EntityNotFoundError
import pyarrow as pa
import numpy as np
import pandas as pd
from PIL import Image
Connect to the Wallaroo Instance
The first step is to connect to Wallaroo through the Wallaroo client. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.
This is accomplished using the wallaroo.Client() command, which provides a URL to grant the SDK permission to your specific Wallaroo environment. When displayed, enter the URL into a browser and confirm permissions. Store the connection in a variable that can be referenced later.
If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client(). For more information on Wallaroo Client settings, see the Client Connection guide.
wl = wallaroo.Client()
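When connecting from outside the JupyterHub environment, the instance address is passed explicitly. A hedged sketch, assuming the SDK's api_endpoint and auth_type parameters and a placeholder URL:
# connecting from an external environment; the URL is a placeholder for your instance
wl = wallaroo.Client(api_endpoint="https://wallaroo.example.com",
                     auth_type="sso")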
Set Workspace and Pipeline
The next step is to create the Wallaroo workspace and pipeline used for the inference requests.
# create the workspace and pipeline
workspace_name = 'clip-demo'
pipeline_name = 'clip-demo'
workspace = wl.get_workspace(name=workspace_name, create_if_not_exist=True)
wl.set_current_workspace(workspace)
display(wl.get_current_workspace())
pipeline = wl.build_pipeline(pipeline_name)
pipeline
{'name': 'clip-demo', 'id': 17, 'archived': False, 'created_by': '65124b18-8382-49af-b3c8-ada3b9df3330', 'created_at': '2024-04-16T19:48:49.513409+00:00', 'models': [], 'pipelines': []}
name | clip-demo |
---|---|
created | 2024-04-16 19:48:50.468599+00:00 |
last_updated | 2024-04-16 19:48:50.468599+00:00 |
deployed | (none) |
arch | None |
accel | None |
tags | |
versions | 4f40df1a-3f57-43cd-98d6-48a30e96d365 |
steps | |
published | False |
Configure and Upload Model
The 🤗 Hugging Face model is uploaded to Wallaroo by defining the input and output schemas and specifying the model's framework as wallaroo.framework.Framework.HUGGING_FACE_ZERO_SHOT_IMAGE_CLASSIFICATION.
The data schemas are defined in the Apache PyArrow Schema format.
The model is converted to the Wallaroo Containerized Runtime after the upload is complete.
input_schema = pa.schema([
    pa.field('inputs', # required, fixed image dimensions
        pa.list_(
            pa.list_(
                pa.list_(
                    pa.int64(),
                    list_size=3
                ),
                list_size=640
            ),
            list_size=480
        )),
    pa.field('candidate_labels', pa.list_(pa.string(), list_size=4)), # required, equivalent to `options` in the provided demo
])
output_schema = pa.schema([
    pa.field('score', pa.list_(pa.float64(), list_size=4)), # must match the number of candidate labels
    pa.field('label', pa.list_(pa.string(), list_size=4)), # must match the number of candidate labels
])
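The nested list sizes describe a (480, 640, 3) image array: rows (height), then columns (width), then RGB channels. Note that PIL's resize takes (width, height), so resizing to (640, 480) yields exactly this shape. A quick sanity check, using one of the sample images from the ./data directory referenced later in this tutorial:
from PIL import Image
import numpy as np
# verify a resized sample image matches the schema's fixed dimensions
image = Image.open("./data/kittens.jpg")
image = image.resize((640, 480))     # PIL resize takes (width, height)
array = np.array(image)
assert array.shape == (480, 640, 3)  # rows, columns, channels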
Upload Model
model = wl.upload_model('clip-vit', './models/clip-vit-base-patch-32.zip',
framework=Framework.HUGGING_FACE_ZERO_SHOT_IMAGE_CLASSIFICATION,
input_schema=input_schema,
output_schema=output_schema)
model
Waiting for model loading - this will take up to 10.0min.
Model is pending loading to a container runtime..
Model is attempting loading to a container runtime.................................................successful
Ready
Name | clip-vit |
Version | 85588fdc-73ed-4d92-b9e1-35e26ecc46e1 |
File Name | clip-vit-base-patch-32.zip |
SHA | 4efc24685a14e1682301cc0085b9db931aeb5f3f8247854bedc6863275ed0646 |
Status | ready |
Image Path | proxy.replicated.com/proxy/wallaroo/ghcr.io/wallaroolabs/mac-deploy:v2024.1.0-main-4921 |
Architecture | x86 |
Acceleration | none |
Updated At | 2024-16-Apr 19:54:02 |
Deploy Pipeline
With the model uploaded and prepared, we add the model as a pipeline step and deploy the pipeline. For this example, the deployment configuration allocates 4Gi of RAM and 1 CPU to the model's containerized runtime (the sidekick), and 0.25 CPU and 1Gi of RAM to the Wallaroo engine.
deployment_config = wallaroo.DeploymentConfigBuilder() \
    .cpus(.25).memory('1Gi') \
    .sidekick_memory(model, '4Gi') \
    .sidekick_cpus(model, 1.0) \
    .build()
The pipeline is deployed with the specified deployment configuration.
Because the model is converted to the Wallaroo Containerized Runtime, the deployment step may time out with the status still showing Starting. If this occurs, wait an additional 60 seconds, then run the pipeline.status() cell again. Once the status is Running, the rest of the tutorial can proceed.
pipeline.clear()
pipeline.add_model_step(model)
pipeline.deploy(deployment_config=deployment_config)
pipeline.status()
{'status': 'Running',
'details': [],
'engines': [{'ip': '10.28.2.174',
'name': 'engine-7899fc8548-6q2rb',
'status': 'Running',
'reason': None,
'details': [],
'pipeline_statuses': {'pipelines': [{'id': 'clip-demo',
'status': 'Running'}]},
'model_statuses': {'models': [{'name': 'clip-vit',
'sha': '4efc24685a14e1682301cc0085b9db931aeb5f3f8247854bedc6863275ed0646',
'status': 'Running',
'version': '85588fdc-73ed-4d92-b9e1-35e26ecc46e1'}]}}],
'engine_lbs': [{'ip': '10.28.2.173',
'name': 'engine-lb-d7cc8fc9c-6p666',
'status': 'Running',
'reason': None,
'details': []}],
'sidekicks': [{'ip': '10.28.3.203',
'name': 'engine-sidekick-clip-vit-20-55475c9f57-ndwk2',
'status': 'Running',
'reason': None,
'details': [],
'statuses': '\n'}]}
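As noted above, the containerized runtime can take several minutes to report Running. Rather than re-running the status cell by hand, a small polling sketch using only pipeline.status() and the standard library (the ten-minute ceiling is an arbitrary choice for this example):
import time
# poll until the pipeline reports Running, for up to ten minutes
for _ in range(60):
    if pipeline.status()['status'] == 'Running':
        break
    time.sleep(10)
else:
    raise RuntimeError("pipeline did not reach Running status in time")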
Run Inference
We verify the pipeline is deployed by checking the status()
.
The sample images in the ./data
directory are converted into numpy arrays, and the candidate labels added as inputs. Both are set as DataFrame arrays where the field inputs
are the image values, and candidate_labels
the labels.
image_paths = [
"./data/bear-in-tree.jpg",
"./data/elephant-and-zebras.jpg",
"./data/horse-and-dogs.jpg",
"./data/kittens.jpg",
"./data/remote-monitor.jpg"
]
images = []
for iu in image_paths:
    image = Image.open(iu)
    image = image.resize((640, 480))  # fixed image dimensions expected by the input schema
    images.append(np.array(image))
input_data = {
    "inputs": images,
    "candidate_labels": [["cat", "dog", "horse", "elephant"]] * 5,
}
dataframe = pd.DataFrame(input_data)
dataframe
 | inputs | candidate_labels
---|---|---|
0 | [[[60, 62, 61], [62, 64, 63], [67, 69, 68], [7... | [cat, dog, horse, elephant] |
1 | [[[228, 235, 241], [229, 236, 242], [230, 237,... | [cat, dog, horse, elephant] |
2 | [[[177, 177, 177], [177, 177, 177], [177, 177,... | [cat, dog, horse, elephant] |
3 | [[[140, 25, 56], [144, 25, 67], [146, 24, 73],... | [cat, dog, horse, elephant] |
4 | [[[24, 20, 11], [22, 18, 9], [18, 14, 5], [21,... | [cat, dog, horse, elephant] |
Inference Outputs
The inference is run, and the labels with their corresponding confidence scores for each image are mapped to out.label and out.score.
results = pipeline.infer(dataframe, timeout=600)
pd.set_option('display.max_colwidth', None)
display(results.loc[:, ['out.label', 'out.score']])
 | out.label | out.score
---|---|---|
0 | [elephant, dog, horse, cat] | [0.41468262672424316, 0.3483855128288269, 0.1285742223262787, 0.10835772752761841] |
1 | [elephant, horse, dog, cat] | [0.9981434345245361, 0.001765849650837481, 6.823775038355961e-05, 2.2441257897298783e-05] |
2 | [horse, dog, elephant, cat] | [0.7596790790557861, 0.2171126902103424, 0.020392922684550285, 0.0028152766171842813] |
3 | [cat, dog, elephant, horse] | [0.9870226979255676, 0.006646980997174978, 0.003271638648584485, 0.003058758797124028] |
4 | [dog, horse, cat, elephant] | [0.5713965892791748, 0.17229433357715607, 0.15523898601531982, 0.1010700911283493] |
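The labels for each image are returned in descending score order, so the top prediction is the first element of each list. A small pandas sketch to pull out the top label and score per image (column names as returned above):
# extract the highest-confidence label and its score for each image
results['top_label'] = results['out.label'].apply(lambda labels: labels[0])
results['top_score'] = results['out.score'].apply(lambda scores: scores[0])
display(results.loc[:, ['top_label', 'top_score']])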
Undeploy Pipelines
With the tutorial complete, the pipeline is undeployed and the resources are returned to the cluster.
pipeline.undeploy()
name | clip-demo |
---|---|
created | 2024-04-16 19:48:50.468599+00:00 |
last_updated | 2024-04-16 19:54:06.278297+00:00 |
deployed | False |
arch | x86 |
accel | none |
tags | |
versions | 8fac2531-ab09-4702-aa5e-9458566063c4, 4f40df1a-3f57-43cd-98d6-48a30e96d365 |
steps | clip-vit |
published | False |