philschmid blog

Multi-Container Endpoints with Hugging Face Transformers and Amazon SageMaker

#HuggingFace #AWS #BERT #SageMaker
February 22, 2022 · 7 min read


Welcome to this getting started guide. We will use the Hugging Face Inference DLCs and Amazon SageMaker to deploy multiple transformer models as a Multi-Container Endpoint. Amazon SageMaker Multi-Container Endpoints are an inference option that deploys multiple containers (multiple models) to the same SageMaker real-time endpoint. These models/containers can be accessed individually or in a pipeline. Multi-Container Endpoints can be used to improve endpoint utilization and optimize costs. An example of this is time-zone differences: if the workload for model A (U.S.) peaks during the day and the workload for model B (Germany) peaks during the night, you can deploy model A and model B to the same SageMaker endpoint and optimize your costs.

NOTE: At the time of writing this, only CPU instances are supported for Multi-Container Endpoints.



Development Environment and Permissions

NOTE: You can run this demo in SageMaker Studio, on your local machine, or on a SageMaker Notebook Instance.

%pip install sagemaker --upgrade
import sagemaker

assert sagemaker.__version__ >= "2.75.0"


If you are going to use SageMaker in a local environment (not SageMaker Studio or a Notebook Instance), you need access to an IAM Role with the required permissions for SageMaker. You can find out more about it here.

import sagemaker
import boto3

sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it does not exist
sagemaker_session_bucket = None
if sagemaker_session_bucket is None and sess is not None:
    # set to default bucket if a bucket name is not given
    sagemaker_session_bucket = sess.default_bucket()

try:
    role = sagemaker.get_execution_role()
except ValueError:
    iam = boto3.client('iam')
    role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']

sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
region = sess.boto_region_name
sm_client = boto3.client('sagemaker')

print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {region}")

Multi-Container Endpoint creation

At the time of writing, the Amazon SageMaker Python SDK does not support Multi-Container Endpoint deployments. That's why we are going to use boto3 to create the endpoint.

The first step though is to use the SDK to get our container uris for the Hugging Face Inference DLCs.

from sagemaker import image_uris

hf_inference_dlc = image_uris.retrieve(framework='huggingface',
                                       region=region,
                                       version='4.12.3',
                                       image_scope='inference',
                                       base_framework_version='pytorch1.9.1',
                                       py_version='py38',
                                       container_version='ubuntu20.04',
                                       instance_type='ml.c5.xlarge')
# ''

Define Hugging Face models

Next, we need to define the models we want to deploy to our multi-container endpoint. To stick with our example from the introduction, we will deploy an English sentiment-classification model and a German sentiment-classification model. For the English model, we will use distilbert-base-uncased-finetuned-sst-2-english and for the German model, we will use oliverguhr/german-sentiment-bert. Similar to endpoint creation with the SageMaker SDK, we need to provide the "Hub" configuration for the models via HF_MODEL_ID and HF_TASK.

# english model
englishModel = {
    'Image': hf_inference_dlc,
    'ContainerHostname': 'englishModel',
    'Environment': {
        'HF_MODEL_ID': 'distilbert-base-uncased-finetuned-sst-2-english',
        'HF_TASK': 'text-classification'
    }
}

# german model
germanModel = {
    'Image': hf_inference_dlc,
    'ContainerHostname': 'germanModel',
    'Environment': {
        'HF_MODEL_ID': 'oliverguhr/german-sentiment-bert',
        'HF_TASK': 'text-classification'
    }
}

# Set the Mode parameter of the InferenceExecutionConfig field to "Direct" for direct invocation of each container,
# or "Serial" to use the containers as an inference pipeline. The default mode is "Serial".
inferenceExecutionConfig = {"Mode": "Direct"}

Create Multi-Container Endpoint

After we define our model configuration, we can deploy our endpoint. To create/deploy a real-time endpoint with boto3 you need to create a “SageMaker Model”, a “SageMaker Endpoint Configuration” and a “SageMaker Endpoint”. The “SageMaker Model” contains our multi-container configuration including our two models. The “SageMaker Endpoint Configuration” contains the configuration for the endpoint. The “SageMaker Endpoint” is the actual endpoint.

deployment_name = "multi-container-sentiment"
instance_type = "ml.c5.4xlarge"

# create SageMaker Model
sm_client.create_model(
    ModelName=f"{deployment_name}-model",
    InferenceExecutionConfig=inferenceExecutionConfig,
    ExecutionRoleArn=role,
    Containers=[englishModel, germanModel]
)

# create SageMaker Endpoint configuration
sm_client.create_endpoint_config(
    EndpointConfigName=f"{deployment_name}-config",
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": f"{deployment_name}-model",
            "InitialInstanceCount": 1,
            "InstanceType": instance_type,
        },
    ],
)

# create SageMaker Endpoint
endpoint = sm_client.create_endpoint(
    EndpointName=f"{deployment_name}-ep", EndpointConfigName=f"{deployment_name}-config"
)

This will take a few minutes to deploy. You can check the SageMaker console to see when the endpoint is in service.

Invoke Multi-Container Endpoint

To invoke our multi-container endpoint we can use either boto3 (or any other AWS SDK) or the Amazon SageMaker SDK. We will test both ways and do some light load testing to take a look at the performance of our endpoint in CloudWatch.

english_payload = {"inputs": "This is a great way for saving money and optimizing my resources."}

german_payload = {"inputs": "Das wird uns sehr helfen unsere Ressourcen effizient zu nutzen."}

Sending requests with boto3

To send requests to our models we will use the sagemaker-runtime client with the invoke_endpoint method. Compared to a regular request to a single-container endpoint, we additionally pass TargetContainerHostname to point to the container that should receive the request. In our case this is either englishModel or germanModel.


import json
import boto3

# create client
invoke_client = boto3.client('sagemaker-runtime')

# send request to the englishModel container
response = invoke_client.invoke_endpoint(
    EndpointName=f"{deployment_name}-ep",
    ContentType="application/json",
    Accept="application/json",
    TargetContainerHostname="englishModel",
    Body=json.dumps(english_payload),
)
result = json.loads(response['Body'].read().decode())
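The text-classification pipeline returns a list of label/score dictionaries. A small sketch of pulling out the top prediction (the labels shown are illustrative, not taken from the post):

```python
# example response shape (illustrative):
# [{"label": "POSITIVE", "score": 0.998}, {"label": "NEGATIVE", "score": 0.002}]
def top_label(result: list) -> str:
    """Return the label of the highest-scoring prediction."""
    best = max(result, key=lambda p: p["score"])
    return best["label"]
```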


import json
import boto3

# create client
invoke_client = boto3.client('sagemaker-runtime')

# send request to the germanModel container
response = invoke_client.invoke_endpoint(
    EndpointName=f"{deployment_name}-ep",
    ContentType="application/json",
    Accept="application/json",
    TargetContainerHostname="germanModel",
    Body=json.dumps(german_payload),
)
result = json.loads(response['Body'].read().decode())
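Since the two requests differ only in the payload and TargetContainerHostname, a tiny routing helper (a hypothetical convenience, not part of the original post) keeps the call sites tidy:

```python
def target_container_for(language: str) -> str:
    """Map a language code to the ContainerHostname defined in the model config above."""
    routes = {"en": "englishModel", "de": "germanModel"}
    if language not in routes:
        raise ValueError(f"no container deployed for language '{language}'")
    return routes[language]

# usage with the invoke_endpoint call from above:
# TargetContainerHostname=target_container_for("de")
```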

Sending requests with HuggingFacePredictor

The SageMaker Python SDK cannot be used to deploy Multi-Container Endpoints, but it can be used to invoke/send requests to them. We will use the HuggingFacePredictor to send requests to the endpoint, where we also pass TargetContainerHostname to point to the container that should receive the request. In our case this is either englishModel or germanModel.

from sagemaker.huggingface import HuggingFacePredictor

# predictor
predictor = HuggingFacePredictor(f"{deployment_name}-ep")

# english request
en_res = predictor.predict(english_payload, initial_args={"TargetContainerHostname": "englishModel"})
print(en_res)

# german request
de_res = predictor.predict(german_payload, initial_args={"TargetContainerHostname": "germanModel"})
print(de_res)

Load testing the multi-container endpoint

As mentioned, we will do some light load testing: send alternating requests to the two containers and look at the latency in CloudWatch.

for i in range(1000):
    predictor.predict(english_payload, initial_args={"TargetContainerHostname": "englishModel"})
    predictor.predict(german_payload, initial_args={"TargetContainerHostname": "germanModel"})

# afterwards, inspect the ContainerLatency and Invocations metrics per container
# in the CloudWatch metrics console

We can see that the latency for the englishModel is around 2x lower than for the germanModel, which makes sense since the englishModel is a DistilBERT model and the German one is a BERT-base model.


In terms of invocations, we can see that both containers are invoked the same number of times, which makes sense since our test called both alternately.


Delete the Multi-Container Endpoint

predictor.delete_model()
predictor.delete_endpoint()


We successfully deployed two Hugging Face Transformers models to Amazon SageMaker for inference using a Multi-Container Endpoint, which allowed us to use the same instance to host multiple models as containers. Multi-Container Endpoints are a great option to optimize compute utilization and costs for your models, especially when you have independent inference workloads due to time-zone or use-case differences.

You should try Multi-Container Endpoints for your models when you have workloads that are not correlated.

You can find the code here.

Thanks for reading! If you have any questions, feel free to contact me, through Github, or on the forum. You can also connect with me on Twitter or LinkedIn.