Deploy Embedding Models on AWS Inferentia2 with Amazon SageMaker
In this end-to-end tutorial, you will learn how to deploy and speed up Embeddings Model inference using AWS Inferentia2 and optimum-neuron on Amazon SageMaker. Optimum Neuron is the interface between the Hugging Face Transformers & Diffusers library and AWS Accelerators including AWS Trainium and AWS Inferentia2.
You will learn how to:
- Convert Embeddings Model to AWS Neuron (Inferentia2) with optimum-neuron
- Create a custom inference.py script for embeddings
- Upload the neuron model and inference script to Amazon S3
- Deploy a Real-time Inference Endpoint on Amazon SageMaker
- Run and evaluate Inference performance of Embeddings Model on Inferentia2
Let's get started! 🚀
If you are going to use SageMaker in a local environment (not SageMaker Studio or Notebook Instances), you need access to an IAM Role with the required permissions for SageMaker. You can find more about it here.
1. Convert Embeddings Model to AWS Neuron (Inferentia2) with optimum-neuron
We are going to use optimum-neuron. 🤗 Optimum Neuron is the interface between the 🤗 Transformers library and AWS Accelerators including AWS Trainium and AWS Inferentia. It provides a set of tools enabling easy model loading, training and inference on single- and multi-Accelerator settings for different downstream tasks.
As a first step, we need to install optimum-neuron and other required packages.
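A minimal install cell as a sketch, assuming the `neuronx` extra of optimum-neuron and a recent sagemaker SDK; pin the versions that match your Neuron SDK:

```python
# Notebook cell: install optimum-neuron with the neuronx extra and the SageMaker SDK.
# On a plain shell, drop the leading "%" and call pip directly.
%pip install --upgrade "optimum-neuron[neuronx]" sagemaker
```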
Tip: If you are using Amazon SageMaker Notebook Instances or Studio, you can go with the conda_python3 conda kernel.
After we have installed optimum-neuron, we can load and convert our model.
We are going to use the BAAI/bge-base-en-v1.5 model. BGE Base is a fine-tuned BERT model that maps any text to a low-dimensional dense vector which can be used for tasks like retrieval, classification, clustering, or semantic search. It works perfectly for vector databases for LLMs. The base model is the perfect trade-off between size and performance; it is currently ranked top 5 on the MTEB Leaderboard.
At the time of writing, AWS Inferentia2 does not support dynamic shapes for inference, which means that the input size needs to be static for compiling and inference.
In simpler terms, this means we need to define the input shapes for our text (sequence length) and the batch size up front.
We precompiled the model with the following parameters and pushed it to the Hugging Face Hub:
- sequence_length: 384
- batch_size: 1
- neuron: 2.15.0
Note: If you want to compile your own model, uncomment the code below and change the model id. We used an inf2.8xlarge EC2 instance with the Hugging Face Neuron Deep Learning AMI to compile the model.
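For reference, a sketch of what the conversion could look like, assuming the NeuronModelForFeatureExtraction export API of optimum-neuron (class and argument names may differ slightly between releases):

```python
from optimum.neuron import NeuronModelForFeatureExtraction
from transformers import AutoTokenizer

model_id = "BAAI/bge-base-en-v1.5"
# static input shapes required by Inferentia2
input_shapes = {"batch_size": 1, "sequence_length": 384}

# export (compile) the model to AWS Neuron with the static input shapes
model = NeuronModelForFeatureExtraction.from_pretrained(model_id, export=True, **input_shapes)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# save the compiled model and tokenizer, e.g. to push to the Hub or package for SageMaker
save_dir = "bge_neuron"  # hypothetical local folder name
model.save_pretrained(save_dir)
tokenizer.save_pretrained(save_dir)
```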
2. Create a custom inference.py script for embeddings
The Hugging Face Inference Toolkit supports zero-code deployments on top of the pipeline feature from 🤗 Transformers. This allows users to deploy Hugging Face transformers without an inference script [Example].
Currently, this feature is not supported with AWS Inferentia2, which means we need to provide an inference.py script for running inference. But optimum-neuron has integrated support for the 🤗 Transformers pipeline feature, so we can use optimum-neuron to create a pipeline for our model.
If you want to know more about the inference.py script, check out this example. It explains, amongst other things, what the model_fn and predict_fn are.
We are using NEURON_RT_NUM_CORES=1 to make sure that each HTTP worker uses one Neuron core to maximize throughput.
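A minimal sketch of such an inference.py, assuming the NeuronModelForFeatureExtraction class and CLS pooling plus L2 normalization as recommended for BGE models; adapt the pooling and the response format to your model:

```python
# code/inference.py
import os

# make sure each model server worker only claims one Neuron core
os.environ["NEURON_RT_NUM_CORES"] = "1"

import torch
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForFeatureExtraction


def model_fn(model_dir):
    # load the compiled neuron model and tokenizer from the model directory
    model = NeuronModelForFeatureExtraction.from_pretrained(model_dir)
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    return model, tokenizer


def predict_fn(data, model_and_tokenizer):
    model, tokenizer = model_and_tokenizer

    # extract the input text from the request payload
    sentences = data.pop("inputs", data)

    # pad/truncate to the static sequence length the model was compiled with
    encoded_input = tokenizer(
        sentences,
        padding="max_length",
        truncation=True,
        max_length=384,
        return_tensors="pt",
    )

    # compute token embeddings
    with torch.no_grad():
        model_output = model(**encoded_input)

    # CLS pooling (first token) followed by L2 normalization
    embeddings = model_output[0][:, 0]
    embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1)

    # return a JSON-serializable response
    return {"embeddings": embeddings.tolist()}
```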
3. Upload the neuron model and inference script to Amazon S3
Before we can deploy our neuron model to Amazon SageMaker, we need to create a model.tar.gz archive containing all of our model artifacts, e.g. model.neuron, and upload it to Amazon S3.
To do this, we need to set up our permissions. Currently, inf2 instances are only available in the us-east-2 region [REF]. Therefore, we need to force the region to us-east-2.
Now let's create our SageMaker session and upload our model to Amazon S3.
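A sketch of the session setup; the role lookup for local environments assumes a role named sagemaker_execution_role, replace it with your own:

```python
import boto3
import sagemaker

# force the us-east-2 region, where inf2 instances are available
boto_session = boto3.Session(region_name="us-east-2")
sess = sagemaker.Session(boto_session=boto_session)

try:
    # works inside SageMaker Studio / Notebook Instances
    role = sagemaker.get_execution_role()
except ValueError:
    # local environment: look up the IAM role by name (hypothetical role name)
    iam = boto_session.client("iam")
    role = iam.get_role(RoleName="sagemaker_execution_role")["Role"]["Arn"]

print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket:   {sess.default_bucket()}")
print(f"sagemaker region:   {sess.boto_region_name}")
```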
Next, we create our model.tar.gz. The inference.py script will be placed into a code/ folder.
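A sketch of the packaging step, assuming the compiled model artifacts live in a local bge_neuron/ folder (e.g. from the compilation step above or downloaded from the Hub):

```python
import shutil
from pathlib import Path

model_dir = Path("bge_neuron")   # folder with the compiled model artifacts
code_dir = model_dir / "code"

# place the inference script into a code/ folder next to the model artifacts
code_dir.mkdir(exist_ok=True)
shutil.copyfile("inference.py", code_dir / "inference.py")

# create model.tar.gz with the artifacts and code/ at the root of the archive
shutil.make_archive("model", "gztar", root_dir=model_dir)
```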
Now we can upload our model.tar.gz to our session S3 bucket with sagemaker.
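For example, using the S3Uploader helper from the sagemaker SDK (the S3 prefix is just an example):

```python
from sagemaker.s3 import S3Uploader

# upload model.tar.gz to the session bucket
s3_model_uri = S3Uploader.upload(
    local_path="model.tar.gz",
    desired_s3_uri=f"s3://{sess.default_bucket()}/neuron/embeddings",  # example prefix
    sagemaker_session=sess,
)
print(f"model uploaded to: {s3_model_uri}")
```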
4. Deploy a Real-time Inference Endpoint on Amazon SageMaker
After we have uploaded our model.tar.gz to Amazon S3, we can create a custom HuggingFaceModel. This class will be used to create and deploy our real-time inference endpoint on Amazon SageMaker.
The inf2.xlarge instance type is the smallest instance type with AWS Inferentia2 support. It comes with 1 Inferentia2 chip with 2 Neuron Cores. This means we can use 2 model server workers to maximize throughput and run 2 inferences in parallel.
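A sketch of the deployment; the container versions below are assumptions, so check the available Hugging Face Inferentia2 DLCs for the exact transformers/pytorch/python versions to use:

```python
from sagemaker.huggingface import HuggingFaceModel

# create the Hugging Face model pointing at our model.tar.gz on S3
huggingface_model = HuggingFaceModel(
    model_data=s3_model_uri,        # S3 path to model.tar.gz
    role=role,                      # IAM role with SageMaker permissions
    transformers_version="4.34.1",  # assumption: pick a version with a neuronx DLC
    pytorch_version="1.13.1",       # assumption: must match the neuronx DLC
    py_version="py310",             # assumption: python version of the DLC
    model_server_workers=2,         # 2 workers -> one per Neuron core
)

# deploy the model to a real-time endpoint on an Inferentia2 instance
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.inf2.xlarge",
    volume_size=100,                # EBS volume in GB for the model artifacts
)
```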
5. Run and evaluate Inference performance of Embeddings Model on Inferentia2
The .deploy() call returns a HuggingFacePredictor object which can be used to request inference.
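With the predictor and the example inference.py sketched above, requesting an embedding looks like this (the exact response shape depends on what your predict_fn returns):

```python
data = {
    "inputs": "the mesmerizing performances of the leads keep the film grounded and keep the audience riveted.",
}

res = predictor.predict(data=data)
# with the predict_fn above, res["embeddings"] is a list of embedding vectors
print(len(res["embeddings"][0]))  # 768 dimensions for bge-base
```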
Awesome, we can now generate embeddings with our model. Let's test its performance.
As a load test, we will send 10,000 requests to our endpoint using threading with 10 concurrent threads. We will measure the average latency and throughput of our endpoint. We are going to send an input of 300 tokens to have a total of 3 million tokens, but remember that the model is compiled with a sequence_length of 384. This means that the model will pad the input to 384 tokens, which increases the latency a bit.
We decided to use 300 tokens as the input length to find a balance between shorter inputs, which are padded, and longer inputs, which are truncated. If you know your chunk size, we recommend compiling the model with that length to get maximum performance.
Note: When running the load test, the requests are sent from Europe and the endpoint is deployed in us-east-2. This adds network overhead to the measured latency.
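A simplified sketch of such a load test using Python threads; the payload text is just filler of roughly 300 tokens:

```python
import threading
import time

number_of_threads = 10
requests_per_thread = 10_000 // number_of_threads

# filler payload of roughly 300 tokens (hypothetical text)
payload = {"inputs": "Philosophy is the study of general and fundamental questions about existence and reason. " * 22}

def send_requests():
    # each thread sends its share of the 10,000 requests sequentially
    for _ in range(requests_per_thread):
        predictor.predict(data=payload)

threads = [threading.Thread(target=send_requests) for _ in range(number_of_threads)]

start = time.time()
for t in threads:
    t.start()
for t in threads:
    t.join()
duration = time.time() - start

print(f"total duration: {duration:.0f}s")
print(f"throughput: {10_000 / duration:.1f} requests/s")
```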
Sending 10,000 requests or generating 3 million tokens took around 101 seconds. This means we can run around ~99 inferences per second, but keep in mind that this includes the network latency from Europe to us-east-2. When we inspect the latency of the endpoint through CloudWatch, we can see that the average request latency is around 13ms. This means we can serve around 153 inferences per second (having 2 HTTP workers).
The average latency for our embeddings model is 11.1-11.5ms, with a framework overhead of 2ms, leading to a request latency of ~13ms.
Delete model and endpoint
To clean up, we can delete the model and endpoint.
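Using the predictor object:

```python
# delete the SageMaker model and the endpoint
predictor.delete_model()
predictor.delete_endpoint()
```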
Conclusion
In this post, we deployed a top open-source embeddings model (BGE) on a single inf2.xlarge instance costing $0.99/hour on Amazon SageMaker using Optimum Neuron. We are able to run 2 replicas of the model on a single instance with an avg. model latency of 11.1-11.5ms for inputs of 300 tokens with a max sequence length of 384, and a throughput without network overhead of 180 inferences per second.
This means we can create (300 tokens * 153 requests) 45,900 tokens per second, 2,754,000 tokens per minute and 165,240,000 tokens per hour. This leads to a cost of ~$0.006 per 1M tokens if utilized well. For comparison, OpenAI or Amazon Bedrock charge $0.10 per 1M tokens.
For startups and companies looking into GPU alternatives for generating embeddings, Inferentia2 is a great option for not only efficient and fast but also cost-effective inference.
Thanks for reading! If you have any questions, feel free to contact me on Twitter or LinkedIn.