Deploy FLAN-UL2 20B on Amazon SageMaker
Welcome to this Amazon SageMaker guide on how to deploy FLAN-UL2 20B on Amazon SageMaker for inference. We will deploy google/flan-ul2 to Amazon SageMaker for real-time inference using the Hugging Face Inference Deep Learning Container (DLC).
What we are going to do
- Create FLAN-UL2 20B inference script
- Create SageMaker `model.tar.gz` artifact
- Deploy the model to Amazon SageMaker
- Run inference using the deployed model
Quick intro: FLAN-UL2, a bigger FLAN-T5
Flan-UL2 is an encoder-decoder (seq2seq) model based on the T5 architecture. It uses the same configuration as the UL2 model released earlier last year. It was fine-tuned using the "Flan" prompt tuning and dataset collection. FLAN-UL2 was trained as part of the Scaling Instruction-Finetuned Language Models paper. Noticeable differences to FLAN-T5 XXL are:
- FLAN-UL2 has a context window of 2048 tokens compared to 512 for FLAN-T5 XXL
- +~3% better performance than FLAN-T5 XXL on benchmarks
- Paper: https://arxiv.org/abs/2210.11416
- Official repo: https://github.com/google-research/t5x
Before we can get started we have to install the missing dependencies to be able to create our `model.tar.gz` artifact and create our Amazon SageMaker endpoint.
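A minimal install cell could look like the sketch below; the exact version pins are assumptions, so adjust them to your environment:

```python
# install/upgrade the SDKs used in this guide (version pins are assumptions)
!pip install "sagemaker>=2.140.0" "huggingface_hub>=0.13.0" --upgrade --quiet
```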
We also have to make sure we have the permission to create our SageMaker Endpoint.
If you are going to use SageMaker in a local environment (not SageMaker Studio or a Notebook Instance), you need access to an IAM role with the required permissions for SageMaker. You can find more about it here.
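A typical session and role setup looks roughly like the sketch below; the fallback role name `sagemaker_execution_role` is a placeholder, use your own role:

```python
import sagemaker
import boto3

sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading the model artifact
sagemaker_session_bucket = sess.default_bucket()

try:
    # works inside SageMaker Studio / Notebook Instances
    role = sagemaker.get_execution_role()
except ValueError:
    # local environment: look up the role by name (placeholder name, adjust to yours)
    iam = boto3.client("iam")
    role = iam.get_role(RoleName="sagemaker_execution_role")["Role"]["Arn"]

sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker session region: {sess.boto_region_name}")
```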
Create FLAN-UL2 20B inference script
Amazon SageMaker allows us to customize the inference script by providing an `inference.py` file. The `inference.py` file is the entry point to our model. It is responsible for loading the model and handling the inference request. If you are used to deploying Hugging Face Transformers, that might be new to you. Usually, we just provide the `HF_MODEL_ID` and `HF_TASK` and the Hugging Face DLC takes care of the rest. For FLAN-UL2 that is not yet possible. We have to provide the `inference.py` file and implement the `model_fn` and `predict_fn` functions to efficiently load the 20B model.
If you want to learn more about creating a custom inference script, you can check out Creating document embeddings with Hugging Face's Transformers & Amazon SageMaker.
In addition to the `inference.py` file we also have to provide a `requirements.txt` file. The `requirements.txt` file is used to install the dependencies for our `inference.py` file.
The first step is to create a `code/` directory. Next, we create a `requirements.txt` file and add the `accelerate` library to it. The `accelerate` library is used to efficiently load the model across multiple GPUs.
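A small helper cell could create the directory and write the `requirements.txt`, roughly like this (the `accelerate` version pin is an assumption):

```python
import os

# create the code/ directory for our custom inference script
os.makedirs("code", exist_ok=True)

# add accelerate as a dependency; the version pin is an assumption, adjust as needed
with open("code/requirements.txt", "w") as f:
    f.write("accelerate>=0.17.0\n")
```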
The last step for our inference handler is to create the `inference.py` file. The `inference.py` file is responsible for loading the model and handling the inference request. The `model_fn` function is called when the model is loaded. The `predict_fn` function is called when we want to run inference.
We use the `AutoModelForSeq2SeqLM` class from `transformers` to load the model from the local directory (`model_dir`) in the `model_fn`. In the `predict_fn` function we use the `generate` function from `transformers` to generate the text for a given input prompt.
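A minimal sketch of `code/inference.py` could look like the following. Loading with `device_map="auto"` and `torch.bfloat16` is an assumption to spread the 20B weights across the available GPUs; adjust the loading strategy to your instance:

```python
# code/inference.py (sketch, assumes device_map="auto" + bfloat16 loading)
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer


def model_fn(model_dir):
    # load the model and tokenizer from the local model directory
    model = AutoModelForSeq2SeqLM.from_pretrained(
        model_dir, device_map="auto", torch_dtype=torch.bfloat16
    )
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    return model, tokenizer


def predict_fn(data, model_and_tokenizer):
    model, tokenizer = model_and_tokenizer

    # extract the prompt and optional generation parameters from the request
    inputs = data.pop("inputs", data)
    parameters = data.pop("parameters", None)

    # tokenize the prompt and move it to the model's device
    input_ids = tokenizer(inputs, return_tensors="pt").input_ids.to(model.device)

    # generate, forwarding any generation kwargs from the request payload
    if parameters is not None:
        output = model.generate(input_ids, **parameters)
    else:
        output = model.generate(input_ids)

    prediction = tokenizer.decode(output[0], skip_special_tokens=True)
    return [{"generated_text": prediction}]
```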
Create SageMaker model.tar.gz artifact
To use our `inference.py` we need to bundle it together with our model weights into a `model.tar.gz`. The archive includes all our model artifacts needed to run inference. The `inference.py` script will be placed into a `code/` folder. We will use the `huggingface_hub` SDK to easily download google/flan-ul2 from Hugging Face and then upload it to Amazon S3 with the `sagemaker` SDK.
Make sure the environment has enough disk space to store the model; ~35GB should be enough.
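A download step could look roughly like this, assuming a recent `huggingface_hub` version with `snapshot_download` and `local_dir` support; the local directory name is just a choice:

```python
from pathlib import Path
from huggingface_hub import snapshot_download

MODEL_ID = "google/flan-ul2"

# download the model weights from the Hugging Face Hub into a local directory
model_dir = Path("model")
model_dir.mkdir(exist_ok=True)
snapshot_download(repo_id=MODEL_ID, local_dir=str(model_dir))
```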
Before we can upload the model to Amazon S3 we have to create a `model.tar.gz` archive. It is important that the archive directly contains all files, not a folder with the files in it. For example, your archive should look like this:
model.tar.gz/
|- config.json
|- pytorch_model-00001-of-00012.bin
|- tokenizer.json
|- ...
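A sketch for building the archive could look like this; the local `model/` directory name and the copy of `code/` into it are assumptions based on the layout described above:

```python
import shutil
import tarfile
from pathlib import Path

model_dir = Path("model")

# copy our custom inference code into the model directory so it ends up in code/ inside the archive
shutil.copytree("code", model_dir / "code", dirs_exist_ok=True)

# create model.tar.gz with all files at the archive root (no wrapping folder)
with tarfile.open("model.tar.gz", "w:gz") as tar:
    for path in model_dir.iterdir():
        tar.add(path, arcname=path.name)
```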
After we have created the `model.tar.gz` archive we can upload it to Amazon S3. We will use the `sagemaker` SDK to upload the model to our SageMaker session bucket.
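A minimal upload sketch with the `sagemaker` SDK's `S3Uploader`; the `flan-ul2` S3 prefix is just an example:

```python
import sagemaker
from sagemaker.s3 import S3Uploader

sess = sagemaker.Session()

# upload model.tar.gz to the sagemaker session bucket
s3_model_uri = S3Uploader.upload(
    local_path="model.tar.gz",
    desired_s3_uri=f"s3://{sess.default_bucket()}/flan-ul2",
)
print(f"model uploaded to: {s3_model_uri}")
```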
Deploy the model to Amazon SageMaker
After we have uploaded our model archive we can deploy our model to Amazon SageMaker. We will use `HuggingFaceModel` to create our real-time inference endpoint.
We are going to deploy the model to a `g5.12xlarge` instance. The `g5.12xlarge` instance is a GPU instance with 4x NVIDIA A10G GPUs. If you are interested in how you could add autoscaling to your endpoint you can check out Going Production: Auto-scaling Hugging Face Transformers with Amazon SageMaker.
Run inference using the deployed model
The `.deploy()` call returns a `HuggingFacePredictor` object which can be used to request inference using the `.predict()` method. Our endpoint expects a JSON payload with at least an `inputs` key.
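For example, a minimal request could look like this (the prompt is only an illustration):

```python
# the endpoint expects a JSON payload with at least an "inputs" key
payload = {"inputs": "Answer the following question. Who is the inventor of the telephone?"}

result = predictor.predict(payload)
print(result)
# e.g. [{'generated_text': '...'}]
```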
When using generative models, most of the time you want to configure or customize your prediction to fit your needs, for example by using beam search, configuring the max or min length of the generated sequence, or adjusting the repetition penalty to reduce repetition. The Transformers library provides different strategies and kwargs to do this, and the Hugging Face Inference toolkit offers the same functionality using the parameters attribute of your request payload. Below you can find examples of how to generate text without parameters, with beam search, and using custom configurations. If you want to learn about different decoding strategies, check out this blog post.
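As a sketch, a request using beam search and a few other generation kwargs could look like this (the specific values are illustrative, not recommendations):

```python
# generation parameters are passed via the "parameters" key of the payload
payload = {
    "inputs": "Summarize the following: Peter and Elizabeth took a taxi to attend the night party in the city.",
    "parameters": {
        "num_beams": 4,            # beam search
        "early_stopping": True,
        "max_new_tokens": 50,
        "no_repeat_ngram_size": 3, # reduce repeated phrases
    },
}

print(predictor.predict(payload))
```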
Let's try another example! This time we focus on question answering with a step-by-step approach, including some simple math.
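A possible request for this (prompt and parameters are illustrative):

```python
# ask the model to reason step by step before giving the answer
payload = {
    "inputs": (
        "Answer the following question by reasoning step by step. "
        "The cafeteria had 23 apples. If they used 20 for lunch and bought 6 more, "
        "how many apples do they have?"
    ),
    "parameters": {"max_new_tokens": 100},
}

print(predictor.predict(payload))
```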
Delete model and endpoint
To clean up, we can delete the model and endpoint.
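A minimal cleanup sketch using the predictor returned by `.deploy()`:

```python
# delete the SageMaker model and endpoint to stop incurring costs
predictor.delete_model()
predictor.delete_endpoint()
```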
Thanks for reading! If you have any questions, feel free to contact me on Twitter or LinkedIn.