Deploy Llama 2 7B/13B/70B on Amazon SageMaker
Llama 2 is the next version of LLaMA. It is trained on more data (2 trillion tokens) and supports a context window of up to 4,096 tokens. Meta fine-tuned its conversational chat models with Reinforcement Learning from Human Feedback (RLHF) on over 1 million human annotations.
In this blog you will learn how to deploy Llama 2 models to Amazon SageMaker. You are going to use the Hugging Face LLM DLC, a new purpose-built Inference Container that makes it easy to deploy LLMs in a secure and managed environment. The DLC is powered by Text Generation Inference (TGI), a scalable, optimized solution for deploying and serving Large Language Models (LLMs). The blog post also covers the hardware requirements for the different model sizes.
The blog will cover how to:
- Setup development environment
- Retrieve the new Hugging Face LLM DLC
- Hardware requirements
- Deploy Llama 2 to Amazon SageMaker
- Run inference and chat with the model
- Clean up
Let's get started!
1. Setup development environment
You are going to use the sagemaker Python SDK to deploy Llama 2 to Amazon SageMaker. You need to make sure you have an AWS account configured and the sagemaker Python SDK installed.
If you are going to use SageMaker in a local environment, you need access to an IAM role with the required permissions for SageMaker. You can find more about it here.
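A minimal sketch of this setup, assuming you run it in a notebook; the SDK version pin and the fallback role name are placeholders you may need to adjust for your account:

```python
# Install/upgrade the SageMaker Python SDK (the version pin is an assumption,
# any recent release with Hugging Face LLM DLC support should work)
# !pip install "sagemaker>=2.175.0" --upgrade --quiet

import sagemaker
import boto3

sess = sagemaker.Session()

# On a SageMaker notebook the execution role is attached to the instance;
# in a local environment you have to look it up (the role name is a placeholder)
try:
    role = sagemaker.get_execution_role()
except ValueError:
    iam = boto3.client("iam")
    role = iam.get_role(RoleName="sagemaker_execution_role")["Role"]["Arn"]

print(f"sagemaker role arn: {role}")
print(f"sagemaker session region: {sess.boto_region_name}")
```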
2. Retrieve the new Hugging Face LLM DLC
Compared to deploying regular Hugging Face models, you first need to retrieve the container URI and provide it to the HuggingFaceModel model class with an image_uri pointing to the image. To retrieve the new Hugging Face LLM DLC in Amazon SageMaker, you can use the get_huggingface_llm_image_uri method provided by the sagemaker SDK. This method allows you to retrieve the URI for the desired Hugging Face LLM DLC based on the specified backend, session, region, and version. You can find the available versions here.
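A short sketch of retrieving the image URI; the version string is an assumption, pick one of the available releases:

```python
from sagemaker.huggingface import get_huggingface_llm_image_uri

# retrieve the image URI of the Hugging Face LLM DLC (TGI backend)
llm_image = get_huggingface_llm_image_uri(
    "huggingface",      # backend
    version="0.9.3",    # assumption: use an available release for your region
)

print(f"llm image uri: {llm_image}")
```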
3. Hardware requirements
Llama 2 comes in 3 different sizes: 7B, 13B, and 70B parameters. The hardware requirements will vary based on the model size deployed to SageMaker. Below is a set of minimum requirements for each model size we tested.
Note: We haven't tested GPTQ models yet.
| Model | Instance Type | Quantization | # of GPUs per replica |
|---|---|---|---|
| Llama 7B | (ml.)g5.2xlarge | - | 1 |
| Llama 13B | (ml.)g5.12xlarge | - | 4 |
| Llama 70B | (ml.)g5.48xlarge | bitsandbytes | 8 |
| Llama 70B | (ml.)p4d.24xlarge | - | 8 |
Note: Amazon SageMaker currently doesn't support instance slicing, meaning that, e.g., for Llama 70B you cannot run multiple replicas on a single instance.
These are the minimum setups we have validated for the 7B, 13B, and 70B Llama 2 models to work on SageMaker. In the coming weeks, we plan to run detailed benchmarking covering latency and throughput numbers across different hardware configurations. We currently do not recommend deploying Llama 70B to g5.48xlarge instances, since long requests can time out due to the 60-second request timeout limit of SageMaker. Use p4d instances for deploying Llama 70B.
It might be possible to run Llama 70B on g5.48xlarge instances without quantization by reducing the MAX_TOTAL_TOKENS and MAX_BATCH_TOTAL_TOKENS parameters. We haven't tested this yet.
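If you want to experiment with this yourself, an untested sketch of the two container environment variables could look like the following; the values are placeholders, not validated settings:

```python
# Untested sketch: lower the TGI token limits so the unquantized 70B weights
# plus KV cache may fit on the 8x A10G GPUs of g5.48xlarge (values are placeholders)
env_tweaks = {
    "MAX_TOTAL_TOKENS": "2048",        # max input + generated tokens per request
    "MAX_BATCH_TOTAL_TOKENS": "4096",  # max tokens processed in parallel per batch
}
```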
4. Deploy Llama 2 to Amazon SageMaker
To deploy meta-llama/Llama-2-13b-chat-hf to Amazon SageMaker, you create a HuggingFaceModel model class and define your endpoint configuration, including the hf_model_id, instance_type, etc. You will use a g5.12xlarge instance type, which has 4 NVIDIA A10G GPUs and 96GB of GPU memory.
Note: Llama 2 is a gated model. To use it, request access on the Hugging Face model page after you have been granted access from Meta; visit the Meta website and accept the license terms and acceptable use policy before submitting the form. Requests are processed within 1-2 days.
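A sketch of the endpoint configuration; the token limits are illustrative values, and the Hugging Face Hub token placeholder must be replaced with your own token since the model is gated:

```python
import json
from sagemaker.huggingface import HuggingFaceModel

# sagemaker config
instance_type = "ml.g5.12xlarge"
number_of_gpu = 4
health_check_timeout = 300  # give the container time to download and load the weights

# TGI environment configuration (token limits are illustrative values)
config = {
    "HF_MODEL_ID": "meta-llama/Llama-2-13b-chat-hf",        # model id from hf.co/models
    "SM_NUM_GPUS": json.dumps(number_of_gpu),                # number of GPUs per replica
    "MAX_INPUT_LENGTH": json.dumps(2048),                    # max length of the input text
    "MAX_TOTAL_TOKENS": json.dumps(4096),                    # max length of input + generated text
    "MAX_BATCH_TOTAL_TOKENS": json.dumps(8192),              # max tokens processed in parallel
    "HUGGING_FACE_HUB_TOKEN": "<REPLACE WITH YOUR TOKEN>",   # required for the gated model
    # "HF_MODEL_QUANTIZE": "bitsandbytes",                   # uncomment to quantize, e.g. 70B on g5
}

# create HuggingFaceModel with the image URI retrieved above
llm_model = HuggingFaceModel(
    role=role,
    image_uri=llm_image,
    env=config,
)
```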
After you have created the HuggingFaceModel, you can deploy it to Amazon SageMaker using the deploy method. You will deploy the model with the ml.g5.12xlarge instance type. TGI will automatically distribute and shard the model across all GPUs.
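A sketch of the deploy call, reusing the instance type and health-check timeout defined above:

```python
# deploy the model to a SageMaker endpoint; TGI shards it across the 4 GPUs
llm = llm_model.deploy(
    initial_instance_count=1,
    instance_type=instance_type,
    container_startup_health_check_timeout=health_check_timeout,
)
```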
SageMaker will now create your endpoint and deploy the model to it. This can take 10-15 minutes.
5. Run inference and chat with the model
After your endpoint is deployed, you can run inference on it. You will use the predict method from the predictor to run inference on your endpoint (a minimal request sketch follows the parameter list below). You can run inference with different parameters to impact the generation. Parameters can be defined in the parameters attribute of the payload. As of today, TGI supports the following parameters:
- temperature: Controls randomness in the model. Lower values will make the model more deterministic and higher values will make the model more random. Default value is 1.0.
- max_new_tokens: The maximum number of tokens to generate. Default value is 20, max value is 512.
- repetition_penalty: Controls the likelihood of repetition, defaults to null.
- seed: The seed to use for random generation, default is null.
- stop: A list of tokens to stop the generation. The generation will stop when one of the tokens is generated.
- top_k: The number of highest probability vocabulary tokens to keep for top-k filtering. Default value is null, which disables top-k filtering.
- top_p: The cumulative probability of the highest probability vocabulary tokens to keep for nucleus sampling, defaults to null.
- do_sample: Whether or not to use sampling; uses greedy decoding otherwise. Default value is false.
- best_of: Generate best_of sequences and return the one with the highest token logprobs, defaults to null.
- details: Whether or not to return details about the generation. Default value is false.
- return_full_text: Whether or not to return the full text or only the generated part. Default value is false.
- truncate: Whether or not to truncate the input to the maximum length of the model. Default value is true.
- typical_p: The typical probability of a token. Default value is null.
- watermark: The watermark to use for the generation. Default value is false.
You can find the open API specification of TGI in the swagger documentation.
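A minimal request sketch; the example question is illustrative:

```python
# a minimal request: the payload only needs an "inputs" field
response = llm.predict({
    "inputs": "What is Amazon SageMaker?",
})

# TGI returns a list with one dict containing the generated text
print(response[0]["generated_text"])
```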
The meta-llama/Llama-2-13b-chat-hf model is a conversational chat model, meaning you can chat with it using the following prompt format:
```
<s>[INST] <<SYS>>
{{ system_prompt }}
<</SYS>>

{{ user_msg_1 }} [/INST] {{ model_answer_1 }} </s><s>[INST] {{ user_msg_2 }} [/INST]
```
We create a small helper method build_llama2_prompt, which converts a list of "messages" into the prompt format, as sketched below. We also define a system_prompt which is used to start the conversation. You will use the system_prompt to ask the model about some cool ideas to do in the summer.
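A sketch of the helper and the system prompt; the message format (role/content dictionaries) and the exact system prompt wording are assumptions:

```python
def build_llama2_prompt(messages):
    """Convert a list of {"role", "content"} messages into the Llama 2 chat prompt format."""
    start_prompt = "<s>[INST] "
    end_prompt = " [/INST]"
    conversation = []
    for index, message in enumerate(messages):
        if message["role"] == "system" and index == 0:
            conversation.append(f"<<SYS>>\n{message['content']}\n<</SYS>>\n\n")
        elif message["role"] == "user":
            conversation.append(message["content"].strip())
        else:  # assistant turns close the previous instruction and open a new one
            conversation.append(f" [/INST] {message['content'].strip()} </s><s>[INST] ")
    return start_prompt + "".join(conversation) + end_prompt


# illustrative system prompt; the exact wording is up to you
system_prompt = (
    "You are a friendly and knowledgeable assistant called Clara. "
    "Your answers are short and concise."
)

messages = [{"role": "system", "content": system_prompt}]
```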
Let's see if Clara can come up with some cool ideas for the summer.
Now, run inference with different parameters to impact the generation. Parameters can be defined in the parameters attribute of the payload.
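A sketch of such a request, building on the helper above; the parameter values are illustrative:

```python
# add the user turn and build the Llama 2 chat prompt
messages.append({"role": "user", "content": "What are some cool ideas to do in the summer?"})
prompt = build_llama2_prompt(messages)

# generation parameters are illustrative; see the TGI parameter list above
payload = {
    "inputs": prompt,
    "parameters": {
        "do_sample": True,
        "top_p": 0.9,
        "temperature": 0.8,
        "max_new_tokens": 512,
        "repetition_penalty": 1.03,
        "stop": ["</s>"],
        "return_full_text": False,
    },
}

response = llm.predict(payload)
print(response[0]["generated_text"])
```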
6. Clean up
To clean up, you can delete the model and endpoint.
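For example, using the predictor returned by deploy:

```python
# delete the SageMaker model and the endpoint to stop incurring costs
llm.delete_model()
llm.delete_endpoint()
```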
Conclusion
Deploying Llama 2 on Amazon SageMaker provides a scalable, secure way to leverage LLMs. With just a few lines of code, the Hugging Face Inference DLC allows everyone to easily integrate powerful LLMs into applications.
Thanks for reading! If you have any questions, feel free to contact me on Twitter or LinkedIn.