Save up to 90% training cost with AWS Spot Instances and Hugging Face Transformers
notebook: sagemaker/05_spot_instances
Amazon EC2 Spot Instances are a way to take advantage of unused EC2 capacity in the AWS cloud. A Spot Instance is an instance that uses spare EC2 capacity and is available for less than the On-Demand price. The hourly price for a Spot Instance is called the Spot price. If you want to learn more about Spot Instances, you should check out the concepts in the documentation. One concept we should nevertheless briefly address here is Spot Instance interruption.
Amazon EC2 terminates, stops, or hibernates your Spot Instance when Amazon EC2 needs the capacity back or the Spot price exceeds the maximum price for your request. Amazon EC2 provides a Spot Instance interruption notice, which gives the instance a two-minute warning before it is interrupted.
Amazon SageMaker and the Hugging Face DLCs make it easy to train transformer models using managed Spot Instances. Managed spot training can reduce the cost of training models by up to 90% compared to on-demand instances.
As we learned, Spot Instances can be interrupted, causing jobs to potentially stop before they are finished. To prevent any loss of model weights or information, Amazon SageMaker offers support for remote S3 checkpointing, where data from a local path is saved to Amazon S3. When the job is restarted, SageMaker copies the data from Amazon S3 back into the local path.
In this example, we will learn how to use managed Spot Training and S3 checkpointing with Hugging Face Transformers to save up to 90% of the training costs.
We are going to:
- preprocess a dataset in the notebook and upload it to Amazon S3
- configure checkpointing and spot training in the `HuggingFace` estimator
- run training on a spot instance
NOTE: You can run this demo in SageMaker Studio, on your local machine, or in SageMaker Notebook Instances
Development Environment and Permissions
Note: we only install the required libraries from Hugging Face and AWS. You also need to install PyTorch or TensorFlow if you don't have it installed already.
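The setup could look like the following notebook cell; the version lower bound on `sagemaker` is an assumption, and the notebook may pin exact versions:

```python
# install Hugging Face and AWS libraries; s3fs is needed for the S3 upload later
!pip install "sagemaker>=2.48.0" transformers datasets s3fs --upgrade
```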
Permissions
If you are going to use SageMaker in a local environment (not SageMaker Studio or Notebook Instances), you need access to an IAM Role with the required permissions for SageMaker. You can find more about it here.
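A typical session setup is sketched below; the role name `sagemaker_execution_role` is a placeholder for whatever IAM role you created:

```python
import sagemaker

sess = sagemaker.Session()
# this bucket is used for uploading data, models and logs
sagemaker_session_bucket = sess.default_bucket()

try:
    # works inside SageMaker Studio or a Notebook Instance
    role = sagemaker.get_execution_role()
except ValueError:
    # running locally: resolve an IAM role with SageMaker permissions
    import boto3

    iam = boto3.client("iam")
    # "sagemaker_execution_role" is a placeholder name
    role = iam.get_role(RoleName="sagemaker_execution_role")["Role"]["Arn"]

print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sagemaker_session_bucket}")
```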
Preprocessing
We are using the `datasets` library to download and preprocess the `emotion` dataset. After preprocessing, the dataset will be uploaded to our `sagemaker_session_bucket` to be used within our training job. The emotion dataset consists of 16000 training examples, 2000 validation examples, and 2000 testing examples.
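A sketch of the preprocessing, assuming `distilbert-base-uncased` as the checkpoint (the model choice is illustrative, any Transformers checkpoint works):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# tokenizer used in preprocessing
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# load the emotion dataset
train_dataset, test_dataset = load_dataset("emotion", split=["train", "test"])

# tokenize the input text
def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True)

train_dataset = train_dataset.map(tokenize, batched=True)
test_dataset = test_dataset.map(tokenize, batched=True)

# set dataset format for PyTorch
train_dataset = train_dataset.rename_column("label", "labels")
train_dataset.set_format("torch", columns=["input_ids", "attention_mask", "labels"])
test_dataset = test_dataset.rename_column("label", "labels")
test_dataset.set_format("torch", columns=["input_ids", "attention_mask", "labels"])
```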
After we have processed the datasets, we will use the new `FileSystem` integration to upload our dataset to S3.
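With `s3fs` installed, a recent `datasets` version can write straight to an S3 URI (older releases took a `datasets.filesystems.S3FileSystem` object instead); a sketch:

```python
s3_prefix = "samples/datasets/emotion"

# S3 URIs for the processed datasets
training_input_path = f"s3://{sess.default_bucket()}/{s3_prefix}/train"
test_input_path = f"s3://{sess.default_bucket()}/{s3_prefix}/test"

# upload the processed datasets to S3
train_dataset.save_to_disk(training_input_path)
test_dataset.save_to_disk(test_input_path)
```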
Configure checkpointing and spot training in the `HuggingFace` estimator
After we have uploaded the dataset we can configure our spot training and make sure we have checkpointing enabled to not lose any progress if interruptions happen.
To configure spot training we need to define `max_wait` and `max_run` in the `HuggingFace` estimator and set `use_spot_instances` to `True`.
- `max_wait`: Duration in seconds until Amazon SageMaker will stop the managed spot training if not completed yet
- `max_run`: Max duration in seconds for the training job

`max_wait` also needs to be greater than `max_run`, because `max_wait` is the duration for waiting/accessing spot instances (which can take time when no spot capacity is free) plus the expected duration of the training job.
Example
If you expect your training to take 3600 seconds (1 hour), you can set `max_run` to `4000` seconds (as a buffer) and `max_wait` to `7200` to include a `3200` seconds waiting time for your spot capacity.
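As estimator arguments, that example translates to roughly:

```python
# illustrative values from the example above
use_spot_instances = True
max_run = 4000   # expected 3600s of training plus a buffer
max_wait = 7200  # max_run plus up to 3200s of waiting for spot capacity
```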
To enable checkpointing we need to define `checkpoint_s3_uri` in the `HuggingFace` estimator. `checkpoint_s3_uri` is the S3 URI in which to save the checkpoints. By default, Amazon SageMaker will save any file which is written to `/opt/ml/checkpoints` in the training job to `checkpoint_s3_uri`.
It is possible to adjust `/opt/ml/checkpoints` by overwriting `checkpoint_local_path` in the `HuggingFace` estimator.
The next step is to create our `HuggingFace` estimator, provide our `hyperparameters`, and add our spot and checkpointing configurations.
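Putting it all together, the estimator could look like the sketch below; the entry point `train.py`, the `scripts` directory, the instance type, and the DLC versions are assumptions for illustration:

```python
from sagemaker.huggingface import HuggingFace

# hyperparameters passed to the training script;
# output_dir points at the local checkpoint path SageMaker syncs to S3
hyperparameters = {
    "epochs": 1,
    "train_batch_size": 32,
    "model_name": "distilbert-base-uncased",
    "output_dir": "/opt/ml/checkpoints",
}

huggingface_estimator = HuggingFace(
    entry_point="train.py",            # assumed training script
    source_dir="./scripts",            # assumed script directory
    instance_type="ml.p3.2xlarge",
    instance_count=1,
    role=role,
    transformers_version="4.6",
    pytorch_version="1.7",
    py_version="py36",
    hyperparameters=hyperparameters,
    # spot configuration
    use_spot_instances=True,
    max_wait=7200,                     # max_run + waiting time for spot capacity
    max_run=4000,                      # expected training time + buffer
    # remote S3 checkpointing
    checkpoint_s3_uri=f"s3://{sess.default_bucket()}/checkpoints",
)
```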
When using remote S3 checkpointing you have to make sure that your `train.py` also supports checkpointing. `Transformers` and the `Trainer` offer utilities for this. You only need to add the following snippet to your `Trainer` training script:
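A minimal version, assuming a `TrainingArguments` object named `training_args` and an already constructed `trainer` (the script in the notebook may differ slightly):

```python
from transformers.trainer_utils import get_last_checkpoint

# check whether a checkpoint already exists in the output directory,
# e.g. because the job was restarted after a spot interruption
last_checkpoint = get_last_checkpoint(training_args.output_dir)

if last_checkpoint is not None:
    # resume training from the latest checkpoint
    trainer.train(resume_from_checkpoint=last_checkpoint)
else:
    # no checkpoint found, start training from scratch
    trainer.train()
```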
Run training on a spot instance
The last step of this example is to start our managed Spot Training. Therefore we simply call the `.fit` method of our estimator and provide our dataset.
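With the S3 paths from the upload step, this could look like:

```python
# define a data input dictionary with our uploaded S3 URIs
data = {"train": training_input_path, "test": test_input_path}

# start the managed spot training job
huggingface_estimator.fit(data)
```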
After the training run completes successfully, you should see your spot savings in the logs.
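At the end of the job, SageMaker reports the billable time; the output looks roughly like this (the numbers are illustrative):

```
Training seconds: 874
Billable seconds: 262
Managed Spot Training savings: 70.0%
```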
Conclusion
We successfully managed to run a Managed Spot Training on Amazon SageMaker and saved 70% of the training cost, which is a big margin, especially since we only needed to define three parameters to set it up.
I can highly recommend using Managed Spot Training if you have a grace period between model training and delivery.
If you want to learn more about Hugging Face Transformers on Amazon SageMaker you can check out our documentation or other examples.
You can find the code here.
Thanks for reading! If you have any questions, feel free to contact me through GitHub, or on the forum. You can also connect with me on Twitter or LinkedIn.