Task-specific knowledge distillation for BERT using Transformers & Amazon SageMaker
Welcome to this end-to-end task-specific knowledge distillation Text-Classification example using Transformers, PyTorch & Amazon SageMaker. Distillation is the process of training a small "student" to mimic a larger "teacher". In this example, we will use a BERT-base as Teacher and BERT-Tiny as Student. We will use Text-Classification as our task-specific knowledge distillation task and the Stanford Sentiment Treebank v2 (SST-2) dataset for training.
There are two different types of knowledge distillation: task-agnostic knowledge distillation (right) and task-specific knowledge distillation (left). In this example we are going to use task-specific knowledge distillation.
Task-specific distillation (left) versus task-agnostic distillation (right). Figure from FastFormers by Y. Kim and H. Awadalla [arXiv:2010.13382].
In task-specific knowledge distillation a "second step of distillation" is used to "fine-tune" the model on a given dataset. This idea comes from the DistilBERT paper, where it was shown that a student fine-tuned with an additional distillation step performed better than one that was simply fine-tuned on the task:
We also studied whether we could add another step of distillation during the adaptation phase by fine-tuning DistilBERT on SQuAD using a BERT model previously fine-tuned on SQuAD as a teacher for an additional term in the loss (knowledge distillation). In this setting, there are thus two successive steps of distillation, one during the pre-training phase and one during the adaptation phase. In this case, we were able to reach interesting performances given the size of the model: 79.8 F1 and 70.4 EM, i.e. within 3 points of the full model.
If you are more interested in those topics you should definitely read:
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
- FastFormers: Highly Efficient Transformer Models for Natural Language Understanding
Especially the FastFormers paper contains great research on what works and doesn't work when using knowledge distillation.
Huge thanks to Lewis Tunstall and his great Weeknotes: Distilling distilled transformers
Installation
This example will use the Hugging Face Hub as remote model versioning service. To be able to push our model to the Hub, you need to register on the Hugging Face Hub.
If you already have an account you can skip this step.
After you have an account, we will use the notebook_login util from the huggingface_hub package to log into our account and store our token (access key) on disk.
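For completeness, logging in from a notebook could look like this (notebook_login is part of the huggingface_hub package):

```python
from huggingface_hub import notebook_login

# opens a prompt for your Hugging Face access token and stores it on disk
notebook_login()
```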
Setup & Configuration
In this step we will define global configurations and parameters, which are used across the whole end-to-end fine-tuning process, e.g. the teacher and student we will use.
In this example, we will use BERT-base as Teacher and BERT-Tiny as Student. Our Teacher is already fine-tuned on our dataset, which makes it easy for us to directly start the distillation training job rather than having to fine-tune the teacher first and then distill it.
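A minimal configuration sketch could look like the following. The teacher id below is only an example of a BERT-base checkpoint fine-tuned on SST-2, and the repository name is a placeholder; substitute your own values:

```python
# student: BERT-Tiny (2 layers, hidden size 128)
student_id = "google/bert_uncased_L-2_H-128_A-2"

# teacher: a BERT-base checkpoint already fine-tuned on SST-2 (example id, replace with your own)
teacher_id = "textattack/bert-base-uncased-SST-2"

# name of the Hub repository we push the distilled student to (placeholder)
repo_name = "tiny-bert-sst2-distilled"
```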
IMPORTANT: This example will only work with a Teacher & Student combination where the tokenizers produce the same output.
Additionally, the FastFormers: Highly Efficient Transformer Models for Natural Language Understanding paper describes a related phenomenon:
In our experiments, we have observed that distilled models do not work well when distilled to a different model type. Therefore, we restricted our setup to avoid distilling RoBERTa model to BERT or vice versa. The major difference between the two model groups is the input token (sub-word) embedding. We think that different input embedding spaces result in different output embedding spaces, and knowledge transfer with different spaces does not work well.
Below are some checks to make sure the Teacher & Student are creating the same output.
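A simple sanity check is to tokenize the same sentence with both tokenizers and compare the results, for example:

```python
from transformers import AutoTokenizer

# teacher_id and student_id were defined in the setup section above
teacher_tokenizer = AutoTokenizer.from_pretrained(teacher_id)
student_tokenizer = AutoTokenizer.from_pretrained(student_id)

# both tokenizers have to produce identical input_ids and attention_mask
sample = "This is a basic example, with different words to test."
assert teacher_tokenizer(sample) == student_tokenizer(sample), \
    "Tokenizers produce different outputs -- this teacher/student pair will not work."
```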
Dataset & Pre-processing
As dataset we will use the Stanford Sentiment Treebank v2 (SST-2), a text-classification dataset for sentiment analysis, which is included in the GLUE benchmark. The dataset is based on the dataset introduced by Pang and Lee (2005) and consists of 11,855 single sentences extracted from movie reviews. It was parsed with the Stanford parser and includes a total of 215,154 unique phrases from those parse trees, each annotated by 3 human judges. It uses the two-way (positive/negative) class split, with only sentence-level labels.
To load the sst2 dataset, we use the load_dataset() method from the 🤗 Datasets library.
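Since SST-2 is distributed as part of GLUE, it can be loaded via the glue configuration:

```python
from datasets import load_dataset

# loads the train, validation and test splits of SST-2
dataset = load_dataset("glue", "sst2")
```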
Pre-processing & Tokenization
To distill our model we need to convert our "Natural Language" to token IDs. This is done by a 🤗 Transformers Tokenizer which will tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary). If you are not sure what this means check out chapter 6 of the Hugging Face Course.
We are going to use the tokenizer of the Teacher, but since both tokenizers create the same output you could also go with the Student tokenizer. Additionally, we add truncation=True and max_length=512 to align the lengths and truncate texts that are longer than the maximum size allowed by the model.
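A tokenization step could look like this sketch; the "sentence" and "label" column names follow the GLUE sst2 schema loaded above:

```python
from transformers import AutoTokenizer

# tokenizer of the teacher (identical output to the student's, as verified above)
tokenizer = AutoTokenizer.from_pretrained(teacher_id)

def process(examples):
    # truncate everything longer than the model's maximum input size
    return tokenizer(examples["sentence"], truncation=True, max_length=512)

tokenized_dataset = dataset.map(process, batched=True)
# rename the label column to "labels", the name the model's forward pass expects
tokenized_dataset = tokenized_dataset.rename_column("label", "labels")
```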
Distilling the model using PyTorch and DistillationTrainer
Now that our dataset is processed, we can start distilling the model. Normally, when fine-tuning a transformer model using PyTorch you would go with the Trainer API. The Trainer class provides an API for feature-complete training in PyTorch for most standard use cases.
In our example we cannot use the Trainer out-of-the-box, since we need to pass in two models, the Teacher and the Student, and compute the loss for both. But we can subclass the Trainer to create a DistillationTrainer which takes care of it, overwriting only the compute_loss and __init__ methods. In addition we also need to subclass the TrainingArguments to include our distillation hyperparameters.
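A minimal sketch of these two subclasses is shown below. The class names are this example's own, the loss is a weighted sum of the student's cross-entropy loss and a temperature-scaled KL divergence between the softened teacher and student logits, and the compute_loss signature may differ slightly between Transformers versions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import TrainingArguments, Trainer


class DistillationTrainingArguments(TrainingArguments):
    def __init__(self, *args, alpha=0.5, temperature=2.0, **kwargs):
        super().__init__(*args, **kwargs)
        # alpha weights the student's own loss against the distillation loss,
        # temperature softens the logits before comparing the distributions
        self.alpha = alpha
        self.temperature = temperature


class DistillationTrainer(Trainer):
    def __init__(self, *args, teacher_model=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.teacher = teacher_model
        # place the teacher on the same device as the student and freeze it
        self.teacher.to(self.args.device)
        self.teacher.eval()

    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        # regular cross-entropy loss of the student on the hard labels
        outputs_student = model(**inputs)
        student_loss = outputs_student.loss

        # teacher forward pass without gradients
        with torch.no_grad():
            outputs_teacher = self.teacher(**inputs)

        # temperature-scaled KL divergence between the softened distributions
        kl_loss = nn.KLDivLoss(reduction="batchmean")(
            F.log_softmax(outputs_student.logits / self.args.temperature, dim=-1),
            F.softmax(outputs_teacher.logits / self.args.temperature, dim=-1),
        ) * (self.args.temperature ** 2)

        # weighted combination of the two losses
        loss = self.args.alpha * student_loss + (1.0 - self.args.alpha) * kl_loss
        return (loss, outputs_student) if return_outputs else loss
```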
Hyperparameter Definition, Model Loading
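With the two subclasses in place we can define the training arguments and load both models. The hyperparameter values below are only example choices, and the student reuses the teacher's label mapping so the logits of both models line up:

```python
from transformers import AutoModelForSequenceClassification

# teacher_id, student_id and repo_name were defined in the setup section

# example hyperparameters; alpha and temperature are the distillation-specific ones
training_args = DistillationTrainingArguments(
    output_dir=repo_name,
    num_train_epochs=7,
    per_device_train_batch_size=128,
    learning_rate=6e-5,
    alpha=0.5,
    temperature=4.0,
)

# teacher: already fine-tuned on SST-2
teacher_model = AutoModelForSequenceClassification.from_pretrained(teacher_id)

# student: gets the teacher's label mapping so both models agree on the class order
student_model = AutoModelForSequenceClassification.from_pretrained(
    student_id,
    num_labels=2,
    id2label=teacher_model.config.id2label,
    label2id=teacher_model.config.label2id,
)
```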
As evaluation metric we can create a compute_metrics function to evaluate our model on the test set. This function will be used during the training process to compute the accuracy & f1 of our model.
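One way to implement it, here with scikit-learn (the metrics could equally come from the 🤗 Datasets/Evaluate libraries):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    # eval_pred contains the model logits and the true labels
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return {
        "accuracy": accuracy_score(labels, predictions),
        "f1": f1_score(labels, predictions),
    }
```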
Training
We start the training by calling trainer.train() on our DistillationTrainer.
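Putting the pieces from the previous sections together, creating the DistillationTrainer and starting the training could look like this (argument names such as tokenizer= may vary slightly between Transformers versions):

```python
from transformers import DataCollatorWithPadding

# dynamically pad each batch to the longest sequence in it
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

trainer = DistillationTrainer(
    student_model,
    training_args,
    teacher_model=teacher_model,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)

trainer.train()
```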
Hyperparameter Search for the distillation parameters alpha & temperature with optuna
The parameters alpha & temperature in the DistillationTrainer can also be tuned with a hyperparameter search to maximize our "knowledge extraction". As hyperparameter optimization framework we are using Optuna, which has an integration into the Trainer API. Since the DistillationTrainer is a subclass of the Trainer, we can use hyperparameter_search without any code changes.
To do hyperparameter optimization using optuna we need to define our hyperparameter space. In this example we are trying to optimize/maximize the num_train_epochs, learning_rate, alpha & temperature for our student_model.
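A possible search space is sketched below. The keys must match fields of our DistillationTrainingArguments so the Trainer can set them for each trial; the ranges are only examples:

```python
def hp_space(trial):
    # sampled values are written into the DistillationTrainingArguments per trial
    return {
        "num_train_epochs": trial.suggest_int("num_train_epochs", 2, 10),
        "learning_rate": trial.suggest_float("learning_rate", 1e-5, 1e-4, log=True),
        "alpha": trial.suggest_float("alpha", 0.0, 1.0),
        "temperature": trial.suggest_int("temperature", 2, 30),
    }
```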
To start our hyperparameter search we just need to call hyperparameter_search and provide our hp_space and the number of trials to run.
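Because hyperparameter_search re-creates the model for every trial, the Trainer needs a model_init function instead of a fixed model. A sketch, with the number of trials as an example value:

```python
def student_init():
    # fresh student for every trial, again with the teacher's label mapping
    return AutoModelForSequenceClassification.from_pretrained(
        student_id,
        num_labels=2,
        id2label=teacher_model.config.id2label,
        label2id=teacher_model.config.label2id,
    )

trainer = DistillationTrainer(
    model_init=student_init,
    args=training_args,
    teacher_model=teacher_model,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)

best_run = trainer.hyperparameter_search(
    hp_space=hp_space,
    n_trials=50,           # example value
    direction="maximize",  # maximize the evaluation metrics
    backend="optuna",
)
```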
Since optuna only finds the best hyperparameters, we need to fine-tune our model again using the best hyperparameters from the best_run.
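One way to do this is to copy the values from best_run into our training arguments, for example:

```python
# overwrite the training arguments with the best hyperparameters found by optuna
for hyperparameter, value in best_run.hyperparameters.items():
    setattr(training_args, hyperparameter, value)
```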
We have overwritten the default hyperparameters with the ones from our best_run and can start the training now.
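Re-creating the trainer with a fresh student and the updated arguments could then look like this:

```python
# train a fresh student with the best hyperparameter configuration
trainer = DistillationTrainer(
    student_init(),
    training_args,
    teacher_model=teacher_model,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```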
Results & Conclusion
We were able to achieve an accuracy of 0.8337, which is a very good result for our model. Our distilled tiny-BERT has 96% fewer parameters than the teacher bert-base and runs ~46.5x faster while preserving over 90% of BERT's performance as measured on the SST-2 dataset.
| Model | Parameters | Speed-up | Accuracy |
|---|---|---|---|
| BERT-base | 109M | 1x | 93.2% |
| tiny-BERT | 4M | 46.5x | 83.4% |
Note: The FastFormers paper found that the biggest boost in performance is observed when the student has 6 or more layers. The google/bert_uncased_L-2_H-128_A-2 we used only has 2, which means that switching our student to, e.g., distilbert-base-uncased should give better performance in terms of accuracy.
If you are now planning to implement and add task-specific knowledge distillation to your models, I suggest taking a look at the sagemaker-distillation example, which shows how to run task-specific knowledge distillation on Amazon SageMaker. For that example I created a script derived from this notebook to make it as easy as possible for you to use. You only need to define your teacher_id, student_id as well as your dataset config to run task-specific knowledge distillation for text-classification.
In conclusion, it is just incredible how easily Transformers and the Trainer API can be used to implement task-specific knowledge distillation. We needed to write ~20 lines of custom code, deriving the Trainer into a DistillationTrainer, to support task-specific knowledge distillation while leveraging all the benefits of the Trainer API like evaluation, hyperparameter tuning, and model card creation.
In addition, we used Amazon SageMaker to easily scale our training without thinking too much about the infrastructure or how we iterate on our experiments. In the end we created an example which can be used for any text-classification dataset and any teacher & student combination for task-specific knowledge distillation.
I believe this will help companies further improve the production performance of Transformers by implementing task-specific knowledge distillation as one part of their MLOps pipeline.
You can find the code here and feel free to open a thread on the forum.
Thanks for reading. If you have any questions, feel free to contact me, through Github, or on the forum. You can also connect with me on Twitter or LinkedIn.