Welcome to this end-to-end Named Entity Recognition example using Keras. In this tutorial, we will use the Hugging Face transformers and datasets libraries together with TensorFlow & Keras to fine-tune a pre-trained non-English transformer for token classification (NER).
This example will use the Hugging Face Hub as a remote model versioning service. To be able to push our model to the Hub, you need to register on Hugging Face.
If you already have an account you can skip this step.
After you have an account, we will use the notebook_login util from the huggingface_hub package to log in to our account and store our token (access key) on disk.
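A minimal login cell could look like this; it will prompt you for the access token from your Hugging Face account settings:

```python
# log in to the Hugging Face Hub and cache the access token on disk
from huggingface_hub import notebook_login

notebook_login()
```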
Setup & Configuration
In this step we will define global configurations and parameters, which are used across the whole end-to-end fine-tuning process, e.g. the tokenizer and model we will use.
In this example we are going to fine-tune deepset/gbert-base, a German BERT model.
You can change the model_id to another BERT-like model for a different language, e.g. Italian or French, to train an Italian or French Named Entity Recognition model. But don't forget to also adjust the dataset in the next step.
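As a sketch, the configuration can be as simple as two variables; the variable names here are my own choice, not fixed by any API:

```python
# model from the Hugging Face Hub we want to fine-tune
model_id = "deepset/gbert-base"
# dataset we will load in the next step
dataset_id = "germaner"
```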
To load the germaner dataset, we use the load_dataset() method from the 🤗 Datasets library.
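Loading the dataset is a one-liner; germaner ships with a single train split:

```python
from datasets import load_dataset

dataset = load_dataset(dataset_id)  # dataset_id = "germaner"
print(dataset)
```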
We can display all our NER classes by inspecting the features of our dataset. Those ner_labels will later be used to create a user-friendly output after we have fine-tuned our model.
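Assuming the label column is called ner_tags (as it is for most NER datasets on the Hub), the class names can be read from the ClassLabel feature:

```python
# ner_tags is a Sequence of ClassLabel features;
# .feature.names gives us the human-readable label strings
ner_labels = dataset["train"].features["ner_tags"].feature.names
print(ner_labels)
```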
Pre-processing & Tokenization
To train our model we need to convert our "Natural Language" to token IDs. This is done by a 🤗 Transformers Tokenizer which will tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary). If you are not sure what this means check out chapter 6 of the Hugging Face Course.
Compared to a text-classification or question-answering dataset, the "text" of the germaner dataset is already split into a list of words (tokens). So we cannot simply call tokenizer(text); instead we need to pass is_split_into_words=True to the tokenizer method. Additionally we add truncation=True to truncate texts that are longer than the maximum length allowed by the model.
We process our dataset using the .map method with batched=True.
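A sketch of the pre-processing step, assuming the dataset columns are called tokens and ner_tags; the label-alignment logic follows the standard recipe from the Hugging Face token-classification examples:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_id)

def tokenize_and_align_labels(examples):
    # the text is already split into words, so we tell the tokenizer
    tokenized_inputs = tokenizer(
        examples["tokens"], truncation=True, is_split_into_words=True
    )
    labels = []
    for i, label in enumerate(examples["ner_tags"]):
        word_ids = tokenized_inputs.word_ids(batch_index=i)
        previous_word_idx = None
        label_ids = []
        for word_idx in word_ids:
            if word_idx is None:
                label_ids.append(-100)             # special tokens are ignored by the loss
            elif word_idx != previous_word_idx:
                label_ids.append(label[word_idx])  # label only the first sub-token of a word
            else:
                label_ids.append(-100)             # ignore the remaining sub-tokens
            previous_word_idx = word_idx
        labels.append(label_ids)
    tokenized_inputs["labels"] = labels
    return tokenized_inputs

tokenized_dataset = dataset.map(tokenize_and_align_labels, batched=True)
```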
Since we later only need the tokenized columns and the labels to train the model, we filter for the columns that have been added by processing the dataset. These tokenizer_columns are the dataset columns we will load into the tf.data.Dataset.
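One way to derive those columns is to diff the features before and after tokenization:

```python
# columns that exist after tokenization but not in the raw dataset
pre_tokenizer_columns = set(dataset["train"].features)
tokenizer_columns = list(set(tokenized_dataset["train"].features) - pre_tokenizer_columns)
print(tokenizer_columns)
# typically: ['input_ids', 'token_type_ids', 'attention_mask', 'labels']
```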
Since our dataset only includes one split (train), we need to train_test_split ourselves to have an evaluation/test dataset for evaluating the results during and after training.
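The split size is a free choice; here is a sketch that holds out a fraction of the data for evaluation:

```python
# hold out part of the train split for evaluation (the fraction and seed are arbitrary choices)
split_dataset = tokenized_dataset["train"].train_test_split(test_size=0.15, seed=33)
```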
Fine-tuning the model using Keras
Now that our dataset is processed, we can download the pretrained model and fine-tune it. But before we can do this, we need to convert our Hugging Face datasets Dataset into a tf.data.Dataset. For this we will use the .to_tf_dataset method and a data collator for token classification (data collators are objects that form a batch from a list of dataset elements).
Hyperparameters
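The values below are illustrative defaults, not tuned results; adjust them to your hardware and dataset:

```python
num_train_epochs = 5
train_batch_size = 16
eval_batch_size = 32
learning_rate = 2e-5
```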
Converting the dataset to a tf.data.Dataset
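A sketch of the conversion, re-using the tokenizer_columns and batch sizes from above; DataCollatorForTokenClassification pads inputs and labels to the same length within each batch:

```python
from transformers import DataCollatorForTokenClassification

# pads input_ids, attention_mask and labels dynamically per batch
data_collator = DataCollatorForTokenClassification(tokenizer, return_tensors="tf")

tf_train_dataset = split_dataset["train"].to_tf_dataset(
    columns=tokenizer_columns,
    shuffle=True,
    batch_size=train_batch_size,
    collate_fn=data_collator,
)
tf_eval_dataset = split_dataset["test"].to_tf_dataset(
    columns=tokenizer_columns,
    shuffle=False,
    batch_size=eval_batch_size,
    collate_fn=data_collator,
)
```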
Download the pretrained transformer model and fine-tune it.
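A sketch of loading and compiling the model: passing id2label/label2id gives us readable labels in the Hub inference widget, and compiling without an explicit loss lets the model use its built-in token-classification loss.

```python
from transformers import TFAutoModelForTokenClassification, create_optimizer

id2label = {i: label for i, label in enumerate(ner_labels)}
label2id = {label: i for i, label in enumerate(ner_labels)}

# optimizer with linear learning-rate decay over all training steps
num_train_steps = len(tf_train_dataset) * num_train_epochs
optimizer, _ = create_optimizer(
    init_lr=learning_rate,
    num_train_steps=num_train_steps,
    num_warmup_steps=0,
)

model = TFAutoModelForTokenClassification.from_pretrained(
    model_id,
    num_labels=len(ner_labels),
    id2label=id2label,
    label2id=label2id,
)
# no explicit loss: the model computes its internal loss from the "labels" key
model.compile(optimizer=optimizer)
```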
Callbacks
As mentioned in the beginning, we want to use the Hugging Face Hub for model versioning and monitoring. Therefore we will push our model's weights to the Hub during and after training to version them.
Additionally, we want to track the performance during training. Therefore we will push the TensorBoard logs along with the weights to the Hub and use the "Training Metrics" feature to monitor our training in real time.
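A sketch of the callback setup; the repository name is a placeholder, and pointing the TensorBoard log directory inside the PushToHubCallback output directory is what makes the logs get uploaded alongside the weights:

```python
import os
from tensorflow.keras.callbacks import TensorBoard
from transformers.keras_callbacks import PushToHubCallback

output_dir = "gbert-base-germaner"  # placeholder repository / folder name

callbacks = [
    # write TensorBoard logs into the folder that gets pushed to the Hub
    TensorBoard(log_dir=os.path.join(output_dir, "logs")),
    # push weights (and the tokenizer) to the Hub after every epoch
    PushToHubCallback(
        output_dir=output_dir,
        tokenizer=tokenizer,
        hub_model_id=output_dir,
    ),
]
```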
Training
Start training by calling model.fit.
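```python
model.fit(
    tf_train_dataset,
    validation_data=tf_eval_dataset,
    callbacks=callbacks,
    epochs=num_train_epochs,
)
```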
Evaluation
The traditional framework used to evaluate token classification predictions is seqeval. This metric does not behave like standard accuracy: it takes the lists of labels as strings, not integers, so we need to fully decode the predictions and labels before passing them to the metric.
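A sketch of the evaluation loop, following the token-classification recipe from the Hugging Face Course; it assumes the seqeval package is installed and loads the metric via the evaluate library:

```python
import numpy as np
import evaluate

metric = evaluate.load("seqeval")

all_predictions = []
all_labels = []
for batch in tf_eval_dataset:
    logits = model.predict_on_batch(batch)["logits"]
    labels = batch["labels"].numpy()
    predictions = np.argmax(logits, axis=-1)
    for prediction, label in zip(predictions, labels):
        for predicted_idx, label_idx in zip(prediction, label):
            if label_idx == -100:  # skip special tokens and ignored sub-tokens
                continue
            all_predictions.append(ner_labels[predicted_idx])
            all_labels.append(ner_labels[label_idx])

# seqeval expects lists of label sequences as strings
results = metric.compute(predictions=[all_predictions], references=[all_labels])
print(results)
```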
Create Model Card with evaluation results
To complete our Hugging Face Hub repository, we will create a model card with the hyperparameters used and the evaluation results.
push model card to repository
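As a minimal sketch (the notebook builds a richer card), one could write a README.md containing the hyperparameters and evaluation results and upload it with huggingface_hub; the repository name is a placeholder and re-uses values from earlier cells:

```python
from huggingface_hub import HfApi, whoami

username = whoami()["name"]
repo_id = f"{username}/gbert-base-germaner"  # placeholder repository name

model_card = f"""---
language: de
---

# gbert-base-germaner

Fine-tuned `{model_id}` on the germaner NER dataset.

## Hyperparameters
- epochs: {num_train_epochs}
- learning rate: {learning_rate}
- train batch size: {train_batch_size}

## Evaluation results (seqeval)
- precision: {results["overall_precision"]:.4f}
- recall: {results["overall_recall"]:.4f}
- f1: {results["overall_f1"]:.4f}
"""

with open("README.md", "w") as f:
    f.write(model_card)

HfApi().upload_file(
    path_or_fileobj="README.md",
    path_in_repo="README.md",
    repo_id=repo_id,
)
```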
Run Managed Training using Amazon SageMaker
If you want to run this example on Amazon SageMaker to benefit from the Training Platform, follow the cells below. I converted the notebook into a Python script train.py, which accepts the same hyperparameters and can be run on SageMaker using the HuggingFace estimator.
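A sketch of the estimator setup; the instance type, framework versions and hyperparameter names are assumptions, so check your train.py arguments and the supported Hugging Face DLC version combinations before running it:

```python
import sagemaker
from sagemaker.huggingface import HuggingFace

role = sagemaker.get_execution_role()

# hyperparameters are passed to train.py as command-line arguments;
# the names below are assumptions and must match the script's argparse setup
hyperparameters = {
    "model_id": "deepset/gbert-base",
    "dataset_id": "germaner",
    "num_train_epochs": 5,
    "train_batch_size": 16,
}

huggingface_estimator = HuggingFace(
    entry_point="train.py",
    source_dir="./scripts",         # folder containing train.py
    instance_type="ml.p3.2xlarge",  # single-GPU instance (assumption)
    instance_count=1,
    role=role,
    transformers_version="4.17",    # pick a supported DLC version combination
    tensorflow_version="2.6",
    py_version="py38",
    hyperparameters=hyperparameters,
)

huggingface_estimator.fit()
```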
Conclusion
We managed to successfully fine-tune a German BERT model using Transformers and Keras, without any heavy lifting or complex and unnecessary boilerplate code. The new utilities like .to_tf_dataset are improving the developer experience of the Hugging Face ecosystem to become more Keras and TensorFlow friendly. Combining those new features with the Hugging Face Hub, we get a fully-managed MLOps pipeline for model versioning and experiment management using the Keras callbacks API.
Big Thanks to Matt for all the work he is doing to improve the experience using Transformers and Keras.
Now it's your turn! Adjust the notebook to train a BERT model for another language like French, Spanish or Italian. 🇫🇷 🇪🇸 🇮🇹
You can find the code here and feel free to open a thread on the forum.
Thanks for reading. If you have any questions, feel free to contact me, through Github, or on the forum. You can also connect with me on Twitter or LinkedIn.