Welcome to this end-to-end Financial Summarization (NLP) example using Keras and Hugging Face Transformers. In this demo, we will use the Hugging Face transformers and datasets libraries together with TensorFlow & Keras to fine-tune a pre-trained seq2seq transformer for financial summarization.
We are going to use the Trade the Event dataset for abstractive text summarization. The benchmark dataset contains 303,893 news articles ranging from 2020/03/01 to 2021/05/06, downloaded from PRNewswire and Businesswire.
More information about the dataset can be found in its repository.
We are going to use all of the great features of the Hugging Face ecosystem, like model versioning and experiment tracking, as well as all the great features of Keras, like early stopping and TensorBoard.
This example will use the Hugging Face Hub as a remote model versioning service. To be able to push our model to the Hub, you need to register on Hugging Face.
If you already have an account you can skip this step.
After you have an account, we will use the notebook_login utility from the huggingface_hub package to log into our account and store our token (access key) on disk.
Setup & Configuration
In this step, we will define global configurations and parameters which are used across the whole end-to-end fine-tuning process, e.g. the tokenizer and the model we will use.
In this example we are going to fine-tune sshleifer/distilbart-cnn-12-6, a distilled version of the BART transformer. Since the original repository didn't include Keras weights, I converted the model to Keras by passing from_pt=True when loading the model.
You can easily adjust the model_id to another summarization model, e.g. google/pegasus-xsum.
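As a sketch, the global configuration could look like this (the model id matches the checkpoint named above; the dataset filename anticipates the conversion step below):

```python
# Global configuration used across the fine-tuning process.
model_id = "sshleifer/distilbart-cnn-12-6"   # distilled BART checkpoint named above
dataset_file = "evaluate_news.jsonl"         # created in the pre-processing step below
```

Swapping in another summarization checkpoint only requires changing model_id here.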
Dataset & Pre-processing
We will use the column text as the input and title as the summarization target.
The TradeTheEvent dataset is not yet available in the datasets library. To be able to create a Dataset instance, we need to write a small helper function which converts the downloaded .json to a jsonl file that can then be loaded with load_dataset.
As a first step, we need to download the dataset to our filesystem using gdown.
We should now have a file called evaluate_news.json in our filesystem and can write a small helper function to convert it to a jsonl file.
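A minimal sketch of such a helper, assuming the downloaded file is a single JSON array of article objects (the function name is my own):

```python
import json

def convert_to_jsonl(src_path, dst_path):
    """Convert a .json file holding a list of records into a .jsonl
    file with one JSON object per line, as expected by load_dataset."""
    with open(src_path, "r", encoding="utf-8") as src:
        records = json.load(src)
    with open(dst_path, "w", encoding="utf-8") as dst:
        for record in records:
            dst.write(json.dumps(record) + "\n")

# convert_to_jsonl("evaluate_news.json", "evaluate_news.jsonl")
```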
We can now remove the evaluate_news.json to save some space and avoid confusion.
To load our dataset we can use the load_dataset function from the datasets library.
Pre-processing & Tokenization
To train our model we need to convert our "Natural Language" to token IDs. This is done by a 🤗 Transformers tokenizer, which tokenizes the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary). If you are not sure what this means, check out chapter 6 of the Hugging Face Course.
Before we tokenize our dataset, we remove all of the columns unused for the summarization task to save some time and storage.
Compared to text classification, in summarization our labels are also text. This means we need to apply truncation to both the text and the title to ensure we don't pass excessively long inputs to our model. The tokenizers in 🤗 Transformers provide a nifty as_target_tokenizer() function that allows you to tokenize the labels in parallel to the inputs.
In addition, we define values for max_input_length (maximum length before the text is truncated) and max_target_length (maximum length for the summary/prediction).
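A sketch of the pre-processing step, assuming `tokenizer` is the tokenizer loaded for the model above (the function name and length values are illustrative assumptions):

```python
max_input_length = 512   # maximum length before the text is truncated
max_target_length = 64   # maximum length for the summary/prediction

def preprocess_function(examples, tokenizer):
    # Tokenize and truncate the article bodies (the inputs).
    model_inputs = tokenizer(
        examples["text"], max_length=max_input_length, truncation=True
    )
    # Tokenize the titles (the labels) in target mode.
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(
            examples["title"], max_length=max_target_length, truncation=True
        )
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

# tokenized_dataset = dataset.map(
#     lambda x: preprocess_function(x, tokenizer), batched=True
# )
```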
Since our dataset doesn't include any split, we need to create one ourselves with train_test_split to have an evaluation/test dataset for evaluating the results during and after training.
Fine-tuning the model using Keras
Now that our dataset is processed, we can download the pretrained model and fine-tune it. But before we can do this we need to convert our Hugging Face datasets Dataset into a tf.data.Dataset. For this, we will use the .to_tf_dataset method and a data collator (data collators are objects that form a batch from a list of dataset elements).
Hyperparameters
Converting the dataset to a tf.data.Dataset
To create our tf.data.Dataset, we first need to download the model to be able to initialize our data collator.
To convert our dataset, we use the .to_tf_dataset method.
Create optimizer and compile the model
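A sketch using the create_optimizer utility from transformers; the learning rate, step count, and warmup ratio below are illustrative values, not the ones from the original run:

```python
from transformers import create_optimizer

num_train_steps = 1000  # illustrative; normally len(tf_train_dataset) * num_epochs

optimizer, lr_schedule = create_optimizer(
    init_lr=5e-5,
    num_train_steps=num_train_steps,
    num_warmup_steps=int(0.1 * num_train_steps),
)

# Seq2seq models in transformers compute the loss internally from the
# labels, so no loss argument is needed when compiling:
# model.compile(optimizer=optimizer)
```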
Callbacks
As mentioned in the beginning, we want to use the Hugging Face Hub for model versioning and monitoring. Therefore we want to push our model weights to the Hub during and after training to version them.
Additionally, we want to track the performance during training; therefore we will push the TensorBoard logs along with the weights to the Hub and use the "Training Metrics" feature to monitor our training in real time.
You can find the TensorBoard on the Hugging Face Hub in your model repository under Training Metrics. We can clearly see that the experiment I ran is not perfect, since the validation loss increases again over time. But this is a good example of how to use the TensorBoard callback and the Hugging Face Hub. As a next step, I would probably switch to Amazon SageMaker and run multiple experiments with the TensorBoard integration and early stopping to find the best hyperparameters.
Training
Start training by calling model.fit.
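The call itself is plain Keras. As a self-contained illustration, here is the same pattern on a toy model; in the tutorial you would pass the compiled summarization model, the two tf.data.Datasets, and the callbacks defined above:

```python
import numpy as np
import tensorflow as tf

# Toy stand-in for the compiled summarization model.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

history = model.fit(
    np.zeros((8, 4)), np.zeros((8, 1)),
    validation_split=0.25,  # in the tutorial: validation_data=tf_eval_dataset
    epochs=1,               # in the tutorial: callbacks=callbacks as well
    verbose=0,
)
```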
Evaluation
The most commonly used metric to evaluate summarization tasks is the ROUGE score (short for Recall-Oriented Understudy for Gisting Evaluation). This metric does not behave like standard accuracy: it compares a generated summary against a set of reference summaries.
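To make the idea concrete, here is a minimal ROUGE-1 recall sketch in plain Python; for real evaluations use the rouge_score (or evaluate) package, which also reports precision, F1, ROUGE-2, and ROUGE-L:

```python
from collections import Counter

def rouge1_recall(prediction, reference):
    """Fraction of the reference's unigrams that also appear in the
    prediction (with clipped counts), i.e. ROUGE-1 recall."""
    pred_counts = Counter(prediction.lower().split())
    ref_counts = Counter(reference.lower().split())
    if not ref_counts:
        return 0.0
    overlap = sum((pred_counts & ref_counts).values())
    return overlap / sum(ref_counts.values())

# rouge1_recall("the cat sat", "the cat sat on the mat")  -> 0.5
```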
Run Managed Training using Amazon SageMaker
If you want to run this example on Amazon SageMaker to benefit from the managed training platform, follow the cells below. I converted the notebook into a Python script, train.py, which accepts the same hyperparameters and can be run on SageMaker using the HuggingFace estimator.
Install SageMaker and gdown.
Download the dataset and convert it to jsonlines.
As a next step, we create a SageMaker session to start our training. The snippet below works in Amazon SageMaker Notebook Instances or Studio. If you are running in a local environment, check out the documentation for how to initialize your session.
Now, we can define our HuggingFace estimator and hyperparameters.
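A sketch of the session and estimator setup. The hyperparameter names must match what train.py parses, and the instance type and framework versions are illustrative assumptions, not the exact ones from the original run:

```python
import sagemaker
from sagemaker.huggingface import HuggingFace

sess = sagemaker.Session()
role = sagemaker.get_execution_role()  # inside Notebook Instances / Studio; locally, pass your IAM role ARN

# Illustrative hyperparameters; train.py must accept these names.
hyperparameters = {
    "model_id": "sshleifer/distilbart-cnn-12-6",
    "epochs": 3,
    "per_device_train_batch_size": 8,
    "learning_rate": 5e-5,
}

huggingface_estimator = HuggingFace(
    entry_point="train.py",
    instance_type="ml.p3.2xlarge",   # illustrative GPU instance
    instance_count=1,
    role=role,
    transformers_version="4.12",      # example versions; use a supported combination
    tensorflow_version="2.5",
    py_version="py37",
    hyperparameters=hyperparameters,
)
```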
Upload our raw dataset to S3.
After the dataset is uploaded, we can start the training and pass our s3_uri as an argument.
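A sketch of the upload and training start, assuming the session and estimator from the previous step; the key prefix and channel name are my own choices:

```python
# Upload the converted jsonl file to the session's default S3 bucket.
s3_uri = sess.upload_data(
    path="evaluate_news.jsonl",
    key_prefix="datasets/trade-the-event",
)

# Start the managed training job; the channel name "train" becomes
# an input directory visible to train.py inside the container.
huggingface_estimator.fit({"train": s3_uri})
```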
Conclusion
We managed to successfully fine-tune a seq2seq BART transformer using Transformers and Keras, without any heavy lifting or complex and unnecessary boilerplate code. New utilities like .to_tf_dataset are improving the developer experience of the Hugging Face ecosystem, making it more Keras and TensorFlow friendly. Combining these new features with the Hugging Face Hub, we get a fully-managed MLOps pipeline for model versioning and experiment management using the Keras callback API. Through SageMaker we could easily scale our training. This was especially helpful since the training takes 10-12h depending on how many epochs are run.
You can find the code here and feel free to open a thread on the forum.
Thanks for reading. If you have any questions, feel free to contact me through GitHub or on the forum. You can also connect with me on Twitter or LinkedIn.