Learn how to combine AWS's latest computing capabilities with the benefits of serverless architectures to leverage Google's state-of-the-art NLP model.
Build a serverless question-answering API using the Serverless Framework, AWS Lambda, AWS EFS, efsync, Terraform, the `transformers` library from Hugging Face, and a MobileBERT model from Google fine-tuned on SQuAD v2.
Fine-tune a non-English (German) GPT-2 model on German recipes with Hugging Face, using the Trainer class and Pipeline objects.
Build a serverless question-answering API with BERT, Hugging Face, the Serverless Framework, and AWS Lambda.
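The core of such a serverless QA endpoint is a Lambda handler that parses the request and returns a prediction. A minimal sketch of that handler shape is below; the `answer_question` function is a hypothetical stub standing in for the Hugging Face `pipeline("question-answering")` call, which in a real deployment would load a BERT model.

```python
import json

def answer_question(question, context):
    # Hypothetical stub: in the real API this would call the
    # transformers question-answering pipeline on the loaded model.
    return {"answer": context.split(".")[0], "score": 1.0}

def handler(event, context):
    # AWS Lambda entry point: the request body arrives as a JSON string.
    body = json.loads(event["body"])
    prediction = answer_question(body["question"], body["context"])
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(prediction),
    }

# Example invocation with a Lambda-style event
event = {"body": json.dumps({
    "question": "What framework is used?",
    "context": "The Serverless Framework deploys the function. It uses AWS Lambda.",
})}
response = handler(event, None)
print(response["statusCode"])  # 200
```

In the full tutorial the model weights live on EFS (or in the deployment package), so the pipeline is created once outside the handler and reused across warm invocations.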
Build a non-English (German) BERT multi-class text-classification model with Hugging Face and Simple Transformers.
Scale your machine learning models with AWS Lambda, the Serverless Framework, and PyTorch, and learn how to build scalable deep learning inference architectures.
Build an object-detection model with AutoGluon, AWS's AutoML library.
Use K-Fold cross-validation to improve your Transformers model validation, demonstrated with BERT text classification.
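The splitting step behind that validation strategy can be sketched with scikit-learn's `KFold`; the texts and labels here are illustrative placeholders, and the actual BERT fine-tuning per fold is only indicated in a comment.

```python
from sklearn.model_selection import KFold

# Placeholder dataset: 10 labeled example sentences
texts = [f"example sentence {i}" for i in range(10)]
labels = [i % 2 for i in range(10)]

kf = KFold(n_splits=5, shuffle=True, random_state=42)
fold_sizes = []
for fold, (train_idx, val_idx) in enumerate(kf.split(texts)):
    # In the full workflow you would fine-tune BERT on train_idx,
    # evaluate on val_idx, and average the metrics across folds.
    fold_sizes.append((len(train_idx), len(val_idx)))

print(fold_sizes)  # [(8, 2), (8, 2), (8, 2), (8, 2), (8, 2)]
```

Averaging the per-fold metrics gives a more robust estimate of model quality than a single train/validation split, at the cost of training the model once per fold.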