Deploy LayoutLM with Hugging Face Inference Endpoints
In this blog, you will learn how to deploy a fine-tuned LayoutLM (v1) for document understanding using Hugging Face Inference Endpoints. LayoutLM is a multimodal Transformer model for document image understanding and information extraction, and can be used for form understanding and receipt understanding. LayoutLM (v1) is the only model in the LayoutLM family with an MIT license, which, unlike LayoutLMv2/LayoutLMv3, allows it to be used for commercial purposes.
If you want to learn how to fine-tune LayoutLM, check out my previous blog post, “Document AI: Fine-tuning LayoutLM for document-understanding using Hugging Face Transformers”.
Before we can get started, make sure you meet all of the following requirements:
- An Organization/User with an active plan and WRITE access to the model repository.
- Access to the UI: https://ui.endpoints.huggingface.co
The tutorial will cover how to:
- Deploy the custom handler as an Inference Endpoint
- Send HTTP requests using Python
- Draw the results on the image
What is Hugging Face Inference Endpoints?
🤗 Inference Endpoints offers a secure production solution to easily deploy Machine Learning models on dedicated and autoscaling infrastructure managed by Hugging Face.
A Hugging Face Inference Endpoint is built from a Hugging Face Model Repository. It supports all Transformers and Sentence-Transformers tasks, as well as any arbitrary ML framework, through easy customization by adding a custom inference handler. This custom inference handler can be used to implement simple inference pipelines for ML frameworks like Keras, TensorFlow, and scikit-learn, or to add custom business logic to your existing transformers pipeline.
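Concretely, a custom handler is a handler.py file in the model repository that exposes an EndpointHandler class with two methods. A minimal skeleton (the method bodies here are placeholders) looks like this:

```python
from typing import Any, Dict, List


class EndpointHandler:
    def __init__(self, path: str = ""):
        # `path` points to the local clone of the model repository;
        # load your model, processor, or other artifacts here.
        pass

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, Any]]:
        # `data` holds the deserialized request payload; run inference
        # and return a JSON-serializable result.
        return []
```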
Tutorial: Deploy LayoutLM and Send requests
In this tutorial, you will learn how to deploy LayoutLM to Hugging Face Inference Endpoints and how to integrate it via an API into your products.
This tutorial does not cover how to create the custom handler for inference. If you want to learn how to create a custom handler for Inference Endpoints, you can either check out the documentation or go through “Custom Inference with Hugging Face Inference Endpoints”.
We are going to deploy philschmid/layoutlm-funsd, which ships a custom handler.py for document understanding.
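In essence, the handler loads the fine-tuned model together with a processor that runs OCR on the incoming image, classifies every token, and returns the entity labels with their bounding boxes. Below is a simplified sketch of that logic; the handler.py in the repository is the authoritative version, and helper names like unnormalize_box are illustrative. The OCR step assumes tesseract-ocr and pytesseract are available on the instance.

```python
from typing import Any, Dict, List

import torch
from transformers import LayoutLMForTokenClassification, LayoutLMv2Processor

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")


def unnormalize_box(bbox, width, height):
    # LayoutLM boxes are normalized to a 0-1000 grid; scale them back to pixels.
    return [
        width * (bbox[0] / 1000),
        height * (bbox[1] / 1000),
        width * (bbox[2] / 1000),
        height * (bbox[3] / 1000),
    ]


class EndpointHandler:
    def __init__(self, path: str = ""):
        self.model = LayoutLMForTokenClassification.from_pretrained(path).to(device)
        # LayoutLMv2Processor runs OCR (requires tesseract-ocr + pytesseract)
        # to extract words and bounding boxes from the raw image.
        self.processor = LayoutLMv2Processor.from_pretrained(path)

    def __call__(self, data: Dict[str, Any]) -> Dict[str, List[Any]]:
        # Inference Endpoints deserializes the binary image payload into a PIL.Image.
        image = data.pop("inputs", data)
        encoding = self.processor(image, return_tensors="pt")

        with torch.inference_mode():
            outputs = self.model(
                input_ids=encoding.input_ids.to(device),
                bbox=encoding.bbox.to(device),
                attention_mask=encoding.attention_mask.to(device),
                token_type_ids=encoding.token_type_ids.to(device),
            )
        predictions = outputs.logits.softmax(-1).squeeze(0)

        # Keep every token that was classified as an entity (label != "O").
        results = []
        for probs, token_id, bbox in zip(
            predictions.cpu(), encoding.input_ids.squeeze(0), encoding.bbox.squeeze(0)
        ):
            label = self.model.config.id2label[int(probs.argmax())]
            if label == "O":
                continue
            results.append(
                {
                    "label": label,
                    "score": probs.max().item(),
                    "text": self.processor.tokenizer.decode([int(token_id)]),
                    "bbox": unnormalize_box(bbox.tolist(), image.width, image.height),
                }
            )
        return {"predictions": results}
```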
1. Deploy the custom handler as an Inference Endpoint
UI: https://ui.endpoints.huggingface.co/
The first step is to deploy our model as an Inference Endpoint. We can deploy our custom handler the same way as a regular Inference Endpoint.
Select the repository, the cloud, and the region, adjust the instance and security settings, and deploy.
During the creation of your endpoint, the Inference Endpoint service will check whether a valid handler.py is available and will use it for serving requests, no matter which “Task” you select.
Note: Make sure that the “Task” in the Advanced Config is set to “Custom”. This will replace the inference widget with the custom inference widget so we can easily test our model.
After deploying our endpoint, we can test it using the inference widget. Since we have a Custom task, we can directly upload a form as “file input”.
2. Send HTTP requests using Python
Hugging Face Inference Endpoints can directly work with binary data, which means we can send the image of our document directly to the endpoint. We are going to use requests to send our requests (make sure you have it installed: pip install requests).
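A minimal client could look like the following sketch; ENDPOINT_URL, HF_TOKEN, and the image path are placeholders you need to replace with your own values:

```python
import mimetypes

import requests

ENDPOINT_URL = ""  # URL of your Inference Endpoint
HF_TOKEN = ""  # token of the account/organization that deployed it


def predict(path_to_image: str):
    # Read the document image as raw bytes and send it as the request body.
    with open(path_to_image, "rb") as f:
        body = f.read()
    headers = {
        "Authorization": f"Bearer {HF_TOKEN}",
        # Content-Type tells the endpoint how to deserialize the binary payload.
        "Content-Type": mimetypes.guess_type(path_to_image)[0],
    }
    response = requests.post(ENDPOINT_URL, headers=headers, data=body)
    return response.json()


prediction = predict(path_to_image="form.png")
print(prediction)
# e.g. {"predictions": [{"label": "B-QUESTION", "score": 0.99, "text": "...", "bbox": [...]}, ...]}
```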
3. Draw the results on the image
To get a better understanding of what the model predicted, you can also draw the predictions onto the provided image.
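Assuming the response schema from the handler sketch above (a predictions list with label and bbox keys), a simple way to do this with Pillow could look like this:

```python
from PIL import Image, ImageDraw, ImageFont

# one color per FUNSD entity type
label2color = {
    "B-HEADER": "blue", "I-HEADER": "blue",
    "B-QUESTION": "red", "I-QUESTION": "red",
    "B-ANSWER": "green", "I-ANSWER": "green",
}


def draw_result(path_to_image: str, predictions: list) -> Image.Image:
    image = Image.open(path_to_image).convert("RGB")
    draw = ImageDraw.Draw(image)
    font = ImageFont.load_default()
    for pred in predictions:
        color = label2color.get(pred["label"], "black")
        # bbox is already unnormalized to pixel coordinates by the handler
        draw.rectangle(pred["bbox"], outline=color)
        draw.text(
            (pred["bbox"][0] + 10, pred["bbox"][1] - 10),
            text=pred["label"],
            fill=color,
            font=font,
        )
    return image


# `prediction` is the response from the predict() example above
draw_result("form.png", prediction["predictions"]).save("form_annotated.png")
```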
Conclusion
That's it! We successfully deployed our LayoutLM to Hugging Face Inference Endpoints and ran some predictions.
To underline this again: we created a managed, secure, scalable inference endpoint that runs our custom handler, including our custom logic. This allows data scientists and machine learning engineers to focus on R&D and improving the model, rather than fiddling with MLOps topics.
Now, it's your turn! Sign up and create your custom handler within a few minutes!
Thanks for reading! If you have any questions, feel free to contact me through GitHub or on the forum. You can also connect with me on Twitter or LinkedIn.