AWS Lambda with custom docker images as runtime
It's the most wonderful time of the year. Of course, I'm not talking about Christmas but about re:Invent. Due to the current situation in the world, re:Invent is not taking place in Las Vegas as usual but is entirely virtual and free of charge, which means everyone can attend. In addition, it lasts 3 weeks this year, from 30.11.2020 to 18.12.2020. If you haven't already registered, do it here.
In the opening keynote, Andy Jassy presented the AWS Lambda Container Support, which allows you to use custom container (docker) images as a runtime for AWS Lambda. With that, we can build runtimes larger than the previous 250 MB limit, be it for "State-of-the-Art" NLP APIs with BERT or complex processing.
photo from the keynote by Andy Jassy, rights belong to Amazon
Furthermore, you can now configure AWS Lambda functions with up to 10 GB of memory and 6 vCPUs.
In their blog post, Amazon explains how to use containers as a runtime for AWS Lambda via the console. But the blog post does not explain how to use custom docker images with the Serverless Application Model. This blog post fills that gap.
Services included in this tutorial
AWS Lambda
AWS Lambda is a serverless computing service that lets you run code without managing servers. It executes your code only when required and scales automatically, from a few requests per day to thousands per second.
Amazon Elastic Container Registry
Amazon Elastic Container Registry (ECR) is a fully managed container registry. It allows us to store, manage, and share docker container images, either privately within your organization or publicly worldwide for anyone.
AWS Serverless Application Model
The AWS Serverless Application Model (SAM) is an open-source framework and CLI to build serverless applications on AWS. You define the application you want in YAML format. Afterwards, you build, test, and deploy it using the SAM CLI.
Tutorial
We are going to build an AWS Lambda function with a docker container as runtime using the "AWS Serverless Application Model". We create a new custom docker image using the presented Lambda Runtime API base images.
What are we going to do:
- Install and setup `sam`
- Create a custom docker image
- Deploy the custom docker image to ECR
- Deploy an AWS Lambda function with the custom docker image
You can find the complete code in this Github repository.
Install and setup sam
AWS provides a 5-step guide on how to install `sam`. In this tutorial, we are going to skip steps 1-3 and assume you already have an AWS account, an IAM user with the correct permissions set up, and docker installed and set up; otherwise, check out this link.
The easiest way is to create an IAM user with AdministratorAccess (but I don't recommend this for production use cases).
We are going to continue with step 4, "installing Homebrew". To install Homebrew, we run the following command in our terminal.
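The original command is not shown here; the official Homebrew install one-liner looks like this (check the Homebrew homepage for the current version):

```shell
# Download and run the official Homebrew install script
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```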
Note: Linux users have to add Homebrew to their PATH by running the following commands.
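The PATH commands are missing here; roughly, per the Homebrew/SAM install docs, they look like this (the exact paths depend on your Linuxbrew install location):

```shell
# Load Homebrew's environment for the current shell session
test -d ~/.linuxbrew && eval "$(~/.linuxbrew/bin/brew shellenv)"
test -d /home/linuxbrew/.linuxbrew && eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"
# Persist it for future shells
echo "eval \"\$($(brew --prefix)/bin/brew shellenv)\"" >> ~/.profile
```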
Afterwards, we can run `brew --version` to verify that Homebrew is installed.
The fifth and last step is to install `sam` using Homebrew. We can install the SAM CLI with `brew tap aws/tap` followed by `brew install aws-sam-cli`.
After installing it, we have to make sure we have at least version 1.13.0 installed; you can check with `sam --version`.
To update `sam` if you already have it installed, you can run `brew upgrade aws-sam-cli`.
Create a custom docker image
After the setup, we are going to build a custom Python docker image.
We create an `app.py` file and paste the following code into it.
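The original snippet is not shown here; a minimal handler sketch that echoes the invocation payload could look like this (the response shape is my assumption, not the author's exact code):

```python
import json


def handler(event, context):
    # Return an API-Gateway-style response echoing the incoming event.
    # The body content is illustrative; adapt it to your use case.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "hello world", "event": event}),
    }
```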
To containerize our Lambda function, we create a `Dockerfile` in the same directory and copy the following content into it.
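The original content is not shown here; a sketch using the public AWS Lambda base image for Python could look like this (the Python version is my assumption):

```dockerfile
# Use the official AWS Lambda base image for Python
FROM public.ecr.aws/lambda/python:3.8

# Copy the function code into the Lambda task root
COPY app.py ${LAMBDA_TASK_ROOT}

# Set the handler as "<file>.<function>"
CMD ["app.handler"]
```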
Additionally, we can add a `.dockerignore` file to exclude files from our container image.
To build our custom docker image, we run the following command.
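The build command is missing here; assuming we name the image `docker-lambda` (the repository name used later in this post), it would be:

```shell
# Build the image from the Dockerfile in the current directory
docker build -t docker-lambda .
```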
Then, to test it, we run the image locally.
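The run command is missing here; it needs to map the container's port 8080, which the `curl` invocation that follows targets (the image name `docker-lambda` is my assumption):

```shell
# Start the container and expose the Lambda Runtime Interface on port 8080
docker run -p 8080:8080 docker-lambda
```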
Afterwards, in a separate terminal, we can locally invoke the function using `curl`.

```shell
curl -XPOST "http://localhost:8080/2015-03-31/functions/function/invocations" -d '{"payload":"hello world!"}'
```
Deploy a custom docker image to ECR
Since we now have a local docker image, we can deploy it to ECR. To do so, we need to create an ECR repository named `docker-lambda`.
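The command is missing here; with the AWS CLI, creating the repository looks like this:

```shell
# Create the ECR repository for our image
aws ecr create-repository --repository-name docker-lambda
```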
using AWS CLI v1.x
To be able to push our images, we need to log in to ECR. We do this by executing the output (`$()`) of the command returned by `aws ecr get-login`. (Yes, the `$` is intended.)
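The command is missing here; for CLI v1.x it looks roughly like this (the region is an assumption, substitute your own):

```shell
# get-login prints a "docker login" command; $() executes it
$(aws ecr get-login --no-include-email --region us-east-1)
```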
using AWS CLI v2.x
With AWS CLI v2.x, the login works differently; you can read more here.
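For CLI v2.x, the login pipes a password into `docker login`; roughly like this (account ID and region are placeholders, substitute your own):

```shell
# Retrieve an auth token and pass it to docker login via stdin
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com
```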
Next, we need to `tag` / rename our previously created image to the ECR format. The format for this is:
`{AccountID}.dkr.ecr.{region}.amazonaws.com/{repository-name}`
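The tag command is missing here; with the placeholder account ID and region from above, it would be:

```shell
# Re-tag the local image into the ECR naming format
docker tag docker-lambda 123456789.dkr.ecr.us-east-1.amazonaws.com/docker-lambda
```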
To check if it worked, we can run `docker images` and should see an image with our tag as its name.
Finally, we push the image to ECR Registry.
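The push command is missing here; with the same placeholder account ID and region, it would be:

```shell
# Push the tagged image to the ECR repository
docker push 123456789.dkr.ecr.us-east-1.amazonaws.com/docker-lambda
```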
Deploy AWS Lambda function with a custom docker image
Now, we can create our `template.yaml` to define our Lambda function using our docker image. In the `template.yaml` we include the configuration for our AWS Lambda function. I provide the complete `template.yaml` for this example, but we only go through the details we need for our docker image and leave out all standard configurations. If you want to learn more about the sam `template.yaml`, you can read through the documentation here.
To use a docker image in our `template.yaml`, we have to include the parameters `ImageUri` and `PackageType` in our `AWS::Serverless::Function` resource. The `ImageUri`, as the name suggests, is the URL to our docker image. For an ECR image, the URL looks like this: `123456789.dkr.ecr.us-east-1.amazonaws.com/myimage:latest`, and for a public docker image like this: `namespace/image:tag` or `docker.io/namespace/image:tag`.
`PackageType` defines the type of package we provide to our AWS Lambda function, in our case an `Image`.
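The full template is not reproduced here; a minimal sketch with these two parameters could look like this (the resource name, memory, timeout, and event configuration are my assumptions; account ID and region are placeholders):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  DockerLambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      PackageType: Image
      ImageUri: 123456789.dkr.ecr.us-east-1.amazonaws.com/docker-lambda:latest
      MemorySize: 1024
      Timeout: 30
      Events:
        ApiEvent:
          Type: HttpApi
```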
Afterwards, we can deploy our application using `sam deploy --guided`, and that's it. The guided deployment will walk us through all required parameters and will create a `samconfig.toml` for us afterwards.
After the successful deployment, we should see something like this.
We take the URL of our API Gateway from the `Outputs` section and use any REST client to test it.
It worked. 🚀
We successfully created and deployed an AWS Lambda function with a custom docker image as runtime.
Conclusion
The release of the AWS Lambda Container Support enables much wider use of AWS Lambda and Serverless. It fixes many existing problems and gives us greater scope for the deployment of serverless applications.
Another area in which I see great potential is machine learning, as the custom runtime enables us to include larger machine learning models in our runtimes. The increase of configurable Memory and vCPUs boost this even more.
The future looks more than golden for AWS Lambda and Serverless.
You can find the GitHub repository with the complete code here.
Thanks for reading. If you have any questions, feel free to contact me or comment on this article. You can also connect with me on Twitter or LinkedIn.