Comparing Container and Zip Lambdaliths with Thin Functions

Feb 15, 2024 · 5 min read

In a world where time costs money, small changes can yield big savings. In a highly cyclical business you may be left with a large amount of wasted or under-utilised resources. A serverless platform makes a lot of sense here, taking advantage of little to no baseline cost and fast scaling. AWS Lambda is at the heart of this, but it’s still important to make sure we’re getting the best value for money. The runtime cost is only one consideration in the total cost of ownership; keeping our engineering teams productive is another big factor. Reducing the cognitive complexity of our API Gateways and Node function code can make a big difference here, along with the benefit of easily running some things locally. In this post I’ll compare thin functions against API frameworks like Express (run as a Lambdalith), and compare deploying our code as a container image against a bundled zip file.

Methodology

For my Express functions I’ll be running exactly the same code on Lambda, one deployed as a container image and one as a zip artifact. Both run on Lambda using the AWS Lambda Web Adapter. I’ll be using the Node 20 runtime as it’s the latest Node LTS supported by Lambda. The functions running outside the framework, the thin functions, perform exactly the same work as the Express route. Each function handles a request, writes some data to DynamoDB and returns a response. This is not a complicated example, but it’s one that is common across many serverless applications.
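To make the comparison concrete, here’s a minimal sketch of what one of these thin functions might look like. The table name, item shape and injected DynamoDB client are illustrative rather than the exact benchmark code; injecting the client keeps the handler easy to exercise locally with a stub.

```javascript
// Sketch of a thin function: one handler per route, writing an item to
// DynamoDB and returning an API Gateway proxy response.
// The DynamoDB client and table name are injected so the handler can be
// exercised locally with a stub — names here are illustrative.
const makeHandler = (ddb, tableName) => async (event) => {
  const body = JSON.parse(event.body ?? "{}");
  const item = { pk: body.id, createdAt: new Date().toISOString(), ...body };

  // With the real AWS SDK v3 this would be a PutCommand sent via a
  // DynamoDBDocumentClient; the stub only needs a compatible `put` method.
  await ddb.put({ TableName: tableName, Item: item });

  return {
    statusCode: 201,
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ id: item.pk }),
  };
};
```

The Express Lambdalith runs the equivalent logic inside a route handler; the work done per request is identical, which is why the warm comparisons below are so close.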

To gather my data I’ve used a slightly modified version of the Lambda Power Tuner tool. This allowed me to get result data for purely cold starts, as well as warmed Lambda invocations, over a large array of memory allocations and invocations. Whilst cold starts are an ever decreasing reality of Lambda Functions, for some endpoints where consistent performance is highly desirable, it is something we need to consider.

Results

There are quite a few different things to consider and compare with the results including running a container vs a zip artifact, running a Lambdalith vs thin function and warm vs cold functions, as well as various memory allocations. On Lambda, memory allocation has a direct impact on the amount of CPU you are given access to. This data is available as a CSV here.

Comparing Express Lambdaliths cold starts in zip and container package types
Comparing Express Lambdaliths warm starts in zip and container package types
Comparing thin Lambda Function cold starts in zip and container package types
Comparing thin Lambda Function warm starts in zip and container package types

As we can see, container cold start performance is consistently worse than zip cold start performance. Whilst this isn’t documented anywhere by AWS, AJ Stuyvenberg has published some good analysis showing we should expect better performance from zip packages up to around a 30 MB zip file. In all these tests our zip archive is 100–300 KB, and the container image is 48–50 MB. The warm starts are pretty similar, which is to be expected as the functions perform the same task and the routing overhead in Express is very small. At these small sizes, container cold starts are much worse than zip-packaged code.

The comparison of Lambdalith to thin function cold starts is more interesting. The Lambdalith overhead adds around 100 ms. Depending on your traffic profile this may not have a particularly large impact, and it may be an acceptable price to pay for an improved developer experience and fewer resources to manage on AWS. The difference is well visualised in this chart from the Lambda Power Tuner comparing the two.

Lambdalith vs thin cold start functions in a zip package
Lambdalith vs thin warm start functions in a zip package

Consistent but marginally worse Lambdalith performance on cold starts, and very similar performance for warm functions, makes a compelling argument for a Lambdalith. Many engineers are familiar with JavaScript frameworks like Express and NestJS, but possibly less familiar with the request and response schemas of API Gateway and Lambda, as well as with configuring the various moving parts you need for a multi-route API Gateway. Using a Lambdalith helps limit, or remove, large parts of that configuration.
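Part of that unfamiliarity is the event shape itself: an API Gateway proxy event looks nothing like an Express `req`. As a rough illustration, here’s a sketch of pulling the familiar fields out of an event (field names follow the API Gateway HTTP API payload version 2.0; this is not what the Lambda Web Adapter actually does internally):

```javascript
// Sketch: extracting req-like fields from an API Gateway HTTP API
// (payload v2.0) proxy event. Illustrative only — frameworks and the
// Lambda Web Adapter handle this translation for you.
const toRequest = (event) => ({
  method: event.requestContext.http.method,
  path: event.rawPath,
  query: event.queryStringParameters ?? {},
  headers: event.headers ?? {},
  body: event.body ? JSON.parse(event.body) : undefined,
});
```

This is the kind of glue a Lambdalith lets teams skip entirely, at the cost of that ~100 ms of cold start overhead.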

Conclusion

Lambda as a compute platform doesn’t care what your code is, or how you run it. But as engineers (and bill payers!) we have to weigh up how engineering teams can get the best out of Lambda Functions, whilst ensuring our customers get great performance and engineering teams remain productive. There’s still a lot to be said about splitting up Lambda Functions to small units of code and offloading a lot of routing, authorization, caching and rate limiting to API Gateway (as well as all the other reasons it’s a great product). But full featured frameworks make it quick and easy for engineering teams to use patterns familiar to them without having to pay a large performance penalty.

If you have engineering teams with less API Gateway and Lambda experience then a Lambdalith function, with the fast and easy-to-use Lambda Web Adapter, is a great way to go. Just make sure you’re minifying your bundles! For smaller functions, where zip archives are kept lightweight, there isn’t a lot of value in deploying as a container image. Container images also run as custom runtimes on Lambda, so you pay for the cold start (init) duration. This may or may not be a large amount, but it’s worth considering.
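For reference, minification is usually a one-flag change in the bundler. A typical esbuild invocation might look like this (the entry point and output path are illustrative, not from the benchmark project):

```shell
npx esbuild src/handler.js \
  --bundle \
  --minify \
  --platform=node \
  --target=node20 \
  --outfile=dist/handler.js
```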

I hope you found some of this interesting. With some of this evidence I’ll certainly look at using Lambdalith functions much more often when building out my synchronous APIs.


Written by Ryan Cormack

Serverless engineer and AWS Community Builder working on event driven AWS solutions
