On its surface, AWS Lambda seems simple: it merely invokes a function with some arguments. But there is a lot of hidden complexity, and it becomes most apparent when something goes wrong.
Debugging can't be done remotely, so you have to rely on logging, and the logs have to be retrieved from CloudWatch (which is clunky in its own way). If you don't have a log line in just the right place, you need to do a full deploy, which means zipping every file needed to execute the function. For Node, that includes the entirety of node_modules, which, even for simple functions, can be tens of megabytes.
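Since there's no debugger to attach, console.log calls scattered through the handler are the main tool, and their output only shows up in CloudWatch Logs after the next deploy. A minimal sketch of what that looks like (the event shape and the processKey helper are illustrative, not from a real project):

```typescript
// Minimal Node Lambda handler sketch. The console.log output is the only
// practical visibility into what happened, and it surfaces in CloudWatch Logs.
export const handler = async (event: { key?: string }) => {
  console.log("received event", JSON.stringify(event)); // shows up in CloudWatch

  const result = await processKey(event.key); // hypothetical helper
  console.log("processed", result); // add another of these, redeploy, repeat

  return { statusCode: 200, body: JSON.stringify(result) };
};

// Placeholder for whatever work the function actually does.
async function processKey(key?: string): Promise<string> {
  return `processed:${key ?? "none"}`;
}
```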
You also need to contend with the async nature of Node. It's easy to write a handler only to find that it runs but doesn't fully execute: somewhere a result isn't being awaited, so the handler returns early, the event loop drains, and the runtime thinks you're done. In one case, I could invoke the function locally (using the serverless CLI tool) and it worked just fine, but in Lambda it failed.
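A sketch of that pitfall, assuming a hypothetical saveRecord() helper standing in for a database or API call:

```typescript
// Bug: no await. The handler's promise resolves immediately, Lambda considers
// the invocation finished, and the in-flight write may never complete, even
// though the same code appears to work when run locally.
export const handler = async (event: { id: string }) => {
  saveRecord(event.id);
  return { statusCode: 200 };
};

// Corrected version: awaiting keeps the invocation alive until the work is done.
export const handlerFixed = async (event: { id: string }) => {
  await saveRecord(event.id);
  return { statusCode: 200 };
};

// Hypothetical async side effect standing in for a DB or API call.
async function saveRecord(id: string): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, 100));
  console.log(`saved ${id}`);
}
```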
Finally, the arguments passed in (an event and a context object) are themselves opaque. Depending on how the Lambda is invoked, the event may or may not include authentication information from AWS Cognito. You have to keep all of these details in your head, and most of them can only be discovered through runtime failures (which, as mentioned, are clunky to debug).
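For example, here is a sketch of digging authentication details out of the event, assuming an API Gateway REST integration with a Cognito authorizer; whether these fields exist at all depends on how the function was invoked:

```typescript
// Rough shape of the event when a Cognito authorizer is in front of the
// function. Invoked any other way (console test, direct invoke), the
// authorizer block is simply absent.
interface MaybeCognitoEvent {
  requestContext?: {
    authorizer?: {
      claims?: Record<string, string>;
    };
  };
}

export const handler = async (event: MaybeCognitoEvent) => {
  const claims = event.requestContext?.authorizer?.claims;

  if (!claims) {
    // Often the first time you learn this is a runtime failure in CloudWatch.
    console.warn("no Cognito claims on event", JSON.stringify(event));
    return { statusCode: 401, body: "unauthenticated" };
  }

  return { statusCode: 200, body: `hello ${claims["email"] ?? claims["sub"]}` };
};
```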
The overall feedback loop is slow (slowness begets more slowness) and difficult to debug due to incidental complexity.
Links to this note
- Options for Brotli Compression Using Cloudfront and S3