AWS released Lambda in late 2014, popularizing the buzzword serverless.
With EC2 you didn't have to think about physical servers anymore, only about virtual machines.
With Lambda, there's not even that left to maintain.
Just bring your code.
{ 2/31 }
AWS will take care of provisioning the underlying infrastructure and container.
Besides saving the operational overhead you'd have when managing virtual machines or containers, you'll only be billed when your function is actually executed.
{ 3/31 }
What does that mean?
If you're building a spike or an MVP for a business idea that solely uses serverless services like Lambda, you won't incur costs while your service is idle.
That's a huge plus in comparison to using ECS or EC2.
{ 4/31 }
What's probably not obvious, and where I've often seen confusion amongst beginners:
A single Lambda instance will only process a single request at a time.
If two requests reach your Lambda at the exact same time, two dedicated instances are needed.
{ 5/31 }
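A quick back-of-the-envelope sketch of what that means in practice (my own arithmetic via Little's law, not an official AWS formula): since one instance handles one request at a time, the concurrency you need is roughly arrival rate times average duration.

```python
import math

# Rough estimate, assuming steady traffic: one Lambda instance serves one
# request at a time, so concurrent instances ≈ requests/s × avg duration (s).
def estimated_concurrency(requests_per_second: float, avg_duration_s: float) -> int:
    return math.ceil(requests_per_second * avg_duration_s)

# 50 req/s with 200 ms handlers keeps ~10 instances busy;
# the same traffic with 2 s handlers needs ~100.
print(estimated_concurrency(50, 0.2))   # 10
print(estimated_concurrency(50, 2.0))   # 100
```

This is also why shaving handler duration directly reduces how many instances (and cold starts) you'll see.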
Why's that important?
AWS doesn't want to block compute resources for idle functions, so it will regularly de-provision your function.
Even if your functions are continuously invoked, they will be de-provisioned at some time.
{ 6/31 }
This results in the feared Cold Starts.
If your request triggers a new Lambda instance, there will be a significantly longer delay until your function code is executed.
If you're not using a lightweight framework, the bootstrap will take even more time.
{ 7/31 }
You can work against that by regularly invoking your functions with health checks, but as said, that won't completely protect you from cold starts.
That's why it's important not to run code that needs a lot of spin-up time inside your handler, which is the entry point of your Lambda function.
Everything outside the handler is executed only when your function has a cold start and won't disappear from memory until the instance is de-provisioned.
{ 9/31 }
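The health-check trick above can be sketched like this. The `{"warmup": true}` payload is my own convention for a scheduled EventBridge ping, not an AWS standard; the point is to short-circuit warm-up invocations before they hit your business logic.

```python
# Sketch of the "keep warm" pattern, assuming a scheduled rule invokes the
# function with a custom payload like {"warmup": true} (hypothetical shape).
def handler(event, context):
    # Return immediately for warm-up pings: cheap, fast, no side effects.
    if isinstance(event, dict) and event.get("warmup"):
        return {"statusCode": 200, "body": "warm"}

    # ... real business logic would go here ...
    return {"statusCode": 200, "body": "hello"}

print(handler({"warmup": True}, None))  # {'statusCode': 200, 'body': 'warm'}
```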
Another gift we receive from AWS:
the global code outside of your handler method is executed with full memory & CPU and isn't billed for the first 10 seconds.
Make use of this by bootstrapping your core framework outside the handler.
{ 10/31 }
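A minimal sketch of that pattern: module-level code runs once per cold start (during the init phase), and whatever it builds stays in memory for all following invocations on that instance.

```python
# Module scope == init phase: runs once per cold start, not per request.
INIT_COUNT = 0

def _expensive_bootstrap():
    # Imagine loading config, opening DB connections, warming a framework...
    # (placeholder stand-in for whatever your real bootstrap does)
    global INIT_COUNT
    INIT_COUNT += 1
    return {"db": "connected"}

# Executed at import time, i.e. at cold start, NOT on every invocation.
RESOURCES = _expensive_bootstrap()

def handler(event, context):
    # Warm invocations reuse the already-initialized resources.
    return {"statusCode": 200, "init_runs": INIT_COUNT}

# Two "requests" on the same warm instance: bootstrap still ran only once.
print(handler({}, None)["init_runs"])  # 1
print(handler({}, None)["init_runs"])  # 1
```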
Runtimes
Lambda supports everything you can think of, ranging from Node.js over Ruby & Python to Java.
Not finding your preference?
You can bring your own runtime.
When your function is executed, all the dependencies it needs have to be bundled into your deployment unit.
If you're using Node.js, for example, your node_modules can easily reach 100 MB.
You don't want to package & deploy this every time.
{ 12/31 }
And you rarely need to, as dependencies change or get updated far less often than your business logic code.
With Layers, you can bundle your dependencies separately and then attach them to one or several functions.
Next time, only deploy your code!
{ 13/31 }
Security
As with other services, your function is protected via IAM
By default, there's no ingress traffic possible to your function, but all egress to the internet
You can assign your function to a VPC to access other services there, but you don't have to
You can also assign dedicated reservations of parallel executions to your function.
This number is subtracted from your account's default soft limit of 1,000 parallel executions (which can be increased via support).
{ 17/31 }
It guarantees that this concurrency level is always available to your function.
What it also ensures: this concurrency level can't be exceeded!
The wording's not the best, as reserved and provisioned concurrency are often confused.
{ 18/31 }
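The subtraction above can be sketched as plain arithmetic. The minimum unreserved pool of 100 reflects my understanding of the AWS rule at the time of writing; check the current quotas before relying on it.

```python
# Back-of-the-envelope sketch: each reservation is taken out of the regional
# account pool (default soft limit 1000), and AWS keeps a minimum unreserved
# pool (assumed 100 here) for all functions without a reservation.
ACCOUNT_LIMIT = 1000
MIN_UNRESERVED = 100

def unreserved_pool(reservations: dict) -> int:
    remaining = ACCOUNT_LIMIT - sum(reservations.values())
    if remaining < MIN_UNRESERVED:
        raise ValueError("reservation would shrink the unreserved pool below the minimum")
    return remaining

# Reserving 300 for a payments function and 200 for a webhook handler
# leaves 500 parallel executions for everything else.
print(unreserved_pool({"payments": 300, "webhooks": 200}))  # 500
```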
Lambda@Edge
Lambda's not only good for computation workloads or REST backends.
You can also use it with CloudFront.
This enables you to execute code at different stages when your CloudFront distribution is called.
{ 19/31 }
That way you can, for example, easily implement authorization rules or change the destination of your origin.
Generally, you can do a lot as you're also able to use the AWS-SDK and invoke other services.
Another tip: CloudFront functions - the lightweight alternative!
{ 20/31 }
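A sketch of such an authorization rule as a viewer-request handler, following the CloudFront event structure (`event['Records'][0]['cf']['request']`). The bearer-token check is a placeholder for whatever scheme you actually use.

```python
# Lambda@Edge viewer-request sketch: returning the request forwards it to
# the origin; returning a response object short-circuits CloudFront.
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request.get("headers", {})

    # CloudFront lowercases header names; values come as a list of dicts.
    auth = headers.get("authorization", [])
    if auth and auth[0].get("value") == "Bearer let-me-in":  # placeholder token
        return request  # allowed: pass the request through unchanged

    return {
        "status": "401",
        "statusDescription": "Unauthorized",
        "body": "Missing or invalid token",
    }

denied = handler({"Records": [{"cf": {"request": {"headers": {}}}}]}, None)
print(denied["status"])  # 401
```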
Benefits of using Lambda
• reduced operations: just package & run your code
• out-of-the-box scalability: lightning-fast horizontal scaling
• pay-as-you-go: only pay for what you're using
• agility & development speed: reducing burdens & increasing productivity
{ 21/31 }
Downsides
• cold starts
• higher abstraction, but lower predictability
• pricing: depending on your workload & traffic, Lambda can be a cost pitfall
• vendor lock-in: compared to container-based apps, it's more difficult to migrate to another provider
Still, getting in-depth observability for your Lambda-powered architecture is a goliath task.
Mostly, you'll design event-driven, asynchronous architectures that involve a lot of other services like SQS.
So there's not just an HTTP 500 to find.
{ 23/31 }
CloudWatch helps you a lot here, first of all with Metrics & Alarms.
A lot of them are predefined, like:
โข Lambda Errors: your function did not finish with exit code 0
โข Throttles: concurrency limit was exceeded
Familiarize yourself with CloudWatch possibilities!
{ 24/31 }
But CloudWatch has its limitations: the console interface is still painful to use for certain tasks like log browsing, and building a complete solution requires a lot of work.
Third-party tools, which are mostly easy to set up, help a lot.
{ 25/31 }
My biased proposal:
Try out @thedashbird for free - an all-embracing monitoring and debugging tool for serverless applications powered by Lambda.
If you've got questions, feedback, or you're missing a feature: send me a DM.
We can work something out.
Don't just blindly build a new service with Lambda.
Do an in-depth requirements analysis before and think about your use-cases.
Understand what you want & make sure the serverless approach fits!
{ 27/31 }
Answer these questions:
1. Does the service need to maintain a central state?
2. Does the service need to serve requests very frequently?
3. Is the architecture rather monolithic instead of being built of small, loosely coupled parts?
{ 28/31 }
4. Is it well known how the service needs to scale out on a daily or weekly basis, and is the expected future traffic known as well?
5. Are processes mostly revolving around synchronous operations?
The more No's, the better!
{ 29/31 }
If you've answered one or more questions with yes, a "classical" containerized approach with ECS is likely the better solution.
I love Lambda and the serverless approach, but it needs to fit your goals.
{ 30/31 }
Lambda is continuously improved!
Since I started with Lambda in 2018, AWS has introduced, among other things:
• AWS Hyperplane for running Lambda smoothly attached to VPCs
• fine-grained billing in 1 ms periods
• running Lambda on Graviton ARM processors
It's managed, highly available & scales on-demand with low latencies.
To get you hooked: at Prime Day 2021, DynamoDB served 89.2 million requests/second at its peak.
I'm still in the early stages & already got a lot of lessons learned.
Launch early
Maybe you've got another dozen ideas for features you think are needed for your MVP.
But until you've launched and you've got actual (paying) users, you've got no guarantee that your business case is even valid.
This intersects with the previous point: don't build the shiniest code with 100% test coverage and the perfect architecture, as that requires way too much effort.
Don't over- or underdo it.
Make it work & keep it manageable.
It guarantees you won't miss out on new features or services, and it also contains interesting statistics and other insights from AWS itself.
It's updated very regularly, sometimes several times a day.
If you're focusing on keeping up with the new capabilities AWS provides, that's your major source.
You'll learn about improvements to existing services, introductions of new ones, as well as region expansions.
A physical server, only utilized by you
• you have to know or guess the CPU & memory capacities you need
• high risk of overpaying (underutilized server) or under-provisioning (too much load)
• you're able to run multiple apps, but need to make sure that you're not causing conflicts by resource sharing
• you're solely responsible for the security
• up- or downscaling is tedious & not quickly possible
The concepts are crucial & being confident in them is a necessity.
From basics to advanced concepts 🧵
For seriously working with AWS, there's no way around IAM.
Skipping over its core principles will bite you again and again in the future.
Take the time to do a deep dive, so you won't be frustrated later.