However, this might not mean much in practice for a lot of you because your Lambda bill is $5/month, so saving even 50% only buys you a cup of Starbucks coffee a month.
Still, that's a FREE cup of coffee!
However...
Before, because Lambda billed in 100ms increments, there was no point optimizing if your avg duration was already sub-100ms.
NOW, you can micro-optimize and every ms saved is another $0.00000...X saved.
❓ how many invocations a month?
❓ how much time you can save per invocation?
❓ how much memory does func need?
From these, you can work out a ballpark figure for how much you stand to save.
Then estimate how many hours it's gonna take you to optimize the code. Times that by 2 ;-)
Then look at your wage, how much you're paid per hour. Times that by 2 (once bonus, pension and everything else is factored in, it's usually 2x salary).
Now, you can work out the ROI.
To invoke my inner Knuth, and to make sure you see the 2nd half of his quote: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."
If you can identify that critical 3% of use cases (based on ROI, not on how excited you feel about optimizing that bit of code), then you should absolutely do it, especially now that you have 10GB functions!
2. Lambda supports up to 10GB of memory and 6 vCPU cores.
This is another big one for those of you doing HPC or otherwise compute- or memory-intensive workloads, such as training ML models.
Between the AVX2 instruction set and more CPU and memory, the Lambda team is really starting to look after the "Lambda as a supercomputer" crowd.
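Opting in is just a config change. A minimal sketch with boto3 (the function name is hypothetical):

```python
import boto3

lambda_client = boto3.client("lambda")

# Bump a function to the new 10GB ceiling - CPU allocation scales
# with memory, so this also buys you up to 6 vCPUs at the top end.
lambda_client.update_function_configuration(
    FunctionName="my-hpc-function",  # hypothetical name
    MemorySize=10240,                # MB
)
```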
Would love to see more use cases around this, and if anyone fancies talking about what you're doing, then join me on @RealWorldSls 🙏
3. Aurora Serverless v2, arguably the biggest announcement from today.
Honestly, it sounds great! But then again, so did Aurora Serverless v1, and the devil was in the details. So I'll reserve my enthusiasm until I see it in action.
I'm just as curious about what hasn't been said as what has - there was no mention about cold starts, which was the thing that made everyone go "yuck, we can't have that in production!".
And what about the Data API? I imagine (and hope) that it's still available in v2.
4. Lambda supports container images as a packaging format.
Another big announcement and I expect to see lots of excitement and confusion around this.
For better or worse, it's what people have been asking for.
And for orgs with strict compliance and audit requirements, being able to use the same tooling as their container workloads is a must. This is designed for them.
What's in it for the rest of us?
Well, it lets you ship a container image up to 10GB, so you can pack a lot of read-only data with your image. Like a giant ML model, which would have been slow and painful to read from EFS.
BUT, for functions that are shipped as zip files, the limit is still 250MB unzipped.
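To make the 10GB image point concrete, here's a minimal sketch of packaging a big model into a function image using one of the AWS-provided base images (file names and the handler are hypothetical):

```dockerfile
FROM public.ecr.aws/lambda/python:3.8

# Bake a large read-only artifact (e.g. an ML model) straight into the
# image - the image can be up to 10GB - instead of reading it from EFS.
COPY model/ ${LAMBDA_TASK_ROOT}/model/

COPY requirements.txt .
RUN pip install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"

COPY app.py ${LAMBDA_TASK_ROOT}/
CMD ["app.handler"]
```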
When you use container images, you're responsible for securing and patching the base image. Even if you use an AWS-provided base image, you still have to deploy their updates to your functions regularly.
This is a big set of responsibilities to take on but @ben11kehoe has a great idea on how it can be fixed
It'd address my biggest concern with using container images for sure.
However, you're still much better off keeping things as simple as you can and sticking to the basics. Using zip files = no building Docker images = simpler pipeline (no need to publish to ECR) = no need to worry about your function needing optimization again after 14 days of idle.
I keep saying it: keep it simple for as long as you can, and only reach for a more complex solution when you must.
5. AWS Proton, a service for deploying container and serverless applications.
Honestly, I'm not sure about this one... there are already lots of ways to deploy serverless and container applications on AWS, with both official and 3rd-party tools.
And it's doing deployment by clicking around in the AWS console.
Didn't we learn that was a bad idea 10 years ago?
Maybe I've misunderstood the intended target user for this service?
I mean, you have Amplify for frontend devs who don't wanna know what AWS resources are provisioned.
You have @goserverless, SAM, Terraform, CDK and too many others to list that cater for backend devs who want more say over what's deployed in their AWS account.
But it's not clear to me who Proton is designed for.
Surely not your DevOps team or sysadmins, they're gonna want IaC for sure.
And according to this thread, the UX is not quite ready either
6. AWS Glue Elastic Views: materialized views across multiple data stores.
I've built this type of materialized view by hand so many times before, in real-time with Kinesis Analytics, or with some sort of batch job against Athena (where all the raw events are dumped). This could make all those home-grown solutions redundant.
The fact that it supports S3, ES, RDS, DynamoDB, Aurora and Redshift is pretty insane. I guess one challenge with mashing together all these data sources is that it's impossible to guarantee that the ordering of data updates in those systems is respected during replication.
I think the devil is gonna be in the details for this one. But I'm excited about what it could do in any case.
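For reference, the home-grown version of this pattern often looks something like the sketch below - a Lambda function on a DynamoDB stream maintaining a denormalized view table (all table and field names are made up):

```python
from decimal import Decimal

import boto3

dynamodb = boto3.resource("dynamodb")
view_table = dynamodb.Table("customer-spend-view")  # hypothetical view table

def handler(event, context):
    """Maintain a materialized view from a DynamoDB stream.

    Note: you only get ordering guarantees per shard - which hints at
    the cross-source ordering problem a managed service has to solve.
    """
    for record in event["Records"]:
        if record["eventName"] not in ("INSERT", "MODIFY"):
            continue
        order = record["dynamodb"]["NewImage"]
        view_table.update_item(
            Key={"customerId": order["customerId"]["S"]},
            UpdateExpression="ADD totalSpend :amount",
            ExpressionAttributeValues={":amount": Decimal(order["amount"]["N"])},
        )
```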
7. S3 replication supports multiple destinations.
Solving a pretty clear (and specific) problem here: needing to fan out S3 objects from one bucket to multiple buckets.
Hasn't come up a lot in my work, but it's a blocker when you need that. The way I've solved it until now is to use Lambda to do the fan-out, which is a bit wasteful.
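That workaround is basically an S3-triggered function copying the object to every target - a rough sketch (bucket names are hypothetical):

```python
import urllib.parse

import boto3

s3 = boto3.client("s3")

DESTINATION_BUCKETS = ["replica-bucket-1", "replica-bucket-2"]  # hypothetical

def handler(event, context):
    """Fan out new S3 objects to multiple buckets - the pre-announcement workaround."""
    for record in event["Records"]:
        source_bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        for dest in DESTINATION_BUCKETS:
            s3.copy_object(
                Bucket=dest,
                Key=key,
                CopySource={"Bucket": source_bucket, "Key": key},
            )
```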
8. S3 adds read-after-write consistency for overwrites.
This is BIG.
Before, read-after-write consistency was only guaranteed for new objects, not for overwrites. This caused SOOO MANY eventual-consistency problems that you ended up solving at the app level.
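Concretely, a read immediately after an overwrite is now guaranteed to see the new bytes (bucket and key names below are made up):

```python
import boto3

s3 = boto3.client("s3")

# Overwrite an existing object...
s3.put_object(Bucket="my-bucket", Key="config.json", Body=b'{"version": 2}')

# ...and an immediate read is now guaranteed to return version 2.
# Before this change, it could return the stale version 1 for a while.
body = s3.get_object(Bucket="my-bucket", Key="config.json")["Body"].read()
assert body == b'{"version": 2}'
```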
I think @gojkoadzic is really gonna appreciate this update. We were talking a while back and this was one of the biggest problems he had with S3 buzzsprout.com/877747/4218644
9. Amplify launched an admin UI for managing your Amplify project.
I had a preview of this last week, and although I don't use the Amplify CLI, I gotta say this looked pretty slick. If you use Amplify, check it out.
Back on the S3 replication updates - I'm honestly not sure about the specific use case here...
I get why there isn't bi-directional data replication, but in what use cases would you be updating metadata on objects in the replicated bucket (the backup/passive bucket, or whatever you'd like to call it)? 🤔
Enlighten me anyone?
That's it for the big serverless-related announcements from today, unless I missed anything. But there was something from preinvent that I missed in part 1: CloudFormation modules.
I think this one is great, and not just for serverless applications - it gives you a way to create reusable components for a CloudFormation stack, something AWS has tried (and so far failed) to achieve with SAR. Let's see what happens with modules.
There are some key differences between SAR and modules:
a. modules are private to an account, SAR apps can be shared publicly.
b. modules are merged into the host stack, SAR apps get compiled into nested stacks.
c. SAR has versioning, modules don't.
At first glance, if your goal is to create reusable components that can be easily shared within your organization and version-controlled, then CDK patterns still look like the better option here.
Feel free to tell me I'm wrong here.
FWIW I'm not a fan of CDK, but if I were to use it, it'd be because I can create reusable components and distribute them in my org - IFF every team is using the same programming language. Otherwise, I'd have to replicate the same pattern in every language that teams are using...
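To show what I mean by a reusable component, here's a minimal sketch of a shareable CDK construct (CDK v1, Python flavour, all names made up):

```python
from aws_cdk import core
from aws_cdk import aws_sns as sns
from aws_cdk import aws_sns_subscriptions as subs
from aws_cdk import aws_sqs as sqs

class BufferedTopic(core.Construct):
    """A hypothetical reusable pattern: an SNS topic buffered by an SQS queue."""

    def __init__(self, scope: core.Construct, id: str) -> None:
        super().__init__(scope, id)
        self.topic = sns.Topic(self, "Topic")
        self.queue = sqs.Queue(self, "Queue")
        self.topic.add_subscription(subs.SqsSubscription(self.queue))
```

Publish that as a package and every team can reuse it - but only the teams writing CDK in Python, which is exactly the distribution catch I'm talking about.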
Alright, and that concludes part 2, let me know if I missed any big announcements from Andy Jassy's keynote.
Plz retweet this thread, if it gets 168 retweets then I will do another batch of these hot takes!
So it makes a lot more sense now, although the execution is still not quite there yet.
What @Rafavallina calls the "platform team" has different meanings in different orgs. But really large orgs often have a "release team" (literally, they're responsible for taking your commits and pushing them live) whose responsibility overlaps with Proton's.
While the intended target is now clear, it still means Proton is not something most of you need to be using.
And for those large orgs it's targeting, they would already have something in place. So I guess Proton needs to up its game to justify the switching cost.
Great overview of permission management in AWS by @bjohnso5y (SEC308)
Lots of tools to secure your AWS environment (maybe that's why it's so hard to get right - lots of things to consider), but I love how it starts with "separate workloads using multiple accounts".
SCP for org-wide restrictions (e.g. Deny ec2:* 😉).
IAM perm boundary to stop ppl from creating permissions that exceed their own.
Block S3 Public Access.
These are the things that deny access to things (hence guardrails)
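As an illustration, the org-wide SCP guardrail joked about above would look something like this (attached to an OU via AWS Organizations):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllEC2",
      "Effect": "Deny",
      "Action": "ec2:*",
      "Resource": "*"
    }
  ]
}
```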
Use IAM principal and resource policies to grant perms
"You should be using roles so you can focus on temporary credentials" 👍
You shouldn't be using IAM users and groups anymore - go set up AWS SSO, and throw away the password for the root user (and use the forgotten-password mechanism if you ever need to recover access).
Great session by @MarcJBrooker earlier on building technology standards at Amazon scale, and some interesting tidbits about the secret sauce behind Lambda and how they make technology choices - e.g. in whether to use Rust for the stateful load balancer v2 for Lambda.
🧵
Nice shout-out to some of the benefits of Rust: no GC (good for p99+ percentile latency), memory safety with its ownership system (theburningmonk.com/2015/05/rust-m…), and great support for multi-threading (which still works with the ownership system).
And why not to use Rust.
The interesting Q is how to balance technical strengths vs weaknesses that are more organizational.
Given all the excitement over Lambda's per-ms billing change today, some of you might be wondering how much money you can save by shaving 10ms off your function.
Fight that temptation 🧘♂️until you can prove the ROI on doing the optimization.
Assuming $50 (which is VERY conservative) per dev per hour, it would have taken them 40 months to break even on just having the meeting, before writing a single line of code!
With the per-ms billing, you're automatically saving on your Lambda cost already, by NOT having your invocation time rounded up to the next 100ms.
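To put an illustrative number on that automatic saving (the workload below is entirely made up):

```python
# Illustrative only - the workload numbers are made up.
GB_SECOND_PRICE = 0.0000166667  # Lambda price per GB-second (us-east-1, x86)

invocations = 10_000_000  # per month
avg_duration_ms = 42
memory_gb = 0.125         # a 128MB function

# Old: every invocation was rounded UP to the next 100ms block.
old_cost = invocations * (100 / 1000) * memory_gb * GB_SECOND_PRICE
# New: billed for the actual 42ms.
new_cost = invocations * (avg_duration_ms / 1000) * memory_gb * GB_SECOND_PRICE

print(f"${old_cost:.2f} -> ${new_cost:.2f}/month, saved ${old_cost - new_cost:.2f}")
# ~$2.08 -> ~$0.88 a month: a ~58% saving with zero engineering effort
```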
Unless you're invoking a function at such high frequency, those micro-optimizations won't be worth the eng time you have to invest.
re:Invent starts tomorrow, so let me round up the biggest #serverless related announcements from the last 2 weeks (I know, crazy!) and share a few thoughts on what they mean for you.
Mega 🧵If this gets 168 retweets then I'll add another batch!
1. Lambda released the Logs API, which works with the Lambda Extensions mechanism released in Oct. This lets you subscribe to Lambda logs in a Lambda extension and ship them elsewhere WITHOUT going through CloudWatch Logs.
a. it lets you side-step CloudWatch Logs, which often costs more (sometimes 10x more) than Lambda invocations in production apps.
b. it's possible (although not really feasible right now) to ship logs in real-time
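Here's a rough sketch of what a log-shipping extension looks like. The endpoint paths and payload shapes are my recollection of the launch docs, so treat them as assumptions and verify against the Extensions/Logs API docs; ship_logs is a hypothetical helper, and the extension's event loop is omitted:

```python
import json
import os
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

RUNTIME_API = os.environ["AWS_LAMBDA_RUNTIME_API"]

# 1. Register as an extension (endpoint path per the launch docs, as I recall).
req = urllib.request.Request(
    f"http://{RUNTIME_API}/2020-01-01/extension/register",
    data=json.dumps({"events": ["INVOKE", "SHUTDOWN"]}).encode(),
    headers={"Lambda-Extension-Name": "logs-shipper"},
)
ext_id = urllib.request.urlopen(req).headers["Lambda-Extension-Identifier"]

# 2. Run a local HTTP server that the Logs API will push log batches to.
class LogsHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        batch = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        ship_logs(batch)  # hypothetical helper: buffer + forward to your aggregator
        self.send_response(200)
        self.end_headers()

server = HTTPServer(("0.0.0.0", 8080), LogsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# 3. Subscribe to function logs (the /extension/event/next loop is omitted here).
sub = urllib.request.Request(
    f"http://{RUNTIME_API}/2020-08-15/logs",
    data=json.dumps({
        "schemaVersion": "2020-08-15",
        "destination": {"protocol": "HTTP", "URI": "http://sandbox:8080"},
        "types": ["function"],
    }).encode(),
    headers={"Lambda-Extension-Identifier": ext_id, "Content-Type": "application/json"},
    method="PUT",
)
urllib.request.urlopen(sub)
```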
I sat down this weekend and had a look at my finances as I'm almost 5 months into my 2nd year as a full-time solo consultant, and noticed that my revenue streams have changed quite a bit over the last 3 years.
This is the result of a conscious effort to reduce my reliance on a few large clients, to offset seasonality and other factors that can affect revenue, and to create a healthy mix of active and passive income streams.
Overall revenue has grown over time, and my largest client now accounts for less than 20% of my revenue. And I haven't seen too much seasonality to my work yet - summer was quieter because Europeans went on holiday, but it was still OK.
X: in light of last week's #AWS outage, should I make my app multi-region?
me: it depends.
X: on what?
me: how much did the outage cost you in lost sales, reputation cost, etc.? And how much are you willing to invest in improving your uptime in case of another region-wide outage?
X: erm... I'm not sure...
me: don't get me wrong, if you're a large enterprise, I expect you to be multi-region already! Hell, I expect you to be doing chaos engineering and proactively finding weaknesses in your architecture before disasters strike and force you into reacting.
me: but as we can see from these AWS outages, modern systems are complex, and even for companies like AWS, who have invested heavily in resilience and are doing all the right things, 💩 still happens