This is part 2 of my #aws #reinvent hot takes on the big #serverless related announcements.

Part 1 is here for anyone who missed it:

Same deal as before, if this gets 168 retweets then I'll do another batch 👍

Alright, here comes the mega 🧵...
1. Lambda now bills you by the ms instead of rounding up to the next 100ms. So if your function runs for 42ms, you will be billed for 42ms, not 100ms.

This instantly makes everyone's lambda bills cheaper without having to lift a finger. It's the best kind of optimization 😎

aws.amazon.com/about-aws/what…
However, this might not mean much in practice for a lot of you because your Lambda bill is $5/month, so saving even 50% only buys you a cup of Starbucks coffee a month.

Still, that's a FREE cup of coffee!
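To put some numbers on it, here's a quick back-of-the-envelope comparison. The $0.0000166667/GB-second duration price is Lambda's on-demand x86 rate; the function size, duration and invocation volume are made-up examples:

```python
# Rough comparison of Lambda's old (round up to 100ms) vs new (per-ms) billing.
import math

PRICE_PER_GB_SECOND = 0.0000166667  # on-demand x86 duration price

def duration_cost(duration_ms, memory_mb, invocations, round_to_ms=1):
    """Monthly duration cost in USD, rounding each invocation up to `round_to_ms`."""
    billed_ms = math.ceil(duration_ms / round_to_ms) * round_to_ms
    gb_seconds = (memory_mb / 1024) * (billed_ms / 1000) * invocations
    return gb_seconds * PRICE_PER_GB_SECOND

# e.g. a 128MB function averaging 42ms, invoked 10M times a month
old = duration_cost(42, 128, 10_000_000, round_to_ms=100)  # billed as 100ms
new = duration_cost(42, 128, 10_000_000, round_to_ms=1)    # billed as 42ms

print(f"old: ${old:.2f}/month, new: ${new:.2f}/month, saved: ${old - new:.2f}")
```

Even at 10M invocations a month, the saving here is about a dollar - which is exactly the point about most Lambda bills being small to begin with.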

However...
Before, because Lambda billed in 100ms blocks, there was no point in optimizing if your avg duration was sub-100ms.

NOW, you can micro-optimize and every ms saved is another $0.00000...X saved.

The question is, should you do it?

In 97% of cases, it'd be premature optimization.
As I discussed here, don't do it until you know your ROI.

❓ how many invocations a month?
❓ how much time you can save per invocation?
❓ how much memory does func need?
From these, you can work out a ballpark figure for how much you stand to save.
Then estimate how many hours it's gonna take you to optimize the code. Times that by 2 ;-)

Then look at your hourly wage. Times that by 2 (once bonus, pension and everything else is factored in, the fully-loaded cost is usually 2x salary).

Now, you can work out the ROI.
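The three questions above plug straight into a ROI calculation. All the numbers below are made-up assumptions - swap in your own:

```python
# Back-of-the-envelope ROI for a Lambda micro-optimization.
PRICE_PER_GB_SECOND = 0.0000166667  # Lambda's on-demand x86 duration price

# ❓ the three questions (illustrative numbers)
invocations_per_month = 50_000_000
ms_saved_per_invocation = 20
memory_gb = 0.5

monthly_saving = (invocations_per_month
                  * (ms_saved_per_invocation / 1000)
                  * memory_gb
                  * PRICE_PER_GB_SECOND)

# engineering cost: 2x your time estimate, 2x your wage (fully-loaded cost)
hours_estimated = 10
hourly_wage = 50
eng_cost = (hours_estimated * 2) * (hourly_wage * 2)

months_to_break_even = eng_cost / monthly_saving
print(f"saves ${monthly_saving:.2f}/month, costs ${eng_cost}, "
      f"breaks even in {months_to_break_even:.0f} months")
```

With these (fairly generous) numbers, you're looking at two decades to break even - which is why most of these micro-optimizations land in the 97%.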
Let me invoke my inner Knuth and make sure you see the 2nd half of his quote: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."

If you can identify that critical 3% of use cases (based on ROI, not by how excited you feel about optimizing that bit of code), then you should absolutely do it, especially now you have 10GB functions!
2. Lambda supports up to 10GB of memory and 6 vCPU cores.

This is another big one for those of you doing HPC or otherwise compute or memory-intensive applications such as training ML models.

aws.amazon.com/about-aws/what…
Between the AVX2 instruction set and more CPU and memory, the Lambda team is really starting to look after the "Lambda as a supercomputer" crowd.
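One gotcha: your handler has to actually use those vCPUs, which means multiple processes for CPU-bound Python work. And since Lambda has no /dev/shm, multiprocessing.Pool and Queue don't work there - Process + Pipe does. A rough sketch (the sum-of-squares workload is just a stand-in for something CPU-heavy; it runs locally too):

```python
# Sketch: fanning CPU-bound work out across the vCPUs of a big Lambda function.
# Uses Process + Pipe because Lambda lacks /dev/shm (so Pool/Queue fail there).
import os
from multiprocessing import Process, Pipe

def worker(conn, chunk):
    # stand-in for a CPU-heavy task, e.g. scoring one slice of an ML batch
    conn.send(sum(x * x for x in chunk))
    conn.close()

def parallel_sum_of_squares(data, n_workers=None):
    n_workers = n_workers or os.cpu_count()
    chunks = [data[i::n_workers] for i in range(n_workers)]
    pipes, procs = [], []
    for chunk in chunks:
        parent_conn, child_conn = Pipe()
        p = Process(target=worker, args=(child_conn, chunk))
        p.start()
        pipes.append(parent_conn)
        procs.append(p)
    results = [conn.recv() for conn in pipes]
    for p in procs:
        p.join()
    return sum(results)

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1000))))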

Would love to see more use cases around this, and if anyone fancies talking about what you're doing, then join me on @RealWorldSls 🙏
3. Aurora Serverless v2, arguably the biggest announcement from today.

Honestly, it sounds great! But then again, so did Aurora Serverless v1, and the devil was in the details. So I'll reserve my enthusiasm until I see it in action.

aws.amazon.com/about-aws/what…
I'm just as curious about what hasn't been said as what has - there was no mention of cold starts, which was the thing that made everyone go "yuck, we can't have that in production!".

And what about the Data API? I imagine (and hope) it's still available in v2.
4. Lambda supports container images as a packaging format.

Another big announcement and I expect to see lots of excitement and confusion around this.

aws.amazon.com/about-aws/what…
I've shared some details on how it works, use cases and some of my thoughts on it on the @Lumigo blog already, so go and check it out

lumigo.io/blog/package-y…
For better or worse, it's what people have been asking for.

And for orgs with strict compliance and audit requirements, being able to use the same tooling as their container workload is a must. This is designed for them.

What's in it for the rest of us?
Well, it lets you ship a container image up to 10GB, so you can pack a lot of read-only data with your image. Like a giant ML model, which would have been slow and painful to read from EFS.

BUT, for functions that are shipped as zip files, the limit is still 250MB unzipped.
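For the curious, packaging a function this way looks roughly like the sketch below, using the AWS-provided Python base image (`public.ecr.aws/lambda/python:3.9`). The file names and handler are placeholders:

```dockerfile
FROM public.ecr.aws/lambda/python:3.9

# dependencies and code go into the image,
# so the 250MB unzipped limit no longer applies
COPY requirements.txt .
RUN pip install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"

# function code (and e.g. a large read-only ML model) baked into the image
COPY app.py ${LAMBDA_TASK_ROOT}

# handler in "module.function" format
CMD ["app.handler"]
```

You then push the image to ECR and point the function at it, which is exactly the extra pipeline work I talk about below.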
When you use container images, you're responsible for securing and patching the base image. Even if you use an AWS-provided base image, you still have to deploy their updates to your functions regularly.
This is a big set of responsibilities to take on but @ben11kehoe has a great idea on how it can be fixed

It'd address my biggest concern with using container images for sure.
However, you're still much better off keeping things as simple as you can and sticking to the basics. Using zip files = no building docker images = simpler pipeline (no need to publish to ECR) = no need to worry about functions needing to be optimized again after 14 days of idle
I keep saying it: keep it simple for as long as you can, and only reach for a more complex solution when you must.

5. AWS Proton, a service for deploying container and serverless applications.

Honestly, I'm not sure about this one... there are already lots of ways to deploy serverless and container applications on AWS, with both official and 3rd-party tools

aws.amazon.com/about-aws/what…
And looking at the example in this thread it's doing deployment by clicking in the AWS console.

Didn't we learn that was a bad idea 10 years ago?

Maybe I've misunderstood the intended target user for this service?
I mean, you have Amplify for frontend devs who don't wanna know what AWS resources are provisioned.

You have @goserverless, SAM, Terraform, CDK and too many others to list that cater for backend devs who want more say over what's deployed in their AWS account.
But it's not clear to me who Proton is designed for.

Surely not your DevOps team or sys admins, they're gonna want IaC for sure.

And according to this thread, the UX is not quite ready either (at least not compared to other solutions on the market)
6. AWS Glue Elastic Views

"Combine and replicate data across multiple data stores using SQL"

This sounds like it has some interesting possibilities! I mean, "materialized views" comes right out of the CQRS book, doesn't it?

aws.amazon.com/about-aws/what…
I have built these kinds of materialized views by hand so many times before, in real-time with Kinesis Analytics, or with some sort of batch job against Athena (where all the raw events are dumped). This could make all those home-grown solutions redundant.
The fact that it supports S3, ES, RDS, DynamoDB, Aurora and Redshift is pretty insane. I guess one challenge with mashing together all these data sources is that it's impossible to guarantee ordering of data updates in those systems is respected during replication.
I think the devil is gonna be in the details for this one. But I'm excited about what it could do in any case.
7. S3 replication supports multiple destinations

Solving a pretty clear (and specific) problem here: fanning out S3 objects from one bucket to multiple buckets.

aws.amazon.com/about-aws/what…
It hasn't come up a lot in my work, but it's a blocker when you need it. The way I've solved it until now is to use Lambda to do the fan-out, which is a bit wasteful.
8. S3 adds read-after-write consistency for overwrites

This is BIG.

Before, read-after-write consistency was only guaranteed for new objects, not for overwrites. This caused SOOO MANY eventual consistency problems that you ended up solving at the app level.

aws.amazon.com/about-aws/what…
I think @gojkoadzic is really gonna appreciate this update. We were talking a while back and this was one of the biggest problems he had with S3 buzzsprout.com/877747/4218644
9. Amplify launched admin UI for managing your Amplify project

I had a preview of this last week, and although I don't use the Amplify CLI, I gotta say this looked pretty slick. If you use Amplify, check it out.

aws.amazon.com/about-aws/what…
10. S3 supports two-way replication for object metadata changes

Notice that this is only for replicating object metadata changes (tags, ACL, etc.) and not the actual data objects themselves.

aws.amazon.com/about-aws/what…
I'm honestly not sure about the specific use case here...

I get why there isn't bi-directional data replication, but in what use cases would you be updating metadata in the replicated bucket (the backup/passive bucket, or whatever you'd like to call it)? 🤔

Enlighten me anyone?
That's it for the big serverless-related announcements from today, unless I missed anything. But there was something from pre:Invent that I missed in part 1.
11. Modules for CloudFormation

Like Terraform modules, but for CloudFormation

aws.amazon.com/about-aws/what…
I think this one is great, and not just for serverless applications. It gives you a way to create reusable components for a CloudFormation stack, something AWS has tried (and, so far, failed) to do with SAR. Let's see what happens with modules.
There are some key differences between SAR and modules:

a. modules are private to an account, SARs can be shared publicly.
b. modules are merged into a stack, SARs get compiled into nested stacks
c. SAR has versioning, modules don't
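To illustrate point b: once a module is registered in the CloudFormation registry, you consume it like any other resource type, and it gets expanded inline into your stack. The `MyOrg::Serverless::Api::MODULE` type name and its properties below are made up for illustration:

```yaml
# Sketch of consuming a registered CloudFormation module in a template.
Resources:
  MyApi:
    Type: MyOrg::Serverless::Api::MODULE  # registered module types end in ::MODULE
    Properties:
      ApiName: orders-api
      StageName: prod
```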
At first glance, if your goal is to create reusable components that can be easily shared in your organization and version-controlled, then CDK patterns still look like the better option here.

Feel free to tell me I'm wrong here.
FWIW I'm not a fan of the CDK, but if I were to use it, it would be because I can create reusable components and distribute them in my org, IFF every team is using the same programming language; otherwise, I'd have to replicate the same pattern in every language the teams are using...
Alright, and that concludes part 2, let me know if I missed any big announcements from Andy Jassy's keynote.

Plz retweet this thread, if it gets 168 retweets then I will do another batch of these hot takes!

Until then, enjoy #reinvent :-D
So, circling back to Proton for a sec...

Thanks to @matthieunapoli for pointing me to it, but this thread explains the target demographic and use case for Proton:

So it makes a lot more sense now, although the execution is still not quite there yet.
What @Rafavallina calls the "platform team" has different meanings in different orgs. But really large orgs often have a "release team" (literally, they're responsible for taking your commits and pushing them live) whose responsibility overlaps with Proton's.
While the intended target is clear, it still means that it's not something most of you need to be using.

And the large orgs it's targeting would already have something in place. So I guess Proton needs to up its game to justify the switching cost.

Thread by @theburningmonk (Yan Cui)