"For amazon.com we found the "above the fold" latency is what customers are the most sensitive to"
This is an interesting insight: not all service latencies are equal, and improving the overall page latency might actually end up hurting the user experience if it negatively impacts the 'above the fold' latency. 💡
It's one of those insights that seems so obvious once it's been pointed out. But if you're just looking at your microservice's latency rather than at the whole UX holistically, it's easy to fall into optimizing things that don't move the needle.
"Instrumentation needs to be as easy as possible"
That's music to my ears: you need to make it easy for people to do the right thing. That's why folks at @Lumigo take a lot of pride in its auto-instrumentation capability.
This is a great tip! 💰💰💰
Create separate metrics based on query cost, so you remove noise from your latency metrics 🤯
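A minimal sketch of the idea with boto3; the cost buckets and thresholds here are my own assumptions, not from the talk:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Hypothetical cost buckets - thresholds are made up for illustration.
def cost_bucket(items_scanned: int) -> str:
    if items_scanned < 100:
        return "cheap"
    elif items_scanned < 10_000:
        return "moderate"
    return "expensive"

def record_latency(latency_ms: float, items_scanned: int) -> None:
    # Emitting the same metric under a cost-bucket dimension means a rare,
    # expensive query can't drag up the p99 of the cheap queries.
    cloudwatch.put_metric_data(
        Namespace="MyService",
        MetricData=[{
            "MetricName": "QueryLatency",
            "Dimensions": [{"Name": "CostBucket", "Value": cost_bucket(items_scanned)}],
            "Value": latency_ms,
            "Unit": "Milliseconds",
        }],
    )
```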
And here's another one: use CloudWatch Contributor Insights to look at the per-client error rate, to filter out the noise in client errors.
You can get an approximation of the % of clients who are experiencing any kind of error. It's a stable metric and suitable for alarming.
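Here's a rough sketch of such a Contributor Insights rule with boto3, assuming JSON-formatted logs; the log group, `clientId` and `status` fields are placeholders for your own log schema:

```python
import json
import boto3

cloudwatch = boto3.client("cloudwatch")

# Count error log lines per client, so one noisy client can't dominate.
rule = {
    "Schema": {"Name": "CloudWatchLogRule", "Version": 1},
    "LogGroupNames": ["/my-service/access-logs"],  # placeholder
    "LogFormat": "JSON",
    "Contribution": {
        "Keys": ["$.clientId"],
        # Only count log lines that represent errors (5xx here).
        "Filters": [{"Match": "$.status", "GreaterThan": 499}],
    },
    "AggregateOn": "Count",
}

cloudwatch.put_insight_rule(
    RuleName="clients-with-errors",
    RuleState="ENABLED",
    RuleDefinition=json.dumps(rule),
)
```

The rule's UniqueContributors metric gives you the number of distinct clients with errors; pair it with a second, unfiltered rule that counts all active clients, and the ratio approximates the % of affected clients.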
haha, this has happened to me so many times before! Took me a while to eventually learn 😂
Tip: emit separate latency metrics for successes and failures.
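Something like this sketch, for example; the namespace and metric names are made up:

```python
import time
import boto3

cloudwatch = boto3.client("cloudwatch")

def timed(handler):
    # Record latency under separate metric names for successes and failures,
    # so timeouts and fast-fail errors don't distort the success percentiles.
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = handler(*args, **kwargs)
            metric = "SuccessLatency"
            return result
        except Exception:
            metric = "ErrorLatency"
            raise
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            cloudwatch.put_metric_data(
                Namespace="MyService",
                MetricData=[{
                    "MetricName": metric,
                    "Value": elapsed_ms,
                    "Unit": "Milliseconds",
                }],
            )
    return wrapper
```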
This is far more complex than the most complex CD pipeline I have ever had! Just because it's complex doesn't mean it's over-engineered, though. Given the blast radius, I'm glad they do releases carefully and safely.
If you look closely, beyond all the alpha, beta, gamma environments, it's one-box in a region first, then the rest of the region. I assume they start with the least risky regions first.
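As a rough sketch of that staged structure (stage names, region order and bake logic are all my assumptions, not Amazon's actual tooling):

```python
import time

# Hypothetical sketch of a staged rollout, NOT Amazon's actual pipeline.
PRE_PROD = ["alpha", "beta", "gamma"]
REGIONS = ["eu-west-3", "us-west-2", "us-east-1"]  # least risky first (assumed)

def alarms_firing(stage: str) -> bool:
    # Placeholder: a real pipeline would query its monitoring system here.
    return False

def deploy(artifact: str) -> None:
    stages = PRE_PROD + [
        f"{region}/{scope}"
        for region in REGIONS
        for scope in ("one-box", "rest-of-region")
    ]
    for stage in stages:
        print(f"deploying {artifact} to {stage}")
        time.sleep(1)  # stand-in for a bake period under monitoring
        if alarms_firing(stage):
            print(f"rolling back {artifact} from {stage}")
            return  # stop the rollout before the blast radius grows
```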
I've gotten a few questions about Aurora Serverless v2 preview, so here's what I've learnt so far. Please feel free to chime in if I've missed anything important or got any of the facts wrong.
Alright, here goes the 🧵...
Q: does it replace the existing Aurora Serverless offering?
A: no, it lives side-by-side with the existing Aurora Serverless, which will still be available to you as "v1".
Q: Aurora Serverless v1 takes a few seconds to scale up, that's too much for our use case where we get a lot of spikes. Is that the same with v2?
A: no, v2 scales up in milliseconds; during the preview the max ACU is only 32, though.
Great overview of permission management in AWS by @bjohnso5y (SEC308)
Lots of tools to secure your AWS environment (maybe that's why it's so hard to get right, lots of things to consider) but I love how it starts with "separate workloads using multiple accounts"
SCP for org-wide restrictions (e.g. Deny ec2:* 😉).
IAM perm boundary to stop ppl from creating permissions that exceed their own.
And turn on S3 Block Public Access.
These are all mechanisms that deny access (hence 'guardrails'); see the SCP sketch below.
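As a rough illustration (not from the talk), here's what that org-wide Deny ec2:* guardrail might look like with boto3; the policy name, description and OU id are placeholders:

```python
import json
import boto3

organizations = boto3.client("organizations")

# A service control policy that denies all EC2 actions, org-wide.
# SCPs don't grant anything - they cap what IAM policies in member
# accounts can ever allow, which is what makes them guardrails.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:*",
        "Resource": "*",
    }],
}

response = organizations.create_policy(
    Name="deny-ec2",  # illustrative name
    Description="No EC2 anywhere, serverless or bust ;-)",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach it to an OU (or the root) to enforce it on those accounts.
organizations.attach_policy(
    PolicyId=response["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-xxxxxxxx",  # placeholder OU id
)
```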
Use IAM principal and resource policies to grant perms
"You should be using roles so you can focus on temporary credentials" 👍
You shouldn't be using IAM users and groups anymore. Go set up AWS SSO, and throw away the password for the root user (use the forgotten-password mechanism if you ever need to recover access).
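On the roles point, here's a minimal boto3 sketch of trading a role for temporary credentials (the role ARN is a placeholder):

```python
import boto3

# Assume an IAM role to get short-lived credentials,
# instead of long-lived IAM user access keys.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/MyAppRole",  # placeholder
    RoleSessionName="my-session",
)["Credentials"]

# A session backed by credentials that expire on their own.
session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```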
Great session by @MarcJBrooker earlier on building technology standards at Amazon scale, with some interesting tidbits about the secret sauce behind Lambda and how they make technology choices, e.g. whether to use Rust for the stateful load balancer v2 for Lambda.
🧵
Nice shout-out to some of the benefits of Rust: no GC (good for p99+ percentile latency), memory safety with its ownership system (theburningmonk.com/2015/05/rust-m…), and great support for multi-threading (which still works with the ownership system).
And why not to use Rust: the interesting Q is how to balance its technical strengths against weaknesses that are more organizational.
Given all the excitement over Lambda's per-ms billing change today, some of you might be thinking about how much money you can save by shaving 10ms off your function.
With the per-ms billing, you're automatically saving on your Lambda cost already, by NOT having your invocation time rounded up to the next 100ms.
However, this might not mean much in practice for a lot of you, because your Lambda bill is $5/month, so saving even 50% only buys you a cup of Starbucks coffee a month.
Unless you're invoking a function at very high frequency, those micro-optimizations won't be worth the eng time you have to invest. Fight that temptation 🧘♂️ until you can prove the ROI of doing the optimization.
Assuming $50 (which is VERY conservative) per dev per hour, it would take 40 months just to break even on the meeting to discuss the optimization, before writing a single line of code!
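To make the ROI point concrete, here's the back-of-the-envelope math; the thread gives the $5/month bill, the 50% saving, the $50/hour rate and the 40-month result, so the implied meeting cost is 2 dev-hours (how that's split across devs is my assumption):

```python
# Back-of-the-envelope ROI on micro-optimizing a $5/month function.
monthly_bill = 5.00    # your Lambda bill, $/month
saving_rate = 0.50     # a (generous) 50% saving from the optimization
monthly_saving = monthly_bill * saving_rate  # $2.50/month

# Cost of just the planning meeting, at a VERY conservative $50/dev/hour.
# Assumed shape: 4 devs for 30 minutes, i.e. 2 dev-hours total.
dev_rate = 50.00
dev_hours = 2.0
meeting_cost = dev_rate * dev_hours  # $100

break_even_months = meeting_cost / monthly_saving
print(f"Break-even: {break_even_months:.0f} months")  # -> 40 months
```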