Fun #AWS project at work this afternoon: calculating the configuration values for #DynamoDB provisioned capacity with auto scaling.

Background: we have a DDB table with a very seasonal load pattern. It's currently configured for on-demand pricing, which is expensive.
Provisioned capacity with auto scaling requires us to define the minimum WCUs, the maximum WCUs, and the target utilization level. But what values should we use?
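These three knobs live in Application Auto Scaling rather than on the table itself. A hedged sketch of the two request payloads involved (table name and numbers are hypothetical), built as plain dicts so they can be handed to boto3's `application-autoscaling` client:

```python
def wcu_scaling_config(table, min_wcu, max_wcu, target_pct):
    """Build the two Application Auto Scaling requests for a table's WCUs:
    one registering the min/max bounds, one attaching the target-tracking
    policy. Pass them to boto3's application-autoscaling client as
    register_scalable_target(**register) and put_scaling_policy(**policy)."""
    resource_id = f"table/{table}"
    dimension = "dynamodb:table:WriteCapacityUnits"
    register = {
        "ServiceNamespace": "dynamodb",
        "ResourceId": resource_id,
        "ScalableDimension": dimension,
        "MinCapacity": min_wcu,
        "MaxCapacity": max_wcu,
    }
    policy = {
        "PolicyName": f"{table}-wcu-target-tracking",
        "ServiceNamespace": "dynamodb",
        "ResourceId": resource_id,
        "ScalableDimension": dimension,
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            "TargetValue": target_pct,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
            },
        },
    }
    return register, policy
```

The same shape repeats for RCUs with the `dynamodb:table:ReadCapacityUnits` dimension and the `DynamoDBReadCapacityUtilization` predefined metric.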

The first two numbers are easy: just set them to a baseline and some safe maximum (we chose 3x the current peak).
The target utilization is more complex. An answer on Stack Overflow describes how to calculate it:

> What is the biggest change in capacity usage you see over a time period of 15 minutes expressed as a percentage? Leave this amount of room in your target utilization.
stackoverflow.com/a/50018158/160…
This leaves the question of how to calculate the biggest change in throughput capacity. Luckily CloudWatch metric math supports the `DIFF(metric)` function, which returns the difference between each data point and its predecessor. The expression `(m2-DIFF(m2))/m2` yields a graph of the candidate target utilization.
This is a good start, but it also shows a target utilization above 100%, which doesn't make sense. The configuration limit is 90% anyway.
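What the metric math computes can be sketched in plain Python. Since `DIFF(m2)` is current minus previous, `(m2-DIFF(m2))/m2` reduces to previous divided by current. A minimal sketch, assuming `consumed` is a list of consumed-WCU data points (the function name is mine, not CloudWatch's):

```python
def target_utilization_series(consumed):
    """Mirror (m2 - DIFF(m2)) / m2 from CloudWatch metric math.
    DIFF(m2) = current - previous, so each point reduces to
    prev / cur, expressed here as a percentage."""
    return [round(prev / cur * 100, 1)
            for prev, cur in zip(consumed, consumed[1:])]
```

When consumption doubles between two points the value is 50%: you would have needed to run at 50% utilization to absorb that jump without throttling. When consumption drops, the value exceeds 100%, which is exactly the nonsensical region the next step caps away.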

So let's cap the expression at 90%, and it starts to look useful. The lowest value in the blue chart could be the right threshold value.
One last problem: the threshold is very spiky at night, when load is low.
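In metric math the cap could be written with the `IF` function, e.g. `IF(e1 > 90, 90, e1)` where `e1` is the earlier expression. The same clamp in plain Python (a sketch; names and the consumed-WCU list are hypothetical):

```python
def capped_target_utilization(consumed, cap=90.0):
    """(m2 - DIFF(m2)) / m2 as a percentage, clamped at DynamoDB's
    90% target-utilization configuration limit. Each point is
    prev / cur, since DIFF(m2) = current - previous."""
    return [min(round(prev / cur * 100, 1), cap)
            for prev, cur in zip(consumed, consumed[1:])]
```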

By taking the minimum WCUs into account (the horizontal line in this diagram), we can filter out any auto scaling activity at night.
The lowest dip of the red line is now our accurate target utilization number. If we configure it at 60%, the provisioned capacity will be 16667 WCUs when the consumed capacity is 10000 WCUs.
That leaves exactly the right amount of room to scale with our workload, leading to big cost savings and no throttling at any time.
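Putting the last steps together, a minimal Python sketch (names are mine; filtering on consumption at or below the minimum WCUs is one plausible reading of the night-time filter, since auto scaling never needs to act there) plus a sanity check of the 60% → 16667 figure:

```python
import math

def filtered_target_utilization(consumed, min_wcu, cap=90.0):
    """Capped threshold series, ignoring points where consumption sits
    at or below the configured minimum WCUs (no scaling happens there)."""
    return [min(round(prev / cur * 100, 1), cap)
            for prev, cur in zip(consumed, consumed[1:])
            if cur > min_wcu]

def provisioned_for(consumed_wcu, target_pct):
    """Target tracking aims for provisioned = consumed / (target / 100)."""
    return math.ceil(consumed_wcu / (target_pct / 100))
```

At a 60% target and 10000 consumed WCUs, `provisioned_for(10000, 60)` gives 16667, matching the numbers above.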
PS. This entire thread is also available on Mastodon, where it's much easier to read thanks to the 500-character limit there. mastodon.online/@donkersgoed/1…

Thread by 🇺🇦 Luc van Donkersgoed.


