Very cool #reInvent session by @sliedigaws about building out distributed applications using different #EventBridge patterns. Let's recap this quickly to see how we can use them to build out our #serverless apps. 🧵
The "single-bus, single-account" pattern is a super simple way to get started, especially if you have a small team. Create logical service boundaries and use a single event bus to decouple the services. This is one of my favorite ways to prototype #serverless applications.
The "single-bus, multi-account" pattern is my go-to for larger apps, and works great for multi-team orgs w/ service-level ownership reqs. Create a *global* event bus, grant access to service accounts for putEvents, and forward events to service-owned buses for rules & routing. 👍
I've never come across this "multi-bus, single-account" pattern, but I can see where it could make sense. The major downside is that it requires a lot of collaboration between service teams. I prefer working with a centralized bus, even though it does become a SPOF.
The "multi-bus, multi-account" pattern is another head scratcher for me, but again, maybe it makes sense for some people. This seems like a massive pain to manage rules across service-level teams, and additional duplication of rules from Service A's bus to Service B's. 🤷♂️
There's a really interesting "feature" that I didn't realize existed, but also never thought to try. Receiver accounts will not forward messages to a 3rd account in order to avoid cross-account loops. Just think of your AWS bill if you got a message stuck in an infinite loop! 😬
There are upsides and downsides to all of these patterns, and there's probably not a "right way" that would fit all use cases. The single bus, multi-account pattern is my favorite, but you need to be careful about just forwarding ALL events to every service & routing them there.
If you only have a few services, the cost is minimal, but cross-account invocations are charged at $1 per million, so if you have lots of services, you might want to route them (or at least restrict to categories) at the shared bus level, and fine-grain them at the service level.
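To put rough (entirely assumed) numbers on it, plus what a category-level filter on the shared bus might look like — hypothetical bus name and sources:

```python
import json
import boto3

events = boto3.client("events")

# Back-of-the-envelope, all volumes assumed:
#   10M events/month * 20 service accounts = 200M deliveries ≈ $200/month
#   vs. one category per account: ~500k deliveries each ≈ $0.50/month
# Coarse rule on the shared bus: only order-domain events cross into
# the orders account; fine-grained routing happens on its own bus.
events.put_rule(
    Name="orders-category-only",
    EventBusName="global-bus",    # assumed shared bus
    EventPattern=json.dumps(
        {"source": ["com.myapp.orders", "com.myapp.payments"]}
    ),
)
```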
I find these discussions fascinating, so I'd love to know which pattern you prefer and why.
Context is *extremely* important. The repeating line in the musical, "I'm going to reduce your… ops!", is in the context of setting up and maintaining a Kubernetes cluster. If it's not obvious the first time with "Hey yo, I'm unlike containers, no patching, no maintainers"...
…then the second time around adding "No pods or orchestrators" should make it abundantly clear. Since I consider myself a #serverless purist, I try to avoid the "K" word, but the line "I know all this stuff with K8s is excitin'" should settle any further misunderstanding.
.@alexbdebrie has another excellent post that details the benefits and downsides of using a Single-Table design with @DynamoDB. While I completely agree with him on the “benefits”, I have some thoughts on his “downsides” that I’d like to address. 🧵 alexdebrie.com/posts/dynamodb…
Downside #1: “The steep learning curve to understand single-table design”
There is no doubt that “thinking in #NoSQL” is a complete departure from traditional #RDBMS, but understanding how to correctly denormalize data is applicable to both single- AND multi-table designs.
If you are using a multi-table design in @DynamoDB that implements 3NF, then just STOP! Seriously, this is *beyond* wrong (I think presidents have been impeached for this). This is not what #NoSQL was designed for and you will get ZERO benefit from doing this. Spin up an RDBMS.
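To make "correctly denormalize" concrete, here's a minimal single-table sketch (table name, keys, and attributes all hypothetical): the customer profile and its orders share a partition key, so the whole aggregate comes back in one Query instead of a join.

```python
import boto3

table = boto3.resource("dynamodb").Table("app-table")  # hypothetical table

# One item collection per customer: profile + orders under the same PK.
with table.batch_writer() as batch:
    batch.put_item(Item={"PK": "CUST#123", "SK": "PROFILE", "name": "Ada"})
    batch.put_item(Item={"PK": "CUST#123", "SK": "ORDER#9876", "total": 42})
    batch.put_item(Item={"PK": "CUST#123", "SK": "ORDER#9877", "total": 7})
```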
Yes, "pay-per-use" is very attractive, but I'm okay with AWS's "pay-for-value" model in #serverless environments. I don't expect the cloud to dedicate resources to me for free. I may get the benefit of warm invocations, but that's an internal optimization, not a guarantee.
Steady workloads benefit from "provisioned" capacity, not just from a pricing standpoint, but for performance as well. The "why-not-do-this-high-volume-workload-on-EC2" argument is "sometimes" valid. Provisioned Concurrency w/ Lambda is a step towards negating that logic.
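For reference, this is the knob (function and alias names are hypothetical; Provisioned Concurrency targets a published version or alias, never $LATEST):

```python
import boto3

lam = boto3.client("lambda")

# Keep 50 execution environments initialized for the "live" alias,
# so steady traffic never waits on a cold start.
lam.put_provisioned_concurrency_config(
    FunctionName="checkout-handler",       # hypothetical function
    Qualifier="live",                      # alias pointing at a published version
    ProvisionedConcurrentExecutions=50,
)
```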
Kicked off @AWSreInvent 2019 by attending @houlihan_rick’s @DynamoDB modeling session. As expected, it was a 60 minute firehose of #NoSQL knowledge bombs. There was *A LOT* to take away from this, but here are some really interesting lessons that stuck out to me. #reInvent
DynamoDB performance gets BETTER with scale. Yup, you read that correctly: the busier it gets, the faster it gets. This is because, eventually, every request router in the fleet caches your partition metadata and no longer needs to look up where your storage nodes are.
Big documents are a bad idea! It's better to split data into multiple items that (if possible) are each under 1 WCU. Reads and writes get a lot cheaper, and you can still "join" the data back together with a single query on the partition key.
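And that reassembly really is one call — a minimal sketch, assuming the document was split across items sharing a (hypothetical) "DOC#42" partition key:

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("app-table")  # hypothetical table

# All the sub-items of the "big document" live under one partition key,
# so a single Query returns the whole collection in sort-key order.
resp = table.query(KeyConditionExpression=Key("PK").eq("DOC#42"))
parts = resp["Items"]  # stitch these back into the full document
```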
EventBridge _should_ become the glue that ties together all your cloud services (eventually w/ two-way message bindings). This means that the number of event types (don't forget SaaS partners) is going to increase exponentially...
Security best-practices will evolve to prefer multiple event buses so you'll have fine-grained access control across SaaS vendors, AWS service events, and custom messaging. The cognitive load to understand all those nuances, limitations, and event structures is overwhelming...
I've been spending a lot of time lately with @dynamodb in my #serverless applications, so I thought I'd share my surefire guide to migrating to it from #RDBMS. So here is…
How to switch from RDBMS to #DynamoDB in *20* easy steps… (a thread)
STEP 1: Accept the fact that Amazon.com can fit 90% of their retail site/system’s workloads into DynamoDB, so you probably can too. 🤔
STEP 2: Create an Entity-Relationship Model, just like you would if you were designing a traditional relational database. 👩‍💻