I was recently helping a client plan out a new platform capability - a single core capability to replace 3+ existing implementations across their various products.
We used a thinking model I'm calling Platform Capability Mapping, and it ended up working quite nicely.
We start by identifying the Consumers - which systems would use this new capability. Pretty much any platform capability is going to have multiple consumers. These are the customers for your internal product.
We then identify Use Cases - what do these Consumer systems need this platform capability for? Importantly, we also connect Use Cases back to Consumers, showing which consumers have which use cases. Note that often more than one consumer will share the same use case.
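To make this concrete, here's a minimal sketch of what such a map might look like captured as plain data. Every name here - the capability, the consumers, the use cases - is invented for illustration; this isn't from the client engagement itself:

```typescript
// A Platform Capability Mapping for a hypothetical notifications capability.
// All consumer and use-case names are made up for illustration.

type Consumer = "checkout-web" | "mobile-app" | "partner-api";

interface UseCase {
  name: string;
  consumers: Consumer[]; // which consumers share this use case
}

const notificationCapabilityMap: UseCase[] = [
  { name: "order confirmation emails", consumers: ["checkout-web", "mobile-app"] },
  { name: "push notifications", consumers: ["mobile-app"] },
  { name: "webhook callbacks", consumers: ["partner-api"] },
];
```

The value is mostly in the shared-use-case rows: when two consumers want the same thing, that's a strong signal for what the platform capability should support first.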
The most common feature-flagging pain I hear about is "feature flag debt" - stale flags clogging up your codebase with conditionals and dead code.
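As a hypothetical example of what that debt looks like - a flag that rolled out to 100% of users months ago, still guarding a code path:

```typescript
interface FeatureFlags {
  isEnabled(flag: string): boolean;
}

// "new_checkout_flow" shipped to all users long ago, but the
// conditional (and the dead branch behind it) remain in the code.
function renderCheckout(flags: FeatureFlags): string {
  if (flags.isEnabled("new_checkout_flow")) {
    return "new checkout"; // the only branch taken since full rollout
  }
  return "legacy checkout"; // dead code, waiting to be deleted
}
```

Cleaning this up means deleting the conditional and the dead branch - tedious, easy to put off, and exactly the kind of work you'd want a tool to automate.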
Uber just open-sourced Piranha, an internal tool for detecting and cleaning up stale feature flags.
Let's talk about it a bit...
Piranha is a tool specifically for managing feature flag debt in Uber's mobile apps: eng.uber.com/piranha/
They also have a really interesting academic paper describing it, along with lots of detail on feature flagging @ Uber in general: manu.sridharan.net/files/ICSE20-S…
Besides being a novel approach to a very common problem, their discussion of Piranha also provides some fascinating insights into an organization that's *heavily* invested in feature flagging...
After 19 years of writing automated tests in a bunch of tech stacks, I have developed an opinion or two on what makes a good test runner.
What would my ideal test runner look like? Here's a list of features.
🧵"Strap in", as the kids say.
1) Tagging: an extensible way to annotate tests with metadata. This allows external tooling to implement features like quarantining (with expiration dates), marking a subset of tests to run before every commit, and so on.
2) Pended/muted/ignored tests: out-of-the-box support for marking a test or suite of tests as pending. Ideally this would just be a convention built on top of the general tagging system, rather than a special case. (There's a sketch of how both of these might look below.)
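Here's a minimal sketch of both ideas in a hypothetical TypeScript test framework - the `test` API, the tag names, and the date are all invented for illustration, not any real runner's interface:

```typescript
// A hypothetical tagging API for a test runner; none of these names
// come from a real framework.
type Tags = Record<string, string | boolean>;

interface TestCase {
  name: string;
  tags: Tags;
  run: () => void;
}

const tests: TestCase[] = [];

function test(name: string, tags: Tags, run: () => void) {
  tests.push({ name, tags, run });
}

// Quarantining with an expiration date, expressed as plain metadata.
test("flaky network retry", { quarantined: "2024-06-01" }, () => {
  /* ... */
});

// "Pending" is just a convention on top of the same tag system,
// not a special case baked into the runner.
test("not implemented yet", { pending: true }, () => {
  /* ... */
});

// External tooling (or the runner itself) can then filter on tags:
const runnable = tests.filter(
  (t) => !t.tags.pending && !t.tags.quarantined
);
```

The point is that once tags are first-class, features like quarantine and pending fall out of generic filtering rather than needing bespoke runner support.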