Pete Hodgson
22 Nov, 12 tweets, 3 min read
I was recently helping a client plan out a new platform capability - a single core capability to replace 3+ existing implementations across their various products.

We used a thinking model I'm calling Platform Capability Mapping, and it ended up working quite nicely

🧵...
We start by identifying the Consumers - which systems would use this new capability. Pretty much any platform capability is going to have multiple consumers. These are the customers for your internal product.
We then identify Use Cases - what these Consumer systems need this platform capability for. Importantly, we also connect Use Cases back to Consumers, showing which consumers have which use cases. Note that often more than one consumer will share the same use case.
Next, we identify Features - what functionality our platform capability would provide to our consumers to help them achieve those use cases. Again, we connect the Features up to the Use Cases they support.
Finally, we cluster those Features together into Feature Sets - chunks of functionality that would likely be implemented within the same service, or share the same data source.
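If it helps to see the shape of the map concretely, here's a rough sketch of it as a data model (TypeScript; every type and field name here is my own illustration, not part of the technique itself):

```typescript
// A rough sketch of a Platform Capability Map as a data model.
// All names here are illustrative assumptions.

interface Consumer {
  id: string;            // e.g. "checkout-web"
  name: string;
}

interface UseCase {
  id: string;
  description: string;
  consumerIds: string[]; // which Consumers have this use case
}

interface Feature {
  id: string;
  description: string;
  useCaseIds: string[];  // which Use Cases this feature supports
}

interface FeatureSet {
  id: string;
  featureIds: string[];  // features likely to share a service or data source
}

interface CapabilityMap {
  consumers: Consumer[];
  useCases: UseCase[];
  features: Feature[];
  featureSets: FeatureSet[];
}
```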
This mapping activity turned out to be really helpful for a number of reasons.

For one thing, it helped to get everyone on the same page as to what was in scope for this capability.
The completed map also surfaces which features are doing the "heavy lifting" - supporting a lot of use cases (or a particularly valuable use case), or delivering functionality to a lot of consumers.
Conversely, the map helps to identify features which are really only serving one consumer, and thus might be better implemented within that consumer itself.
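Both of those analyses are just traversals of the map. A minimal sketch, building on the illustrative model above:

```typescript
// Which consumers does a feature ultimately reach, via its use cases?
function consumersReached(map: CapabilityMap, feature: Feature): Set<string> {
  const reached = new Set<string>();
  for (const useCase of map.useCases) {
    if (feature.useCaseIds.includes(useCase.id)) {
      useCase.consumerIds.forEach((c) => reached.add(c));
    }
  }
  return reached;
}

// "Heavy lifting" features: supporting many use cases, or reaching
// many consumers. The thresholds here are arbitrary placeholders.
function heavyLifters(map: CapabilityMap): Feature[] {
  return map.features.filter(
    (f) => f.useCaseIds.length >= 3 || consumersReached(map, f).size >= 3
  );
}

// Features reaching exactly one consumer - candidates to move into
// that consumer rather than living in the platform.
function singleConsumerFeatures(map: CapabilityMap): Feature[] {
  return map.features.filter((f) => consumersReached(map, f).size === 1);
}
```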
My hope is that seeing this mapped out in one place will also be valuable when formulating a phased release plan. We can use these connections to identify a sequence of coherent releases, each targeted at specific use cases and specific consumers.
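For instance, "ship one feature set" is a natural release candidate, and the map tells us which use cases (and therefore which consumers) that release would start serving. A sketch, with the same caveats as above:

```typescript
// Which use cases get at least one supporting feature from this
// feature set? Shipping the set is a candidate "coherent release"
// aimed at those use cases and their consumers.
function useCasesTouchedBy(map: CapabilityMap, fs: FeatureSet): UseCase[] {
  const shipped = new Set(fs.featureIds);
  return map.useCases.filter((uc) =>
    map.features.some((f) => shipped.has(f.id) && f.useCaseIds.includes(uc.id))
  );
}
```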
My description here is a somewhat simplified version of what we did at my client. We made a few tweaks, such as distinguishing between current state and target state. Our final map was also (unsurprisingly) a lot more complex than the neat example at the top of this thread 😆
I drew quite a lot of inspiration from Wardley Mapping when coming up with this model. If what I've described is interesting and you haven't investigated Wardley Mapping, you really should. medium.com/wardleymaps
Finally, I might write this up in more detail in a blog post if there's enough interest. Add a "like" to this tweet if you're interested.

• • •

More from @ph1

12 Aug
If you allow every product delivery team to choose which type of bolt they want to use, you might end up with half the teams using hex heads and half using phillips.

This means that now every infrastructure engineer has to carry a set of screwdrivers AND a set of allen keys. 😟
Giving delivery teams autonomy is great and all, but there exists a set of decisions which:
a) don't really impact a single team either way, but
b) have non-trivial repercussions in the aggregate

There's leverage in replacing these decisions with one standard approach.
Some examples I've seen:

- Every service serves HTTP on the same standard port
- Every service gets its DB connection string via the same mechanism (e.g. a DB_CONN env var - see the sketch after this list)
- Don't have services on MySQL 5.5, 5.6, and 8
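As an illustration of the second convention, the shared mechanism could be as small as this (a sketch assuming Node/TypeScript; the variable name DB_CONN comes from the example above, and the error handling is my own assumption):

```typescript
// Every service reads its connection string the same way, so
// infrastructure tooling only has to support one mechanism.
function dbConnectionString(): string {
  const conn = process.env.DB_CONN;
  if (!conn) {
    throw new Error("DB_CONN is not set - every service expects it to be");
  }
  return conn;
}
```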
20 May
The most common feature-flagging pain I hear of is "feature flag debt" - stale flags clogging up your codebase with conditionals and dead code.

Uber just open-sourced Piranha, an internal tool for detecting and cleaning up stale feature flags.

Let's talk about it a bit...
Piranha is a tool specifically for managing feature flag debt in Uber's mobile apps: eng.uber.com/piranha/

They also have a really interesting academic paper describing it, along with lots of interesting details on feature flagging @ Uber in general: manu.sridharan.net/files/ICSE20-S…
Besides being an interesting approach to a very common problem, their discussion of Piranha also provides some very interesting insights into an organization that's *heavily* invested in feature flagging...
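To picture the kind of rewrite such a tool automates, here's a generic before/after (my own illustration - not Piranha's actual API, rules, or output):

```typescript
// Minimal stubs so the sketch stands alone (all names are assumptions).
interface Cart { items: { price: number }[] }
const featureFlags = { isEnabled: (_flag: string) => true };
const newTaxTotal = (cart: Cart) =>
  cart.items.reduce((sum, item) => sum + item.price * 1.08, 0);
const legacyTaxTotal = (cart: Cart) =>
  cart.items.reduce((sum, item) => sum + item.price, 0) * 1.08;

// BEFORE cleanup: a stale flag, long since rolled out to 100%,
// still guarding a dead legacy branch.
function checkoutTotalBefore(cart: Cart): number {
  if (featureFlags.isEnabled("new-tax-calculation")) {
    return newTaxTotal(cart);
  }
  return legacyTaxTotal(cart); // dead code in practice
}

// AFTER cleanup: the flag check and the dead branch are deleted.
function checkoutTotalAfter(cart: Cart): number {
  return newTaxTotal(cart);
}
```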
16 Apr
I’ve noticed that high-performance engineering orgs have a clear preference for deep-stack product delivery teams - teams oriented around areas of a product, rather than around tech lines.

But where do you draw these team boundaries? I’ll list a few patterns I’ve seen...
1/ Lifecycle Teams

Different delivery teams focus on different stages of the user's lifecycle within the product.

For example, an e-comm site might have teams focused on different phases of the shopping experience, from browsing through to purchase and delivery.
2/ Audience Teams

Delivery teams focused on the different audiences (personas) of the product.

A food-delivery app might have a team that serves the needs of food couriers, a team for restaurant workers, a team for hungry consumers, and so on.
4 Mar
I've had a few conversations recently where people see a Service Mesh sidecar/library (e.g. Istio) as some sort of general alternative to a Service Chassis.

That seems misguided - there are a lot of cross-cutting concerns that a service mesh *won't* provide. For example:
1) configuration - how does your service discover general configuration values, as well as updates to those values?
2) feature flagging - a special case of configuration, but one which in my experience is worth treating as a first-class cross-cutting concern.
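In sketch form, here's the kind of surface a chassis might expose for those two concerns (all interface names and shapes here are my own illustrative assumptions):

```typescript
// Cross-cutting concerns a service mesh sidecar won't give you,
// as a service chassis might expose them.

interface ConfigSource {
  // Read the current value of a configuration key.
  get(key: string): string | undefined;
  // Subscribe to runtime updates to that value.
  onChange(key: string, listener: (newValue: string) => void): void;
}

interface FeatureFlags {
  // Flag decisions are often contextual (per user, per request),
  // which is part of why flags deserve first-class treatment.
  isEnabled(flag: string, context?: { userId?: string }): boolean;
}

interface ServiceChassis {
  config: ConfigSource;
  flags: FeatureFlags;
}
```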
2 Nov 19
After 19 years of writing automated tests in a bunch of tech stacks I have developed an opinion or two on what makes a good test runner.

What would my ideal test runner look like? Here's a list of features.

🧵"Strap in", as the kids say.
1) Tagging: an extensible way to annotate tests with metadata. This allows external tooling to implement features like quarantining (with expiration dates), marking a subset of tests as running before commit, and so on.
2) Pended/muted/ignored tests: Out-of-the-box support for marking a test or suite of tests as pending. Ideally this would just be a convention built on top of the general tagging system, rather than a special case.
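As a sketch, here's roughly what I mean (a hypothetical runner API, not any real framework):

```typescript
// Hypothetical runner API: tags are first-class metadata, and
// "pending" is just a conventional tag rather than a special case.
type Tag = string | { quarantined: { expires: string } };

function test(name: string, opts: { tags?: Tag[] }, body: () => void): void {
  // A real runner would expose tags to external tooling for
  // filtering; this stub only honours the "pending" convention.
  if (opts.tags?.includes("pending")) {
    console.log(`PENDING: ${name}`);
    return;
  }
  body();
}

test("charges the card", { tags: ["pre-commit"] }, () => {
  // selected into the pre-commit subset via its tag
});

test("flaky payment retry", {
  tags: [{ quarantined: { expires: "2019-12-31" } }], // expires, forcing a decision
}, () => { /* ... */ });

test("refunds partial orders", { tags: ["pending"] }, () => { /* ... */ });
```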
