Pete Hodgson
Independent consultant helping engineering teams tackle thorny problems. Formerly Earnest, ThoughtWorks.
22 Nov 20
I was recently helping a client plan out a new platform capability - a core capability to replace 3+ existing implementations across their various products.

We used a thinking model I'm calling Platform Capability Mapping, and it ended up working quite nicely

🧵...
We start by identifying the Consumers - which systems would use this new capability. Pretty much any platform capability is going to have multiple consumers. These are the customers for your internal product.
We then identify Use Cases - what these Consumer systems need this platform capability for. Importantly, we also connect Use Cases back to Consumers, showing which consumers have which use cases. Note that often more than one consumer will share the same use case.
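To make the mapping concrete, here's a rough sketch of it as data. The capability, consumers, and use cases are invented for illustration - they aren't from the client engagement.

```typescript
// A minimal sketch of Platform Capability Mapping as data, for a hypothetical
// "notifications" capability. All names here are made up.
type Consumer = "web-storefront" | "mobile-app" | "back-office";

interface UseCase {
  name: string;
  consumers: Consumer[]; // which Consumers have this Use Case
}

const useCases: UseCase[] = [
  { name: "transactional email", consumers: ["web-storefront", "mobile-app"] },
  { name: "push notification", consumers: ["mobile-app"] },
  { name: "ops alerting", consumers: ["back-office"] },
];
```

Note how "transactional email" is shared by more than one consumer - exactly the sharing pattern mentioned above.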
12 Aug 20
If you allow every product delivery team to choose which type of bolt they want to use, you might end up with half the teams using hex socket heads and half using Phillips.

This means that now every infrastructure engineer has to carry a set of screwdrivers AND a set of allen keys. 😟
Giving delivery teams autonomy is great and all, but there exists a set of decisions that:
a) don't really impact a single team either way, but
b) have non-trivial repercussions in the aggregate

There's leverage in replacing these decisions with one standard approach.
Some examples I've seen:

- Every service serves HTTP on the same standard port
- Every service gets its DB connection string via the same mechanism (e.g. a DB_CONN env var - sketched below)
- Don't have services on MySQL 5.5, 5.6, and 8
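A minimal sketch of what that DB_CONN convention looks like in practice - the port number and env var name are illustrative stand-ins for whatever your org standardizes on, not a real standard:

```typescript
// Every service reads its DB connection string from the same env var and
// serves HTTP on the same port. Both values are illustrative.
import * as http from "http";

const STANDARD_PORT = 8080; // same port for every service, by convention

const dbConn = process.env.DB_CONN;
if (!dbConn) {
  // failing fast keeps the convention honest
  throw new Error("DB_CONN not set");
}
// dbConn would be handed to whatever DB client this service uses

http.createServer((_req, res) => res.end("ok")).listen(STANDARD_PORT);
```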
20 May 20
The most common feature-flagging pain I hear of is "feature flag debt" - stale flags clogging up your codebase with conditionals and dead code.

Uber just open-sourced Piranha, an internal tool for detecting and cleaning up stale feature flags.

Let's talk about it a bit...
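To picture the debt itself, here's a generic sketch (my illustration - not Uber's code and not Piranha's API): once a flag is rolled out to 100%, the conditional and the losing branch are just dead weight.

```typescript
// Hypothetical types and helpers, purely for illustration.
type Cart = { items: string[] };
declare const featureFlags: { isEnabled(flag: string): boolean };
declare function newCheckoutFlow(cart: Cart): string;
declare function legacyCheckoutFlow(cart: Cart): string;

// Before cleanup: "enable_new_checkout" is fully rolled out, so the check
// always passes and the legacy branch is dead code - that's flag debt.
function checkout(cart: Cart): string {
  if (featureFlags.isEnabled("enable_new_checkout")) {
    return newCheckoutFlow(cart);
  }
  return legacyCheckoutFlow(cart);
}

// After cleanup - the kind of rewrite a tool like Piranha automates, leaving
// legacyCheckoutFlow unreferenced and ready to delete too.
function checkoutAfterCleanup(cart: Cart): string {
  return newCheckoutFlow(cart);
}
```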
Piranha is a tool specifically for managing feature flag debt in Uber's mobile apps: eng.uber.com/piranha/

They also have a really interesting academic paper describing it, along with lots of detail on feature flagging @ Uber in general: manu.sridharan.net/files/ICSE20-S…
Besides being a clever approach to a very common problem, their discussion of Piranha also offers a window into an organization that's *heavily* invested in feature flagging...
16 Apr 20
I’ve noticed that high-performance engineering orgs have a clear preference for deep-stack product delivery teams. Teams oriented around areas of a product, rather than around tech lines.

But where do you draw these team boundaries? I’ll list a few patterns I’ve seen...
1/ Lifecycle Teams

Different delivery teams focus on different stages of the user's lifecycle within the product.

For example, an e-comm site might have teams focused on different phases of the shopping experience, from browsing through to purchase and delivery.
2/ Audience Teams

Delivery teams focused on the different audiences (personas) of the product.

A food-delivery app might have a team that serves the needs of food couriers, a team for restaurant workers, a team for hungry consumers, and so on.
4 Mar 20
I've had a few conversations recently where people see a Service Mesh sidecar/library (e.g. Istio) as some sort of general alternative to a Service Chassis.

That seems misguided - there are a lot of cross-cutting concerns that a service mesh *won't* provide. For example:
1) configuration - how does your service discover general configuration values, and how does it pick up updates to those values?
2) feature flagging - a special case of configuration, but one which in my experience is worthy of treating as a first-class cross-cutting concern (sketched below).
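As a rough sketch of the kind of chassis-level interface I'm talking about - all names are invented, and this isn't Istio's API or any particular chassis:

```typescript
// A hypothetical slice of a Service Chassis covering the two concerns above -
// things a service mesh sidecar won't hand you.
interface ConfigSource {
  get(key: string): string | undefined;
  // pushes updated values to the service, no restart needed
  onChange(key: string, handler: (newValue: string) => void): void;
}

interface FeatureFlags {
  // like config, but evaluated per user/request context
  isEnabled(flag: string, context: { userId?: string }): boolean;
}

interface ServiceChassis {
  config: ConfigSource;
  flags: FeatureFlags;
}
```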
2 Nov 19
After 19 years of writing automated tests in a bunch of tech stacks I have developed an opinion or two on what makes a good test runner.

What would my ideal test runner look like? Here's a list of features.

🧵"Strap in", as the kids say.
1) Tagging: an extensible way to annotate tests with metadata. This allows external tooling to implement features like quarantining (with expiration dates), marking a subset of tests as running before commit, and so on.
2) Pending/muted/ignored tests: Out-of-the-box support for marking a test or suite of tests as pending. Ideally this would just be a convention built on top of the general tagging system, rather than a special case (see the sketch below).
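Here's a sketch of how 1) and 2) might hang together, with an entirely made-up test API (not any real runner):

```typescript
// Hypothetical test() signature, purely for illustration.
declare function test(
  name: string,
  meta: { tags: string[]; quarantineExpires?: string },
  body: () => void
): void;

// Tagging: plain metadata that external tooling can act on.
test("charges the card", { tags: ["pre-commit"] }, () => {
  /* runs before every commit */
});

test("retries on gateway timeout", {
  tags: ["quarantined"],
  quarantineExpires: "2020-12-01", // tooling can enforce the expiry date
}, () => {
  /* flaky - quarantined until fixed */
});

// Pending as a convention on top of tagging, not a special case.
test("supports partial refunds", { tags: ["pending"] }, () => {
  /* skipped until the feature lands */
});
```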