Dragan Stepanović
Trying hard not to think about small batches, bottlenecks, and systems. In the meantime: XP, #tocot, Lean, Systems Thinking
Aug 19, 2023 5 tweets 1 min read
You don't need more speculative design (overengineering), you need faster feedback loops. 🧵

I've seen too many developers trying to address slow feedback loops with more speculative design. Also known as “let's do it properly this time!”

The link between these two is that longer feedback loops drive a higher cost of change, which incentivizes us to speculate more upfront and increase the batch size.

It's important to recognize the urge for more speculation as a probable indication of too slow feedback loops.
May 15, 2023 11 tweets 2 min read
Some of the things I think about when influencing change in orgs/teams as a Principal/Staff engineer 👇

Invest time in building relationships before (big) course-correcting. At the start, a productive relationship is more important than being right.
Dec 13, 2022 12 tweets 2 min read
The paradox of automated tests as a safety net: as the testability of a codebase goes up, the ROI for test automation goes down?! 🧵

Automated tests as a safety net are most valuable in codebases where making a change carries high risk. Those are conflated, coupled codebases where:

1) the average rate of change per element (method/class) is high, and

2) the risk of introducing problems with a change is high (too much coupling).
The latter is a consequence of the former, and codebases with long methods and classes inherently exhibit these characteristics.
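A rough way to make that concrete, as a sketch of my own rather than the author's method: score each element by churn (a proxy for rate of change) times length (a rough proxy for coupling/risk) and see where a test safety net would pay off most. All names and numbers below are made up.

```python
# Illustrative scoring sketch (not from the thread): rank elements by how much
# a test safety net would pay off, using churn as a proxy for rate of change
# and length as a rough proxy for coupling/risk.
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    changes_last_year: int   # how often this method/class gets touched
    lines: int               # long methods/classes tend to hide more coupling

def safety_net_value(e: Element) -> float:
    # High churn times high risk = where the safety net earns its keep.
    return e.changes_last_year * e.lines

elements = [
    Element("OrderService.process", changes_last_year=40, lines=300),  # hypothetical
    Element("Money.add", changes_last_year=5, lines=12),               # hypothetical
]
for e in sorted(elements, key=safety_net_value, reverse=True):
    print(e.name, safety_net_value(e))
```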
Feb 17, 2022 22 tweets 4 min read
When it comes to PRs and code reviews, if you are optimizing for these parameters (and I believe you should), then this is the difference (on a natural log scale!) between the worlds of async code reviews and pairing/mobbing.

Buckle up 🧵

This is a sample data set of 500 PRs from a typical product development team using async code reviews and PRs. I did this analysis across tens of thousands of PRs and the results are pretty much the same.

/2
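If you want to try this kind of analysis on your own team's data, here is a minimal sketch under assumptions of mine: the file name and column names (pr_size_loc, review_wait_hours) are hypothetical stand-ins for whatever your Git/PR tooling exports, not anything from the thread.

```python
# Minimal sketch of this kind of PR analysis; 'prs.csv' and its column names
# are hypothetical stand-ins for your own PR export.
import numpy as np
import pandas as pd

prs = pd.read_csv("prs.csv")  # columns assumed: pr_size_loc, review_wait_hours

# Work on a natural log scale so the long right tail doesn't dominate.
prs["log_size"] = np.log(prs["pr_size_loc"].clip(lower=1))

# Median review wait per PR-size bucket.
buckets = pd.cut(prs["log_size"], bins=5)
print(prs.groupby(buckets, observed=True)["review_wait_hours"].median())
```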
Jan 11, 2022 6 tweets 1 min read
Estimating effort in a fully loaded system is a poor man's attempt at achieving predictability while eroding trust and jeopardizing psychological safety 🧵

/1
If an org is fully loaded with work, i.e. has a very high utilization rate (almost all of them are, unfortunately), then most of the work's lead time is actually spent waiting, rather than being worked on.

/2
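To see why, the classic M/M/1 queueing approximation is enough (this is my illustration, not the author's data): expected wait grows as ρ/(1−ρ) times the service time as utilization ρ climbs.

```python
# Illustrative only: the standard M/M/1 relationship between utilization and
# queueing delay, not a model of any particular org.
def wait_to_service_ratio(utilization: float) -> float:
    """Expected time in queue divided by service time: rho / (1 - rho)."""
    return utilization / (1.0 - utilization)

for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"utilization {rho:.0%}: work waits ~{wait_to_service_ratio(rho):.0f}x as long as it takes to do")
```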
Dec 24, 2021 4 tweets 1 min read
If it takes me 5 minutes to rename a method and 1 hour to get a review and PR approval, that means the wait-to-processing-time ratio is 60/5 = 12, and the flow efficiency is only 7.7%.

Do you really think that a system this inefficient is incentivizing refactoring and small steps?

1/4
And yet, people still think they are bragging when they say 'We review PRs in one hour'.

Wait time matters only in relation to the processing/touch time and thus as part of the flow efficiency.

2/4
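A tiny helper that reproduces the arithmetic from the rename example above (illustrative only):

```python
# Reproduces the arithmetic from the rename example above.
def flow_efficiency(touch_time_min: float, wait_time_min: float) -> float:
    """Fraction of total lead time spent actually working on the change."""
    return touch_time_min / (touch_time_min + wait_time_min)

touch, wait = 5, 60                            # 5 min to rename, 1 hour waiting for review
print(wait / touch)                            # 12.0 -> wait-to-processing ratio
print(f"{flow_efficiency(touch, wait):.1%}")   # 7.7%
```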
Dec 12, 2021 6 tweets 2 min read
Teams that have higher levels of psychological safety tend to co-create more. But what I find even more important is that teams that co-create tend to have higher levels of psychological safety, driving that reinforcing feedback loop.

1/6

With Pull Requests, you see just the end result: very thought-through and polished.

With co-creation, you see every single mistake made while navigating the solution space and incrementally solving a problem.

2/6
Jan 26, 2021 7 tweets 2 min read
It's impossible to have small batches and thus Continuous Delivery if there's a part of the (tech) value stream that has high transaction cost.
At that place in the system, batch size starts to build up and clogs everything downstream.

Like an elephant moving through a boa constrictor.

Conversely, looking for the place in the system with the highest transaction cost is a great leverage point for increasing the throughput of the whole system.

Yup, you got it: the #tocot way of continuous improvement.
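A toy queue simulation (mine, with made-up numbers; "review" stands in for whichever step carries the highest transaction cost) shows where the elephant gets stuck:

```python
# Toy simulation: work piles up in front of the step with the highest
# transaction cost, while everything downstream of it is starved.
capacity = {"code": 10, "review": 3, "deploy": 10}  # items per day; 'review' is the constraint
queue = {step: 0 for step in capacity}
arrivals_per_day = 8

for day in range(20):
    queue["code"] += arrivals_per_day
    moved = min(queue["code"], capacity["code"])
    queue["code"] -= moved
    queue["review"] += moved
    moved = min(queue["review"], capacity["review"])
    queue["review"] -= moved
    queue["deploy"] += moved
    queue["deploy"] -= min(queue["deploy"], capacity["deploy"])

print(queue)  # {'code': 0, 'review': 100, 'deploy': 0} -- the clog sits at 'review'
```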
Jun 19, 2019 7 tweets 1 min read
Organizations that lack psychological safety are incentivized toward big batches (a.k.a. waterfall).

Let me unpack that.

Any incremental effort exposes a temporary lack of consistency, because it optimizes for improvement through learning, not for upfront perfection.

1/7
If you want to go incremental, you have to accept that you'll be imperfect and inconsistent, at least temporarily. To be able to be imperfect, you have to expose yourself, through your work, to feedback from colleagues, bosses, users, etc.
2/7