A developer (in a team or a multi-team environment) can never know how long it takes to develop a feature. A #NoEstimates thread 🧵 explaining why...
There are many reasons why estimates can never work (as input to reliable, money- and time-sensitive decision making). But there's one that goes against the very basic beliefs we hold in software development #NoEstimates
The basic premise that estimates rely on is that a "Developer" can reliably know how long it takes to develop a feature (let's call it that for now) that they have been presented with in the form of a spec or user story #NoEstimates
This premise is false for a number of reasons. Let's explore some of them below: 1. The feature description is never final (more details emerge later) or fully specified (otherwise it would be the code itself) #NoEstimates
2. Even if we accepted that the specification were final and complete (not possible), the developer will not be the one testing the feature. Testing takes time, possibly even more than developing the feature itself #NoEstimates
3. When a feature is tested, it may generate rework (bugs, small improvements, etc.), which could not have been accounted for in the original estimate. #NoEstimates
4. The developer does not know when the feature will start to be tested, as they are not the ones testing it, and they cannot know the workload on the testing side. #NoEstimates
5. When the tester picks up the feature, they will have had time to understand aspects the developer did not think about (for example, how it interacts with other features in the same software). This may lead to changes #NoEstimates
6. The developer is likely not the architect (at least in multi-team environments), and will therefore make assumptions that may turn out to be wrong once the architect makes decisions to account for the other features being developed #NoEstimates
7. Once the tester is done testing the feature (exploratory or otherwise), the software moves to system/end-to-end testing, which may again generate rework #NoEstimates
And the list of reasons for the original estimate to be unreliable goes on, and on, and on! It is little wonder that average project delays in some environments hover around 60%, with some reaching 200% or more, as reported in the literature #NoEstimates
We have much better alternatives, one of which I describe at noestimatesbook.com, but many more are out there!
If you've read this far, and would like to read a blog post about this topic, retweet this thread and tag me, I'll write a longer form blog post if we reach 100 retweets.
There are three critical logical fallacies that people commit when they think about estimates. And they are quite easy to debunk too! A #NoEstimates thread...🧵
The first: people think "better" estimates get you more predictability. If this were true, every transportation system in the world would spend MILLIONS on estimators! Instead, what they do is: measure, repeat.
They measure past performance, and assume similar future performance. A great example of this is the drawing of bus/train/air traffic timetables.
In #Agile software development, this can be easily done by measuring cycle time for Epics/Features/Stories, and using that to plan!
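To make the "measure, then plan" idea concrete, here is a minimal sketch of a Monte Carlo forecast built from past cycle times. The cycle-time numbers and the serial one-item-at-a-time sampling are illustrative assumptions, not data from any real team; in practice the history would come from your own tracking tool.

```python
import random

# Hypothetical historical cycle times (days per story) from past work.
# In a real setting, pull these from your team's own tracking data.
cycle_times = [2, 3, 1, 5, 2, 8, 3, 4, 2, 13, 3, 2]

def forecast(remaining_items, history, simulations=10_000, seed=42):
    """Monte Carlo forecast: repeatedly sample past cycle times to
    project how long the remaining items might take, and report the
    50th and 85th percentile of the simulated totals."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.choice(history) for _ in range(remaining_items))
        for _ in range(simulations)
    )
    return totals[simulations // 2], totals[int(simulations * 0.85)]

p50, p85 = forecast(10, cycle_times)
print(f"50% chance within {p50} days, 85% chance within {p85} days")
```

Note the design choice: instead of asking anyone "how long will it take?", the forecast is stated as a probability range derived entirely from measurement, which is the same move a timetable makes.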
Thanks to @DrAgilefant and friends, just got my hands on a thesis that shares some enlightening insights into how common and impactful estimation errors are #NoEstimates
I will be publishing more of what I read in this thread.
"Outliers are so frequent that the noise drowns out the signal in the data"
In other words: even if you have data from "actuals", you don't really know if you will be late because the outliers are only visible too late and have a huge impact on delays #noestimates
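A tiny worked example of how a single outlier distorts "actuals", using made-up cycle-time numbers (an assumption for illustration, not data from the thesis): one blown-up story more than doubles the average, while the median barely moves.

```python
import statistics

# Hypothetical cycle times in days; the 60 is one story that blew up.
typical = [3, 4, 2, 5, 3, 4, 3]
with_outlier = typical + [60]

mean_typical = statistics.mean(typical)      # ~3.4 days
mean_outlier = statistics.mean(with_outlier)  # jumps to 10.5 days
median_outlier = statistics.median(with_outlier)  # stays at 3.5 days

# One outlier triples the mean, so a plan based on "average actuals"
# looks fine right up until the tail event lands.
print(mean_typical, mean_outlier, median_outlier)
```

This is the "noise drowns out the signal" problem in miniature: averages of past data hide exactly the tail events that cause the large delays.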
Kmart may have gone bankrupt (at least in part) due to a failed 1.4bn USD IT project.
One more case where estimation did not save a company, at all. Indeed, the errors were so large (in both the business estimates and the software estimates) that the company went bust #NoEstimates