Software development estimates are frequently *way* off.
Why? Because many aspects of software development are nearly impossible to estimate.
Here are 9 reasons software development estimates fail:
👇
1. “Done” is debatable.
Quality attributes like performance, code quality, security, accessibility, reusability, readability, and usability are hard to specify and quantify. This leads to time-consuming arguments and negotiations over when code is truly done.
2. Merge conflict overhead is unpredictable.
The frequency and complexity of conflicts vary based on team size, code coupling, ticket size, branching strategy, tech, and merge frequency.
3. Dev environment issues are unpredictable.
Examples include hardware issues, framework and library bugs, slow servers, service interruptions, internet problems, access control issues, third-party outages, and VPN issues.
4. Untestable code is hard to identify up front.
If a new feature interacts with code that isn’t friendly to testing, refactoring the code may be required. This is hard to detect up front when estimating effort.
5. Bad requirements are hard to detect early.
Incomplete, vague, conflicting, outdated, or incorrect requirements are often hard to detect until implementation, deploy, or usability testing.
6. Requirements are “lossy”.
No document or tool can convey ideas with perfect clarity. This leads to misunderstandings, time-consuming scope negotiations, and clarifications.
7. Developer velocity varies daily.
Developer effectiveness and efficiency vary widely, depending on tech expertise, existing code quality, domain knowledge, competing priorities, dev environment stability, turnover, sickness, time off, and more.
8. Communication overhead is dynamic and unpredictable.
Overhead varies based on team size, solution complexity, documentation needs, turnover, coupling, and approach.
Every extra human adds overhead that’s hard to quantify.
9. Cross-team dependencies reduce control and increase the risk of delays.
Cross-team projects require multiple teams to deliver on time. That’s hard given all the variables listed above.
In summary, this is why I avoid fixed-bid development projects.
Fixed bids presume predictability, autonomy, certainty, and control that rarely exist in the world of custom software development.
And yes, many of these use useEffect behind the scenes. But that’s the idea: instead of calling useEffect directly, you should probably use a mature abstraction at this point.
Just learned a new monorepo pattern from @rwieruch: Incubate and hatch.
Goal: Compose a separate repo within a monorepo. This is useful when a project will be developed initially inside a monorepo (incubated) and later handed to a separate team (hatched).
1/4👇
Here's the incubate and hatch approach:
1. Create a separate repo.
2. Clone the new repo into your existing monorepo. (This is called incubation.) Ignore the repo via .gitignore. This allows rapid dev by referencing local versions of relevant monorepo dependencies.
2/4
3. When the project is ready to be handed to the separate team, the repository is "hatched". Since the project had a dedicated repo all along, this is easy. Relevant dependencies are set to the current published version, and the new team can upgrade deps over time, as desired. (Config sketch below.)
3/4
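Here's a minimal sketch of the config involved (not from the original thread), assuming a pnpm/Yarn workspace, a monorepo package @acme/ui, and an incubated project at packages/new-app. All names and versions are hypothetical.

```jsonc
// .gitignore (monorepo root): keep the incubated repo out of the monorepo's git
//   packages/new-app/

// packages/new-app/package.json during incubation:
// the workspace protocol resolves to the local monorepo package for rapid dev
{
  "dependencies": {
    "@acme/ui": "workspace:*"
  }
}

// packages/new-app/package.json at hatch time:
// pinned to the published version; the new team upgrades on its own schedule
{
  "dependencies": {
    "@acme/ui": "^1.0.0"
  }
}
```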
Two levels of automated testing:
1. Unit tests (testing functions and components in isolation, often via @fbjest)
2. In-browser tests (testing the app in the browser, often via @Cypress_io)
The struggle: How do we avoid testing the same things twice?
Two approaches I've seen:
1. Focus mostly on unit testing, and create a small number of in-browser "happy path" tests.
2. Focus mostly on in-browser testing, and only create unit tests when desired.
Trying to cover all scenarios in both leads to a lot of duplicated effort.
That said, I'm not saying redundant coverage is bad. It's often impractical to exercise all code via the browser, especially since in-browser tests are slower.
So comprehensive unit tests are often useful. I'm just searching for a practical balance between the two approaches.
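To make the two levels concrete, here's a rough sketch. The formatPrice function, the /checkout route, and the selectors are all hypothetical, and the two tests would live in separate files and runners.

```ts
// Unit test (Jest): one function in isolation. Fast and fine-grained.
// (formatPrice is a hypothetical module.)
import { formatPrice } from "./formatPrice";

test("formats cents as USD", () => {
  expect(formatPrice(1999)).toBe("$19.99");
});
```

```ts
// In-browser "happy path" test (Cypress): the real app, end to end. Slower, but broad.
describe("checkout", () => {
  it("completes a purchase", () => {
    cy.visit("/checkout");
    cy.get("[data-testid=pay]").click();
    cy.contains("Order confirmed").should("be.visible");
  });
});
```

The unit test runs in milliseconds; the Cypress test exercises the full stack. That gap is why duplicating every scenario at both levels gets expensive.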
The impact:
Each line must be carefully reviewed in hopes of catching all the mistakes.
We can’t assume anything works reliably, makes sense, does what it claims, or matches requirements.
We must question every line.
That’s a big problem.
Reviewing untrusted, poor-quality code is time-consuming and demoralizing.
We're unlikely to catch all the problems.
It's impractical to "review our way to quality" when starting with code that's low quality or solves the wrong problem.
Thankfully, for most of my career, my teams have worked with developers we could trust. But when we couldn't, PRs required HUGE amounts of time, multiple rounds of comments, and occasionally, a complete rewrite.
✅ Keep state as local as possible. Start by declaring state in the component that uses it. Lift as needed.
✅ Store data that doesn't need to render in refs (sketch below)
✅ Minimize context usage
1/x...
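A quick sketch of the first two tips. SearchBox and its details are hypothetical:

```tsx
import React, { useRef, useState } from "react";

function SearchBox() {
  // Local state: declared in the component that uses it. Lift only when a parent needs it.
  const [query, setQuery] = useState("");

  // Ref: data that never drives rendering; updating it causes no re-render.
  const keystrokes = useRef(0);

  return (
    <input
      value={query}
      onChange={(e) => {
        keystrokes.current += 1; // tracked without re-rendering
        setQuery(e.target.value); // re-renders, because the UI displays it
      }}
    />
  );
}
```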
✅ Avoid putting data that changes a lot in context
✅ Separate contexts based on when they change (sketch below)
✅ Place context providers as low as possible
✅ Memoize expensive operations via useMemo
✅ Avoid needless renders via React.memo (example below)
2/x...
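A sketch of the context tips. ThemeContext, CartContext, and the components are hypothetical; cart updates are omitted for brevity:

```tsx
import React, { createContext, useContext, useState } from "react";

// Contexts split by change frequency: theme rarely changes, the cart changes often.
const ThemeContext = createContext("light");
const CartContext = createContext<string[]>([]);

function Logo() {
  const theme = useContext(ThemeContext); // not re-rendered by cart updates
  return <h1 className={theme}>My Store</h1>;
}

function CartBadge() {
  const items = useContext(CartContext); // re-renders only when the cart changes
  return <span>{items.length} items</span>;
}

// Cart state lives in its own provider, placed as low in the tree as possible.
// The stable `children` element means cart updates re-render only CartContext consumers.
function CartProvider({ children }: { children: React.ReactNode }) {
  const [items] = useState<string[]>([]); // setter omitted from this sketch
  return <CartContext.Provider value={items}>{children}</CartContext.Provider>;
}

function App() {
  return (
    <ThemeContext.Provider value="light">
      <Logo />
      <CartProvider>
        <CartBadge />
      </CartProvider>
    </ThemeContext.Provider>
  );
}
```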
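And a sketch of the two memoization tips working together. ExpensiveList and Page are hypothetical:

```tsx
import React, { useMemo, useState } from "react";

// React.memo: skips re-rendering when props are shallow-equal.
const ExpensiveList = React.memo(function ExpensiveList({ items }: { items: string[] }) {
  return (
    <ul>
      {items.map((item) => (
        <li key={item}>{item}</li>
      ))}
    </ul>
  );
});

function Page({ rawItems }: { rawItems: string[] }) {
  const [filter, setFilter] = useState("");

  // useMemo: recompute the expensive sort only when rawItems changes,
  // returning a stable reference so React.memo's shallow compare works.
  const sorted = useMemo(() => [...rawItems].sort(), [rawItems]);

  return (
    <>
      <input value={filter} onChange={(e) => setFilter(e.target.value)} />
      {/* Typing in the filter re-renders Page, but not ExpensiveList */}
      <ExpensiveList items={sorted} />
    </>
  );
}
```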
✅ Put content that renders frequently in a separate component to minimize what re-renders
✅ Split complex controlled forms into separate components
✅ Consider uncontrolled components for large, expensive forms (example below)
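A sketch of the uncontrolled approach for a big form. The field names and onSave are hypothetical:

```tsx
import React from "react";

// Uncontrolled: the DOM owns the values, so keystrokes never re-render React.
function BigForm({ onSave }: { onSave: (data: FormData) => void }) {
  function handleSubmit(e: React.FormEvent<HTMLFormElement>) {
    e.preventDefault();
    onSave(new FormData(e.currentTarget)); // read all values once, at submit
  }

  return (
    <form onSubmit={handleSubmit}>
      <input name="firstName" defaultValue="" />
      <input name="lastName" defaultValue="" />
      <button type="submit">Save</button>
    </form>
  );
}
```

Because React doesn't track each keystroke, even very large forms stay cheap. The tradeoff: you can't easily validate or react to values as the user types.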