Habit: When declaring REST API response types via #TypeScript, I only declare properties for the fields we use.
Benefits: 1. The type is simpler. 2. The type contains no noise. All properties are relevant. 3. The type is handy for mocks. It declares only the properties we use.👍
This tip applies to GraphQL too. I don't want to work with a generated response type that contains 100 optional properties if we only use 5. So when using GraphQL, I create a type for each unique query.
To clarify, generated types are great. 👍 I just don't want to use them directly because they often contain properties I don't need or use.
So, I use TypeScript's Pick or Omit utility types to derive my own narrower types.
Example:
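A minimal sketch of the idea. The `UserResponse` type and its field names are hypothetical stand-ins for whatever a code generator would emit:

```typescript
// Hypothetical full response type, as a code generator might emit it.
interface UserResponse {
  id: string;
  name: string;
  email: string;
  createdAt: string;
  avatarUrl: string;
  // ...dozens more fields we never read
}

// Derive a narrow type containing only the fields we actually use.
type User = Pick<UserResponse, "id" | "name" | "email">;

// Mocks stay small because they only need the picked fields.
const mockUser: User = {
  id: "1",
  name: "Ada",
  email: "ada@example.com",
};
```

If the generated type changes upstream, `Pick` fails to compile when a picked field disappears — a useful early warning.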
✅ Keep state as local as possible. Start by declaring state in the component that uses it. Lift as needed.
✅ Store data that doesn't need to render in refs
✅ Minimize context usage
1/x...
✅ Avoid putting data that changes a lot in context
✅ Separate contexts based on when they change
✅ Place context providers as low as possible
✅ Memoize expensive operations via useMemo
✅ Avoid needless renders via React.memo
2/x...
✅ Put content that renders frequently in a separate component to minimize the amount that's rendered
✅ Split complex controlled forms into separate components
✅ Consider uncontrolled components for large, expensive forms
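To illustrate the memoization tip above, here's a framework-free sketch of the caching idea behind `useMemo`: recompute an expensive value only when its input changes. (In a React component you'd use the hook itself; this standalone version just shows the mechanism.)

```typescript
// Cache the last argument and result; recompute only on change.
function memoizeOne<A, R>(fn: (arg: A) => R): (arg: A) => R {
  let lastArg: A | undefined;
  let lastResult: R | undefined;
  let called = false;
  return (arg: A): R => {
    if (!called || arg !== lastArg) {
      lastArg = arg;
      lastResult = fn(arg); // the expensive work runs only here
      called = true;
    }
    return lastResult as R;
  };
}

let computations = 0;
const expensiveSquare = memoizeOne((n: number) => {
  computations++; // track how often the real work actually runs
  return n * n;
});

expensiveSquare(4); // computes
expensiveSquare(4); // cached — no recomputation
expensiveSquare(5); // input changed — computes again
```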
7 things that keep teams from doing Continuous Delivery (deploying daily or even hourly):
1. Non-atomic PRs.
Solution: Each PR must be ready for a prod deploy before it can be merged to `develop`. To separate deployment from release, use a feature flag.
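A minimal sketch of the feature-flag check that separates deployment from release. The flag names and inline lookup are hypothetical; real systems typically read flags from a config store or a flag service:

```typescript
// Hypothetical flag registry; in practice this would come from config.
type FlagName = "newCheckout" | "betaSearch";

const flags: Record<FlagName, boolean> = {
  newCheckout: false, // merged and deployed, but not yet released
  betaSearch: true,
};

function isEnabled(flag: FlagName): boolean {
  return flags[flag];
}

// The unfinished code path ships dark; flipping the flag releases it
// without another deploy.
function renderCheckout(): string {
  return isEnabled("newCheckout") ? "new checkout" : "old checkout";
}
```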
1/x (thread)
2. Ad-hoc release notes.
Solution: Declare release notes in CHANGELOG.md. Require an entry in this file in each PR. Validate this file has changed on each CI build. This ensures the release notes are customer-friendly, accurate, and complete.
2/x
3. Flaky tests.
Solution: Most tests should be unit and integration tests. Mock the API. Simplify E2E tests. E2E should merely ensure each section loads. Anything more granular may lead to flakiness due to changing data.
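One common way to mock the API, sketched below: inject the fetch layer so unit tests can pass a fake instead of hitting a real server. The function and type names here are hypothetical:

```typescript
// The code under test depends on an injected fetcher, not on global fetch.
type UserFetcher = (id: string) => Promise<{ id: string; name: string }>;

async function getUserName(
  id: string,
  fetchUser: UserFetcher
): Promise<string> {
  const user = await fetchUser(id);
  return user.name;
}

// In a unit test, pass a fake fetcher — fast, deterministic, never flaky.
const fakeFetcher: UserFetcher = async (id) => ({ id, name: "Ada" });
```

Production code passes a real fetcher that wraps `fetch`; tests pass the fake.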
3/x
"When a measure becomes a target, it ceases to be a good measure."
— Goodhart’s Law
Example 1: Story points measure difficulty. But if you set a target for story points completed per week, developers inflate their story point estimates to ensure they hit the target.
Example 2: Tracking code coverage helps the team understand what code is tested. But if you target a certain code coverage percentage, developers game the system by writing useless tests.
Example 3: Burn down charts help estimate the completion date. But if you set a goal of burning down to zero every sprint, developers commit to less work in each sprint. And they avoid proposing any work that’s hard to estimate. This ensures they can burn down to zero.
Let’s talk about the implications of researching decisions (Thread)
If I quickly make a decision, it feels unimportant. I go with my gut. I’m typically happy.
Occasionally, I decide to research a decision. More research leads to better outcomes, right?
Not necessarily...
If I heavily research a decision, the decision feels more important. I want to justify my research time. So, I search for every tradeoff. I optimize for perfection. But, because my expectations are now so high, I’m more likely to be disappointed! 🤦‍♂️
So, here’s the trap: The more I research a decision, the more important the decision seems. This leads to problems:
1. Overspending due to over-valuing minor differences and fear of missing out.