Businesses where people do the minimum at work are more stable and productive. When the org culture expects everyone to be a hero, you get:
- a ton of wasted work/time
- difficulty distinguishing between work for outcomes & work for visibility
- single-point-of-failure people
"I made 10 variations of this cool mockup" - premature design decisions, stakeholder confusion, probably going to burn out.
"I did this one sketch" - facilitating the right conversations more quickly, level of effort is sustainable indefinitely.
When the expectation is "you should be doing a lot," it's very difficult to see that you are doing the *wrong things*, and you end up spending ever more time prioritizing, planning, and coordinating things that do not matter.
This complexity creates enormous cognitive load for anyone in the organization just trying to do their job. Critical business functions end up being performed as, effectively, somebody's hobby: unrewarded, and competing for "air time" with 10 other things.
So when that person is tired of 80-hour weeks and quits (or "quiet quits" aka does their job) everything breaks down, nobody can tell why, and nobody has the context to pick up the pieces.
Setting heroic expectations as the baseline is lazy management:
- we don't know what's important
- we don't know how much effort it takes
- we don't know how to measure productivity
- so we'll make it YOUR problem instead, good luck
Meanwhile it's much harder to figure out:
- what is the goal
- what is the most efficient path to meet it
- how much effort is required
- whether people have the support they need to move forward
but answering these questions is literally management's job.
The irony is that "people can just step up & take on more work" as your resilience strategy is actually far, FAR more brittle. By normalizing it, you set expectations that everyone should be working at >100% capacity...so when you actually need to flex, you have no flexibility.
Meanwhile in a minimal work culture, everyone is guaranteed to have excess capacity. So when you really have to, you can ask someone to take that work on…
…temporarily
…with reward for doing extra
…only if they self-assess as willing & able to do it
That's resilience.
Before making a decision, ask yourself:
1: what information do we already have?
2: what is our plan for getting the information we still need?
If you try to do 1 without 2, you will waste more time in meetings than it would have taken to just do the research.
In an environment that doesn't support identifying missing information and creating a plan to gather it, you'll always get two camps:
- We don't know enough to commit
- How dare you question the Deep Knowledge of the Experts
The longer these camps argue, the more the stakes of the decision shift from "decision about the product" to "decision about who to favor in the political struggle." So much reputation becomes staked on the outcome that it warps the team's perception of reality.
Bad research is designed to "validate" ideas AKA get a simple "yes" answer to "would you like..." questions. But when you ask someone "would you like a new feature?" they will always say yes.
Hypotheticals give meaningful results only when the decision is associated with a cost.
People are notoriously unreliable when they have to extrapolate their own behavior into the future. When you give them a choice between "have this new thing" and "have nothing" the answer is even more unreliable. Who would say they want nothing when they could get something?
If you *must* ask a hypothetical question, associate it with a cost.
"You can have feature X or feature Y, which one do you like better?" or "You can have feature X in a premium tier that costs $5 more, would you pay?"
Overheard: "the backend is already done, now we just need to skin it" 😬
In my experience, the "backend is already done" project timeline goes something like this:
- system data structure is a complete mismatch with the user's mental model
- spend several release cycles trying to fix the data structure at the UI layer
- end up redoing the backend anyway
This is why the "UX works 2 sprints ahead of dev" system never works. Devs need to know the constraints of the UX (such as the necessary information architecture) before they begin working on any part of the system. "2 sprints ahead" is far, far too late.
The worst approach to a method (JTBD, story mapping, whatever) is studying its steps "by the book."
Study how to bring about the circumstances in which the method will be useful: making its prerequisites achievable and its outcomes desirable. The "how" will flow naturally.
If you only study the "how," you will develop rote expertise: useful only when someone else tells you to "do the thing," and a source of pointless arguments when collaborating with people who learned the method differently.
If you instead learn:
- what conditions (knowledge, prerequisite work, buy-in) are required
- what value the method adds to that situation
then you can decide for yourself when to apply it. Individual differences in execution no longer matter; only the outcomes do.
There's a tendency among legacy companies to get "all their ducks in a row" before trying something new. That is a sure sign that the new thing will fail.
You will never know what you'll need until you need it. Rather than trying to guarantee success, reduce the cost of failure.
You will not achieve transformative outcomes through the same processes that got you here today. If your business-as-usual is to gather stakeholders like a Katamari ball and plan deliverables years into the future, you need to *change your process*—not just outputs—to succeed.
Rather than try to predict every eventuality, build a team and process that can handle the uncertainty:
- single-threaded ownership of the problem
- room to research/experiment
- flexible definition of success (metrics over features)
- one stakeholder sponsor