Not sure who first decided to model design as a linear process (research → design → test → build), but it's created an expectation among stakeholders that this is how it's supposed to go - each step must be done Exactly Once, and if you have to loop back, you messed up.
🧵
This could not be further from my experience. I've yet to see a design effort that could confidently say "our testing bore out every hypothesis, no changes, send it to the dev team."
And yet timelines and expectations keep being set as though this is the norm.
🧵
Part of the problem is the secret missing first step in the design process - "stakeholder assumptions."
Stakeholders will assume that their mental model of the world is the correct one, and therefore if the research, design, or testing did not conform to it - You Did It Wrong.
🧵
Savvier stakeholders understand this, and will stymie research and testing efforts because they know that their assumptions will not be borne out. They are "comfortable with ambiguity" because they know that the clarity that emerges out of it will not be the one they like.
🧵
Being the "champion of the user" is not enough to handle this issue. Designers must understand that much software is built *for the product team* first & foremost, and for the user second (or third after Sales) and develop their toolkits accordingly.
🧵
"We need to build X" is not a user-centered statement. There are many factors involved such as the high internal/external profile of working on a "sexy problem" or the resources that come with a problem (and the power from being a leader with lots of resources)
🧵
When an exec has already decided that this product will be the cornerstone of their empire-building, or a PM has already promised a bunch of features to Sales, or Engineering wants to adopt resume-driven development - research is not part of that conversation.
🧵
Truly "being comfortable with ambiguity" requires 3 things to be true:
- establishing clear criteria for what we need to move forward (not just deadline or outputs)
- creating processes that can be expected to fulfill that criteria
- giving those processes the space to get there
Usually you end up with the reverse:
- there's no clear established "definition of good"
- the goal is entirely output- and time-based (have wireframes ready by the end of this sprint)
- there's no opportunity for iteration ("be wrong on your own time")
🧵
The irony is that doing it the right way gets project managers what they want!! A clear Definition of Good and an established process help estimate the effort and reduce wasted time.
"Test by shipping" and "I'll know it when I see it" are the most time-consuming strategies.
🧵
Thinking back to my education, we were taught to try out quick sketches of many different ideas and gradually winnow them down. I'm realizing that most other professions aren't trained to think that way, which makes it hard for them to understand the value of the design process.
There is much less pressure to defend any one idea when you developed 20 other ideas. But when you went ahead with the first thing that worked, it's natural to feel like you have to fight any critique tooth and nail - so even if the idea was good, it'll never get better.
I've always been a fan of everyone learning some design skills. But early in my career I thought that the important skills were fonts and layouts. Now I believe that they are:
- structured ideation
- iterative process
- not taking feedback as a personal attack
Some things to check if you are joining a company as an early design hire & expecting to build design culture:
- does the org recognize that this will be extra work on top of delivering outputs?
- does it recognize the value that work and commit to support it?
Commitment to support looks like:
- Budget, tools, and other resources
- Invitation to rebuild processes across the org
Commitment to support does not look like:
- "Sure, if you can submit a proposal proving the ROI"
- "We actually hired an Agile coach to do that already"
Can an IC be a leader and build culture? Yes, absolutely.
Can an IC do this without existing executive buy-in, if they were only hired to produce outputs in support of existing processes? Yes, but it's a miserable slog and you can find a better place in the industry.
Engagement metrics: When you don't know why people use your product, but want to make up a metric to measure your PMs on because you read an article about how numbers are objective
Ostensibly, customers want to use your product to fill a need, then put it down. But engagement doesn't measure whether the need gets filled. In fact, it measures the opposite - time spent using the tool while the need goes unfilled (what @jmspool calls tool time): articles.uie.com/dividing-user-…
Setting a metric to track implicitly or explicitly drives the team to improve that metric. By selecting Engagement as a key metric, you run the risk of telling your teams "we want you to add more things to click, and delay the user reaching their goal for as long as possible."