A common way of working: 1) PMs and designers focus on the "next thing" 2) Developers work on the "current project"
What's wrong with this?
Outcomes suffer, even if it feels more efficient.
Why ...?
2/n While seemingly more efficient, it causes problems:
1) information loss 2) "resetting costs" during knowledge transfer 3) distances developers from "the problem" 4) higher work in progress (WIP), less flow 5) split focus for PMs/designers
So...
3/n When teams think about starting together, they get a little paralyzed: they can't imagine everyone focusing on research/discovery at once
You have...
A: The status quo
B: How they imagine starting together
C: How it happens in practice
4/n The important thing about starting together is that the team kicks off the effort together and really "opens the door" together. And sets working agreements.
At that point, they figure out what will work for the effort. It often looks like this:
But there's one huge trap that I see teams fall into.
Start with the Why, not the Way
Visualizing work is not the goal. ___ is the goal.
What do I mean?
1/n: Imagine if you emptied out all of your messy drawers just for fun. Well..
2/n: You would have succeeded in making a big mess and reminding yourself how messy you are, and how much you like collecting old subway cards, but you wouldn't have really achieved anything.
Now say you....
3/n: Started by committing to a powerful mission: making it easier to find things, because right now you spend valuable time every day checking multiple drawers.
Or committed to a more public display of keepsakes, and to caring for your things better?
I was asked recently how I would go about "benchmarking self-service analytics performance".
Some thoughts:
1/n: You can't *just* look at the experience of the end-user. There are many humans involved in making self-service work. Their experience matters.
2/n: A great example is telemetry/instrumentation.
Someone has to figure out how to capture that data. What is the developer's instrumentation experience? What is the experience of deciding which events to track? How fragile is the process?
That's part of performance.
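As a sketch of what a less fragile process can look like: encode the tracking plan as types, so "which events do we track" is a reviewed code change rather than a spreadsheet and a telephone game. Everything here (the event names, properties, and track wrapper) is hypothetical, not any particular SDK:

```typescript
// Hypothetical tracking plan encoded as a TypeScript union.
// Each agreed-upon event is a named shape, so drift between the
// plan and the instrumentation is a compile error, not bad data.
type AnalyticsEvent =
  | { name: "Report Created"; properties: { reportType: string; source: "blank" | "template" } }
  | { name: "Report Shared"; properties: { reportId: string; channel: "link" | "email" } };

// One wrapper owns the vendor call; instrumenting developers never
// hand-type raw event-name strings at call sites.
function track(event: AnalyticsEvent): void {
  console.log(`[analytics] ${event.name}`, event.properties); // forward to your SDK here
}

// A typo'd name or a missing property now fails the build instead
// of silently punching a hole in the data.
track({ name: "Report Created", properties: { reportType: "funnel", source: "template" } });
```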
3/n: You can't measure performance by focusing solely on access to the data or insights. Or even the timeliness of the data.
At the end of the day the goal is better decisions and business outcomes.
Stop trying to optimize for developers "being busy"
...prepping work to give them
...doing small group discovery upstream from their work
..."topping up" every sprint (or quarter, or whatever)
The hardest part? ... 1/n
...often the push to keep people "loaded up" comes from the engineering org itself. People want to feel useful. People want to have something to do.
Output is rewarded.
If there is "no work", that is the product manager's fault. People get grumpy.
So how do you address this? 2/n
For many teams, it may mean having a list of small things people are passionate about. Things that preserve optionality, and that are relatively quick and low risk.
When you need slack, you can draw from this list.
This helps the people who don't care for doing discovery.
3/n Making instrumentation/telemetry just "how we work" vs. some bolt-on process, telephone game, or as-a-PM-I-can-see user-story craziness.
I see this with super successful customers at @Amplitude_HQ. It is just how they work.
How do you do it? 1/n
First, you have to get the people who 1) understand the decision domain, 2) understand the customer domain, 3) understand the product surface area, and 4) understand "data" ... in one place.
Sometimes you get lucky and that is one or two people. Sometimes not.
Gotta do it 2/n
Usable data starts with domain knowledge!
It is so tempting to just instrument abstract clicks or page views and hope to clean it up later.
But instrumenting clean, understandable, domain-relevant events from the start is SO MUCH BETTER.
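To make the contrast concrete, here's a minimal sketch (using @Amplitude_HQ's analytics-browser SDK since it came up above; the event and property names are invented):

```typescript
import * as amplitude from "@amplitude/analytics-browser";

amplitude.init("YOUR_API_KEY");

// Tempting: an abstract click that someone has to decode later.
amplitude.track("Click", { elementId: "btn-347" });

// So much better: a clean, domain-relevant event, named in the
// language of the decision it supports.
amplitude.track("Subscription Upgraded", {
  fromPlan: "starter",
  toPlan: "growth",
  seatCount: 12,
});
```

Six months from now, "Subscription Upgraded" still means something to the people making decisions. A "Click" on "btn-347" does not.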