0) Gee, that was an okay thread. What happened to it? Well, the first sentence contained a totally dopey error AND I screwed up the tweet numbers. So...let's start over. (If ONLY there were a tester helping me to review this stuff and spot problems BEFORE I put it in production.)
1) Testers: managers ask for numbers and metrics when (A) they don't know how to talk about test coverage and evaluate it AND (B) testers don't either. Since that's a logical AND, there are a couple of things we testers can do to address the problem and thereby eliminate it.
2) The trouble is that coverage consists of a bunch of factors, most of which are not usefully quantifiable. But you CAN discuss them. When managers ask for numbers, try offering *descriptions*. Offer lists or outlines. Offer *evidence*. Offer assessments based on that evidence.
3) Stuck on how to model and describe coverage? Think of it as different angles from which you might observe and interact with the product. Consider SFDIPOT. Think "San Francisco Depot" to keep it in your head (despite the misspelling: San Francisco DIPOT). satisfice.com/tools/htsm.pdf…
4) You can get traction on the idea of *test coverage* by thinking about it like this: *how much testing have we done with respect to some model of the product*. Functional coverage: how much testing have we done with respect to some model of the product's functions?
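To make "testing with respect to a model" concrete, you can keep the model itself as data and annotate it as you test. Here's a minimal sketch in Python; the function names and coverage notes are invented for illustration, not anyone's real product.

```python
# A minimal sketch of a functional coverage model: list the product's
# functions (the model), then record how much testing each has received,
# in words rather than a single number. All names here are invented.

functional_model = {
    "create account": "deep: happy path, bad input, and concurrent signups tested",
    "search": "shallow: happy path only",
    "export report": "none: not touched yet",
}

# Reading the model back with its annotations invites questions
# ("why is export untested?") instead of hiding them behind a percentage.
for function, status in functional_model.items():
    print(f"{function}: {status}")
```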
5) Code coverage: how much testing have we done with respect to some model of the code? (And to say that we've executed every line of it is only one way of modeling the code. After all, the product consists of our code PLUS framework, browser, OS code that we haven't tested.)
6) It's OK to model code like that — unless you want to test Linux and Apache and Chrome and MacOS too. Mind, you might also extend your model for code coverage to covering branches in your code, or paths. The point is, at best, you're always modeling it.
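To see why "we executed every line" is only one model of the code, here's a minimal sketch; the function and test are invented for illustration. The single test below touches every line, yet a whole branch of behaviour goes untested.

```python
# Hypothetical function and test, invented for illustration.

def price_with_discount(amount, is_member):
    price = amount
    if is_member:
        price = amount * 0.9  # member discount
    return price

def test_member_discount():
    # This one test executes every line of price_with_discount,
    # so line coverage of that function reads as complete...
    assert price_with_discount(100, True) == 90.0

# ...but the branch where is_member is False is never taken, so branch
# coverage is incomplete, and a bug on the non-member path would slip by.
```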
7) Risk coverage: how much testing have we done with respect to some model of risks? Modeling risk is inevitable too — unless you want to claim that you've identified every conceivable risk, in every conceivable circumstance. Phhhpt! No you haven't... and that's OK!
8) Structural coverage: how much testing have we done with respect to some model of the product's structure? Draw a diagram of the parts, and how they interconnect. How well have we covered that diagram with testing? Oooh! — and what might the diagram be leaving out?
9) Operational coverage: how much testing have we done with respect to models of the way people use the product, and with respect to models of the conditions under which it will be used? There are other elements in SFDIPOT; working through them is an exercise for the reader.
10) I'm pretty sure that you, as a tester, can see how much richer and more descriptive this is than counting freakin' test cases. And with practice, you too can describe it and map it and list it and visualize it. And you can even apply numbers where appropriate.
11) Testers keep saying "but managers want numbers!" Well, sure... and you'd want bread and water and gruel every day, too, if no one had ever given you a delicious, nutritious, well-prepared meal. A story about your testing will sate your manager's appetite for information.
12) Just remember that numbers are *illustrations* of the story. They are not the story in and of themselves. Describe your work, and use numbers when they aid the description, but not when they have the side effect of hiding important information.
13) "We've done 34 sessions of testing this week, each one about 90 minutes, give or take half an hour. Most of that has been focused on covering this part of the product's structure and these operations, using stress testing to look for reliability problems. Here are the bugs."
14) "And here's a mind map that shows what we've covered. See? Those green nodes are the ones that have received a lot of attention and deep testing this week. These yellow ones not so much. These red ones... essentially no coverage at all yet. How do we feel about that?"
15) "Also, here's a table of performance data that we were able to obtain from the log files (hey, thanks for the testability, developers!). You'll notice that we're seeing a lot of inconsistency in these three functions. Some weird buffer limits, or exception handling, maybe?"
16) Coverage (and what we haven't covered) is right at the centre of the three-part testing story. developsense.com/blog/2018/02/h… Learn to tell THAT, through study and practice, and I bet your manager won't be asking for absurd numbers (like pass/fail test case ratios) so often.
17) And now a word from our sponsor. Managers: want better, more actionable information about your product and project? Testers: want help in learning how to tell the testing story? Feel free to get in touch: michael@developsense.com.