1) Thinking about counting things to measure quality? You might be able to measure *some things* *that bear on* quality. By contrast, you can’t measure quality itself (as @jamesmarcusbach has said), but you can discuss it.

Consider this: s/how many/let’s talk about each/g /
2) When you suggest "let’s talk about each bug", you might hear (or think) "No way! We have too many bugs to talk about each one! Let’s just count them instead!" If so, you can already infer some crucial things about the product and project, with no need to bother counting. /
3) Of course, those inferences are only inferences, not facts. So investigate. When you do, you might be tempted to start counting bugs. But you'll probably want to make sure that your count is appropriately accurate, precise, valid, reliable... So you need to examine each one. /
4) Examining and evaluating each bug sounds like a pain. It is, to a degree. Few people like washing or repairing dirty linen in public. Yet a bug is not just a problem; it’s also an opportunity to learn some things. When you count instead of study, you lose that opportunity. /
5) I love studying bugs. When I study bugs, I can become aware of certain things that go wrong, and some of those things get embedded into tacit knowledge. I can apply that knowledge, maybe consciously, maybe sub-, while testing, pairing with a developer, or coding myself. /
6) In Rapid Software Testing, we suggest this: when someone asks for a number or a measurement, avoid misleading them by giving them a scalar. Consider offering a description, an assessment, a report, or a list. If you can describe and summarize, you might not NEED a number. /
7) When you *are* offering a number, it had better be a valid number. When you count items, each item being counted had better be /commensurate/. That is, you must know the difference between "one of these" and "NOT one of these". You must know how to count to one. /
8) For a count to make sense, items must be commensurate—of describable size, weight, duration, significance, value, etc. etc., on a scale that people agree upon, accept, *and understand*. Otherwise communication will go pear-shaped in no time. /
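(To make the commensurability point concrete, here is a minimal sketch in Python. The bug records and the "impact" labels are hypothetical, invented purely for illustration; they come from no real project. The point it shows: a raw count treats a cosmetic typo and a data-loss bug as interchangeable "ones", while even a crude listing preserves the distinctions a conversation needs.)

```python
# Hypothetical bug records, invented for illustration only.
bugs = [
    {"id": "B-101", "summary": "typo in tooltip",          "impact": "cosmetic"},
    {"id": "B-102", "summary": "report export loses rows", "impact": "data loss"},
    {"id": "B-103", "summary": "login fails on retry",     "impact": "blocks users"},
]

# The scalar: true, but nearly information-free.
print(f"bug count: {len(bugs)}")  # -> bug count: 3

# A description: each item named, so a discussion can start.
for bug in bugs:
    print(f"{bug['id']}: {bug['summary']} ({bug['impact']})")
```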
9) If you're going to go seriously about the business of getting a *valid* count, you'll need to examine every bug. To do good analysis work, there's no getting out of that. The same general principle applies to counting test cases, or "defect escapes", or "invalid bug reports". All of them. /
10) "But management wants numbers!" I doubt that. Management almost certainly wants *to know things*—and from testers, knowledge about the status of the product and problems that threaten its value. Numbers might help to illustrate a story. They don't, can't TELL it. Words can. /
11) Don't be cowed into giving numbers without context. When asked for them, consider replying "misleading you is not a service that I offer," and immediately offering a summarized, meaningful description of the state of factors that matter to people who are important. /
12) All this applies to reports about the status or quality of the product, of the testing, of the project. And it applies to the work of individual testers, too. As an alternative to *measuring* something, analyze it, describe it, assess it, discuss it. Don't just keep score. /
13) How might we evaluate a tester's work? Here's an example set of elements of excellent testing: satisfice.com/download/eleme…. It may not be complete, comprehensive, or tailored to your context. If it isn't, revise it; fix it to fit. /
14) Evaluating testers' work? Go through the list and ask "are we happy with the tester's work with respect to this element?" If Yes, great. If it's outstanding, consider analyzing and then sharing that tester's approaches with others; point out positive deviance from norms. /
15) Unhappy with some element of the tester's work? Talk about it. Discuss it. Maybe the tester needs to improve it through focus and deliberate practice; maybe the tester needs pairing and collaboration; or maybe others on the team can handle that element just fine. /
16) As testers, we (supposedly) specialize in evaluating the quality of things via interaction, observation, experience with them. We consider quality criteria: capability, reliability, usability, charisma, security, scalability, compatibility, performance,... /
17) People aren't products, of course. And there are patterns common to evaluating the quality of anything: factors that make people happy or bring them value, or that in their absence trigger disappointment, loss, harm, or diminished value. But "Capability: 6" tells us little. /
18) I was a program manager for a best-selling product. I would never have conceived of shipping a product (or not) by reading a scoring table. I didn't care about metrics, test case counts, or bug counts. I needed relevant, concise stories about testing and bugs. /
19) So: avoid agonizing about "measuring quality". Consider instead learning to tell the product story, the testing story, and the quality-of-testing story. Talk about what's OK, and move quickly to problems that threaten the product or project. developsense.com/blog/2018/02/h…
Postscript to this thread: in the middle of my writing it, the Twitter client on my iPad got into a state where it was accepting additions to the thread, but when it came time to send them out, the "Tweet All" button was greyed out. Anticipating a problem, I took screen shots. /
Predictably, the active "Cancel" button DID work, and the text was all lost. But, thanks to screen shots, for once I had a backup and was able to recover my work. It took time, but at least I could do it.

A user in this position doesn't care about bug COUNTS. Only about the bug.