Testers: feel like there’s too much to test? Start by surveying the product and creating a coverage outline. Next, try quick testing that finds shallow bugs. This highlights product areas and failure patterns that prompt suspicion about the risk of deeper, more systemic problems.
Others on the project may identify bugs and risks. The difference in the testing role is that probing for problems and investigating them is at the *centre* of our work. For everyone else, that’s a part-time job; a distraction; an interruption of the primary work; a side hustle.
Just as people doing development work don't typically do sales and marketing, HR, or accounting, they don’t do deep testing either. That can be totally reasonable; they've got productive work to do. But if there's no testing expertise on the team, expert testing won't happen.
It’s also totally reasonable for testers to do any kind of productive work at any time. But in the moment when testers are doing that work, testing work isn’t happening. Be alert to that. Announce it. Otherwise it’s easy for people to be fooled into believing testers are testing.
Some testers say "But I don’t want people to think 'I'm only doing testing'!" I'd ask "Why not?" If the reply is "Because people don’t value testing work", maybe it’s time for everyone to pause and reflect. If there are problems in a product, isn’t it better to be aware of them?
Do you feel like people are trying to micromanage testing work? A long time ago, @jamesmarcusbach galvanised me by saying "I want to find so many bad problems that managers and developers are spending all their time trying to figure out how to fix them. Then they leave me alone."
You can achieve substantial freedom to captain your own ship of testing work when you consistently bring home the gold to developers and managers. The gold is awareness of problems that make managers say "Ugh… but thank heavens that tester found that problem before we shipped."
You might think you don’t have time to look for little problems that lead you to big problems. "Management wants me to finish all these test cases!" "Management needs me to fix all these automated checks that got out of sync with the product when it changed!" But you have time.
You have *disposable time*—time when management isn’t actually watching what you’re doing. If they were observing closely, they’d be horrified at the time you’re spending on paperwork, or on trying to teach a machine to recognise buttons on a screen, only to push them aimlessly.
If you’re using a small fraction of that time to explore more valuable approaches to find problems, no one will notice on those rare occasions when you’re not successful. But if you are successful, by definition you’ll be accomplishing something valuable and/or impressive.
The trouble with excessively constrained testing is that bugs ignore constraints. So use little bursts of disposable time for spontaneous, quick, unscripted testing; to explore around for new risks; to develop little tools to accelerate your work. developsense.com/blog/2010/01/d…
Testers often plaintively ask "How can I convince management that testing is valuable?" My answer might ruffle some feathers, but it’s true: when you’re not helping the team recognize problems that matter—problems that no one else would find—you’re vulnerable to that question.
But if you ARE finding problems that matter—deep, rare, hidden, subtle, intermittent, emergent, surprising, bone-chilling problems that have got past the review and testing that designers and developers have done—then you won’t need to do much convincing. Your work will do that.
Finding those subtle, startling, deep problems starts with shallow testing that gets deeper over time. Quick, cheap little experiments that find problems point to deeper problems. Quick study of the product space builds your mental models and points to areas for deeper study.
Deep testing is something that developers will rarely do. That’s not because they’re bad people or bad testers. (Developers tend to be pretty fabulous testers with respect to developer-focused stuff and OTHER developers' code.) But deep testing really messes with their workflow.
It’s a very good thing for developers to do quick but not-too-deep testing and review that helps them notice problems near the coal face; to automate output checks that give them fast feedback about undesired changes. But why should testers be recapitulating that work?
Worse yet: why should testers be recapitulating developers' checks while pointing machinery at the machine-unfriendly GUI? That doesn't "save time for exploratory testing". Development and maintenance of GUI checks gobbles up time like a dog eating breakfast.
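A minimal sketch of that speed difference, in pytest style. The module, function, and element names here are hypothetical, invented purely for illustration: a developer-level output check exercises the code directly and runs in milliseconds, while a GUI check of the same behaviour has to drive a browser, takes seconds per run, and breaks whenever the page changes.

```python
# Hypothetical contrast: a fast developer-level output check vs. a slow GUI check
# of the same behaviour. Module, function, and element names are invented.

# Fast feedback: checks the function's output directly; runs in milliseconds.
def test_discount_output():
    from pricing import calculate_discount  # hypothetical module under test
    assert calculate_discount(total=100, coupon="SAVE10") == 90

# Slow, fragile feedback: drives a browser through the GUI to check the same
# thing; takes seconds per run and breaks whenever layout or element IDs change.
def test_discount_via_gui(browser):  # 'browser' assumed to be a Selenium WebDriver fixture
    browser.get("https://example.test/checkout")
    browser.find_element("id", "coupon-field").send_keys("SAVE10")
    browser.find_element("id", "apply-button").click()
    assert "90.00" in browser.find_element("id", "order-total").text
```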
Some shops say "We need GUI checks! Our developers don’t check their work! They don’t have time for that!" Why not? Usually because they’re fixing bugs that might have been found had they taken time for review and quick testing. On this, testers and developers can stand together.
Two big destroyers of deep testing time: 1) Developing and maintaining shallow GUI checks—arguably made worse by intractable "codeless" GUI automation tools that are riddled with limitations and bugs; and 2) investigating and reporting and retesting relatively shallow bugs.
We could instead seek and discover shallow bugs, treat them as clues that point us towards deeper problems, find them, and then report responsibly on how productive independent bursts of experimentation can be. That will earn us more time and freedom to do deep, valuable testing.
