Thread by Michael Bolton
1) Want to evaluate the relevance of your testing? How much of it is focused on what the designers and builders intended? Now ask how much of it is focused on the intentions, needs, and desires of the *actual* people who use the *actual* product—and the people who support them.
2) One of the seven principles of the context-driven school of software testing is that the product is a solution to a problem, and if the problem isn’t solved, *the product doesn’t work*. (Cem Kaner gets credit for that one; the emphasis is mine.)
3) Checking that the product does something reasonably close to what we—as a development group—intended is a really good idea. Doing that is part of the discipline of software development work; of any kind of product development. Is what we’re building consistent with its design?
4) That the product does what we think it does, or what we hope it does, or what we designed it to do is—to some degree—beside the point. What really matters is the experience of people who use it. Does the product solve problems for *them*?
5) Does the product have problems of its own that get in the way of helping users solve their problems? Does the product introduce new problems on its way to helping solve users’ problems? Answering those questions is critical—and requires *experiencing* the product.
6) Getting experience with the product; interacting with it directly; exploring it; experimenting with it; challenging it; looking for problems that matter to people; all of these require complex, skilled social judgement focused on discovery and insight, not just confirmation.
7) To do this responsibly and efficiently, we must put skilled testers at the centre of testing. We must consider testing as social, analytical, cognitive, investigative work. The common notion of testing as button-finding and button-pushing done by humans or machines must go.
8) Want to create simple checks for a given button launching a given web page, or for a given calculation producing a desired result? Go ahead—although it’s a mystery to me as to why programmers would not insist on the time and resources to do that themselves, as craftspeople do.
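A minimal sketch of what such a check might look like, in pytest. The invoice_total calculation is a hypothetical stand-in for whatever the programmers want to confirm, not anyone's real code:

```python
# A simple confirmatory check: one known input, one prescribed result.
import pytest

def invoice_total(subtotal: float, tax_rate: float) -> float:
    """Stand-in for the real calculation under test."""
    return round(subtotal * (1 + tax_rate), 2)

def test_invoice_total_applies_flat_tax():
    # Checks only that the calculation produces the desired result.
    assert invoice_total(subtotal=100.00, tax_rate=0.13) == pytest.approx(113.00)
```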
9) As a TESTER, though, it’s critical (pun intended) to interact directly with the product from the perspective of people who actually use it. On that basis, we can plan, perform, and evaluate BOTH technical and social analysis of the product AND of our testing.
10) These kinds of interaction and analysis can be powerfully aided by tools. We can observe things that would otherwise be invisible. We can generate masses of input data, and analyze masses of output. We can use tools to visualize coverage of the test space—not just the code.
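One sketch of tool-assisted generation and analysis, using the Hypothesis library against the same hypothetical stand-in calculation: the tool generates masses of inputs, and we check a general property over all of them instead of one prescribed result.

```python
# Sketch: a tool generates masses of input data; we check a general
# property over all of it instead of one hand-picked result.
# Requires the Hypothesis library (pip install hypothesis).
from hypothesis import given, strategies as st

def invoice_total(subtotal: float, tax_rate: float) -> float:
    """Stand-in for the real calculation under test."""
    return round(subtotal * (1 + tax_rate), 2)

@given(
    subtotal=st.floats(min_value=0, max_value=1_000_000),
    tax_rate=st.floats(min_value=0, max_value=1),
)
def test_adding_tax_never_reduces_the_total(subtotal, tax_rate):
    # The property is a stated assumption: tax can't shrink a bill.
    assert invoice_total(subtotal, tax_rate) >= round(subtotal, 2)
```

When the property fails, Hypothesis shrinks the failing input to a minimal example: the tool hands a human something worth investigating.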
11) Build-checking tools can help us make sure that the right versions of the right files are in the right place. This is a good thing. Such tools, though, are aimed at the discipline of the development group, and aimed away from the experience of the users and support groups.
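Such a check can be as humble as comparing file hashes against a manifest. A sketch, with the paths and digests as illustrative placeholders:

```python
# Sketch: confirm the right versions of the right files are in the
# right place by comparing SHA-256 digests against a recorded manifest.
import hashlib
from pathlib import Path

# Illustrative placeholders: expected paths mapped to known-good digests.
MANIFEST = {
    "dist/app.js": "3f5a...",
    "dist/index.html": "9b1c...",
}

def check_build(root: Path) -> list[str]:
    """Return discrepancies; an empty list means the build checks out."""
    problems = []
    for rel, expected in MANIFEST.items():
        f = root / rel
        if not f.exists():
            problems.append(f"missing: {rel}")
        elif hashlib.sha256(f.read_bytes()).hexdigest() != expected:
            problems.append(f"wrong version: {rel}")
    return problems
```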
12) I fear many friends in the testing craft are making this mistake: confusing *checking the build* with *testing the product*. It’s an easy mistake to make, since some problems in the build do lead to some problems in the product. Noticing such problems is important, of course.
13) However, no matter how much testing you've done so far, a checked build is not the end of product development, and it's not the end of the testing that matters most. All that work brings you to a product that is ready for *realization* (pun intended there too).
14) All the way along, you and the development team have been working with ideas of the product; a set of abstract intentions for it. Some of those have been represented in artifacts: requirements documents, designs, specs, sketches, narratives... All of these represent *ideas*.
15) Once you've got a checked build, you're at a point of realization in two senses: first, you've realized your intentions in the sense that you have a real product, not an imaginary one; second, you have an opportunity to realize (that is, recognize) problems with the product.
16) This is crucial, because up to this point, everything that you know about the product is based on theories, intentions, and assumptions that deal with models of it. Those are important, but they're not the same as facts based on experience and experiments with the *actual* product.
17) Remember: no matter how well you think you know the product, *someone will be surprised* by something about it. That's because everyone on the team has a set of ideas about the product; there are lots of artifacts related to those ideas; and then there's the actual product.
18) Ideas, descriptions and artifacts, and the actual product overlap to some degree, and differ to some degree. Moreover, the focus of development work is mostly on building, and on the intentions of the builders, which almost always differ to some degree from users' intentions.
19) It is the work of interactive, experiential testing to find out where the product might be surprising us; where we might have been fooling ourselves; where our beliefs and assumptions about the product—no matter how well grounded—don't fit with users' intentions and desires.
20) It seems to me grossly dismissive to call this "manual testing"—and even worse, to suggest that it could be replaced by "automated testing". This kind of work requires social competence, awareness, empathy, judgement, and engagement. These things can't be mediated away.
21) Testing that people call "manual" is, it seems to me, mostly *experiential*. We don't talk about "manual learning", nor about "manual exploration", "manual experimentation", "manual analysis", "manual thinking". So what part of testing IS manual? DATA ENTRY. SOME data entry.
22) When people observe testers typing, they call it "manual testing". Lots of people do typing, but no one talks about "manual programming", no one refers to "manual journalism", and no one refers to customers using your product "manually". They DO talk about *user experience*.
23) Why do people believe that there's no time for experiential testing? One reason might be overstructure: preparing elaborate, procedurally scripted instructions for testers to follow, as though testers were machines made of flesh. That's not only expensive; it *inhibits* experience.
24) We can have far less expensive, yet far richer testing. Here's how: prepare testers by giving them rich, expansive mental models for covering factors in and around the product. Immerse them in the world, values, and quality criteria of actual users. Train them in analysis.
25) Provide testers with tools AND skills to build tools. Avoid focus on tools that do little more than type text and click buttons; promote focus on tools that accelerate and intensify analysis of the product, the product space, our testing, and our engagement with all these.
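One sketch of that distinction: instead of a program that types and clicks, a few lines that sweep a server log (the log format here is assumed, not any real tool's) and surface slow endpoints for a human tester to investigate:

```python
# Sketch: a tool that intensifies analysis rather than simulating fingers.
# It tallies response times per endpoint from a hypothetical access log
# and flags outliers worth a tester's attention.
import re
from collections import defaultdict
from statistics import mean

# Assumed log line shape: ... "GET /cart HTTP/1.1" 200 ... 742ms
LINE = re.compile(r'"\w+ (?P<path>\S+) HTTP/\S+" \d{3} .*?(?P<ms>\d+)ms')

def slow_endpoints(log_lines, threshold_ms=500):
    """Return endpoints whose mean latency exceeds the threshold."""
    times = defaultdict(list)
    for line in log_lines:
        m = LINE.search(line)
        if m:
            times[m.group("path")].append(int(m.group("ms")))
    return {path: mean(ms) for path, ms in times.items() if mean(ms) > threshold_ms}
```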
26) Train and encourage testers to be skilled observers, analysts, investigators. Focus testing on risk to the business that comes from problems in the product. Avoid thinking of testing in terms of test cases; start thinking of it in terms of investigation, looking for problems.
27) "Wow, that sounds expensive." "Wow, that sounds like it would take a lot of time." It isn't, and it doesn't, when you actually bother to try it with skilled, motivated people. Give people a mission to find problems that would really matter to people, and they'll do that.
28) You could give testers a mission to write programs that type text and click buttons on an interface designed not for machines but for people; and then check some output against some prescribed result. Or you could ask them to experience the product and report on problems.
29) As testers, I believe we are less valuable when checking the build (something builders should reasonably require time and resources for). We're far more valuable when we answer this question: *are there problems that threaten the value of the product to people who matter?*