1) Why have testers? Because in some contexts, in order to see certain problems, we need a perspective different from that of the builders—but we also need a perspective different from that of users. First, some people are neither builders nor users. Some are ops or support folk.
2) Others are trainers or documenters. Some people are affected by users, or manage users, but are not themselves users. Some are users, but forgotten users.

Another reason it might be important to have testers: everyone mentioned so far is focused mostly on *success*.
3) A key attribute of the skilled and dedicated tester is a focus on risk, problems, bugs, and the possibility of failure. Most builders can do that to a significant and valuable degree, but the mental gear shifting isn’t automatic; it requires skillful use of the mental clutch.
4) You *know* this. You know how easy it is for someone else to see things—good and bad—about your work that you haven’t noticed. And if you’ve worked with a group of creators, you know how easy it is for that certain one of you to think radically differently from the rest.
5) Being a dedicated tester inside a development group is sometimes hard. There’s a tension between social closeness (we ARE all on the same team) and social distance (we need diversity, which by definition requires some degree of distance and some valuable differences).
6) To be a good tester requires critical proximity to the builders (technical skill can be super valuable) but also critical distance (our customers do not necessarily have the same kinds or degrees or domains of technical skill). Meanwhile...
7) To be an excellent tester in another sense requires proximity to the clients and customers—and not just the utterance or claim “I speak for the customer”. Developers often have more customer domain savvy than testers do. Testers must seriously strive to cultivate that too.
8) Sometimes that’s hard. Sometimes management isn’t supportive of testers immersing themselves in the world of the customer. Sometimes management, passively or actively, erects or maintains obstacles to testers connecting with the customers’ world. So we need determination.
9) The designers and developers and managers AND customers are mostly envisioning success. They enact the essential, fundamentally optimistic task of solving problems for people, which requires believing that those problems can be solved, and building those solutions.
10) Developers act as agents between the world of humans and the world of machines. This is wonderful.

Here’s the socially hard part for serious testers: we must focus on acting as agents between the world of technological solutions and the world of *skepticism* and *doubt*.
11) Skepticism is not the rejection of belief; it’s the rejection of certainty about belief. It is our job as testers to remain professionally and responsibly uncertain that there are no problems, even when everyone around us is sure there are no problems.
12) It is our job as testers to remember that when the magical AI can classify things at a high degree of accuracy, there remains some degree of inaccuracy—and that inaccuracy can have real consequences for real people. People whom we haven’t met; who may not look or sound like us.
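To keep that concrete, here’s a minimal sketch with made-up numbers (nothing from this thread; the population and accuracy figures are purely hypothetical): even an impressively accurate classifier, applied to millions of people, still gets thousands of them wrong.

# Illustrative only: expected number of people a classifier misclassifies.
# The population and accuracy values below are invented for the sake of the point.

def misclassified(population: int, accuracy: float) -> int:
    """Expected count of people on the wrong side of the classifier."""
    return round(population * (1.0 - accuracy))

population = 10_000_000  # hypothetical number of people the system touches
for accuracy in (0.99, 0.999, 0.9999):
    print(f"{accuracy:.2%} accurate -> ~{misclassified(population, accuracy):,} people misclassified")

Run it and 99.99% accuracy still leaves about a thousand people misclassified; at 99%, about a hundred thousand.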
13) It is our job as testers to remember that “the user” or “the customer” is a *monumental* abstraction. There are users, and customers, and others affected by software who neither buy it nor use it. All of those are individuals, with needs and desires and obligations to others.
14) It is our job as testers to remember that the intentions and desires of builders are significant and important. And that they’re not all there is to it. Misinterpretations and errors can elude even the smartest people, and the most diligent and disciplined development processes.
15) It’s our job as testers to note that it’s okay to be checking the build for errors, but on a disciplined team, checking the build must be a primary responsibility of builders. We can help with that, but doing so comes with opportunity cost: less time for testing deeply.
16) Of course, maybe our products are low risk. Maybe our product doesn’t have serious consequences; doesn’t affect money, human health and safety, the environment, or social relationships. Maybe our product can’t mislead or fool or rip off anyone, intentionally or accidentally.
17) But if there’s a risk that it can, maybe we need to develop the skills to find those problems well. Maybe, even though everyone can develop those skills to some degree, we need people who professionally inhabit a disciplined, critical mindset, like Ignaz Semmelweis, Ivan Illich, or Socrates.
18) Maybe we need people who study, in depth and detail, how we can fool ourselves, as individuals and groups. Maybe we need people who examine, experience, explore, and experiment with the product from that perspective, on social, technical, and domain levels as a speciality.
19) Why as a speciality? Because when something is not someone’s focus, it will not be someone’s core competence. It will not be someone’s responsibility. It will not be someone’s focused commitment. It will not be someone’s aspiration. It will be a part-time job; a side hustle.
20) There’s an important message here for testers. Our role is suffering from benign neglect from some quarters; from active attack from others. We can’t rest on our laurels here. For one thing, in many places, we’re not necessarily earning many laurels to rest upon.
21) It’s good to be helpful to the team by checking for problems that are near the surface, near the coal face. It’s good to develop technical skills to help with that. But we must also alert our teams to the fact that deeper, subtler, worse problems won’t all yield to that.
22) To find those deeper problems means challenging the product with complex testing: investigating for problems, not just confirming that everything seems okay. It requires effort, determination, and negotiation; deepening our skills, our craft, and our testing.

—fin—
