1) Documentation people and testers are like admins and secretaries. Companies came to think of them as an extravagance, and got rid of them in the name of "efficiency". But this is a completely bogus efficiency, because the cost of NOT having them is largely invisible.
2) Everybody is now swamped with doing the work that support people used to do, but that's invisible even to the people who are now performing that work. It's "just the way things (don't) get done around here". I notice this as I'm programming; most API documentation *sucks*.
3) Of course, when I'm programming (even for myself), my job is to make trouble go away; to get the problem solved. When something gets in the way of that, I'm disinclined to talk about it. "I'm a problem-solver!" So I'll buckle down and push on through. Gotta get it done!
4) When something finally does get done, my pain of having had to deal with problems dissipates, because "Hey, it works!" And who wants to talk about the problems any more? Plus, there's another issue: the details of those problems are BORING to anyone who isn't, or wasn't, me.
5) The problems are even pretty boring to me. And I notice this, because... I've been logging them. In retrospect, they're not *exciting*. But there is something kind of exciting about logging them: I find out how much time feels wasted because people aren't documenting stuff.
6) Working programmers and testers don't have the time to get their work done AND to record what took time, or what made things harder or slower. Plus they don't want to record obstacles, because all the *capable* people *never* run into obstacles. (Well, they don't *say* they do.)
7) So what we have is this feedback loop: everything is horribly inefficient, but people are not prepared to say so, because admitting inefficiency allows others to suspect some degree of incompetence, AND logging problems gets in the way of efficiency! This becomes the Secret Life of Tech.
8) Any kind of "finished" project is a ship in a bottle. No one sees the effort that went into it; no one sees the spilled paint or snapped hinges or tangled string or broken glass. (h/t Simon Shaffer and Harry Collins for those images). PLUS: the ship is delicate and fragile.
9) People look at the ship in the bottle, wonder for a moment how it got in there, and then shrug. There is *no way* for them to see the effort that went in, and they're also disinclined (for reasonable social reasons) to find out just how fragile the ship is. Don't poke it!
10) Part of the antidote to that, for testers, is to help our clients become aware of aspects of the Secret Life. The testing story cannot and must not be about the number of scripted tests that are passing, or the number of automated checks that are running. It's GOT to be more.
11) The testing story has to be about the actual, real status of the product, warts and all. Testers *must* focus on problems, because no one else wants to do that — but bad problems threaten the value of the product. Better to experience them and learn about them in the lab.
12) But for that story to have a warrant, we must tell a story about how we tested, and how we observed problems. But we must also tell a story about important testing not yet done, and what we won't test at all unless things change. Important untested stuff is where risk lurks.
13) So those strands in the larger testing story need to get wrapped around another strand: what made testing harder, slower, less efficient, less valuable—and what we need or recommend to make some aspects of testing faster, easier, more efficient, or even unnecessary.
14) Wait, what? When would testing be unnecessary? Testing is much less necessary when you know THIS well enough that more testing probably won't reveal more problems — which means you can now test THAT, or FOR THAT; places or conditions where deeper or more subtle problems lurk.
15) It seems that many testers are being pressured to Get Stuff Done with goals that don't help very much: showing that everything is okay. Even better if we can WRITE PROGRAMS to show that everything is okay—which means a big investment in time and effort to miss the point.
16) Testing and running automated checks never show that the product *works*. They can only show that the product CAN work. That's not the point of testing. We test to find out where the product DOESN'T work, and where it MIGHT NOT work, so that those problems can get addressed.
17) What's worse, the bulk of the time and effort needed to demonstrate that the product CAN work gets swept under the rug because of things mentioned earlier. What's the matter with the testers? They take all that time, and can't even write a program to show our product works!
18) There are LOTS of possible reasons for that. One, of course, is that the product doesn't work, which really messes with writing a program to show that it works. Another is that the product is not understood comprehensively by anyone; the product is like Rashomon; fragmented.
19) Even when there is a reasonably coherent product and story—clean, shared mental models, good access to its elements—it takes time and effort to learn a product well enough to do deep testing on it. Just like the invisible work of admins, that's the invisible work of testers.
20) So one important part of testing, little discussed by the certification mills, is the complex social, political, and economic task of making invisible testing work more visible and legible to our clients, to keep them informed about risk. developsense.com/blog/2018/02/h…
21) Another cool service that testers *could* provide is either informing or outright producing documentation about the product. In a sense, skilled testers do (or can and should be doing) that anyway; that's the Honest Manual Writer Heuristic. (developsense.com/blog/2016/05/t…).
22) When someone asks "So why have testers anyway?" a typical answer, provided hesitantly and with a rising tone, is "To... provide... information?" Sure, but what kind? Let's be specific: valuable information about the actual status of the product, what it does, and problems in it.
23) The "what it does" and "problems in it" part could be really valuable, because companies and groups (and open source projects!) are often oblivious to THEIR clients' needs to know about how the product works. This is utterly clear from looking at most current documentation.
24) Testing is fundamentally learning about the product, thinking critically about it, exploring it, experimenting with it, *experiencing* it, and reporting on those experiences—especially the painful parts—to help companies become aware of the pain they might inflict on clients.
25) Some of that learning work can be aided by machinery, but applying tools to testing in a powerful way requires us to learn about and study the product deeply. That learning process requires us to have *and develop* good models of the product AND of testing itself.
26) If you're still with me so far, a word from our sponsor: Rapid Software Testing is all about this stuff. Please spread the word: RST Explored runs April 12-15 in time zones friendly to Europe, the UK, the Middle East, and India. eventbrite.ca/e/rapid-softwa…

