1) Printer won't print because of a "paper jam". There's no paper; there's no jam. Disconnecting the power and reconnecting doesn't clear the jam that isn't there. An elaborate series of moves, plus a restart, does. Printer loses all of its non-factory configuration. Reset all of that.
2) Now the printer starts up fine. Gee, this would be a good time to download and update the firmware. Download complete. Process starts. Note that the machine shouldn't be turned off during the process. Stuff happens, sounds, machinery resetting, etc. Progress bar increments.
3) 90% of the way through the firmware upgrade, the progress bar stops moving. Hmmm, this is taking a while. Check the control touchpad on the printer. Guess what? "Paper jam." No way to clear it or ignore it... so we've got a race condition here.
4) We can anticipate the probable outcome from interrupting a firmware update: a large, printer-shaped brick.

Since I can't imagine that a product manager would say of this, "Cool... this is just what we wanted," I have to assume poor development, testing, and product management.
5) I can't be sure, but I am pretty sure this shit never gets tested; not really. Maybe a suite of confirmatory checks gets run, and the testers determine that the product CAN work. Yes, it can print in portrait, in landscape; yes, it can print plain documents and colourful ones.
6) But I bet the organization desperately wants to leave things at that, because of various perverse incentives associated with quick releases. The PM gets the bonus and moves to a new project or position before the product spends much time in the field. Support gets to clean up.
7) The crazy thing is that we've had consumer-grade printers for at least 40 years. The development environments, libraries, and operating systems provide a ton more support than they used to. Much programming has been abstracted away. This stuff shouldn't be this hard.
8) But maybe it IS really hard, and there are subtle and emergent problems when the product meets the real world. If so, instead of merely confirming that the product CAN do something, it would be a really good idea for testers to try to USE THE DAMNED THING.
9) When I look around, I see lots of fascination with building the software, and building the infrastructure to check the software efficiently, supposedly so "there's more time for exploratory testing". Trouble is, building product and infrastructure takes time. Lots of time.
10) That building work always encroaches on time that could be spent interacting directly with the product. That's fine, as long as testers don't spend so much time on building work that it drowns out the opportunity for testing work in which we obtain experience of the product.
11) I believe that this, at least in part, explains those moments that we *all* experience in which we wonder "Did anyone actually try to USE this thing? As though they were trying to get work done with it? Such that they could notice these obvious problems?"
12) "Maybe somebody did some confirmatory checks to see if the developers missed an obvious bug. Did anyone bother to ask if the customer's problem has been solved? Did anyone test this, not *manually*, but *experientially*?"
13) The testing craft needs to learn to speak more clearly about its work. "Manual" testing doesn't cover the territory of what's actually happening when we're interacting with a product. The *hands* aren't the important part; the *experience* is. We're doing *experiential testing*.
14) Experience both includes and requires interaction, observation, immersion. It involves entering the user's form of life to some degree. Experiences trigger feelings relevant to people's perceptions of quality. They provide material for reflection, discussion, and analysis.
15) Experience affords the opportunity to learn about the product and test it more deeply. Experience easily gets diluted when it is mediated—that is, when something (like a test tool) gets in between the product and the tester. Tools can help us amplify *aspects* of experience.
16) A medium is something between something and something else. Radar can help us sit in a lab and anticipate the oncoming blizzard. Microscopes can help us understand the nature of snow. Thermometers can tell us about the temperature. That stuff is great, and super powerful.
17) Radar and microscopes and thermometers are handy, available, already made. But there's something they don't tell us: they don't tell us what it's like to be out in the blizzard. They don't convey the feel of the wind. We don't learn from microscopes how to drive in snow.
18) My colleague @jamesmarcusbach describes overfocus on automated checking as leading to a vitamin deficiency. I'm coming to see the problem as outright malnutrition; like there's a whole food group missing. If you're not experiencing the product, testing is malnourished.
19) Testers often complain that interacting with the product directly is slow and frustrating. Hey, guess what: you've probably found a bug. If you find the product to be slow and frustrating when you only have to deal with it for a few days or weeks, what will customers think?
20) Finding problems that matter often feels somewhat less than productive. It takes time to learn about the product and problems in it and deepen your experience with it—especially when you have *some* experience with it already. There's a good chance bad bugs are deeper still.
21) The process of gaining experience with a product is largely *invisible*. Visibility is relatively easy for programmers: they produce code. Experiencing a product through direct interaction doesn't look like much. To an observer, it's just fingers on mice and keyboards.
22) Experiential testing happens in our minds and feelings, in interaction with our mental models, over time. Calling it "manual testing" leaves out *everything* important, drawing attention only to the input method—often also dismissing the possibility of using helpful tools.
23) That is why, dear testers, I implore you to stop talking about "manual testing", and start talking about *experiential* testing. "Have you done manual tests?" "I'd say that we've done enough experiential testing of the product to suspect more deeply buried risk."
24) "Do you mean exploratory testing?" "In a sense, yes; all testing is exploratory. We always need to explore to some degree; with experimentation, exploration is part of gaining experience with the product, and finding deeply hidden, rare, subtle, emergent, elusive problems."
25) We're not simply selling bits, files, or CDs. It's like the old joke about the customer wanting a quarter-inch drill bit, to which some say the customer really wants a quarter-inch hole. I say the customer wants to hang the danged picture up so it looks nice in her living room.
26) Or, as David Platt memorably put it, customers don't want to use your software; they want to *have used* your software. There's the rub: whatever the designers' and programmers' and BAs' intentions might be, they're not necessarily the same as the customers' desires.
27) It can be a Good Thing to check to see if there are relatively easy-to-find functional bugs, and to help the developers do that. But it shouldn't displace a more significant goal: recognizing problems that frustrate the *customers'* needs and desires about the product.
28) We can identify problems as we're developing and refining our ideas and intentions about the product. But we're not going to deliver intentions; we're going to deliver the *actual* product to the customer. And to recognize problems in it, we need experiential testing. -fin-