No one ever sits in front of a computer and accidentally compiles a working program, so people know (intuitively, correctly) that programming must be hard. Almost anyone can sit in front of a computer and stumble over bugs, so they believe (incorrectly) that testing must be easy!
There is a myth that if everyone is of good will and tries really, really hard, then everything will turn out all right, and we don't need to look for deep, hidden, rare, subtle, intermittent, emergent problems. That is, to put it mildly, a very optimistic approach to risk.
The trouble is that to produce a novel, complex product, you need an enormous amount of optimism; a can-do attitude. But (@FionaCCharles quoting Tom DeMarco here, IIRC) in a can-do environment, risk management is criminalized. I'd go further: risk acknowledgement is too.
@FionaCCharles There's a terrific documentary, "General Magic", about the eponymous development shop that in the early 1990s (!!) was working on a device that—in terms of capability, design, and ambition—was virtually indistinguishable from the iPhone 15 years later. It's well worth watching.
@FionaCCharles "There was a fearlessness and a sense of correctness; no questioning of 'Could I be wrong?'. None. ... that's what you need to break out of Earth's gravity. You need an enormous amount of momentum ... that comes from suppressing introspection about the possibility of failure."
That's from Marc Porat, the project's leader, much more recently, talking about why it flamed out without ever getting anywhere near the launchpad. And that persists all over software development, to this day: systematic resistance to thinking critically about problems and risk.
That resistance plays out in many false ideas: that the only kind of bugs are coding errors; that the only thing that matters is meeting the builders' intentions for the product; that we can find all the important problems by writing mechanistic checks of the build. And more.
Another is the unhelpful division of testing into "manual" and "automated", where no other aspect of software development (or indeed of any human social, cognitive, intellectual, critical, analytical, or investigative work) is divided that way. There are no "manual programmers".
Testing cannot be automated. Period. Certain tasks within and around testing can benefit A LOT from tools, but having machinery punch virtual keys and compare product output to specified output is no more "automated testing" than spell-checking is "automated editing". Enough.
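To make "check" concrete, here's a minimal sketch of the kind of mechanistic comparison I mean (my illustration, not anyone's real suite; the "./hello" program and its specified output are hypothetical stand-ins):

```python
# The machine "punches keys" (runs the product) and compares the output
# to the specified output. That comparison is the automatable part.
import subprocess

SPECIFIED_OUTPUT = "Hello, world!\n"  # hypothetical spec

result = subprocess.run(["./hello"], capture_output=True, text=True)
assert result.stdout == SPECIFIED_OUTPUT, "output differs from the spec"
```

The machine performs the comparison tirelessly. Deciding what to check, what "correct" means, and what a mismatch signifies: that's testing, and the machinery does none of it.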
It's unhelpful to lump all non-mechanistic tasks in testing under "manual", as though all of the elements of cooking (craft, social, cultural, aesthetic, chemical, nutritional, economic) were "manual" and unimportant, and all that mattered were the food processors and blenders. Geez.
If you want to notice important things in testing, consider some things that get swept under the carpet of "manual testing": *experiential* testing, in which the tester's actions are indistinguishable from those of the contemplated user. Contrast that with *instrumented* testing.
Instrumented testing is testing wherein some medium (tool or technology) gets in between the tester and the naturalistic encounter with and experience of the product. Instrumentation alters, or accelerates, or reframes, or distorts; in some ways helpfully, in other ways less so.
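A toy of my own to illustrate "medium in between" (the `search` function is a hypothetical stand-in): even a simple logging wrapper extends what we can observe while subtly altering the encounter.

```python
# Instrumentation: a wrapper sits between the tester and the product,
# extending observation while also changing the encounter (its timing,
# at the very least).
import time

def instrumented(fn):
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{fn.__name__}{args!r} -> {result!r} in {elapsed:.6f}s")
        return result
    return wrapper

@instrumented
def search(query):           # hypothetical product function
    return query.upper()     # stand-in behaviour

search("manual testing")     # observed -- and, however slightly, altered
```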
Are you saying "manual"? You might be referring to *attended* or *engaged* aspects of testing, wherein the tester is directly and immediately observing and analyzing aspects of the product and its behaviour in the moment—and contrast that with mechanistic, unattended activity.
Are you saying "manual"? You might be referring to testing activity that's *transformative*, wherein something about performing the test changes the tester in some sense, inducing epiphanies or learning or design ideas. Contrast that with *transactional*, a box-checking activity.
Did you say "manual"? You might be referring to "exploratory" work, which is interestingly distinct from "experiential". "Exploratory" (in Rapid Software Testing, at least) refers to agency: who or what is in charge of making choices about the testing. How might exploratory work not be experiential?
You could be exploring—making unscripted choices—in a way entirely unlike the user's normal encounter with the product—generating mounds of data and interacting with the product to stress it out, to starve it of resources, to investigate like a *tester*, rather than like a user.
And you could be doing experiential testing in a highly scripted, much-less-exploratory kind of way; for instance, following a user-targeted tutorial and walking through each of its steps to observe inconsistencies between the tutorial and the product's behaviour.
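To make the tester-like (exploratory, but decidedly non-experiential) interaction above concrete, here's a sketch; the endpoint and payload shape are assumptions of mine, not anyone's real product:

```python
# Generate mounds of data and push them at a (hypothetical) endpoint --
# the kind of interaction no ordinary user would ever perform.
import requests  # third-party: pip install requests

for size in (1, 10_000, 1_000_000):
    payload = {"note": "x" * size}
    r = requests.post("http://localhost:8000/api/notes", json=payload, timeout=30)
    print(size, r.status_code, len(r.content))
    # Watch for surprises: timeouts, 500s, truncation, memory growth --
    # not the happy path a user would follow.
```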
For a while, we considered that people who spoke of "manual testing" might mean *speculative* testing: asking "what if?" We contrasted that with "demonstrative" testing—but then we reckoned that demonstration is not really a test at all. Not intended to be, at least.
The thing is: part of the bullshit that testers are being fed is that "automated" testing is somehow "better" than "manual" testing because the latter is "slow and error prone" — as though people don't make mistakes when they use automated checks. They do. At colossal scale.
Sure, automated checks *run* quickly; they have low execution cost. But they can have enormous development cost; enormous maintenance cost; very high interpretation cost (figuring out what went wrong can take a lot of work); high transfer cost (explaining them to non-authors).
There's another cost, related to these others. It's very well hidden and not reckoned. A sufficiently large suite of automated checks is impenetrable; it can't be comprehended without very costly review. Do those checks that are always running green even do anything? Who knows?
Checks that run RED get frequent attention, but a lot of them are, you know, "flaky": running red when they should be running green. And of the thousands that are running green, how many should actually be running red? It's cognitively costly to know that—so we ignore it.
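Here's a toy illustration of that last point (mine, not from any real suite): a check that can only ever run green, because its red signal gets swallowed.

```python
def total(items):
    return sum(items[1:])  # hypothetical function under check -- note the bug

def check_total():
    try:
        assert total([10, 20, 30]) == 60
    except AssertionError:
        pass  # the red result is silently discarded...
    return "green"  # ...so the check runs green when it should run red

print(check_total())  # "green", despite the bug in total()
```

Multiply that by thousands of always-green checks that nobody ever rereads, and you begin to see the hidden cost.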
And ALL of these costs represent another hidden cost: *opportunity cost*—the cost of doing something such that it prevents us from doing other equally or more valuable things. That cost is immense when we contrast trying to automate GUIs with interacting with the damned product.
Something even weirder is going on: instead of teaching non-technical testers to code and get naturalistic experience with APIs, we put such testers in front of GUIish front-ends to APIs. So we have skilled coders trying to automate GUIs, and Cypress de-experientializing API use!
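For contrast, here's how little it takes to encounter an API directly and naturalistically (the endpoint is a hypothetical example of mine; a sketch, not a prescription):

```python
import requests  # third-party: pip install requests

r = requests.get("https://api.example.com/orders/42", timeout=5)
print(r.status_code)                  # what status actually comes back?
print(r.headers.get("Content-Type"))  # what does the server claim to send?
print(r.json())                       # what shape is the payload, really?
```

A tester working at this level is having an experience of the product. A tester clicking through a GUI wrapper around it is having an experience of the wrapper.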
And none of these testers are encouraged to analyse the cost and value of the approaches they're taking. Technochauvinism (great word; read @merbroussard's book) enforces the illusion that testing software is a routine, factory-like, mechanistic task. It isn't that at all.
@merbroussard Testing must be seen as a social (and socially challenging), cognitive, risk-focused, critical (in several senses), analytical, investigative, skilled, technical, exploratory, experiential, experimental, scientific, revelatory, honourable craft. Not "manual" or "automated". FFS.
@merbroussard Testing has to be focused on FINDING PROBLEMS THAT HURT PEOPLE OR MAKE THEM UNHAPPY. Why? To help the optimists who are building the products become aware of those problems, and to address them. In doing so, they make themselves look good, make money, and help people have better lives.

