1) Heuristic: When X is a noun, "X testing" is "testing focused on X-related risk".

Heuristic: When Y is an adjective or adverb, "Y testing" is "testing in a Y-ish way".

Heuristic: X testing can be done in ways modified or not by Y; and Y testing may be focused on X or not.
2) So: "Performance testing" means "testing focused on risk related to performance". "Usability testing" means "testing focused on usability-related risk". "Function testing" means "testing focused on risk related to functions". "Unit testing" means "testing focused on problems in the units".
3) Now let's look at "regression testing". Regression testing means "testing focused on risk related to regression". (Regression means "going backwards"; presumably, getting worse in some sense.) *Repetitive* is an adjective, modifying something; not really a thing in itself.
4) "Regression testing", therefore, is "testing focused on the risk that things have got worse". There are several implications here. One is that "regression testing" *doesn't* necessarily mean "repetitive testing". Another is that a repeated test isn't always a regression test.
5) A repeated test might provide evidence that things have got worse. But a new test can do that too. In fact, a new test can do something a repeated test might not do: a new test might provide evidence that things have been bad all along in ways the repeated tests didn't detect.
6) After a new test suggests that there might be a problem, what will we do? We'll probably repeat the test! We'll do that to try to refine our understanding of the potential problem—or perhaps to conclude that our test result was in error, due to bad procedures or assumptions.
7) The point: a repeated test is not necessarily a regression test; a regression test is not necessarily a repeated test. This is important, because *repeated* automated checks, intended to catch regressions, often lose their power to do so; they're overfocused on the familiar.
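The last point can be sketched in code. Here is a minimal, hypothetical Python illustration (the `round_price` function and its bug are invented for this example): a repeated check keeps passing on its familiar inputs, while a new test with unfamiliar inputs reveals a problem that has been there all along.

```python
def round_price(value):
    """Round a price to two decimal places.

    Hypothetical buggy implementation: it truncates rather than rounds.
    """
    return int(value * 100) / 100


def repeated_check():
    """A 'regression' check written long ago, rerun on every build.

    Its inputs happen never to expose the truncation bug, so it
    passes every time, telling us nothing new.
    """
    assert round_price(3.50) == 3.50
    assert round_price(2.25) == 2.25
    return "pass"


def new_test():
    """A new test probing a different input.

    It provides evidence that the product has been bad all along:
    not a regression, but a problem the repeated check never detected.
    """
    return round_price(0.999)  # we might expect 1.0 here


print(repeated_check())  # the familiar check still passes
print(new_test())        # truncated result, not the rounded one
```

The repeated check is overfocused on the familiar: it confirms what it always confirmed. The new test, by varying the input, surfaces a longstanding bug that no amount of repetition would have found.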

• • •

Thread by Michael Bolton

