Testers: feel like there’s too much to test? Start by surveying the product and creating a coverage outline. Next, try quick testing that finds shallow bugs. This highlights product areas and failure patterns that prompt suspicion about the risk of deeper, more systemic problems.
Others on the project may identify bugs and risks. The difference in the testing role is that probing for problems and investigating them is at the *centre* of our work. For everyone else, that’s a part-time job; a distraction; an interruption of the primary work; a side hustle.
Just as people doing development work don't typically do sales and marketing, HR, or accounting, they don’t do deep testing either. That can be totally reasonable; they've got productive work to do. But if there's no testing expertise on the team, expert testing won't happen.
1) Still having trouble logging in to Facebook, but for mundane reasons. See, apps with 2FA send an email or a text message when you ask for a password reset. But unlike machines, people are impatient, and mash that "request reset code" button multiple times.
2) As a consequence, several reset codes get sent. Because of email latency, who knows when the most recent request has been fulfilled? The newest code in the inbox might not be the newest one sent, and things get out of sync.
3) This gets richer when Messenger notices trouble. I get email from Facebook: "We noticed you're having trouble logging into your account. If you need help, click the button below and we'll log you in." Then there's a one-click button that will allow me to log in to Facebook.
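The out-of-sync reset codes above are worth modeling. Here's a toy sketch (entirely hypothetical; nothing here is Facebook's actual implementation) of the common design where issuing a new reset code invalidates the previous one, plus out-of-order email delivery:

```python
import secrets

class ResetCodeService:
    """Toy model: issuing a new reset code invalidates any earlier one."""
    def __init__(self):
        self.valid_code = None

    def request_reset(self):
        self.valid_code = secrets.token_hex(3)
        return self.valid_code

    def redeem(self, code):
        return code == self.valid_code

service = ResetCodeService()

# The impatient user mashes the button twice; two emails are queued.
first = service.request_reset()
second = service.request_reset()

# Email latency delivers them out of order: the *first* code arrives last,
# so the newest message in the inbox carries the stale, invalidated code.
inbox_newest = first

assert not service.redeem(inbox_newest)   # stale code rejected
assert service.redeem(second)             # only the last-issued code works
```

The sketch shows why the user's perfectly reasonable heuristic ("use the newest email") fails: the system's ordering (issue time) and the user's ordering (arrival time) have silently diverged.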
18. Learning about problems that will threaten value to customers certainly requires scrutiny from the builder/insiders' perspective. The code shouldn't be inconsistent with builders' intentions. And among themselves, the builders can be pretty good at spotting such problems. /19
19. But to be really good at spotting problems that threaten customer value requires builders' savvy PLUS a significant degree of estrangement from the builders' set and setting, and requires immersion in the consumer/outsiders' form of life. And there's a problem here. /20
20. The problem here is that, with a few exceptions, *deep* immersion in the user/consumer/outsider form of life isn't easy to come by for most testers. Some testers have come from the product's intended domain; that can help a ton. Others study the domain deeply; also good. /21
5. This is not to say that testers can't be helpful with or participants in checking. On the contrary; we likely want everyone on the team looking for the fastest, most easily accessible interfaces by which we can check output. Testers know where checking is slow or expensive. /6
7. But here's something odd: testers don't always point out where checking is slow, difficult, or expensive—and, just as bad, maybe worse—where checking is misaligned with product risk. I suspect there are some fairly gnarly social reasons for this goal displacement. /8
8. In some organizations, testers' prestige is based on programmers' prestige. Do you write code? Then you're one of the cool people. You don't? Then who needs you, really? This is a mistake, of course, but it's true that testers don't produce revenue. /9
The tester’s mission is not the builder’s mission. The builder's mission is to help people's troubles go away, envisioning success.
The tester's mission is to see trouble wherever s/he looks, anticipating failure. The tester’s mission helps to serve the builder’s mission. /2
2. The tester's mission helps to serve the builder's mission in at least two ways: a) in noticing where initial problems persist; where the builder's work might not be done yet; b) in recognizing new problems that have been created while attempting to solve the initial ones. /3
3. Some problems can be anticipated, and then noticed by performing checks in a rote or mechanistic way. That kind of checking is part of a disciplined development and building process; very good to do, but it doesn't hold much hope for identifying many unanticipated problems. /4
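That distinction between anticipated and unanticipated problems can be made concrete. Here's a minimal sketch (the function and checks are invented for illustration, not from the thread) of rote, mechanistic checking that passes while a nearby unanticipated problem goes unnoticed:

```python
def apply_discount(price_cents, percent):
    """Hypothetical function under test: discount a price given in cents."""
    return int(price_cents * (100 - percent) / 100)

# Anticipated-problem checks: rote, mechanistic, and well worth automating.
assert apply_discount(1000, 10) == 900
assert apply_discount(1000, 0) == 1000
assert apply_discount(1000, 100) == 0

# All checks pass -- yet just outside them lurks a problem nobody anticipated:
# int() truncates, so 995 cents at 10% off comes out to 895 instead of the
# 896 that conventional rounding would give. Totals drift, pennies vanish.
assert apply_discount(995, 10) == 895   # truncation, not rounding
```

The checks do their job: they confirm the anticipated behaviour. Noticing the truncation problem takes something else: curiosity about inputs that don't divide evenly, which is exactly the kind of thing a rote process won't prompt.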
20) If you present testing as a complex, cognitive, *social*, *self-directed*, *engineering*, *problem-solving* task, I guarantee more programmers will happily throw themselves into it. And, if you have testers, MORE TESTERS WILL TOO. So what is the problem to be solved here?
21) One big problem is: we have a new, complex, technological product that we intend to help solve a problem; we may not understand the problem or our solution as well as we'd like; and whatever we don't know about all that could bite our customers and could bite us.
22) Finding problems that matter in that product is greatly aided by access to rich models of the product itself; of customers, how they use it, and what they value; of who else might be affected (support, ops, sales...); of coverage; of recognizing possible trouble; and of risk.
15) There are ways of addressing those problems, but I don't think an appeal to quality is enough. Developers are already satisfying lots of quality criteria—it's just that they're specific quality criteria that are important to some managers: SOMETHING, ON SCHEDULE, ON BUDGET.
16) When programmers are satisfying those quality criteria, it's not helpful to suggest that they "learn about quality", or worse "learn to care about quality". They already care plenty about quality; but maybe they rate some dimensions of quality differently from your priorities.
17) If testers and managers treat testing as a rote task of confirming that something works, it's inevitable that programmers will find it tedious and boring: they KNOW it works. They built it, right? Why would they build something and then give it to others if it didn't work?
1) When managers say "testing is everyone's responsibility", ask if they're supporting or mandating developers to perform experiential testing, read testing books, take testing classes, study critical thinking, read bug reports from the field, set up test environments...
2) Ask also how many developers are hurling themselves towards these activities. Some developers (interestingly, the very best ones, in my experience) will be quite okay with all this. Other developers won't be so enthusiastic, and that might be both explicable and okay.
3) It might be okay for developers not to be too interested or engaged in deep testing, because certain kinds of testing are a) a side job for developers and b) really disruptive to the developers' forward flow. For developers, shallow testing might be good and sufficient.
Testers could be investigating products to reveal problems that matter to customers and risk that could lead to loss for the business. Yet many testers write scripts to demonstrate that all is okay, or struggle to find locators on a web page. Wondering why testing isn’t valued?
There IS an antidote to all this. It is simple in one sense, but it's not easy: deliver the goods. When you clearly report problems that matter to managers and developers, they become too busy arguing with each other about how to fix problems before the deadline to hassle you. Or…
…they act like responsible professional people and work things out (sometimes with your help), and thank you for your clear report. Trouble is, there are testers—lots of them—who have been gulled into the idea that their job is to demonstrate that everything is okay. It isn’t.
1) In its earliest days, API stood for "Application Program Interface"; now, mostly "Application Programming Interface". We might build and test APIs far better if we think of them as Application *Programmers'* Interfaces. Programs alone never use APIs; people writing and using programs do.
2) It might be easy to think that programs use APIs, or that programs call APIs. But that's like thinking that drill bits use chucks, or that lamps use switches, sockets, and extension cords. *People* use drills and lamps—and their elements—as parts of integrated systems.
3) APIs—like everything else that gets built—are built from the perspective of an insider. That's inevitable; the act of building something automatically puts you inside the builder's perspective. Escaping that perspective is essentially impossible, until you forget having built it.
1) Yet another thing claiming that "testing a product manually doesn’t scale". That's exactly like saying "programming a product manually doesn’t scale", "editing a book manually doesn’t scale", or "teaching manually doesn’t scale". Let’s unpack that claim.
2) This point will be painfully familiar to some; a surprise to others: Premise: testing is evaluating a product by learning about it through experiencing, exploring and experimenting, which includes to some degree: questioning, study, modeling, observation, inference, etc.
3) Testing incorporates more than that, too: risk analysis; critical thinking; raising doubt; collaborating with others; probing mysteries; *designing* tests. These are things that only humans can do. Testing is a cognitive, intellectual, investigative, social process.
1) Documentation people and testers are like admins and secretaries. Companies came to think of them as an extravagance, and got rid of them in the name of "efficiency". But this is a completely bogus efficiency, because the cost of NOT having them is largely invisible.
2) Everybody is now swamped with doing the work that support people used to do, but that's invisible even to the people who are now performing that work. It's "just the way things (don't) get done around here". I notice this as I'm programming; most API documentation *sucks*.
3) Of course, when I'm programming (even for myself), my job is to make trouble go away; to get the problem solved. When something gets in the way of that, I'm disinclined to talk about it. "I'm a problem-solver!" So I'll buckle down and push on through. Gotta get it done!
1) Heuristic: When X is a noun: "X testing" is "testing focused on X-related risk".
Heuristic: When Y is an adjective or adverb, "Y testing" is "testing in a Y-ish way".
Heuristic: X testing can be done in ways modified or not by Y; and Y testing may be focused on X or not.
2) So: "Performance testing" means "testing focused on risk related to performance". "Usability testing" means "testing focused on usability-related risk". "Function testing": "testing focused on risk related to functions". "Unit testing": testing focused on problems in the units.
3) Now: let's look at "regression testing". Regression testing means "testing focused on risk related to regression". (Regression means "going backwards", presumably getting worse in some sense.) *Repetitive* is an adjective, modifying something; not really something in itself.
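To make the noun-heuristic concrete: one narrow, automatable slice of regression-focused testing is comparing current output against a saved known-good baseline. Here's a minimal sketch (hypothetical function and baseline, invented for illustration); note that it can only flag backwards movement on the inputs it happens to cover:

```python
def slugify(title):
    """Hypothetical function under regression watch."""
    return title.strip().lower().replace(" ", "-")

# Known-good outputs captured from an earlier, trusted version.
BASELINE = {
    "Hello World": "hello-world",
    "  Padded  Title ": "padded--title",
}

def regressions():
    """Return inputs whose current output no longer matches the baseline."""
    return [t for t, expected in BASELINE.items() if slugify(t) != expected]

assert regressions() == []   # no backwards movement on the covered inputs
```

A check like this addresses one kind of regression risk mechanically; testing focused on regression risk is bigger than that, since it includes deciding what "worse" might mean here and where going backwards would matter.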
No one ever sits in front of a computer and accidentally compiles a working program, so people know (intuitively, correctly) that programming must be hard. Almost anyone can sit in front of a computer and stumble over bugs, so they believe (incorrectly) that testing must be easy!
There is a myth that if everyone is of good will and tries really, really hard, then everything will turn out all right, and we don't need to look for deep, hidden, rare, subtle, intermittent, emergent problems. That is, to put it mildly, a very optimistic approach to risk.
The trouble is that to produce a novel, complex, product, you need an enormous amount of optimism; a can-do attitude. But (@FionaCCharles quoting Tom DeMarco here, IIRC), in a can-do environment, risk management is criminalized. I'd go further: risk acknowledgement is too.
1) Why have testers? Because in some contexts, in order to see certain problems, we need a perspective different from that of the builders—but we also need a perspective different from that of users. First, some people are neither builders nor users. Some are ops or support folk.
2) Others are trainers or documenters. Some people are affected by users, or manage users, but are not themselves users. Some are users, but forgotten users.
Another reason it might be important to have testers: everyone mentioned so far is focused mostly on *success*.
3) A key attribute of the skilled and dedicated tester is a focus on risk, problems, bugs, and the possibility of failure. Most builders can do that to a significant and valuable degree, but the mental gear shifting isn’t automatic; it requires skillful use of the mental clutch.
1) Since it's Friday, OF COURSE the big little idea arrives unbidden, to be consigned to weekend Twitter. However... several unrelated conversations are leading to some epiphanies that help to explain A LOT of phenomena. For instance, testing's automation obsession. Game on.
2) There are problems in the world. People don't like to talk about problems too much. All kinds of social forces contribute to that reluctance. "Don't come to me with problems! Come to me with solutions!" Dude, if I had a solution, I wouldn't come near your office. Trust me.
3) Here's the thing (and I'm painting in VERY broad strokes here): builders, or makers, or (tip of the hat to @GeePawHill) practitioners of geekery are trying to solve technical or logistical problems. Consumers, or managers, or some testers, are trying to solve social problems.
1) Why do we test? A primary reason is to discover problems in our products before it's too late.
A problem is a difference between the world as experienced and the world as desired. It's a problem if someone experiences loss, harm, pain, bad feelings, or diminished value.
2) The degree to which something about a software product is perceived as a problem is the degree to which someone suffers loss, harm, pain, bad feelings, or diminished value. That is: a problem about a product is a relationship between the product and some person.
3) But the degree to which something is perceived to be a problem also depends on how someone, and someone's suffering, is important to the perceiver. That is a social issue, not merely a technical one. That barely gets mentioned in most testing talk, or so it seems to me.
1) Printer won't print because of a "paper jam". There's no paper; there's no jam. Disconnecting the power and reconnecting doesn't clear the jam that isn't there. An elaborate series of moves, with a restart, does. Printer loses all of its non-factory configuration. Reset that.
2) Now the printer starts up fine. Gee, this would be a good time to download and update the firmware. Download complete. Process starts. Note that the machine shouldn't be turned off during the process. Stuff happens, sounds, machinery resetting, etc. Progress bar increments.
3) 90% of the way through the firmware upgrade, the progress bar stops moving. Hmmm, this is taking a while. Check the control touchpad on the printer. Guess what? "Paper jam." No way to clear it or ignore it... so we've got a race condition here.
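The hazard in that last tweet can be sketched as a toy, deterministic model (invented for illustration; not the printer's actual firmware): an updater that refuses to proceed while an error flag is set, interleaved with a status monitor that can raise a stale "paper jam" mid-update. Since clearing the jam requires a restart the updater forbids, progress stalls:

```python
def run_update(steps, jam_raised_at):
    """Advance the update until done, or until a stale jam flag blocks it."""
    jam = False
    for progress in range(steps):
        if progress == jam_raised_at:
            jam = True   # spurious error reported by the status monitor
        if jam:
            # Can't clear the jam without a restart; can't restart mid-update.
            return progress * 100 // steps   # stuck at this percentage
    return 100

assert run_update(steps=10, jam_raised_at=None) == 100  # no jam: completes
assert run_update(steps=10, jam_raised_at=9) == 90      # jam at 90%: stuck
```

In the real device the interleaving is nondeterministic, which is what makes the bug a race: most runs complete, and the hang only appears when the phantom jam happens to fire during the window where it can't be cleared.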
Here is why experiential and usability testing are important: after over 30 years, it's still hard as fuck to use Microsoft Word to create a simple, unadorned, #10 envelope with a recipient and a return address. Designers should be forced to watch films of people trying this.
Here are some of the aspects of the problem. 1) You'd THINK that "envelope" would be one of the options immediately available from "File/New Document". Nope. A "World's Best" award certificate is offered, but not a damned envelope. Which template do YOU need more often?
2) Try searching online for "plain envelope". Reply: "We couldn't find any Word templates that matched what you were looking for." OK. "#10 envelope". That yields four results, of varying fanciness, but nothing straightforward and plain. Pick "Red" as the cleanest one.
1) Want to evaluate the relevance of your testing? How much of it is focused on what the designers and builders intended? Now ask how much of it is focused on the intentions, needs, and desires of the *actual* people who use the *actual* product—and the people who support them.
2) One of the seven principles of the context-driven school of software testing is that the product is a solution to a problem, and if the problem isn’t solved, *the product doesn’t work*. (Cem Kaner gets credit for that one; the emphasis is mine.)
3) Checking that the product does something reasonably close to what we—as a development group—intended is a really good idea. Doing that is part of the discipline of software development work; of any kind of product development. Is what we’re building consistent with its design?
1) An epiphany a year ago informs part of our definition of testing: Testing is the process of evaluating a product by learning about it through *experiencing*, exploring and experimenting, which includes to some degree questioning, studying, modeling, observation, inference, etc.
2) That stuff that people call "exploratory" testing (all testing is exploratory) might also—and arguably, better—be considered as *experiential* testing. Direct, interactive, and to a large degree unmediated experience with aspects of the product *as people would experience it*.
3) We might use tools to help with aspects of exploration, or with extending or accelerating aspects of gaining experience, but experience with the product is essential. Too often, tool use in testing these days is directed entirely *away* from gaining experience of the product.