I solve testing problems that other people can't solve, and I teach people how they can do it too.
Jul 24, 2022 • 12 tweets • 24 min read
The Modern Testing Principles claim from the get-go that they’re “not that modern, and not that much about testing”. That’s the bit I agree with most strongly. They appear to be, to my eye, relatively time-honoured and reasonable aspirations for teams, mostly about project management.
Pretty much everyone can test, just as pretty much everyone can drive *something*. This is uncontroversial. Not everyone can drive a truck, race car, bus, train, or airplane safely and skillfully; nor do most people dedicate themselves to developing expertise in these fields.
Jun 11, 2022 • 32 tweets • 7 min read
1) I believe I first heard this from Jerry Weinberg in 2008: never try to automate a process that you don't understand. (I'll add an exception: you might want to try to automate a process if your goal is to understand something about that process.)
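For illustration, here's a tiny sketch of that exception at work: automating not to verify answers, but to learn how something behaves. The `mystery_round` function is a made-up stand-in for any process we don't yet understand.

```python
# A deliberately tiny stand-in for a process we don't yet understand.
def mystery_round(value: str) -> str:
    return str(round(float(value), 2))

# Sweep a handful of boundary-ish inputs and record what comes back,
# looking for surprises rather than asserting one "correct" answer.
observations = {raw: mystery_round(raw) for raw in
                ["0.005", "0.015", "0.025", "1.005", "2.675"]}

for raw, result in observations.items():
    print(f"{raw} -> {result}")
# Studying the output teaches us how the rounding actually behaves at these
# boundaries -- understanding we didn't have before we automated the sweep.
```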
2) There is an important testing skill that in Rapid Software Testing we call "test framing": the capacity to understand and describe the logical connections that link the purpose, activities, outcomes, and interpretations of a test; that relate a test to the mission and to risk.
Jun 8, 2022 • 7 tweets • 2 min read
Rikard Edgren at EuroSTAR: testing is about understanding and exploring relationships and connections; learning and re-learning rapidly; experimenting and discussing; reading and writing. Everything changes. This will go with my talk like peas and carrots. #EuroSTARconf
As an add-on to Rikard’s “potato model” metaphor, I’d add: in addition to seeing it as a three-dimensional potato, you can peel it, too.
Jun 7, 2022 • 5 tweets • 1 min read
The world used to know how to airport, but has forgotten.
There’s a cascade: not enough Customs agents in Toronto, because of attrition and furloughs due to COVID; incoming flight gets held; airplane isn’t ready for the groomers; grooming crew gets shifted; catering gets messed up; loading gets messed up. Flight lands four hours late.
May 21, 2022 • 16 tweets • 3 min read
1) Time for a few words on *experiential testing*, a positive replacement for one aspect of "manual testing". In the RST Namespace, experiential testing is *testing via an encounter with the product that is practically indistinguishable from that of the contemplated user*.
2) Experiential testing emphasizes direct, naturalistic interaction with the product as a human would interact with it. That's important, because much of what goes on in testing may be *instrumented*; not direct, not the way real people really use it. That's both feature and bug.
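To make the instrumented/direct distinction concrete, here's a made-up sketch; the URL and payload are invented, and the point is only that the two encounters are not the same thing.

```python
import requests

# Instrumented: drive the product through an interface the real user never sees.
# Fast and repeatable, but it bypasses the rendering, the typing, the waiting,
# and everything else the user actually experiences.
response = requests.post(
    "https://example.test/api/login",           # invented endpoint
    json={"user": "pat", "password": "secret"},
    timeout=5,
)
assert response.status_code == 200

# Experiential: no code at all. Open the product, log in by hand, at the pace
# and with the patience of the contemplated user, and take notes on anything
# confusing, slow, or surprising along the way.
```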
May 12, 2022 • 7 tweets • 3 min read
8) You can write programs to help you
- overwhelm (or, sometimes, starve) the system;
- probe the internal state of the system or the data while a test is being performed...
Using code in these ways can make testing much deeper, richer and more powerful.
9) As you're developing your testing code, you'll be steered towards interacting with the product and analyzing it in ways that emphasize discovery over confirmation. The key is to remember that the task is to find problems that matter, using tools to help you *investigate*.
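As a rough sketch of what that can look like in practice (the endpoint, worker counts, and thresholds here are all invented):

```python
import concurrent.futures
import time

import requests

URL = "https://example.test/api/search?q=stress"   # invented endpoint

def hit(_):
    # One request, timed; exceptions are recorded rather than swallowed.
    start = time.monotonic()
    try:
        r = requests.get(URL, timeout=10)
        return r.status_code, time.monotonic() - start
    except requests.RequestException as exc:
        return type(exc).__name__, time.monotonic() - start

# Overwhelm: many concurrent requests against the same endpoint.
with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(hit, range(500)))

# Investigate: summarize what actually happened instead of asserting one answer.
outcomes = {}
for status, elapsed in results:
    outcomes.setdefault(status, []).append(elapsed)

for status, times in outcomes.items():
    print(f"{status}: {len(times)} responses, slowest {max(times):.2f}s")
# Odd status codes, timeouts, or wildly uneven timings are invitations to dig
# further -- discovery over confirmation.
```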
May 12, 2022 • 7 tweets • 2 min read
1) Here's a heuristic for your career as a tester: avoid learning about "test automation".
(A heuristic is a means for solving a problem that can work and that might fail. This heuristic might help you advance in your test career, and it might not.
2) The key is not to follow a heuristic, but to apply it at your own discretion. That requires you to use your own knowledge, judgment, and wisdom, and to exercise responsibility.)
Mar 12, 2022 • 5 tweets • 2 min read
21) The mission of building the product takes focus, and as Marc Porat says in /General Magic/, that focus to some degree requires the builders to push the possibility of failure aside. Allowing that possibility to enter one's headspace can be disruptive to making progress.
22) That's why we believe having some person(s) on the project focused on testing, focused on looking for trouble rather than addressing it, is a powerful heuristic for addressing the possibility that there are problems that are not obvious and/or not very important to the builders.
Mar 12, 2022 • 10 tweets • 2 min read
11) Every now and again, while you're testing a product (feature, function, story, code change,...) it's worthwhile to pause and reflect. The pause is one of the subtle, too-often-ignored tools that we can apply to sharpen and deepen our testing. (therockertester.wordpress.com/2021/11/05/the…)
12) A pause offers a moment to reconsider the questions above AND the questions we've asked AND the experiments we've performed so far. We might be getting the right answers, but are we (still) asking the most important, relevant, risk-focused questions?
Mar 12, 2022 • 8 tweets • 2 min read
1) I've been in software development for about 35 years. It's a little discouraging that to this day, many people still believe that the point of testing is to get the right answers to prescribed questions, rather than to reveal the actual state of the product, problems and all.
2) Finding problems in the product requires us to learn the product, raising new questions every step of the way. The most important questions are not "are these outputs correct?", but questions that foster the discovery of problems that threaten product and business value.
Dec 21, 2021 • 20 tweets • 4 min read
1) Again this happens. "If you want your problem addressed, it would be better to contact the company directly. They don't listen to us here in support." These people have my sympathy. But hey,
a) I DID call the company directly. I called customer support. They tried to help.
2) b) Forgive my intemperance here. As a former program manager I say: If you are a manager or developer, and you *ignore* the rich trove of information available to you from customer support, I say you aren't doing your damned job and you are not yet qualified to do it well.
Nov 23, 2021 • 18 tweets • 9 min read
1) I find customer service surveys awful when they ask for binary choices. I don't know where or how the data will be used, but I am positive that it's subject to misinterpretation, because both the checked and the unchecked status are wrong in some sense. Today's example: @eventbrite
2) I contacted customer service today because of a bug: as so often happens, US designers and US developers can't cope with the idea that a US citizen (as I am; a Canadian citizen too) might not live in the US. Irrespective of where I live, though, I must file US tax returns.
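The bug class is easy to picture; here's a made-up illustration (all names and fields are invented):

```python
# Made-up illustration: code that quietly assumes a US citizen must live in the US.
def validate_tax_form(citizenship: str, country_of_residence: str) -> list:
    errors = []
    if citizenship == "US" and country_of_residence != "US":
        # Buggy assumption: rejects perfectly legitimate filers.
        errors.append("US citizens must provide a US address.")
    return errors

# A US citizen living in Canada still has to file US tax returns,
# but this validation refuses to let them through.
print(validate_tax_form("US", "CA"))   # ['US citizens must provide a US address.']
```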
Nov 6, 2021 • 6 tweets • 2 min read
Instead of asking "What tests should I automate?" consider asking "What specific fact do I want to check? What’s a really good way to check that fact? Is that the fastest, earliest, easiest, least expensive way to check it? Do I really need to check this fact? Is it worth it?"
"What risk am I addressing by checking this fact? Is it a serious risk? Why do we think it is? Is that foreboding feeling trying to tell us something?"
"What problems, near here or far away, might I be missing as a consequence of focusing on this fact, and checking it this way?"
Oct 9, 2021 • 20 tweets • 4 min read
Testers: feel like there’s too much to test? Start by surveying the product and creating a coverage outline. Next, try quick testing that finds shallow bugs. This highlights product areas and failure patterns that prompt suspicion about the risk of deeper, more systemic problems.
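A coverage outline doesn't need to be fancy; here's a tiny sketch, with invented product areas, of an outline that quick-testing notes can hang off:

```python
# Invented product areas, kept as plain data so notes can refer to them.
coverage_outline = {
    "Accounts": ["sign-up", "login", "password reset", "profile editing"],
    "Catalog":  ["search", "filters", "product detail", "out-of-stock items"],
    "Checkout": ["cart", "discount codes", "payment", "taxes", "receipts"],
    "Admin":    ["user management", "reporting", "data export"],
}

# Shallow bugs found during quick testing, noted against the outline;
# clusters of these hint at where deeper, more systemic problems may lurk.
quick_test_notes = {
    ("Checkout", "discount codes"): "error message truncated; validation looks shaky",
    ("Catalog", "filters"): "filters reset on the back button; state handling?",
}

for (area, item), note in quick_test_notes.items():
    print(f"{area} / {item}: {note}")
```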
Others on the project may identify bugs and risks. The difference in the testing role is that probing for problems and investigating them is at the *centre* of our work. For everyone else, that’s a part-time job; a distraction; an interruption of the primary work; a side hustle.
Oct 5, 2021 • 39 tweets • 8 min read
1) Still having trouble logging in to Facebook, but for mundane reasons. See, apps with 2FA send an email or a text message when you ask for a password reset. But unlike machines, people are impatient, and mash that "request reset code" button multiple times.
2) As a consequence, several reset codes get sent. Because of email latency, who knows when the most recent request has been fulfilled? So the most recent code in the email box might not be the most recent one sent, so things get out of sync.
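A toy model of that failure mode (all the timings are invented): the server honours only the newest code, but uneven email delivery means the newest message in the inbox isn't necessarily the newest code.

```python
# (code, time issued in seconds, email delivery delay in seconds) -- all invented.
requests_made = [
    ("CODE-1", 0, 200),   # first mash of the button; its email sits in a slow queue
    ("CODE-2", 5, 40),
    ("CODE-3", 9, 60),    # the only code the server will now accept
]

valid_code = requests_made[-1][0]

# Order in which the emails actually arrive (issue time + delivery delay).
inbox = sorted(requests_made, key=lambda r: r[1] + r[2])
newest_email = inbox[-1][0]   # the user, reasonably, grabs the newest email

print("server accepts:", valid_code)    # CODE-3
print("newest email:  ", newest_email)  # CODE-1 -- a stale code, so the reset fails
```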
May 24, 2021 • 6 tweets • 1 min read
18. Learning about problems that will threaten value to customers certainly requires scrutiny from the builder/insiders' perspective. The code shouldn't be inconsistent with builders' intentions. And among themselves, the builders can be pretty good at spotting such problems. /19
19. But to be really good at spotting problems that threaten customer value requires builders' savvy PLUS a significant degree of estrangement from the builders' set and setting, and requires immersion in the consumer/outsiders' form of life. And there's a problem here. /20
May 24, 2021 • 6 tweets • 2 min read
5. This is not to say that testers can't be helpful with or participants in checking. On the contrary; we likely want everyone on the team looking for the fastest, most easily accessible interfaces by which we can check output. Testers know where checking is slow or expensive. /6
7. But here's something odd: testers don't always point out where checking is slow, difficult, or expensive—and, just as bad, maybe worse—where checking is misaligned with product risk. I suspect there are some fairly gnarly social reasons for this goal displacement. /8
May 24, 2021 • 4 tweets • 1 min read
The tester’s mission is not the builder’s mission. The builder's mission is to help people's troubles go away, envisioning success.
The tester's mission is to see trouble wherever s/he looks, anticipating failure. The tester’s mission helps to serve the builder’s mission. /2
2. The tester's mission helps to serve the builder's mission in at least two ways: a) in noticing where initial problems persist; where the builder's work might not be done yet; b) in recognizing new problems that have been created while attempting to solve the initial ones. /3
May 22, 2021 • 5 tweets • 1 min read
20) If you present testing as a complex, cognitive, *social*, *self-directed*, *engineering*, *problem-solving* task, I guarantee more programmers will happily throw themselves into it. And, if you have testers, MORE TESTERS WILL TOO. So what is the problem to be solved here?
21) One big problem is: we have a new, complex, technological product that is intended to help solve a problem; we may not understand the problem or our solution as well as we'd like; and whatever we don't know about all of that could bite our customers and could bite us.
May 22, 2021 • 5 tweets • 1 min read
15) There are ways of addressing those problems, but I don't think an appeal to quality is enough. Developers are already satisfying lots of quality criteria—it's just that they're specific quality criteria that are important to some managers: SOMETHING, ON SCHEDULE, ON BUDGET.
16) When programmers are satisfying those quality criteria, it's not helpful to suggest that they "learn about quality", or worse, "learn to care about quality". They already care plenty about quality; but maybe they rate some dimensions of quality differently than you do.
May 21, 2021 • 14 tweets • 3 min read
1) When managers say "testing is everyone's responsibility", ask if they're supporting or mandating developers to perform experiential testing, read testing books, take testing classes, study critical thinking, read bug reports from the field, set up test environments...
2) Ask also how many developers are hurling themselves towards these activities. Some developers (interestingly, the very best ones, in my experience) will be quite okay with all this. Other developers won't be so enthusiastic, and that might be both explicable and okay.