Thread by Detritivore Biome (@noahsussman), 36 tweets.
The place where "testing vs. checking" starts to really leak (as all metaphors do but still…) is the Cartesian division of "things a human does" and "things a computer does."

First, Safety-II, and therefore #devops, explicitly rejects the Cartesian view of complex systems.
For instance, the idea that there exists a computer activity called "checking" whose job is to validate assumptions.

There is a problem right there. Validation assumes some kind of goal-oriented behavior (telos) which computers, on their own, do not have.
As covered pragmatically in the classic paper "The Ironies Of Automation" and further explored by Donna Haraway: computers on their own are not capable of "validation" because validation implies an understanding of some set of moral *values.*

But computers can't do that.
Basic computer science teaches that there is one-and-only-one computer activity and that is flipping bits. Propositional Logic, Turing Machines, Cellular and Finite Automata all illustrate the *power* of flipping bits. But at the end of the day computers are bit-flipping machines.
Cellular Automata in particular are good for illustrating how "just flipping bits" can result in systems that are able to remember past events, respond to novel events, self-reproduce and carry out complex behavior.
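To make "just flipping bits" concrete, here's a minimal Python sketch of Rule 110, an elementary cellular automaton that is known to be Turing-complete. Every step is nothing but bit flips determined by a three-bit neighborhood, and complex structure still emerges:

```python
# Rule 110: an elementary cellular automaton, famously Turing-complete.
# Each cell's next state is a bit of the number 110, indexed by the
# three-bit neighborhood (left, self, right).
RULE = 110

def step(cells):
    """Flip every bit according to its three-bit neighborhood."""
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live bit and watch structure emerge.
row = [0] * 63 + [1]
for _ in range(30):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```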

However. "Quality" is a moral value and that's a *human* thing.
Many, arguably all, animals have thoughts similar to ours. And many non-living systems carry out complex behavior (e.g. the fluid dynamics sustaining Niagara Falls, or the balance of gravitational forces that keeps the planets from flying away from the sun).

But so far as we know…
So far as we know, only human beings have *morals.*

This is important as it means "checking" cannot possibly be an activity that belongs to the computer alone.

Checking is the validation of assumptions.

Validation is a moral function.

Morals are found only in humans. Nowhere else.
There is no validation of assumptions without a human interlocutor in the loop around the AUT (application under test) and the automated test suite.

"The Ironies Of Automation" covers this… irony, canonically and Donna Haraway has made a career of deconstructing the Cartesian view of networked software systems
Without a human somewhere in the feedback loop, computer activity is just flipping bits.

It's not "validation" because that requires human (moral) values.

It's not "checking assumptions" because assumptions require telos and computers — being teleonomic — don't have that.
The results of an automated test literally are without any meaning whatsoever, until a human enters the loop and starts making moral judgements about the results.

Automated test results mean both "pass" and "fail" (and lots of other values too) until *you decide* what they mean.
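A toy illustration of that (the URL and the 200-means-pass rule are stand-ins, not anyone's real check): the automation below reduces an entire page load to one bit, and every question about what that bit *means* is still waiting for a person.

```python
import urllib.error
import urllib.request

def check_homepage(url="https://example.com"):
    """An automated 'check': reduces a whole page load to a single bit."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False  # a timeout and an outage collapse into the same bit

# The bit by itself says nothing about quality. A human still decides:
# does 200-with-a-blank-page count as "pass"? Is a timeout a "fail", a
# "retry", or an interesting lead? Was this run even representative?
print("bit:", check_homepage())
```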
All of the above aside, there is a relatively simple proof that it is impossible to construct a computer program that can test whether another computer program is functioning correctly.

The proof I'm thinking of is the undecidability of the Halting Problem, but there are others as well.
And I guess that's the elephant in the room: there are mathematical proofs that software cannot generally test itself.

We *know with mathematical certainty* that any such claims *must* be snake oil.
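For anyone who wants the shape of that proof, here is a minimal Python sketch of the standard diagonalization argument (the names halts and troublemaker are purely illustrative): assume a perfect oracle exists, build a program that contradicts it, and conclude that no such oracle can be written.

```python
def halts(func, arg):
    """Hypothetical oracle: returns True iff func(arg) eventually halts.
    The proof works by assuming this exists and deriving a contradiction."""
    raise NotImplementedError("no such oracle can exist -- that's the point")

def troublemaker(func):
    """Do the opposite of whatever the oracle predicts about func(func)."""
    if halts(func, func):
        while True:       # oracle said "halts", so loop forever
            pass
    return "halted"       # oracle said "loops forever", so halt immediately

# Does troublemaker(troublemaker) halt? If halts() answers yes, it loops
# forever; if halts() answers no, it halts. Either way the oracle is wrong,
# so no general program that decides what other programs will do can exist.
```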

So why. Don't. We. Ever. Talk. About. Any. Of this in the #testing community?
Why the weak argument "#testing isn't checking," and why not the mathematically rigorous argument that fully general software-that-tests-other-software is a fundamental impossibility vis-à-vis the Halting Problem?

Not a rhetorical question.
This is distinct from what a software product is.

A software product is a shared narrative about a problem and the people (and machines and critters too) who come together to solve that problem.
Why the weak argument of "well, there's some stuff the framework can't do, so you'll still need us" and why not the robust argument: "the framework cannot possibly deliver on the promises you have made; please explain yourself."

?

Not a rhetorical question either.
I guess this is an unexplored branch of "what can #testing learn from dev?"

How about how to call bullshit on patently impossible technical claims that violate basic computer science results like the Halting Problem?
For instance there is SO MUCH discourse about play-and-record testing and can-it-or-can-it-not replace some-other-form-of-testing.

Play-and-record testing doesn't scale. Because of the Halting Problem. Conversation over.

So why aren't we talking about things in this way?
I'm gonna go way out on a limb and assume there are a few people reading this thread who have thought deeply about "checking vs. testing" but have never. Heard. Of the Halting Problem nor of the other undecidable problems.

Y'all really need to reflect on how you spend your time
This is professional advice not snark. (Disclaimer: still snark.) 😆

Many early-and-mid-career testers have an arsenal of high-level product heuristics but haven't yet learned to view software systems through the lens of their mathematically demonstrable constraints.

Why???
Trust me, one "to do that you'd have to solve the Halting Problem" is worth a hundred high-level customer-centric heuristics.

In fact idk how valuable I'd consider the high level heuristics without the basic computer science constraints taken into account first. 🤔
"Learn to code" is a good message for testers but I think what I've done in my career that makes me allegedly good at #testing is that I've learned the *limitations* of code.
The great "non-coding" testers I've known — and I've known many, this is one of the nice things about working in the #NYC area your whole career — the great "non-technical" testers all understand deeply what software systems just can't do.
Does every great "manual" tester articulate perfectly the principles behind the Halting Problem?

Surely not. But they all fundamentally have confidence that certain things in any system are *just plain impossible.* Knowledge of these constraints guides their deeper explorations.
For instance: "Database migrations are inherently error-prone. It doesn't matter what the team's process is nor does it matter whether the technology involved is robust or not. If you observe enough database migrations over time you will find an error."
Any seasoned "non-technical" tester in my experience would agree with the above statement, and would go about their business with absolute confidence that database migrations *in general* are likely to yield interesting bugs.

There's nothing obvious or simple about that.
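For what it's worth, that heuristic turns into a concrete check pretty easily. A sketch (the SQLite databases, table names, and the id column here are hypothetical stand-ins): compare cheap invariants such as row counts and a crude checksum before and after a migration, and treat any drift as a lead for exploration.

```python
import sqlite3

def fingerprint(conn, table):
    """Cheap invariants worth comparing across a migration."""
    rows, = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
    id_sum, = conn.execute(f"SELECT COALESCE(SUM(id), 0) FROM {table}").fetchone()
    return {"rows": rows, "id_sum": id_sum}

def migration_drift(src_db, dst_db, tables):
    """Return the tables whose fingerprints changed between source and target."""
    with sqlite3.connect(src_db) as src, sqlite3.connect(dst_db) as dst:
        return {
            t: (fingerprint(src, t), fingerprint(dst, t))
            for t in tables
            if fingerprint(src, t) != fingerprint(dst, t)
        }

# Hypothetical usage:
# drift = migration_drift("before.db", "after.db", ["users", "orders"])
# Any non-empty result is not a verdict, just a place for a human to dig.
```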
However there's a difference between knowledge that is genuinely obscure and knowledge that for whatever reason just doesn't seem to be taught in the #testing community, at least as I've come to know it.

Why?
Why isn't the inevitability of data migration bugs commonly discussed in the #testing community? To me it is a heuristic at least as useful as any of the popular product-and-customer-centric heuristics.

Why do we talk about "data bugs" but not the CAP Theorem?
The CAP Theorem offers an explanation of why data migrations are dangerous. The C in CAP stands for data Consistency, and it turns out Consistency is constrained in ways that can be expressed with a lot of mathematical rigor. Currently CAP is the best framework we have for reasoning about those constraints.
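A toy sketch of that constraint (two in-memory dicts standing in for replicas, nothing like a real distributed system): during a simulated partition the store has to either refuse the write and stay consistent, or accept it on one side and stop being consistent. There is no third option, and choosing between them is, again, a human judgment.

```python
class TinyReplicatedStore:
    """Two replicas of one key-value store, with a simulated network partition."""

    def __init__(self):
        self.a, self.b = {}, {}
        self.partitioned = False

    def write(self, key, value, prefer="consistency"):
        if self.partitioned and prefer == "consistency":
            # Choose C over A: stay consistent by refusing to answer.
            raise RuntimeError("partition: refusing the write")
        self.a[key] = value
        if not self.partitioned:
            self.b[key] = value  # replication succeeds, replicas agree
        # else: choose A over C -- the write lands on one side only

    def read(self, key):
        return self.a.get(key), self.b.get(key)

store = TinyReplicatedStore()
store.write("plan", "v1")
store.partitioned = True
store.write("plan", "v2", prefer="availability")
print(store.read("plan"))  # ('v2', 'v1') -- available, but no longer consistent
```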
The weaponization of "testing vs. checking" has I think led to a situation where it's uncommon to hear discussion of computer science topics in #testing circles.

But this makes no sense. Read that sentence again. It doesn't make sense.
So maybe we can have a bit of a computer-scientific renaissance in #testing, where we rebalance our priorities: keep the current focus on customer and product requirements, but ALSO add an equal focus on the constraints and general characteristics of the software medium.
A lot of people are asking where they can read more about the halting problem. This is the best resource I’ve found so far: web.archive.org/web/2014030616… #testing
Also here's a "popular science" explanation of the Halting Problem in Wired. #testing
A Python proof of the Halting Problem (scroll down to the second section of the post) jeremykun.com/2015/06/08/met… #testing
A practical example of the halting problem comes up when #testing Web pages that contain social media embed code.

When a Facebook button doesn't render: is Facebook down? Is the embed code broken? Or did that particular server just take a little too long to respond?
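In practice that ambiguity gets settled by a human-chosen timeout, not by computation. A sketch (the selector and the page.has_element helper are made up for illustration): the deadline below is a judgment call, because no program can decide whether the embed would have rendered one second after we gave up.

```python
import time

def wait_for(predicate, timeout=15.0, interval=0.5):
    """Poll until predicate() is truthy or a human-chosen deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False  # "not yet" -- which might mean broken, slow, or down

# Hypothetical usage inside a page test:
# rendered = wait_for(lambda: page.has_element("[data-testid='fb-button']"))
# The 15-second cutoff is a decision, not a deduction. That's the Halting
# Problem in street clothes, so a person picks the timeout and a person
# interprets the result.
```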
Here's a long Stack Overflow thread about the Halting Problem. stackoverflow.com/a/1111341/55478 #testing
The undecidable problems and particularly the Halting Problem are just one facet of computer science that should be basic to #testing yet isn't.

Big O Notation is another one. Why isn't every tester taught this on their first day on the job?
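Here's the kind of first-day illustration I mean (sizes picked arbitrarily): the same "find the duplicates" requirement written at O(n^2) and at O(n), timed side by side.

```python
import time

def duplicates_quadratic(items):
    """O(n^2): compare every item against every later item."""
    return {x for i, x in enumerate(items) for y in items[i + 1:] if x == y}

def duplicates_linear(items):
    """O(n): remember what has been seen already."""
    seen, dupes = set(), set()
    for x in items:
        (dupes if x in seen else seen).add(x)
    return dupes

data = list(range(3000)) * 2  # 6,000 items, every value duplicated
for fn in (duplicates_quadratic, duplicates_linear):
    start = time.perf_counter()
    fn(data)
    print(fn.__name__, f"{time.perf_counter() - start:.3f}s")
```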