First, Safety-II (and therefore #devops) explicitly rejects the Cartesian view of complex systems.
There is a problem right there. Validation assumes some kind of goal-oriented behavior (telos) which computers, on their own, do not have.
But computers can't do that.
However. "Quality" is a moral value and that's a *human* thing.
But so far as we know…
This is important because it means "checking" cannot possibly be an activity that belongs only to the computer.
Checking is the validation of assumptions.
Validation is a moral function.
Morals are found only in humans. Nowhere else.
"The Ironies of Automation" covers this irony canonically, and Donna Haraway has made a career of deconstructing the Cartesian view of networked software systems.
It's not "validation" because that requires human (moral) values.
It's not "checking assumptions" because assumptions require telos and computers — being teleonomic — don't have that.
An automated test result means both "pass" and "fail" (and lots of other values too) until *you decide* what it means
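A toy sketch of that idea (every name and number here is invented for illustration): the machine can only evaluate the assertion it was handed, while a person decides what the green check actually means.

```python
# A raw observation from a hypothetical test run. Note the assertion
# passed -- but look at how long it took.
raw_result = {
    "test": "checkout_total",
    "observed": 19.99,
    "expected": 19.99,
    "runtime_ms": 45_000,  # 45 seconds for one assertion
}

def machine_verdict(result):
    # The machine only checks the assertion it was given.
    return "pass" if result["observed"] == result["expected"] else "fail"

def human_verdict(result):
    # A person may weigh things the assertion never encoded,
    # e.g. "45 seconds for this check is itself a problem."
    if machine_verdict(result) == "fail":
        return "fail"
    if result["runtime_ms"] > 10_000:
        return "investigate"  # green check, but something smells wrong
    return "pass"

print(machine_verdict(raw_result))  # pass
print(human_verdict(raw_result))    # investigate
```

Same data, two different meanings: the second one only exists because a human brought values to the reading.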
The proof I'm thinking of is called the Halting Problem, but there are others as well.
We *know with mathematical certainty* that any such claims *must* be snake oil.
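For the curious, the classic diagonalization argument behind that certainty fits in a few lines of Python. `halts` here is hypothetical, which is the whole point: any concrete implementation someone sells you is guaranteed to be wrong about at least one program.

```python
# Sketch of the Halting Problem's diagonalization argument.
# Suppose someone sells you halts(f): True iff calling f() terminates.

def make_troublemaker(halts):
    """Build a program that does the opposite of whatever halts predicts."""
    def troublemaker():
        if halts(troublemaker):
            while True:  # predicted to halt, so loop forever
                pass
        return           # predicted to loop, so halt immediately
    return troublemaker

# Plug in ANY candidate halts() and it must misjudge its own
# troublemaker. For example, a candidate that always answers True:
def optimistic_halts(f):
    return True

t = make_troublemaker(optimistic_halts)
# optimistic_halts(t) answers True ("t halts"),
# but t() would in fact loop forever. The oracle is provably wrong.
print(optimistic_halts(t))  # True
```

Swap in any other candidate oracle and the same trap springs: that's why "we'll automatically tell you whether your tests can hang" is snake oil, not a missing feature.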
So why. Don't. We. Ever. Talk. About. Any. Of this in the #testing community?
Not a rhetorical question.
A software product is a shared narrative about a problem and the people (and machines and critters too) who come together to solve that problem.
Not a rhetorical question either.
How about how to call bullshit on patently impossible technical claims that run afoul of basic computer science results like the Halting Problem?
Play-and-record testing doesn't scale. Because of the Halting Problem. Conversation over.
So why aren't we talking about things in this way?
Y'all really need to reflect on how you spend your time
Many early-and-mid-career testers have an arsenal of high-level product heuristics but haven't yet learned to view software systems through the lens of their mathematically demonstrable constraints.
In fact idk how valuable I'd consider the high level heuristics without the basic computer science constraints taken into account first. 🤔
Surely not. But they all fundamentally have confidence that certain things in any system are *just plain impossible.* Knowledge of these constraints guides their deeper explorations.
There's nothing obvious or simple about that.
Why do we talk about "data bugs" but not the CAP Theorem?
But this makes no sense. Read the sentence above. It doesn't make sense.
When a Facebook button doesn't render: is Facebook down? Is the embedded code broken? Or did that particular server just take a little too long to respond?
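That blank button is the CAP trade-off in miniature. A toy sketch (everything here is invented for illustration): two replicas, a partition between them, and the forced choice between refusing to answer and answering with stale data.

```python
# Two replicas of the same record, connected by an unreliable link.
partitioned = False
store_a = {"x": "v1"}
store_b = {"x": "v1"}

def write(key, value):
    store_a[key] = value
    if not partitioned:
        store_b[key] = value  # replication only works across the link

def read_consistent(key):
    # CP choice: during a partition, refuse rather than risk stale data.
    if partitioned:
        raise RuntimeError("unavailable: cannot confirm with peer")
    return store_b[key]

def read_available(key):
    # AP choice: always answer, even if the answer may be stale.
    return store_b[key]

partitioned = True     # the network splits
write("x", "v2")       # only replica A hears about this update

print(read_available("x"))  # 'v1' -- available, but stale
# read_consistent("x")      # raises -- consistent, but unavailable
```

A tester who knows the CAP theorem stops asking "why did we see old data?" as if it were a fixable bug and starts asking "which side of this trade-off did we choose, and on purpose?"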
Big O Notation is another one. Why isn't every tester taught this on their first day on the job?
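If it helps, the whole idea fits in a toy example (function names invented here): the same question answered two ways, with step counts that diverge as the input grows.

```python
def has_duplicate_quadratic(items):
    # O(n^2): compare every pair.
    steps = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            steps += 1
            if items[i] == items[j]:
                return True, steps
    return False, steps

def has_duplicate_linear(items):
    # O(n): one pass, remembering what we've seen in a set.
    steps = 0
    seen = set()
    for item in items:
        steps += 1
        if item in seen:
            return True, steps
        seen.add(item)
    return False, steps

data = list(range(1000))  # no duplicates: worst case for both
_, slow_steps = has_duplicate_quadratic(data)
_, fast_steps = has_duplicate_linear(data)
print(slow_steps, fast_steps)  # 499500 1000
```

Same answer, 500x the work at a mere thousand records. A tester who can read that growth curve knows *before* running anything which features will fall over first as the data scales.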