1. Your periodic reminder: you don't need explicit expected outputs to test. In a real test, a genuine experiment, what counts is what actually happens, independent of any expectation that you might or might not have, explicit or tacit.
2. Moreover, when you're testing, you have tons of tacit expectations, some about outputs and some about other things. Interesting and unanticipated things happen when you're testing. A key job of the tester is to notice them and evaluate them, and to ask "are they *problems*?"
3. The common fixation on "expected result" and "actual result" leads to really lame testing that is practically guaranteed to miss a lot of problems that matter to people. Plus there's plenty of ambiguity about what "expected" means, and sometimes those meanings don't agree.
Testing is, in part, about challenging claims and speaking truth to power. If more testers spoke up—politely, with sound arguments, *in public*—about false claims made by test tool marketers, we could reduce the chance that managers will fall for the hype. Your voice counts.
Mind you, too much effort is being wasted on the kinds of work that can be done by machines—procedural scripted checks. Designing and performing experiments is what testing is all about. We must be able to DO that and DESCRIBE it, and why it’s necessary, or we WILL be replaced.
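To make the distinction concrete, here is a minimal sketch (the function, names, and values are all hypothetical, not from the thread) of the kind of procedural scripted check a machine can perform: it compares one actual output to one explicit expected output, and is blind to anything surprising that happens outside that comparison.

```python
def discount(price: float, percent: float) -> float:
    """Hypothetical function under check."""
    return round(price * (1 - percent / 100), 2)

def scripted_check(actual, expected) -> bool:
    # The whole "test" collapses to one comparison. It cannot notice
    # slow responses, odd log output, or any other unanticipated
    # behavior -- only whether actual matched expected.
    return actual == expected

result = scripted_check(discount(100.0, 25), 75.0)
print(result)
```

A human tester running the same scenario might notice dozens of other things worth evaluating; the check, by design, reports only a single bit.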
It’s necessary in some domains because as technology gets more complex and fragile, as businesses become more frantic, and as developers get more ambitious, risk multiplies. Testing must grow up, and testers must broaden their investigative skills to respond to that risk.
We're into a tricky area here, since "acceptance tests" and "unit tests" are not *tests* as such. (I recognize that this is controversial, but please hear me out. I DO acknowledge the value of them, too, but let's get clear on what I'm talking about here.)
First, "acceptance tests". Most of the time, when they've become artifacts, they represent *examples* of what the product should do. The process of *actually examining* the product to assert that it behaves in a way consistent with the example is a test. And yet...
...it's a pretty weak test; less like an experiment or an exploration; more like a demonstration to show that the product CAN work. Interestingly, though, the process of *developing* an acceptance test has more testing content in it, since thought experiments are going on.
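A hedged sketch of the point above (the product behavior and names are hypothetical, invented for illustration): an acceptance test, once it has become an artifact, encodes one *example* of desired behavior. Running it demonstrates that the product CAN work for that example; it does not investigate how the product might fail.

```python
def add_to_cart(cart: list, item: str) -> list:
    """Hypothetical product behavior: adding an item to a cart."""
    return cart + [item]

def acceptance_example() -> None:
    # One encoded example: "Given an empty cart, when I add 'book',
    # then the cart holds exactly 'book'." Nothing more is examined.
    cart = add_to_cart([], "book")
    assert cart == ["book"]

acceptance_example()  # silence here demonstrates only this one example
```

The richer testing happened earlier, while someone *thought up* this example and wondered what else could go wrong; the artifact preserves the example, not the thinking.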