Testing is, in part, about challenging claims and speaking truth to power. If more testers spoke up—politely, with sound arguments, *in public*—about false claims made by test tool marketers, we could reduce the chance that managers will fall for the hype. Your voice counts.
Mind you, too much effort is being wasted on the kinds of work that can be done by machines—procedural scripted checks. Designing and performing experiments is what testing is all about. We must be able to DO that and DESCRIBE it, and why it’s necessary, or we WILL be replaced.
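To make that concrete, here's a minimal sketch of the kind of procedural scripted check a machine can run unattended: observed output compared against a fixed expectation, with no human judgment in the loop. (The `discount_price` function is a hypothetical stand-in for whatever product code is under check.)

```python
def discount_price(price: float, percent: float) -> float:
    """Hypothetical product code: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def run_checks() -> list[str]:
    """A procedural scripted check: mechanically compare observed
    output to a predetermined expectation. No exploration, no
    experiment design -- just confirmation."""
    failures = []
    cases = [
        (100.00, 10, 90.00),
        (59.99, 25, 44.99),
        (20.00, 0, 20.00),
    ]
    for price, percent, expected in cases:
        observed = discount_price(price, percent)
        if observed != expected:
            failures.append(
                f"price={price}, percent={percent}: "
                f"expected {expected}, got {observed}"
            )
    return failures

if __name__ == "__main__":
    print("PASS" if not run_checks() else "FAIL")
```

Everything interesting here happened *before* the script existed: choosing the cases, deciding what "expected" means. Running it is the machine-friendly part.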
It’s necessary in some domains because as technology gets more complex and fragile, as businesses become more frantic, and as developers get more ambitious, risk multiplies. Testing must grow up, and testers must broaden their investigative skills to respond to that risk.
We're into a tricky area here, since "acceptance tests" and "unit tests" are not *tests* as such. (I recognize that this is controversial, but please hear me out. I DO acknowledge the value of them, too, but let's get clear on what I'm talking about here.)
First, "acceptance tests". Most of the time, when they've become artifacts, they represent *examples* of what the product should do. The process of *actually examining* the product to assert that it behaves in a way consistent with the example is a test. And yet...
...it's a pretty weak test; less like an experiment or an exploration; more like a demonstration to show that the product CAN work. Interestingly, though, the process of *developing* an acceptance test has more testing content in it, since thought experiments are going on.
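Here's a hedged sketch of what I mean by an acceptance test as an *example*: one happy-path demonstration that the product CAN work, not a probe of how it might fail. (`Account` and `transfer` are hypothetical stand-ins for product code.)

```python
class Account:
    """Hypothetical product code: a trivially simple account."""
    def __init__(self, balance: float):
        self.balance = balance

def transfer(src: Account, dst: Account, amount: float) -> None:
    """Hypothetical product code: move money between accounts."""
    if amount <= 0 or amount > src.balance:
        raise ValueError("invalid transfer amount")
    src.balance -= amount
    dst.balance += amount

def test_customer_can_move_money_between_accounts() -> None:
    # Given a customer with 100 in checking and 0 in savings...
    checking, savings = Account(100.0), Account(0.0)
    # When they transfer 40...
    transfer(checking, savings, 40.0)
    # Then the balances reflect the move. One example of desired
    # behaviour -- no exploration of limits, concurrency, rounding,
    # or what "invalid" should really mean.
    assert checking.balance == 60.0
    assert savings.balance == 40.0
```

Notice that the testing-rich part was deciding what the example should be and what could go wrong; the artifact itself just demonstrates one case.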