My experience over the past 30 years has been analysts violating assumptions all over the place (I'm sure I've done it too), yielding poor results that pose as "data-driven outcomes" ... or nobody testing anything at all.
He used A/B tests within customer segments to measure print campaigns. So far, so good.
One problem.
For whatever reason, he just picked small quantities ... 1,500 in a cell instead of the 200,000 required.
Then he made decisions that were very bad for business.
About 2 weeks into my new role as a VP in his area, I called out his error.
He fought the error for about 90 seconds ... and then he just got quiet.
He was applying a data-driven approach to an extreme with horrific test/control sample size design, and he cost my company a fortune.
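To see why 1,500 per cell is nowhere near enough, here's a rough power calculation for a two-proportion test. The baseline response rate and the lift worth detecting are hypothetical numbers I'm plugging in for illustration ... the point is the order of magnitude, not the exact figure.

```python
# Rough per-cell sample size for a two-proportion test (illustrative numbers only).
# Assumes a 1.5% baseline response rate; the lifts below are hypothetical,
# not the actual campaign figures.
from scipy.stats import norm

def cell_size(p_control, p_test, alpha=0.05, power=0.80):
    """Approximate n per cell needed to detect p_test vs. p_control."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    variance = p_control * (1 - p_control) + p_test * (1 - p_test)
    return z**2 * variance / (p_test - p_control) ** 2

print(round(cell_size(0.015, 0.016)))   # ~240,000 per cell to detect a 0.1-point lift
print(round(cell_size(0.015, 0.0165)))  # ~110,000 per cell to detect a 0.15-point lift
```

At typical print response rates, cells in the hundreds of thousands are what the math demands ... 1,500 doesn't even get you in the neighborhood.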
Now imagine all of the foibles created by analysts who have zero statistical training but have software that allows 'em to do just about anything.
And yet, analysts are making soooo many mistakes applying statistical methodologies, costing their companies a fortune.
Try hard to understand what your analytical blind spots are. We all have 'em.
Questions?
KH