I don’t claim to be authoritative; the source I’m working from is Mayo’s own article on the subject:
phil.vt.edu/dmayo/personal…
The notion of severity builds on a rule of inference in classical logic called modus tollens:
major premise: A implies B
minor premise: B is false
conclusion: A is false
If there is smoke the smoke detector beeps; the smoke detector isn’t beeping; therefore there’s no smoke.
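Since modus tollens is a purely formal rule, its validity can be checked mechanically. A quick truth-table sweep in Python (illustrative only, not from Mayo's article) confirms that whenever both premises hold, the conclusion holds:

```python
# Exhaustively check modus tollens over all truth assignments:
# from (A implies B) and (not B), conclude (not A).
from itertools import product

valid = all(
    (not a)                          # conclusion: A is false
    for a, b in product([False, True], repeat=2)
    if ((not a) or b) and (not b)    # premises: A implies B, and B is false
)
print(valid)  # True
```

The only assignment satisfying both premises is A false, B false, and there the conclusion holds, so the rule is valid.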
The severity idea weakens the major premise from a certainty to a high probability:
premise: with high probability the smoke detector beeps when there’s smoke
data: no beep
inference: no smoke
(well, one reason why)
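The probabilistic version no longer delivers a deductive guarantee, but the observation still discriminates strongly between the hypotheses. A minimal numeric sketch (the detection probabilities here are made-up illustration values, not from Mayo) compares how well each hypothesis accounts for "no beep":

```python
# Assumed, illustrative reliability figures for the detector.
p_beep_given_smoke = 0.99     # detector almost always beeps when there's smoke
p_beep_given_no_smoke = 0.05  # occasional false alarms

# We observe: no beep. Probability of that observation under each hypothesis:
lik_smoke = 1 - p_beep_given_smoke        # 0.01
lik_no_smoke = 1 - p_beep_given_no_smoke  # 0.95

# "No beep" is about 95x better explained by "no smoke" than by "smoke".
ratio = lik_no_smoke / lik_smoke
print(round(ratio))  # 95
```

The inference to "no smoke" is defeasible rather than deductive, but the silent detector counts heavily against the smoke hypothesis.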
Frequentist methods come in for a fair bit of criticism because there’s a conceptual gap between the way they’re justified (good long-run operating characteristics) and what we use them for (inferences in the specific case at hand).
learnbayes.org/papers/confide…
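That gap is easy to see in simulation. The sketch below (arbitrary parameter choices, for illustration) shows the long-run sense in which a 95% confidence procedure earns its name: across many repetitions it covers the true mean about 95% of the time, which is a property of the procedure, not a probability statement about any one realized interval:

```python
# Simulate repeated experiments and count how often the standard
# 95% z-interval for a normal mean covers the true mean.
import random

random.seed(0)

true_mean, sigma, n, reps = 10.0, 2.0, 25, 10_000
z = 1.96  # approximate 97.5th percentile of the standard normal
covered = 0
for _ in range(reps):
    sample = [random.gauss(true_mean, sigma) for _ in range(n)]
    xbar = sum(sample) / n
    half_width = z * sigma / n ** 0.5
    if xbar - half_width <= true_mean <= xbar + half_width:
        covered += 1

print(covered / reps)  # close to 0.95
```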
In the meantime here's an entr'acte: