@PausalZ@fediscience.org
Professional epidemiologist / causal inference researcher / python programmer, amateur mycologist #Python #epitwitter https://t.co/cuewGX6vWD

Sep 15, 2020, 9 tweets

6: NONPARAM TESTS
Section 6 goes through the sharp null hypothesis (that exposure has no effect on any individual). Note that this is stronger than the null of no _average_ effect in the population

Another way of thinking about this is if there is no individual causal effect (ICE) then there must be no average causal effect (ACE). The reverse (no ACE then no ICE) is not guaranteed
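To make that asymmetry concrete, here's a minimal sketch with hypothetical potential outcomes (toy numbers, not from the paper): every individual has a nonzero causal effect, yet the effects cancel so the average effect is exactly zero.

```python
import numpy as np

# Hypothetical potential outcomes for 4 individuals (illustrative only)
y0 = np.array([1.0, 2.0, 3.0, 4.0])  # outcome if unexposed
y1 = np.array([2.0, 1.0, 4.0, 3.0])  # outcome if exposed

ice = y1 - y0     # individual causal effects: [1, -1, 1, -1]
ace = ice.mean()  # average causal effect: 0

# Every ICE is nonzero (sharp null is false), but the ACE is 0
# (the average null holds) -- so no ACE does not imply no ICE.
```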

Robins provides us with the G-null hypothesis as a means of assessing the sharp null (the G-null is that all causal parameters are 0)

We are given a more complicated procedure for evaluation and a simpler algorithm (the simpler algorithm came with PASCAL code, which I'm curious whether anyone still has). Languages that disappear do make me worry a bit about my own work though 🥴
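The classic nonparametric way to test a sharp null is a randomization (permutation) test: under the sharp null the outcomes are fixed regardless of exposure, so we can re-assign exposure and recompute the test statistic. A minimal sketch with made-up data (this is the generic Fisher-style test, not Robins' PASCAL algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binary exposure A and outcome Y (toy data)
a = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y = np.array([3.1, 2.9, 3.5, 3.3, 2.0, 2.2, 1.9, 2.1])

# Observed difference in means between exposed and unexposed
observed = y[a == 1].mean() - y[a == 0].mean()

# Under the sharp null, Y does not change when we shuffle A,
# so the permutation distribution of the statistic is valid
perm_stats = []
for _ in range(10_000):
    a_perm = rng.permutation(a)
    perm_stats.append(y[a_perm == 1].mean() - y[a_perm == 0].mean())

# Two-sided p-value: fraction of shuffles at least as extreme as observed
p_value = np.mean(np.abs(perm_stats) >= abs(observed))
```

With this toy data the observed difference is about as extreme as any reassignment can be, so the p-value is small.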

Then we are given some warnings about sparse data. Sparsity can occur through the exposure levels (A={0, 1, 2, ... a}) or in follow-up time (t={0, 1, 2, ...T})
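A quick way to see why sparsity bites: tabulate the exposure-by-time cells for a small hypothetical cohort. Cells with zero or near-zero counts are exactly where nonparametric estimates fall apart (all names and numbers below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 50
# Exposure level A in {0,1,2,3}, with high levels rare
a = rng.choice([0, 1, 2, 3], size=n, p=[0.70, 0.20, 0.08, 0.02])
# Follow-up time t in {0,...,4}
t = rng.integers(0, 5, size=n)

# Count observations in each (A, t) cell
counts = np.zeros((4, 5), dtype=int)
for ai, ti in zip(a, t):
    counts[ai, ti] += 1

# Rows for the rare exposure levels tend to have empty cells --
# the sparse-data problem the thread is warning about
```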

However, not all G-null tests are made the same. When models are introduced for the nuisance functions, we can run into problems; specifically, we can fail to reject at the nominal rate

We are given a list of potential solutions to address the power issue

We are now given the problem of defining time zero (particularly for the G-null). The more epidemiology I have learned, the more I realize how difficult (and important) defining time zero can be for observational studies

Section 6 concludes with the applied example for the G-null hypothesis. I think Robins' points about assumptions being slightly wrong and that large sample sizes will reject with near certainty are important
