I've neglected finishing the code for it, but I've been playing with an alternative take on the Diamond Plot: the Rotatogram.
Instead of fixed axes / moving regression line, the (orthogonal) regression line is fixed vertically or horizontally, and the axes rotate around it.
Some followups from the responses:
@cdsamii pointed out that this is apparently the literal cover example of David Freedman's "Statistical Models" text. Maybe one day, I'll have a truly original idea. ¯\_(ツ)_/¯
As many have noted, principal components analysis (PCA) is also based on minimizing squared orthogonal distances. In this simple example, PCA, orthogonal least squares, total least squares, and Deming regression are functionally equivalent and get you that symmetrical line.
Most importantly, the main point of this exercise is to understand what question you are asking of the data vs. what question the method you employ is asking of the data.
This doesn't mean PCA or orthogonal regression is "better," it's just asking a different question than OLS.
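For concreteness, here's a minimal sketch (simulated data, all parameters made up) of the difference in the questions being asked: the OLS slope of y on x, which minimizes vertical squared distances, versus the orthogonal / total-least-squares slope taken from the first principal component.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 0.5 * x + rng.normal(scale=1.0, size=500)

# OLS of y on x minimizes squared *vertical* distances
ols_slope = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Orthogonal / total least squares minimizes squared *perpendicular*
# distances; the fitted line runs along the first principal component.
X = np.column_stack([x - x.mean(), y - y.mean()])
_, _, vt = np.linalg.svd(X, full_matrices=False)
tls_slope = vt[0, 1] / vt[0, 0]

print(ols_slope, tls_slope)
# Unlike OLS, the orthogonal fit is symmetric: swap x and y and you get
# exactly the reciprocal slope.
```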
Folks often say that DAGs make our causal inference assumptions explicit. But that's only kinda true
The biggest assumptions in a DAG aren't actually IN the DAG; they're in what we assume ISN'T in the DAG. It's all the stuff that's hidden in the white space.
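One way to see it is a toy sketch like the one below (made-up variable names): write down the arrows, then enumerate the node pairs you left unconnected. Each missing arrow is an implicit "no direct effect" assumption, and any confounder you never drew doesn't even show up in the list.

```python
from itertools import combinations

# Hypothetical three-node example; the variable names are made up.
nodes = ["smoking", "asbestos", "lung_cancer"]
edges = {("smoking", "lung_cancer"), ("asbestos", "lung_cancer")}

# Every node pair with no arrow in either direction is an implicit
# "no direct causal effect" assumption living in the white space,
# and any unmeasured confounder you never drew isn't listed at all.
implicit_no_effect = [
    (a, b) for a, b in combinations(nodes, 2)
    if (a, b) not in edges and (b, a) not in edges
]
print(implicit_no_effect)  # [('smoking', 'asbestos')]
```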
Time to make it official: short of some unbelievably unlikely circumstances, my academic career is over.
I have officially quit/failed/torpedoed/given up hope on/been failed by the academic system and a career within it.
To be honest, I am angry about it, and have been for years. Enough so that I took a moonshot a few years ago to do something different that might change things or fail trying, publicly.
I could afford to fail since I have unusually awesome outside options.
And here we are.
Who knows what combination of things did me in: incredibly unlucky timing, not fitting in boxes, less "productivity," lack of talent, etc.
In the end, I was rejected from 100% of my TT job and major grant applications.
Always had support from people, but not institutions.
Ever wondered what words are commonly used to link exposures and outcomes in health/med/epi studies? How strongly language implies causality? How strongly studies hint at causality in other ways?
READ ON!
Health/med/epi studies commonly avoid using "causal" language for non-RCTs to link exposures and outcomes, under the assumption that "non-causal" language is more "careful."
But this gets murky, particularly if we want to inform causal q's but use "non-causal" language.
To find answers, we did a kinda bonkers thing:
GIANT MEGA INTERDISCIPLINARY COLLABORATION LANGUAGE REVIEW
As if that wasn't enough, we also tried to push the boundaries on open science, in hyper transparency and public engagement mode.
Granted, we only see the ones that get caught, so "better" frauds are harder to see.
But I think people don't appreciate just how hard it is to make simulated data that don't have an obvious tell, usually because something is "too clean" (e.g. the uniform distribution here).
At some point, it's just easier to actually collect the data for real.
BUT.
The ones that I think are going to be particularly hard to catch are the ones that are *mostly* real but fudged a little haphazardly.
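As a toy illustration (fabricated numbers, not taken from any real case), here's the kind of "too clean" tell that a uniform-sampled fake leaves behind compared to a messier, skewed distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Fabricated-for-illustration columns: one "faked" by sampling a uniform,
# one drawn from a messier, right-skewed distribution.
faked = rng.uniform(0, 50_000, size=5_000)
plausible = rng.gamma(shape=2.0, scale=8_000, size=5_000)

# Crude tell: test each column against a uniform over its observed range.
for name, vals in [("faked", faked), ("plausible", plausible)]:
    lo, hi = vals.min(), vals.max()
    stat, p = stats.kstest(vals, "uniform", args=(lo, hi - lo))
    print(name, round(stat, 3), round(p, 4))
# The faked column is "too clean": it never deviates from uniform,
# while the messier column is flagged immediately.
```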
Perpetual reminder: cases going up when there are NPIs (e.g. stay at home orders) in place generally does not tell us much about the impact of the NPIs.
Lots of folks out there making claims based on reading tea leaves from this kind of data and shallow analysis; be careful.
What we want to know is what would have happened if the NPIs were not there. That's EXTREMELY tricky.
How tricky? Well, we would usually expect cases/hospitalizations/deaths to have an upward trajectory *even when the NPIs are extremely effective at preventing those outcomes.*
The interplay of timing, infectious disease dynamics, social changes, data, etc. makes it really, really difficult to isolate what the NPIs are doing alongside the myriad of other stuff that is happening.
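A toy SIR sketch (entirely hypothetical parameters) makes the point: cut transmission substantially and daily new infections can still climb for weeks, just more slowly than the counterfactual without the NPI.

```python
import numpy as np

def sir_daily_new(beta, gamma=0.1, days=60, i0=1e-4):
    """Discrete-time SIR; returns daily new infections as a fraction of the population."""
    s, i = 1.0 - i0, i0
    out = []
    for _ in range(days):
        new = beta * s * i
        s, i = s - new, i + new - gamma * i
        out.append(new)
    return np.array(out)

no_npi = sir_daily_new(beta=0.40)    # counterfactual: no intervention
with_npi = sir_daily_new(beta=0.25)  # NPI cuts transmission substantially

# Cases still rise under the NPI (effective R > 1), just more slowly, so
# "cases went up after the NPI" tells you little; the relevant comparison
# is the counterfactual trajectory, which rises much faster.
print(np.all(np.diff(with_npi[:30]) > 0))    # True: still an upward trajectory
print((no_npi[29] / with_npi[29]).round(1))  # how much worse the counterfactual is by day 30
```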
The resistance to teaching regression discontinuity as a standard method in epi continues to be baffling.
I can't think of a field for which RDD is a more obviously good fit than epi/medicine.
It's honestly a MUCH better fit for epi and medicine than econ, since healthcare and medicine are just absolutely crawling with arbitrary threshold-based decision metrics.
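For the curious, here's a minimal sharp-RDD sketch on simulated data (the threshold and effect size are made up): treatment kicks in when a biomarker crosses a cutoff, and a local linear fit on each side of the cutoff estimates the jump.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, cutoff = 2_000, 140.0                   # hypothetical clinical threshold
running = rng.normal(cutoff, 15, size=n)   # running variable (e.g. a biomarker)
treated = (running >= cutoff).astype(float)
outcome = 0.03 * (running - cutoff) - 1.5 * treated + rng.normal(0, 1, size=n)

# Local linear fit within a bandwidth of the cutoff, separate slopes per side;
# the coefficient on `treated` estimates the jump in outcome at the threshold.
bw = 10.0
mask = np.abs(running - cutoff) <= bw
c = running[mask] - cutoff
X = sm.add_constant(np.column_stack([treated[mask], c, treated[mask] * c]))
fit = sm.OLS(outcome[mask], X).fit()
print(fit.params[1])  # should land near the simulated -1.5 effect
```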
(psssssst to epi departments: if you want this capability natively for your students and postdocs - and you absolutely do - you should probably hire people with cross-disciplinary training to support it)