Tweetorial on going from regression to estimating causal effects with machine learning.
I get a lot of questions from students regarding how to think about this *conceptually*, so this is a beginner-friendly #causaltwitter high-level overview with additional references.
One thing to keep in mind is that a traditional parametric regression is estimating a conditional mean E(Y|T,X).
The bias-variance tradeoff is for that conditional mean, not the coefficients in front of T and X.
The next step to think about conceptually is that this conditional mean E(Y|T,X) can be estimated with other tools. Yes, standard parametric regression, but also machine learning tools like random forests.
It’s OK if this is a big conceptual leap for you! It is for many people!
But now you’re also worried. Where did the coefficients go?
I care about a treatment effect, and if I estimate E(Y|T,X) with some machine learning tool, the coefficients aren’t there.
We can think about defining our parameters more flexibly outside the context of a parametric model!
We can write the average treatment effect as the contrast: E_X[E(Y|T=1,X)-E(Y|T=0,X)].
Now we can move to thinking about how to operationalize estimating that treatment effect with machine learning. Here is how we write down the plug-in estimator: (1/n) Σ_i [Ê(Y|T=1,X_i) - Ê(Y|T=0,X_i)].
You can see the estimated conditional means, except we need estimates under the settings where treatment is equal to 1 and to 0.
This involves:
(1) Estimating E(Y|T,X) with our machine learning tool.
(2) Setting all observations to T=1 and using the fitted algorithm (now held fixed) to obtain predicted values for each observation.
(3) Repeating (2) for T=0.
Now we can plug these values into the estimator!
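Here is a minimal sketch of those three steps in R. The simulated data and the choice of randomForest are just illustrative assumptions; any regression learner works, and I use "Tr" for treatment since "T" is R's alias for TRUE.

library(randomForest)
set.seed(1)

# simulated toy data: outcome Y, binary treatment Tr, covariate X
n <- 500
X <- rnorm(n)
Tr <- rbinom(n, 1, plogis(0.5 * X))
Y <- 1 + 2 * Tr + X + rnorm(n)
dat <- data.frame(Y, Tr, X)

# (1) estimate E(Y|T,X) with a machine learning tool
fit <- randomForest(Y ~ Tr + X, data = dat)

# (2) set all observations to Tr = 1 and get predicted values
pred1 <- predict(fit, newdata = transform(dat, Tr = 1))

# (3) repeat with all observations set to Tr = 0
pred0 <- predict(fit, newdata = transform(dat, Tr = 0))

# plug into the estimator: average the difference in predictions
mean(pred1 - pred0)  # estimate of the average treatment effect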
What I described is a machine learning-based substitution estimator of the g-formula.
There are other ML-based estimators for effects, including methods that use the propensity score or both the outcome regression and propensity score.
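For intuition, here is a sketch of one common double robust (AIPW-style) estimator in the same notation, with Q(t,X) our estimate of E(Y|T=t,X) and g(X) our estimate of the propensity score P(T=1|X):

(1/n) Σ_i [ Q(1,X_i) - Q(0,X_i) + T_i(Y_i - Q(1,X_i))/g(X_i) - (1-T_i)(Y_i - Q(0,X_i))/(1-g(X_i)) ]

It remains consistent if either Q or g is estimated well, hence "double robust."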
I describe these steps from regression to machine learning for causal inference in more detail in my short courses (drsherrirose.org/short-courses), for example this workshop at UCSF: dropbox.com/s/wmgv51j21t3n… (starting slide 147).
There are many books on causal inference (I have co-authored two). Our targeted learning books on machine learning for causal inference can be downloaded free if you have institutional access, and two of the introductory chapters are free on my website: drsherrirose.org/s/TLBCh4Ch5.pdf.
This targeted learning tutorial is freely accessible: academic.oup.com/aje/article/18…. It has steps for double robust machine learning in causal inference and information on calculating standard errors, as well as why we want the bias-variance tradeoff for the effect, not the conditional mean.
Happy to answer questions or requests for further resources on machine learning for causal inference. ☺️
If you find this thread after my rotating curator week is over (October 30, 2020), I can be found at @sherrirose.
This varies *a lot* by type of role, seniority, and institution.
I’m tenured at a research-intensive institution and I am not teaching this term.
I spend a fair amount of time meeting with students and collaborators. Today, Monday, I have 5 such meetings.
There are also lots of emails and administrative tasks all the time.
Each day this week I’ll drop a tweet in this thread to add in unique things I haven’t mentioned yet to demystify the life of this particular professor.
🧵 time! I’d love to talk about the responsibilities we have as data practitioners. In this ~~information age~~ I think it’s critical we use data, ML, stats, and algorithms fairly, and with an eye toward making the world better for people.
Lots of people have asked me if studying biostats has actually been relevant in my career as a software engineer, and I’ve found the answer to be a resounding yes! It's super relevant in lots of engineering problems and in understanding the world generally. 🧵 follows!
When I worked on payment fraud prevention, I was always talking about diagnostic testing for rare diseases!
Diagnostic testing was something we studied at length in our early biostat & epi classes in grad school and it turns out “fraud” behaves similarly to a “rare disease” in a lot of ways.
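A quick back-of-the-envelope in R makes the analogy concrete (the numbers here are made up for illustration):

# positive predictive value when the condition (fraud) is rare
prev <- 0.001  # prevalence: 1 in 1,000 transactions is fraud
sens <- 0.99   # sensitivity: P(flagged | fraud)
spec <- 0.99   # specificity: P(not flagged | legitimate)

# Bayes' theorem: P(fraud | flagged)
ppv <- (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))
ppv  # ~0.09: even a very accurate test mostly flags legitimate transactions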
Gerrymandering gets its name from Elbridge Gerry, the Massachusetts governor who in 1812 signed off on a salamander-shaped voting district near Boston because it was politically expedient.
The practice persists to this day, from city council districts all the way up to (arguably) the Electoral College!
Math, statistics, and measurement have played a key role in several court cases related to the ongoing discussion and fight for fair and representative districts.
One more quick tweet, unrelated to the Gelman-Rubin diagnostic.
Someone asked, "I hear C++ is fast but a little hard to grasp. That true?"
Mostly yes. Like Python, R is generally easier to learn than C/C++ but often slower.
I recommend you think about how your code will be used when you decide what language to code in. If you're coding for yourself and you probably just need to run it once, then R may be a good choice. Optimizing for speed may be overkill. (2/)
If you are writing a function/package for public consumption, then speed is much more of a concern. You can profile your code to see which parts are time-consuming. You can also just google what things R is slow at (e.g., loops). (3/)
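A small sketch of what that looks like in R (loop_sum is a hypothetical example function):

x <- runif(1e6)

# an explicit loop, the kind of thing R is famously slow at
loop_sum <- function(v) {
  s <- 0
  for (i in seq_along(v)) s <- s + v[i]
  s
}

system.time(loop_sum(x))  # quick timing of one call
system.time(sum(x))       # the vectorized built-in is far faster

# for bigger code, profile to find the slow parts
Rprof("profile.out")
loop_sum(x)
Rprof(NULL)
summaryRprof("profile.out")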
Let's extend the linear model (LM) in the direction of the GLM first. If you loosen the normality assumption to instead allow Poisson, binomial, etc. (members of the "exponential family" of distributions), then you can model count, binary, and other response types. (4/)
You've probably heard of Poisson regression or logistic regression. These fall under the umbrella of GLM. (5/)
The LM regression equation is E(Y) = X Beta, where X is the model matrix, Beta is the vector of coefficients, Y is the response vector, and E(Y) is the expected val.
For Poisson regression, we have log(E(Y)) = X Beta.
For logistic regression, we have log(p/(1-p)) = X Beta, where p = E(Y) is the probability of success. (6/)
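In R these are one-liners with glm(); the data frame dat and its columns below are hypothetical toy data:

# toy data for illustration
dat <- data.frame(
  x = rnorm(100),
  count = rpois(100, lambda = 3),  # a count response
  success = rbinom(100, 1, 0.4)    # a binary response
)

fit_lm    <- lm(count ~ x, data = dat)                        # linear model: E(Y) = X Beta
fit_pois  <- glm(count ~ x, family = poisson, data = dat)     # log link: log(E(Y)) = X Beta
fit_logit <- glm(success ~ x, family = binomial, data = dat)  # logit link: log(p/(1-p)) = X Beta

summary(fit_pois)  # coefficients are on the link (log) scale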