F Rodriguez-Sanchez
Jul 21 · 7 tweets
ICYMI this paper really is a must read:

Comparing models and choosing the "best" one based on AIC etc., then interpreting its coefficients causally (what's the effect of X on Y?), is flawed, yet so common.

We must draw our causal assumptions first (e.g. as a DAG)

1/7 doi.org/10.1111/ele.14…
"Model selection is not a valid method for inferring causal relationships.

Model selection is appropriate for predictive inference (i.e. which model best predicts Y?), which is fundamentally distinct from causal inference (i.e. what is the effect of X on Y?)"
Imagine we want to assess the effect of 'Forestry' on 'Species Y'. But we know other things may also affect Y

We could put all these variables in a regression model (what @rlmcelreath calls a causal salad), or build models with different subsets of predictors and compare them.
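The alternative is to write the causal assumptions down first. A minimal sketch with the {dagitty} R package (the variables and arrows below are made up for illustration, not the paper's actual DAG):

library(dagitty)

# Hypothetical assumptions: Climate confounds Forestry and Y;
# Habitat mediates part of the Forestry effect
dag <- dagitty("dag {
  Forestry -> Y
  Forestry -> Habitat -> Y
  Climate -> Forestry
  Climate -> Y
}")

# Which covariates must we adjust for to estimate the TOTAL effect of Forestry on Y?
adjustmentSets(dag, exposure = "Forestry", outcome = "Y", effect = "total")
# -> { Climate }   (adjust for the confounder, NOT for the mediator Habitat)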
Either way (the full "causal salad" or the AIC-selected subset) leads to biased estimates. The best model based on AIC & BIC includes more predictors but gives a biased estimate of the effect of Forestry on Y

The causal model (based on the DAG) has a much larger AIC but gives the correct estimate
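To make that concrete, here is a toy simulation (my own minimal sketch, not the paper's analysis), assuming 'habitat' mediates part of the forestry effect: the model preferred by AIC adjusts for the mediator, so it recovers only the direct effect instead of the total one.

set.seed(1)
n <- 1e4
forestry  <- rnorm(n)
habitat   <- 0.8 * forestry + rnorm(n)                # mediator: Forestry -> Habitat -> Y
species_y <- -1 * forestry + 2 * habitat + rnorm(n)   # total effect of forestry = -1 + 0.8*2 = 0.6

m_causal <- lm(species_y ~ forestry)            # correct model for the TOTAL effect
m_best   <- lm(species_y ~ forestry + habitat)  # preferred by AIC, but answers a different question

coef(m_causal)["forestry"]  # ~ 0.6  (the true total effect)
coef(m_best)["forestry"]    # ~ -1   (direct effect only: biased for the total effect)
AIC(m_causal, m_best)       # AIC strongly favours the second model anyway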
This applies to machine learning too (random forests etc.). High variable importance does not mean those predictors are important from a causal point of view, only that they are useful for getting good predictions
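A toy illustration of this too (again my own simulated sketch): a hypothetical 'survey_index' that is a consequence of species_y, not a cause of it, ends up dominating the random forest importance ranking.

# install.packages("randomForest")  # if needed
library(randomForest)

set.seed(1)
n <- 2000
forestry     <- rnorm(n)
species_y    <- 0.6 * forestry + rnorm(n)
survey_index <- 0.9 * species_y + rnorm(n, sd = 0.3)  # caused BY species_y, has no causal effect on it

dat <- data.frame(forestry, survey_index, species_y)
rf  <- randomForest(species_y ~ forestry + survey_index, data = dat, importance = TRUE)

importance(rf)  # survey_index tops the importance ranking despite having
                # zero causal effect on species_y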
Causal inference is rarely taught, yet it seems so important. Many papers do not aim to predict but to make inferences about how important different variables are. It seems we're too often using the wrong approach
I'm trying to learn more about this. Another recent paper, from @ArifSuchinta, is next on my reading list: #ecopubs

doi.org/10.1002/ecm.15…

FIN


More from @frod_san

Jul 11, 2022
I wish academics were much more cautious when using author order in papers to infer contribution & leadership

Many committees use this too lightly (e.g. ranking by number or % of papers as corresponding author)

Many scientists' careers depend on this, so a few thoughts 👇🏽
First, there are no universal rules regarding the first/last/corresponding author thing: doi.org/10.1002/ece3.3… @duffy_ma

Sometimes authors are sorted in decreasing order of contribution, so 2nd author is important. Sometimes last author is more important
This paper looked at contribution statements of >12,000 papers and found that middle and last authors range from doing almost nothing to having done most of the work doi.org/10.1126/sciadv…

There's large variation, and many factors (incl. power dynamics) affect author order
Feb 7, 2022
New version of {grateful}, the package that makes it very easy to cite #Rstats packages, so that #software authors get their deserved credit.

pakillo.github.io/grateful/

Major changes: 1/6
2/ To get a document with formatted citations for all the #rstats packages used in your analysis, just run

library(grateful)
cite_packages()

Now includes package versions and all their citations, ready to paste into your manuscript or report
3/ {grateful} can now be used within #Rmarkdown!

Just include a chunk with

cite_packages(output = 'paragraph')

and you'll get a paragraph with in-text citations for all packages and a formatted reference list
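For example, a chunk like this (a minimal sketch; the results = 'asis' chunk option is my assumption here, so the generated text renders as part of the document rather than as console output):

```{r, results = 'asis'}
library(grateful)
cite_packages(output = 'paragraph')
```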
Jun 22, 2019
So many scientific results rely on code that was written by an inexperienced programmer and is NEVER EVER seen by anyone else. We can surely do better.

Thread 1/n
First, data and code behind published papers must be public (except for justified reasons). It's unbelievable that we have to blindly trust what is said in a paper without being able to look inside. And yes, that happens so often: pnas.org/content/115/11… 2/n
Failure to publish data and code means errors never get caught (or take years of unnecessary struggle, e.g. physicstoday.scitation.org/do/10.1063/PT.…).
And no one else can ever build upon those data and code, hindering scientific progress 3/n
