People seemed to be into the idea, so I am launching the Library of Statistical Techniques, or LOST. LOST is a wiki guide to doing things in statistical software/code: instructions, examples, and a little Rosetta Stone between languages. github.com/NickCH-K/LOST/…
What's the point? (a) It's a way of providing more general guidance, in many languages at once, than StackExchange; (b) it makes it easy to link techniques to encourage best practices (e.g. the page about fixed effects can encourage, and link to, the page on clustering; see the sketch below)
(c) when there are many different ways to do the same thing, it allows for compare-and-contrast, or even savvy editors pointing towards an up-to-date "best" way, both difficult with Googling, and (d) it offers a way to learn the material task-first.
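For a flavor of the kind of entry LOST is meant to host, here's a minimal #Rstats sketch of that fixed-effects-plus-clustering pairing. This is just an illustration, not an actual LOST page: fixest is one common R approach, and all the data and variable names below are made up.

```r
# A minimal sketch: fixed effects with clustered standard errors in R,
# using the fixest package. All data and names here are simulated/hypothetical.
library(fixest)

# Simulate workers nested in firms
set.seed(1)
df <- data.frame(
  firm_id = rep(1:50, each = 10),
  exper   = runif(500, 0, 20)
)
# Wage depends on experience plus a firm-level effect
df$wage <- 10 + 0.5 * df$exper + rep(rnorm(50), each = 10) + rnorm(500)

# Firm fixed effects, with standard errors clustered at the firm level
m <- feols(wage ~ exper | firm_id, data = df, cluster = ~firm_id)
summary(m)
```

(fixest actually defaults to clustering on the first fixed effect, but being explicit about it is exactly the best practice a linked clustering page would encourage.)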
It's light on content for now (I've written the only three articles, and in only two languages: #Rstats and #stata). So contribute! No Git knowledge necessary. And share so maybe others will contribute! See the contributor's guide here: github.com/NickCH-K/LOST
At least 66 people in this poll said they'd definitely contribute, so now they are contractually obliged. It's the law. Do you want to be a lawbreaker?
New working paper out today with Eleanor Murray, "Do LLMs Act as Repositories of Causal Knowledge?"
Can LLMs (like ChatGPT) build for us the causal models we need to identify an effect? There are reasons to expect they could. But can they? Well, not really, no.
Paper here: arxiv.org/html/2412.1063…
Why might we expect that LLMs could help with this task? At first blush you might expect this to require LLMs to have a real-world causal understanding of how the world works.
But not really. If people have talked online about causal links, then the LLM could potentially just repeat that back.
1. Exam performance is super interesting, but I think it's misinterpreted. Exams are almost pure signal. They're not the actual thing you want to do; they're designed to be something that a *human* could only do if they could actually do the actual thing.
i.e. on a good exam, the cost of performing well should drop sharply as your skill at the actual important thing improves.
Example: "what are the proper safety checks to run before letting a plane fly?" is easier for a good plane mechanic...
to answer than for a bad one. But for an LLM, that cost function should be very different than for a human. Exam questions that basically have you recall and repeat back what standard course material says are a clear case of this.
If you teach students to work with data, you're doing them a great disservice if you just teach them how to run models/analyses and not how to clean and manipulate data. If you haven't tested them directly on this, you'll be surprised how unintuitive this is to new users!
In my data viz class, the week 3 assignment has them:
1. Read some data docs
2. Recreate some variables based on the docs ("count the number of peers of each gender in each class, not counting someone as their own peer")
3. Make some tables of the form "average X by group"
4. At a few different places, clean the data up or check it to see if it makes sense
(a rough sketch of steps 2 and 3 follows below)
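To make steps 2 and 3 concrete, here's a minimal sketch of what that peer-count-and-summarize logic could look like in R with dplyr. The data and variable names are invented for illustration; this is not the assignment's actual data or solution.

```r
library(dplyr)

# Toy data: one row per student (all names here are hypothetical)
students <- data.frame(
  class  = c(1, 1, 1, 2, 2, 2),
  id     = 1:6,
  gender = c("F", "M", "F", "M", "M", "F")
)

# Step 2: count peers of each gender in each class,
# not counting a student as their own peer
students <- students |>
  group_by(class) |>
  mutate(
    female_peers = sum(gender == "F") - (gender == "F"),
    male_peers   = sum(gender == "M") - (gender == "M")
  ) |>
  ungroup()

# Step 3: a table of the form "average X by group"
students |>
  group_by(gender) |>
  summarize(avg_female_peers = mean(female_peers))
```

The subtraction of (gender == "F") is the "not your own peer" trick: logicals coerce to 0/1, so each student removes themselves from their own gender's count. That one step is exactly the kind of thing new users find unintuitive.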
This is, for many students, an extremely difficult assignment, and the one for which I always get barraged with extension requests and told how many hours they spent on it.
I have recently been following two sources of 90s music reevaluation: (1) The Number Ones and (2) the Woodstock 99 documentary. Respectively, the things they have most noticeably revised my opinions of upwards:
1. Mariah Carey
2. The Limp Bizkit song "Break Stuff"
i think i'm previously on record on twitter as rejecting any limp bizkit reevaluation, and i largely stand by that, but break stuff is an exception in the catalogue and dang did you see that crowd
the song is the stupidest and most direct thing in the world but that's absolutely to its advantage as the hallmark greeting card of angry pop music, which is not a slight
Announcement and invitation! A new project that aims to improve the quality of research in applied microeconomics by examining researcher choices. I am hoping to recruit up to *200 researchers* of all kinds (with pay) and hope you will join me! (Thread) nickch-k.github.io/ManyEconomists/
This project, with Claus Portner, is a follow-up to this paper onlinelibrary.wiley.com/doi/full/10.11…, where multiple researchers each replicated the same studies (a "many-analyst study"). Analytic and data-cleaning choices differed across researchers, and those differences substantially changed the results.
In this new project, a larger number of researchers will independently complete the same research task. Then there will be several rounds of revision, following either peer review or a change in the research task that standardizes some of the choices made.