Lots of ways to measure "attention": cites, media coverage, speaking engagements, syllabus placement, or policy engagement
Sometimes these go together, but not always (or even often): a paper could generate buzz but not be cited; a paper could be cited a lot because it's wrong
Consider one of my methods pieces: my 2013 @polanalysis piece w/ Walter Mebane
cambridge.org/core/journals/…
"Causal Inference"
"Ignorability"
"Identification"
"nonrandom assignment"
...aaaaannnnddd I'm asleep.
cambridge.org/core/journals/…
Let's be honest, social scientists aren't too keen on "null results" (hello, publication bias).
nature.com/articles/s4156…
In other words, he wants social scientists to be cool with null results. Again, that's a tough case to make.
1) monotonicity (the effect runs in only one direction: it is never negative, or it is never positive)
2) no missing data
3) so long as we include relevant covariates, X is "as if" randomly assigned (for what dropping these assumptions costs you, see the sketch right after this list)
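For intuition (my own gloss in generic potential-outcomes notation, not anything from the paper): drop assumption 3 and the never-observed half of the mean potential outcome can't be swapped for an observed quantity, so a bounded outcome only pins it down to an interval:

```latex
% E[Y(1)] = E[Y | D=1] P(D=1) + E[Y(1) | D=0] P(D=0),
% and the second expectation is never observed. With Y in [y_L, y_U],
% the most one can say without further assumptions is:
\[
E[Y \mid D{=}1]\,P(D{=}1) + y_L\,P(D{=}0)
\;\le\; E[Y(1)] \;\le\;
E[Y \mid D{=}1]\,P(D{=}1) + y_U\,P(D{=}0)
\]
```

The same bracketing applies to E[Y(0)]; differencing the two intervals bounds the average treatment effect, and missing data shrinks the observed share P(D=1) further, widening everything.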
So our paper provides a way, building on the work of Francesca Molinari, to compute Manski bounds in the presence of missing treatment data.
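To make that concrete, here's a toy numerical sketch (my own illustration in the spirit of Molinari-style missing-treatment bounds, not the paper's actual estimator; the function and parameter names are mine): units whose treatment status is unknown contribute the outcome's logical floor or ceiling, whichever is worse for the bound being computed.

```python
import numpy as np

def worst_case_ate_bounds(y, d, y_lo=0.0, y_hi=1.0):
    """Worst-case (Manski-style) bounds on the average treatment effect
    when treatment status is missing for some units (d = np.nan).

    A rough sketch, not the Mebane & Poast estimator. Assumes the
    outcome y is fully observed and bounded in [y_lo, y_hi]."""
    y, d = np.asarray(y, float), np.asarray(d, float)

    def mean_po_bounds(arm):
        known = d == arm                  # treatment observed and equal to `arm`
        p_known = known.mean()            # P(D = arm and D observed)
        ybar = y[known].mean() if known.any() else 0.0
        # Everyone else (other arm OR unknown D) gets the worst-case value.
        lo = ybar * p_known + y_lo * (1 - p_known)
        hi = ybar * p_known + y_hi * (1 - p_known)
        return lo, hi

    lo1, hi1 = mean_po_bounds(1)          # bounds on E[Y(1)]
    lo0, hi0 = mean_po_bounds(0)          # bounds on E[Y(0)]
    return lo1 - hi0, hi1 - lo0           # bounds on E[Y(1)] - E[Y(0)]

# Toy check: binary outcome, two units with unknown treatment status.
y = np.array([1, 0, 1, 1, 0, 1, 0, 1], dtype=float)
d = np.array([1, 1, 1, 0, 0, np.nan, np.nan, 0])
print(worst_case_ate_bounds(y, d))        # (-0.625, 0.625)
```

The point of the exercise: the answer is an interval, not a point, and it can comfortably include zero, which is exactly why this kind of work has to make peace with "null-ish" results.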
amstat.tandfonline.com/doi/abs/10.119…
paulpoast.com/missing-treatm…
[END]