1/ Wanted to do an #EconTwitter 🧵 on a new + important topic that's growing in the literature: rigorous evidence about how policymakers use + respond to evidence! Most of these papers are very recent, many still WP
2/ One published in AER 2021 by @HjortJ @dianamoreira_sb Rao and Santini; an experiment w/ mayors of 2,150 Brazilian municipalities; they find mayors are willing to pay for evidence and update priors upon receipt; they value larger samples more, but don't attach extra value to developing-country studies aeaweb.org/articles?id=10…
3/ Relatedly, they show that mayors briefed on the effectiveness of one policy (tax reminder letters) are 10 pp more likely to adopt it
5/ Importantly, these are higher-level policymakers (not bureaucrats): deputy ministers. Exposure to a serious econometrics workshop ⬆ perceived importance of quantitative evidence, ⬆ WTP for RCT evidence, and ⬇ WTP for correlational evidence. Effects are large + nicely captured in graphs
6/ Shifts in broader attitudes toward causation
7/ Shifts in priors around effects of deworming after exposure to evidence
8/ Shifts in WTP for RCTs
9/ Also relevant is the very interesting work of @evavivalt, Coville, and @sampskc, who ran experiments at the WB and IADB to understand how policymakers + practitioners weigh evidence. evavivalt.com/wp-content/upl…
10/ Found that they place substantial weight on characteristics linked to external validity (local experts, the context in which the evidence was collected), while researchers place more emphasis on internal validity.
11/ Assessing WTP in terms of impact, policymakers state they'd accept a program with a 6 pp lower impact if it's been recommended by a local expert. (A big gap: many programs have effects smaller than this.)
12/ A related paper by the same team on how researchers, policymakers + practitioners update beliefs shows that policymakers are generally more optimistic; but, when presented w/ IE findings, all 3 groups update asymmetrically (focusing on good news) evavivalt.com/wp-content/upl…
14/ Policymakers are less accurate in interpreting findings presented as being about the minimum wage, as opposed to a neutral framing (the effect of a face cream). They're also susceptible to framing: more likely to choose the risky option when it's framed in terms of losses. (Deliberation mitigates the 1st of these biases)
15/ These are all dev-oriented papers; there are certainly US-focused papers (including on education specifically) that I have missed, as well as the recent paper by @mattietoma analyzing the responsiveness of US policymakers to program estimates; thread here
16/ Leaving those for others to add, or for another 🧵! Will conclude with the (unoriginal) thought: the body of research here is much smaller than the importance of the Q warrants: surely even a slight ⬆ in uptake of existing evidence, at least in some domains, could have very large effects
17/ Effects could be larger, in fact, than those from ⬆ing our body of knowledge (which we already invest in - and which I endorse!), particularly if we understand more about how to ⬆ evidence uptake in large countries + those w/ large #s of poor people. Clearly much more for us to learn.
Caught up on this recent NBER WP on labor productivity growth and industrialization in Africa by McMillan and @AlbertZeufack nber.org/papers/w29570
Offers a very useful overview of trends in manufacturing and structural transformation in SSA; worth quick 🧵 #EconTwitter
The paper uses a range of data sources; the first is the Economic Transformation Database (ETD), which covers 18 SSA countries and allows estimation of value added per worker across countries.
Estimates show that labor productivity growth has been 2.5% in SSA since 2000; this is mostly driven by the shift of workers from ag to non-ag (i.e., structural transformation), with minimal contribution from within-sector productivity growth (see the decomposition sketched below).
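For readers less familiar with the within/between language: it refers to the standard shift-share decomposition of aggregate labor productivity growth. A sketch in generic notation (the paper's exact formulation may differ):

% Shift-share decomposition of aggregate value added per worker Y_t,
% with y_{i,t} = productivity in sector i and s_{i,t} = sector i's employment share.
% This is an exact identity given Y_t = \sum_i s_{i,t} y_{i,t}.
\Delta Y_t = \underbrace{\sum_i s_{i,t-k}\,\Delta y_{i,t}}_{\text{within-sector growth}}
           + \underbrace{\sum_i y_{i,t}\,\Delta s_{i,t}}_{\text{between-sector (structural transformation)}}

The thread's claim is that the second (between) term dominates in SSA since 2000, while the first (within) term contributes little.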
Flyouts are starting, so here's a quick 🧵 on advice for introverts like me. One of the challenging parts of the job market is the high level of social interaction, possibly made more difficult if you have constraints (familial, locational) that you want to keep private at first. #EconTwitter
As to strategy for disclosures – I’ll let others speak to that, other than to say I agree you should always be truthful, but you can choose not to reveal certain things. But that can add stress, making it even harder to chat comfortably.
So, a few quick thoughts (more on the non-econ side of the discussions, not the research part). Come prepared with a few topics that are of broad interest. Hobbies? Books? Movies? Food tastes? All good. Don't force it, of course.
As new PhD students start to look forward to their first year, short 🧵 on challenges in collaboration in grad school (and its potentially gendered dimensions).
Many people advise grad students to rely on their classmates: first in coursework, later on projects / as coauthors.
I endorse that advice! But it can also be hard to follow. I attended two grad programs (MPhil and PhD) and had similar experiences in both. There were large, energetic, overlapping-networks problem set groups that formed quickly.
They were mostly dominated by men (unsurprisingly; econ grad programs are mostly dominated by men) and, to describe it neutrally, had a fast-paced style. Always an introvert who was becoming more so, I was uncomfortable and anxious about trying to participate.
Enjoyed the presentation by @elianalaferrara today at World Bank DIME of joint work with Baumgartner, Rosa-Dias, Breza and my awesome coauthor Victor Orozco: evidence around a peer education program targeting early sexual activity and teen pregnancy in Brazil.
The authors have a fascinating evaluation comparing a peer educator program, run under three alternate mechanisms for selecting the educators (school-driven selection; peer nomination of popular students; selection by centrality in a formally mapped network), to a control arm.
In general, the peer education program is very effective: ⬆️ knowledge and communication around sexuality, contraceptive use; ⬇️ teen pregnancy. The peer educators chosen by schools (the default method), however, were generally ineffective!
So much of what we hear around RCTs consists of exciting stories about how evidence is used to inform policy. Which is awesome! I love evidence-informed policy. However, I'm sure many of us have also had experiences that are different, and more challenging.
In the spirit of transparency, wanted to share some different (anonymized) stories about use of evidence. Short 🧵
#1: program has mixed effects (largely null for downstream outcomes). Funding for the implementer concludes, implementer and funder move on. What happens? Brief discussion, draft paper shared (0 replies), almost no policy learning (hopefully research community benefits).
Lately I've been thinking more, and participating in various conversations, about how USAID commissions and uses research. Huge topic! But wanted to do a short 🧵 on what I've learned. 1/n
First and foremost in the hearts of most economists is DIV (Development Innovation Ventures). DIV is awesome, as many others have pointed out! See this recent blog by @DaveEvansPhD and colleagues 2/n cgdev.org/blog/case-evid…
But DIV primarily funds evaluations of pilots and other interventions that are implemented outside of USAID and are not directly related to the work of missions - as summarized above (there are some exceptions). In that sense it is often separate from the main aid portfolio 3/n