Very excited to share this new paper with the fantastic @ZiYangKang, out on NBER today.
Thread 👇 with an overview.
Here's the gist: a common exercise in empirical econ is to analyze the effects of a policy change by taking observations of price/quantity pairs, fitting a demand curve, and integrating under it to get a measure of welfare (e.g., consumer surplus or deadweight loss). Here's an example.
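To make the exercise concrete, here's a minimal sketch in Python of that pipeline, using made-up price/quantity observations (not data from the paper): fit a linear inverse demand curve by least squares, then compute consumer surplus as the area under the fitted curve above the market price.

```python
import numpy as np

# Hypothetical price/quantity observations (illustrative only)
prices = np.array([10.0, 8.0, 6.0, 4.0])
quantities = np.array([2.0, 4.0, 6.0, 8.0])

# Fit a linear inverse demand curve p = a + b*q by least squares
b, a = np.polyfit(quantities, prices, 1)  # slope b, intercept a

# Consumer surplus at market price p*: area under the demand curve
# above p*, which for linear demand is a triangle with height (a - p*)
p_star = 6.0
q_star = (p_star - a) / b            # quantity demanded at p*
cs = 0.5 * (a - p_star) * q_star     # triangle area

print(round(cs, 2))  # consumer surplus under the fitted curve
```

The paper's point (as I read the gist) is about what this kind of calculation assumes about the demand curve's shape between and beyond the observed points; the linear fit here is just the simplest possible stand-in.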
Jul 27, 2020 • 17 tweets • 3 min read
1/ Constructing the dashboard to explain our paper (reopenmappingproject.com) involved a lot of careful thinking about what info to display/emphasize and how. The goal of the app was to make the message, methods and results of our paper accessible. Thread👇 for more weeds.
2/ First, data limitations: we build contact matrices from Replica's synthetic population. This is amazing data (e.g. it lets us account for how long ppl spent in the same place) but:
Tl;dr: Heterogeneity matters when thinking about lockdown/re-opening policies: differences in the concentration of places where ppl encounter each other, differences in industry and demographic (and co-morbidity) distributions, and differences in when the virus hit.
Oct 23, 2019 • 5 tweets • 2 min read
Hey #econtwitter — I'm helping put together a tip sheet on computational tools for structural IO, including notes on when some languages/solvers are better than others. I don't use Python for optimization but I know lots of ppl do. Any chance y'all could lend some tips? Example 👇
A few other things that it'd be great to have a 1-liner explaining (w/ links to more):
-How to evaluate trade-offs re analytical derivatives vs. numerical differentiation vs. automatic differentiation
-When to impose optimizer constraints via transformation -- e.g., mapping [0,1] -> R
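On the derivatives trade-off: a rough way to see it in plain Python, without committing to any particular autodiff library, is to compare a central finite difference against a tiny hand-rolled forward-mode autodiff (dual numbers). The function and tolerances here are illustrative; the point is that autodiff is exact to machine precision while finite differences carry truncation/rounding error and a step-size choice.

```python
import math

# Toy function whose derivative we want: f(x) = x^2 * sin(x)
def f(x):
    return x * x * math.sin(x)

# Numerical differentiation: central finite difference with step h
def numdiff(func, x, h=1e-6):
    return (func(x + h) - func(x - h)) / (2 * h)

# Minimal forward-mode autodiff: carry (value, derivative) pairs
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def dual_sin(d):
    # chain rule through sin
    return Dual(math.sin(d.val), math.cos(d.val) * d.der)

def f_dual(x):
    d = Dual(x, 1.0)  # seed the input's derivative with 1
    return d * d * dual_sin(d)

x = 1.3
exact = 2 * x * math.sin(x) + x * x * math.cos(x)  # analytical derivative
print(abs(numdiff(f, x) - exact), abs(f_dual(x).der - exact))
```

Analytical derivatives beat both when you can write them down cheaply; the usual tip-sheet advice is to check hand-coded gradients against one of these two anyway.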
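And on constraints-via-transformation: the standard trick for a parameter constrained to the unit interval is the logit/sigmoid pair, so the optimizer searches over an unconstrained real variable while the model always sees a value strictly inside (0, 1). A sketch (function names are my own, not from any tip sheet):

```python
import math

# Map an unconstrained optimizer variable t in R to theta in (0, 1)
def to_unit(t):
    return 1.0 / (1.0 + math.exp(-t))   # logistic (sigmoid)

# Inverse: map theta in (0, 1) back to R (logit)
def to_real(theta):
    return math.log(theta / (1.0 - theta))

# Round trip: the optimizer moves t freely; theta stays in bounds
t = 2.5
theta = to_unit(t)
assert 0.0 < theta < 1.0
assert abs(to_real(theta) - t) < 1e-10
```

The trade-off to flag in the tip sheet: transformations keep the solver unconstrained (often faster and more robust than bound-constrained methods), but they distort curvature near the boundary and can't represent the endpoints 0 and 1 exactly.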
Policy debates re privacy on the internet often stress these trade-offs: 1) Firms getting user data -> better matches + service to larger market 👍 2) But lack of privacy is icky 👎 3) And facilitates "too much" price discrimination 👎