Shosh Vasserman · Jul 31
Very excited to see my first (and I expect, not last) paper w @ZiYangKang out in print.

Thread 👇 on what this paper is about + why I hope lots of folks will use it.
This paper is for anyone who does an exercise like this:

1) take data on demand vis-à-vis a price shock
2) estimate a treatment effect
3) calculate consumer surplus to assess the policy.

The question: how sensitive are your welfare conclusions to functional form assumptions?
Our paper starts at point (3).

Your treatment effect tells you a range of possible consumer surplus estimates – e.g., consumers value a subsidy at no less than 0 and no more than (# subsidized sales × subsidy). But that can be a wide range.
A common approach is to use a simple demand model (maybe the same one used to estimate the treatment effect) – e.g. log(q) ~ log(p) – and integrate under that curve.
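To fix ideas, here's a minimal sketch of that common approach with made-up numbers (not the paper's code or data): fit a log-log curve through two observed price/quantity points and integrate under it to value the price change.

```python
import numpy as np

# Toy numbers (not from the paper): price rises from p0 to p1,
# quantity falls from q0 to q1.
p0, q0 = 10.0, 100.0
p1, q1 = 12.0, 80.0

# Fit an isoelastic (log-log) demand curve q = A * p**eps through both points.
eps = np.log(q1 / q0) / np.log(p1 / p0)   # elasticity estimate
A = q0 / p0**eps

# Consumer surplus lost from the price hike = area under the fitted curve
# between p0 and p1, i.e. the integral of A * p**eps dp.
cs_loss_loglog = A * (p1**(eps + 1) - p0**(eps + 1)) / (eps + 1)

# Same exercise with a straight line through the two points (a trapezoid).
cs_loss_linear = 0.5 * (q0 + q1) * (p1 - p0)

print(f"elasticity ~ {eps:.2f}")
print(f"CS loss, log-log fit: {cs_loss_loglog:.1f}")
print(f"CS loss, linear fit:  {cs_loss_linear:.1f}")
```

Between the two observed prices the fits nearly agree; the functional form starts to bite once you extrapolate beyond them (e.g. to compute total consumer surplus), which is where the examples below pick up.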
But how do you make sure your results aren't driven by functional form? 🤔
Our solution has 3 steps:
1⃣ Start with your preferred demand model (e.g. linear)
2⃣ Ask: how much can demand deviate before conclusions flip?
3⃣ Get a robustness measure: larger deviations needed = more robust results
It's easiest to show how it works through examples.

We work through a simple simulated example in Sec 1 and applied examples from published papers in Sec 4.

Semi-updated slides here: tinyurl.com/kvslides
Suppose we have price/quantity data from a price hike experiment in a few mkts.

The underlying demand curves could be complicated but we only see 2 price points / mkt so we can't do much curve fitting. But we can use the experimental variation to get an ATT of p on q.
We can interpret the ATT as an avg gradient of the demand curve(s).

If we assume demand comes from a simple functional form (e.g. linear/isoelastic/etc.) that's enough to fit the whole curve and compute consumer surplus.

But the CS estimate will vary based on what we choose.
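For instance (same made-up numbers as above, my own illustration rather than the paper's example): extrapolating the same ATT under a linear vs an isoelastic form gives very different answers for total consumer surplus.

```python
import numpy as np

# Same toy numbers as above: the ATT pins down the average gradient of demand
# between the two observed prices, but not the shape of the rest of the curve.
p0, q0, p1, q1 = 10.0, 100.0, 12.0, 80.0
att = (q1 - q0) / (p1 - p0)               # average gradient: -10 units per $1

# Linear extrapolation: q(p) = q0 + att * (p - p0); total CS is the triangle
# between the baseline price and the choke price where q hits zero.
cs_linear = 0.5 * q0**2 / abs(att)

# Isoelastic extrapolation through the same two points: total CS is finite only
# if the elasticity is below -1, in which case CS = p0 * q0 / (-eps - 1).
eps = np.log(q1 / q0) / np.log(p1 / p0)
cs_isoelastic = p0 * q0 / (-eps - 1) if eps < -1 else np.inf

print(f"total CS, linear fit:     {cs_linear:.0f}")
print(f"total CS, isoelastic fit: {cs_isoelastic:.0f}")
```

Both curves match the data and the ATT, yet the implied total CS differs by nearly an order of magnitude in this toy case; that gap is exactly the functional-form sensitivity the paper targets.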
How do we check all the possible functional forms that are consistent w our data?

We propose 2 ways, motivated by regression-based functional forms.

A (generalized) linear fn has (transformed) constant gradient + zero curvature. We propose 1D relaxations of each of these.
Each relaxation covers a lot of potentially complicated demand curves: e.g. gradient can vary in any way so long as it's bounded in a range around the avg.

But we don't need to fit any of them; instead we compute bounds on how big/small the CS could be under any curve in the set.
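Here's a toy numerical version of the bounded-gradient idea (my illustration, with a made-up band; not the paper's formulas or notation): any curve through the observed points whose gradient stays inside the band is trapped between two piecewise-linear envelopes, and integrating those envelopes bounds the CS loss.

```python
import numpy as np

# Toy version of the bounded-gradient relaxation: between the observed prices,
# the demand gradient may vary however it likes, as long as it stays inside a
# band around the ATT.
p0, q0, p1, q1 = 10.0, 100.0, 12.0, 80.0
att = (q1 - q0) / (p1 - p0)                  # average gradient = -10
g_lo, g_hi = 2.0 * att, 0.5 * att            # hypothetical band: [-20, -5]

grid = np.linspace(p0, p1, 10_001)

# Any admissible curve is trapped between two piecewise-linear envelopes
# implied by the endpoint data and the gradient band.
upper = np.minimum(q0 + g_hi * (grid - p0), q1 + g_lo * (grid - p1))
lower = np.maximum(q0 + g_lo * (grid - p0), q1 + g_hi * (grid - p1))

def area(y, x):
    """Trapezoidal rule, written out to avoid NumPy version differences."""
    return float(np.sum(0.5 * (y[:-1] + y[1:]) * np.diff(x)))

# CS loss from the price hike is the area under demand between p0 and p1,
# so the envelopes bound it without fitting any particular curve.
print(f"CS loss bounds: [{area(lower, grid):.1f}, {area(upper, grid):.1f}]")
```

For comparison, monotonicity alone (no gradient band) only bounds the CS loss between q1*(p1-p0) and q0*(p1-p0) in this toy setup; the band tightens that range around the linear benchmark, and widening the band relaxes it again.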
Why is this useful?

Say you want to evaluate CS loss against an externality, G.

Under a linear model, if CS loss is below G, then the policy is net good; otherwise it's net bad.

We ask: how non-linear would true demand need to be for the result to flip? You can read it off the graph.
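Continuing the toy example (illustrative only, not the paper's definition of the robustness measure): sweep the width of the gradient band and record when the worst-case CS loss first crosses a hypothetical externality value G.

```python
import numpy as np

# Illustrative only: widen the gradient band around the ATT and find when the
# worst-case CS loss first exceeds a made-up externality benchmark G.
p0, q0, p1, q1 = 10.0, 100.0, 12.0, 80.0
att = (q1 - q0) / (p1 - p0)
G = 185.0                                   # hypothetical externality value

def worst_case_cs_loss(band):
    """Max CS loss when the gradient may drift within att * (1 +/- band)."""
    g_lo, g_hi = att * (1 + band), att * (1 - band)    # att < 0: g_lo is steeper
    grid = np.linspace(p0, p1, 10_001)
    upper = np.minimum(q0 + g_hi * (grid - p0), q1 + g_lo * (grid - p1))
    return float(np.sum(0.5 * (upper[:-1] + upper[1:]) * np.diff(grid)))

# Under the linear benchmark (band = 0) the CS loss is 180 < G, so the policy
# looks net good.  How much gradient variation does it take to overturn that?
for band in np.linspace(0.0, 1.0, 101):
    if worst_case_cs_loss(band) > G:
        print(f"conclusion flips once gradients can deviate ~{band:.0%} from the ATT")
        break
else:
    print("conclusion survives every band considered")
```

In this toy run the flip happens at roughly a 50% band; reading that threshold off the graph (or off a closed-form formula) is the spirit of the robustness measure.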
This isn't just for linear benchmarks or price experiments.

It works just as easily for any benchmark of the form A(q) ~ B(p) for monotonic fns A(.) and B(.), given a baseline price, quantity and an identified and (somehow) estimated treatment effect of p on q.
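As a minimal sketch of what "a benchmark in this family" looks like in code (hypothetical helper names and made-up numbers; this is not the replication package's API): pick monotonic transforms A and B, and the baseline point plus the treatment effect pin down the whole curve.

```python
import numpy as np

# Hypothetical sketch: any benchmark of the form A(q) = a + b * B(p), with A and
# B monotonic, is pinned down by the baseline point plus the point implied by
# the treatment effect.
def fit_benchmark(A, A_inv, B, p0, q0, p1, q1):
    """Return q(p) for the benchmark A(q) = a + b * B(p) through the two points."""
    b = (A(q1) - A(q0)) / (B(p1) - B(p0))
    a = A(q0) - b * B(p0)
    return lambda p: A_inv(a + b * B(p))

p0, q0, p1, q1 = 10.0, 100.0, 12.0, 80.0
ident = lambda x: x

linear     = fit_benchmark(ident, ident, ident, p0, q0, p1, q1)     # q ~ p
isoelastic = fit_benchmark(np.log, np.exp, np.log, p0, q0, p1, q1)  # log(q) ~ log(p)
semilog    = fit_benchmark(np.log, np.exp, ident, p0, q0, p1, q1)   # log(q) ~ p

for name, q in [("linear", linear), ("isoelastic", isoelastic), ("semi-log", semilog)]:
    print(f"{name:10s}  q(11) = {q(11.0):5.1f}   q(15) = {q(15.0):5.1f}")
```

Each choice of (A, B) is one benchmark; the same gradient/curvature relaxations and CS bounds then apply around whichever one you pick.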
Our replication code includes a little python pkg to compute these graphs (and their associated robustness measures) for any benchmark in this family.

For the easy/common cases, we have a table with closed form formulas.

aeaweb.org/articles?id=10…
But wait, why is this useful?

Two thoughts:
1. Suppose you're evaluating a policy like this, you've estimated the ATT & want to say something robust about welfare.
Our paper says: don't agonize over how to impute the demand curve; just do what's natural and then check how much that choice matters.
2. How does this robustness measure tell us how important the demand curve is?

Our measure is a threshold "r" from 0 (not at all robust) to 1 (super robust). If you're close to 1 that's good. If not, "r" tells you conditions on feasible gradients/curvatures that are testable.
Bonus: my thread highlighted the framework/results of the paper, but the meat of why it works rests in a serendipitous (to us, at least) connection with information design, which lets us borrow some neat tools that have been growing in popularity in theory but not so much outside it.
Tl;dr: Please read the paper. If you want to try to use this in your own work and have questions, don't be shy :) shoshanavasserman.com/rmwa/