@RevEconStudies @SNageebAli @PennStateEcon @econ_greg @MSFTResearch @StanfordGSB @nberpubs A few years ago, I wrote a thread introducing a paper on "Voluntary Disclosure & Personalized Pricing" with @SNageebAli and @econ_greg (threadreaderapp.com/thread/1186778…).
Now that the paper has been accepted for publication, an update on what the paper is about 👇 1/n
The gist: In the debate about privacy, info disclosure & price discrimination, it's important to think about the structure and verifiability of the disclosure technologies. 2/n
Classical intuitions from Grossman (81) and Milgrom (81) suggest that voluntary disclosure is ultimately self-defeating and unhelpful to consumers. These intuitions are often invoked in arguments for strong privacy protections meant to keep consumer data out of firms' reach 3/n
More recent work has focused on disclosure as a vehicle for customization (e.g. Hidir and Vellodi ('21); Ichihashi ('21)) or disclosure through an all-powerful intermediary who can act on behalf of the consumer (e.g. Bergemann, Brooks & Morris ('15)) 4/n
We focus on a simple model of disclosure in which (i) consumers can send convex (interval) messages about their types to receive an offer that they can accept or reject; (ii) messages are verifiable by a 3rd party (e.g., type 1/3 can claim to be in the interval [0, .5] but not [.5, 1]) 5/n
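The verifiability constraint in (ii) can be made concrete with a tiny sketch (an illustration, not the paper's formalism): a message is an interval claim, and a 3rd-party verifier accepts it only if the claim actually contains the consumer's true type.

```python
def verifiable(claim_lo: float, claim_hi: float, true_type: float) -> bool:
    """An interval claim passes 3rd-party verification only if it
    contains the consumer's true type ("nothing but the truth")."""
    return claim_lo <= true_type <= claim_hi

# Type 1/3 may claim to be in [0, 0.5]...
assert verifiable(0.0, 0.5, 1/3)
# ...but cannot claim to be in [0.5, 1].
assert not verifiable(0.5, 1.0, 1/3)
```

Note that the claim need not be the whole truth: type 1/3 could also report the coarse interval [0, 1], i.e., disclose nothing.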
We show that the structure of the message technology makes a big difference in what can result. 6/n
In a world w/ only "simple" messages (e.g. "tell me everything or tell me nothing") voluntary disclosure cannot help consumers in monopolistic markets; the classical intuition holds. 7/n
By contrast, partial disclosure through "rich" messages (e.g. "nothing but the truth, but not necessarily the whole truth") allows for equilibria in which nearly all consumer types strictly benefit from disclosing information. 8/n
This idea generalizes: it suffices that there be a way to generate "group pricing" (e.g. info that separates low-value types from high-value types in equilibrium) in order for voluntary disclosure to produce Pareto improvements. 9/n
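A hedged numerical sketch of the group-pricing intuition (this is a simplified illustration, not the paper's equilibrium construction): suppose types are uniform on [0, 1] and a monopolist posts a price to each segment it can distinguish. Compare no disclosure against a two-group split at v = 1/2, which verifiable interval messages can support since high types cannot mimic the low group.

```python
def profit(price: float, lo: float, hi: float) -> float:
    """Seller profit from a uniform segment [lo, hi] at a posted price:
    types with v >= price buy one unit."""
    if price <= lo:
        buyers = hi - lo            # everyone in the segment buys
    elif price >= hi:
        buyers = 0.0                # no one buys
    else:
        buyers = hi - price
    return price * buyers

def best_price(lo: float, hi: float, grid: int = 10_000) -> float:
    """Grid-search the profit-maximizing price for segment [lo, hi]."""
    candidates = [lo + (hi - lo) * i / grid for i in range(grid + 1)]
    return max(candidates, key=lambda p: profit(p, lo, hi))

def consumer_surplus(price: float, lo: float, hi: float) -> float:
    """Integral of (v - price) over buyers in the segment [lo, hi]."""
    cutoff = max(price, lo)
    if cutoff >= hi:
        return 0.0
    return 0.5 * ((hi - price) ** 2 - (cutoff - price) ** 2)

# No disclosure: one pooled segment [0, 1]; optimal price is 1/2.
p = best_price(0.0, 1.0)
base_profit = profit(p, 0.0, 1.0)                # = 0.25
base_cs = consumer_surplus(p, 0.0, 1.0)          # = 0.125

# Group pricing: low types disclose membership in [0, 1/2].
p_lo, p_hi = best_price(0.0, 0.5), best_price(0.5, 1.0)
group_profit = profit(p_lo, 0.0, 0.5) + profit(p_hi, 0.5, 1.0)
group_cs = consumer_surplus(p_lo, 0.0, 0.5) + consumer_surplus(p_hi, 0.5, 1.0)

# Both sides gain: disclosure opens the low segment at a lower price
# while high types pay the same price as before.
assert group_profit > base_profit and group_cs > base_cs
```

In this toy split, types in [1/4, 1/2) who were priced out now buy at 1/4, high types still face price 1/2, and seller profit rises, so the separation is (weakly) Pareto-improving — the force behind the result in the tweet above.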
These results extend to natural relaxations of our basic model, such as when firms have external info about their consumers prior to interacting. If instead of a monopolist, there are multiple differentiated firms, even simple messages are beneficial to consumers. 10/n
A natural question is where this all fits in discussions of real life policy. For this, it's useful to revisit the assumptions in our model. 11/n
There are two key features in our messaging technologies: (i) messages are verifiable: there is a 3rd party who can ensure that consumers don't lie; (ii) consumers are never committed to sending a particular message or executing a given personalized offer. 12/n
First, note that there is verifiable disclosure all around us. Folks can submit gov't docs (EBT cards; W-2 receipts) to apply for discounts big and small. People regularly receive promotions based on their internet browsing record, or even their recent purchasing history. 13/n
In some cases, voluntary disclosure is offered as a direct promotional program by the seller, as in the auto-insurance monitoring program that Yizhou and I study. 14/n

Second, our characterization offers guidance for regulatory design.
Consumer control over info, rather than rigid privacy, is what drives Pareto gains. But to implement those gains, a "verifier" (the gov't, a platform, or a firm) can choose the coarseness of the message space. We argue that each potential verifier has an interest in enabling consumer gains 15/n
There's a lot more in the paper of course. I hope you read it and let us know what you think. This project started when I interned at @MSFTResearch as a PhD student. It's been a pleasure to work on. Thank you to all the amazing folks who've discussed it with us over the years n/n

