We have a new working paper on how search result quality depends on the amount of user-generated data. The picture below shows the main result. Let me explain. 🧵1/14
The central question we answer is: Do some search engines produce better search results because their algorithm is better, or because they have access to more data from past searches? 2/14
If the reason for the better search results is access to more data, then mandatory data sharing, a policy currently under discussion, could spur innovation and benefit all users of search engines. The idea is that it's unfair that firms like Google 3/14
In the following mini 🧵 I summarize Jesse Shapiro's 5 ✅ principles
The 5 principles are:
✅ show your variation with descriptive analysis
✅ use the descriptive analysis to provide preliminary evidence
✅ use the descriptive analysis to guide choices of what to model (and not to model)
✅ clearly articulate the value added of the model and …
✅ choose parameters of interest and counterfactuals that are informed by your variation
#EconTwitter:
I wish I had seen this one-pager 📄 on how to write excellent papers while I was doing my Ph.D.: scholar.harvard.edu/files/shapiro/…
Jesse Shapiro suggests doing this in 4 steps.
Here is a summary. 🧵
Step 1:
dream up a somewhat realistic introduction with a description of results and so on
if it does not excite you, abandon the project (very important advice!)
Step 2:
do the research
start with whatever is least clear to you
use the introduction as your compass