Ron Berman
Mar 26 · 13 tweets · 6 min read
Companies invest a lot in analytics - but are these investments valuable?

@IsraeliAyelet and I studied ~1,500 online retailers and found that using a descriptive dashboard increased their weekly revenues by 4%-10%.

>>
#MarTech #BigData #Analytics #ecommerce #DataScience

[Figure: SynthDiD estimate of the ATT of adopting the analytics dashboard]
The paper is forthcoming in Marketing Science and is available at pubsonline.informs.org/doi/10.1287/mk….

(Ungated version at dx.doi.org/10.2139/ssrn.3…)

>>

@MarketngScience
We used data from over 1,500 small and medium global ecommerce sellers (mostly with Shopify stores) with average monthly revenues of ~$60K.

Every retailer adopted an analytics dashboard that displayed KPIs such as weekly sales, average basket size, conversion rate, etc.

>>

[Figure: Summary statistics of over 1,500 retailers who adopted the analytics dashboard]
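For intuition, here is a minimal sketch of the kind of KPIs such a dashboard reports, computed from raw order data. The file and column names ("orders.csv", "order_date", "order_value", "order_id") are illustrative assumptions, not the vendor's actual schema.

```python
# Hypothetical sketch: weekly KPIs of the kind a descriptive dashboard reports.
# File and column names are illustrative assumptions.
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

weekly = orders.groupby(pd.Grouper(key="order_date", freq="W")).agg(
    revenue=("order_value", "sum"),        # weekly sales
    transactions=("order_id", "nunique"),  # number of orders
)
weekly["avg_basket_size"] = weekly["revenue"] / weekly["transactions"]

# A conversion rate would additionally need traffic data, e.g.:
# weekly["conversion_rate"] = weekly["transactions"] / weekly_sessions
print(weekly.tail())
```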
When a retailer adopted the dashboard, the provider also collected historical data, so we observe performance both before and after adoption.

Because the dashboard was adopted at different times, adoption is staggered, and you know what that means… 🙀🙀🙀

>>
Cue TWFE DiD, synthetic controls, and whatnot.

Special quirk: our retailers have different time trends and potentially endogenous adoption timing.

The solution was a combination of @jmwooldridge's POLS regression, @ArkhangelskyD et al.'s SynthDiD, and an IV.
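For a feel of the setup, here is a minimal, generic two-way fixed effects (TWFE) DiD sketch on simulated staggered-adoption panel data. This is not the paper's specification; Wooldridge's POLS, SynthDiD, and the IV are used precisely because naive TWFE can be biased under staggered adoption with heterogeneous trends.

```python
# Generic TWFE DiD sketch on simulated staggered-adoption panel data
# (illustration only; not the paper's POLS / SynthDiD / IV estimators).
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(0)
n_firms, n_weeks = 200, 60
firm = np.repeat(np.arange(n_firms), n_weeks)
week = np.tile(np.arange(n_weeks), n_firms)

adopt_week = rng.integers(20, 50, size=n_firms)             # staggered adoption times
treated_post = (week >= adopt_week[firm]).astype(float)

log_rev = (rng.normal(size=n_firms)[firm]                    # retailer fixed effect
           + 0.02 * week                                     # common time trend
           + 0.07 * treated_post                             # true ATT of ~7%
           + rng.normal(scale=0.3, size=n_firms * n_weeks))  # noise

df = pd.DataFrame({"firm": firm, "week": week,
                   "treated_post": treated_post, "log_rev": log_rev})
df = df.set_index(["firm", "week"])

res = PanelOLS.from_formula(
    "log_rev ~ treated_post + EntityEffects + TimeEffects", data=df
).fit(cov_type="clustered", cluster_entity=True)
print(res.params["treated_post"])  # ATT estimate on the log-revenue scale
```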
Luckily, all these methods converged on the same result: adopting the descriptive dashboard increases revenues, the diversity of products sold, the number of repeat customers, and the number of transactions.

As usual, causality disclaimers apply.

>>
But can we say more about whether the descriptive dashboard causes these effects? And if so, how?

A cool and unique feature of our data is that we observe whether each retailer actually _uses_ the dashboard - whether they log in to look at reports, and when.

>>
By comparing the results of dashboard users to non-users, we see that only users reap the benefits of the dashboard.

This allows us to rule out improved performance due to an unobserved and unrelated mechanism.

>>
Next, we investigate whether retailers learn directly from the dashboard.

We initially hypothesized that retailers are most likely to change pricing and advertising strategies based on the dashboard KPIs.

Surprisingly, we find that retailers do not change these strategies.

>>
Instead, descriptive analytics help retailers monitor additional marketing technologies (martech) and amplify their value.

Most retailers adopt additional technologies, but only the retailers that use the dashboard are able to benefit from them.

>>
For example, we see that many retailers adopt CRM, personalization, and prospecting martech. But only those who use the dashboard experience the benefits: a greater diversity of products sold, more transactions, and more revenue from repeat customers.

>>
Why are descriptive analytics so popular then?

Although they often leave users to generate their own insights, they provide a simple way to assess different decisions, enabling managers to extend the range of actions they can take and to integrate new technologies.

>>
We would like to especially thank @avicgoldfarb who led a very constructive review process for the paper.

More from @marketsensei

Jan 2
@LauraALibby raises an important point and question (thanks for the kind words, Laura).

Maybe the high FDR we find is because the experiments have sample sizes that are too small, and hence low statistical power?
>>
The same question was also asked by a reviewer. This is where peer review improves a paper, IMO.

So we did two types of analyses (in section 5.3):
1. We estimated what the power is in these experiments (spoiler: not so low).
2. We asked what the FDR would be with 100% power.
>>
For the effective power in the experiments, the table below shows that it is 50%-80%, depending on the significance level used.

50% sounds low, but the following analysis shows you can't improve much on the FDR.
>>
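For intuition on why more power only helps so much, here is a back-of-the-envelope sketch using the standard relationship FDR = pi0*alpha / (pi0*alpha + (1-pi0)*power). The share of true nulls (pi0) below is purely illustrative, not an estimate from the paper.

```python
# FDR as a function of power, at a fixed significance level.
# pi0 (share of A/B tests with a truly null effect) is an illustrative assumption.
def fdr(alpha: float, power: float, pi0: float) -> float:
    false_positives = pi0 * alpha        # expected share of null tests rejected
    true_positives = (1 - pi0) * power   # expected share of non-null tests rejected
    return false_positives / (false_positives + true_positives)

for power in (0.5, 0.8, 1.0):
    print(f"power={power:.0%}: FDR={fdr(alpha=0.05, power=power, pi0=0.7):.1%}")
# Even at 100% power, the FDR stays well above alpha when many tested effects are null.
```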
Jan 1
How are effects of online A/B tests distributed? How often are they not significant? Does achieving significance guarantee meaningful business impact?

We answer these questions in our new paper, “False Discovery in A/B Testing”, recently out in Management Science >>
The paper is co-authored with Christophe Van den Bulte and analyzes over 2,700 online A/B tests that were run on the @Optimizely platform by more than 1,300 experimenters.

Link to paper: pubsonline.informs.org/doi/10.1287/mn…
Non paywalled: ron-berman.com/papers/fdr.pdf
>>
A big draw of the paper is that @Optimizely has graciously allowed us to publish the data we used in the analysis. We hope this will be valuable to other researchers as well.
>>
