Imagine a model that predicts which customers will unsubscribe from your service.
You want to offer them a $10 incentive because each customer who churns costs you $100.
Look at the attached confusion matrix showing that the model is only 77% accurate.
Is this model good enough?
I love this question because it puts a couple of things in perspective:
1. A model that doesn't look too good by the numbers.
2. A business case that can use a less-than-ideal solution to solve the problem.
There's only one open question here: how many people will not churn if they get the incentive?
We don't know, but we can play out different scenarios and see what happens.
Assume everyone who takes the incentive sticks around.
The model flags 19 customers, so we will spend 19 * $10 = $190 on incentives.
Churn will still cost us $1,300: the 13 false negatives the model missed will churn anyway.
Total cost: $1,490.
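Here's that math as a quick Python sketch. The counts are assumptions inferred from the dollar figures above: 19 flagged customers ($190 / $10) and 13 missed churners.

```python
# Scenario 1: everyone who takes the incentive sticks around.
# Counts are assumptions inferred from the thread's dollar figures.
INCENTIVE = 10   # dollars paid to each flagged customer
CHURN = 100      # dollars lost per customer who churns

flagged = 19          # customers the model flags ($190 / $10)
false_negatives = 13  # churners the model misses

incentives = flagged * INCENTIVE        # $190
churn_losses = false_negatives * CHURN  # $1,300: only the missed churners leave

print(f"Total cost with the model: ${incentives + churn_losses:,}")  # $1,490
```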
If we did nothing, all 22 churners would leave, and we would have $2,200 in costs.
The model is useful!
Now assume the incentive only convinces half of the would-be churners to stay.
We still spend $190 on incentives, but roughly half of the 9 correctly flagged churners (~4) will leave anyway, so churn costs ~(13 + 4) * $100 = $1,700.
The total cost is $1,890, which is still less than $2,200.
The model is still useful even when the incentive only works half the time.
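We can generalize this with a retention rate. A minimal sketch, assuming the rate only matters for the 9 correctly flagged churners implied by the numbers above ($2,200 / $100 = 22 churners, minus the 13 the model misses); the rest of the flagged customers were staying anyway:

```python
# Total cost as a function of how effective the incentive is.
# Counts are assumptions inferred from the thread's dollar figures.
INCENTIVE = 10
CHURN = 100

flagged = 19          # customers who get the incentive
true_positives = 9    # flagged customers who would actually churn
false_negatives = 13  # churners the model misses

def total_cost(retention_rate: float) -> int:
    # The thread rounds 4.5 still-churning customers down to 4, so we truncate too.
    still_churn = int(true_positives * (1 - retention_rate))
    return flagged * INCENTIVE + (false_negatives + still_churn) * CHURN

print(total_cost(1.0))  # $1,490: everyone stays
print(total_cost(0.5))  # $1,890: half stay, still below $2,200
```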
This model only stops making sense if we can get just one person (or none at all) to stay after getting the incentive.
In every other case, the model will be useful.
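We can check that breakeven point by brute force, using the same assumed counts:

```python
# Find how many flagged churners must stay for the model to pay off.
INCENTIVE = 10
CHURN = 100

flagged, true_positives, false_negatives = 19, 9, 13
do_nothing = (true_positives + false_negatives) * CHURN  # $2,200

for saved in range(true_positives + 1):
    cost = flagged * INCENTIVE + (true_positives - saved + false_negatives) * CHURN
    verdict = "useful" if cost < do_nothing else "not useful"
    print(f"{saved} stay -> ${cost:,} ({verdict})")

# 0 stay -> $2,390 (not useful)
# 1 stay -> $2,290 (not useful)
# 2 stay -> $2,190 (useful), and every case after that
```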
Here is the summary of the story:
In a business context, performance metrics aren't the whole story. The economics of the problem play a more prominent role.
(And of course, in this example we are assuming we created the model for free... but it's just an example.)
@AlejandroPiad I remember we had a conversation a while ago about measuring whether a model is useful or not. We discussed ROI that day.
This model is one example.
Better yet: if the incentive is 100% effective, giving $10 to every customer will save more money than using the model in the first place.
It all comes down to the assumptions we make about the effectiveness of that incentive.
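To see why, compare a blanket offer against the model. The 100-customer total is an assumption, but it's consistent with the 77% accuracy (23 misclassified customers out of 100) and the dollar figures above:

```python
# Blanket incentive vs. the model, assuming a 100% effective incentive
# and the 100 customers implied by the confusion matrix.
INCENTIVE = 10
CHURN = 100

customers = 100
with_model = 19 * INCENTIVE + 13 * CHURN  # $1,490: scenario 1 above
blanket = customers * INCENTIVE           # $1,000: pay everyone, nobody churns

print(f"Model: ${with_model:,} vs. blanket offer: ${blanket:,}")
```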
I have not seen any proof that Twitter "kills your content" if you include links to your tweets.
Here is the result of a very unscientific experiment: comparing my top 10 tweets with and without links.
If you have something concrete, please let me know.
This is anecdotal evidence at best.
It doesn't prove that Twitter doesn't mess with your links, but it does suggest that, if anything is going on, it is much more subtle than some people believe.
I haven't found any documentation either.
This is what I do know:
Deliberately breaking the links you add to your tweets is self-serving: it makes things worse for the people who follow you. They can't just click through to the content.
I can't see how that makes your content better in any way.