THREE simple frameworks for thinking about measures of central tendency.
This thread has it all!
Warning: You may have heard people say there's only one thing called "the average" or "the mean". In this thread, we're going to use the word "average" or "mean" to apply to any one of a large family of measures of central tendency.
1. Mode
(Let's start slow. Feel free to skip the stuff you already know!)
This is the value that occurs most frequently in your data.
2. Median
If you line your data up from largest to smallest, this is the value at the center of your data. (If you have an even number of data points, it's halfway between the two middle values.)
3. Arithmetic mean
This is what people usually mean by "the mean" or "the average". It's the gold standard. You add up all your data and divide by the number of observations.
4. Midrange
The value in the exact middle of the range of your data. It's halfway between the maximum and minimum value.
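To make these four concrete, here's a minimal sketch in Python using the standard library's statistics module (there's no built-in midrange, so that one is computed by hand):

```python
import statistics

data = [2, 3, 3, 5, 7, 10]

print(statistics.mode(data))    # 3   (the most frequent value)
print(statistics.median(data))  # 4.0 (halfway between the middle values 3 and 5)
print(statistics.mean(data))    # 5   (sum of 30 divided by 6 observations)

# Midrange: no standard-library helper, so compute it directly.
print((min(data) + max(data)) / 2)  # 6.0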
FRAMEWORK: Distance to Data
The mode, median, midrange and arithmetic mean might seem disconnected but there's a single mathematical idea that ties them together.
They all minimize the distance measure below for specific values of p.
They're the "closest" point to your data.
The idea is that the "center" of our data is the point that's closest to all the data points simultaneously.
The mode, median, midrange and arithmetic mean are at the center of our data according to four different definitions of distance.
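In symbols (the original formula was an image, so this is the standard formulation with my notation): for data $x_1, \dots, x_n$, take

$$d_p(c) = \left(\sum_{i=1}^{n} |x_i - c|^p\right)^{1/p}$$

and define the center as the value of $c$ that minimizes $d_p(c)$. Then:
- $p = 1$ gives the median
- $p = 2$ gives the arithmetic mean
- $p \to \infty$ gives the midrange (minimizing the largest miss)
- $p \to 0$ gives the mode, if you minimize the sum itself without the root (in the limit it just counts how many data points you miss)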
5. Weighted arithmetic mean
In physics, the center of mass is the point where an object perfectly balances.
The weighted mean is like the center of mass of your data, with each point weighted by the weight you choose. The formula is basically the same as the physics version.
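A quick sketch in plain Python (numpy.average does the same thing via its weights argument):

```python
# Weighted arithmetic mean: sum(w_i * x_i) / sum(w_i),
# the same formula as a center of mass, with weights as masses.
values  = [90, 80, 70]
weights = [5, 3, 2]   # relative importance of each value

weighted_mean = sum(w * x for w, x in zip(weights, values)) / sum(weights)
print(weighted_mean)  # 83.0
```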
6. Geometric mean
To compute this mean, we multiply all n values together and take the nth root.
If your investments grew by a factor of x in the first year and y in the second, then the average yearly growth factor is the geometric mean of x and y.
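For example, growth factors of 1.10 (+10%) and 1.20 (+20%) average to √(1.10 × 1.20) ≈ 1.1489, not 1.15. A sketch using the standard library:

```python
import statistics

factors = [1.10, 1.20]  # +10% in year one, +20% in year two

avg = statistics.geometric_mean(factors)  # nth root of the product
print(avg)  # ~1.1489, i.e. about 14.89% average annual growth

# Sanity check: two years at the average rate reproduce the total growth.
print(avg ** 2, 1.10 * 1.20)  # both ~1.32
```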
7. Harmonic Mean
You might be wondering: when would anybody ever use this crazy mean?
It actually has plenty of real-world relevance. For example, if you drive to work at speed x and return home at speed y, the average speed of your round trip is the harmonic mean of x and y.
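Concretely, driving out at 30 mph and back at 60 mph averages to 40 mph, not 45, because you spend twice as long at the slower speed. A sketch:

```python
import statistics

# Round trip over the same distance: out at 30 mph, back at 60 mph.
print(statistics.harmonic_mean([30, 60]))  # 40.0

# Why 40 and not 45: with a 60-mile leg each way, the trip takes
# 2 hours out + 1 hour back, so 120 miles / 3 hours = 40 mph.
```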
8. Root mean square
This one shows up in physics class as a measure of the power of waves. Waves vary in time and this is the right way of averaging over that variation.
This mean also shows up, in a slightly modified form, as a measure of average error in machine learning models.
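The statistics module has no RMS function, so here's a direct implementation (the machine-learning variant, RMSE, is the same recipe applied to prediction errors):

```python
import math

def rms(xs):
    """Root mean square: square each value, average the squares, take the root."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

# A signal that oscillates around zero: the arithmetic mean hides the
# variation entirely, while the RMS captures its typical magnitude.
signal = [1, -1, 1, -1]
print(sum(signal) / len(signal))  # 0.0
print(rms(signal))                # 1.0
```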
FRAMEWORK: The Algebraic Perspective
9. Power Mean
The root mean square and the arithmetic, geometric, and harmonic means probably seem disconnected as well, but they have their own unifying principle.
They are specific examples of the power mean!
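Here's a sketch of the power mean (the p = 0 case is defined by its limit, which turns out to be the geometric mean):

```python
import math

def power_mean(xs, p):
    """Power mean: ((1/n) * sum of x**p) ** (1/p).

    p=1 is the arithmetic mean, p=2 the root mean square,
    p=-1 the harmonic mean; the p->0 limit is the geometric mean.
    """
    if p == 0:  # limiting case: exp of the mean of the logs
        return math.exp(sum(math.log(x) for x in xs) / len(xs))
    return (sum(x ** p for x in xs) / len(xs)) ** (1 / p)

data = [30, 60]
print(power_mean(data, 1))   # 45.0   arithmetic
print(power_mean(data, 2))   # ~47.43 root mean square
print(power_mean(data, -1))  # ~40.0  harmonic
print(power_mean(data, 0))   # ~42.43 geometric
```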
10. F-Mean
The power mean itself is just a specific example of a more general concept, the F-mean!
If there's a function f that's continuous and strictly monotonic on the range of our data (so it has an inverse), we can use it to define our own mean.
(You'll probably never use this but it's still fun to know.)
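In symbols, the F-mean (also called the quasi-arithmetic mean) is defined as:

$$M_f(x_1, \dots, x_n) = f^{-1}\!\left(\frac{1}{n} \sum_{i=1}^{n} f(x_i)\right)$$

Taking f(x) = x^p recovers the power mean, f(x) = ln x gives the geometric mean, and f(x) = 1/x gives the harmonic mean.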
FRAMEWORK: A Shopping List of Desirable Criteria
We can further unify the concept of an average by thinking of averages as a collection of procedures that usually satisfy most of the following properties.
(Don't worry. I will explain these in plain English in the next tweet.)
Homogeneity: mean of k times the data is k times the mean
Symmetry: order of the data doesn't matter
Monotonicity: increasing any of the values never decreases the mean
Idempotence: mean of identical values is the value itself
Boundedness: mean is always between the min and max
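As a toy illustration (not a proof), here's what a few of these properties look like when checked against the arithmetic mean in Python:

```python
import statistics

data = [2, 3, 5, 10]
k = 7

# Homogeneity: scaling the data by k scales the mean by k.
assert statistics.mean([k * x for x in data]) == k * statistics.mean(data)

# Symmetry: reordering the data leaves the mean unchanged.
assert statistics.mean(sorted(data, reverse=True)) == statistics.mean(data)

# Boundedness: the mean lies between the min and the max.
assert min(data) <= statistics.mean(data) <= max(data)
```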
SUMMARY:
Averages arise in diverse ways:
- measures of distance to our data
- analogies to physical properties (center of mass)
- summarizers of physical and real-world processes like average speeds, growth rates, and waves
Despite that diversity, they aren't disconnected concepts: there are several intriguing, unifying themes in their mathematical properties.
If you liked this thread and want more stuff like this on your timeline, give me a follow and don't forget to click the notification bell!