Researchers put together an incredible workplace wellness program that provided thousands of workers with paid time off to receive biometric health screening, health risk assessments, smoking cessation help, stress management, exercise, etc.
What did this do for their health?🧵
So, for starters, this program had a large sample and ran over multiple years.
Because of it, we have evidence on what people do with clinical health info, with exercise encouragement and advice, with nutritional knowledge, through peer effects, and so on.
People in the treatment group were offered cash rewards ranging from $50 to $350 to participate.
Go to a screening? Earn some money and help yourself: learn more about your own health and potentially improve it.
What could be simpler?
The participants certainly seemed to think so.
The cash rewards did get more people into screenings and advising, and they even got some people moving more.
If estimates from earlier studies were to be believed, this effort should even have saved employers money!
But it didn't.
Average monthly medical spending didn't change when comparing the treatment to the control group.
In fact, this study stands out in the literature for getting nulls across basically every outcome relevant to the employer.
Health and wellness incentives and opportunities did not make people less absent or medically costly, or much else (which we'll get to).
Before getting to other outcomes, we have to ask: Why trust this over other results? A few reasons:
For one, it was bigger than other studies in the experimental literature.
For two, it was preregistered, publicly archived, and independently analyzed by outside researchers.
All of that on its own is really good. But what really takes the cake is that the prior literature was impacted by p-hacking and publication bias, whereas these researchers committed to publishing their results regardless.
Who do you trust more?
"We aren't financially conflicted and we'll publish regardless of what happens and of course we provide data and code."
or "p = 0.04, this program is life-changing (ignore my financial conflicts of interest :))"
I know my answer, you know my answer.
Now let's talk other outcomes.
Medical spending: not affected in total, admin-wise, drug-wise, office-wise, hospital-wise, or in terms of any utilization metric.
Employment and productivity: Didn't affect employee retention, salaries, promotions, sick leave, overtime, etc.
More employment and productivity: Didn't affect job satisfaction or feelings of productivity. BUT, did affect views about management priorities on health (increased) and the likelihood of engaging in a job search (increased).
That's backfiring, potentially.
Participants failed to increase their number of gym visits. They didn't run the Illinois marathon, 10K, or 5K more often. Despite smoking cessation advice and help, they didn't smoke less. They didn't report better health. Hell, they became (marginally significantly) fatter!
Across basically every metric, the results were null, null, and--my favorite--null.
And this is what we expect from credible evaluations of interventions in high-quality samples. It's so common, in fact, that the result has been dubbed the "Stainless Steel Law":
But the most amazing detail, in my opinion, is that this study went further:
It explained why prior observational work showed such large benefits for workplace wellness programs.
The reason is selection: health-conscious employees selected into the program and stuck with it!
These programs' effectiveness is a classic example of selection leading to results that simply cannot be trusted.
But... how?! Why?! After all, this program had all the ingredients that so many prominent people think will solve America's public health issues.
The answer is that they misunderstand people.
Most people are lazy, and commitment is hard.
My recommendation to people who haven't learned that is to do a clinical rotation, or to read about the thousands of programs across America that have tried food delivery, coaching, etc., with no effect.
This leads me to something important:
Do you know why Ozempic works so well and has enjoyed such incredible popularity of late?
If you can understand these headlines, you'll get it.
Ozempic makes it automatic to lose weight.
It takes out the effort. People find it easier to do more (in this case, work) than to eat less or to do things that simultaneously bore and fatigue them (exercise) without a commitment mechanism like a boss.
For this reason, GLP-1RAs are going to decisively beat all efforts to advise people, to provide them with healthy food and instructions on how to prepare it, and all the standard advice that's been in vogue for decades but clearly hasn't worked.
To top this all off, here's the result of a contemporaneous large, cluster-randomized controlled trial of workplace wellness programs at BJ's Wholesale Club.
A similar intervention, somewhat optimistic expectations, and, once again, no results to show for it.
Baiting is a simple idea: put a large, powerful animal like a bull or a lion in the ring with several dogs, and the winner lives.
The sport has existed for thousands of years. One of our first records is of Indians showing it to Alexander the Great.
The first record in England comes from 1610 and features King James I requesting the Master of the Beargarden—a bear training facility—to provide him with three dogs to fight a lion.
Two of the dogs died and the last escaped because the lion did not wish to fight and retreated.
I'm not talking fatalities, but bites, because bites are still a bad outcome and any dog who bites should be put down.
If we take the annual risk a dog bites its owner, scale it for pit bulls and Golden Retrievers, and extrapolate 30 years...
How do you calculate this?
Simple.
First, we need an estimate of the portion of the U.S. population bitten by dogs per year. Next, to adjust that, we need the portion of those bites that are to owners. For dogs overall, about 1.5% of the population is bitten per year, and roughly 25% of those bites are to owners.
Then, to obtain lifetime risk figures, we need to pick a length for a 'lifetime'. I picked thirty years because that's what I picked. Sue me. It's about three dog lifetimes.
P(≥1 bite) = 1 − (1 − p)^t
It's pure probability math. To rescale for breed, we need estimates of the relative risk of different breeds being the perpetrators of bites. We'll use the NYC DOHMH's 2015-22 figures to get the risk for a Golden Retriever (breed = "Retriever" in the dataset) relative to all other dogs, and Lee et al. 2021's figures to get the risk for a pit bull. The results don't change much if we just use the NYC figures; the pit bulls simply come out as even higher risk.
To rescale p for a breed, it's just p_breed = p_baseline × RR_breed.
Then you plug that back into the thirty-year bite probability. If you think, say, pit bulls are undercounted in the denominator of their RR, OK! Take that to the limit: say every 'Black' neighborhood in New York has one, halve their estimated risk, and bam, you still get 1-in-5 to 1-in-2.5 owners getting bitten in the time they own pit bulls (30 years).
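The steps above can be sketched in a few lines of Python. The annual figures (about 1.5% of people bitten per year, ~25% of bites to owners) are from this thread; the relative-risk values below are illustrative placeholders, not the published Lee et al. or DOHMH numbers.

```python
# Lifetime owner-bite risk: P(>=1 bite) = 1 - (1 - p)^t

def lifetime_risk(annual_p: float, years: float) -> float:
    """Probability of at least one bite over `years`, assuming an
    independent, identical annual risk each year."""
    return 1 - (1 - annual_p) ** years

# From the thread: ~1.5% of people bitten per year, ~25% of bites to owners.
p_baseline = 0.015 * 0.25  # ~0.375% annual owner-bite risk, all breeds

# Illustrative relative risks (placeholders, not the published figures);
# the breed rescaling is just p_breed = p_baseline * RR_breed.
for rr in (1.0, 2.0, 4.5):
    risk_30y = lifetime_risk(p_baseline * rr, 30)
    print(f"RR={rr}: 30-year owner-bite risk = {risk_30y:.1%}")
```

With placeholder RRs of 2 and 4.5, the 30-year risks come out near 20% and 40%, i.e. roughly the 1-in-5 to 1-in-2.5 range quoted above.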
And mind you, bites are not nips. As Ira Glass had to be informed when he was talking about his notorious pit bull, it did not just "nip" two children, it drew blood, and that makes it a bite.
Final method note: the lower bound for Golden Retriever risk came out to 0.00131%, which rounds down to 0. Over a typical pet-dog lifespan of 10-13 years, an individual Golden Retriever will almost certainly not bite its owner even once, whereas a given pit bull that lives 11.5 years will have an 18-33% chance of biting; if we use the DOHMH RRs instead, it's much higher. Even using the DOHMH RR and doubling their population, that still holds.
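As a back-of-envelope sanity check (my own, not part of the original analysis), the same formula can be inverted to see what annual owner-bite risk an 18-33% lifetime figure implies over an 11.5-year lifespan:

```python
# Invert P = 1 - (1 - p)^t to recover the implied annual bite risk p.

def annual_from_lifetime(lifetime_p: float, years: float) -> float:
    """Annual risk implied by a lifetime probability over `years`."""
    return 1 - (1 - lifetime_p) ** (1 / years)

# An 18-33% lifetime risk over 11.5 years implies an annual owner-bite
# risk of roughly 1.7% to 3.4% -- several times the ~0.375% all-breed
# baseline used above.
for lifetime_p in (0.18, 0.33):
    p = annual_from_lifetime(lifetime_p, 11.5)
    print(f"lifetime {lifetime_p:.0%} over 11.5y -> annual {p:.2%}")
```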
The very high risk of a bite associated with a pit bull is highly robust and defies the notion that '99.XXXX% won't ever hurt anyone.' The idea that almost no pit bulls are bad is based on total fatality risk and it is a farcical argument on par with claiming that Great White Sharks shouldn't be avoided because they kill so few people.
Frankly, if we throw in non-owner risk, the typical pit bull *will* hurt some human or some animal over a typical pet dog's lifespan. And because pit bulls live a little bit shorter, you can adjust that down, but the result will still directionally hold because they are just that god-awful of a breed.
Final note:
Any dog that attacks a human or another dog that wasn't actively attacking them first should be put down. That is a big part of why this matters. These attacks indicate that the dogs in question must die.