Jason Abaluck @Jabaluck
86 tweets, 11 min read
It's time for Day 2 of the NBER Health Meetings! I've heard concerns about the volume of tweets so let me allay those concerns: I have my laptop today and can type much faster, so no longer will you have to wait precious seconds or minutes for the live tweeting to buffer.
I've also had a lot of requests from people I just invented to give my thoughts on single payer health care and the most important inefficiencies in the healthcare system based on our discussion at dinner last night -- that's coming up later
For now, we have paper #1: Justin Sydnor presenting work with Keith @KeithMMEricson on liquidity constraints in health insurance
I'll give my favorite motivation for this paper -- we see people doing lots of funny things when they choose health plans. Jon and I find that people often choose plans with no deductibles even if they lose money by doing so, and Sydnor and Bhargava find that people...
Are even willing to pay $600 in premiums to get rid of a $500 deductible. That seems really strange! My preferred explanation prior to this paper is that people are confused and don't understand deductibles and that they're throwing away money.
@nealemahoney points out that this explanation must be partly right -- even rich people who clearly don't have liquidity constraints show these weird inconsistencies. Nonetheless, the situation is more complex for people who don't have wealth stored.
Let's recap the standard insurance demand model -- two things matter at the end of the day. People are risk averse and so they'd prefer to pay a constant premium equal to their expected costs so that an insurer will take on all of their medical risk.
However, insurance leads to moral hazard -- people consume care that generates less value than they pay for it if they don't have to pay the full cost (or any cost) and this leads to overconsumption. Baily showed (and Chetty later generalized) that in general,
We can think of optimal insurance as trading off risk protection vs. moral hazard. Feldstein and Gruber argue that this means that optimal insurance is "major risk insurance" -- a large deductible to combat moral hazard, and full catastrophic coverage to protect you...
In states of the world when insurance is most valuable because you've experienced the largest shock. This paper says, for liquidity constrained people, maybe not!
Holy moly I just discovered there is a "plus" button so I don't have to keep clicking back to my profile between every tweet.
Justin and Keith are going to focus on the *risk protection* piece of insurance. We're going to step back from moral hazard and try to understand how the risk protection value on the margin depends on whether people are liquidity constrained.
@JonSkinner17 asks the question I was wondering myself. In what way does the conventional Baily-Chetty formula *not* allow for liquidity constraints? I don't get Justin's answer about the relationship. Will try to figure this out as the paper goes on!
Maybe the story is that the conventional model -- while made dynamic by Chetty -- nonetheless assumes perfect capital markets and if you can't save and borrow against your future income that things look different? I'm not sure if that's right.
Meltzer -- Singapore has forced savings and high-deductible health plans. Interesting to look at. Not totally on point but I agree. Health economists have too little incentive to study other countries because of the importance of domain specific knowledge,
We just spent all this time learning about Medicare -- now you expect us to learn about *another* country?? It's not privately optimal to do so, but it would be socially optimal. Basically, subsidize us!!
This paper will be theory plus some empirics trying to understand the degree to which liquidity constraints correlate with a preference for "dominated plans". I'm ambivalent about the focus on "dominated plans" here -- this is a clear puzzle for sure, but I'd rather see...
whether liquidity constraints have broader explanatory power instead of focusing on an exceptional case. Okay, so now we have the model -- very conventional expected utility model. Now add dynamics. T consumption periods, no assets, per-period income. Contract covers 12 months.
Consider two ways to pay premiums -- evenly divided over months, or all upfront. Assume a large loss has some probability of occurring in each period assuming it hasn't occurred yet.
Now we have a dynamic programming problem. You're maximizing consumption given the continuation value and remaining assets next period. Special cases: no borrowing and cash on hand. Incidentally I checked the Chetty paper and his formula still applies...
with arbitrary borrowing constraints so I don't have a good account of the difference yet (this isn't snark, I'm sure there is one, I just haven't figured it out yet).
Amy asks, "What are liquidity constraints?" Does it mean "poor" or does it mean "can't borrow"? Answer: Justin is saying you're liquidity constrained if both apply. You're not constrained if you can still draw down assets (this is a semantic point but a helpful one).
Also clarifies that the model is neoclassical in that there is no gap between demand and normative welfare. The value of insurance is defined in the usual way as the certainty-equivalent premium difference that would make you indifferent between two contracts.
Let's do some simulations. Annual income is $20K, log utility, no discounting, gross interest rate of 1; we'll vary borrowing costs. Potential loss > $1K. What is the amount you must be compensated to increase the deductible from $500 to $1K? (imagine fully insured after year 1)
Lengthy discussion of whether the (imagine fully insured after year 1) assumption ignores the most important case because it rules out precautionary savings. Lots of good questions but wish @ProfFionasm were here to tell everyone we should get to some results first!
Basic idea is that when borrowing is really expensive, if you have to pay premiums up front, you're really upset. If premiums are paid regularly, you're really happy. In the latter case, you may pay more than $500 in premiums to eliminate a $500 deductible.
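To see the mechanism, here's a toy hand-to-mouth version of this logic -- my own illustration, not the authors' code. The $20K income and log utility are from the talk; the 20% loss probability, $100 base premium, and the single-loss / no-savings setup are assumptions I made up for the sketch:

```python
import math

def expected_utility(deductible, monthly_premium, income_m=20000 / 12, q=0.2, months=12):
    """Expected log utility over one year for a hand-to-mouth agent (no savings,
    no borrowing). A loss exceeding the deductible hits with probability q, and
    the deductible must be paid out of that single month's income."""
    base = income_m - monthly_premium
    u_no_loss = months * math.log(base)
    u_loss = (months - 1) * math.log(base) + math.log(base - deductible)
    return (1 - q) * u_no_loss + q * u_loss

base_premium = 100.0
target = expected_utility(1000.0, base_premium)  # high-deductible contract

# Bisect on the extra monthly premium x that makes the agent indifferent
# between the $1000 deductible and a $500 deductible costing x more per month.
x_lo, x_hi = 0.0, 200.0
for _ in range(60):
    x = (x_lo + x_hi) / 2
    if expected_utility(500.0, base_premium + x) > target:
        x_lo = x
    else:
        x_hi = x

annual_wtp = 12 * x
print(f"annual WTP to cut the deductible from $1000 to $500: ${annual_wtp:.0f}")
```

In this stripped-down version the willingness to pay comes out well above the actuarially fair value of the deductible reduction ($100 = 0.2 x $500), though still below $500; the paper's stronger "pay more than $500 to remove a $500 deductible" result additionally needs the upfront-premium and costly-borrowing channels.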
Upshots: we should think carefully about the schedule of premium payments. Some mistakes may not be mistakes.
Result from Arrow: if you have less than perfect actuarial value, you always want a deductible + full coverage after. Some ACA plans look like this. More standard is deductible + partial coverage range + full coverage. Standard explanation: moral hazard.
This might be better due to liquidity constraints because you prefer a series of smaller losses to one big one.
Okay, so I think the relationship with Chetty-Baily is that the Chetty-Baily formula telling us to compare the change in consumption with the compensated elasticity of utilization w.r.t. insurance coverage still holds here at the optimum, but some of our prior theoretical...
intuitions about what would realize this may be wrong.
My main takeaway is that I would like to see the empirical work also try to quantify the relevance more broadly instead of focusing only on exceptional cases like when some plans dominate others.
Next up, paper #2! Ralph Koijen presenting work with Stijn Van Nieuwerburgh on "Financing the War on Cancer".
Cancer drugs are really expensive. Richer countries tend to use more of them (I assume despite large price differences across countries)
The point of this paper is that life insurers get a huge benefit from life-extending treatments from pharma. We should take advantage of this!
I think another way of putting this point is, "Sure, people like living rather than dying, but not enough." Life insurance companies *love* when you're alive (I assume annuity companies hate it -- maybe that can also be used somehow?)
There is a total benefit of 6.8 billion dollars a year from existing immunotherapies for cancer based on the estimates from clinical trials and an actuarial model. The total cost of these treatments is $10 billion, the total copay is $4 billion.
I'm confused at this point -- if life insurance companies are competitive, aren't these savings priced into the policies?
In other words, premiums are based on actuarial projections which presumably take into account that life expectancy changes over time due to medical technologies?
They claim that since insurance companies are profitable this is a "Free lunch". We can just have these life insurers pay for pharma. But that doesn't make a lot of sense (this is one of those presentations where no questions are allowed for the first 20 minutes)
One could say more generally, "Industry X is profitable. Let's take their money, it's a free lunch!"
We need to ask as IO economists many questions such as, "Is there some reason it's efficient for companies to have non-zero profits?" If not, is there some way to take money from them so that the incidence falls entirely on the firms and not consumers?
These are tricky questions, and I don't see how the paper answered them. Here come the questions!
Neale points out benefits to life insurers are losses to annuity companies. Agree. Neale also points out that if life insurers subsidize drugs, drug prices might change.
Q&A session has left me more puzzled. On the one hand, it seems like there is some externality here (which is what the first few slides got at). Life insurers get some additional benefit from people using drugs that people don't take into account.
On the other hand, I don't think the way they did their calculation measured this externality.
Specifically, I think their calculation makes an assumption that life insurers are not taking into account potential future health benefits at all when they price their policies. I don't think this assumption is plausible and harassed them about it a little.
I do think there is a correct idea here -- there *is* an externality that could be corrected, but I don't think they have properly calculated it.
Next up: Frank Lichtenberg on the long-run impact of new medical ideas on cancer survival and mortality.
Two roughly equal components of GDP gains over the last century or so: income and health (according to Murphy and Topel)
There is a consensus in macroeconomics that income growth comes from technology / new ideas. Is the same true for health growth?
I don't totally get this question -- what's the alternative explanation for health growth besides "New ideas"? Is it "diffusion of existing ideas" or is that also part of how he's construing technology?
He's going to construct some measure of "new ideas" and see how they explain the 5-year observed survival rate conditional on cancer diagnosis and unconditional potential life years lost prior to age 75. I think this is interesting but a little different from his motivation.
What he's actually doing is asking how much of the change in mortality can be explained by the particular measure of innovation he has in his data (assuming identification etc...)
Has data on a bunch of different types of cancer over time. Regresses survival on 5-year lagged measure of new ideas based on pub med articles. I think what he's trying to get at here is, "Suppose we subsidized R&D and had more pubmed articles with a bunch of citations..."
Would this reduce cancer mortality? Very interesting question, hard to get at super cleanly, but this is what tenure is for -- this is not airtight enough to publish in a top journal but still worth seeing what suggestive evidence exists.
His measure of innovation is based on keywords used in articles about particular types of cancer -- he's counting the number of new "descriptors" and calling that number of new ideas. But wait, he doesn't just count the number of new descriptors,
he computes the "vintage" of descriptors, some kind of weighted average of the age of all the descriptors used. The idea is that newer ideas are of higher quality than older ideas. I don't totally get this but I suppose it's highly correlated with whether there are a bunch of
recent new ideas.
There is some noise in the data. He gives example of a new idea in a cancer article in 2016 according to his methodology "Alcohol drinking in college." Probably not invented in 2016!
Story of the paper in one chart: horizontal axis is change in idea novelty between 1981 and 1995. Vertical axis is change in mortality 18 years later. Steep downward slope.
I'm still a little confused about the counterfactual -- does this show that more R&D money would lead to more innovation? Not necessarily -- could be that these ideas were low-hanging fruit that would have come with or without more money.
His last point is interesting -- we can forecast changes in cancer mortality in the future using this correlation (even if you don't believe it's causal!)
Paper #3: David Chan presenting work with Jon Gruber on Triage Judgments in the Emergency Department
I take the idea as being: people do this currently in an ad hoc way, wouldn't it be better to do things in a less ad hoc way?
Why is this a question for economists rather than entrepreneurs? Entrepreneurs are kind of doing it wrong -- they're using algorithms to learn what humans already do. That's silly -- we want to know what would be optimal.
This is incidentally also the basic premise of my paper with Leila and David Chan on blood transfusions (working paper coming soon!)
They will proceed in two steps: identify high performers and then try to emulate high performers. I'm not sure why a two-step process like this is needed.
Laptop battery is dead, back to phone!
I'm confused about a narrow question — why not estimate an optimal triage function giving the triage rule as a function of attributes of all patients? Why go in two steps?
Mark Duggan asks a good question. What if triage decisions are correlated with unobservable physician skill?
Currently, the triage nurse handles triage — there is a 114-page book of guidelines. Do people follow it? Probably not.
Basic idea is — get a value-added estimate for each nurse; a 1 SD better triager means a 0.18 pp reduction in mortality (13.4%). Validate with quasi-experimental variation, then ask — who are they?
Triage varies a lot by place — nurses assign a score from 1-5 but sites often have very different procedures; different VA EDs have very different standards for how buckets and exceptions are formed.
They construct a jackknife estimate of triager value added (leave one out mortality) for each patient using mortality for all patients seen on other days.
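Mechanically, the leave-one-out construction is simple. Here's a minimal sketch with toy data — the record layout and numbers are illustrative, not the paper's data or code:

```python
from collections import defaultdict

# Toy patient records: (nurse_id, day, died). Purely illustrative.
records = [
    ("A", 1, 0), ("A", 1, 1), ("A", 2, 0), ("A", 3, 0),
    ("B", 1, 1), ("B", 2, 1), ("B", 2, 0), ("B", 3, 1),
]

def loo_value_added(records):
    """For each patient, compute the mean mortality of the same nurse's
    patients seen on *other* days (leave-one-day-out, so same-day shocks
    don't mechanically contaminate the measure)."""
    by_nurse_day = defaultdict(lambda: [0, 0])  # (deaths, count) per nurse-day
    by_nurse = defaultdict(lambda: [0, 0])      # (deaths, count) per nurse
    for nurse, day, died in records:
        by_nurse_day[(nurse, day)][0] += died
        by_nurse_day[(nurse, day)][1] += 1
        by_nurse[nurse][0] += died
        by_nurse[nurse][1] += 1
    out = []
    for nurse, day, died in records:
        d_all, n_all = by_nurse[nurse]
        d_day, n_day = by_nurse_day[(nurse, day)]
        d_loo, n_loo = d_all - d_day, n_all - n_day
        va = d_loo / n_loo if n_loo else float("nan")
        out.append((nurse, day, va))
    return out

print(loo_value_added(records))
```

Each patient then gets a "how deadly is my nurse, measured off everyone else" number, which is what feeds the day-level instrument described next.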
Form an instrument that lives at the “day level” — so I think the idea is, we have these nurse measures — if you arrive on a day when the nurses are bad by this measure, are you more likely to die?
Key assumption is random assignment of patients to nurses conditional on time dummy. Show that it’s balanced wrt observables.
So using this IV strategy we get quasi-experimentally validated value-added measures for each nurse triager — now we want to know, what explains them?
They relate these fixed effects to predicted mortality, predicted wait time and ESI behavior. Age, gender and tenure have little impact
Next they ask if there are controls they can add that make the triage nurse effect go to zero. But still a correlation.
Triagers who are more sensitive to predicted mortality in their ESI scores have lower mortality (need to think carefully about how to interpret this but we are 5 min from lunch).
With station specific lasso functions of observables, they can explain 80% of the measured variation in triager value added
This means that maybe we can have an algorithm that emulates triage nurses? You can predict which nurses are good, but David's stronger claim, that you can therefore automate high-value-added triage, is less clear to me.
I talked to Dave at length and now understand much better. There are TWO conceptually separate exercises they smashed together into one regression
Exercise 1 is constructing nurse value added and validating that with quasiexperimental variation
Exercise 2 is using variation in nurse propensity to treat with different x’s in order to identify the optimal rule — so that part is not purely correlational