Jason Abaluck @Jabaluck
Time to take my Twitter beyond pedantic comments on @C_Garthwaite and live tweet the NBER Health meetings to all 198 of my followers, including several humans
First up is Ami Ko presenting work with Hanming Fang on rating areas in the ACA —
The ACA imposes many “community rating provisions” — you have to charge everyone the same price with the exception of age, smoking status and geography
This arguably strengthens insurance coverage by preventing premiums from rising next year if you get sick (reclassification risk)
This paper studies the fact that the geographic provisions are imperfect — you can’t vary prices within rating areas BUT you can simply stop offering coverage in some counties (thus setting an infinite price)
For reasons I don’t yet fully understand, they want to disentangle whether this behavior is positively correlated across plans (reflecting common shocks) or negatively correlated across plans (reflecting competition)
I checked the working paper — the idea is that plans are using the ability to offer plans in some counties and not others to each pick a county where they are a monopoly to avoid competition...
Then this would call for a regulation saying insurers must offer plans everywhere
Alternatively, if insurers were avoiding high cost counties, this would suggest subsidies need to be adjusted to avoid the problem that insurers are losing money on particular predictable risk classes
Live tweeting is hard and I already missed 10 interesting questions while furiously typing! Several concern whether controlling for Medicare Advantage entry somehow captures fixed costs (I didn’t fully understand the idea or the questions)
On average, insurers enter 85% of counties within a “rating area” (dozens of counties where they’re supposed to have the same price)
Gruber: “Why not study New Jersey?” I didn’t catch the reason while typing, but — why not New Jersey?
States with big rating areas tend to have higher insurer participation (insurers can exit unprofitable counties). Ah, New Jersey has maybe just one rating area
Insurer participation drops in the right tail of the health distribution across counties. I take this to be evidence of inadequate risk adjustment
Ashley wonders why some counties have 100% heavy drinkers. We misunderstood — these are quantiles of things that predict bad health, not % of people who are heavy drinkers
Anyway, point remains that sickest counties have less entry within rating areas
Next point — insurers offer ACA plans in 90% of counties where they also have MA plans, but only 70% of counties where they don’t, showing “fixed costs play a role.” Not totally following this point
Model — Bertrand competition with “spurious product differentiation”. Consumers observe noisy signal of prices? Neale: most people have subsidies. Ami: that’s what this model captures. This seems like a pretty heuristic argument but...
I see the point that people are likely not fully informed about subsidies and/or don’t understand the net price (supported by my work with Jon in Oregon). Don’t know if this supports the additive noise specification
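Not their model, but here’s a quick illustration (my toy code, hypothetical numbers) of why additive noise in perceived prices softens price competition — a higher-priced insurer still keeps a meaningful share of consumers:

```python
# Toy illustration (mine, not the paper's model): additive noise in perceived
# prices lets a firm price above its rival and still keep market share.
import numpy as np

rng = np.random.default_rng(3)
p = np.array([100.0, 110.0])                    # firm 2 charges $10 more
noise = rng.normal(0, 15, size=(100_000, 2))    # consumer-level perception noise
share_2 = (p + noise).argmin(axis=1).mean()     # fraction choosing firm 2
print(share_2)  # roughly 1/3 despite the higher price -> softened competition
```

With zero noise that share would be exactly zero, so noise acts like product differentiation even when the products are identical.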
They numerically solve an equilibrium model which captures the intuitions at the start — insurers not offering in some counties could be due to competition or market-level shocks, which predict negative and positive correlations between insurer entry decisions, respectively
They construct a measure which quantifies whether insurers tend to enter in the same counties and the result is...
0.35! So positive correlation, which suggests imperfect risk adjustment is the problem. But can’t this be investigated more directly using data on insurer subsidies / profitability?
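For the curious, a measure like that can be as simple as the average pairwise correlation of county-level entry indicators across insurers within a rating area. A minimal sketch (mine, not necessarily their exact construction):

```python
# Hypothetical sketch (not necessarily the paper's measure): average pairwise
# correlation of insurer entry indicators across counties in a rating area.
import numpy as np

def entry_correlation(entry):
    """entry: (n_insurers, n_counties) 0/1 matrix of entry decisions."""
    n = entry.shape[0]
    corrs = []
    for i in range(n):
        for j in range(i + 1, n):
            a, b = entry[i], entry[j]
            if a.std() > 0 and b.std() > 0:      # skip insurers that enter everywhere/nowhere
                corrs.append(np.corrcoef(a, b)[0, 1])
    return np.mean(corrs) if corrs else np.nan

# Toy example: two insurers avoiding the same counties -> positive correlation
entry = np.array([[1, 1, 1, 0, 0],
                  [1, 1, 0, 0, 0]])
print(entry_correlation(entry))  # > 0 suggests common shocks; < 0 would suggest competition
```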
Now they spend a lot of effort conducting formal statistical tests of whether this correlation is different from zero. I already believed that. P-value of 1e-07. I would rather see them dive into more direct evidence of the imperfect risk adjustment purportedly driving the result
Ah that is what they are doing now!
Or sort of. They do regressions to demonstrate earlier point re MA entry predicting ACA entry. Can’t you look directly at the adequacy of risk adjustment in counties with sicker people? Maybe this is hard or impossible for reasons I am not aware of.
Anyway I am complaining a lot but I will do that with every paper. The observation that common pricing within rating areas is undermined by insurer exits is important, and this paper suggests there is imperfect risk adjustment in high-risk counties.
Next up — Maria Polyakova presenting joint work with Kate Bundorf and Ming Tai-Seale on a randomized experiment trying to help consumers make better insurance decisions
Quick aside: see nber.org/papers/w22917 for our instructive and comically unsuccessful attempt to improve choices by giving people information
They frame this paper as testing whether consumers have imperfect information about product attributes or are unsure how to map attributes into utility function.
They frame this as trying to reconcile the epic Abaluck-Gruber vs Ketcham et al debate. I don’t fully see this. I see those papers as focused more on the bottom line of whether choices get better over time. But I agree this distinction is important...
For trying to understand how different types of info might improve choices.
My prior before seeing paper is that people are uninformed about both attributes and how to map attributes into utility — but maybe small set of attributes like premiums, donut hole coverage and deductibles are “known” (at least taken into account in choices)
Incidentally the question they are asking is closely related to early stage work of mine with Giovanni Compiani. Can we distinguish whether a low utility weight in a logit reflects low value or lack of information? (Giovanni and I want to do this w/o an experiment...
And thus predict the efficacy of experimental interventions)
Ah okay, their intervention is a bit different than I expected. Control arm; treatment 1, where they get expected total costs and quality (like what Jon and I did in Oregon); and treatment 2, where they get an “expert rating” combining both
This also is closely related to work by Jon Gruber, Ben Handel and Jon Kolstad on Picwell (I have only heard about this in conversation, but the Jons tell me they find big effects of informing brokers with an algorithm and having brokers convey the info)
This is in contrast with existing work like Kling et al, Ericson et al and my work with Jon which suggests it’s hard to change behavior with info
Anyhow, first results here are that people in the expert arm switch a bit more (8 pp) and also (oddly) spend more time searching
I’d call this broadly consistent with earlier work — it’s damn hard to get people to switch. Info helps a little. Magnitude here is comparable to Kling et al, but what’s new is the expert arm vs just information.
They find in structural model noise in both x and beta (attributes and weights) — called it! ;)
My main takeaway here is along the lines of Gruber, Handel and Kolstad — people still don’t know how to interpret info, but if you tell them what “experts” say they are more responsive. Overall switching rates are small.
Up next is Gruber, Hoe and Stoye, presented by Thomas Hoe from Cornell — they study a regulation in England that, unusually, limits the discretion of physicians
Starting point: emergency room wait times are important. If you’re waiting for care and then you die, that’s bad. So in the UK they regulated wait times — must be under 4 hours for 98% (later reduced to 95%) of patients. Enforced by financial penalties...
Docs can be fired if they don’t meet the threshold. HUGE discontinuity in wait times at 4 hours — almost no one over, but a huge peak right before
First, they’ll show us counterfactual wait times — then they’ll examine how treatment and mortality change.
Guy behind me handed me my hotel room key which fell out of my pocket.
Roughly a 20 min wait time reduction for people impacted by the policy, under the assumption that people outside the “exclusion region” near the 4-hour cutoff are unaffected. They try to test this by somehow comparing more and less busy depts. I didn’t follow the details but I get the gist...
Next, look at admissions probability. Huge spike at 4 hours. They clarify assumptions necessary to construct counterfactual distribution. Is story here that patients are admitted at 4 because when one patient is at 4 others probably are too?
No significant increase in length of stay, but 30 day costs do go up. 30 day mortality falls by 14%!!!
What saved lives — more admissions or shorter wait times?
Look at heterogeneity by diagnosis. I didn’t follow because I got distracted and missed main point but upshot was they think wait times are responsible.
This is a striking finding. Weird that I like papers by my advisor and frequent co-author. Big question I see this raising — should we impose more regulatory constraints on docs? (Is right interpretation that doc discretion can backfire or...
Is regulation somehow solving a coordination problem?)
Subsequent discussion has revealed I had no idea what was going on in that paper
Specifically, how did they get the counterfactual mortality distribution? Apparently they constructed it based on the observed wait time-mortality relationship beyond 4 hours, combined with the induced change in wait times
But implicit in this is some kind of assumption about who moves where between the ex ante and ex post distributions (specifically, that people are randomly reassigned from past 4 hours to before 4 hours). Don’t know if I buy this but need to think harder.
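Here’s my crude reconstruction of that logic with made-up numbers — emphatically not their code, and it bakes in exactly the reassignment assumption I’m unsure about:

```python
# Hedged sketch of the logic as I understood it (fake data, illustrative numbers):
# 1) estimate the mortality-wait-time slope from patients beyond the 4-hour mark,
#    where the target shouldn't distort behavior;
# 2) apply that slope to the induced wait-time reduction for affected patients.
import numpy as np

rng = np.random.default_rng(0)
wait = rng.uniform(0, 360, size=50_000)               # ED wait times in minutes
death = rng.random(50_000) < (0.02 + 0.0001 * wait)   # 30-day mortality rising in wait

mask = wait > 240
slope = np.polyfit(wait[mask], death[mask].astype(float), 1)[0]  # extra deaths per minute

affected_share, wait_reduction = 0.30, 20.0           # illustrative, not the paper's numbers
delta = slope * wait_reduction * affected_share
print(f"implied change in 30-day mortality: {-delta:.4f} ({-delta / death.mean():.1%})")
```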
Next up is Agarwal, Ashlagi, Azevedo, Featherstone and Karaduman on kidney exchanges
100K people on kidney waiting list — 12K per year get kidneys from deceased donors
You can’t buy and sell kidneys, but you can exchange. A wants to give a kidney to B but is incompatible, C wants to give a kidney to D but is incompatible, so A gives to D and C gives to B
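The mechanics are simple enough to code up. A toy sketch (mine) that finds two-way swaps among incompatible donor-patient pairs using blood type alone (real exchanges also screen on tissue type and PRA):

```python
# Toy illustration (mine): find two-way kidney swaps among internally incompatible
# donor-patient pairs, using ABO blood-type rules only.
CAN_DONATE = {
    "O":  {"O", "A", "B", "AB"},
    "A":  {"A", "AB"},
    "B":  {"B", "AB"},
    "AB": {"AB"},
}

def compatible(donor, patient):
    return patient in CAN_DONATE[donor]

# Each pair is (donor blood type, patient blood type) and is internally incompatible
pairs = [("A", "B"), ("B", "A"), ("A", "O")]

swaps = [
    (i, j)
    for i in range(len(pairs))
    for j in range(i + 1, len(pairs))
    if compatible(pairs[i][0], pairs[j][1]) and compatible(pairs[j][0], pairs[i][1])
]
print(swaps)  # [(0, 1)]: pair 0's donor gives to pair 1's patient and vice versa
```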
3 basic facts in this paper — half of kidney exchanges are “ad hoc,” within hospital (fact 1). This is inefficient (fact 2). Unreimbursed participation costs are the problem (fact 3).
Kidney exchanges can be more complicated — could have 3 party exchanges or more. But more parties require simultaneous surgeries. You don’t want to extract a kidney from a donor unless you know the patient will respond well. Usually <= 3
Can also have Good Samaritan chains. I donate to person B, person B’s friend agrees to make donation to person C etc... these don’t require simultaneous surgeries.
Factors like blood type and PRA determine the ease of finding a match (the number you can donate to and the number who can donate to you)
Can easily construct examples where no within-hospital matches are possible but cross-hospital matches are. But it might be the case that matching across hospitals helps one hospital and not another
This results in too few transplants if hospitals only care about their own transplants (they’re not rewarded for helping patients at other hospitals). Could have a situation with more total transplants but fewer at any given hospital
The majority of kidney exchanges are currently within hospital — originally 99%, down to about 50% as of 2014, mainly due to the expansion of the National Kidney Registry
Clear evidence of inefficiency: O donors (who can donate to anyone) are matched to non-O patients (who can receive from many) only 7% of the time on the national exchange but 23% of the time within hospitals.
Each of these donations from universal donors (the only people who can help other O’s) to non-O’s reduces the total number of transplants by about 1 — this is bad
(Aside — this presentation shows why Al Roth’s Nobel Prize was richly deserved. These questions were off economists’ radar until his work)
Model: the cost to a given hospital of giving transplants away is the lost transplants at that hospital plus transaction costs
The benefits are the rewards you get from the market (maybe you get transplants back). Cross-hospital decisions are subject to each hospital donating an incentive-compatible amount of kidneys (and to feasibility given the amount that hospital has)
Upshot of all of this is that hospitals should be compensated if they give valuable kidney donations they receive to a centralized exchange.
Now there are a bunch of supply and demand graphs. Not sure what these add to the previous point — reminds me of Ed Glaeser in undergrad micro circa 2003: “graphs confuse students” (bad general lesson — understand graphs and intuition and equations for deeper understanding)
The graphs show implied returns to scale from the national kidney exchange — how many transplants you get per donor. The national exchange is large enough that these are flat, but most hospitals are in the region of hugely increasing returns to scale, so their participation in the exchange would raise transplants per donor.
Not sure what identified that graph — he says “engineering estimates.” Martin Hackmann asks why donations can’t be stored — the model has an exit rate of unmatched donors due to death; kidneys can’t be stored more than 24 hours.
They are proposing a credit system where each hospital gets credits if it gives valuable donors to an exchange. Gruber: why not mandate participation in the national exchange? Nikhil: maybe innovation at the exchange level — the 2nd largest exchange innovated by looking globally for matches
Concluding thought — this seems like a clear inefficiency and the solution makes conceptual sense. Important problem identified. Hard to follow details of quantitative estimates in a 1 hour talk but the general approach seems sensible.
“Is this legal?” asks the guy in the back. “That’s a grey area,” says Nikhil. Hope the referees include a lawyer!
Marika Cabral makes a point to me afterwards — why not have a mandate to get around the incentive compatibility constraint? Simpler than credits. I agree
And we’re back from the standard NBER lunch fare of plain white chicken. Next up: Jon Skinner presenting work with Diego Comin and Doug Staiger on tech diffusion
Skinner: my coauthor is a macroeconomist. Gruber: so you only have three data points? Oh snap!
Why do technologies diffuse quickly in some places and not others? Maybe quick diffusion is due to overconfidence. Let’s look at the diffusion of implantable defibrillators (ICDs)
You know those paddles you see on TV shows? That’s an ICD, but they put it inside you.
Next time on medicine for economists: you know bicycles? That’s the Krebs cycle
A trial in 2005 shows ICDs reduce mortality substantially for certain types of heart disease
Medicare claims data show vast geographic variation in diffusion rates — e.g., Terre Haute, Indiana took off to 0.5 per 100 Medicare enrollees, while Savannah is flat at 0.1
Amy: why not use hospitals? Skinner: JUST WAIT. Also the macroeconomists couldn’t handle that many data points
Then in 2006 a study gave bad news about ICDs. Skinner says this leads to “exnovation.” This is a terrible word and anyone who uses it should be shot.
Basic point is ICD use went up and then down, but at vastly different rates across hospitals. Next, look at the hospital level. Construct a shrunken estimate of mortality at the hospital-year level and correlate it with ICD adoption
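My guess is “shrunken estimate” means the usual shrinkage of noisy hospital-year mortality rates toward the overall mean, with small hospitals shrunk the most. A sketch under that assumption (mine, not their code):

```python
# Sketch of shrinkage as I understand the term (assumption, not the authors' code):
# pull noisy hospital-year mortality rates toward the grand mean, shrinking
# small (noisy) hospitals more than large ones.
import numpy as np

def shrink(rates, n_patients):
    """rates: raw hospital-year mortality rates; n_patients: patients per cell."""
    grand_mean = np.average(rates, weights=n_patients)
    noise_var = grand_mean * (1 - grand_mean) / n_patients     # binomial sampling noise
    signal_var = max(np.var(rates) - noise_var.mean(), 1e-6)   # crude variance decomposition
    reliability = signal_var / (signal_var + noise_var)
    return grand_mean + reliability * (rates - grand_mean)

rates = np.array([0.02, 0.10, 0.05])
n = np.array([2000, 50, 500])
print(shrink(rates, n))  # the n=50 hospital gets pulled hardest toward the mean
```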
Rapid adoption is hugely positively correlated with mortality *conditional* on ICD use (all risk-adjusted). How should we interpret this?
To do so, we first must turn to the tale of Munster, Indiana. A NYTimes article in 2005 raised questions about whether docs were qualified to implant defibrillators. Skinner’s framing: working down the marginal benefit curve — docs give treatment until there are no benefits left.
I think Skinner’s story is that risk adjustment controls for everything relevant. Not sure this is plausible in Medicare data — you treat more when people are sicker, no?
Lots of back and forth, mostly notable for use of the word portmanteau. Basic point is — isn’t this also consistent with a world where the first trial was wrong, and some people updated based on the trial, but no one was “overconfident”?
Aside: I think debate about this paper would be clarified by asking what counterfactual they care about. People keep telling different stories and Skinner says, “that’s overconfidence” — I don’t disagree but...
Some types of overconfidence mean different things depending on counterfactual
Okay, now they are getting to the counterfactual — in their structural model they shut down “overconfidence,” which I think is identified by the degree to which some docs go further down the marginal benefit curve, under the assumption that this explains cross-sectional variation in outcomes
This is actually closely related to a test that Leila Agha and I do in our AER paper on diagnostic testing — how would the number of tests and the test yield be different if all docs had the same threshold for what probability of a positive warrants testing?
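If you want the flavor of that test, here’s a toy version (mine, simulated data): docs with different testing thresholds vs everyone using a common threshold, comparing test counts and test yield.

```python
# Toy version (mine) of the common-threshold counterfactual: how do test counts
# and test yield change if every doc applies the same probability threshold?
import numpy as np

rng = np.random.default_rng(1)
n_docs, patients_per_doc = 200, 100

# Each patient has a true probability that the test comes back positive
p = rng.beta(2, 8, size=(n_docs, patients_per_doc))
# Docs differ in how low a probability still "warrants" testing
thresholds = rng.uniform(0.05, 0.40, size=(n_docs, 1))

def tests_and_yield(probs, cutoff):
    tested = probs >= cutoff
    return tested.sum(), probs[tested].mean()   # number of tests, expected positive rate

obs_tests, obs_yield = tests_and_yield(p, thresholds)
cf_tests, cf_yield = tests_and_yield(p, thresholds.mean())  # everyone at the average threshold
print(f"observed: {obs_tests} tests, yield {obs_yield:.3f}")
print(f"common threshold: {cf_tests} tests, yield {cf_yield:.3f}")
```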
That model was based on Chandra-Staiger, and Staiger is a coauthor here, but they let the macroeconomist coauthor write the model — same basic idea, I think. I don’t really get why they are calling it overconfidence rather than something more general like overuse
The discussion was a bit scattered. At the end of the day my takeaway is that there is lots of variation in ICD use, and suggestive evidence that it’s about overuse, since more use correlates with higher mortality conditional on observables.
Okay, next up: Michael Darden, Ian McCarthy and a third coauthor to whom I deeply apologize because the slide switched before I could catch the name (presented by Ian)
Question is: Medicare reduced payments to hospitals — how do hospital prices change?
One possibility is that hospitals will increase the prices they charge private payers — cost-shifting (which you would only theoretically expect from non-profits). A second possibility is that prices will fall due to relaxed capacity constraints when you have fewer public patients (ht: Tim Layton)
They have really good private price data from HCCI. They exploit two Medicare payment changes.
Usually prices to private payers are negotiated every three years as a % of Medicare payments for different classes of services
Variation source 1: the penalty for hospital readmissions lowers reimbursement for some hospitals. Variation source 2: value-based purchasing rewards some hospitals and punishes others
I just realized that the autocorrect suggestions my phone makes for the last word in each tweet don’t get implemented. Too bad ... no time for such niceties when we have economics to convey!
Okay so anyway we have these penalties from Medicare that reduce payments to hospitals — what happens?
Prices increase! Penalized hospitals get a 1.4% increase in price
My questions — only for nonprofits? How does this relate to magnitude of penalty? Hopefully we will find out soon
I think he’s talking about all hospitals — cost shifting for for-profit hospitals would be... really weird
So far this paper is very much against conventional wisdom. Want to hear more about why penalties are exogenous... the penalty might be implemented in response to background trends which are driving things (e.g., something causes both more readmissions and higher prices for private patients)
Main result is posed as a puzzle. Cost-shifting here would contradict existing lit and be theoretically suspect. I suspect identification problem. Figure 1 of working paper looks like time trends would cause effect to go away.
Next up — Castelló, Juanmarti and Lopez, presented by Judit Vall Castelló — how does losing health insurance impact undocumented migrants?
How does restricting access to the public healthcare system impact mortality among undocumented migrants in Spain?
September 2012 — Spanish reform restricts access to healthcare system except for emergencies, pregnancies and children
Estimate the number of undocumented migrants of each nationality by subtracting the number with residence permits from the estimated total number with each nationality
Regress the mortality rate by nationality, year and month on percent undocumented × post-reform, plus appropriate controls. Graphs show no change for nationalities with few undocumented migrants, and a big post-reform change for nationalities with more.
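The design, as I understood it, boils down to a diff-in-diff where treatment intensity is the undocumented share. A bare-bones sketch with fake data (their actual spec has more controls; all names and numbers here are mine):

```python
# Bare-bones sketch of the design as described (fake data). Nationality and
# time fixed effects absorb the main effects, so the reform impact loads on
# pct_undoc x post.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
nats, months = 40, 72                          # nationality x year-month panel
pct_undoc = rng.uniform(0, 0.6, nats)          # share undocumented, by nationality
nat_base = rng.normal(5, 0.5, nats)            # baseline mortality by nationality

rows = []
for i in range(nats):
    for t in range(months):
        post = int(t >= 48)                    # reform in effect from month 48 on
        mort = nat_base[i] + 0.8 * pct_undoc[i] * post + rng.normal(0, 0.3)
        rows.append((i, t, pct_undoc[i], post, mort))
df = pd.DataFrame(rows, columns=["nat", "t", "pct_undoc", "post", "mortality"])

fit = smf.ols("mortality ~ pct_undoc:post + C(nat) + C(t)", data=df).fit()
print(fit.params["pct_undoc:post"])            # recovers ~0.8 by construction
```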
They estimate about 150 deaths per year as a result of removing healthcare access. No effect for accidents (still covered by emergency care). Doesn’t seem to be about changes in the undocumented population.
Two questions I want to ask if time: why do the econometrics on nationality instead of other x’s? Also, what is $ per life saved?
Meltzer says the cancer mortality effect is very surprising because we’re not good at treating cancer
I think this result is suggestive and the first evidence on a really important question. I share Meltzer’s skepticism — this type of research is hard — they’re asking a question that is hard even with great data (how does insurance impact mortality?) and doing it with low-quality data
But — important but — no other evidence for this population