Prof Miguel Hernan from @HarvardChanSPH giving the 17th Armitage Lecture in Cambridge on target trial emulation: "How do we learn what works? A two-step algorithm for causal inference from observational data."
@HarvardChanSPH Common passion with Peter Armitage: evidence-based decision making.
@HarvardChanSPH Why do we want to know what works - because decisions must be made now. For clinical practice: Treat with A vs B? Treat now vs later? For public health.
@HarvardChanSPH How do we know what works? Conduct a randomized experiment. Every question in comparative effectiveness has a counterpart randomized trial - possibly infeasible.
@HarvardChanSPH But randomized evidence is often expensive, unethical, impractical and untimely - need to make decisions now.
@HarvardChanSPH Observational data - "real world data" - data that weren't collected for the purpose of answering the research question.
@HarvardChanSPH Randomized trial is preferred option. Want to analyse observational data as attempt to emulate target trial.
@HarvardChanSPH If we can't articulate target trial, then we don't have a good causal question.
@HarvardChanSPH Idea is old (Dorn, Cochran, Rubin, Feinstein, Dawid) for a simple setting with time-fixed treatment and single eligibility point. Name is new. And explicit generalization to sustained treatment strategy.
@HarvardChanSPH Has practical implications for how we analyse data.
@HarvardChanSPH Two step algorithm: 1. Ask a causal question (point to the target). 2. Answer the causal question (shoot the target).
@HarvardChanSPH How do we ask a causal question? We design a target trial.
@HarvardChanSPH Then two options: 1) perform the trial, or 2) emulate the trial.
@HarvardChanSPH If option 2), define target trial protocol: eligibility criteria, treatment strategies, etc. And then emulate it.
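The protocol components can be written down explicitly before touching any data. A minimal sketch of what that might look like (the class name, field names, and the HRT example values are my own illustration, not from the lecture):

```python
from dataclasses import dataclass

@dataclass
class TargetTrialProtocol:
    """Hypothetical container for the protocol components of a target trial."""
    eligibility_criteria: list
    treatment_strategies: list
    assignment_procedure: str
    follow_up: str
    outcome: str
    causal_contrast: str
    analysis_plan: str

# Illustrative protocol for the hormone-therapy example discussed below.
protocol = TargetTrialProtocol(
    eligibility_criteria=["postmenopausal", "no prior coronary heart disease"],
    treatment_strategies=["initiate hormone therapy", "do not initiate"],
    assignment_procedure="emulated randomization: adjust for baseline confounders",
    follow_up="from initiation (time zero) until event, death, or end of study",
    outcome="coronary heart disease",
    causal_contrast="intention-to-treat effect",
    analysis_plan="compare risks between initiators and non-initiators at baseline",
)
```

Writing the protocol down first makes each emulation choice explicit and auditable.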
@HarvardChanSPH Why emulate a trial? Because not doing so leads to bias.
@HarvardChanSPH Example - postmenopausal hormone therapy (HRT) and heart disease. Observational evidence: HRT associated with 30% lower risk. Randomized trial: HRT associated with 24% higher risk.
@HarvardChanSPH Randomized trial: Women's Health Initiative - intention-to-treat hazard ratio for early follow-up was high (1.51 for 0-2 years of follow up) and then attenuated (<1 for 5+ years follow up).
@HarvardChanSPH Why attenuation? Because susceptible women already had disease event - selection bias in long follow-up.
@HarvardChanSPH Can show this occurs under reasonable assumptions even if there were no effect (@mats_julius Stensrud Epidemiology 2017).
@HarvardChanSPH @mats_julius Why? Popular theory: insufficient adjustment, residual confounding. Alternative theory: observational studies were not emulating target trial.
@HarvardChanSPH @mats_julius Design - compare initiators of hormone therapy to non-initiators. Analysis - compare takers of hormone therapy vs non-takers. Hence the early years of susceptibility are ignored.
@HarvardChanSPH @mats_julius Incorrect conclusion from observational analysis due to incorrect design, not incorrect analysis.
@HarvardChanSPH @mats_julius If you have already survived early years of hormone therapy, then no excess risk (not due to unmeasured confounding).
@HarvardChanSPH @mats_julius Constraints of emulated trial - no blinding, no placebo control (only "real world" trials)
@HarvardChanSPH @mats_julius Eligibility criteria, treatment strategies. But how to emulate random assignment? If insufficient data on confounders, then emulation of random assignment fails.
@HarvardChanSPH @mats_julius Need to adjust for baseline covariates (via matching, stratification/regression, standardization or IP weighting...).
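Standardization, one of the adjustment methods listed, can be sketched with toy counts (the numbers below are invented for illustration, not from any study): estimate the risk in each stratum of a baseline covariate L, then average over the distribution of L.

```python
# Toy cohort counts, keyed by (confounder stratum L, treatment A):
# value is (number of events, number of subjects). Illustrative numbers only.
counts = {
    (0, 0): (10, 100),
    (0, 1): (4, 50),
    (1, 0): (30, 100),
    (1, 1): (18, 150),
}

def standardized_risk(a):
    """Risk if everyone had taken A=a, standardized to the distribution of L."""
    n_by_l = {l: sum(counts[(l, x)][1] for x in (0, 1)) for l in (0, 1)}
    n = sum(n_by_l.values())
    risk = 0.0
    for l in (0, 1):
        events, total = counts[(l, a)]
        risk += (events / total) * (n_by_l[l] / n)  # stratum risk x P(L=l)
    return risk

risk_difference = standardized_risk(1) - standardized_risk(0)  # -0.12 here
```

The same standardized risks could equivalently be obtained by inverse-probability weighting; the point is that adjustment happens at baseline, mirroring randomization.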
@HarvardChanSPH @mats_julius Emulate the intention-to-treat (ITT) effect. Not because we love ITT, but because the trial estimates ITT.
@HarvardChanSPH @mats_julius Compare risk between initiators and non-initiators of therapy at baseline -- regardless of continuation of treatment (ITT)
@HarvardChanSPH @mats_julius If we are missing an important confounder, then we have bias.
@HarvardChanSPH @mats_julius Findings from observational analysis of Nurses Health Study - overall null effect, but some evidence of harm for 0-2 years post initiation. [So no evidence of protective effect, but also no strong evidence of harm - still doesn't quite fit the RCT.]
@HarvardChanSPH @mats_julius Example #2: Statins and mortality in cancer patients - in observational studies, statin use associated with a 30% reduction in mortality.
@HarvardChanSPH @mats_julius Observational data from Medicare (people 65+ years in US), combined with cancer registry data (SEER).
@HarvardChanSPH @mats_julius Finding from target trial - no benefit. What is going on here? Observational studies were comparing current users vs current non-users - not initiators.
@HarvardChanSPH @mats_julius Previous studies were defining statin users based on future usage => immortal time bias (not target trial).
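A toy example of how classifying people by future usage manufactures immortal time, even when treatment has exactly zero effect (all numbers are invented for illustration):

```python
# Toy cohort with NO treatment effect: each subject's death month is fixed
# in advance, so "treatment" cannot change anyone's outcome.
death_month = list(range(1, 19))   # 18 subjects, dying in months 1..18
INIT_MONTH = 6                     # anyone still alive at month 6 initiates
FOLLOW_UP = 12                     # deaths counted through month 12

# Naive "ever-user" classification uses the FUTURE to label baseline status:
# only subjects who survive past month 6 can ever be labelled as users.
users = [d for d in death_month if d > INIT_MONTH]
nonusers = [d for d in death_month if d <= INIT_MONTH]

user_risk = sum(d <= FOLLOW_UP for d in users) / len(users)
nonuser_risk = sum(d <= FOLLOW_UP for d in nonusers) / len(nonusers)
# user_risk is lower than nonuser_risk despite zero treatment effect.
```

The "users" look protected only because the classification guarantees they survived the first six months. Aligning time zero with initiation removes the artefact.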
@HarvardChanSPH @mats_julius Example #3 - do statins prevent cancer? Observational studies reported implausibly strong associations. Time for a target trial!
@HarvardChanSPH @mats_julius Observational data - CALIBER (UK primary care data). Result - no clear benefit of statins on cancer.
@HarvardChanSPH @mats_julius Previous observational study: two key deviations from target trial - 1) including prevalent users, and 2) use post-baseline information to assign baseline treatment status (immortal time).
@HarvardChanSPH @mats_julius Problem with these observational analyses was not lack of randomization - problem is not emulating a target trial.
@HarvardChanSPH @mats_julius Failure to adjust for confounding is a hard-to-fix problem. But failure to choose the correct time zero is easy to fix.
@HarvardChanSPH @mats_julius Time zero is not a problem in real randomized trials - it is the time at which: i) an individual meets the eligibility criteria, ii) a treatment strategy is assigned, iii) study outcomes begin to be counted. In observational analyses, these three events need to be aligned.
@HarvardChanSPH @mats_julius Bias occurs when these three events are misaligned. Why is this hard? 1) Time of eligibility may not be unique. 2) Treatment may not be known at time zero.
@HarvardChanSPH @mats_julius In example #2, first cancer diagnosis is a unique time. But for hormone therapy and statin treatment, no obvious time zero.
@HarvardChanSPH @mats_julius Example #4: Colonoscopy screening and colorectal cancer. In US, recommended at age 50 and then at 10-year intervals. But no published randomized trials. (And even when such trials exist, they don't include older people.)
@HarvardChanSPH @mats_julius Define target trial. Data from Medicare claims dataset. Multiple eligibility times. We can exploit that. Two choices: 1) choose single time (eg first time or random time), 2) choose all times.
@HarvardChanSPH @mats_julius Emulate a new target trial each week of follow-up. Pool all trial results. Need to bootstrap to account for dependence between trials.
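The "one emulated trial per time point" idea can be sketched as a data-building step (the people, field layout, and arm labels below are my own toy illustration of the scheme described, not the lecture's actual code):

```python
# Each person: (id, week they first become eligible, initiation week or None).
people = [
    ("p1", 0, 2),     # initiates screening in week 2
    ("p2", 0, None),  # never initiates
    ("p3", 1, 1),     # initiates the week they become eligible
]

def emulated_trials(people, n_weeks):
    """One emulated trial per week: arms assigned by initiation in THAT week.

    A person who already initiated before week w is excluded from trial w;
    a non-initiator can serve as a control in many trials, which is why the
    pooled analysis needs bootstrapping to account for dependence.
    """
    trials = []
    for w in range(n_weeks):
        arm = {}
        for pid, eligible_from, init_week in people:
            if eligible_from > w:
                continue  # not yet eligible in week w
            if init_week is not None and init_week < w:
                continue  # already initiated before week w -> excluded
            arm[pid] = "screen" if init_week == w else "no-screen"
        trials.append((w, arm))
    return trials

trials = emulated_trials(people, 3)
```

Note that "p2" appears as a control in every weekly trial; that reuse of person-time across trials is the dependence the bootstrap has to handle.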
@HarvardChanSPH @mats_julius Result - in the screening group, screening finds cancers immediately (so a higher cancer rate at first), but the rate then goes down. In the non-screening group, the rate stays roughly constant, so its cumulative incidence rises with a steeper gradient. The curves cross over at 4.5-5 years - the screening group eventually has fewer events.
@HarvardChanSPH @mats_julius Both methods (single eligibility / multiple eligibility) give same answer - question is just statistical efficiency.
@HarvardChanSPH @mats_julius If we mess up time zero, then we get bias - if treatment (or non-treatment) is defined based on the future. Not always obvious because we only see the hazard ratio (no Kaplan-Meier curve). Target trial helps estimate absolute risk differences.
@HarvardChanSPH @mats_julius Why do we get this wrong? Because analyses are organized around person-time - time with exposure vs time without exposure. But no time zero for person-time.
@HarvardChanSPH @mats_julius Two key components: 1) specification of time zero (synchronization of eligibility and treatment assignment), 2) randomized assignment. Confounding cannot be addressed until time zero is correct.
@HarvardChanSPH @mats_julius If we get time zero correct, how important is confounding? We will never know, but can compare empirically for examples.
@HarvardChanSPH @mats_julius Example #5: Statin and coronary heart disease. Extreme example of confounding (as statins prescribed to those at high risk of CHD). Data: UK primary-care data (THIN database).
@HarvardChanSPH @mats_julius Need sequence of target trials (otherwise few initiators in the single time period for eligibility). Two-year washout (if someone initiates treatment, can't initiate again until 2 years later).
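One way the two-year washout rule might be coded (the function name and exact rule are my own reading of the lecture's description, not its actual implementation):

```python
def eligible_initiations(initiation_months, washout=24):
    """Keep only initiations separated by at least `washout` months.

    After an initiation counts as trial entry, later initiations within the
    washout window do not qualify the person as a 'new initiator'.
    """
    kept = []
    for t in sorted(initiation_months):
        if not kept or t - kept[-1] >= washout:
            kept.append(t)
    return kept
```

For example, initiations at months 0, 10, and 30 yield eligible entries at months 0 and 30 only: the month-10 initiation falls inside the two-year washout.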
@HarvardChanSPH @mats_julius In claims data, lots of data on age, sex, diagnosis. Not data on cholesterol, blood pressure, etc.
@HarvardChanSPH @mats_julius With no adjustment, HR above 1 (biased). With adjustment for all available data, HR below 1 (albeit non-significant). Extreme confounding and imperfect measurement of confounders - but we get the correct direction.
@HarvardChanSPH @mats_julius If we don't emulate target trial, can perform analysis in current users, in persistent users - we can get any result we want by changing the definition of current user!
@HarvardChanSPH @mats_julius So lack of randomization is okay? No. Emulating the target trial only eliminates self-inflicted injuries (selection bias, immortal time bias). Confounding is not a self-inflicted injury.
@HarvardChanSPH @mats_julius So if we have target trial, we don't need clinical trials? No - observational studies cannot turn themselves into randomized experiments. But we can do better observational analyses.
@HarvardChanSPH @mats_julius Limitations of observational studies remain (confounding, measurement error), but we do not compound them with additional problems (selection bias, immortal time bias).
@HarvardChanSPH @mats_julius When does emulation fail? - for preventative interventions (hard to get all confounders), and when treatment assignment is universal for those with certain prognostic factors (eg anti-hypertensives for those with high blood pressure - intractable confounding).
@HarvardChanSPH @mats_julius Difficult to publish examples when target trial fails - hard to publish an example demonstrating limitation of technique. (Even in Epidemiology!)
@HarvardChanSPH @mats_julius Target trial is typically a compromise. Two-step algorithm is an iterative process. Need to apply your complex analytical machinery once you have pointed at the target.
@HarvardChanSPH @mats_julius We can only get association not causation from observational data. But we need to define our target quantity of interest in causal language (especially in the analysis of observational data). We know there are limitations, but need to be explicit about target of inference.
@HarvardChanSPH @mats_julius If you are adjusting for confounders, then you are trying to target a causal effect. So need to allow causal language in defining the target.
@HarvardChanSPH @mats_julius If someone is using observational data to target a causal effect, ask "What is the target trial?"
@HarvardChanSPH @mats_julius @_MiguelHernan Q: What about blinding? Should we worry about that? A: Target trial helps understand limits of observational data - this is one limitation (not often articulated). If you think blinding is a problem, then you should worry about it. Also lack of blinding in ascertainment of the outcome.
@HarvardChanSPH @mats_julius @_MiguelHernan Q: What about positivity? In a trial, positivity is given. Do the controls (in the target trial) really have a chance to get treatment? A: If someone couldn't be assigned to a treatment strategy, they shouldn't be included in the trial. Positivity should be an eligibility criterion in the target trial.
@HarvardChanSPH @mats_julius @_MiguelHernan A: Important for evaluation of sustained treatment strategies - need to ensure sequential positivity. Need to not censor when we have departure from "protocol" - as in a clinical trial.
@HarvardChanSPH @mats_julius @_MiguelHernan Q: Could the target trial extend eligibility? A comorbid population can't enter a clinical trial, but the target trial could be extended to comorbid individuals. A: Good motivation for the target trial. Clinical trials have short follow-up and limited eligibility/recruitment.
@HarvardChanSPH @mats_julius @_MiguelHernan A: Benchmark is trial as performed. But if we are confident in the target trial, then can extend framework - generalize to longer follow-up, wider eligibility.
@HarvardChanSPH @mats_julius @_MiguelHernan Q: Emphasis on individual studies. But studies exist in a scientific framework. Control is as important as randomization [More of a comment than a question, but it was David Cox, so we can forgive]
@HarvardChanSPH @mats_julius @_MiguelHernan A: RCTs answer simple questions. Causal inference = data + assumptions - the balance depends on the complexity of the question. Simple question - we can depend on data and need no assumptions (RCT). Complex question - need to blend data and assumptions (RCT/observational data).
@HarvardChanSPH @mats_julius @_MiguelHernan A: As questions become more complex, we need more and more untestable assumptions. Will never be a scientific framework for inference (as in theoretical physics). Target experiment -> system experiment.
@HarvardChanSPH @mats_julius @_MiguelHernan A: [I got lost a bit here] As question becomes more complex, more mathematical modelling needed.
@HarvardChanSPH @mats_julius @_MiguelHernan Okay that's the real end!