Good morning, #epitwitter!
I’m a co-author on @JessGeraldYoung’s new paper out now in Trials & I want to tell you all about it!

Do you use longitudinal data? Are your measurements often enough? Do you know what “often enough” is?

Time for a #tweetorial!
If exposure happens once, then we only need to worry about confounders once too.

But if exposure can happen over time, so can confounding! & if exposure happens every day, confounding can happen every day too😰

But we usually don’t have data every day. Is that bad? Classic epi answer: it depends!
How often do we need to measure our exposure and confounders to be able to adequately adjust for confounding?

We did a simulation study to find out!
In our simulated data, exposure happened monthly & so did confounding. To make it easy to see the bias, we simulated an ineffective exposure.

If we can correctly remove confounding when we analyze this data, then we should get a null effect estimate!
Our simulated data was designed to mimic a randomized trial with non-adherence.

Even if non-adherence is confounded, the intention-to-treat estimate should not be confounded — randomization protects it.

And that’s what we see: the intention-to-treat effect is perfectly null. So far, so good!
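Here’s a tiny sketch of why (toy numbers, not the paper’s actual data-generating model): even when a covariate strongly drives both drop-out and the outcome, comparing by *assignment* stays unbiased because assignment was randomized.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

R = rng.binomial(1, 0.5, n)              # randomized assignment
L = rng.binomial(1, 0.5, n)              # frailty: drives outcomes AND drop-out
A = rng.binomial(1, 0.9 - 0.6 * L * R)   # frail patients quit the active drug
Y = rng.normal(-2.0 * L, 1.0)            # treatment truly does nothing

# ITT compares by assignment, so it ignores the confounded adherence entirely
itt = Y[R == 1].mean() - Y[R == 0].mean()
# itt comes out approximately 0, as it should
```

(The frailty variable `L`, the adherence model, and all coefficients are made up for illustration.)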
Next, we estimated the per-protocol effect.

In the data, adherence is confounded by both prior covariates and prior adherence. So, if we just restrict to adherent person-time, we expect to still see bias.
And that’s exactly what we see!

When we do a naive per-protocol analysis, the treatment looks much better than the placebo!

But, when we use inverse-probability weighting to adjust for the time-varying confounders & confounder-adherence feedback, we recover the true null!
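A minimal one-time-point sketch of that contrast (hypothetical numbers, not the paper’s simulation): the treatment is truly null, but frail patients quit the active drug, so the adherent subsets differ between arms. Restricting to adherers is biased; reweighting by the inverse probability of adherence given the confounder recovers the null.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

R = rng.binomial(1, 0.5, n)              # randomized arm (truly null)
L = rng.binomial(1, 0.5, n)              # frailty: worse outcomes
A = rng.binomial(1, 0.9 - 0.6 * L * R)   # frail patients quit active drug only
Y = rng.normal(-2.0 * L, 1.0)            # outcome driven by L, not by R or A

# naive per-protocol: just restrict to adherent person-time
naive = Y[(R == 1) & (A == 1)].mean() - Y[(R == 0) & (A == 1)].mean()

# IPW per-protocol: weight adherers by 1 / P(A=1 | L, R),
# here estimated nonparametrically within each (L, R) cell
w = np.empty(n)
for l in (0, 1):
    for r in (0, 1):
        cell = (L == l) & (R == r)
        w[cell] = 1.0 / A[cell].mean()

def pp_mean(arm):
    sel = (R == arm) & (A == 1)
    return np.average(Y[sel], weights=w[sel])

ipw = pp_mean(1) - pp_mean(0)
# naive is clearly positive ("treatment looks better"); ipw is near zero
```

The weights make the adherent subset look like the full randomized population again, which is exactly what the inverse-probability weighting in the paper does (there, over many time points).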
Okay, so when we have complete data, everything works as expected in our simulation. That’s a good check, but it doesn’t answer our original question.

Next, we varied how much data we could “see” in the analysis by creating study visits and only using information from those.
The bad news?

When we have strong confounding, study visits 12 months or more apart mean we cannot successfully control for the adherence confounding *even with inverse probability weights*!

The good news? The residual bias is smaller than the naive analysis (even tho we had full data there), so weighting did help a bit.

The better news? The residual bias is easily reduced by measuring data more frequently!

This figure shows 12, 6, and 3 month intervals👇🏼
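The mechanism behind that residual bias can be sketched in a toy two-period version (all coefficients hypothetical; the paper’s simulation is richer): if the time-varying confounder is only measured at a baseline visit, the weights can’t adjust for it at the second period, and some bias remains — less than the naive analysis, but not zero.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# two "months": confounder L, adherence A; the adherence effect is truly null
L1 = rng.binomial(1, 0.5, n)
A1 = rng.binomial(1, 0.9 - 0.6 * L1)
L2 = rng.binomial(1, 0.3 + 0.4 * L1 - 0.2 * A1)   # confounder-adherence feedback
A2 = rng.binomial(1, 0.9 - 0.6 * L2)
Y = rng.normal(-(L1 + L2), 1.0)
# truth under "always adhere": E[Y] = -(0.5 + 0.3) = -0.8

def cond_mean(target, *conds):
    """Empirical P(target=1 | conds), looked up per subject."""
    out = np.empty(len(target), dtype=float)
    key = sum(c * (2 ** i) for i, c in enumerate(conds))
    for k in np.unique(key):
        out[key == k] = target[key == k].mean()
    return out

adherent = (A1 == 1) & (A2 == 1)

# frequent visits: L2 was measured, weights use everything
p1 = cond_mean(A1, L1)
w_full = 1.0 / (p1 * cond_mean(A2, L1, A1, L2))
est_full = np.average(Y[adherent], weights=w_full[adherent])

# infrequent visits: L2 never measured, weights must ignore it
w_coarse = 1.0 / (p1 * cond_mean(A2, L1, A1))
est_coarse = np.average(Y[adherent], weights=w_coarse[adherent])

# naive: no weighting at all
est_naive = Y[adherent].mean()
# est_full ≈ -0.8 (truth); est_coarse is closer to truth than est_naive,
# but still biased upward
```

Same ordering as in the paper’s figure: full measurement recovers the truth, coarse measurement helps but leaves residual bias, and no weighting is worst.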
Okay, but those simulations had strong confounding. Does the strength of confounding matter? Yes!

If you have less (weaker) confounding, then measuring your data every 12 months is not so bad!

(See paper for details on what counted as strong & weak in our simulations)
Finally, what else can we do to limit the residual bias that can arise when our study visits are too far apart?

Encourage adherence! As the percent adherent goes up, the residual bias goes down, even with strong confounding and infrequent study visits!
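A toy one-period sketch of that mechanism (made-up numbers, not the paper’s model): when the confounder goes unmeasured between visits, the selection bias from restricting to adherent person-time shrinks as overall adherence rises — there are simply fewer confounded non-adherence events left to select on.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000
L = rng.binomial(1, 0.5, n)       # confounder unmeasured between visits
Y = rng.normal(-2.0 * L, 1.0)     # adherence effect truly null: E[Y] = -1

def adherence_and_bias(b0, b1=2.5):
    """Bias of the adherent-only mean when the confounder is unmeasured;
    b0 shifts how likely *everyone* is to adhere (confounder strength b1 fixed)."""
    pA = 1.0 / (1.0 + np.exp(-(b0 - b1 * L)))
    A = rng.binomial(1, pA)
    return A.mean(), Y[A == 1].mean() - (-1.0)

results = [adherence_and_bias(b0) for b0 in (1.0, 2.5, 5.0)]
# adherence climbs toward 1 while the residual bias shrinks toward 0
```

Even with the confounder’s effect on adherence held fixed on the log-odds scale, pushing overall adherence up collapses the bias — the same pattern as the simulation’s adherence result.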
So there you have it.

If you’re designing a randomized trial & want to estimate per-protocol effects, or a cohort study & want to assess time-varying exposures, you need to think about strength of confounding & amount of non-adherence when planning study visit frequency!