Thread by Andrew Althouse (35 tweets)
Following up on a recent thread re: alpha-spending and interim analyses, here I’ll dig further into the case that made me curious & a few statements from the study investigators in the aftermath
The scenario posed in that thread tweaked a few of the details, but some may have recognized it as similar to this trial: “Goal-directed perfusion to reduce acute kidney injury: A randomized trial” (link to original paper: ncbi.nlm.nih.gov/pubmed/29778331)
The GIFT trial was stopped early (at 50% of target enrollment) with a conclusion that the experimental treatment was effective for prevention of AKI; while the treatment may well be effective, the trial design and execution are an opportunity for learning & discussion
The trial was designed and carried out without input from an experienced trial statistician (the authors stated this in a letter responding to some of the critique) and, as we’ll discuss, I believe this would have drawn much more scrutiny in a regulatory context
Questions were evidently raised by journal reviewers as well as in letters to the editor after publication; the explanations from the study team are a bit unsatisfying and somewhat contradictory
The trial randomly assigned patients undergoing cardiac surgery to a “goal-directed perfusion” strategy versus a control arm; I’ll leave further analysis of the treatment strategies to appropriate experts, and will focus on the statistical issues.
Also worth noting: the first author of the primary paper developed and patented an algorithm for monitoring oxygen delivery during bypass (which is disclosed in the paper). I am also *not* an expert on financial conflicts of interest, nor in patent law…
…and I fully appreciate that trials are often going to be carried out and funded by people that invented and/or believe in their technology. I’m not going to write off *any* trial that has involvement from someone with financial interest – but, if I’ve interpreted correctly…
…the lead author stands to profit from a positive trial. Totally fine – but then the results of the trial ought to be scrutinized closely to ensure the design was rigorous, which I think is questionable given the curiosities of the interim analysis strategy.
Aside: it’s totally normal to carry out interim analyses during a trial. If there is a strong enough suggestion of benefit or harm from the early patient data, stopping may be warranted.
However, the strategy used by the authors was not a particularly rigorous one, and explanations after the fact were unsatisfying. Here is what the initial trial report says:
“Interim analyses were planned at 25%, 50%, and 75% of patient recruitment, with stopping rules for safety, futility, and efficacy (see Online Data Supplement).”
OK. The original plan had three interim looks…
“The protocol was amended in August 2016 following completion of the first interim analysis (data closed in February 2016).”
...and after the first interim look, the plan was amended…
“The amendments included…a change to the stopping rule for efficacy from P < .005 at the 50% interim analysis to P < .05.”
Wait - the original stopping rule for efficacy at the second interim analysis was p<0.005, but was changed to 0.05 after the first interim analysis took place?
Even if you’re not an expert on alpha-spending functions, that seems out of whack. Most trials are designed with an overall alpha=0.05. Taking an interim look with p<0.05 as the stopping rule would inflate the overall alpha beyond 0.05 if another look has already occurred.
(sidebar: yes, there are separate conversations worth having about use of NHST in general and whether 0.05 is the appropriate alpha level for all trials; however, the authors addressed none of this in their paper)
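Even without the formal theory, the inflation is easy to demonstrate by simulation. The sketch below is my own illustration (hypothetical trial sizes, normally distributed outcomes — not the authors' design or code): simulate many two-arm trials with *no* true treatment effect and count how often at least one look comes back "significant":

```python
import numpy as np

def look_z(fracs, n_max, n_sims, rng):
    """Simulate correlated z statistics at each look for a two-arm trial
    with n_max patients per arm, under the null (no true effect)."""
    zs, total, prev = [], np.zeros(n_sims), 0
    for n in (int(f * n_max) for f in fracs):
        # add the treatment-minus-control contribution of the newly
        # enrolled patients (variance 2 per patient pair)
        total = total + rng.normal(scale=np.sqrt(2.0 * (n - prev)), size=n_sims)
        zs.append(total / np.sqrt(2.0 * n))
        prev = n
    return np.column_stack(zs)

def overall_alpha(fracs, z_crits, n_sims=200_000, n_max=400, seed=1):
    """Chance of crossing ANY boundary when the null is true."""
    z = look_z(fracs, n_max, n_sims, np.random.default_rng(seed))
    return (np.abs(z) > np.asarray(z_crits)).any(axis=1).mean()

# A single final analysis at p < 0.05 (|z| > 1.96): alpha is 0.05, as billed
print(overall_alpha([1.0], [1.96]))             # ~0.05
# Add one interim look at 50%, also at p < 0.05: the overall error inflates
print(overall_alpha([0.5, 1.0], [1.96, 1.96]))  # ~0.08
```

Each additional unadjusted look at p<0.05 pushes the overall false-positive rate higher still (the classic repeated-significance-testing results put it near 0.14 by five looks), which is exactly why interim boundaries are normally set much stricter than 0.05.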
Online Data Supplement: “Stopping rule for efficacy: the trial will be stopped for efficacy in presence of a difference for the primary outcome (any AKI) in favour of the GDP group at a P value of 0.01 at the 25% interim analysis or 0.05 at the 50% and 75% interim analyses.”
OK, there are a few things that still have me confused.
First, it’s difficult to tell what the *original* planned alpha-spending function was. The authors mention that the plan was amended and that the threshold for stopping at second interim was changed from p<0.005 to p<0.05, but never list the original thresholds.
Second, the strategy that appears in the Online Data Supplement is not appropriate if the trial was intended to have an overall alpha=0.05 (which, again, is never clearly stated)
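Plugging the supplement's thresholds into the same kind of null simulation shows how much alpha that scheme would actually spend. Assumptions I'm making (none stated in the paper): a conventional p<0.05 final analysis if no interim stops the trial, equally spaced looks, and a normal outcome — an illustration, not the authors' calculation:

```python
import numpy as np

def look_z(fracs, n_max, n_sims, rng):
    """Correlated z statistics at each look, simulated under the null."""
    zs, total, prev = [], np.zeros(n_sims), 0
    for n in (int(f * n_max) for f in fracs):
        total = total + rng.normal(scale=np.sqrt(2.0 * (n - prev)), size=n_sims)
        zs.append(total / np.sqrt(2.0 * n))
        prev = n
    return np.column_stack(zs)

def overall_alpha(fracs, z_crits, n_sims=200_000, n_max=400, seed=2):
    """Chance of crossing ANY boundary when the null is true."""
    z = look_z(fracs, n_max, n_sims, np.random.default_rng(seed))
    return (np.abs(z) > np.asarray(z_crits)).any(axis=1).mean()

# Supplement's rules: p<0.01 at 25% (|z|>2.576), p<0.05 at 50% and 75%
# (|z|>1.96); the p<0.05 final analysis is my assumption
gift = overall_alpha([0.25, 0.5, 0.75, 1.0], [2.576, 1.96, 1.96, 1.96])
print(gift)  # well above 0.05 -- roughly double the nominal level

# Contrast: Haybittle-Peto-style boundaries, p<0.001 (|z|>3.291) at every
# interim and p<0.05 at the final look, spend almost nothing early
hp = overall_alpha([0.25, 0.5, 0.75, 1.0], [3.291, 3.291, 3.291, 1.96])
print(hp)    # stays close to 0.05
```

The contrast is the point: strict early boundaries (à la Haybittle-Peto or O'Brien-Fleming) exist precisely so that multiple looks don't blow past the overall alpha the way repeated p<0.05 tests do.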
Mayo Clinic biostatistician Philip J. Schulte (Philip, are you out there on Twitter? Feel free to chime in if you see this…) saw this and wrote a letter: ncbi.nlm.nih.gov/pubmed/30685172
The first author wrote a response in which a few other explanations are put forth: “What is not mentioned is that there were other reasons that led to stopping the trial” – and goes on to mention slow recruitment and “possible lack of financial resources”
To this, I am sympathetic! Things can go wrong during the conduct of a trial. If slow recruitment and/or lack of financial resources were identified as problems, it may be necessary to stop the trial and analyze what data were able to be collected.
Hence my thread, where I asked what folks thought should be done in a similar hypothetical: enrollment slower than hoped, one interim analysis (at alpha=0.005) already taken without stopping, and a decision looming on whether to continue the trial or shut it down and simply analyze the data already in hand.
The problem, as I see it, is that the authors' approach in writing up and publishing the results doesn't really add up here.
If the problem was slow recruitment and/or financial issues, and they had transparently stated that this was why the trial ended, then adjusted their alpha-spending approach to "spend the remaining alpha" on the data they had...
...I think it would've been fine. However, that's not what they did - they rejiggered the analysis plan to take a look at the second interim using alpha=0.05, but left the door open to continue and take another look at alpha=0.05
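To put rough numbers on that distinction (my own null simulation again, same caveats: hypothetical sizes, normal outcomes, not the authors' code) — stopping at 50% and spending the remaining alpha there keeps the overall error near 0.05, while leaving another p<0.05 look on the table does not:

```python
import numpy as np

def look_z(fracs, n_max, n_sims, rng):
    """Correlated z statistics at each look, simulated under the null."""
    zs, total, prev = [], np.zeros(n_sims), 0
    for n in (int(f * n_max) for f in fracs):
        total = total + rng.normal(scale=np.sqrt(2.0 * (n - prev)), size=n_sims)
        zs.append(total / np.sqrt(2.0 * n))
        prev = n
    return np.column_stack(zs)

def overall_alpha(fracs, z_crits, n_sims=200_000, n_max=400, seed=3):
    """Chance of crossing ANY boundary when the null is true."""
    z = look_z(fracs, n_max, n_sims, np.random.default_rng(seed))
    return (np.abs(z) > np.asarray(z_crits)).any(axis=1).mean()

# "Wind-down" version: first look at 25% with p<0.005 (|z|>2.807), then one
# decisive analysis at 50% at p<0.05 -- and NO further looks planned
wind_down = overall_alpha([0.25, 0.5], [2.807, 1.96])
print(wind_down)  # just over 0.05

# What the amended plan allowed: p<0.05 at 50% AND another p<0.05 look at
# 75% still on the table
door_open = overall_alpha([0.25, 0.5, 0.75], [2.807, 1.96, 1.96])
print(door_open)  # clearly above 0.05
```

That's why the framing matters: a transparent "we're out of money/patients, here's the final analysis" design nearly preserves the overall alpha, while the open-ended amended plan does not.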
In addition to Philip Schulte's letter, there was another letter published alongside the trial which included some nice thoughts in the aftermath: ncbi.nlm.nih.gov/pubmed/30557947
“The attempt by Ranucci & coworkers to interject RCTs into the practice of CPB is thus both commendable and novel. Is the study of Ranucci rigorous? No. Is there room for critical comments? Yes. Are there important take-home messages that stem from the study of Ranucci? Yes.”
“The article validates the possibility of performing multicenter randomized trials in patients having cardiac operations. Further, there is a signal that the use of goal-directed perfusion that is based on oxygen delivery is safe and probably equivalent to traditional CPB.”
“Possibly the single most important benefit of the article of Ranucci and coworkers is that it serves as challenge to stimulate further randomized studies about cardiac operations and the conduct of cardiopulmonary perfusion.”