Comparative effectiveness research (CER) evaluates the efficacy of one treatment relative to another: treatment A vs treatment B.
For example:
Radiation vs surgery for prostate cancer
Ivermectin vs placebo for COVID
Streptomycin vs bed rest for TB (the 1st RCT! ncbi.nlm.nih.gov/pmc/articles/P…)
Since the 1970s, hospital databases have grown large enough to allow "real world data" analyses, using rudimentary methods like univariable and multivariable regression.
Since the 2000s, large national databases have allowed for more complex statistics, eg, propensity score matching (PSM).
The premise of methods like PSM is that they "recapitulate a randomized controlled trial."
This is a misconception.
PSM helps to mitigate some selection bias, but it will never get you to the level of a randomized controlled trial.
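To see why, here is a minimal simulation sketch (hypothetical data and variable names; assumes numpy, pandas, and scikit-learn are available). The true treatment effect is zero, treatment assignment depends on one measured and one unmeasured confounder, and matching on a propensity score built from the measured covariate still leaves a spurious "effect":

```python
# Sketch: PSM balances only what you measure. Simulated, hypothetical data;
# true treatment effect is zero, so any "effect" we find is pure bias.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)   # measured confounder (eg, age)
u = rng.normal(size=n)   # UNMEASURED confounder (eg, frailty)
treat = rng.binomial(1, 1 / (1 + np.exp(-(x + u))))   # sicker patients get treated
y = 2 * x + 2 * u + rng.normal(size=n)                # outcome; treatment does nothing

df = pd.DataFrame({"x": x, "u": u, "treat": treat, "y": y})

# Propensity score from the measured covariate only -- all we ever have.
ps = LogisticRegression().fit(df[["x"]], df["treat"]).predict_proba(df[["x"]])[:, 1]
treated = df[df["treat"] == 1]
control = df[df["treat"] == 0]

# 1:1 nearest-neighbor matching on the propensity score.
nn = NearestNeighbors(n_neighbors=1).fit(ps[df["treat"] == 0].reshape(-1, 1))
_, idx = nn.kneighbors(ps[df["treat"] == 1].reshape(-1, 1))
matched = control.iloc[idx.ravel()]

print("x gap after matching:", treated["x"].mean() - matched["x"].mean())  # ~0, matched away
print("u gap after matching:", treated["u"].mean() - matched["u"].mean())  # still large
print("apparent 'effect':   ", treated["y"].mean() - matched["y"].mean())  # biased, not 0
```

Randomization would have balanced u without anyone ever measuring it; no amount of matching on x can do that.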
A randomized controlled trial allows for a few important quality control measures: (1) it sets a start time, t = 0; (2) it balances known confounders; (3) it balances unknown confounders.
(1) t=0.
Without randomization, you introduce "immortal time": eg, if "treated" means the patient received adjuvant therapy, then by definition treated patients survived long enough to get it, and cannot have an event during that window.
So, you need to adjust for this, with something like left truncation or landmark analysis.
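For instance, a landmark analysis in code might look like this. A minimal sketch, assuming a pandas DataFrame with hypothetical columns time_months, event, and treated, and lifelines for the Cox fit:

```python
# Sketch: landmark analysis to strip out immortal time.
# Hypothetical columns: time_months (follow-up), event (1 = death), treated (0/1).
import pandas as pd
from lifelines import CoxPHFitter

def landmark_cox(df: pd.DataFrame, landmark: float) -> CoxPHFitter:
    """Compare arms only among patients still alive at the landmark time."""
    # Drop everyone who died or was censored before the landmark --
    # the window in which a "treated" patient is immortal by definition.
    at_risk = df[df["time_months"] > landmark].copy()
    # Reset the clock so t = 0 is the landmark, not diagnosis.
    at_risk["time_months"] -= landmark
    cph = CoxPHFitter()
    cph.fit(at_risk[["time_months", "event", "treated"]],
            duration_col="time_months", event_col="event")
    return cph

# Usage: condition on 6-month survival before comparing treatments.
# model = landmark_cox(cohort, landmark=6.0)
# model.print_summary()
```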
(2) controlling for known confounders.
You might adjust for race, age, gender, etc, but there is no rule that says you have to control for any/all of them.
So, with k candidate covariates there are 2^k possible combinations that could go in the model, each giving a different estimate. This is "vibration of effects."
If you start to apply more rigorous methods, the HR approaches 1 and p values become > 0.05.
With our methods, we can generate any answer you want (see the sketch after this list):
A is better than B (HR < 1, p < 0.05)
B is better than A (HR > 1, p < 0.05)
A = B (p > 0.05)
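Here is a sketch of how easy this is, reusing the same hypothetical cohort columns as above (lifelines assumed, covariates numeric-encoded): fit one Cox model per adjustment set and shop the results.

```python
# Sketch: "vibration of effects" -- one exposure, every possible adjustment set.
# Hypothetical, numeric-encoded covariate names; assumes lifelines is installed.
from itertools import combinations

import pandas as pd
from lifelines import CoxPHFitter

CANDIDATES = ["age", "race", "gender", "stage", "comorbidity"]  # 2^5 = 32 models

def vibration_of_effects(df: pd.DataFrame) -> pd.DataFrame:
    """HR and p value for 'treated' across all 2^k covariate subsets."""
    rows = []
    for k in range(len(CANDIDATES) + 1):
        for subset in combinations(CANDIDATES, k):
            cph = CoxPHFitter()
            cph.fit(df[["time_months", "event", "treated", *subset]],
                    duration_col="time_months", event_col="event")
            rows.append({
                "adjusters": subset,
                "HR": cph.hazard_ratios_["treated"],
                "p": cph.summary.loc["treated", "p"],
            })
    return pd.DataFrame(rows)

# results = vibration_of_effects(cohort)
# Sort by HR or p and pick the row that tells the story you came to tell.
```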
Here is an example, with extreme HRs >> 1 and p << 0.005, for breast cancer.
Any of these studies would have suggested that local therapy for M1 breast cancer results in worse survival.
One of the issues with real world data / retrospective CER is that investigators truly believe in their therapy, and it is usually treatment intensification.
There is subsequent publication bias:
1000s of studies will favor doing tx.
Few will favor not doing it.
Hence, we see studies favoring multimodality therapy (vs organ preservation) that show supposed improvements in survival.
If investigators keep publishing these retrospective CER studies, patients are harmed by overtreatment.
I don't understand people who say these retrospective CER studies are "hypothesis-generating."
The hypothesis has not been addressed by the study. We know as much before the study as we do after.
However, zealots of treatment A or B will use the study to support their dogma.
Some people say "you wouldn't do a randomized trial of parachutes vs not."
Parachutes have a 99.99+% absolute survival benefit.
The vast majority of treatments we have in medicine have no benefit. Rarely, they improve QOL. More rarely, survival. Even then, the benefits are usually marginal.