I cleared my schedule in anticipation of 2nd jab side-effects, and I’m aching all over with muscle pains, so it’s just the right time to channel my inner grumpy old man (I’m moving like one anyway).
In other words:
🔹Time to critique #ESTRO2021 abstracts! 🔹 #radonc #medphys
Standard disclaimer:
I’m just one abstract reviewer. My opinions and dislikes may not be representative.
I’m not targeting any specific authors - so even if you feel this hits too close to home, it’s probably not you that I’m thinking of (five other people did the same!)
I reviewed clinical track abstracts this year, while my previous involvement has been on the physics track. That's obviously colouring my experience.
(I am but a lowly physicist, etc, etc ... 😂)
Firstly: Primary/secondary analyses from large trials are always going to score high, unless something is massively wrong with the abstract.
But! I've given high scores to 10-patient cohorts or dose planning studies in the clinical track - if truly innovative / novel & high quality
Okay, back to grumpy. Please get the basics right:
MAKE SOMEBODY NOT INVOLVED IN YOUR STUDY READ YOUR ABSTRACT. You'll be amazed how easy it is to leave out essential information …
(You study stage IV pts? I still need to know baseline T stage if you're studying local Tx & response ...)
Avoid obvious mistakes: not capitalising the first word of the title, gross spelling errors/typos, missing punctuation, font changes, etc. Everybody is able to proofread one page of text. If you haven't done it, that shows a lack of attention to detail, which reflects on your science.
Side note:
Why are clinical track authors so much worse at following submission instructions than other specialities?
I'm specifically thinking of the instruction not to include any information identifying authors or institution ... which ~15% still did #radonc
Use the &#%§ figure and table option!
I can 100% guarantee you've left out information which could be of interest to the readers - and which could've been added as a figure or table. Even if it's just a patient details table, a diagram of the method or example dose plans #radonc #medphys
Is it some sort of resident/registrar rite of passage to submit an abstract on baseline blood measures and their relationship to survival in ~100 patients?
Stop & think: is your study truly adding to our collective knowledge, or is it just wasted time to justify meeting participation?
Similarly, comparing outcomes with different Tx regimens in retrospective series where choice of regimen will have depended on patient & disease characteristics is a big no-no. I reviewed several such studies of <100 patients, and none of them had anything useful to teach us.
Another side note, from an ethics perspective:
How do you justify accessing patient data, if not learning something new? And if it's for training (e.g. learning data analysis), why submit an abstract for a scientific conference? #dataethics
In other words, I'll score an interesting & well-conducted comparative dose planning study higher than yet another (small/medium) retrospective patient case audit. Unless you're doing something truly novel with your audit, the former will be of more interest #radonc #medphys
(Although from next year onwards I AM going to judge you harshly if you haven't used the RATING guidelines! 😉 thegreenjournal.com/article/S0167-…)
Univariable analysis to select for multivariable analysis is not a robust variable selection approach. No, it still isn’t. Yes, I know that ‘everybody’ does it. YOU still shouldn’t. sciencedirect.com/science/articl…
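For the curious, a minimal sketch of one more defensible alternative: L1-penalised (LASSO) logistic regression, where selection and fitting happen in a single cross-validated step instead of via per-variable p-values. The DataFrame `df`, the outcome 'recurrence' and the predictor names below are all hypothetical, not anyone's actual dataset:

```python
# Sketch only: cross-validated L1 (LASSO) selection in one step, instead of
# univariable screening. `df`, 'recurrence' and the column names are hypothetical.
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

cols = ["age", "t_stage", "n_stage", "marker_a", "marker_b"]
X = StandardScaler().fit_transform(df[cols])   # penalties need comparable scales
y = df["recurrence"]

model = LogisticRegressionCV(penalty="l1", solver="liblinear",
                             Cs=10, cv=5).fit(X, y)

# Coefficients shrunk exactly to zero drop out; the penalty strength is
# chosen by cross-validation, not by per-variable significance tests.
print(dict(zip(cols, model.coef_.ravel())))
```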
Why, oh why, do you all insist on dichotomising your variables … ???
Is there something magical about 64.3 years (or whatever else happens to be the mean/median age in your dataset)? No? Then why are you not treating it as a continuous variable?
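If you want to see what the split costs you, here's a minimal sketch (statsmodels; the DataFrame `df` with 'event' and 'age' columns is hypothetical) comparing the same model with age continuous vs. dichotomised at the median:

```python
# Sketch only: what dichotomising a continuous variable throws away.
# `df` with columns 'event' (0/1) and 'age' is a hypothetical dataset.
import statsmodels.formula.api as smf

df["age_high"] = (df["age"] > df["age"].median()).astype(int)

cont = smf.logit("event ~ age", data=df).fit(disp=0)
dich = smf.logit("event ~ age_high", data=df).fit(disp=0)

# The continuous fit usually wins on AIC: the median split discards all
# information about *how far* a patient is from 64.3 years (or wherever).
print(f"AIC continuous:   {cont.aic:.1f}")
print(f"AIC dichotomised: {dich.aic:.1f}")
```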
Okay, something slightly(!) more advanced:
Let say you’re studying a new marker for disease/survival outcome. Have you checked that you can’t predict just as well who'll get recurrence/die using standard prognostic factors (age / T / N stage)? If not, why should I be interested?
No, you cannot just write “we used machine learning”. Nor that you “developed a deep learning model”. You need to actually state what you did, and which statistical methods you used. Yes, even in the clinical track 🤷🏽‍♂️🤦🏽‍♂️
Same for propensity score matching. It's not some magic technique which solves everything - I do need to know how you did it, and which factors you took into account #radonc #medphys
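For reference, here's roughly the level of detail I mean, as a hedged sketch (hypothetical column names; scikit-learn for the propensity model; greedy 1:1 nearest-neighbour matching on the logit of the score with a 0.2-SD caliper, which is one common recipe among several):

```python
# Sketch only: what "we used propensity score matching" should unpack to --
# which confounders, how scores were estimated, how matching was done.
# `df`, 'treated' and the confounder names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

confounders = ["age", "t_stage", "n_stage", "performance_status"]
ps = LogisticRegression(max_iter=1000).fit(
    df[confounders], df["treated"]).predict_proba(df[confounders])[:, 1]
ps = np.clip(ps, 1e-6, 1 - 1e-6)       # guard against log(0) below
logit = np.log(ps / (1 - ps))          # match on the logit of the score
caliper = 0.2 * logit.std()            # a commonly used caliper width

treated = np.flatnonzero(df["treated"].to_numpy() == 1)
control = list(np.flatnonzero(df["treated"].to_numpy() == 0))

pairs = []                             # greedy 1:1 nearest-neighbour matching
for t in treated:
    if not control:
        break
    c = min(control, key=lambda j: abs(logit[t] - logit[j]))
    if abs(logit[t] - logit[c]) <= caliper:
        pairs.append((t, c))
        control.remove(c)              # match without replacement
```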
Okay, I think I'll leave it there - now I'm just repeating myself from previous years. But one last thing for you all to contemplate ...
Abstracts are presented to reviewers in the order submitted (as far as I can discern). Does it impact the scores? And should you try to game it? 🤷🏽♂️
I know just about enough about decision making theory to know that it probably matters, but not enough to tell you what to do 😉
I did however spend the better part of a Friday evening discussing this with a specialist in decision making @Beardy_Econ, and we agree that we'd love to see @ESTRO_RT do randomized experiments #radonc #medphys #ESTRO2021
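To make the suggestion concrete, a toy sketch of what the randomisation could look like (entirely hypothetical, not anything @ESTRO_RT actually does): give each reviewer an independently shuffled order, log the positions, and afterwards estimate the order effect, e.g. by regressing score on position:

```python
# Toy sketch only -- a hypothetical randomisation scheme, not ESTRO's process.
# Each reviewer gets an independent random order; position in the returned
# list is the presentation order, so order effects can be estimated later.
import random

def assign_orders(abstract_ids, reviewer_ids, seed=2021):
    rng = random.Random(seed)          # fixed seed: assignment is auditable
    orders = {}
    for reviewer in reviewer_ids:
        batch = list(abstract_ids)
        rng.shuffle(batch)             # independent shuffle per reviewer
        orders[reviewer] = batch
    return orders
```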
• • •
Non-operative management of rectal cancer is becoming increasingly important - more and more patients are offered observation instead of surgery if they have a complete response after (chemo-)radiotherapy #radonc #ESTRO2020
Most published series only report on that select group of patients - the ones who got a complete response. That's great if we want to understand whether observation is a safe strategy for those patients #ESTRO2020 thelancet.com/journals/lance…
If you think about it, we are actually unlikely to see a dose-response relationship for rectal cancer in the (neo-)adjuvant setting: Even if there exists a dose-response relationship, it must be very shallow
After reviewing 'predictive modelling & radiomics' abstracts for #ESTRO202, I had quite a few thoughts. I've finally found time to organise them in a semi-coherent manner
To follow: Some common pitfalls in modelling & radiomics abstracts for clinical conferences #radonc #medphys
First of all, the basic stuff:
Get somebody who’s never seen your study before to read through the abstract - to ensure fundamental information isn't missing.
(And no, you won't notice yourself, because you’re too concerned with whether you can squeeze in another AUC value ...)
If you are submitting to a radiotherapy conference, maybe make clear what the relevance is for radiotherapy? Several image analysis / radiomics / AI abstracts were probably technically excellent, but I scored them low due to lack of radiotherapy relevance
First, what characterises medical physicists?
- We're quantitative, systematic & analytical
- We're trained in modelling, data visualisation, & interpretation of evidence
(And sometimes we - by which I mean me - go exploring in caves, which is almost like running a trial 😅)
But importantly, we understand the opportunities and limitations in current technology & are uniquely placed to understand current gaps in knowledge.
We can ask
“How can we best utilise technology to improve outcomes?”
“Will this be achievable in daily practice?”