The Guardian article says it's "a pre-print paper still undergoing peer review that is posted online by the National Institutes of Health"
Note the NIH has many disclaimers on their site explaining they don't endorse the content of articles
(For context, the NIH's PubMed provides public access to studies, republishing them from the journals where they originally appeared.
In this case, the preprint was originally published on Research Square, and is under review at the journal Nature Portfolio. Not by NIH.)
The Guardian article says the researchers looked at brain samples from people with & without dementia including Alzheimer's.
But this isn't mentioned anywhere in the paper
Or in other papers by the same author. Where's this claim from?
(ht @literalbanana) ncbi.nlm.nih.gov/pmc/articles/P…
The article has this quote: "You can draw a line - it's increasing over time."
When I read this, I thought the study found consistently rising concentrations over the years between 2016 and 2024.
But in fact, there were samples from only two years: 2016 and 2024.
The samples come from the Office of the Medical Investigator (OMI) in New Mexico, which conducts autopsies of sudden deaths statewide each year.
But the study doesn't give any detail about why samples only came from two specific years.
This isn't just a matter of transparency: data from more years would help determine whether this was an actual trend, rather than a discrepancy or fluctuation.
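To illustrate why two time points can't establish a trend, here's a minimal sketch (hypothetical numbers, not the study's data): even when two years are drawn from the *same* skewed distribution, sample means of this size can differ noticeably by chance alone.

```python
import random
import statistics

random.seed(0)

# Draw a year's worth of "concentration" samples from a skewed
# (lognormal) distribution. Both years use the SAME distribution,
# so any difference in means is pure sampling noise.
def year_samples(n):
    return [random.lognormvariate(0, 1) for _ in range(n)]

diffs = []
for _ in range(1000):
    a = year_samples(27)  # stand-in for the 27 samples from 2016
    b = year_samples(24)  # stand-in for the 24 samples from 2024
    diffs.append(statistics.mean(b) - statistics.mean(a))

# Fraction of null simulations where the two-year difference in means
# is "large" (here, more than 0.5 units) despite no real change:
big = sum(abs(d) > 0.5 for d in diffs) / len(diffs)
print(f"apparent 'trends' from noise alone: {big:.0%}")
```

With more years of data, a chance gap between two samples would stand out against the year-to-year pattern; with only two, there's no way to tell it apart from a real change.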
One more: the Guardian article originally comes from this article in The New Lede.
Note, in the study, there were a total of 27 samples from 2016 and 24 samples from 2024.
That seems fine for a preliminary study. But I question how much we can learn from these specific samples about microplastics concentrations across New Mexico, let alone at a global scale.
Let's actually look at the study though.
To recap, the study is a preprint where 27 samples from 2016 and 24 samples from 2024 were obtained from the statewide autopsy department and analysed for concentrations of microplastics in different organs.
As I understand it, a major challenge with microplastics studies is properly accounting for potential contamination while handling or analysing samples.
Indeed, the study mentions some reasons the authors believe their results aren't due to contamination.
For example, they say in the Limitations that they had KOH blank samples and measured the polymer composition of all plastic tubes and pipette tips, which are essential in the digestion and measurement process.
But these aren't mentioned anywhere else in the study or supplement. No KOH blanks, pipette tips, tubes.
What were the results of these quality assurance steps?
How about potential contamination at other steps before the researchers obtained the samples from the autopsy department — including the fixation and storage of the samples?
The graphs in the results section also have some oddities that aren't clarified in the paper.
For example, the concentrations in brain samples in 2024 have much less variation than the other data. I think this is implausible, but the authors don't comment on it.
Maybe these questions will be answered during the review process at Nature Portfolio, but maybe they won't — either way, the reporting of this preprint is very poor.
In my view, science journalists should ask researchers and their peers about the methods of studies, and tell us what they said.
Tell us what was done and why. We shouldn't only hear impressions of the headline results.
Sorry, I made a mistake here - this quote was also in the Guardian piece
It would also be good to hear from others e.g. chemists and forensics/autopsy researchers, on whether they think the specific methods were appropriate, aside from the points I made.
@drStuartGilmour Hi! Thanks for the comments with this. I think there's a lot of confusion here, let me try to clarify each point further.
@drStuartGilmour As I describe in the article, checkboxes don’t directly feed into vital stats in other countries. Italy has enhanced surveillance to investigate potential maternal deaths; it shows their vital stats also underreport maternal deaths
@drStuartGilmour These are slightly different statistics:
Left shows maternal deaths per 100,000 women, for which there's globally comparable data in the WHO mortality database. Right shows maternal deaths per 100,000 live births.
In childhood, the most common cause of death is ‘external causes’.
This is a broad category (in red) that includes accidents, falls, violence & overdoses.
Also a notable contribution from birth disorders (muted green), childhood cancers (blue) & respiratory diseases (cyan).
The share of deaths from childhood cancers stood out to me.
We’ve seen lots of progress against childhood cancers over the last 50 years — e.g. treating leukemia, brain cancers, kidney cancers, lymphomas & retinoblastoma — but this is a reminder that there’s still further to go
But I also see it as an economic & political blunder — the world could have had a malaria vaccine sooner. We should learn from this, not just celebrate & move on. That's what this 9000 word essay is about. worksinprogress.co/issue/why-we-d…
The malaria vaccine was trialled for the first time in humans in 1997.
It was approved in 2021.
Each step of the journey faced struggles in funding and operations to set up & run the next stage of trials.
In 2015, after the vaccine went through all prelicensure stages of clinical trials, the WHO asked for pilot projects to rule out potential side effects, a concern based on post-hoc analyses of the trial data.
It then took another 4 years just to *start the pilot project.*