These days, researchers are so focused on data that they seem to forget the value and power of logical reasoning in science.
This week in journal club, we applied logical reasoning to evaluate the strength of hypotheses. Here is a summary. 1/
The most common forms are inductive and deductive reasoning.
In inductive reasoning, we use observations to infer a theory or hypothesis. In deductive reasoning, we use a hypothesis to make predictions about the observations, which are then tested by data.
Here’s an illustration of the difference between the two. In a deductive argument, the conclusion (c) follows from the premises (p) with certainty; in an inductive argument, it follows only with a probability. A conclusion from induction may be true, but there is no guarantee.
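The distinction can also be sketched in code. A minimal, hypothetical illustration using the classic syllogism and the swan example (these examples are mine, not from the thread):

```python
# Deduction: if the premises are true, the conclusion is certain.
# p1: All humans are mortal.
# p2: Socrates is a human.
# c : Socrates is mortal.
p1_all_humans_mortal = True
p2_socrates_is_human = True
c_socrates_is_mortal = p1_all_humans_mortal and p2_socrates_is_human
print(c_socrates_is_mortal)  # True: guaranteed by the premises

# Induction: the conclusion follows only with some probability.
# p : Every swan observed so far is white.
# c : All swans are white.  (may be true, but no guarantee)
observed_swans = ["white"] * 100
all_observed_white = all(s == "white" for s in observed_swans)
print(all_observed_white)  # True for this sample, yet a black swan may exist unobserved
```

Note the asymmetry: the deductive conclusion cannot be false if p1 and p2 are true, while the inductive conclusion can be overturned by a single new observation.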
But what if the premises are false?
Deductive arguments are sound if the premises are true and the reasoning is valid. Inductive arguments are cogent if the premises are true and the reasoning is strong.
Because a conclusion follows from induction with uncertainty, adding premises can make an inductive argument stronger or weaker. Similarly, omitting(!) premises can make an argument *look* stronger or weaker.
Here are two examples of deductive reasoning in scientific research.
And here is where the fun begins: the introduction of a scientific article can be summarized in the framework of an inductive argument, which reveals the strength of the hypothesis. A pretty strong classic here.
Which looks even stronger after unraveling some hidden premises.
Here is the introduction of the recent trial on increasing vegetable consumption to reduce prostate cancer progression. Does this look strong?
Some more background was given in the protocol, but that made it even clearer that essential observations were lacking, observations that would have made the hypothesis stronger and the study more likely to deliver.
The lesson:
If you want your research to deliver, make sure your hypothesis is strong.
Unraveling the reasoning in scientific articles helps identify strong studies. And fishing expeditions.
As you may have guessed, the study did not measure IQ in the babies. It measured early learning, verbal and non-verbal development.
This is not my field, but call me a skeptic that this can be assessed *reliably* in babies under 1 year old. What did you do in your first year?
A big red flag: the scientific article gives no details about the babies and the measurements. Who were they? Why were they eligible for the study? How were they selected? Were they randomly invited? Did parents sign them up? How were they tested?
I went through the reports of the Fieldlab study. A troubling thread about page 11 of "Bijlage 5: Resultaten risico analyse"
Whether events are safe, and if so under which conditions, is determined in this study with a risk model. That is a common method, but it comes with caveats.
Every model is a simplification of reality that allows you to make predictions. For example, with a model you can calculate what you will earn net if you know the gross amount.
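The gross-to-net example can be made concrete. A minimal sketch assuming a single flat tax rate; the rate is hypothetical, and real tax systems use brackets, deductions, and credits:

```python
TAX_RATE = 0.37  # hypothetical flat rate, purely for illustration


def net_income(gross: float) -> float:
    """Model: predict net income from gross income.

    Deliberately a simplification of reality: one flat rate,
    no brackets or deductions. It still lets you make a prediction.
    """
    return gross * (1 - TAX_RATE)


print(net_income(50_000))  # approximately 31500
```

The prediction is only as good as the model's premises: if the real tax schedule is not flat, the calculated net amount will be off even though the arithmetic is correct.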
Whether the prediction comes true, and the model is correct, depends on 2 conditions: ...
In survey research, you want participants to be a reflection of the *target population*, e.g. all Dutch citizens. *Only* if that reflection is *representative* do the results of the study apply to the whole target population.
You achieve a representative sample by inviting participants at random and ensuring that either everyone participates or that non-participation is also random, i.e. unrelated to participant characteristics or preferences.
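A small simulation shows why random non-participation matters. All numbers here are hypothetical (a made-up population where 30% hold some opinion), chosen only to illustrate the mechanism:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical target population: 100,000 people, 30% hold opinion X (coded 1).
population = [1] * 30_000 + [0] * 70_000
invited = random.sample(population, 1_000)  # random invitation

# Case 1: non-participation is random (everyone responds with probability 0.5,
# unrelated to their opinion). The estimate should land near the true 30%.
responders = [p for p in invited if random.random() < 0.5]
unbiased_estimate = sum(responders) / len(responders)

# Case 2: non-participation is NOT random (people with opinion X respond
# twice as often). The estimate is pulled away from the true 30%.
biased_responders = [p for p in invited if random.random() < (0.6 if p else 0.3)]
biased_estimate = sum(biased_responders) / len(biased_responders)

print(round(unbiased_estimate, 2))  # expected near 0.30
print(round(biased_estimate, 2))   # expected well above 0.30
```

Random invitation alone is not enough: in case 2 the invitations were random, yet the result no longer reflects the target population because the non-response was related to the characteristic being measured.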
This weekend I wrote a column about bad science, including a barely-believable example involving an index. Here is an explanation of the index with some sources. @nrc @nrcwetenschap nrc.nl/nieuws/2021/02…
Here is an old thread in which I explain the index and what is wrong with it:
Is searching the scientific literature the most undervalued aspect of scientific research, or is that just my impression?
I gave a lecture to our epi students on how to search literature for their theses.
Here's the essence, including (I'm biased) how @CoCites makes searching easier.
1/
Everyone who wants to do science needs to find out
- what the state of the art is on their topic, and
- how to set up a study that can move the science forward.
It's not 'any study will do'.
You will need that background research not only for the introduction of your thesis/paper, but for all of it. Doing science is more than running a data analysis ...
In the version of 28 March, the spread was "mainly from person-to-person" and the virus droplet still landed on the mouth or nose. (source: Wayback Machine Internet Archive)
But even then, spread via people without symptoms, or via touching contaminated surfaces or objects, was already being considered. Those routes, however, were "not thought to be the main way the virus spreads".