1/ Let me get this straight: "Doing research to BUILD a new NOVEL virus/bacteria which doesn't exist in nature ain't gain of function, but if I have bad luck and it has new properties it IS?"
Anyone who knows about this, enlighten me: how is doing the research itself not GoF?
2/ Where have I heard that before? I think it went like this: "I am not doing research in autonomous warfare tech, but I am researching how to build autonomous platforms capable of carrying 'weapons'. But if someone takes my platform and arms it, it IS." Kinda edgy in my opinion.
3/ And yes, before anyone asks: I know we do that with bacteria every day to create new materials, etc. We are talking about dangerous known pathogens here.
1/ The good thing about putting this stuff in the open is that there are a lot of people with the right background looking at the data. Here I will highlight the work (and method) @AlbertoAut did to get a more precise estimate of the persons/day metric.
2/ The problem, as originally presented in the thread, is that we need to know how the incidence changes when we include the left-out estimated deaths. I used a simple approach: imputing with the cohort mean.
3/ Here we got a surprise. I thought I was overestimating it (assuming the mean was above the median). Well, as I will showcase here, @AlbertoAut shows that the mean was actually below it. Not impossible, but surprising.
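As a side note on why this is surprising but not impossible: a minimal sketch, with made-up illustrative numbers (not the actual cohort data), showing how a few low outliers can pull the mean below the median:

```python
# Hypothetical, left-skewed values: a handful of small outliers
# drag the mean down while the median stays put.
values = [30, 55, 58, 60, 62, 64, 65]

mean = sum(values) / len(values)            # ~56.3, pulled down by 30
median = sorted(values)[len(values) // 2]   # 60, robust to the outlier

print(f"mean={mean:.1f}, median={median}")  # mean below the median
```

The same logic applies to any skewed cohort distribution: which side of the median the mean lands on depends entirely on which tail is heavier.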
1/ Challenge time has ended, so here comes the solution. Why did I challenge you to estimate the prevalence of the left-out group, you may ask? Because finding things out on your own teaches you.
2/ In this massive thread I won't give you the fish either, but I will give you all the tools to figure out the massive analysis errors that are made in most of the studies of this type. You may remember the missing deaths conundrum.
3/ Some believe these are signs of conspiracies; IMHO the problem is low skill in analyzing real-world messy data, so everybody just copies what others have done. In this case, most studies mimic the published protocols for the actual trials. And make the same mistakes.
1/ For the upcoming numbers riddle I will give you some time to study and try to figure it out before I give the answer, just like the last riddle.
1/ It was eventually retracted. As @LDjaparidze and I pointed out, the logic was flawed, BUT retraction should only be used for misconduct or requested by the authors themselves. mdpi.com/2076-393X/9/7/…
2/ Peer review has been broken since before SARS-CoV-2, and the review board should be held accountable because they allowed it to pass review for publication. In a sense this retraction is not the authors' fault, but the whole editorial board's. ALL OF THEM.
3/ The reason retraction is not the right tool is that sometimes even bad papers provide good data. When a retraction is stamped onto a paper, the data it collected, even if poorly interpreted, gets destroyed along with it. Bad paper, data pointing to new science. One example:
1/ You have probably seen this. It scared me for a bit; luckily, I can say my calculations show this is NOT the case. The actual state of the data gives us enough to worry about without adding more sources. The reason is complex, but I will try to explain. nejm.org/doi/full/10.10…
2/ The first thing I noticed was that they are doing rolling enrollment (meaning not everybody gets enrolled at the same point in the pregnancy), which is essentially what the last picture said. Suffice to say the number there is right; the interpretation is NOT.
3/ What do I mean? Well, between 50% and 75% of miscarriages are reported to happen in the first 8 weeks. There is a caveat (as always): many are not even clinically recognized. But we know that between 8% and 15% are. So that's better than nothing.
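To make the rolling-enrollment point concrete, here is a minimal simulation sketch. All the numbers are assumptions for illustration (a 12% first-trimester risk, enrollment spread uniformly over weeks 1–38, a week-14 risk cutoff), not the study's actual figures. The point is purely structural: dividing miscarriages by everyone enrolled, including women who joined after the at-risk window, deflates the rate.

```python
import random

random.seed(0)

TRUE_RISK = 0.12  # assumed first-trimester miscarriage risk (illustrative)

cohort = []
for _ in range(10_000):
    week_enrolled = random.randint(1, 38)  # rolling enrollment across pregnancy
    at_risk = week_enrolled < 14           # only these can still miscarry early
    miscarried = at_risk and random.random() < TRUE_RISK
    cohort.append((at_risk, miscarried))

miscarriages = sum(m for _, m in cohort)
naive_rate = miscarriages / len(cohort)            # wrong denominator: everyone
adjusted_rate = miscarriages / sum(a for a, _ in cohort)  # only those at risk

print(f"naive {naive_rate:.3f} vs adjusted {adjusted_rate:.3f}")
```

With these assumptions the naive rate comes out roughly a third of the true risk, simply because about two thirds of the cohort enrolled too late to contribute an early miscarriage at all. The raw count is right; the denominator is what changes the interpretation.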