1/ The good thing about putting this stuff in the open is that a lot of people with the right background end up looking at the data. Here I will highlight the work (and method) @AlbertoAut did on trying to get a more precise estimation of the person-days metric.
2/ The problem, as originally presented in the thread, is that we need to know how the incidence changes when we include the left-out estimated deaths. I used a simple approach: imputing the cohort mean.
3/ Here we got a surprise. I thought I was overestimating it (the mean was above the median). Well, as I will showcase here, @AlbertoAut found that the mean was actually below it. Not impossible, but surprising.
4/ What he did was pretty clever: we don't have the details, but we can still use the information that is available to us from other sources. He found the daily doses that were distributed; I have seen that data for Argentina too (more on that in a later thread).
5/ From there you can estimate the total person-days accumulated during that period, since you know how many people have been contributing to each cohort on each day.
6/ Then you need to adjust based on population statistics, persons in the study, you know the whole deal. He then sent me the results, and they were actually very close to the ones published in the papers. Interesting. The plot thickens.
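The doses-to-person-days idea can be sketched like this. This is my reading of the approach, not his actual code, and all numbers are made up: each person who enters a cohort on a given day contributes one person-day for every remaining day of the window.

```python
# Hypothetical daily counts of people newly entering a cohort
# (e.g. derived from distributed-doses data). Numbers are made up.
daily_entries = [100, 150, 200, 250, 300, 0, 0]  # one week

n_days = len(daily_entries)

# A person entering on day i contributes (n_days - i) days,
# counting the entry day, assuming nobody leaves the cohort.
person_days = sum(d * (n_days - i) for i, d in enumerate(daily_entries))

print(person_days)
```

This is the naive version: it assumes everyone stays until the end of the window, which is exactly the overestimation discussed below.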
7/ We put ourselves to work to understand why. At some point we figured it out: in the same way I was underestimating, he was overestimating. Why? Because you have to ask yourself what happens when someone gets infected. They are removed from the study (they become an event).
8/ So we had to look for the deaths, the infected, adjust based on the probability of the total events... you know the drill. After we did all that the outcome in @AlbertoAut words was: "Answer moved towards your end of the estimate but still closer to my upper end."
9/ Plugging the more accurate estimate into the model shows no change in the general results. The unvaccinated cohort keeps beating the other two, and the 1-dose one is still 3 times worse. So no change there. Kudos to @AlbertoAut's work!!! An example to follow.
1/ Challenge time has ended, so here comes the solution. Why did I challenge you to estimate the prevalence of the left-out group, you may ask? Because finding things out on your own teaches you.
2/ In this massive thread I won't give you the fish either, but I will give you all the tools to figure out the massive analysis errors made in most studies of this type. You may remember the missing deaths conundrum.
3/ Some believe these are signs of conspiracies; IMHO the problem is low skill in analyzing real-world messy data, so everybody just copies what others have done. In this case, most studies mimic the published protocols for the actual trials, and they make the same mistakes.
1/ For the upcoming numbers riddle I will give you some time to study and try to figure it out before I give the answer. You know the drill from the last riddle.
1/ It was eventually retracted. As we pointed out with @LDjaparidze, the logic was flawed, BUT retraction should only be used for misconduct or when requested by the authors themselves. mdpi.com/2076-393X/9/7/…
2/ Peer review has been broken since before SARS-CoV-2, and the review board should be held accountable because they allowed it to pass review for publication. In a sense this retraction is not the authors' fault but that of the whole editorial board. ALL OF THEM.
3/ The reason retraction is not the tool is that sometimes even bad papers provide good data. When a retraction is stamped onto a paper, the data collected, even if poorly interpreted, gets destroyed. Bad paper, data pointing to new science. One example:
1/ You have probably seen this. It got me scared for a bit; luckily, I can say that my calculations show this is NOT the case. We have enough to worry about with the actual state of the data without adding more sources of worry. The reason is complex, but I will try to explain. nejm.org/doi/full/10.10…
2/ The first thing I noticed was that they are doing rolling enrollment (meaning not everybody gets enrolled at the same point in the pregnancy), which is essentially what the last picture said. Suffice to say the number there is right; the interpretation is NOT.
3/ What do I mean? Well, between 50% and 75% of miscarriages are reported to happen in the first 8 weeks. There is a caveat (as always): many are not even clinically recognized. But we know that between 8% and 15% are. So that's better than nothing.
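The rolling-enrollment problem can be sketched with made-up numbers: participants who enroll after week 8 can never contribute an early miscarriage, so dividing early losses by the whole cohort understates the risk. This is only an illustration of the bias, not the study's actual figures.

```python
# Hypothetical illustration of rolling-enrollment bias. All numbers made up.
total_enrolled = 1000
enrolled_before_week8 = 300   # only these were at risk of an observed early loss
early_loss_risk = 0.10        # assumed true risk before week 8

observed_losses = enrolled_before_week8 * early_loss_risk

# Naive rate: divides by everyone, including people who could
# never have shown an early loss in the study. Looks reassuringly low.
naive_rate = observed_losses / total_enrolled

# Correct denominator: only those actually at risk during the window.
risk_among_at_risk = observed_losses / enrolled_before_week8

print(naive_rate, risk_among_at_risk)
```

Same numerator, different denominators: the naive calculation reports 3% where the at-risk group actually experienced 10%. That is the gap between "the number is right" and "the interpretation is NOT".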
1/n The power of the internet is incredible. Long story short: back in 2018, I built a set of trading indicators, some of them truly novel stuff which I haven't published either as source or in a paper. tradingview.com/u/redknight666…
2/n Needless to say, I have been using them successfully since then, but something interesting happened yesterday: a user of the Tradingview platform sent me this message. Some of my indicators happen to have some likes, and I always wondered how people used them.
3/n Obviously I said YES, bring it on. The indicator in question is a very strange one from a family I named "Trend Denormalized". Most oscillators have an equivalent TA version, and in fact I have, and have used successfully, the TA-RSI and other unpublished ones.
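The "Trend Denormalized" family itself is unpublished, so I can't show it; for reference, here is the textbook Wilder RSI that an oscillator variant like TA-RSI would start from. This is the standard public formula only, not the author's indicator.

```python
def rsi(closes, period=14):
    """Standard Wilder RSI over a list of closing prices.

    Not the author's unpublished 'Trend Denormalized' variant;
    just the well-known oscillator it would be derived from.
    """
    gains, losses = [], []
    for prev, cur in zip(closes, closes[1:]):
        change = cur - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))

    # Seed the averages with a simple mean over the first period.
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period

    out = []
    for g, l in zip(gains[period:], losses[period:]):
        # Wilder smoothing: exponential-style update with alpha = 1/period.
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
        rs = avg_gain / avg_loss if avg_loss else float("inf")
        out.append(100.0 - 100.0 / (1.0 + rs))
    return out
```

On a monotonically rising price series the average loss stays at zero, so the RSI pins at 100, a quick sanity check for any implementation.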