1/ By popular demand I was going to do a deep dive into the European CDC Face Mask recommendation study. Well, it may end up being a bit shallow. There is not much depth to dive into. ecdc.europa.eu/en/publication…
2/ The study follows the usual form with clear inclusion and exclusion criteria (which is good). It uses the GRADE framework to assess the evidence and generate a recommendation. That is among the best we have in evidence-based land.
3/ The number of studies included is 'interesting'. With an n=118 we would expect to get a nice body of clear-cut evidence to support the recommendation.
4/ The glossary is interesting. Note that medical face masks are not considered to protect against aerosols, while the N95/N99 respirators are. Keep this in mind for the future.
5/ Let's just skip straight to the recommendations so we can properly assess the rest of the study. So far, so good. The recommendation is 'Use masks' in whatever setting. Let's see if the evidence agrees.
6/ For anyone who knows GRADE, there is a certain format any study has to follow and a way to evaluate the data. While subjective, it helps keep us objective. BUT there is a key point where subjectivity can creep in: the evidence certainty rating.
7/ In order to have a better assessment, let's focus on the key objective metrics for the parts of the study where the authors actually provide the data. (more on this later)
For details on what each one means, I suggest this article written by Gordon Guyatt himself: bestpractice.bmj.com/info/toolkit/l…
8/ The setting is where each study was performed. Here we see community (which is what is interesting for this recommendation), household, health care and/or mixed environments.
Community = 7
Household = 2
Health care = 9
Mixed = 2
9/ The risk of bias is a metric that encompasses the overall result of the bias part of the table, where individual biases are assessed. Here comes the first surprise.
No = 1
Serious = 16
Very serious = 2
Yes, you read that right. A SINGLE study without risk of bias!!!
10/ But this is a 'deep dive': how are the 'potential' biases distributed across all the studies shown in the supplementary?
Only 6 studies have 2 or fewer biases identified. And NONE has ZERO.
11/ Now let's look at the 'indirectness' metric. Here the authors were kind enough to classify 8 of them as Serious or Very Serious. Of 20 studies, only 4 are not 'indirect'. And half of the rest are Serious or worse.
No = 4
Yes = 8
Serious = 5
Very Serious = 3
12/ How about 'imprecision'? On this one we do better; the split is almost even.
No = 11
Yes = 7
Serious = 2
13/ 'Inconsistency', I like that one :). There are some studies to which it doesn't really apply (which makes the authors' life easier), but what about the rest?
N/A = 8
Yes = 5
No = 7
Mhhhhhhhh.
14/ And now let's switch to the 'not so objective' one, which is certainty of the evidence. Here we find that 19 out of 20 have low or very low certainty.
16/ What is this telling us? For 19 out of 20 studies, the authors themselves rate the evidence such that the 'true effect' may be (low certainty) or probably is (very low certainty) MARKEDLY DIFFERENT from what the study itself claims. Let that sink in!!!
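If you want to sanity-check these tallies, here is a minimal Python sketch. The counts are the ones transcribed in this thread (not read out of the ECDC supplementary directly), and the 'clean' grouping is my own:

```python
from collections import Counter

# Counts as quoted in this thread (n=20 studies; the risk-of-bias column
# only sums to 19 as listed above).
tallies = {
    "risk of bias":  Counter({"No": 1, "Serious": 16, "Very serious": 2}),
    "indirectness":  Counter({"No": 4, "Yes": 8, "Serious": 5, "Very serious": 3}),
    "imprecision":   Counter({"No": 11, "Yes": 7, "Serious": 2}),
    "inconsistency": Counter({"N/A": 8, "Yes": 5, "No": 7}),
    "certainty":     Counter({"Low or very low": 19, "Moderate or high": 1}),
}

# Anything not rated "clean" counts as a flag on that metric.
CLEAN = {"No", "N/A", "Moderate or high"}

for metric, counts in tallies.items():
    total = sum(counts.values())
    flagged = sum(v for label, v in counts.items() if label not in CLEAN)
    print(f"{metric:13s} flagged {flagged:2d}/{total} ({flagged / total:.0%})")
```

That prints 18/19 flagged for risk of bias, 16/20 for indirectness, 9/20 for imprecision, 5/20 for inconsistency, and 19/20 for certainty, which is the picture the thread has been painting.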
17/ Probably at this rate you would say: Good job!! We are done. BUT NO!!! There is more :) but I will leave that for tomorrow, as it is already 2AM. If you find it interesting already, make sure to retweet it so others will have the chance to read it.
1/ After almost 1.5 years of studying cancer research for personal reasons, I arrived at a realization that prompted me to write this tweet. I will lay out the hypothesis in this thread.
2/ Disclaimer: I am not a formally trained health researcher. More like a very curious and tenacious guy with a 15+ year background in research, development, & reproducibility in computer science.
3/ I am putting the hypothesis out there because it may make sense to others doing field work. Feel free to dissect this hypothesis, find holes in it, and play devil's advocate. We will all come out smarter from it.
1/ There is a very perverse dynamic in how Chavism (aka "communist socialism") works. Let's use Argentina as the example. Over the first 20 years they initiate a process that we could call "Earnings Substitution", which will seal your fate over time.
2/ Your earnings/salary goes down and at the same time "subsidies" start to go up, in order to fool people into thinking that nothing has changed. This works because the dirty job is done by inflation, which is a much slower process.
3/ By the time people start to realize that something is wrong, because some critical goods are not available (medicine, food, you name it) or inflation enters a death spiral, most people already depend on subsidies for their spending.
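To make the mechanism concrete, here is a toy simulation in Python. Every rate in it is an assumption made up for illustration, not an Argentine statistic: wages that lag inflation, subsidies that outpace it, both deflated to constant prices.

```python
# Toy "Earnings Substitution" dynamic with entirely hypothetical rates.
INFLATION = 0.30        # assumed annual inflation
WAGE_GROWTH = 0.15      # assumed nominal wage growth (lags inflation)
SUBSIDY_GROWTH = 0.35   # assumed nominal subsidy growth (outpaces inflation)

wage, subsidy = 100.0, 5.0   # hypothetical starting incomes in real terms
for year in range(1, 21):
    # deflate both income sources to constant prices
    wage *= (1 + WAGE_GROWTH) / (1 + INFLATION)
    subsidy *= (1 + SUBSIDY_GROWTH) / (1 + INFLATION)
    if year % 5 == 0:
        share = subsidy / (wage + subsidy)
        print(f"year {year:2d}: real wage {wage:5.1f}, subsidy {subsidy:5.1f}, "
              f"subsidy share {share:.0%}")
```

With these made-up numbers the subsidy share of real income crosses 50% right around year 20, which is exactly the "most people already depend on subsidies" point.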
1/ Recently some interesting papers have been making the rounds in the health community. To me the most interesting ones have been the GlyNAC paper and the more recent paper on taurine deficiency as a driver of aging.
2/ Disclaimer: While I have been researching this for a year and even executed an experimental protocol tailored for myself based on the GlyNAC paper, I am NOT a health professional, and I am just taking my health into my own hands. This is not advice of any kind.
3/ Disclaimers aside, why do I think these 2 papers are interesting? First, because the claim (if true) is a game changer. And second, because they may be related, but I haven't seen this relationship spotlighted by anyone.
This just confirmed the weaponization of block lists. If enough people/bots block and mute you, they are essentially cancelling you. I find lots of people I have never interacted with who have me blocked. I assume there are third-party block lists and block networks.
Normally that is an issue in general. Anyone who has done reinforcement learning has figured out (usually in the worst way) that you have to be incredibly cautious with penalties. They are very prone to being gamed.
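A toy expected-value calculation (all numbers made up) shows what a gamed penalty looks like:

```python
# Hypothetical toy setup: an agent can "attempt" a task (+1 on success,
# -10 penalty on failure) or "idle" (0 reward). The numbers are invented
# purely to illustrate the failure mode.
P_FAIL = 0.2            # assumed failure rate when attempting
REWARD_SUCCESS = 1.0
PENALTY_FAIL = -10.0

def expected_return(action: str) -> float:
    if action == "idle":
        return 0.0
    return (1 - P_FAIL) * REWARD_SUCCESS + P_FAIL * PENALTY_FAIL

for action in ("attempt", "idle"):
    print(f"{action}: {expected_return(action):+.2f}")
```

Here "attempt" scores -1.20 against +0.00 for "idle", so the reward-maximizing policy is to do nothing at all: the penalty gets gamed instead of shaping the behavior you wanted.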
2/ The general problem that practitioners find (in the worst way) is always training-set tainting (guilty as charged). Habits die hard, so the first thing I did was ask for a review of the paper without any extra knowledge about what the paper says.
3/ From the response alone I learned 2 things. First, our paper title was deadly accurate. Second, the response had no information whatsoever in it, as the entire response could be generated from understanding the title itself.
2/ Since I am doing it by hand, I started with a very simple prompt.
3/ I have argued before that trying to constrain the model like this actually harms it. This is one of those cases. The good thing is that, at least here, you can just add "Use the tokens" at the end of the request when it refuses, and it will do it properly.
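As a tiny sketch of the workaround (the base prompt below is a hypothetical stand-in, not my actual prompt):

```python
# Hypothetical illustration of the retry nudge described above; the base
# request is made up for the example.
base_request = "Summarize this section in at most 200 tokens: <text>"

# When the model refuses the constrained request, re-send it with the
# nudge appended at the end.
retry_request = base_request + "\nUse the tokens"
print(retry_request)
```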