2/ The problem is that the vaccinations are spread out over many weeks.
I assumed that e.g. 10% of the population in an age group vaccinates per week. This gives the expected weekly vaxx-death background: a 1:10k vaxx CFR dilutes to 1:100k per week (red), a 1:50k CFR to 1:500k (blue).
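A minimal sketch of this dilution arithmetic (the 10%/week uptake is the assumption above; the CFR values are the theoretical examples from the plot, not measured ones):

```python
# Dilution arithmetic: weekly vaxx-death background per 100k.
# Assumption: 10% of the age group vaccinates per week; the CFRs are the
# theoretical example values from the plot (1:10k and 1:50k).
WEEKLY_UPTAKE = 0.10  # fraction of the age group vaccinated per week

for label, cfr in [("1:10k", 1 / 10_000), ("1:50k", 1 / 50_000)]:
    weekly_per_100k = 100_000 * WEEKLY_UPTAKE * cfr
    print(f"vaxx CFR {label}: {weekly_per_100k:.1f} deaths per 100k per week "
          f"(a weekly rate of 1:{int(1 / (WEEKLY_UPTAKE * cfr)):,})")

# Output: 1:10k dilutes to 1.0 per 100k per week (1:100,000),
#         1:50k dilutes to 0.2 per 100k per week (1:500,000).
```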
3/ So we could maybe see it for the under-30s. But there, the vaxx CFR is also rather >>1:100k, so it's difficult. Maybe in weeks when a lot of people were vaccinated at the same time.
That's what @OS51388957 is hunting. He knows what he is doing. 😎🤙💪
4/ Sources for creating the population-adjusted age graph for NL:
5/ Correction for the 1:50k line (red): if 10% get vaxxed per week, this gives 0.2 per 100k per week. So it would be buried in noise.
That doesn't mean that 1:50k is acceptable for healthy children, who have a lower C19 IFR than this!! Let's be clear on this!! Even 1:50k --> not OK.
6/ To put things in perspective (for those not used to log scales): here's a linear y-axis version of the plot.
But again: even a vaxx CFR of 1:500k would not be OK. This is not a small number for children!!
7/ Disclaimer: the CFR examples are theoretical(!!). I'm not saying that this is what we have for a healthy person. This value remains UNKNOWN. We have no data that allows inferring what it is. The VAERS, PEI, Lareb and EMA reported deaths are, in my view, mostly co-morbidities.
8/ Just as C19 pushes the "weak" over the edge, the Spike vaxx may behave the same way.
This was the vaxx CFR by age as extracted from the PEI report of July 2021. Most likely, the vaxx CFR curve for the healthy is far below this level. But I have no data to estimate what it is.
1/ I was told non-US GHCN "raw" is adjusted already.
-----TRUE-----
Now I see it. Gosh.
Composite. 2x adjusted. NOAA doesn't even know where non-US stations are, or what they're measuring. Their own US data (USCRN) is light-years better. But for "global"? It's clown-tier.
2/ And here it is—the DOUBLE-adjusted COMPOSITE.
Not raw. I doubted @connolly_s at first—like someone denying their 2nd-hand car is stolen, crash-salvaged, and repainted twice. Turns out he was right.
NOAA’s “global” QCU (non-US): not raw.
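If you want to check this yourself, here's a minimal sketch of a GHCN-M v4 fixed-width parser. The file names and the station ID are assumptions (point them at the files extracted from NOAA's ghcnm.tavg.latest qcu/qcf archives); diffing QCU against QCF shows where adjustments sit, and diffing QCU against a national met archive tests whether "unadjusted" is actually raw:

```python
# Sketch: read one station's monthly TAVG series from a GHCN-M v4 .dat file.
# Fixed-width format: 11-char ID, 4-char year, 4-char element, then twelve
# groups of a 5-char value (hundredths of deg C, -9999 = missing) + 3 flags.
def read_ghcnm(path, station_id, element="TAVG"):
    series = {}  # (year, month) -> deg C
    with open(path) as f:
        for line in f:
            if line[:11] != station_id or line[15:19] != element:
                continue
            year = int(line[11:15])
            for m in range(12):
                val = int(line[19 + 8 * m : 24 + 8 * m])
                if val != -9999:
                    series[(year, m + 1)] = val / 100.0
    return series

# File names and the station ID are assumptions - substitute your own download.
qcu = read_ghcnm("ghcnm.tavg.v4.qcu.dat", "NLM00006260")  # "unadjusted"
qcf = read_ghcnm("ghcnm.tavg.v4.qcf.dat", "NLM00006260")  # homogenized

diffs = {k: qcf[k] - qcu[k] for k in qcu.keys() & qcf.keys() if qcf[k] != qcu[k]}
if diffs:
    print(f"{len(diffs)} adjusted months; largest |delta| = "
          f"{max(abs(d) for d in diffs.values()):.2f} deg C")
```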
3/ Credit where due.
Normally I block on first bad-faith signal.
But intuition said: bait him back.
Let’s see what he hands over.
And he did:
✔ Clown location
✔ 120% urbanized
✔ Composite
✔ Adjusted twice
Thanks for the assist.
1/ The WMO's temperature station classification study isn't glamorous reading, but it's the bare minimum anyone aggregating climate data should know about every single station. They don't.
2/ Class 1 is the "bare minimum" for climate-grade weather station suitability; even Class 1 only means "maybe OK". met.no/publikasjoner/…
I'll be counting impressions. I'll know if you didn't read.
(You're allowed to have an LLM TL;DR it.)
Next up: NOAA climate site requirements (HLR). 👇 x.com/orwell2022/sta…
3/ The NOAA HLR system makes WMO classes look gentle.
Most stations? Fail spectacularly.
1/ Digging deeper, we find 3 USCRN sites with two IDs each: a legacy historical ID and a USCRN ID. That's big. It means we can stitch together long-term time series for 3 "golden" stations. Why haven't @NOAA or @hausfath done this? Not the "right" narrative result? 🙃 Let's take a look.
2/ Here is an example of such a pair: STILLWATER. Note that you can see the wind fence around the precipitation gauge on the satellite picture, the round structure. ncei.noaa.gov/access/crn/pdf…
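A minimal sketch of the stitching idea from 1/: align the legacy record to its co-located USCRN successor via the mean offset over their overlap, then splice. The offset method and the toy data are my assumptions, not NOAA's procedure:

```python
import numpy as np

def stitch(legacy: dict, uscrn: dict) -> dict:
    """Splice a legacy series onto its co-located USCRN successor.
    Both map (year, month) -> monthly mean temp in deg C."""
    overlap = legacy.keys() & uscrn.keys()
    if not overlap:
        raise ValueError("no overlap - cannot align the two records")
    # Mean difference over the overlap aligns the legacy record to USCRN.
    offset = np.mean([uscrn[k] - legacy[k] for k in overlap])
    stitched = {k: v + offset for k, v in legacy.items()}
    stitched.update(uscrn)  # prefer USCRN wherever it exists
    return stitched

# Toy data: legacy runs 1900-2010, USCRN 2005-2024, constant 0.5 deg C offset.
legacy = {(y, m): 10.0 + 0.01 * (y - 1900) for y in range(1900, 2011) for m in range(1, 13)}
uscrn = {(y, m): 10.5 + 0.01 * (y - 1900) for y in range(2005, 2025) for m in range(1, 13)}
print(len(stitch(legacy, uscrn)), "months in the stitched series")
```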
3/ Well, let’s do it. We try. And...
...no hockey stick.
Despite STILLWATER being a growing urban area.
So... where’s the hockey stick? Anyone?
We're told it should be there. But the best data says no.
1/ Mr. @hausfath packed multiple fallacies into one graph. We replicate it: he used homogenized data, and we get the same result.
The bottom right shows the raw data. His fallacy: claiming that USCRN-ClimDiv agreement in the modern era (where adjustments are ~zero) validates the strong past adjustments.
3/ His fallacy is blatant bad faith. Measurement validation isn't done by induction. He claims adjustments are valid because USCRN-ClimDiv align from 2008-2024—yet no adjustments were made in that period. Then he asserts past adjustments are proven. Exceptional level of malice.
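To make the objection concrete, here is a sketch of the only comparison his argument actually supports: the USCRN-ClimDiv difference over the modern overlap. The arrays are placeholders (both are published as national anomaly series); a near-zero difference trend here shows agreement only where adjustments are ~zero and says nothing about pre-USCRN adjustments:

```python
import numpy as np

# Placeholder anomalies - load the real annual CONUS series here.
years = np.arange(2008, 2025)
uscrn_anom   = np.zeros(years.size)   # USCRN national anomalies
climdiv_anom = np.zeros(years.size)   # ClimDiv (adjusted) national anomalies

diff = climdiv_anom - uscrn_anom
trend_per_decade = np.polyfit(years, diff, 1)[0] * 10
print(f"ClimDiv minus USCRN trend, 2008-2024: {trend_per_decade:+.3f} deg C/decade")
# Even if this is ~0, it validates nothing about adjustments made before 2008.
```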
4/ Another fallacy: He cherry-picked 1970—the coldest point in 100 years. He highlights only post-1970 warming in green, hiding earlier trends. But the real scandal? Extreme (false) pre-1970 adjustments, erasing the 1930s warmth with absurd corrections.
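A toy illustration of the start-year cherry-pick (synthetic data, not the real record): a series with 1930s warmth, a trough around 1970 and modern warming shows whatever headline trend the chosen start year produces:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2025)
temps = (0.3 * np.cos((years - 1935) / 35.0 * np.pi)   # 1935 bump, ~1970 trough
         + 0.015 * np.clip(years - 1970, 0, None)      # post-1970 warming
         + rng.normal(0.0, 0.1, years.size))           # measurement noise

for start in (1900, 1930, 1970):
    sel = years >= start
    slope = np.polyfit(years[sel], temps[sel], 1)[0]
    print(f"trend from {start}: {slope * 10:+.2f} deg C per decade")
# Starting at the 1970 trough maximizes the apparent warming trend.
```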
1/ New tool - let's test with a VALENTIA (hourly) overlay: solid agreement. A model (ERA5) is only as good as the ground-truth measurements that constrain it. We saw good US results before, but an obvious heat bias in polar regions, where there's nothing measured to compare against anyway.
2/ Now we match the 1940-2024 range. Note the temp vs. anomaly scale: same curve, just shifted. A trick to amplify the range. Few notice. Climate stripes? Perfect for manipulation, e.g. add an offset (ECMWF) so everything reads as red "warm". That's behavior science (manipulative).
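A minimal sketch of the axis trick (toy series; the 1961-1990 baseline is my assumption): the same data plotted as absolute temperature on a fixed axis versus anomalies on an autoscaled axis. Identical curve, very different visual impression:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
years = np.arange(1940, 2025)
temps = 10.0 + 0.01 * (years - 1940) + rng.normal(0, 0.3, years.size)  # toy data
anoms = temps - temps[(years >= 1961) & (years <= 1990)].mean()  # baseline shift only

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3))
ax1.plot(years, temps)
ax1.set_ylim(0, 20)               # absolute scale: the change looks tiny
ax1.set_title("Absolute (deg C)")
ax2.plot(years, anoms)            # autoscaled anomaly axis: same curve, amplified
ax2.set_title("Anomaly (deg C)")
plt.tight_layout()
plt.show()
```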
3/ With the 1940-2024 range matched, the comparison improves. For a clearer view, monthly temps are shown on the top left, yearly in the middle, overlaying ERA5. Not a perfect overlay, but ERA5 is A) a cell average (of a weather model) and B) fed by adjusted data.
1/ Absolutely my worldview. But I haven't found a trace of it in temperature measurement. Accuracy doesn't seem to be a factor at all. Instead, they rely on bizarre software that arbitrarily alters the data. No station audits. No QMS in place. Nothing.
2/ This magic software even adjusts in different directions from day to day, with no explicit justification beyond the fact that it does so. Is the sensor accuracy changing day to day?? No.
This finding by @connolly_s is important: it exposes PHA as unrelated to measurement principles.
3/ Here's clear proof of failure. If the @noaa adjustments were correct, they'd bring the raw data closer to the high-quality USCRN reference station (designed to be bias/error free). Instead, PHA alters the classic (cheap) neighborhood station's raw data to be wrong, to be false.
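A sketch of that test (placeholder numbers; substitute real aligned monthly means for a USCRN/neighbor pair): if PHA were correcting errors, the adjusted series should sit closer to the reference than the raw one:

```python
import numpy as np

def rmse(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Placeholder values - replace with real co-located monthly means.
uscrn    = np.array([12.1, 13.0, 12.4, 12.8])  # USCRN reference
raw      = np.array([12.2, 13.1, 12.5, 12.9])  # neighbor station, raw
adjusted = np.array([12.6, 13.5, 12.9, 13.3])  # neighbor station, after PHA

r_raw, r_adj = rmse(raw, uscrn), rmse(adjusted, uscrn)
print(f"RMSE raw      vs USCRN: {r_raw:.2f} deg C")
print(f"RMSE adjusted vs USCRN: {r_adj:.2f} deg C")
print("PHA moved the station", "toward" if r_adj < r_raw else "away from",
      "the reference")
```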