1/ #oversterfte Netherlands: the mortality rates reported to EMA Pharmacovigilance (C19 vax) look bad, but they are in a range that shouldn't show up in total mortality. It would be a disaster if it did.
What we need are mortality rates by cause and age (e.g. cardiac...).
2/ NL 15-19 years: nothing to see here, except the MH17 incident in 2014.
3/ NL 20-24 years: nothing to see here, except the MH17 incident in 2014.
4/ NL 25-29 years: nothing to see here. The 2014 MH17 incident is still visible above normal rates.
5/ NL 30-34 years: nothing to see here. The 2014 MH17 incident is still visible.
6/ NL 35-39 years: nothing to see here.
7/ NL 40-44 years: nothing to see here.
8/ NL 45-49 years: nothing to see here.
9/ NL 50-54 years: nothing to see here.
10/ NL 55-59 years: nothing to see here.
11/ NL 60-64 years: nothing to see here. Seasonality starts to become visible in this age group.
12/ NL 65-69 years: seasonality visible. Also the sharp but short peak of the first C19 wave, but not at amplitudes that would justify a general panic.
13/ NL 70-74 years: seasonality visible. Sharp but short peak of the first C19 wave. The spring 2020 peak was higher (but shorter) than a typical flu wave; its total peak area is comparable to the 2018 flu season. The second wave, in autumn 2020, was longer.
14/ NL 75-79 years: same picture as 70-74. Spring 2020 peak higher but shorter than a typical flu wave; total peak area comparable to the 2018 flu season; autumn 2020 wave longer.
15/ NL 80-84 years: same picture as 75-79.
16/ NL 85-89 years: same picture as 75-79. (A peak-area sketch follows below.)
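For anyone who wants to check the "peak area" comparison themselves, here is a minimal Python sketch on synthetic weekly deaths. The baseline method (week-of-year mean over quiet years) and the window dates are my illustrative choices, not CBS's published method.

```python
import numpy as np
import pandas as pd

# Synthetic weekly deaths for one age band: a seasonal baseline plus a
# broad 2018 flu bump and a sharp, tall spring-2020 bump.
rng = np.random.default_rng(0)
weeks = pd.date_range("2015-01-01", "2021-01-01", freq="W")
deaths = 100 + 15 * np.cos(2 * np.pi * weeks.dayofyear / 365.25)  # winter high
deaths = deaths + rng.normal(0, 4, len(weeks))
deaths += 40 * np.exp(-0.5 * ((weeks - pd.Timestamp("2018-03-01")).days / 21) ** 2)
deaths += 70 * np.exp(-0.5 * ((weeks - pd.Timestamp("2020-04-01")).days / 10) ** 2)
s = pd.Series(np.asarray(deaths), index=weeks)

# Baseline: week-of-year mean over the pre-2018 "quiet" years.
quiet = s[:"2017"]
baseline = quiet.groupby(quiet.index.isocalendar().week).mean()

def peak_area(series, start, end):
    """Sum of weekly deaths above the seasonal baseline within [start, end]."""
    window = series[start:end]
    expected = window.index.isocalendar().week.map(baseline).to_numpy()
    return float(np.clip(window.to_numpy() - expected, 0, None).sum())

# Taller-but-narrower 2020 peak vs. flatter-but-wider 2018 peak:
# comparable integrated excess.
print("2018 flu wave area:  ", round(peak_area(s, "2018-01-15", "2018-04-30")))
print("2020 spring C19 area:", round(peak_area(s, "2020-03-01", "2020-06-01")))
```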
17/ Conclusion for 2021: 1) Seasons start at different times, so little can be said yet about the 21/22 season. 2) Background mortality is higher (even in the young) than any potential vax signal; it would be a disaster if the vax were visible in total mortality. 3) We need data by cause.
18/ To assess any vax safety issue in detail we need: 4) mortality figures by cause, gender, and age bin for the young cohorts (<65), e.g. cardiac events; 5) ICU data by cause, gender, and age for the young cohorts (<65), e.g. cardiac and thrombotic events.
1/ I was told non-US GHCN "raw" is adjusted already.
-----TRUE-----
Now I see it. Gosh.
Composite. Twice adjusted. NOAA doesn't even know where the non-US stations are, or what they're measuring. Their own US data (USCRN) is light-years better. But for "global"? It's clown tier.
2/ And here it is: the DOUBLE-adjusted COMPOSITE.
Not raw. I doubted @connolly_s at first, the way someone denies their second-hand car is stolen, crash-salvaged, and repainted twice. Turns out he was right.
NOAA's "global" QCU (non-US) is not raw.
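If you want to verify the "QCU is not raw" claim yourself, here's a sketch: parse the GHCN-M v4 fixed-width .dat files for one station and diff QCU ("unadjusted") against QCF (adjusted) month by month. The file names and the station ID below are placeholders; the column layout follows the GHCN-M v4 readme as I understand it.

```python
import pandas as pd

def read_ghcnm_tavg(path, station_id):
    """Parse monthly TAVG for one station from a GHCN-M v4 .dat file.
    Layout: 11-char ID, 4-char year, 4-char element, then 12 blocks of
    [5-char value in hundredths of deg C + 3 flag chars]; -9999 = missing."""
    rows = {}
    with open(path) as f:
        for line in f:
            if line.startswith(station_id) and line[15:19] == "TAVG":
                year = int(line[11:15])
                for m in range(12):
                    v = int(line[19 + 8 * m : 24 + 8 * m])
                    if v != -9999:
                        rows[pd.Timestamp(year, m + 1, 1)] = v / 100.0
    return pd.Series(rows).sort_index()

station = "XX000012345"  # placeholder station ID
qcu = read_ghcnm_tavg("ghcnm.tavg.v4.qcu.dat", station)  # "unadjusted"
qcf = read_ghcnm_tavg("ghcnm.tavg.v4.qcf.dat", station)  # adjusted

diff = (qcf - qcu).dropna()
print("months where adjusted != 'raw':", int((diff.abs() > 0.005).sum()))
```

This only shows where QCU and QCF diverge; the thread's point is that for non-US stations even QCU is itself already a composite, adjusted product, not station raw.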
3/ Credit where due.
Normally I block at the first bad-faith signal.
But intuition said: bait him back.
Let’s see what he hands over.
And he did:
✔ Clown location
✔ 120% urbanized
✔ Composite
✔ Adjusted twice
Thanks for the assist.
1/ The WMO's temperature-station classification study isn't glamorous reading, but it's the bare minimum anyone aggregating climate data should know about every single station. They don't.
2/ Class 1 is the "bare minimum" for climate-grade weather-station suitability, and even Class 1 only means "maybe OK". met.no/publikasjoner/…
I’ll be counting impressions. I’ll know if you didn’t read.
(You're allowed to have an LLM TL;DR it.)
Next up: NOAA climate site requirements (HLR). 👇 x.com/orwell2022/sta…
3/ The NOAA HLR system makes WMO classes look gentle.
Most stations? Fail spectacularly.
1/ Digging deeper, we find 3 USCRN sites with 2 IDs: a legacy historical one and a USCRN one. That's big. It means we can stitch together long-term time series for 3 "golden" stations. Why haven't @NOAA or @hausfath done this? Not the "right" narrative result? 🙃 Let's take a look.
2/ Here is an example of such a pair: STILLWATER. Note that you can see the wind fence around the precipitation gauge in the satellite picture (the round structure). ncei.noaa.gov/access/crn/pdf…
3/ Well, let’s do it. We try. And...
...no hockey stick.
Despite STILLWATER being a growing urban area.
So... where’s the hockey stick? Anyone?
We're told it should be there. But the best data says no.
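The stitching itself is simple. A sketch, assuming two monthly mean-temperature series are already loaded (the toy values below stand in for a pair like STILLWATER's legacy and USCRN records): estimate the offset over the overlap, shift the legacy record, and splice.

```python
import pandas as pd

def stitch(legacy: pd.Series, uscrn: pd.Series) -> pd.Series:
    """Splice a legacy station record onto its USCRN successor.
    The legacy series is shifted by the mean difference over the overlap,
    then USCRN values take precedence where both exist."""
    overlap = legacy.index.intersection(uscrn.index)
    if len(overlap) < 12:
        raise ValueError("need at least a year of overlap to estimate the offset")
    offset = (uscrn[overlap] - legacy[overlap]).mean()
    return uscrn.combine_first(legacy + offset).sort_index()

# Toy data: the USCRN instrument reads 0.4 C higher during the overlap.
legacy = pd.Series(10.0, index=pd.date_range("1950-01", "2012-12", freq="MS"))
uscrn = pd.Series(10.4, index=pd.date_range("2008-01", "2024-12", freq="MS"))
long_series = stitch(legacy, uscrn)
print(long_series.loc["1950-01-01"], long_series.loc["2024-12-01"])
```

A constant offset is the crudest possible join; with real data you'd at least check the overlap difference for seasonality before trusting the spliced trend.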
1/ Mr. @hausfath packed multiple fallacies into one graph. We replicate it: he used homogenized data, and we get the same result.
2/ Bottom right shows the raw data. His fallacy: claiming that USCRN-ClimDiv agreement in the modern era (where adjustments are ~zero) validates the strong adjustments of the past.
3/ His fallacy is blatant bad faith. Measurement validation isn't done by induction. He claims the adjustments are valid because USCRN and ClimDiv align from 2008 to 2024, yet no adjustments were applied in that period; from this he asserts the past adjustments are proven. Exceptional level of malice.
4/ Another fallacy: he cherry-picked 1970, the coldest point in 100 years, and highlights only the post-1970 warming in green, hiding earlier trends. But the real scandal? The extreme (false) pre-1970 adjustments, which erase the 1930s warmth with absurd corrections.
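The induction fallacy in 3/ can be made concrete with a toy example: build two "adjusted" versions of the same record that are identical after 2008 but apply opposite corrections before it. Both agree perfectly in the modern era, so modern-era agreement cannot validate either set of past adjustments.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
years = pd.RangeIndex(1900, 2025)
raw = pd.Series(rng.normal(0.0, 0.3, len(years)), index=years)

# Two hypothetical adjusted versions: untouched after 2008, but with
# opposite, arbitrary corrections before 2008.
adj_cool = raw.copy(); adj_cool[years < 2008] -= 0.5   # cools the past
adj_warm = raw.copy(); adj_warm[years < 2008] += 0.5   # warms the past

modern = years >= 2008
print("post-2008 max |cool - warm|:",
      (adj_cool[modern] - adj_warm[modern]).abs().max())
# -> 0.0: both versions "validate" equally well over 2008-2024, despite
# implying opposite century-scale trends. Agreement in the period where
# nothing was adjusted says nothing about the adjustments.
```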
1/ New tool. Let's test it with a VALENTIA (hourly) overlay: solid agreement. A model (ERA5) is only as good as the ground-truth measurements that constrain it. We saw good US results before, but an obvious heat bias in the polar regions, where there is nothing measured to compare against anyway.
2/ Now we match the 1940-2024 range. Note the temperature vs. anomaly scale: same curve, just shifted. A trick to amplify the apparent range; few notice. Climate stripes? Perfect for manipulation: e.g. add an offset (ECMWF) to make everything read red ("warm"). Behavioral science, i.e. manipulation.
3/ With the 1940-2024 range matched, the comparison improves. For a clearer view, monthly temps are shown top left, yearly in the middle, overlaying ERA5. Not a perfect overlay, but ERA5 is A) a cell average (of a weather model) and B) fed by adjusted data.
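On the temperature-vs-anomaly point in 2/: converting absolute temperatures to anomalies subtracts a single constant (the baseline-period mean), so the curve's shape is mathematically unchanged; only the y-axis labels and the zero line move. A short demonstration:

```python
import pandas as pd

monthly = pd.Series([9.8, 10.1, 10.4, 10.0, 10.6],
                    index=pd.period_range("2020-01", periods=5, freq="M"))
baseline = monthly.mean()          # stand-in for e.g. a 1961-1990 baseline
anomaly = monthly - baseline

# Same month-to-month changes, i.e. the same curve, just shifted:
assert ((monthly.diff() - anomaly.diff()).dropna().abs() < 1e-12).all()
print(anomaly)
```

The shift itself is harmless arithmetic; the manipulation risk lies entirely in the presentation, e.g. the baseline choice deciding what gets painted red.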
1/ Absolutely my worldview. But I haven't found a trace of it in temperature measurement. Accuracy doesn't seem to be a factor at all. Instead, they rely on bizarre software that arbitrarily alters the data. No station audits. No QMS in place. Nothing.
2/ This magic software even adjusts in different directions from day to day, without any explicit justification beyond the fact that it does so. Is the sensor's accuracy changing from day to day? No.
This finding by @connolly_s is important: it exposes the PHA (pairwise homogenization algorithm) as unrelated to measurement principles.
3/ Here's clear proof of failure. If the @noaa adjustments were correct, they'd bring the raw data closer to the high-quality USCRN reference station (designed to be bias- and error-free). Instead, the PHA alters the classic (cheap) neighborhood station's raw data to be wrong, to be false.
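Here is a sketch of the day-to-day check from 2/, with synthetic stand-ins for the raw and adjusted series (real input would be a station's raw and PHA-adjusted daily files): difference the two series and count how often the applied "correction" itself changes.

```python
import numpy as np
import pandas as pd

# Synthetic daily Tmax plus an erratic "correction" that changes day to day.
rng = np.random.default_rng(2)
days = pd.date_range("2000-01-01", periods=3650, freq="D")
tmax_raw = pd.Series(15 + rng.normal(0, 3, len(days)), index=days)
tmax_adj = tmax_raw + rng.choice([-0.3, 0.0, 0.2], len(days))

adjustment = tmax_adj - tmax_raw
changes = adjustment.diff().dropna()
print("days on which the adjustment itself changed:", int((changes != 0).sum()))
print("sign flips in the adjustment:",
      int(((adjustment.shift() * adjustment) < 0).sum()))
# A physically motivated correction (sensor swap, station move, new shelter)
# should be piecewise-constant or slowly varying, not flipping daily.
```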