1) The CDC report is well done and properly normalized. No further calculations are needed; it can be used as is.
2) The German report is poor: no normalization by doses, nor by sex. Overall, the report is of very low quality.
4/ So for the German PEI report, some calculations and estimates were needed. The administered doses per age group are not published either and had to be estimated.
Approach: take the German 2019 population pyramid and multiply by an (estimated) vaccination rate. Here is the result.
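A minimal sketch of that estimation in Python; the age bands, population counts, and rates below are placeholders I made up for illustration, not the actual figures:

```python
# Sketch: doses per age group ≈ 2019 population in that group
# × estimated vaccination rate × doses per person.
# All numbers below are illustrative placeholders, not the real figures.

population_2019 = {          # German 2019 population pyramid (persons, hypothetical)
    "18-29": 11_000_000,
    "30-49": 21_000_000,
    "50-59": 13_000_000,
    "60+":   24_000_000,
}
vaccination_rate = {         # estimated share vaccinated per group (hypothetical)
    "18-29": 0.70,
    "30-49": 0.75,
    "50-59": 0.85,
    "60+":   0.90,
}
DOSES_PER_PERSON = 2         # assume a two-dose schedule

estimated_doses = {
    group: population_2019[group] * vaccination_rate[group] * DOSES_PER_PERSON
    for group in population_2019
}
for group, doses in estimated_doses.items():
    print(f"{group}: ~{doses:,.0f} doses")
```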
5/ Remarks: Germany hasn't issued a general recommendation for vaccinating children under 18.
Thanks to #STIKO, they didn't follow the unethical example of the CDC.
That's why the German curve begins at 18; the data below 18 is, fortunately, not available.
6/ The German PEI reports are generally of low quality (lacking dose/age normalization, etc.).
The comparison with the high-quality CDC report shows this very clearly. But judge for yourself.
9/ Bonus round: why vaccinating children is unethical.
Using the same type of normalization as described above, I estimated the age-dependent vaccine CFR from the German PEI report and compared it with the C19 IFR for the healthy. IFR source below:
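As an illustration of that normalization, a minimal Python sketch; every number here is an invented placeholder, not a value from the PEI report or the IFR source:

```python
# Sketch: reported deaths per administered dose, compared against a
# published C19 IFR for the healthy. All inputs are illustrative
# placeholders, not values from the PEI report.

reported_deaths = {"18-29": 5, "30-49": 20, "50-59": 30, "60+": 120}      # hypothetical
estimated_doses = {"18-29": 15e6, "30-49": 31e6, "50-59": 22e6, "60+": 43e6}
ifr_healthy = {"18-29": 1e-5, "30-49": 5e-5, "50-59": 2e-4, "60+": 1e-3}  # hypothetical

for group in reported_deaths:
    vaccine_cfr = reported_deaths[group] / estimated_doses[group]
    ratio = vaccine_cfr / ifr_healthy[group]
    print(f"{group}: reported deaths per dose = {vaccine_cfr:.2e}, "
          f"ratio to IFR = {ratio:.2f}")
```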
10/ All safeguards have fallen. This is unprecedented: politics now makes medical decisions and ignores #STIKO. It looks like the kids must be vaccinated at all costs. And rumors say that STIKO will change its recommendation. Based on new data? No.
Clearly, CDC US and PEI DE have an underreporting issue, while Canada and Israel show similar (higher) numbers.
What about @Lareb_NL? Nothing to report? Silence?
14/ The potential root cause for the age curve has been discussed by @bringsmileback here: a combination of testosterone and a stronger Th1 inflammatory response in young boys compared to older men:
1/ Mr. @hausfath packed multiple fallacies into one graph. We replicate it: using his homogenized data, we get the same result.
Bottom right shows the raw data. His fallacy: claiming that USCRN-ClimDiv agreement in the modern era (where adjustments are ~zero) validates the strong past adjustments.
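A minimal sketch of why that comparison says nothing about the past, assuming two monthly anomaly series indexed by date (hypothetical inputs, not the actual USCRN/ClimDiv files):

```python
# Sketch: if USCRN and (adjusted) ClimDiv only overlap from 2008 on,
# agreement in that window cannot validate pre-2008 adjustments,
# because the adjustments inside the window are ~zero.
import pandas as pd

def modern_era_agreement(uscrn: pd.Series, climdiv: pd.Series,
                         start="2008-01", end="2024-12") -> float:
    """Mean absolute difference over the overlap window only."""
    u = uscrn.loc[start:end]   # assumes a date-like index (hypothetical data)
    c = climdiv.loc[start:end]
    return (u - c).abs().mean()

# A tiny value returned here proves nothing outside [start, end]:
# no datum before 2008 is touched by this comparison.
```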
3/ His fallacy is blatant bad faith. Measurement validation isn't done by induction. He claims the adjustments are valid because USCRN and ClimDiv align from 2008 to 2024, yet no adjustments were made in that period. Then he asserts the past adjustments are proven. An exceptional level of malice.
4/ Another fallacy: he cherry-picked 1970, the coldest point in 100 years, and highlights only the post-1970 warming in green, hiding the earlier trends. But the real scandal? The extreme (false) pre-1970 adjustments, which erase the 1930s warmth with absurd corrections.
1/ New tool. Let's test it with a VALENTIA (hourly) overlay: solid agreement. A model (ERA5) is only as good as the ground-truth measurements that constrain it. We saw good US results before, but an obvious heat bias in the polar regions, where there is nothing measured to compare against anyway.
2/ Now we match the 1940-2024 range. Note the temperature vs. anomaly scale: same curve, just shifted. A trick to amplify the apparent range; few notice. Climate stripes? Perfect for manipulation, e.g. adding an offset (ECMWF) so everything reads red ("warm"). That is behavioral science (manipulative).
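A minimal sketch of the "same curve, just shifted" point, with toy numbers rather than station data:

```python
# Sketch: an anomaly series is the temperature series minus a baseline
# mean, so the shape is identical and only the axis label changes.
import numpy as np

temps = np.array([9.8, 10.1, 10.4, 9.9, 10.6, 10.3])   # yearly means, °C (toy)
baseline = temps[:3].mean()                             # some reference period
anomalies = temps - baseline                            # identical curve, shifted

print(np.allclose(np.diff(temps), np.diff(anomalies)))  # True: same shape
```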
3/ With the 1940–2024 range matched, the comparison improves. For a clearer view, monthly temps are shown in the top left and yearly in the middle, overlaying ERA5. Not a perfect overlay, but ERA5 is A) a cell average (of a weather model) and B) fed by adjusted data.
1/ Absolutely my worldview. But I haven't found a trace of it in temperature measurements. Accuracy doesn't seem to be a factor at all. Instead, they rely on bizarre software that arbitrarily alters the data. No station audits. No QMS. Nothing.
2/ This magic software even adjusts in different directions from day to day, without any explicit justification for doing so. Is the sensor accuracy changing day to day? No.
This finding by @connolly_s is important: it exposes PHA as unrelated to measurement principles.
3/ Here's clear proof of failure. If the @noaa adjustments were correct, they'd bring the raw data closer to the high-quality USCRN reference station (designed to be bias- and error-free). Instead, PHA alters the classic (cheap) neighborhood station's raw data to be wrong, to be false.
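A minimal sketch of that test, with toy arrays standing in for the three series (not real station data):

```python
# Sketch: if PHA were correcting real errors, the adjusted series should
# sit closer to the co-located USCRN reference than the raw series does.
import numpy as np

uscrn    = np.array([10.0, 10.2, 10.1, 10.4])   # reference (toy)
raw      = np.array([10.1, 10.3, 10.0, 10.5])   # neighbor station, raw (toy)
adjusted = np.array([ 9.6,  9.8,  9.7, 10.0])   # neighbor after PHA (toy)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

print("raw vs USCRN:     ", rmse(raw, uscrn))
print("adjusted vs USCRN:", rmse(adjusted, uscrn))
# If the second number is larger, the adjustment moved the data away
# from the reference instead of toward it.
```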
1/ The temperature (USCRN) since the 2014/2015 El Niño has been stable and slightly declining (cooling). Yet, we’re witnessing an unprecedented surge in mania. Interesting, isn’t it? Let’s demonstrate this by exposing the bias in GPT. We’ll trick it. Ready?
2/ To force it to be honest, we'll deactivate its ideological filters by relabeling the USCRN anomalies as a portfolio value (adding 30 to shift everything above zero). This way, it will think it's analyzing the performance of an automated trading product from my bank.
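A minimal sketch of the relabeling, with toy anomaly values rather than the actual USCRN series:

```python
# Sketch: shift the anomaly series by +30 so every value is positive,
# then present it as a fund's value over time. The shape (including any
# post-El-Niño decline) is untouched by the constant offset.
import pandas as pd

anomalies = pd.Series([-0.5, 1.2, 0.8, 0.6, 0.4],
                      index=[2014, 2016, 2018, 2020, 2022])  # toy values
portfolio = anomalies + 30          # "portfolio value", arbitrary units
portfolio.name = "fund_value"
print(portfolio)                    # same trend, neutral framing
```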
Hah. Gotcha. Down☺️
3/ We do the same with Google Trends "warmest month" data for the US. We relabel it as "all-time high" mentions.
Revisiting Las Vegas: the observed warming trend is likely driven primarily by urbanization. Unfortunately, there is no data available from before the 1970s, and the period of overlap with USCRN station records is short, which limits long-term analysis.
We look closer and see that there has been no warming for 10 years: flat. To make the Urban Heat Island (UHI) effect in Vegas clearer, we compare it with Gallup Muni AP🔵, a highly rural spot (BU 2020 <<1%). This highlights urbanization's impact in VEGAS🔴.
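A minimal sketch of that urban-vs-rural comparison, with toy yearly means (hypothetical, not the real Vegas or Gallup records):

```python
# Sketch: difference the urban and rural series over the common window;
# a growing gap is the urbanization (UHI) signal.
import numpy as np

years  = np.arange(2005, 2015)
vegas  = 20.0 + 0.05 * (years - years[0])   # warming urban site (toy)
gallup = 12.0 + 0.00 * (years - years[0])   # flat rural site (toy)

uhi_signal = (vegas - vegas[0]) - (gallup - gallup[0])
trend = np.polyfit(years, uhi_signal, 1)[0]
print(f"urban-minus-rural trend: {trend:.3f} °C/yr")   # toy: ~0.05
```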
It gets worse. I aimed to expose the adjustment fallacy on the number of frost days, i.e. to show that the F77 software will tell us the frozen thermometers weren't frozen... and guess what? The GHCN adjusted "qcf" data only exists AFTER aggregation to monthly values.
Simply unbelievable! The claim is that adjustments only exist after temporal aggregation. Yet when asked, "show me location X on day Y," they can't explain what went wrong or why.
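A minimal sketch of why the frost-day question dies at monthly aggregation (toy daily minima, not GHCN data):

```python
# Sketch: frost days are a daily-minimum statistic; once days are
# averaged into a month, the count is unrecoverable.
import numpy as np

daily_tmin = np.array([2.0, -1.5, 0.5, -0.2, 3.0, -4.0, 1.0])  # °C (toy)
frost_days = int((daily_tmin < 0).sum())   # 3, visible only in daily data
monthly_mean = daily_tmin.mean()           # ~0.11 °C: frost count is gone

print(frost_days, round(monthly_mean, 2))
# From 0.11 °C alone you cannot say whether any thermometer froze,
# which is exactly why "show me location X on day Y" has no answer.
```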
Untestable locally or in time?? Simply diabolical malice.