So let's add storage (50% round-trip efficiency). 150 such parks. That makes 150 x 200 = 30,000 ha (minimum), roughly the total area of Munich. PV is trash after 30 years and needs continuous replacement (fresh fossil-powered mining + industry) somewhere in the world.
Germany's total primary power need is 460 GW. You would need 460/1.4 x 30,000 ha ≈ 100,000 km² of PV park surface, about 1.5x Bavaria (1.5 times the red area here). Sounds like a plan.
The Great Wall is the largest man-made project in the world: ~20,000 km long, about 2,000 km² of footprint. Germany would beat it 50x with the Great Solar Park: 100,000 km², fifty Great Walls of China. Lifetime: 30 years only.
1M km of 100 m-wide PV, more than twice the Earth-Moon distance.
We can do it 🙂
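For anyone who wants to check the arithmetic, here is a back-of-envelope sketch in Python. The park size, storage factor, and 460 GW demand come from the thread above; the reference areas (Bavaria ~70,550 km², Great Wall ~20,000 km x ~100 m, Earth-Moon ~384,400 km) are public figures:

```python
# Back-of-envelope check of the Great Solar Park arithmetic from the thread.
PARK_AREA_HA = 200      # one PV park
N_PARKS = 150           # parks needed for 1.4 GW delivered, incl. 50% storage losses
DELIVERED_GW = 1.4      # average power those 150 parks deliver
DEMAND_GW = 460         # German total primary power need

base_area_ha = N_PARKS * PARK_AREA_HA                            # 30,000 ha, ~ Munich (~310 km^2)
total_area_km2 = DEMAND_GW / DELIVERED_GW * base_area_ha / 100   # 100 ha = 1 km^2

BAVARIA_KM2 = 70_550
GREAT_WALL_KM2 = 20_000 * 0.1      # ~20,000 km long x ~100 m footprint
EARTH_MOON_KM = 384_400

print(f"PV park surface: {total_area_km2:,.0f} km^2")               # ~99,000 km^2
print(f"= {total_area_km2 / BAVARIA_KM2:.1f}x Bavaria")             # ~1.4x
print(f"= {total_area_km2 / GREAT_WALL_KM2:.0f}x the Great Wall")   # ~50x
strip_km = total_area_km2 / 0.1                                     # as a 100 m wide strip
print(f"= a {strip_km:,.0f} km strip, {strip_km / EARTH_MOON_KM:.1f}x Earth-Moon")  # ~2.6x
```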
A Dutch engineer showed a calculation that the Netherlands does not have sufficient area (including the complete Dutch North Sea sector) to produce sufficient energy. The renewables-only Amish utopia works only if the Randstad emigrates to Africa and just the farmers stay. Without fossil fuels or nuclear, there is no Dutch society.
To the moon and back. That will be generation 2 of the Great Solar Wall (you need to build a new one every 30 years). Generation 3 will be on the moon (problem: the lunar night lasts 15 days).
The Tower of Babel project (to reach the sun god) can begin.
1/ I was told non-US GHCN "raw" is adjusted already.
-----TRUE-----
Now I see it. Gosh.
Composite. 2x adjusted. NOAA doesn't even know where non-US stations are, or what they're measuring. Their own US data (USCRN) is light-years better. But for "global"? It's clown-tier.
2/ And here it is: the DOUBLE-adjusted COMPOSITE.
Not raw. I doubted @connolly_s at first, like someone denying their second-hand car is stolen, crash-salvaged, and repainted twice. Turns out he was right.
NOAA's "global" QCU (non-US) is not raw.
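If you want to see this yourself, here is a minimal sketch for diffing a station's QCU ("unadjusted") record against QCF (adjusted). The fixed-width layout follows my reading of the GHCN-M v4 readme (11-char ID, year, element, then twelve value+flag groups in hundredths of °C, -9999 = missing); the file paths and station ID are placeholders, and note the full test of "QCU is not raw" would be diffing QCU against the original national source record:

```python
def read_ghcnm_dat(path: str, station_id: str) -> dict:
    """Parse one station's monthly TAVG series from a GHCN-M v4 .dat file.

    Layout (per the GHCN-M v4 readme): cols 1-11 station ID, 12-15 year,
    16-19 element, then 12 x (5-char value in 0.01 C + 3 flag chars).
    -9999 marks a missing month. Returns {(year, month): deg_C}.
    """
    series = {}
    with open(path) as f:
        for line in f:
            if not line.startswith(station_id) or line[15:19] != "TAVG":
                continue
            year = int(line[11:15])
            for m in range(12):
                value = int(line[19 + m * 8 : 24 + m * 8])
                if value != -9999:
                    series[(year, m + 1)] = value / 100.0
    return series

# Placeholder paths (extract the qcu/qcf archives first) and an assumed example ID.
qcu = read_ghcnm_dat("ghcnm.tavg.qcu.dat", "NLM00006260")
qcf = read_ghcnm_dat("ghcnm.tavg.qcf.dat", "NLM00006260")

diffs = {k: round(qcf[k] - qcu[k], 2) for k in qcu.keys() & qcf.keys() if qcf[k] != qcu[k]}
print(f"{len(diffs)} months differ between QCU and QCF for this station")
```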
3/ Credit where due.
Normally I block at the first bad-faith signal.
But intuition said: bait him back.
Let’s see what he hands over.
And he did:
✔ Clown location
✔ 120% urbanized
✔ Composite
✔ Adjusted twice
Thanks for the assist.
1/ The WMO's temperature station classification study isn't glamorous reading, but it's the bare minimum anyone aggregating climate data should know about every single station. They don't.
2/ Class 1 is the "bare minimum" for climate-grade weather station suitability; even Class 1 only means "maybe OK". met.no/publikasjoner/…
I’ll be counting impressions. I’ll know if you didn’t read.
(You're allowed to LLM TL;DR it.)
Next up: NOAA climate site requirements (HLR). 👇 x.com/orwell2022/sta…
3/ The NOAA HLR system makes the WMO classes look gentle.
Most stations? They fail spectacularly.
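To make the classification idea concrete, here's a toy sketch in the spirit of the WMO siting classes. The thresholds are illustrative placeholders only, not the official WMO-No. 8 (Leroy 2010) criteria, which grade several more parameters (slope, shading, vegetation, heat-source distance, artificial surfaces):

```python
def siting_class(heat_source_dist_m: float, artificial_pct_10m: float) -> int:
    """Toy WMO-style siting class for a temperature sensor.

    NOTE: the thresholds below are illustrative placeholders, NOT the
    official WMO-No. 8 / Leroy (2010) tables.
    """
    if heat_source_dist_m >= 100 and artificial_pct_10m == 0:
        return 1  # climate-grade; per the thread, even this only means "maybe OK"
    if heat_source_dist_m >= 30 and artificial_pct_10m < 5:
        return 2
    if heat_source_dist_m >= 10 and artificial_pct_10m < 30:
        return 3
    if artificial_pct_10m < 50:
        return 4  # expect siting bias of a degree or more
    return 5      # the sensor measures its surroundings, not the climate

# A station 8 m from a building with 60% paved surroundings lands in Class 5.
print(siting_class(heat_source_dist_m=8, artificial_pct_10m=60))
```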
1/ Digging deeper, we find 3 USCRN sites with 2 IDs each: a legacy historical ID and a USCRN ID. That's big. It means we can stitch together long-term time series for 3 "golden" stations. Why haven't @NOAA or @hausfath done this? Not the "right" narrative result? 🙃 Let's take a look.
2/ Here is an example of such a pair: STILLWATER. Note that you can see the wind fence around the precipitation gauge in the satellite picture, that round structure. ncei.noaa.gov/access/crn/pdf…
3/ Well, let’s do it. We try. And...
...no hockey stick.
Despite STILLWATER being a growing urban area.
So... where’s the hockey stick? Anyone?
We're told it should be there. But the best data says no.
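A minimal sketch of the stitching step, assuming you've exported the legacy record and the USCRN record for STILLWATER as annual-mean CSVs (file names and column names here are hypothetical). Any offset between the two instruments is estimated from the overlap years and removed before splicing:

```python
import pandas as pd

# Hypothetical exports: columns "year" and "tavg" (deg C), one row per year.
legacy = pd.read_csv("stillwater_legacy.csv", index_col="year")["tavg"]
uscrn = pd.read_csv("stillwater_uscrn.csv", index_col="year")["tavg"]

# Estimate the instrument/siting offset from the years both records cover.
overlap = legacy.index.intersection(uscrn.index)
offset = (uscrn.loc[overlap] - legacy.loc[overlap]).mean()
print(f"offset over {len(overlap)} overlap years: {offset:+.2f} C")

# Shift the legacy series onto the USCRN reference, then splice the two.
stitched = pd.concat(
    [(legacy + offset).drop(overlap, errors="ignore"), uscrn]
).sort_index()
print(stitched.describe())
```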
1/ Mr. @hausfath packed multiple fallacies into one graph. We replicate: he used homogenized data, and we get the same result.
2/ Bottom right shows the raw data. His fallacy: claiming that USCRN-ClimDiv agreement in the modern era (where adjustments are ~zero) validates the strong past adjustments.
3/ His fallacy is blatant bad faith. Measurement validation isn't done by induction. He claims the adjustments are valid because USCRN and ClimDiv align over 2008-2024, yet no adjustments were made in that period. Then he asserts the past adjustments are thereby proven. An exceptional level of malice.
4/ Another fallacy: he cherry-picked 1970, the coldest point in 100 years, as his starting point. He highlights only post-1970 warming in green, hiding earlier trends. But the real scandal? The extreme (false) pre-1970 adjustments, erasing the 1930s warmth with absurd corrections.
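The induction fallacy is easy to quantify: compute the adjustment actually applied (homogenized minus raw) per year and see where it is non-zero. If it is ~0 across 2008-2024, agreement there says nothing about large pre-1970 corrections. A sketch assuming hypothetical CSV exports of ClimDiv-style raw and homogenized annual anomalies:

```python
import pandas as pd

# Hypothetical exports: columns "year" and "anomaly" (deg C).
raw = pd.read_csv("climdiv_raw.csv", index_col="year")["anomaly"]
adj = pd.read_csv("climdiv_homogenized.csv", index_col="year")["anomaly"]

adjustment = (adj - raw).dropna()  # what homogenization actually changed

# Mean absolute adjustment per decade: the validation period vs. the past.
by_decade = adjustment.abs().groupby(adjustment.index // 10 * 10).mean()
print(by_decade)
# Agreement with USCRN over 2008-2024 only tests the ~zero adjustments there;
# it cannot validate much larger adjustments in earlier decades.
```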
1/ New tool. Let's test with a VALENTIA (hourly) overlay: solid agreement. A model (ERA5) is only as good as the ground-truth measurements that constrain it. We saw good US results before, but an obvious heat bias in polar regions, where there's nothing measured to compare against anyway.
2/ Now we match the 1940-2024 range. Note the temperature vs. anomaly scale: same curve, just shifted. A trick to amplify the range; few notice. Climate stripes? Perfect for manipulation, e.g. adding an offset (ECMWF) so everything reads red = "warm". That's behavioral science (manipulative).
3/ With the 1940-2024 range matched, the comparison improves. For a clearer view, monthly temps are shown on the top left and yearly temps in the middle, overlaying ERA5. Not a perfect overlay, but ERA5 is A) a cell average (of a weather model) and B) fed by adjusted data.
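The temperature-vs-anomaly point is worth demonstrating: an anomaly series is the same curve with a constant subtracted, so only the axis changes, but an auto-scaled anomaly axis makes the same wiggles look dramatic. A self-contained sketch with synthetic data:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
years = np.arange(1940, 2025)
temps = 10.5 + 0.008 * (years - 1940) + rng.normal(0, 0.4, years.size)  # synthetic deg C

baseline = temps[(years >= 1961) & (years <= 1990)].mean()
anoms = temps - baseline  # identical shape, merely shifted by the baseline

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(years, temps)
ax1.set_ylabel("deg C (absolute)")
ax1.set_ylim(0, 20)                 # honest absolute scale: the wiggle looks small
ax2.plot(years, anoms)
ax2.set_ylabel("deg C (anomaly)")   # auto-scaled: the same wiggle fills the panel
plt.show()
```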
1/ Absolutely my worldview. But I haven't found a trace of it in temperature measurements. Accuracy doesn't seem to be a factor at all. Instead, they rely on bizarre software that arbitrarily alters the data. No station audits. No QMS exists. Nothing.
2/ This magic software even adjusts in different directions from day to day, with no explicit justification beyond the fact that it does so. Is the sensor's accuracy changing from day to day? No.
This finding by @connolly_s is important: it exposes PHA as unrelated to measurement principles.
3/ Here's clear proof of failure. If the @noaa adjustments were correct, they'd bring the raw data closer to the high-quality USCRN reference station (designed to be bias- and error-free). Instead, PHA alters the classic (cheap) neighbor station's raw data to be wrong, to be false.
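That test in code form: a minimal sketch, assuming hypothetical CSV exports of the neighbor station's raw and PHA-adjusted monthly means plus the co-located USCRN reference. If PHA were correcting real errors, the adjusted series should sit closer to USCRN than the raw one:

```python
import pandas as pd

# Hypothetical exports: columns "date" and "tavg" (deg C), monthly means.
load = lambda p: pd.read_csv(p, index_col="date", parse_dates=True)["tavg"]
raw, adjusted, uscrn = (load(p) for p in
                        ("neighbor_raw.csv", "neighbor_pha.csv", "uscrn_ref.csv"))

common = raw.index.intersection(adjusted.index).intersection(uscrn.index)
rmse = lambda a, b: ((a.loc[common] - b.loc[common]) ** 2).mean() ** 0.5

print(f"raw      vs USCRN: {rmse(raw, uscrn):.3f} C")
print(f"adjusted vs USCRN: {rmse(adjusted, uscrn):.3f} C")
# A valid adjustment should shrink the second number; the thread's claim is
# that PHA often increases it, moving good raw data away from the reference.
```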