1/ A great summary! Having peer-reviewed many papers in the past, I can't leave this uncommented. There is just too much truth in it. But also many things missing. @markdhumphries
2/ "only one of Einstein’s 300 or so published papers was ever peer-reviewed, which so disgusted him that he never submitted a paper to that journal again."
He was not alone. Nature rejected Kary Mullis's PCR paper (later awarded the Nobel Prize).
3/ Peer review is nothing more than "please have a look". It's a basic check, not a quality endorsement. Most papers I received were low-quality Chinese papers pushing into high-end journals like Phys. Rev. B or Phys. Rev. Letters. I rejected (or redirected elsewhere) most of them.
4/ It was clear that pushing low quality into high-end journals was about reputation and money. It's a quantitative money game, driven by the sick funding process in science. The more I rejected (or redirected elsewhere), the more I received from Phys. Rev. I noticed this empirically.
5/ Other reviewers may not be as critical, so the flooding tactic into the high end obviously works by luck (catching e.g. a lazy "OK" reviewer). For my own papers, I considered such high-end flooding tactics immoral to engage in. Nice small conferences are fine for me too.
6/ "much peer review is aggressive, rude, lazy, or just plain bad.".
You nailed it!
We don't get paid for this, so what do you expect? Quality? Most papers are bad, so proofreading them is neither fun nor a popular task. 99.99..% of papers are not groundbreaking discoveries.
7/ When a paper drops in for review, what is more likely? A) You drop your work, or B) you pass it on to a PhD student? At some point, when Phys. Rev. sent too many, I started to reduce, reject, or pass them on. Checking the "not my field" box was the fastest way out for boring papers.
8/ Peer review is NOT a quality stamp nor a "certification", as mainstream media claimed during the COVID mania.
"Does it stop a plainly wrong or plainly nonsense paper from being published? No"
9/ The article forgot to mention another issue: rivalry between competing groups. Dirty games may be played on the high-end front, such as rejecting a paper in order to publish ahead of it. At least that's what rumors say about high-impact publications on Moore's-law research. I haven't seen it myself.
10/ Academic integrity and courage at the level of @ConceptualJames, @BretWeinstein, @peterboghossian, or @SwipeWright is exceptionally rare. They deserve a big thank you in these sinister "post-factual" propaganda times of politicized science.
12/ The weak point seems to be at the editorial level. Once an agenda-pushing admin lands in such a post, it's game over, in science and in media alike. A nice example is @ggreenwald (also a shining star), who resigned from the outlet he co-founded. theguardian.com/media/2020/oct…
13/ Team #DRASTIC has shown us the pathway to the future. It's time to scrap and wrap up the dead dinosaurs, both in media and in science journals.
Ideally, we should have a blockchain-based, uncensorable version of Twitter for science, with a built-in pre-print database.
14/ Closing words: "Satoshi Nakamoto's" un-reviewed #bitcoin paper provided a solution to a long-unsolved mathematical problem, the #Byzantine Generals' Problem. A major mathematical discovery with disruptive impact on society. bitcoin.org/bitcoin.pdf link.medium.com/8tpn7lYHWgb
1/ Digging deeper, we find 3 USCRN sites with 2 IDs each: a legacy historical ID and a USCRN ID. That's big. It means we can stitch together long-term time series for 3 "golden" stations. Why haven't @NOAA or @hausfath done this? Not the "right" narrative result? 🙃 Let's take a look.
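The stitching idea above can be sketched in a few lines. A minimal sketch with made-up monthly means for one hypothetical "golden" site (the station values, offset method, and pandas usage are all illustration assumptions, not NOAA's actual procedure):

```python
import pandas as pd

# Made-up monthly means for one hypothetical paired site:
# a legacy/historical ID and a co-located USCRN ID.
legacy = pd.Series(
    [10.1, 11.3, 12.0, 11.7],
    index=pd.period_range("2008-01", periods=4, freq="M"),
)
uscrn = pd.Series(
    [12.1, 11.8, 12.4],
    index=pd.period_range("2008-03", periods=3, freq="M"),
)

# Estimate a constant offset from the overlapping months, then stitch:
# shifted legacy data before the overlap, USCRN from the overlap onward.
overlap = legacy.index.intersection(uscrn.index)
offset = (uscrn[overlap] - legacy[overlap]).mean()
stitched = pd.concat([legacy[~legacy.index.isin(uscrn.index)] + offset, uscrn])

print(round(offset, 2), len(stitched))  # → 0.1 5
```

A real stitch would of course need a longer overlap and a check that the offset is stable across seasons, not a single constant.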
2/ Here is an example of such a pair: STILLWATER. Note that you can see the wind fence around the precipitation gauge in the satellite picture (that round structure). ncei.noaa.gov/access/crn/pdf…
3/ Well, let’s do it. We try. And...
...no hockey stick.
Despite STILLWATER being a growing urban area.
So... where’s the hockey stick? Anyone?
We're told it should be there. But the best data says no.
1/ Mr. @hausfath packed multiple fallacies into one graph. We replicate it: he used homogenized data, and we get the same result.
2/ Bottom right shows the raw data. His fallacy: claiming that USCRN–ClimDiv agreement in the modern era (where adjustments are ~zero) validates the strong past adjustments.
3/ His fallacy is blatant bad faith. Measurement validation isn't done by induction. He claims the adjustments are valid because USCRN and ClimDiv align from 2008 to 2024, yet no adjustments were made in that period. Then he asserts that past adjustments are thereby proven. An exceptional level of malice.
4/ Another fallacy: He cherry-picked 1970—the coldest point in 100 years. He highlights only post-1970 warming in green, hiding earlier trends. But the real scandal? Extreme (false) pre-1970 adjustments, erasing the 1930s warmth with absurd corrections.
1/ New tool. Let's test it with a VALENTIA (hourly) overlay: solid agreement. A model (ERA5) is only as good as the ground-truth measurements that constrain it. We saw good US results before, but an obvious heat bias in the polar regions; there's nothing measured to compare with there anyway.
2/ Now we match the 1940–2024 range. Note the temperature vs. anomaly scale: it's the same curve, just shifted. A trick to amplify the apparent range; few notice. Climate stripes? Perfect for manipulation, e.g. adding an offset (ECMWF) so everything shows red ("= warm"): behavioral-science-style manipulation.
3/ With the 1940–2024 range matched, comparison improves. For a clearer view, monthly temps are shown on top left, yearly in the middle—overlaying ERA5. Not perfect overlay, but ERA5 is A) a cell average (of a weather model) and B) fed by adjusted data.
1/ Absolutely my worldview. But I haven't found a trace of it in temperature measurements. Accuracy doesn't seem to be a factor at all. Instead, they rely on bizarre software that arbitrarily alters the data. No station audits. No QMS. Nothing.
2/ This magic software even adjusts in different directions from day to day, without any explicit justification. Is the sensor's accuracy changing day to day? No.
This finding by @connolly_s is important and exposes PHA as unrelated to measurement principles.
3/ Here's clear proof of failure. If the @noaa adjustments were correct, they'd bring the raw data closer to the high-quality USCRN reference station (designed to be bias- and error-free). Instead, PHA alters the classic (cheap) neighborhood station's raw data away from the reference, making it wrong.
1/ The temperature (USCRN) since the 2014/2015 El Niño has been stable and slightly declining (cooling). Yet, we’re witnessing an unprecedented surge in mania. Interesting, isn’t it? Let’s demonstrate this by exposing the bias in GPT. We’ll trick it. Ready?
2/ To force it to be honest, we'll deactivate the ideological filters by relabeling USCRN anomalies as portfolio values (adding 30 to shift everything above zero). This way, it will think it's analyzing the performance of an automated trading product from my bank.
Hah. Gotcha. Down☺️
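The relabeling trick above works because adding a constant cannot change a trend. A minimal sketch with made-up anomaly numbers (only the +30 offset comes from the tweet; the values and the slope helper are illustration assumptions):

```python
anoms = [0.8, 0.6, 0.7, 0.5, 0.4]    # made-up post-2015 anomalies, °C
fund  = [a + 30 for a in anoms]      # relabeled as "portfolio value"

def ols_slope(y):
    """Least-squares slope against x = 0, 1, ..., n-1."""
    n = len(y)
    xbar, ybar = (n - 1) / 2, sum(y) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

# The constant offset leaves the slope untouched:
assert abs(ols_slope(anoms) - ols_slope(fund)) < 1e-6
print(ols_slope(fund) < 0)  # → True: the "fund" is declining
```

So whatever trend a model reads off the "fund" is exactly the trend of the underlying anomalies.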
3/ We do the same with Google Trends data for "warmest month" in the US, relabeling it as "all-time high" mentions.
Revisiting Las Vegas: the observed warming trend is likely driven primarily by urbanization. Unfortunately, there is no data available from before the 1970s, and the period of overlap with USCRN station records is short, which limits our ability to analyze the long-term trend.
We look closer and see that for 10 years there's been no warming: flat. To make the Urban Heat Island (UHI) effect in Vegas clearer, we compare it with Gallup Muni AP🔵, a highly rural spot (BU 2020 << 1%). This highlights urbanization's impact in VEGAS🔴.
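The urban-vs-rural comparison above boils down to a difference series: both stations share the regional climate, so a widening urban-minus-rural gap points at urbanization, not regional warming. A sketch with made-up yearly means (station names as in the tweet; the numbers are illustration assumptions):

```python
vegas  = [18.0, 18.4, 18.9, 19.3]   # made-up yearly means, urban station, °C
gallup = [11.0, 11.1, 11.1, 11.2]   # made-up yearly means, rural station, °C

# Urban-minus-rural difference series: a widening gap suggests UHI,
# since regional climate variability affects both stations alike.
uhi = [round(u - r, 2) for u, r in zip(vegas, gallup)]
print(uhi)  # → [7.0, 7.3, 7.8, 8.1]
```

In this toy case the gap grows about 1 °C over four years, which is the UHI signature the comparison is designed to expose.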