🧵
Last month, my op-ed in @FT covered the outdated climate scenarios of the NGFS, which are used by central banks around the world to assess future climate risk & climate policy risk
I argued that the NGFS baseline scenario projected an implausible future for CO2 emissions ft.com/content/a82a7b…
Today the NGFS published newly updated climate scenarios ... and guess what? I was correct and, to their credit, they are moving their baseline scenarios in the right direction
Here is how the new NGFS baseline (red) looks compared to that which I critiqued as implausible (blue)
NGFS 2.0 has emissions growing to ~2080 and plateauing thereafter
This is a massive revision in just a short time frame
Good for NGFS
However, even with the massive revision (cumulative CO2 emissions from energy 2020-2100 lowered by ~18%), a case can still be made that the NGFS "current policies = Hot House World 2.0" scenario is too extreme as a baseline
Here is how it looks compared to HHW 1.0 as well as the range of plausible scenarios in Pielke et al 2021
Much better, but still extreme
The good news is that NGFS has added a second baseline "NDCs" that offers a more plausible baseline against which to perform stress testing and transition risk analyses ngfs.net/sites/default/…
Bottom line
Bravo to the @NGFS_ for recognizing that its scenarios were out of date & taking quick action to update them
PS. The NGFS methodology still has some serious problems
For instance, the tropical cyclone damage function employed relies on Emanuel 2011 (actually based on our methods), which uses SRES A1b (comparable to RCP8.5) plus a single model
Guess which model was selected from those shown below?
The tropical cyclone damage analysis of the NGFS cites Emanuel 2011 which is actually a follow-up to our paper:
Crompton et al 2011. Emergence timescales for detection of anthropogenic climate change in US tropical cyclone loss data. ERL, 6(1), 014003. iopscience.iop.org/article/10.108…
Our paper reports comprehensive results from the CMIP3 model ensemble (so pretty dated); Emanuel's re-do of our analysis applies his bespoke methods (ignoring CMIP) & still arrives at similar results
Even so, it cherry-picks the most extreme model results
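A quick illustration of why picking one extreme member matters, using a synthetic ensemble (hypothetical numbers, not the actual CMIP3/Emanuel results):

```python
# Synthetic illustration of single-model cherry-picking (not the
# actual CMIP3/Emanuel results). Taking the most extreme member of
# an ensemble systematically overstates the central projection.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ensemble: 22 models projecting % change in losses
ensemble = rng.normal(loc=10, scale=15, size=22)

print(f"Ensemble mean:       {ensemble.mean():+.1f}%")
print(f"Most extreme member: {ensemble.max():+.1f}%")
# The max of N noisy projections sits well above the mean by
# construction; carrying that single member forward bakes the bias
# into every downstream damage estimate.
```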
This carried forward to NGFS 2021
Understanding scenarios in climate research and applications is ridiculously complex as there are scenarios nested within scenarios (within scenarios and so on), typically using assumptions that go back a decade or more
It is a troubling black box
The idea that the NCA was perfect under Democrats, as @afreedma & other advocacy journos suggest, is simply wrong
The most recent NCA was totally captured by interest groups and companies that would benefit from the report - UCS, TNC, EDF, CAP, Stripe, etc.
Below just a few of its authors
@afreedma The head of the NCA5 stated publicly that she would never cite our work in the assessment, even though our work is by far the most cited research on economic losses in the US associated with floods, hurricanes, tornadoes
🧵Let's take a quick look at the implications of the regulations that have followed from the 2009 EPA endangerment finding
According to @C2ES_org the 2021 GHG standards for light vehicles would reduce projected CO2 emissions by a cumulative 3.1 billion tons to 2050 c2es.org/content/regula…
Over the next 25 years the world would emit 925 gigatons of CO2 assuming constant 2025 emissions and ~690Gt assuming emissions are cut in half by 2050
That means that the projected impact of the regulations would reduce cumulative global emissions by ~0.34% (constant) & ~0.45% (halved)
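For transparency, here is the arithmetic, as a minimal sketch assuming ~37 Gt CO2/yr of current global emissions (an assumption consistent with the 925 Gt constant-emissions total above):

```python
# Back-of-envelope check of the emissions arithmetic above.
# Assumes ~37 Gt CO2/yr for current global emissions (an assumption,
# consistent with the ~925 Gt constant-emissions total above).

CURRENT_EMISSIONS = 37.0  # Gt CO2 per year (assumed)
YEARS = 25                # 2025-2050

constant_total = CURRENT_EMISSIONS * YEARS                 # ~925 Gt
halved_total = CURRENT_EMISSIONS * 0.75 * YEARS            # linear decline to half -> ~694 Gt

regulation_cut = 3.1  # Gt CO2, cumulative reduction per @C2ES_org

print(f"Constant emissions: {regulation_cut / constant_total:.2%} of {constant_total:.0f} Gt")
print(f"Halved by 2050:     {regulation_cut / halved_total:.2%} of {halved_total:.0f} Gt")
# -> roughly 0.34% and 0.45% of cumulative global emissions
```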
The idea that CO2 can be regulated out of the economy is flawed
If the purpose of CO2 regulation is to create a shadow carbon tax, then it is a horribly inefficient way to do that
Once again, all this leads us back to Congress and the need for smart energy & climate policy
🧵
The percentage of a percentage trick is increasingly common & leads to massive confusion
Here an undetectable difference of 0.01 events per year per decade is presented as the difference between a 31% and a 66.4% increase (in the *likelihood* of the event, not the event itself)
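A minimal sketch of how the trick works, using a hypothetical baseline rate chosen only so the magnitudes echo those above (the actual rate is in the underlying paper):

```python
# Hypothetical illustration of the percentage-of-a-percentage trick.
# baseline_rate is assumed, chosen so the numbers echo those above.

baseline_rate = 0.028  # events per year (hypothetical)

increase_a = 0.310     # +31% relative increase
increase_b = 0.664     # +66.4% relative increase

abs_a = baseline_rate * increase_a  # ~0.009 events/yr
abs_b = baseline_rate * increase_b  # ~0.019 events/yr

print(f"Absolute difference: {abs_b - abs_a:.3f} events per year")
# -> ~0.010 events/yr: undetectable in practice, yet headlined
#    as the gap between a 31% and a 66.4% increase.
```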
The resulting confusion is perfectly predictable
Here is a reporter (NPR) explaining completely incorrectly:
"The phenomenon has grown up to 66% since the mid-20th century"
False
Also, the numbers in the text and figure do not appear to match up
I asked Swain about this over at Bluesky
A Frankenstein dataset results from splicing together two time series found online
Below is an example for US hurricane damage 1900-2017
Data for 1980-2017 was replaced with a different time series (green box)
An upward trend results (red dashed line)
Claim: Due to climate change!
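A minimal sketch of the splicing problem with synthetic data (not the actual loss series): two trendless series that differ only in normalization, joined at 1980, yield a spurious upward trend.

```python
# Synthetic demonstration of a "Frankenstein" splice (not the real data).
# Two trendless series on different bases, joined at 1980, produce a
# spurious upward trend across the splice point.

import numpy as np

rng = np.random.default_rng(0)

years = np.arange(1900, 2018)
series_a = 10 + rng.normal(0, 2, size=years.size)  # flat, normalization A
series_b = 15 + rng.normal(0, 2, size=years.size)  # flat, normalization B

spliced = np.where(years < 1980, series_a, series_b)  # splice at 1980

slope_a, _ = np.polyfit(years, series_a, 1)
slope_spliced, _ = np.polyfit(years, spliced, 1)

print(f"Trend in series A alone: {slope_a:+.4f} per year")       # ~0
print(f"Trend in spliced series: {slope_spliced:+.4f} per year")  # clearly positive
```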
The errors here are so obvious and consequential that it is baffling that the community does not quickly correct course
The IPCC AR6 cited a paper misusing the Frankenstein hurricane loss dataset to suggest that NOAA's gold standard hurricane "best track" dataset may be flawed
JFC - Using flawed economic loss data to suggest that direct measurements of hurricanes are in error!
We’ve reached the point where an IPCC author is openly rejecting the conclusions of the IPCC out of concern over how their political opposition is correctly interpreting the AR6
The integrity of the IPCC on extreme events is now under attack
The IPCC explains that a trend in a particular variable is DETECTED if it is judged, with >90% likelihood, to be outside internal variability
For most (not all) metrics of extreme weather detection has not been achieved
That’s not me saying that, but IPCC AR6
The IPCC also assesses that for most (but not all) metrics of extreme weather, the signal of a change in climate will not emerge from internal variability with high confidence (i.e., >90%) by 2050 or 2100, even assuming the most extreme changes under RCP8.5
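To make the detection logic concrete, here is a minimal sketch with toy numbers (assumptions of mine, not IPCC values): a linear trend "emerges" once the accumulated signal exceeds the ~90% bound of internal variability.

```python
# Toy time-of-emergence calculation (assumed numbers, not IPCC's).
# A trend is "detected" in this sense when the accumulated signal
# exceeds the one-sided 90% bound of internal variability
# (~1.28 sigma for a Gaussian).

Z_90 = 1.28  # one-sided 90% threshold in sigma units

def years_to_emergence(trend_per_decade: float, sigma: float) -> float:
    """Years until trend * t exceeds Z_90 * sigma of internal variability."""
    return Z_90 * sigma / (trend_per_decade / 10)

# Hypothetical metric: weak trend relative to large year-to-year noise
print(years_to_emergence(trend_per_decade=0.05, sigma=1.0))  # ~256 years
# Hypothetical metric: strong trend, modest noise
print(years_to_emergence(trend_per_decade=0.5, sigma=0.5))   # ~12.8 years
```

The point the toy numbers make: when the trend is small relative to year-to-year variability, emergence can sit well beyond 2100, which is why detection has not been achieved for most extreme-weather metrics.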