Short answer: yes, certainly, but because the people doing it are only human, not (generally) because they are doing a bad job.

I should note that some of these ideas (and the screencaps) come from Granger Morgan's great (though somewhat technical and sophisticated) book Uncertainty. This sort of thinking was a central part of my PhD program, so all the @CMU_EPP folks out there will find this familiar.
So how can I say that energy modeling results are overconfident? The simple answer is that basically all results from all fields that we can check are found to be overconfident. Overconfidence is a feature of humans and sneaks into even the most objective-seeming results.
Measuring physical constants like the speed of light ought to be pretty objective, with uncertainty based only on the accuracy of the measurement tools, right? But if you look back at previously reported uncertainties, they demonstrate overconfidence.
Note that a 90% confidence interval implies that you are 90% sure the "right" answer is in there, so you should expect to be "wrong" about 10% of the time. But the estimates from the above are "wrong" more than half the time - the error bars need to be much larger!
And this wasn't some quirk about measuring the speed of light. It is true for estimates of mass of an electron, Planck's constant, and other things. Somehow, when physicists said "I'm 90% sure the answer is in this range" their ranges were far too small.
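To see what a well-calibrated 90% interval should look like, here is a quick simulation with made-up numbers (the true value, sigma, and trial count are all hypothetical, purely for illustration): a calibrated forecaster's intervals miss the truth only about 10% of the time, whereas the physics estimates above missed more than half the time.

```python
import random

# Simulate a perfectly calibrated forecaster: each "measurement" is a
# noisy draw around the true value, reported with a two-sided 90%
# confidence interval (+/- 1.645 standard deviations).
random.seed(0)
true_value = 100.0   # hypothetical quantity being measured
sigma = 5.0          # hypothetical measurement noise
z90 = 1.645          # half-width of a 90% normal interval, in sigmas

trials = 100_000
misses = 0
for _ in range(trials):
    estimate = random.gauss(true_value, sigma)
    low, high = estimate - z90 * sigma, estimate + z90 * sigma
    if not (low <= true_value <= high):
        misses += 1

print(misses / trials)  # close to 0.10: calibrated intervals miss ~10% of the time
```

If real-world intervals miss far more often than this, the error bars were too narrow, i.e., the estimates were overconfident.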
One of the lines I use in class is "Everyone is overconfident all the time about everything." (See if you can find the "subtle" joke there)

And studies show that this overconfidence applies whether people are good or bad at math, and to topics both familiar and foreign.
Some more data. When asked the likelihood of things, events that people say have 50% likelihood actually only happen 30-40% of the time. Things they say have only a 2% chance of happening occur 20-40% of the time! And "training" (the "after" columns) doesn't help much.
Now to more specifics on energy modeling: there are lots of kinds of uncertainty and we only have good tools for some of them. The only one we are really good at is parametric uncertainty (Do I have the right natural gas price for 2030? The right solar learning rate?)
We deal with parametric uncertainty through various "uncertainty analysis" tools, like sensitivity analysis (change one or more inputs and see how the answer changes). Even there, we have a tendency to focus on the base case results and downplay the sensitivity analysis.
The base case results are the most likely outcome, but are almost guaranteed to be wrong - we shouldn't be surprised if they are. I tell students that their sensitivity analysis *is* their result - the base case output is just a special case.
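A one-at-a-time sensitivity analysis can be sketched in a few lines. The cost model and all input numbers below are invented for illustration only; the point is that the *range* of outputs across plausible inputs, not the single base-case number, is the result.

```python
def total_cost(gas_price, demand_growth, solar_learning_rate):
    """Toy 'energy system cost' model (made up for this example)."""
    return 100 * gas_price + 50 * (1 + demand_growth) - 200 * solar_learning_rate

# Hypothetical base-case inputs and plausible ranges for each.
base = {"gas_price": 4.0, "demand_growth": 0.02, "solar_learning_rate": 0.2}
ranges = {
    "gas_price": (2.0, 8.0),
    "demand_growth": (0.0, 0.05),
    "solar_learning_rate": (0.1, 0.3),
}

base_case = total_cost(**base)
print(f"base case: {base_case:.1f}")

# Vary one input at a time across its range and record the output spread.
spread = {}
for name, (lo, hi) in ranges.items():
    outputs = [total_cost(**dict(base, **{name: value})) for value in (lo, hi)]
    spread[name] = (min(outputs), max(outputs))
    print(f"{name}: {spread[name][0]:.1f} to {spread[name][1]:.1f}")
```

Even in this toy, the gas-price sensitivity swings the answer far more than the base case alone suggests, which is exactly the information a base-case-only report throws away.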
But there are lots of other kinds of uncertainty: What if my model (but not my inputs) is wrong? What if the set of technologies changes in a surprising way? What if the framing of the problem and objectives is wrong? These are hard to deal with, so we mostly ignore them.
But in my experience, most energy systems modelers are appropriately humble about what their results actually tell us. If you ask them if their model of the year 2035 is right, they will promptly say, "No, but it might be informative." That is the right answer.
I made the analogy before that energy system modeling is like a map when driving: it provides guidance but doesn't tell you how to steer the car.
Another would be that energy system modeling is like playing games - both are ways to explore and understand complex systems. If you want to be a good chess player, you practice chess to understand how the game works: the systems, logic, interaction, flow, etc.
You don't practice chess because you are attempting to model out and memorize the future of a specific game that you will play (though computers do to some degree). Same for modeling: we do it to better understand the system, not the specific outputs.
A final bit on "groupthink". That term has a specific meaning that I think mostly doesn't apply to energy modeling. There are other reasonable explanations for the agreement (see this whole thread).
A simple one is that elements of the uncertain future look much the same to anyone looking forward from today. For example, take a roll of two dice. If you ask me to predict what they will add up to, I would guess 7 (both the most likely and the average outcome).
In fact, anyone with their wits about them ought to guess 7 - all the experts agree! But the outcome is more likely not to be a 7!

Is that groupthink? No. It just means that experts can agree on the most likely answer, even if that answer is not likely to be correct.
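The dice claim is easy to verify by enumerating all 36 equally likely outcomes: 7 is the single best guess, yet it comes up only 6 times out of 36.

```python
from itertools import product

# Enumerate every outcome of two fair dice and count each sum.
counts = {}
for a, b in product(range(1, 7), repeat=2):
    counts[a + b] = counts.get(a + b, 0) + 1

best_guess = max(counts, key=counts.get)  # the most likely sum
p_seven = counts[7] / 36

print(best_guess)  # 7
print(p_seven)     # 6/36, about 0.167: the "expert consensus" guess
                   # is still wrong about 83% of the time
```

All the experts agree on 7, and all of them will usually be wrong: consensus on the most likely outcome, not groupthink.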
So when experts agree on best guesses for things like fuel costs, learning rates, etc (and hence get similar results from these similar inputs), that may not mean that those are particularly likely to be correct. Yes, we have sensitivity analysis to apply, but it is underused.
Anyway, this is way too long, so I'll conclude by saying: energy system modeling is meant to help us understand these systems and no one should expect that it is giving an accurate description of the distant future. Most modelers know & convey this. Distrust those that do not.