Chris Martz
Mar 26 · 14 tweets
I'm an atmospheric science major, and I also watched @ClimateTheMovie.

While I don't necessarily agree with everything said in the movie, the scientists interviewed often made great points, and much of what this “science journalist” has argued is crap.

Time to debunk the debunker. 1/? 🧵
Maarten argues that “The ‘warm’ Medieval and Roman periods... were actually REGIONAL. Current warming is EVERYWHERE.”

Except... that's not what the United Nations' IPCC said in their First Assessment Report (FAR) in 1990. Directly from Chapter 7.2.1 on Page 202,

“There is growing evidence that worldwide temperatures were higher than at present during the mid-Holocene (especially 5,000-6,000 BP), at least in summer, though carbon dioxide levels appear to have been quite similar to those of the pre-industrial era at this time... Parts of Australia and Chile were also warmer. The late tenth to early thirteenth centuries (about AD 950-1250) appear to have been exceptionally warm in western Europe, Iceland and Greenland. This period is known as the Medieval Climatic Optimum... South Japan was also warm. This period of widespread warmth is notable in that there is no evidence that it was caused by an increase of greenhouse gases.”

Figure 7.1 is captioned as showing “global temperature variations.” Figure 7.1 (c) covers the last 1,000 years, and it is evident that the Medieval Warm Period (MWP) was anomalously warm relative to the modern era. In later reports, this diagram was replaced with Michael Mann's “Hockey Stick” graph.

🧵 2/?

This Dutch science journalist then goes on to argue that the Ljungqvist (2010) [1] Northern Hemispheric temperature reconstruction shown in the movie is “TWENTY YEARS OLD,” and argues that it is wrong because of the widely accepted Mann et al. 1999 “Hockey Stick” reconstruction that is now used in the IPCC reports and serves as a basis for guiding global policymaking.

Except... for the fact that Moberg et al. (2005) [2] is very similar to Ljungqvist (2010) and the schematic diagram of global temperature used in the IPCC's 1990 First Assessment Report (FAR).

References:
[1] Ljungqvist (2010) - A New Reconstruction of Temperature Variability in the Extra-Tropical Northern Hemisphere During the Last Two Millennia.

[2] Moberg et al. (2005) - Highly variable Northern Hemisphere temperatures reconstructed from low- and high-resolution proxy data:

🧵 3/?

jstor.org/stable/4093099…
researchgate.net/publication/20…
Worst of all, @mkeulemans makes a shoddy attempt at splicing the instrumental temperature record onto the end of the Ljungqvist (2010) reconstruction, which ends in the year 2000.

It is scientifically unethical to interweave two datasets based on different methodologies. This is especially true when scientists attempt to merge a multi-proxy reconstruction based on, in this case, marine sediments, lake sediments, ice cores, and tree rings, with modern instrumental data collected by surface-based GHCN station thermometers. Why? Well,

➊ The first and most significant problem with merging two datasets of different methodology is that instrumental data provides continuous, point-specific measurements of atmospheric state variables that can be used to calculate a global mean. Proxy data, by contrast, provides indirect measurements that are often discontinuous and are geographically averaged over large areas by interpolation, masking out regional variations that can materially affect a global mean.

➋ The highest-quality multi-proxy reconstructions have a temporal resolution of maybe a decade or two. Each data point in the Ljungqvist (2010) time series represents, at best, a 10-year average, and most proxy reconstructions don't even have that high a resolution. Averaging temperatures across vast regions with limited proxies, over decadal or multi-decadal time scales, will veil short-term variations, often large ones, that are captured by direct temperature observations.

➌ Oh, and I didn't even mention that proxies such as tree rings and ice cores respond to environmental factors other than temperature. Tree growth is affected by drought, soil quality and moisture, exposure to sunlight and even invasive species of insects. Ice core oxygen isotope ratios are affected by evaporation, condensation and salinity. All of these hurdles make calibrating such proxies to a temperature scale incredibly difficult and prone to error.

Hence, adjoining two datasets produced by different methodologies isn't practical, and doesn't really tell us anything useful.
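To make point ➋ concrete, here is a minimal sketch (entirely synthetic numbers, not real proxy or station data) of how decadal averaging hides a short-lived extreme that an annual instrumental record would capture:

```python
# Sketch (synthetic data): how decadal averaging damps a short-lived extreme.
# A 100-year annual series sits at 0 °C anomaly, with one 3-year spike of +2 °C.
years = 100
annual = [0.0] * years
for yr in (50, 51, 52):        # a brief, large excursion
    annual[yr] = 2.0

# 10-year block means, mimicking the coarse resolution of a proxy reconstruction
decadal = [sum(annual[i:i + 10]) / 10 for i in range(0, years, 10)]

peak_annual = max(annual)      # 2.0 °C in the yearly record
peak_decadal = max(decadal)    # only 0.6 °C survives the averaging (3 of 10 years)
print(peak_annual, peak_decadal)
```

The spike doesn't vanish, but it shrinks to less than a third of its true amplitude, which is why splicing a high-resolution instrumental series onto the end of a smoothed reconstruction visually exaggerates the modern period.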

🧵 4/?
Later, @mkeulemans simply writes off the anomalously warm and dry 1930s “Dust Bowl” as localized to the U.S. Lower 48 and argues that it was caused by poor land use (specifically, the plowing and cattle overgrazing of drought-resistant prairie grasses that anchor the soil and prevent wind-driven erosion).

This is a half-truth at best, and it's an argument climate activists use to try to justify rewriting history. Interestingly enough, on March 15th I wrote a very detailed analysis of what actually caused the “Dust Bowl” drought and heat that seared the North American continent during the 1930s. I'll copy and paste that Tweet here:

From me (@ChrisMartzWX) on March 15, 2024:

“𝗗𝗲𝗯𝘂𝗻𝗸𝗶𝗻𝗴 𝗠𝘆𝘁𝗵𝘀 𝗔𝗯𝗼𝘂𝘁 𝘁𝗵𝗲 “𝗗𝘂𝘀𝘁 𝗕𝗼𝘄𝗹”

Climate activists are quick to write off the heatwaves that torched North America during the 1930s as an outlier instigated by “unsustainable” farming practices in the Great Plains. This is another excuse for them to justify rewriting history.

So, here are the facts:

➊ The 1930s “Dust Bowl” drought was 𝙣𝙤𝙩 caused by poor farming practices. There is evidence based on a number of studies (e.g., Schubert et al., 2004 [1]; Seager et al., 2005 [2]; and Cook et al., 2008 [3]) that the drought was forced by multiple La Niña events (i.e., cool waters in the equatorial Pacific) and an anomalously warm subtropical North Atlantic.

La Niña is a well-documented driver of droughts in the Great Plains and Desert Southwest. During La Niña years, the subtropical jet stream shifts north (Seager et al., 2005), enhancing geopotential heights over the Lower 48. Beneath these warm-core highs, there is synoptic-scale subsidence that suppresses convection [and, by extension, precipitation] and warms the air through adiabatic compressional heating. This dries out soil and vegetation, leading to drought.
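A rough back-of-envelope on that compressional heating mechanism: unsaturated sinking air warms at approximately the dry adiabatic lapse rate, ~9.8 °C per km. The descent depth below is an illustrative choice, not a measured value:

```python
# Back-of-envelope check on adiabatic compressional heating beneath a
# subsidence-dominated high: unsaturated descending air warms at roughly
# the dry adiabatic lapse rate, ~9.8 °C per km.
DRY_ADIABATIC_LAPSE = 9.8  # °C per km

def subsidence_warming(descent_km: float) -> float:
    """Temperature gain (°C) for unsaturated air sinking descent_km."""
    return DRY_ADIABATIC_LAPSE * descent_km

# Air subsiding from roughly 700 hPa (~3 km) toward the surface warms by
# about 29 °C, which is why air under strong ridges arrives hot and dry.
print(round(subsidence_warming(3.0), 1))  # 29.4
```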

➋ Cook et al. (2008) were able to reproduce a drought in the Great Plains with 1930s SST configurations using general circulation models (GCMs), but were ‘‘unable to reproduce the severity and spatial pattern of the ‘Dust Bowl’ drought of the 1930s with SST forcing alone.’’ The simulated precipitation anomaly is weaker and “centered too far south” compared with GHCN-Daily observations.

The authors were able to get the GCMs to intensify the simulated drought and shift its center northward by imposing a high dust loading over the region where the largest wind erosion occurred. This reflects the fact that the deep-rooted, drought-resistant prairie grasses that covered the Great Plains and kept the soil in place had been plowed up and replaced by drought-prone wheat, while overstocked cattle overgrazed what remained.

The authors of Cook et al. (2008) noted, however, that while the spatial patterns of the 1930s “Dust Bowl” drought disappear in the GCM ensemble means, there are 𝙞𝙣𝙙𝙞𝙫𝙞𝙙𝙪𝙖𝙡 𝙢𝙚𝙢𝙗𝙚𝙧𝙨 𝙩𝙝𝙖𝙩 𝙥𝙧𝙤𝙙𝙪𝙘𝙚𝙙 𝙫𝙚𝙧𝙮 𝙨𝙞𝙢𝙞𝙡𝙖𝙧 𝙧𝙚𝙨𝙪𝙡𝙩𝙨 𝙩𝙤 𝙤𝙪𝙧 𝙤𝙗𝙨𝙚𝙧𝙫𝙖𝙩𝙞𝙤𝙣𝙨, suggesting that SST forcing alone might have played a larger role than they thought, and it's open to further study. Hence, the authors concluded that “unprecedented atmospheric dust loading over the continental U.S. 𝙚𝙭𝙖𝙘𝙚𝙧𝙗𝙖𝙩𝙚𝙙 the ‘Dust Bowl’ drought [locally],” but didn't 𝙘𝙖𝙪𝙨𝙚 it.

In other words, the drought [and complementary heat extremes] observed across 𝙖𝙡𝙡 𝙤𝙛 𝙉𝙤𝙧𝙩𝙝 𝘼𝙢𝙚𝙧𝙞𝙘𝙖 in the 1930s was 𝙣𝙤𝙩 𝙘𝙖𝙪𝙨𝙚𝙙 by Farmer Johnson driving his John Deere Waterloo Boy, plowing a field outside of Hays, Kansas in 1929. Washington, D.C. tied their “all-time” [since 1872] record high temperature in July 1930, and New York City set theirs [since 1869] in July 1936. Obviously, soil degradation in the Plains had little or nothing to do with that. Such practices also weren't responsible for the anomalously warm Arctic (e.g., Dirk van As et al., 2016 [4]) or surface mass balance (SMB) on the Greenland Ice Sheet (e.g., Fettweis et al., 2008 [5]; Dirk van As et al., 2016 [4]; and Mankoff et al., 2021 [6]).

➌ From 1856 to 1865, there was another major drought in the Great Plains, often referred to as the “Civil War Drought.” This is backed by rain gauge data collected by Army Surgeon General stations scattered across various forts, records that pre-date GHCN-Daily station data, as well as by tree-ring analysis conducted by Dr. David Stahle in 2004 [7].

The “Civil War Drought” was worse than the “Dust Bowl” in states such as Texas, Oklahoma and Kansas, and about the same magnitude as the latter in states such as Nebraska, Montana and the Dakotas. It is suggested that this decade-long drought, too, was forced by multiple La Niña events and an anomalously warm subtropical North Atlantic. Farmers in Kansas didn't cause that widespread drought or heat, such as during the summer of 1860, the heat extremes of which are comparable to 1934, 1936, 1954 and 1980.

➍ The occurrence of the “Civil War Drought” suggests that SST forcing alone is capable of producing severe, decade-long droughts in the Heartland if given a full deck of cards, even if GCM ensemble averages aren't capable of reproducing these results. 𝙉𝙖𝙩𝙪𝙧𝙖𝙡 𝙞𝙣𝙩𝙚𝙧𝙣𝙖𝙡 𝙫𝙖𝙧𝙞𝙖𝙗𝙞𝙡𝙞𝙩𝙮 𝙖𝙣𝙙 𝙘𝙝𝙖𝙤𝙩𝙞𝙘 𝙧𝙖𝙣𝙙𝙤𝙢𝙣𝙚𝙨𝙨 𝙖𝙡𝙡𝙤𝙬 𝙬𝙚𝙞𝙧𝙙 𝙖𝙣𝙙 𝙚𝙭𝙩𝙧𝙚𝙢𝙚 𝙩𝙝𝙞𝙣𝙜𝙨 𝙩𝙤 𝙝𝙖𝙥𝙥𝙚𝙣, regardless of any human interference in the non-linear system. This suggests that climate activists can't just write off the “Dust Bowl” as a statistical outlier because the excessive drought and heat are inconvenient for their narrative.

Using the University of Memphis' Drought Atlas (data derived from the Cook et al., 2010 [8] reconstruction), I was able to plot contoured PDSI maps over the U.S. comparing the “Civil War Drought” (1856-1865) to the “Dust Bowl Drought” (1930-1940). You can see this in the animation I created below [9].

𝗥𝗲𝗳𝗲𝗿𝗲𝗻𝗰𝗲𝘀:
[1] Causes of Long-Term Drought in the U.S. Great Plains - Schubert et al. (2004):

[2] Modeling of Tropical Forcing of Persistent Droughts and Pluvials over Western North America: 1856–2000 - Seager et al. (2005):

[3] Dust and sea surface temperature forcing of the 1930s “Dust Bowl” drought - Cook et al. (2008):

[4] Placing Greenland ice sheet ablation measurements in a multi-decadal context - Dirk van As et al. (2016):

[5] Estimation of the Greenland ice sheet surface mass balance for the 20th and 21st centuries - Fettweis et al. (2008):

[6] Greenland ice sheet mass balance from 1840 through next week - Mankoff et al. (2021):

[7] Causes and consequences of nineteenth century droughts in North America - Dr. David Stahle's tree-ring reconstruction:

[8] Megadroughts in North America: placing IPCC projections of hydroclimatic change in a long-term palaeoclimate context - Cook et al. (2010):

[9] North American Drought Atlas: ”
journals.ametsoc.org/view/journals/…
journals.ametsoc.org/view/journals/…
agupubs.onlinelibrary.wiley.com/doi/10.1029/20…
geusbulletin.org/index.php/geus…
tc.copernicus.org/articles/2/117…
essd.copernicus.org/articles/13/50…
ocp.ldeo.columbia.edu/res/div/ocp/dr…
onlinelibrary.wiley.com/doi/abs/10.100…
drought.memphis.edu/NADA/Default.a…
Oh, and as it turns out, heatwaves were more frequent and intense in the U.S. prior to 1960, whether the 1930s anomaly is included or not. In 2022, I completed an analysis of 828 long-running (≥100 years of daily temperature data) GHCN-Daily and ThreadEx stations. The 1901-1960 average number of days ≥95°F (35°C) was 15.4; the 1961-2020 mean was 12.6. That's an 18% decrease.
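For transparency, the arithmetic behind that 18% figure is simple, and so is the threshold-day counting it rests on. The period means below are the ones quoted above; the short daily series is a made-up toy, not station data:

```python
# The percent-decline arithmetic from the station analysis, using the two
# period means quoted in the text (days per year ≥ 95 °F / 35 °C).
mean_1901_1960 = 15.4
mean_1961_2020 = 12.6

decline = (mean_1901_1960 - mean_1961_2020) / mean_1901_1960
print(f"{decline:.0%}")  # 18%

# Counting threshold days from a daily series works the same way for any
# station record (toy values here, °F):
daily_highs = [93, 96, 101, 88, 95, 97, 80]
hot_days = sum(1 for t in daily_highs if t >= 95)
print(hot_days)  # 4
```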

I'm working on updating this chart to include data up through 2023, but it's a work in progress. Regardless, the point stands.

🧵 6/?
I had class. So, I'm back...

As a gotcha moment, @mkeulemans argues that the sum of natural forcings (e.g., solar and volcanic) which caused the descent into the Little Ice Age (LIA) could not explain any of the recent warming [since 1970].

How does he [and the experts] know? Physics? Nope. Models!!

Indeed, climate models are incapable of reproducing observed temperature trends over the last 50 years. As such, the models are pre-tuned, that is, fudged, to match the global mean surface temperature (GMST) record. They do this by ignoring all natural forcings and variability (assuming their net contribution to the observed GMST change is close to zero, or even negative) and arbitrarily adjusting the impacts of anthropogenic forcings until the modeled GMST change comes into agreement with the target range. Science!!

Climate modelers assume that because their general circulation models (GCMs) can't produce the observed GMST change from natural forcings or internal variability, the overall contribution from natural forcings and internal variability must sum to zero. They then turn to potential human contributions (e.g., aerosols and greenhouse gases, GHGs). Some models suggest that aerosols warm the climate by as much as 0.1°C, while others have them cool it by up to 1.0°C. For GHGs, the estimated contribution to GMST change varies from +1.0°C to +2.2°C. Well, which is it? That's a wide range, far from “settled science.”

Take, for example, the Hadley Centre's climate model, Global Environment Model version 3 (HadGEM3). It is assumed that natural forcings and internal variability sum to zero and that aerosols have caused a net 1.0°C of cooling of GMST since 1850-1900. So, to match observations, the model was tuned so that GHGs contributed 2.0°C of warming. The result? A net GMST change in the HadGEM3 model of +1.0°C, right on target, and model success is claimed. Except these values are theoretical. They are not determined from observations (hence the large model spread) and have little basis in physics.
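The tuning arithmetic described above can be written out explicitly. The point is that matching the +1.0°C target by itself cannot distinguish between very different aerosol/GHG splits; the first pairing uses the values quoted in the text, while the second is a hypothetical alternative I made up for illustration:

```python
# Budget arithmetic for the tuning exercise described above: with natural
# forcing assumed to be zero, any (aerosol, GHG) pair summing to the
# observed +1.0 °C "matches" the record equally well.
TARGET = 1.0  # observed GMST change since 1850-1900, °C

def net_change(natural: float, aerosol: float, ghg: float) -> float:
    """Net modeled GMST change as a simple sum of contributions (°C)."""
    return natural + aerosol + ghg

# The HadGEM3-style pairing quoted in the text...
strong = net_change(0.0, -1.0, 2.0)
# ...and a hypothetical weaker-aerosol / weaker-GHG pairing...
weak = net_change(0.0, -0.2, 1.2)
# ...both land on the target, which is why matching the GMST record alone
# can't pin down the split between aerosol cooling and GHG warming.
print(strong, weak)
```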

The assumption that natural forcings have had a net zero, or even net negative impact on GMST change is incorrect, because if that were true, then the warming observed from 1900 to 1945 couldn't have occurred. There weren't enough carbon dioxide emissions at the time to cause the early 20th century warming. That is a fact, not my opinion.

In essence, the models were forced to agree with the GMST record such that scientists can conclude that the models agree with the GMST observations. That is circular reasoning, not science.

(The annotations I made to Figure 3.8 from IPCC AR6 WG1, Chapter 3, are similar to those Dr. John Christy made in a talk last year, but I made my own for image clarity.)

🧵 7/?
Next up, climate models!!

Maarten Keulemans (@mkeulemans) asserts that climate model projections have been in line with global mean surface temperature observations, and that Dr. John Christy's graph is “misleading.”

Keulemans attaches a video animation prepared by Carbon Brief that overlays carefully selected climate model projections from a few studies onto a number of GMST datasets (e.g., NASA, Hadley/UEA, NOAA and Berkeley). This, of course, is a bad comparison because the models are pre-tuned to agree with the GMST observations, with the forcings adjusted to arbitrary values so that the runs land within the target range.

In other words, the models are forced to agree with the global surface temperature record, such that scientists then say, “Ah ha! See, the models match the observations, the models are correct.” That is circular reasoning, not science. They start with a conclusion and work backwards to find or fudge data to fit their models. Junk science at its finest!

Still, if you compare the latest HadCRUT5 global mean surface temperature anomaly observations [relative to 1986-2005] to the CMIP5 models run in 2005 under various emission scenarios, the majority of the 138 members (particularly RCP 4.5, RCP 6.0 and RCP 8.5) run too hot, predicting more than twice as much warming as has been measured. The multi-model mean (MMM) used to guide global policymaking runs several tenths of a degree warmer than surface observations. So, while observations fit within the range of the model spread, HadCRUT5 observations are still on the very low end of projections, suggesting that climate sensitivity estimates for carbon dioxide are too high.
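One way to express "observations sit on the very low end of the spread" is a simple percentile calculation against the ensemble. The trend values below are made up for illustration, not actual CMIP5 output or HadCRUT5 numbers:

```python
# Sketch (made-up numbers): expressing where an observed trend falls within
# a model-ensemble spread as a percentile, the comparison discussed above.
ensemble_trends = [0.18, 0.21, 0.24, 0.26, 0.28, 0.30, 0.33, 0.35]  # °C/decade, hypothetical
observed = 0.20  # hypothetical observed trend, °C/decade

below = sum(1 for t in ensemble_trends if t < observed)
percentile = 100 * below / len(ensemble_trends)
print(percentile)  # 12.5 -> the observation sits near the bottom of the spread
```

An observation can be "within the model range" and still sit at a low percentile, and those are two different claims about model skill.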

The CMIP6 models (not shown) run even hotter than CMIP5, but climate scientists seem to have no interest in addressing that. The observations must be wrong, or something, in their eyes.

🧵 8/?
Perhaps most grotesque of all, Maarten misrepresents Dr. Roy Spencer's position by stating that he thinks global warming isn't occurring and that any observed warming is attributable, in full, to urban heat island (UHI) contamination.

Dr. Roy Spencer, like most scientists on either side of the aisle, does not disagree with the overall premise of global warming theory. He in fact has stated multiple times that at least some of the warming has been the result of GHG emissions slightly enhancing the Earth's natural greenhouse effect (GHE).

However, Dr. Spencer's work, along with independent research by McKitrick (@RossMcKitrick) and Michaels (2004) [1]; McKitrick (2010) [2]; Fall et al. (2011) [3], co-authored by Dr. Roger Pielke, Sr. and Anthony Watts (@wattsupwiththat); O'Neill et al. (2022) [4]; and Katata et al. (2023) [5], has found that the most significant warming, whether in the U.S. or globally, has occurred in urban areas, with significantly weaker positive temperature trends in rural settings. This matters most in regions like the U.S., where there is a long, continuous, coherent temperature record compared to other countries.

If you mix contaminated data with good data, you end up with more contaminated data. That's not saying rural areas haven't warmed, that is indeed the case in most areas, but the trends are less positive, which in effect suggests that much of the warming could in fact be an artifact of UHI, not GHG forcing.

References:
[1] McKitrick and Michaels (2004) - A test of corrections for extraneous signals in gridded surface temperature data:

[2] McKitrick (2010) - A Critical Review of Global Surface Temperature Data Products:

[3] Fall et al. (2011) - Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends:

[4] O'Neill et al. (2022) - Evaluation of the Homogenization Adjustments Applied to European Temperature Records in the Global Historical Climatology Network Dataset:

[5] Katata et al. (2023) - Evidence of Urban Blending in Homogenized Temperature Records in Japan and in the United States: Implications for the Reliability of Global Land Surface Air Temperature Data:

🧵 9/?

jstor.org/stable/24868718
papers.ssrn.com/sol3/papers.cf…
agupubs.onlinelibrary.wiley.com/doi/full/10.10…
mdpi.com/2073-4433/13/2…
journals.ametsoc.org/view/journals/…
Oh, but this clown show gets better. 🤡

Supposedly, ocean surface warming proves that the warming in the instrumental global land surface temperature record isn't an artifact of the urban heat island (UHI) effect. Well, there you have it, folks. The science is settled because a Dutch science journalist says so.

It is true that land surfaces [and the adjacent layer of overlying air] heat up faster than the ocean surface, because water has a very high specific heat capacity (i.e., the amount of heat, in Joules, required to raise 1 gram of a substance by 1°C). But what Maarten fails to mention is that virtually all general circulation models (GCMs) project that warming reaches a local maximum in the tropical troposphere at altitudes of 200 to 300 hPa (image 2), and that this occurs rapidly in response to GHG forcing (McKitrick and Christy, 2018 [1], 2020 [2]). The problem? Well, this hasn't happened (see image 3)!!

In fact, global latitudinal cross-sections from all GCMs show that the troposphere in general should warm faster than the surface under pure GHG forcing. This, of course, is not the case in actual observations (image 4). When the NASA GISTEMP departure from average from 1979 onward is plotted against the UAH v6.0 global lower-tropospheric temperature anomaly, the surface is warming at the faster rate. That is not what climate models predict.

In spite of specific heat capacity differences, GMST and global mean ocean temperature move more or less in unison throughout much of the late-19th and 20th centuries. Sea surface temperatures don't vary as much, but that's expected given the thermodynamic properties of water. However, a divergence between the two datasets begins around 1975 to 1980, a time when urban sprawl really began to take off in regions surrounding major cities (e.g., when my parents were growing up in the 70s/80s, much of the D.C. metro was farmland or forest; none of the modern high-rises in Fairfax and Falls Church, for instance, were there).
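The heat-capacity point can be made concrete with the textbook relation ΔT = Q / (m·c). The specific heat of dry soil here is a rough textbook value, and the energy and masses are arbitrary illustrative choices:

```python
# Why land surfaces warm faster than water for the same energy input:
# ΔT = Q / (m · c), with specific heats in J per kg per °C.
C_WATER = 4184.0  # liquid water (textbook value)
C_SOIL = 800.0    # typical dry soil/rock (rough textbook value)

def delta_t(q_joules: float, mass_kg: float, c: float) -> float:
    """Temperature rise (°C) of mass_kg absorbing q_joules of heat."""
    return q_joules / (mass_kg * c)

q = 1.0e6   # 1 MJ absorbed...
m = 100.0   # ...by 100 kg of each substance
dt_water = delta_t(q, m, C_WATER)   # ≈ 2.4 °C
dt_soil = delta_t(q, m, C_SOIL)     # 12.5 °C
print(round(dt_soil / dt_water, 2))  # soil warms ~5.23x more per joule
```

This is only the per-mass thermodynamics; real ocean heat uptake also involves mixing heat into a deep water column, which slows surface warming even further.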

The ocean record more closely resembles both the satellite and reanalysis datasets (e.g., JRA-55, ERA5 and CFSv2); the land surface dataset is the outlier, and that can plausibly be traced to urban heat island (UHI) effects, which have been documented in a number of studies linked in the previous Tweet.

References:
[1] McKitrick and Christy, 2018 - A Test of the Tropical 200- to 300-hPa Warming Rate in Climate Models:

[2] McKitrick and Christy, 2020 - Pervasive Warming Bias in CMIP6 Tropospheric Layers:

🧵 10/?

agupubs.onlinelibrary.wiley.com/doi/full/10.10…
agupubs.onlinelibrary.wiley.com/doi/10.1029/20…
Next, science journalist @mkeulemans suggests that the decline in global mean surface temperature (GMST) from 1945 to 1975 was caused by “massive air pollution.”

Specifically, he's referring to the notable increase in sulfur dioxide emissions during the 1940s and 1950s. Sulfur dioxide molecules oxidize high in the atmosphere, forming sulfate aerosols that are highly effective at blocking incoming solar shortwave radiation. Over time, this causes an energy imbalance that results in the Earth cooling; that is, longwave radiation out > shortwave radiation in = cooling. By the 1970s, there were strict regulations on sulfur dioxide emissions, and as a result, they began to fall.
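For clarity on the sign convention: when absorbed shortwave falls below outgoing longwave, the planet loses energy and cools. The W/m² values below are illustrative, chosen near typical global-mean magnitudes rather than measured:

```python
# Sign convention for the planetary energy balance described above:
# net imbalance = absorbed shortwave in - longwave out (W/m^2).
def net_imbalance(sw_in: float, lw_out: float) -> float:
    """Positive -> Earth gains energy (warming); negative -> cooling."""
    return sw_in - lw_out

# Sulfate aerosols reflect sunlight, trimming absorbed shortwave: if outgoing
# longwave then exceeds absorbed shortwave, the imbalance goes negative.
print(net_imbalance(238.0, 240.0))  # -2.0 W/m^2, an Earth losing energy
```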

I'll admit that I have become more open to this theory, but the issue with Maarten's framing is that almost all of the observed global warming prior to 1945 had to be natural. Carbon dioxide emissions hadn't taken off by then, so their overall impact on the atmospheric radiation balance was negligible until after the 1950s, and that signal doesn't really emerge until after 1975. This early-20th century warming was part of a much longer-term recovery from the Little Ice Age (LIA), and if it weren't for sulfur dioxide emissions, one could in fact postulate that warming would have continued.

Since the 1970s and the reduction of sulfate aerosols in the atmosphere, it's likely the recovery warming from the Little Ice Age has continued, probably slightly enhanced by GHG emissions, although the extent of that enhancement clearly isn't known by the IPCC et al. They estimate the GHG contribution to GMST change at anywhere from +1.0°C to +2.2°C. That's far from settled science, and as I've already stated, this wide range results from climate modelers pre-tuning their models to the GMST record, not from actual physics or measurements.

And, of course, there is the Great Pacific Climate Shift of 1976-77, which coincided with the reduction in sulfate aerosols; how much of an effect has that had? What about the Atlantic Multidecadal Oscillation (AMO)? The IPCC just assumes that these natural or internal variability mechanisms sum to a net-zero effect on GMST change. Why? Their general circulation models (GCMs) suck at simulating natural variability because it is poorly understood.

🧵 11/?
If you thought it wasn't bad enough already, the wheels really fall off the wagon when we get to the extreme weather portion of his “debunking.” This guy really has no clue what he's talking about. This is really, really bad.

Right off the bat, Maarten confuses U.S. wildland fire burn acreage (shown in “Climate the Movie”) with fire count, then decides that all of the pre-1960 data is inconvenient and should be ignored because of the “Dust Bowl” or something. Apparently, the only thing that matters is the trend over the last 50 years. If that isn't cherry-picking, I don't know what is. 🍒

Now, as a disclaimer, fire burn acreage has decreased since 1926 because we have far more advanced fire-fighting capabilities. However, the recent increase over the last 50 years is a sign of a much more urgent issue than longer fire seasons: poor forest management. The 100% fire-suppression policies adopted in the early 1900s on federal lands have allowed western forests to become an overgrown tinderbox come the dry season. Drop a cigarette carelessly, let an arsonist light a match, or let lightning strike a tree, and disaster is on the horizon. This critical tidbit was neither mentioned in the movie nor addressed by the journalist. So, I did.

🧵 12/?
I'm afraid the junk science gets worse, however...

While Maarten correctly points out that there has been no increase in global hurricane-strength tropical cyclone (TC) frequency since 1980, he says that the movie fails to mention that there have been “faster increases in hurricane STRENGTH.”

This is also, well, a lie. In the Supporting Information of Klotzbach et al. (2022), published in Geophysical Research Letters (GRL), there is a plot of the global number of ≥30-knot TC rapid intensification (RI) and rapid weakening (RW) events between 1990 and 2021. The results show no significant upward trend in events meeting RI criteria.



🧵 13/?

agupubs.onlinelibrary.wiley.com/doi/full/10.10…
Science journalist @mkeulemans wrote a 59-post thread attempting to debunk @ClimateTheMovie. Over the last 24 hours, I did a deep dive into the data myself to debunk the debunker's key points in this 14-post thread, and I provide some important context that couldn't fit into an 80-minute documentary.

All in all, I found the message to be on point and rooted in scientific fact, quoting some of the top experts in the field. The movie is insanely popular, and that is why there is so much pushback and pressure to get it censored. Once again, @Martin_Durkin and my friend @TomANelson did a fantastic job.

For those interested, scroll up through the thread. References were cited for my points for further reading. ⬆️

🧵 14/14 END
