New Study Evaluates Climate Models — Best Ones Also Show the Most Global Warming
"Climate models that simulate the current climate the best [also] tend to project the most global warming." See text below for explanation of the chart.
by Gaius Publius
This is a climate story with two pieces.
The first piece is the by-now-obvious "Everything's happening faster than anyone thought it would" observation, of which the immediate corollary is, "OMG we're still screwed."
It's true that everything is happening faster than anyone thought it would (anyone who had a prominent public voice, that is). This part of the story is, as noted above, "by now obvious." Changes are happening a lot faster — big storms are more frequent than anticipated, even by those who anticipated them; wildfires are burning hotter and later in the season (December?!) than even those who predicted more and hotter wildfires; and the cost to insurance companies of climate-caused damage is rising faster than insurance companies anticipated — and anticipating increasing costs is their whole business model.
But those of us without power have already gotten that message. The real resistance to it comes from people who do have power, and who also have money they want to protect from that message's implications.
The second piece of this story is much more interesting: it's about how the message we all understand to be true is now supported. A group of climate scientists has published a paper (subscription required) that puts statistical data around the observation that things are happening faster than most models predict.
In other words, they're analyzing the models statistically to see which ones are making the best predictions. Instead of waiting for events to prove which climate models (projections) got it right, this study looks at the models themselves and figures out, ahead of the observational data, which ones are most likely to "get it right."
In other other words, all models are not created equal, so taking the average of a large set of models tells us less than looking at the best models first. The study attempts to identify those models.
How did the researchers test which models were best? They looked for models that made the most accurate statements in the past about the earth's energy imbalance — models that most correctly anticipated the difference between energy-in (from the sun) and energy-out (radiation of that energy back into space).
Our whole problem is that difference — too much energy-in relative to energy-out, and the planet heats. So models that made predictions about the energy imbalance that turned out to be right are likely to be right about the effects of that imbalance, such as the amount of increased global warming.
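For concreteness, here is that bookkeeping as a few lines of Python. The numbers are round illustrative figures of roughly the right size for the global average, not observations, and the variable names are mine:

```python
# Toy illustration of the planet's top-of-atmosphere energy bookkeeping.
# All numbers are illustrative round figures, not observed values.

incoming_solar    = 340.0  # W/m^2 arriving from the sun, global average
reflected_solar   = 100.0  # W/m^2 bounced straight back to space by clouds, ice, etc.
outgoing_infrared = 239.2  # W/m^2 radiated back to space as heat

# Energy-in minus energy-out: the imbalance the models are judged on.
net_imbalance = incoming_solar - reflected_solar - outgoing_infrared
print(f"Net imbalance: {net_imbalance:+.1f} W/m^2")  # positive means the planet is accumulating heat
```

Even a fraction of a watt per square meter, applied over the whole surface of the planet year after year, adds up to an enormous amount of accumulated heat.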
From the website of the lead researcher, Patrick Brown:
The study addresses one of the key questions in climate science: How much global warming should we expect for a given increase in the atmospheric concentration of greenhouse gases?

[RCP8.5, which comes up below, is the IPCC's worst-case climate scenario; it's roughly the same as "business as usual" forever with respect to emissions. It's the red line in the chart below.]
One strategy for attempting to answer this question is to use mathematical models of the global climate system called global climate models. Basically, you can simulate an increase in greenhouse gas concentrations in a climate model and have it calculate, based on our best physical understanding of the climate system, how much the planet should warm. There are somewhere between 30 and 40 prominent global climate models and they all project different amounts of global warming for a given change in greenhouse gas concentrations. Different models project different amounts of warming primarily because there is not a consensus on how to best model many key aspects of the climate system.
To be more specific, if we were to assume that humans will continue to increase greenhouse gas emissions substantially throughout the 21st century (the RCP8.5 future emissions scenario), climate models tell us that we can expect anywhere from about 3.2°C to 5.9°C (5.8°F to 10.6°F) of global warming above pre-industrial levels by 2100. This means that for identical changes in greenhouse gas concentrations (more technically, identical changes in radiative forcing), climate models simulate a range of global warming that differs by almost a factor of 2.
The primary goal of our study was to narrow this range of model uncertainty and to assess whether the upper or lower end of the range is more likely.
The IPCC's four "representative concentration pathways," or RCPs (source). The red line is RCP8.5. These four scenarios represent four "stories" of future concentrations of greenhouse gases in the atmosphere, not outcomes in the form of warming itself; producing those outcomes is the models' job.
Back to Brown (my emphasis):
So, what variables are most appropriate to use to evaluate climate models in this context? Global warming is fundamentally a result of a global energy imbalance at the top of the atmosphere so we chose to assess models in their ability to simulate various aspects of the Earth’s top-of-atmosphere energy budget. We used three variables in particular: reflected solar radiation, outgoing infrared radiation, and the net energy balance. Also, we used three attributes of these variables: their average (AKA climatological) values, the average magnitude of their seasonal variability and the average magnitude of their month-to-month variability. These three variables and three attributes combine to make nine features of the climate system that we used to evaluate the climate models (see below for more information on our decision to use these nine features).

And the finding:
We found that there is indeed a relationship between the way that climate models simulate these nine features over the recent past, and how much warming they simulate in the future. Importantly, models that match observations the best over the recent past tend to simulate more 21st-century warming than the average model. This indicates that we should expect greater warming than previously calculated for any given emissions scenario, or it means that we need to reduce greenhouse gas emissions more than previously thought to achieve any given temperature stabilization target.

In even plainer English, those models that best represented the energy imbalance were also the models that projected the greatest future warming.
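For the technically inclined, here is a rough sketch of what "three variables times three attributes" might look like in practice, computed from made-up monthly numbers rather than real model output. The function names, the synthetic data, and the exact definitions of the attributes are mine, not the paper's; the point is just to show how three variables and three attributes combine into nine features:

```python
import numpy as np

rng = np.random.default_rng(42)
months = np.arange(240)  # 20 years of synthetic monthly values, for illustration only

def fake_monthly_series(mean, seasonal_amp, noise):
    """A made-up monthly series (W/m^2): mean value + seasonal cycle + random noise."""
    seasonal = seasonal_amp * np.sin(2 * np.pi * months / 12)
    return mean + seasonal + rng.normal(0, noise, months.size)

# The three top-of-atmosphere variables named in the study.
variables = {
    "reflected_solar":   fake_monthly_series(100.0, 6.0, 1.5),
    "outgoing_infrared": fake_monthly_series(240.0, 4.0, 1.0),
    "net_balance":       fake_monthly_series(0.8,   8.0, 2.0),
}

def three_attributes(series):
    """Average value, seasonal-cycle magnitude, and month-to-month variability."""
    climatology = series.reshape(-1, 12).mean(axis=0)   # average year (12 monthly values)
    return (
        series.mean(),                                   # climatological average
        climatology.max() - climatology.min(),           # size of the seasonal swing
        np.abs(np.diff(series)).mean(),                  # typical month-to-month jump
    )

# Three variables x three attributes = the nine features used to judge each model.
for name, series in variables.items():
    for attr, value in zip(("average", "seasonal", "month-to-month"), three_attributes(series)):
        print(f"{name:18s} {attr:15s} {value:7.2f}")
```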
The RCP8.5 Example
Brown has an extended discussion of the models that use RCP8.5 as a base from which to predict warming; that discussion is illustrated by the chart at the top of this post, taken from the paper linked earlier. So let's take a look at that chart and what it tells us.
First, note the "envelope" starting around 2015 that surrounds the red line and the blue dashed line. Together these show the range of predictions for the RCP8.5 emissions scenario for every model studied. Quite a range.
Next, ignore the difference between the blue part of the data envelope and the purple part. That's not relevant to the point made here. Look instead at the very thin pink sliver that sits on top of the entire envelope; it's labeled "Observationally-informed projections." Models in this sliver "got it right" in the past with respect to the earth's energy imbalance.
What this paper is saying is that, if we stay on the RCP8.5 business-as-usual emissions path, the best models predict (a) a very narrow range of warming outcomes, and (b) the worst warming outcomes.
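In the same toy spirit, here is a sketch of the general logic of that comparison: score every model by how closely its nine simulated features match observations, then compare the warming projected by the best-scoring subset with the full ensemble. This is not the paper's actual statistical method, and every number below is invented; in the real study it is the actual model output that puts the constrained projections in that warm, narrow sliver at the top of the envelope:

```python
import numpy as np

rng = np.random.default_rng(7)
n_models, n_features = 36, 9

# Invented "observed" features and invented model versions of them.
observed = rng.normal(0.0, 1.0, n_features)
model_features = observed + rng.normal(0.0, 0.5, (n_models, n_features))

# Invented end-of-century warming projections spanning the article's RCP8.5 range.
warming = rng.uniform(3.2, 5.9, n_models)  # deg C above pre-industrial by 2100

# Score each model: root-mean-square distance from observations across the nine features.
scores = np.sqrt(((model_features - observed) ** 2).mean(axis=1))

# Keep the best-scoring quarter of the models and compare warming ranges.
best = np.argsort(scores)[: n_models // 4]
print(f"Full ensemble:       {warming.min():.1f} to {warming.max():.1f} deg C")
print(f"Best-scoring subset: {warming[best].min():.1f} to {warming[best].max():.1f} deg C")
# In this toy the warming numbers are random, so the subset range means nothing;
# in the study, the real model output is what places the best-scoring models
# at the high end of the warming range.
```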
Why This Matters
This matters for two reasons. One, it adds to the certainty that we're cooking the planet, a truly serious matter. But two, it also gives a scientific answer to the warming deniers' and delayers' countercharge, "But look at the uncertainty. Look at the range of predictions. These models are all over the place. How can you trust them?"
This important paper shows that that "range of predictions" can be narrowed considerably, from a broad, fat funnel of outcomes to a tiny, toothpick-wide sliver of them. Goodbye, "uncertainty."
GP
Labels: climate, Gaius Publius, global warming, Science
3 Comments:
Too bad you didn't get their formulas or methodologies, including all presumptions. You can get an IDEA of future emissions from plotting a trend line through the historical data points. But you'll always undershoot unless you infer certain future data.
This has been the primary flaw in most of the models. They don't infer the effect of increasing populations and all the burning and deforestation that entails; they don't factor in the 11-year luminosity cycle of the sun. Some fail to factor in the increase in water vapor as temps increase (H2O is a very good greenhouse gas), the increase in releases of sequestered CH4, or the REDUCTION in terrestrial sinking of C as forests disappear to make way for people, farms and building materials.
History is a rather poor predictor of future observations in a highly resonant system with dozens of variables.
But as observations pile up and they all tend to be higher than predictions, maybe the models will be adjusted and future predictions will be more accurate (that is: more Armageddon-like). If they're not suppressed/redacted by government of/by/for fossil fuels, that is.
Let's cut to the chase, 12:29. ALL of the models show everything getting worse. How this is determined is nitpicking.
1:55, no it's not nitpicking. A truer model would show the trends accelerating faster than any current model does. This is my point. Maybe that's also GP's point.
But, just being pragmatic for a change, if we can't prevent a trump admin from happening in this shithole, how are we even going to recognize, much less ameliorate the climate crisis. We're all about a year away from not being able to wipe ourselves after taking a shit.