Climate models in distress: Problems with forecast performance give cause for worry
By Dr. Sebastian Lüning and Prof. Fritz Vahrenholt
(German text translated/edited by P Gosselin)
The 2015/16 El Nino is over, and so are the celebrations by the climate alarmists. It’s becoming increasingly clear that the projections made by the climate models were wildly exaggerated. Already in April 2015, a Duke University press release stated that the worst IPCC temperature prognoses needed to be discarded immediately:
Global Warming More Moderate Than Worst-Case Models
A new study based on 1,000 years of temperature records suggests global warming is not progressing as fast as it would under the most severe emissions scenarios outlined by the Intergovernmental Panel on Climate Change (IPCC).

“Based on our analysis, a middle-of-the-road warming scenario is more likely, at least for now,” said Patrick T. Brown, a doctoral student in climatology at Duke University’s Nicholas School of the Environment. “But this could change.”

The Duke-led study shows that natural variability in surface temperatures — caused by interactions between the ocean and atmosphere, and other natural factors — can account for observed changes in the recent rates of warming from decade to decade. The researchers say these “climate wiggles” can slow or speed the rate of warming from decade to decade, and accentuate or offset the effects of increases in greenhouse gas concentrations. If not properly explained and accounted for, they may skew the reliability of climate models and lead to over-interpretation of short-term temperature trends.
The research, published today in the peer-reviewed journal Scientific Reports, uses empirical data, rather than the more commonly used climate models, to estimate decade-to-decade variability. “At any given time, we could start warming at a faster rate if greenhouse gas concentrations in the atmosphere increase without any offsetting changes in aerosol concentrations or natural variability,” said Wenhong Li, assistant professor of climate at Duke, who conducted the study with Brown. The team examined whether climate models, such as those used by the IPCC, accurately account for natural chaotic variability that can occur in the rate of global warming as a result of interactions between the ocean and atmosphere, and other natural factors.
To test how accurate climate models are at accounting for variations in the rate of warming, Brown and Li, along with colleagues from San Jose State University and the USDA, created a new statistical model based on reconstructed empirical records of surface temperatures over the last 1,000 years. “By comparing our model against theirs, we found that climate models largely get the ‘big picture’ right but seem to underestimate the magnitude of natural decade-to-decade climate wiggles,” Brown said. “Our model shows these wiggles can be big enough that they could have accounted for a reasonable portion of the accelerated warming we experienced from 1975 to 2000, as well as the reduced rate in warming that occurred from 2002 to 2013.”
Further comparative analysis of the models revealed another intriguing insight. “Statistically, it’s pretty unlikely that an 11-year hiatus in warming, like the one we saw at the start of this century, would occur if the underlying human-caused warming was progressing at a rate as fast as the most severe IPCC projections,” Brown said. “Hiatus periods of 11 years or longer are more likely to occur under a middle-of-the-road scenario.” Under the IPCC’s middle-of-the-road scenario, there was a 70 percent likelihood that at least one hiatus lasting 11 years or longer would occur between 1993 and 2050, Brown said. “That matches up well with what we’re seeing.” There’s no guarantee, however, that this rate of warming will remain steady in coming years, Li stressed. “Our analysis clearly shows that we shouldn’t expect the observed rates of warming to be constant. They can and do change.”
Paper: Patrick T. Brown, Wenhong Li, Eugene C. Cordero and Steven A. Mauget. Comparing the Model-Simulated Global Warming Signal to Observations Using Empirical Estimates of Unforced Noise. Scientific Reports, April 21, 2015. DOI: 10.1038/srep09957”
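The “climate wiggles” reasoning in the press release can be illustrated with a few lines of code. What follows is a minimal Monte Carlo toy, not the Brown and Li statistical model: it assumes a linear forced trend plus AR(1) noise (every parameter value below is an illustrative assumption) and simply counts how often an 11-year stretch with no warming trend appears by chance.

```python
# Minimal Monte Carlo toy: forced linear trend + AR(1) "unforced noise".
# NOT the Brown & Li statistical model -- just an illustration of how natural
# decade-to-decade wiggles can produce multi-year "hiatus" periods.
# All parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

YEARS = 58          # e.g. a 1993-2050 window, as in the press release
TREND = 0.02        # assumed forced warming, deg C per year
PHI = 0.6           # assumed AR(1) autocorrelation of the annual noise
SIGMA = 0.1         # assumed std dev of the noise innovations, deg C
HIATUS_LEN = 11     # window length examined in the study
N_SIM = 5000

def simulate_series():
    """One synthetic annual temperature series: linear trend + AR(1) noise."""
    noise = np.zeros(YEARS)
    for t in range(1, YEARS):
        noise[t] = PHI * noise[t - 1] + rng.normal(0.0, SIGMA)
    return TREND * np.arange(YEARS) + noise

def has_hiatus(series, window=HIATUS_LEN):
    """True if any `window`-year stretch has a non-positive OLS trend."""
    t = np.arange(window)
    for start in range(len(series) - window + 1):
        slope = np.polyfit(t, series[start:start + window], 1)[0]
        if slope <= 0:
            return True
    return False

count = sum(has_hiatus(simulate_series()) for _ in range(N_SIM))
print(f"Fraction of runs with an 11-yr-or-longer hiatus: {count / N_SIM:.2f}")
```

Weakening the assumed forced trend raises that fraction and strengthening it lowers it – which is the qualitative point behind the middle-of-the-road versus worst-case comparison quoted above.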
Modelers like to pat each other on the back: well modelled, dear colleague! Calibration tests against the past are of course part of checking models. In many cases these start with the Little Ice Age, which was the coldest phase of the past 10,000 years. When the models appear to reproduce the warming since then, the joy runs high: look here, everything works.
The main driver of that warming, however, remains unclear. Isn’t it logical that a rewarming follows a natural cooling? Is it a coincidence that CO2 rose during this phase?
It would be more honest to use calibration tests going back to the Medieval Warm Period. Only when the preindustrial warm phases are successfully reproduced can we say that the models are confirmed.
In 2015 Gómez-Navarro et al. used the Little Ice Age trick. They began their test in 1500 AD, i.e. during the aforementioned cold phase. The result is no surprise: the general trend is “confirmed”, but in detail it doesn’t work. Here’s the abstract from Climate of the Past:
A regional climate palaeosimulation for Europe in the period 1500–1990 – Part 2: Shortcomings and strengths of models and reconstructions
This study compares gridded European seasonal series of surface air temperature (SAT) and precipitation (PRE) reconstructions with a regional climate simulation over the period 1500–1990. The area is analysed separately for nine subareas that represent the majority of the climate diversity in the European sector. In their spatial structure, an overall good agreement is found between the reconstructed and simulated climate features across Europe, supporting consistency in both products. Systematic biases between both data sets can be explained by a priori known deficiencies in the simulation. Simulations and reconstructions, however, largely differ in the temporal evolution of past climate for European subregions. In particular, the simulated anomalies during the Maunder and Dalton minima show stronger response to changes in the external forcings than recorded in the reconstructions. Although this disagreement is to some extent expected given the prominent role of internal variability in the evolution of regional temperature and precipitation, a certain degree of agreement is a priori expected in variables directly affected by external forcings. In this sense, the inability of the model to reproduce a warm period similar to that recorded for the winters during the first decades of the 18th century in the reconstructions is indicative of fundamental limitations in the simulation that preclude reproducing exceptionally anomalous conditions. Despite these limitations, the simulated climate is a physically consistent data set, which can be used as a benchmark to analyse the consistency and limitations of gridded reconstructions of different variables. A comparison of the leading modes of SAT and PRE variability indicates that reconstructions are too simplistic, especially for precipitation, which is associated with the linear statistical techniques used to generate the reconstructions. The analysis of the co-variability between sea level pressure (SLP) and SAT and PRE in the simulation yields a result which resembles the canonical co-variability recorded in the observations for the 20th century. However, the same analysis for reconstructions exhibits anomalously low correlations, which points towards a lack of dynamical consistency between independent reconstructions.”
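For readers unfamiliar with the jargon, the “leading modes of variability” mentioned in the abstract are conventionally obtained from an EOF (principal component) analysis. The sketch below shows only the bare mechanics on a synthetic time-by-gridpoint anomaly matrix; it is not the procedure of Gómez-Navarro et al., and the array sizes and data are made up.

```python
# Bare-bones EOF (principal component) analysis via SVD on a synthetic
# (time x gridpoint) anomaly matrix. Purely illustrative -- not the
# analysis of Gomez-Navarro et al.; sizes and data are made up.
import numpy as np

rng = np.random.default_rng(1)
n_time, n_grid = 490, 200                  # e.g. ~490 seasons, 200 grid cells
field = rng.normal(size=(n_time, n_grid))  # placeholder SAT or PRE anomalies

anom = field - field.mean(axis=0)          # remove the time mean at each gridpoint
u, s, vt = np.linalg.svd(anom, full_matrices=False)

explained = s**2 / np.sum(s**2)            # fraction of variance per mode
eof1 = vt[0]                               # leading spatial pattern (EOF1)
pc1 = u[:, 0] * s[0]                       # its principal-component time series

print(f"variance explained by the leading mode: {explained[0]:.1%}")
```

Comparing such leading modes between simulation and reconstruction is what the abstract refers to when it concludes that the reconstructions look too simplistic, especially for precipitation.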
In January 2017 Benjamin Santer et al. attempted to justify the validity of the models. In the Journal of Climate they compared satellite data with simulated tropospheric temperatures over recent decades. The result: the models calculated roughly 1.7 times the warming that was actually measured. Abstract:
Comparing Tropospheric Warming in Climate Models and Satellite Data
Updated and improved satellite retrievals of the temperature of the mid-to-upper troposphere (TMT) are used to address key questions about the size and significance of TMT trends, agreement with model-derived TMT values, and whether models and satellite data show similar vertical profiles of warming. A recent study claimed that TMT trends over 1979–2015 are 3 times larger in climate models than in satellite data but did not correct for the contribution TMT trends receive from stratospheric cooling. Here, it is shown that the average ratio of modeled and observed TMT trends is sensitive to both satellite data uncertainties and model–data differences in stratospheric cooling. When the impact of lower-stratospheric cooling on TMT is accounted for, and when the most recent versions of satellite datasets are used, the previously claimed ratio of three between simulated and observed near-global TMT trends is reduced to approximately 1.7. Next, the validity of the statement that satellite data show no significant tropospheric warming over the last 18 years is assessed. This claim is not supported by the current analysis: in five out of six corrected satellite TMT records, significant global-scale tropospheric warming has occurred within the last 18 years. Finally, long-standing concerns are examined regarding discrepancies in modeled and observed vertical profiles of warming in the tropical atmosphere. It is shown that amplification of tropical warming between the lower and mid-to-upper troposphere is now in close agreement in the average of 37 climate models and in one updated satellite record.”
See comments on this at WUWT.
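To make the abstract’s “ratio of modeled and observed TMT trends” concrete, here is a minimal sketch of such a calculation. The series are synthetic placeholders, not real satellite or CMIP5 data, and the stratospheric correction is purely schematic, using illustrative weights of roughly 1.1 on TMT and −0.1 on the lower-stratosphere channel (TLS); the actual processing in Santer et al. is considerably more involved.

```python
# Minimal sketch: OLS trends of modeled vs. observed mid-to-upper troposphere
# temperature (TMT) and their ratio, with a schematic correction for
# lower-stratospheric (TLS) cooling. The series are synthetic placeholders
# and the 1.1 / -0.1 weights are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(12 * 37)                       # a 1979-2015 monthly axis

# Placeholder anomaly series in deg C -- stand-ins, not real data.
obs_tmt   = 0.010 / 12 * months + rng.normal(0, 0.15, months.size)
obs_tls   = -0.030 / 12 * months + rng.normal(0, 0.20, months.size)
model_tmt = 0.020 / 12 * months + rng.normal(0, 0.15, months.size)
model_tls = -0.025 / 12 * months + rng.normal(0, 0.20, months.size)

def decadal_trend(series):
    """OLS trend in deg C per decade."""
    slope = np.polyfit(months, series, 1)[0]      # deg C per month
    return slope * 120.0

def corrected_tmt(tmt, tls, a=1.1):
    """Schematic removal of the stratospheric contribution to the TMT channel."""
    return a * tmt + (1.0 - a) * tls              # i.e. roughly 1.1*TMT - 0.1*TLS

raw_ratio = decadal_trend(model_tmt) / decadal_trend(obs_tmt)
corr_ratio = (decadal_trend(corrected_tmt(model_tmt, model_tls))
              / decadal_trend(corrected_tmt(obs_tmt, obs_tls)))

print(f"model/obs TMT trend ratio, uncorrected: {raw_ratio:.2f}")
print(f"model/obs TMT trend ratio, corrected:   {corr_ratio:.2f}")
```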
Judith Curry reported on a PhD thesis from the Netherlands whose author worked with model results on a daily basis and who levels harsh criticism at them. An excerpt from the thesis by Alexander Bakker:
In 2006, I joined KNMI to work on a project “Tailoring climate information for impact assessments”. I was involved in many projects often in close cooperation with professional users. In most of my projects, I explicitly or implicitly relied on General Circulation Models (GCM) as the most credible tool to assess climate change for impact assessments. Yet, in the course of time, I became concerned about the dominant role of GCMs. During my almost eight year employment, I have been regularly confronted with large model biases. Virtually in all cases, the model bias appeared larger than the projected climate change, even for mean daily temperature. It was my job to make something ’useful’ and ’usable’ from those biased data. More and more, I started to doubt that the ’climate modelling paradigm’ can provide ’useful’ and ’usable’ quantitative estimates of climate change.
After finishing four peer-reviewed articles, I concluded that I could not defend one of the major principles underlying the work anymore. Therefore, my supervisors, Bart van den Hurk and Janette Bessembinder, and I agreed to start again on a thesis that intends to explain the caveats of the ’climate modelling paradigm’ that I have been working in for the last eight years and to give direction to alternative strategies to cope with climate related risks. This was quite a challenge. After one year hard work a manuscript had formed that I was proud of and that I could defend and that had my supervisors’ approval. Yet, the reading committee thought differently.
According to Bart, he has never supervised a thesis that received so many critical comments. Many of my propositions appeared too bold and needed some nuance and better embedding within the existing literature. On the other hand, working exactly on the data-related intersection between the climate and impact community may have provided me a unique position where contradictions and nontrivialities of working in the ’climate modelling paradigm’ typically come to light. Also, not being familiar with the complete relevant literature may have been an advantage. In this way, I could authentically focus on the ’scientific adequacy’ of climate assessments and on the ’non-trivialities’ of translating the scientific information to user applications, solely biased by my daily practice.”
Read more at Judith Curry.
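Bakker’s remark that “the model bias appeared larger than the projected climate change” is easiest to grasp with a toy example of the kind of tailoring he describes. The sketch below uses a simple “delta change” adjustment with made-up numbers; it is not KNMI’s actual procedure, which typically involves far more elaborate transformations.

```python
# Toy "delta change" adjustment: apply the model's projected change to the
# observations instead of using the biased model values directly.
# All numbers are made up for illustration; this is not KNMI's procedure.
import numpy as np

obs_hist   = np.array([14.1, 14.3, 14.0, 14.2, 14.4])   # observed means, deg C
model_hist = np.array([16.0, 16.2, 15.9, 16.1, 16.3])   # model, same historical period
model_fut  = np.array([17.1, 17.3, 17.0, 17.2, 17.4])   # model, future period

bias = model_hist.mean() - obs_hist.mean()    # model bias: about +1.9 deg C
delta = model_fut.mean() - model_hist.mean()  # projected change: about +1.1 deg C

corrected_future = obs_hist.mean() + delta    # "usable" future estimate
print(f"model bias: {bias:+.2f} deg C, projected change: {delta:+.2f} deg C")
print(f"bias-adjusted future mean: {corrected_future:.2f} deg C")
```

In this toy case the bias (about 1.9 deg C) is indeed larger than the projected change (about 1.1 deg C), which is exactly the situation Bakker says he was routinely confronted with.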
Good article. DECEMBER 29, 2015 Climate Models Have Been Wrong About Global Warming For Six Decades
Climate models used by scientists to predict how much human activities will warm the planet have been over-predicting global warming for the last six decades, according to a recent working paper by climate scientists.
http://houstonenergyinsider.com/climate-models-have-been-wrong-about-global-warming-for-six-decades/
Michaels and Knappenberger. I know ad hominem is frowned upon, but those two are paid by oil industry lobbyists, and because they should know better they can be called outright climate disinformers. They aren’t an objective source on whether climate models have been overpredicting temperature change.
You probably think of graphs like this one when telling people that models overestimated warming, right? http://www.drroyspencer.com/wp-content/uploads/CMIP5-global-LT-vs-UAH-and-RSS.png
Well, I extended the graph into the present for you: http://imgur.com/a/sXjtE
If you plotted surface measurements against that mean model trend line, they would almost match.
“If you plotted surface measurements against that mean model trend line, they would almost match.”
By adjustment and manipulation..
Glad to see your reliance on El Ninos as the only warming in the satellite record hasn’t waned, seb.
As we know, CO2 doesn’t warm the ocean, so nothing anthropogenic about the El Nino.
It will be hilarious watching over the next couple of years as temperatures ease down slightly, making even more of a mockery of the climate crystal ball gazing !!
“those two are paid by the oil industry”
The oil industry in the USA – the big oil companies – is largely in the gas business. They know that, come what may, oil is critical for transport and faces no meaningful competition for the foreseeable future.
Coal and nuclear are effective competitors to gas.
The mission of Big Oil’s lobbyists has been to destroy the coal and nuclear businesses – not to protect the oil business, since that is entirely secure.
Sebastian
Ad hominem attacks are frowned upon for a reason: they are a logical fallacy and have nothing to do with the argument. If you can’t argue like a reasonable, smart person and instead have to resort to nonsensical ad hominem attacks, then why should anyone take you seriously?
I strongly suggest you read this article: http://www.acsh.org/news/2016/09/08/argumentum-ad-aurum-follow-money-fallacy-10133
Honestly, I’m sick of this “oh, obviously he/she is just a paid shill” nonsense.
He appears unable to argue without deception. Look at the graph which he has linked to prove his point. From the looks of it, the graph stops at the peak point of the last large El Nino. That is pure unadulterated deception.
Well, if they lower climate sensitivity (and it is obvious that climate sensitivity to CO2, if any at all, is towards the low end of the IPCC spectrum), the case for mitigation goes. It becomes obvious that the sensible course is adaptation.
This was recognised in the Climategate emails, and it is why the Team wanted to promote the high end of climate sensitivity.
Since I have mentioned one of the Climategate emails, it is probably worth recalling:
And of course:
Isn’t that all we are seeing?
We have no worthwhile data on Southern Hemisphere temperatures (a point noted by Phil Jones and endorsed by Hansen), and hence there is no worthwhile global data. The only data of any worth is that of the Northern Hemisphere.
Both Hansen and Phil Jones, in separate papers in 1981, held the view that as of 1980 the Northern Hemisphere was some 0.3 to 0.4 deg C cooler than it was in 1940. NAS put it at 0.5 deg C cooler than 1940.
It would appear that since 1980 the temperature in the Northern Hemisphere has risen by about 0.3 deg C, such that the Northern Hemisphere is today about the same temperature as it was back in 1940.
Thus we have seen about 95% of all manmade CO2 emissions (1940 to date) and no change in Northern Hemisphere temperatures, thereby putting the figure for climate sensitivity to CO2 at zero, or close to it. That is the finding from the empirical observational data.
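Taking the comment’s own figures at face value (they are the commenter’s premises, not established values, and the CO2 concentrations below are rough approximations added here for illustration), the back-of-envelope arithmetic runs as follows:

```latex
% Back-of-envelope arithmetic using the figures quoted above:
% about -0.35 deg C from 1940 to 1980, about +0.3 deg C from 1980 to today,
% with CO2 rising from roughly 310 ppm (1940) to roughly 405 ppm (today).
\Delta T_{1940 \to \text{today}} \approx -0.35 + 0.3 \approx -0.05\ ^{\circ}\mathrm{C}
\qquad
S_{\text{implied}} \approx \frac{\Delta T}{\log_2(C/C_0)}
\approx \frac{-0.05}{\log_2(405/310)} \approx -0.1\ ^{\circ}\mathrm{C} \text{ per doubling}
```

This is a crude transient figure under the commenter’s premises, not a formal sensitivity estimate.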
All we have seen in the 20th and 21st century is the run out of multidecadal natural variation.
And that’s the problem for the advocates. If modern climate changes are not unusual or unprecedented and still well within the range of natural variability, and if the uncertainty in our estimates of radiative forcing is 10-100 times greater than the assumed anthropogenic forcing value itself, then detecting an anthropogenic signal is effectively no more than speculation. Since they can’t say we are “deniers” of speculation, they have to make up hockey stick graphs and a 97% “consensus” and alter the past temperature record and report on “unprecedented” glacier melt and sea level rise to get our attention. It’s worked, in the sense that the non-skeptical like SebastianH and sod believe them.
And there is that word again … “believe”. Maybe we should just wait it out, as the zoologist in the previous blog post wrote? “Was richtig war, zeigt sich erst im Nachhinein” – what was right only becomes apparent in hindsight.
Do you think there is any possibility that you are wrong and/or your beliefs could be wrong? Or are you really convinced that most of the (climate) science has it wrong?
“Wait it out and see who is right and who is wrong” … is what I wanted to write.
The short answer to that is YES. Indeed, Climate Science itself considers the answer to be YES.
In your comment above, you take the Spencer graph and extend it with the 2016 El Nino warming (but do not show the latest cooling).
But the main point is this. There are some 44 models detailed in that plot, no two showing the same result. It is like the Dire Straits song:
Given that all these models project different outcomes, we know as fact that some 43 of the 44 models must be wrong. Indeed, not one of the projections is in line with observation, and thus all are wrong.
One often sees a plot with 102 climate models, again no two produce the same outcome, so we know as fact that 101 of the 102 models must be wrong. Again, not one is consistent with real empirical data obtained from observation.
Of course, the IPCC averages the output of these models to obtain an ensemble mean, but as any mathematician knows only too well, the average of incorrect data is always incorrect, save by fluke.
It is worth emphasising that one cannot average a set of incorrect answers and get the correct answer, save by fluke.
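For what it is worth, the “ensemble mean” referred to here is simply the arithmetic average across model runs, usually quoted together with the spread. A minimal sketch with made-up trend values (not actual CMIP output):

```python
# Minimal sketch of a multi-model "ensemble mean" and spread.
# The trend values (deg C per decade) are made up, not CMIP output.
import numpy as np

model_trends = np.array([0.15, 0.18, 0.21, 0.24, 0.27, 0.30, 0.33])
observed_trend = 0.13                      # placeholder "observed" value

ensemble_mean = model_trends.mean()
ensemble_std = model_trends.std(ddof=1)    # sample spread across the models

print(f"ensemble mean trend: {ensemble_mean:.2f} +/- {ensemble_std:.2f} deg C/decade")
print(f"observed trend:      {observed_trend:.2f} deg C/decade")
print(f"models warmer than observed: {(model_trends > observed_trend).sum()} of {model_trends.size}")
```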
Until such time as Climate Science has only one model (or only three models, each one dealing with a different CO2 scenario: BAU, reduced CO2 emissions, no further CO2 emissions), and until such time as that one single model closely follows empirical observational data, all logically and rationally thinking people will hold the view that Climate Science has it wrong.
The ADMITTED uncertainty and errors in models of heat flux forcing are, by themselves, 10-100 times larger than the assumed radiative forcing values associated with models of anthropogenic forcing. So with that much uncertainty/error bars in assumptions of anthropogenic forcing, where is the “right” vs. the “wrong” that I am supposedly convinced of?
Now let’s all keep an eye on this loony experiment. There is nothing like the real thing!
MARCH 24, 2017 Scientists in Germany fire up the world’s largest artificial sun
Now, German scientists are testing what they term “the world’s largest artificial sun,” which they hope can pave the way for producing hydrogen for use as a green fuel in the future.
https://www.techworm.net/2017/03/scientists-germany-fire-worlds-largest-artificial-sun.html
Reinventing the wheel?
Lavoisier et al. developed a process to make hydrogen by passing steam over red-hot iron. The iron removes the oxygen by rapidly rusting, leaving the hydrogen. I do hope “Synlight” has a plan to remove the O2. But it would have to be removed as it’s produced, due to its high reactivity.
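For reference, the classic steam-iron reaction the comment describes is conventionally written as below; in this route the oxygen ends up bound in the iron oxide rather than being released as free O2.

```latex
% Steam-iron reaction: red-hot iron + steam -> magnetite + hydrogen
3\,\mathrm{Fe} + 4\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{Fe_3O_4} + 4\,\mathrm{H_2}
```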
Also, Lavoisier invented a device he could have used to vaporize water…
https://upload.wikimedia.org/wikipedia/commons/thumb/e/e8/Lentilles_ardentes_Lavoisier.png/320px-Lentilles_ardentes_Lavoisier.png
…but for some reason he didn’t use it for that.
Lesson? For me it is that just because one has superior modern technology, doesn’t mean he has the genius to know how to use it to best advantage.
PS – When reinventing the wheel, don’t forget the bearings and the axle.
Also bouncing the ball back towards the natural variation court, via observations on atmospheric water, clouds and precipitation, is a new paper on the Iris effect — ‘Missing iris effect as a possible cause of muted hydrological change and high climate sensitivity in models’
by Thorsten Mauritsen and Bjorn Stevens.
J. Curry also covers it in https://judithcurry.com/2015/05/26/observational-support-for-lindzens-iris-hypothesis/
From the Benjamin Santer paper above —
“A recent study claimed that TMT trends over 1979–2015 are 3 times larger in climate models than in satellite data but did not correct for the contribution TMT trends receive from stratospheric cooling.”
So the temperature of the mid-to-upper troposphere (TMT) is said to be affected by stratospheric cooling. Umm…
Don’t stop at 2015 … include 2016 and 2017: http://imgur.com/a/sXjtE
3 times larger? Not quite.
El Nino, you propaganda ass.
You KNOW its come back down already.
A single year event, coming from the oceans, therefore absolutely NOTHING anthropogenic about it.
because as you have more than adequately proven, seb…
… CO2 DOES NOT cause ocean warming.
But we all know that El Nino and other ocean energy releases are ALL YOU HAVE to show any warming.
That is why you are so utterly desperate to show that CO2 causes ocean warming.
Massive FAIL there, seb. a BIG FAT ZERO !!!
Even you must KNOW by now, that apart from the two major El Ninos…
… there is ABSOLUTELY NO WARMING in the last 39 years in any reliable data set.
There is absolutely NO anthropogenic warming signature in the whole of the satellite data.
Everybody KNOWS that..
even you would know that… if you could ever face REALITY..
From Gómez-Navarro et al…
“Simulations and reconstructions, however, largely differ in the temporal evolution of past climate for European subregions. In particular, the simulated anomalies during the Maunder and Dalton minima show stronger response to changes in the external forcings than recorded in the reconstructions.”
Surely such leaps in anomalies are exactly what should be expected in a chaotic system as it transitions from one quasi-stable state to another? And is it not that very mechanism (along with assessing probable timescales to change, probable trajectories, and regional effects) that they should be researching with the utmost urgency, since that is where the climate will damage humankind (and change the biosphere we depend on) the most?