German scientists Dr. Sebastian Lüning and Prof. Fritz Vahrenholt, along with several recent studies, show that climate models have a long way to go before they can be used to advise policymakers.
Climate sciences cannot reliably advise policymakers as long as model uncertainties cannot be reduced
By Dr. Sebastian Lüning and Prof. Fritz Vahrenholt
(German text translated by P Gosselin)
On November 3, 2017, the Institute of Atmospheric Physics of the Chinese Academy of Sciences put out a press release on quality checks of climate models:
A new method to evaluate overall performance of a climate model
Many climate-related studies, such as detection and attribution of historical climate change, projections of future climate and environments, and adaptation to future climate change, heavily rely on the performance of climate models. Concisely summarizing and evaluating model performance becomes increasingly important for climate model intercomparison and application, especially when more and more climate models participate in international model intercomparison projects.
Most current model evaluation metrics, e.g., the root mean square error (RMSE), correlation coefficient, and standard deviation, measure model performance in simulating an individual variable. However, one often needs to evaluate a model’s overall performance in simulating multiple variables. To fill this gap, an article published in Geosci. Model Dev. presents a new multivariable integrated evaluation (MVIE) method.
“The MVIE includes three levels of statistical metrics, which can provide a comprehensive and quantitative evaluation of model performance,” says XU, the first author of the study from the Institute of Atmospheric Physics, Chinese Academy of Sciences. The first level of metrics, including the commonly used correlation coefficient, RMS value, and RMSE, measures model performance in terms of individual variables. The second level of metrics, including four newly developed statistical quantities, provides an integrated evaluation of model performance in simulating multiple fields. The third level of metrics, the multivariable integrated evaluation index (MIEI), further summarizes the three statistical quantities of the second level into a single index and can be used to rank the performance of various climate models. Unlike the commonly used RMSE-based metrics, the MIEI satisfies the criterion that a model performance index should vary monotonically as the model performance improves.
According to the study, each higher level of metrics is derived from, and concisely summarizes, the level below it. “Inevitably, the higher level of metrics loses detailed statistical information compared with the lower level,” XU notes. He therefore suggests: “To provide a more comprehensive and detailed evaluation of model performance, one can use all three levels of metrics.”
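To make the tiered structure concrete, here is a minimal sketch of how per-variable metrics can be rolled up into a single ranking number. The first-level metrics (RMSE and correlation) are standard; the combined index below is an illustrative normalized-RMSE average invented for this example, not the published MIEI formula of Xu et al.

```python
# Sketch of a multi-level model-evaluation scheme in the spirit of MVIE:
# per-variable metrics first, then one summary index.  The aggregation
# used in combined_index() is illustrative only, NOT the actual MIEI.
import math

def rmse(model, obs):
    """Root mean square error between a simulated and observed series."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

def correlation(model, obs):
    """Pearson correlation coefficient (first-level metric)."""
    n = len(obs)
    mm, mo = sum(model) / n, sum(obs) / n
    cov = sum((m - mm) * (o - mo) for m, o in zip(model, obs))
    sm = math.sqrt(sum((m - mm) ** 2 for m in model))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    return cov / (sm * so)

def combined_index(fields):
    """Illustrative single-number summary over several variables:
    mean RMSE normalized by each observed range (lower is better)."""
    scores = []
    for model, obs in fields:
        spread = (max(obs) - min(obs)) or 1.0
        scores.append(rmse(model, obs) / spread)
    return sum(scores) / len(scores)

# Two variables (e.g. temperature and precipitation) for one model run:
temp_model, temp_obs = [14.1, 14.5, 15.0, 14.8], [14.0, 14.6, 15.1, 14.7]
prcp_model, prcp_obs = [2.0, 2.6, 1.9, 2.3], [2.2, 2.5, 2.0, 2.4]

print(round(correlation(temp_model, temp_obs), 3))
print(round(combined_index([(temp_model, temp_obs),
                            (prcp_model, prcp_obs)]), 3))
```

As the press release notes, the summary index is convenient for ranking models but discards detail, which is why all three levels are recommended for a full evaluation.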
It is highly satisfying to see that climate modelers are now taking the quality checks of their models seriously. For the time before the Little Ice Age, the models unfortunately have practically no skill at all. With the effective use of quality checks, this quickly becomes clear.
Ancell et al. (2018) have shown that some models are dominated by chaos: small changes in the input values lead to very different results.
Seeding Chaos: The Dire Consequences of Numerical Noise in NWP Perturbation Experiments
Perturbation experiments are a common technique used to study how differences between model simulations evolve within chaotic systems. Such perturbation experiments include modifications to initial conditions (including those involved with data assimilation), boundary conditions, and model parameterizations. We have discovered, however, that any difference between model simulations produces a rapid propagation of very small changes throughout all prognostic model variables at a rate many times the speed of sound. The rapid propagation seems to be due to the model’s higher-order spatial discretization schemes, allowing the communication of numerical error across many grid points with each time step. This phenomenon is found to be unavoidable within the Weather Research and Forecasting (WRF) Model even when using techniques such as digital filtering or numerical diffusion. These small differences quickly spread across the entire model domain. While these errors initially are on the order of a millionth of a degree with respect to temperature, for example, they can grow rapidly through nonlinear chaotic processes where moist processes are occurring. Subsequent evolution can produce within a day significant changes comparable in magnitude to high-impact weather events such as regions of heavy rainfall or the existence of rotating supercells. Most importantly, these unrealistic perturbations can contaminate experimental results, giving the false impression that realistic physical processes play a role. This study characterizes the propagation and growth of this type of noise through chaos, shows examples for various perturbation strategies, and discusses the important implications for past and future studies that are likely affected by this phenomenon.
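The abstract's central point, that a perturbation of a millionth of a degree can grow to weather-scale magnitude, is the classic signature of sensitive dependence on initial conditions. A toy demonstration with the logistic map illustrates the effect; this is a stand-in chosen for brevity, not the WRF model studied by Ancell et al.

```python
# Toy illustration of sensitive dependence on initial conditions using
# the logistic map x -> r*x*(1-x) in its chaotic regime (r = 3.9).
# A perturbation of 1e-9 grows until the two trajectories are
# completely decorrelated, bounded only by the map's range [0, 1].
def logistic_trajectory(x0, r=3.9, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000000)
b = logistic_trajectory(0.400000001)  # perturbed by 1e-9

for step in (0, 20, 40, 60):
    print(step, abs(a[step] - b[step]))
```

The difference grows roughly exponentially (at the map's Lyapunov rate) until it saturates at the size of the attractor itself, which is the same qualitative behavior the paper reports for numerical noise in WRF.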
See also the discussion of this paper at WUWT.
Eos also looked at the limits of climate modeling on February 26, 2018. A team led by Kenneth Carslaw wrote:
Climate Models Are Uncertain, but We Can Do Something About It
Model simulations of many climate phenomena remain highly uncertain despite scientific advances and huge amounts of data. Scientists must do more to tackle model uncertainty head-on.
Model uncertainty is one of the biggest challenges we face in Earth system science, yet comparatively little effort is devoted to fixing it. A well-known example of persistent model uncertainty is aerosol radiative forcing of climate, for which the uncertainty range has remained essentially unchanged through all Intergovernmental Panel on Climate Change assessment reports since 1995. From the carbon cycle to ice sheets, each community will no doubt have its own examples. We argue that the huge and successful effort to develop physical understanding of the Earth system needs to be complemented by greater effort to understand and reduce model uncertainty. Without such reductions in uncertainty, the science we do will not, by itself, be sufficient to provide robust information for governments, policy makers, and the public at large.
15 responses to “Model Uncertainties Too Great To Reliably Advise Policymakers, German Scientists Say”
This echoes Pat Frank’s work on error propagation, which, in my estimation, is a reasonable analysis but has been rejected by the journals as somehow incorrect, not by pointing out errors but by declaring that “it can’t be right.” I hope he is able to update his work and resubmit it for publication.
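For readers unfamiliar with the idea, error-propagation arguments of this kind hold that a systematic per-step uncertainty compounds through an iterated calculation. The sketch below shows the standard root-sum-square rule for uncorrelated per-step errors; the per-step value is a made-up illustrative number, not a figure from Frank's work.

```python
# Schematic of root-sum-square uncertainty propagation through an
# iterated calculation.  The per-step uncertainty (0.1) is purely
# illustrative; it is not a published value from Pat Frank's analysis.
import math

def propagate(per_step_uncertainty, steps):
    """Uncorrelated per-step errors combine in quadrature,
    so total uncertainty grows like sqrt(steps)."""
    u = 0.0
    for _ in range(steps):
        u = math.sqrt(u ** 2 + per_step_uncertainty ** 2)
    return u

print(propagate(0.1, 100))
```

The point of such arguments is that even a small per-step uncertainty, applied over many simulation steps, can grow into an envelope larger than the signal being projected.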
Can anything that is randomly variable ever be ‘modelled’ with even the smallest degree of certainty?
I thought the whole point of the skeptic world view is that it isn’t random at all, but following certain patterns that repeat themselves and are totally natural. Are you now claiming that is not the case? 😉
Poor seb, still ZERO-comprehension of what drives climate systems, and the cycles therein.
Cycles with random variability on top.. do you comprehend ???????
Obviously something caused the COOLING down to the LIA anomaly, but the recovery out of that really cold period looks like it has stalled, more’s the pity.
seb… Always UNAWARE… and so, so, desperate to stay that way. !!
There is absolutely NOTHING to show any of the changes in climate, like warming from the cold anomaly of the LIA, more benign weather, cyclic warming from coldest period since 1900 (the late 1970s)..
….. none of this has been caused by humans.
Certainly NOT by human released CO2, which by now you should be FULLY AWARE, has zero provable effect on climate.
The only scientifically proven effect of enhanced atmospheric CO2 is enhanced plant growth.
Does chanting this mantra of yours work? Have you convinced yourself that what you say is true? 😉
Do you have any evidence at all that says I am incorrect ??
Or will you remain, as always.. ZERO-EVIDENCE seb.
Climate cycles… correct
(read up on the AMO, try to learn)
LIA anomalously cold… correct
Beneficial warming… correct
No human effect on Arctic… correct (no UHI up there)
No warming from CO2… correct
See here: https://notrickszone.com/2015/05/14/natural-cycles-in-a-random-world-are-unmistakable-future-holds-nothing-to-fear/#sthash.BCyrIss9.yLGMEyaj.dpbs
Someone laughed at me for using a linear trend for Arctic sea ice extent of the last few decades. Using random values modulated by sine waves to “predict” the future (including El Ninos) … why is nobody laughing at this? 😉
Poor seb, you really have ZERO comprehension of any of this, do you?
There is a large amount of evidence that climate has cycles which are fairly regular.
NO evidence of linearity.
Your comment is LAUGHABLE in its ignorance.
A very good question with a virtually certain answer.
@Don from OZ
It’s not just random, but also chaotic, with dependence of any measured parameter on multiple random variables. And that’s for each point of space and time, of which there are so many that even our most powerful computers are out of their league. And those are just a few of the problems. It’s not just difficult. It’s impossible.
Agreed wholeheartedly, and I should have included CHAOTIC in my question.
Your last sentence, ‘It’s impossible’, is entirely understandable.
Here we go, Don.
Holy mackerel! Yonason, I’m not sure whether to say thanks or not for this second response. I don’t think my poor ol’ brain can handle Judith Curry’s determinism and predictability.
Meanwhile in America.
The EPA will use data which is available to the public to back up decisions on environmental issues.
This will interest you, Sebastian, as it explains the scientific method.