Models still remain “crude statistical tools” … “not at all capable of reliably informing the world’s decision makers”
What Do The Current Climate Models Really Do?
A recent article describes the expected performance of upcoming models, made possible by a leap in computing power beyond today’s high-performance computers: the “exaflop” generation, i.e. 1 exaflop = 10 to the power of 18 = one quintillion floating point operations per second. These machines should then also make local climate calculations possible, primarily through a finer grid and directly computed physics where today parameterization is still required.
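To put the exaflop figure and the “finer grid” argument in perspective, here is a minimal back-of-the-envelope sketch. The 100-petaflop comparison value and the r³ cost rule of thumb are illustrative assumptions, not numbers from the article:

```python
# Back-of-the-envelope numbers for the "exaflop" generation.
# 1 exaflop = 10**18 floating point operations per second.
exaflop = 10**18

# Comparison value (assumption for illustration): a current
# high-performance system in the ~100-petaflop class.
current_hpc = 100 * 10**15

print(f"1 exaflop = {exaflop:.1e} FLOP/s")            # 1.0e+18 FLOP/s
print(f"Factor over a 100-petaflop machine: {exaflop / current_hpc:.0f}x")  # 10x

# Rough rule of thumb (assumption): refining the horizontal grid by a
# factor r multiplies the cost by about r**3 (two horizontal dimensions
# plus a correspondingly shortened time step). From ~100 km to ~1 km:
r = 100
print(f"Approximate cost factor for a {r}x finer grid: {r**3:,}x")  # 1,000,000x
```

This is why kilometer-scale global models, which can compute processes like deep convection directly instead of parameterizing them, are tied to the exascale generation.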
That is still pie in the sky; much more interesting are the article’s statements about the models as they exist so far.
Tim Palmer, a professor at Oxford, puts it this way:
“A highly nonlinear system where you have biases which are bigger than the signals you’re trying to predict is really a recipe for unreliability.”
Many of the laws and physical equations of the climate system are known; they simply could not be implemented so far, for reasons of computing time. Björn Stevens of Hamburg’s Max Planck Institute for Meteorology (MPI-M) puts it this way:
“We were somehow forbidden to use this understanding by the limits of computation.”

“People sometimes forget how far away some of the fundamental processes in our existing models are from our physical understanding.”
This by no means refers only to local phenomena. Stevens describes it this way:
“When we finally succeed in physically describing the pattern of atmospheric deep convection over the warm tropical seas in models, we can understand more deeply how this then shapes large waves in the atmosphere, guides the winds, and affects the extratropics.”
After all, we keep hearing that the models represent “the physics.” They obviously do not, even for large-scale phenomena. These are very frank words about the current models.
In the light of what may one day be possible, today’s models seem to be crude statistical tools rather than representations of reality. The article therefore closes by describing the goal of modeling with the help of “exascale computing”: a true image of the real terrestrial climate system, a “digital twin,” is to be created. Björn Stevens formulates the hope:
“…that the world’s decision makers will have to rely on climate models in the same way that farmers have to rely on weather reports, but that change will require a concerted – and expensive – effort to create some kind of common climate model infrastructure.”
What have we not been told about the performance of climate models! And now it turns out that so far they are not at all capable of reliably informing “the world’s decision makers.” In fact, they are far from it.
In this context it is also understandable that the latest IPCC Assessment Report, for the first time, did NOT rely on the many models created especially for it, the CMIP6 family. Instead, the IPCC obtained the most important piece of core information, “How sensitive is our climate system to a CO2 increase?”, from a single paper that combined various estimates without using models.
Unfortunately, this one paper contained some flaws and could also be brought up to date: Lewis (2022) reduced the most likely value assumed by IPCC AR6 significantly, by over 30%.
Thus much in climate science remains in flux and (as is always the case in science) nothing is set in stone. Will this ever find its way into our media? Or reach the frightened, who are convinced they are the “last generation” before a climate collapse, informed by these very media and, as they claim, by “The science”? Or was it, instead of science, works such as “Hothouse Earth” and “Climate Endgame”?
Let us remain optimists!