By Frank Bosse and Fritz Vahrenholt
In February the sun was again very quiet. The observed sunspot number (SSN) was 44.8, only 53% of the mean value for this month of the solar cycle, calculated from the systematic observations of all earlier cycles.
Figure 1: Solar activity of the current Cycle No. 24 in red, the mean value for all previously observed cycles is shown in blue, and the up to now similar Cycle No. 1 in black.
It has now been 75 months since Cycle No. 24 began in December 2008. Overall, this cycle has reached only 53% of the mean activity level. About 22 years ago (in November 1992) Solar Cycle No. 22 was also in its 75th month, and back then solar activity stood at 139% of the mean value. The current drop in solar activity is quite impressive, as becomes clear when one compares all the previous cycles:
Figure 2: Comparison of all solar cycles. The plotted values are the differences of the accumulated monthly values from mean (blue in Figure 1).
The solar polar magnetic fields have become somewhat more pronounced compared to the previous month (see Figure 2 of our post "Die Sonne im Januar 2015 und atlantische Prognosen" [The sun in January 2015 and Atlantic forecasts]), and thus the sunspot maximum of the current cycle is definitely history. It is highly probable that over the coming years we will see a slow tapering off of sunspot activity. Weak cycles such as the current one are often drawn out, so the next minimum, defined by the appearance of the first sunspots of the new Cycle 25, may not occur until after the year 2020. The magnetic field of those sunspots will then be opposite to what we currently observe in Cycle 24.
Can the radiative forcing of the CMIP5 models be validated?
A recent paper by Marotzke and Forster (M/F) is being hotly debated at climateaudit.org, with more than 800 comments. Nic Lewis raised the question: is M/F's method for evaluating the trends affected by circularity?
The debate is not only about the methods, but also about the paper's main conclusion: "The claim that climate models systematically overestimate the response to radiative forcing from increasing greenhouse gas concentrations therefore seems to be unfounded."
Does natural variability really thwart our efforts to separate the better models of the CMIP5 ensemble from the not-so-good ones?
Here I present one possible approach.
I investigated the ability of the 42 "series" runs in the "Willis model sheet" (thanks to Willis Eschenbach for the work of arranging 42 anonymous CMIP5 models in "series"!) to replicate the least-squares linear trends from 1900 to 2014 (annual global data; 2014 is the constant end date of these running trends). For each start year from 1900 to 1995 I calculated the difference between the HadCRUT4 (observed) trend ending in 2014 and the trend of each "series", also ending in 2014.
I then summed the squared residuals over 1900 to 1995 for every "series":
Figure 3: The sum of the squared residuals for the running trends with constant end in 2014 from 1900,1901 and so on up to 1995 for every “Series” in “Willis sheet”. On the x-axis: the series 1…42.
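The computation behind Figure 3 can be sketched in a few lines. The sketch below uses synthetic stand-in data and illustrative names; the real analysis uses HadCRUT4 and the 42 anonymised CMIP5 runs of the "Willis sheet":

```python
import numpy as np

def running_trends(series, start_years, end_year, first_year):
    """Least-squares linear trend from each start year to the fixed end year.

    series: annual values starting at first_year; returns slopes per year."""
    trends = []
    for s in start_years:
        i0, i1 = s - first_year, end_year - first_year + 1
        x = np.arange(s, end_year + 1)
        slope = np.polyfit(x, series[i0:i1], 1)[0]
        trends.append(slope)
    return np.array(trends)

# Hypothetical data standing in for HadCRUT4 and one model "series"
rng = np.random.default_rng(0)
years = np.arange(1900, 2015)
obs = 0.007 * (years - 1900) + rng.normal(0, 0.1, years.size)
model = 0.009 * (years - 1900) + rng.normal(0, 0.1, years.size)

starts = range(1900, 1996)  # trends 1900-2014, 1901-2014, ..., 1995-2014
t_obs = running_trends(obs, starts, 2014, 1900)
t_mod = running_trends(model, starts, 2014, 1900)

# Sum of squared residuals: one value per series, as plotted in Figure 3
ssr = np.sum((t_mod - t_obs) ** 2)
print(ssr)
```

Repeating the same loop with `end_year=2004` yields the errors shown in Figure 4.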
We repeat the procedure from Step 1, but this time with trends ending in 2004, i.e. ten years before the end date used above:
Figure 4: The sum of the squared residuals for the running trends with constant end in 2004, from 1900, 1901 and so on up to 1995, for every "series" in the "Willis sheet". On the x-axis: the series 1…42. The ordinate scale is the same as in Figure 3.
Here one sees that the errors for the trends ending in 2004 are on average much smaller (Figure 4) than those for the trends ending in 2014 (Figure 3). That is no wonder, as the parameters of most models were "tuned" for the period up to 2005. Accordingly, the trends of the models up to 2004 agree well with the observations:
Figure 5: The trends of the model mean (Mod. Mean, red) in °C/ year since 1900, 1901 etc. up to 1985 with the constant end-year 2004 compared to observations (black).
Beyond the tuning period, the settings of the model parameters obviously no longer "hold", as the errors for the trends running to 2014 rise rapidly.
For every single series we calculate the quotient of the error for the 2014 trends (see Figure 3) divided by the error for the 2004 trends (see Figure 4), and perform a two-dimensional check:
Figure 6: The single series plotted as points. The coordinates are determined by the trend error up to 2014 (x-axis) and the ratio of the errors 2014/2004 (y-axis). The red rectangle marks the "boundaries": the "good" series are inside, the "bad" ones outside.
The borders are given by the standard deviations of the two quantities. The y-axis in Figure 6 is the quotient of the trend-estimation errors up to 2014 (see Figure 3) divided by those up to 2004 (see Figure 4), with a standard deviation of 3.08; the x-axis is the error of each series for the running trends with the constant end year 2014 (see Figure 3), with a standard deviation of 0.0038. The large differences of many series (up to a factor of 11) between the trend errors to 2004 and those to 2014 are impressive, aren't they? The stability of series with such large differences seems questionable; that is why they are classified as "bad".
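A minimal sketch of this two-dimensional selection, with hypothetical error values standing in for the real per-series sums of Figures 3 and 4. Reading "the borders are the standard deviations" as a simple threshold on each axis is an assumption of this sketch:

```python
import numpy as np

# Hypothetical per-series errors (stand-ins for the sums in Figures 3 and 4)
rng = np.random.default_rng(1)
ssr_2004 = rng.uniform(0.0005, 0.002, 42)         # trend errors, end-year 2004
ssr_2014 = ssr_2004 * rng.uniform(1.0, 11.0, 42)  # up to 11x larger by 2014

ratio = ssr_2014 / ssr_2004   # y-axis of Figure 6
error = ssr_2014              # x-axis of Figure 6

# "Good" series lie inside the rectangle bounded by one standard
# deviation of each quantity; everything outside is "bad"
good = (ratio <= ratio.std()) & (error <= error.std())
bad = ~good
print(good.sum(), "good series,", bad.sum(), "bad series")
```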
Now comes the most interesting part: from the 42 runs of different series, I selected the "good" ones that lie within the borders of the red rectangle in Figure 6 and calculated their average. The same procedure was applied to all the "bad" ones.
Figure 7: The selected "good" series (see Steps 1-3), the mean of all 42 series, the "bad" ones, and the observations, for running trends with constant end-year 2014, in K per annum.
The "good" series (blue) track the observations remarkably better than the model mean (red), while the "bad" ones (green) show the worst performance.
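The group averaging behind Figure 7 amounts to a masked mean over the matrix of running trends. Here with a synthetic trend matrix and an arbitrary membership mask in place of the real Figure 6 selection:

```python
import numpy as np

# Synthetic stand-in: trends[i, k] = running trend of series i for start year
# 1900 + k, with the constant end-year 2014 (42 series, 96 start years)
rng = np.random.default_rng(2)
trends = rng.normal(0.008, 0.002, (42, 96))

# Arbitrary stand-in for the Figure 6 selection (first 17 series "good")
good = np.zeros(42, dtype=bool)
good[:17] = True

mean_all = trends.mean(axis=0)         # model mean (red curve in Figure 7)
mean_good = trends[good].mean(axis=0)  # "good" series mean (blue)
mean_bad = trends[~good].mean(axis=0)  # "bad" series mean (green)
```

The three mean curves can then be plotted against the observed running trends, as in Figure 7.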
Up to this point we did not know which model was behind which "series" in the "Willis sheet". Thanks to help from Willis Eschenbach and Nic Lewis, we have now learned the assignment and the properties of the models behind the "series", including their sensitivity to forcing by greenhouse gases. The mean transient climate response (TCR), which expresses the calculated greenhouse gas effect, is approximately 1.6 for the "good" models; the mean of all models is 1.8, and the mean of the "bad" models is 1.96.
As one observes in Figure 7, selecting the "good" models "improves" the convergence toward the observations. For the observations themselves, a TCR of approximately 1.3 is estimated; compare our blog post "Wie empfindlich ist unser Klima gegenüber der Erwärmung durch Treibhausgase?" (How sensitive is our climate to warming from greenhouse gases?).
The model mean overestimates the effect of radiative forcing on global temperature up to 2014. The objectively better models have a lower mean TCR; the "bad" models have a higher one. Many models are perhaps "over-tuned" for the trends up to 2005, with the result of a dramatic loss of forecasting quality beyond the tuning period. Are Marotzke and Forster wrong? Will we ever hear them admit it? There are reasons for doubt.