Using 1999 GISS Data, Global Warming Trend Since 1866 Only 0.5°C Per Century!



How Much Global Warming?
By Ed Caryl

We are told over and over that the globe has warmed by 0.8°C since 1880 or 1850. Lately we have seen article after article stating this number in Fahrenheit, 1.44°F, because it sounds larger. But is the number correct? What is it based on?

GISS and Google “way-back machine”

Recently, a file from GISS in Google’s “way-back” machine came to my attention. This file of global temperature dates from 1999, before James Hansen became more rabid in promoting global warming. Here is a plot of the 1999 data, along with the current file from GISS:


Figure 1 is a plot of global temperature as published by GISS in 1999 versus the current publication.

Note that GISS has removed the data from 1866 to 1880, placing the beginning of their published data closer to the bottom of the early 1900s cool period. This changes the trend from 0.42°C per century to 0.66°C per century, a 50+% increase in the trend. This alone changes the warming from 0.6°C from 1866 to the present, to 0.8°C from 1880 to the present, resulting in the higher trend. Here is a chart of the difference between the two files.
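The effect of the start year on a least-squares trend can be sketched in a few lines of Python. The anomaly series below is invented purely for illustration (a flat 1866–1879 stretch followed by a linear warming ramp); it is not the GISS data, and the numbers are assumptions chosen only to mimic the shape described above.

```python
# Illustration only: invented anomalies, not the GISS series.
def ols_slope(years, values):
    """Ordinary least-squares slope, in value units per year."""
    n = len(years)
    my, mv = sum(years) / n, sum(values) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, values))
    den = sum((y - my) ** 2 for y in years)
    return num / den

years = list(range(1866, 2000))
# Flat early segment, then a 0.7 °C/century ramp starting in 1880.
anoms = [-0.1 if y < 1880 else -0.35 + 0.007 * (y - 1880) for y in years]

from_1866 = ols_slope(years, anoms) * 100            # °C per century
from_1880 = ols_slope(years[14:], anoms[14:]) * 100  # drop 1866-1879

print(f"trend from 1866: {from_1866:.2f} °C/century")
print(f"trend from 1880: {from_1880:.2f} °C/century")
```

Dropping the relatively warm 1866–1879 segment raises the fitted trend, which is the mechanism the text describes.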


Figure 2 is a plot of the difference between the two plots in Figure 1.

In Figure 2 we can see that the cool period around 1910 was cooled further by 0.2 degrees, but the cool period around 1970 was warmed slightly. They also minimized the cool 1880s and ’90s by warming those years by 0.1 to 0.2°C. So what was the real global temperature from 1866 to the present? I took the 1999 file and spliced on the satellite data from UAH from 1979 to the present, using the period of overlap from 1979 to 1999 as a baseline, avoiding the recent GISS adjustments. The result is this:

Figure 3 is a plot of GISS global temperature from 1999 with UAH satellite TLT global temperature spliced on from 1979.
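The splicing step just described can be sketched as follows. This is a minimal illustration with invented numbers, not the actual GISS or UAH files: the satellite series is shifted so its mean over the 1979–1999 overlap matches the surface series over the same years, then appended.

```python
def splice(surface, satellite, overlap_years):
    """surface, satellite: dicts of year -> anomaly (°C)."""
    overlap = list(overlap_years)
    # Offset that aligns the satellite baseline with the surface
    # baseline over the overlap period.
    offset = (sum(surface[y] for y in overlap) / len(overlap)
              - sum(satellite[y] for y in overlap) / len(overlap))
    merged = {y: v for y, v in surface.items() if y <= max(overlap)}
    for y, v in satellite.items():
        if y > max(overlap):
            merged[y] = v + offset
    return merged

# Toy data: the satellite series runs 0.2 °C cooler on its own baseline.
surface = {y: 0.01 * (y - 1950) for y in range(1866, 2000)}
satellite = {y: 0.01 * (y - 1950) - 0.2 for y in range(1979, 2015)}
merged = splice(surface, satellite, range(1979, 2000))
```

Because the offset is computed from the overlap rather than from a single join year, year-to-year noise at the splice point does not bias the joined series.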

The trend in Figure 3 is half a degree C per century, with a total rise since 1866 of about 0.6°C. Because of the year-to-year variation, and the sparse station data in the early years, both the trend and the total rise have errors that are in the neighborhood of ±0.3°C. So the bottom line is that the warming since the mid-19th century is about 0.6°C ±0.3°C, or somewhere between 0.3°C and 0.9°C. Much of that warming, about 0.4°C ±0.2°C has taken place since 1980. But some of that is due to the cyclic nature of temperature.

The cycle from 1866 to 1940 had an amplitude of about 0.3°C, which, if extended to the present, means that the present temperature is at the peak of a cycle, or 0.15°C too high. This puts the total rise between 0.15°C and 0.75°C, or from almost nothing to something less than has been stated, with a center at 0.45°C. The recent solar maximum has also inflated the temperature. In the next 30 years, decreasing ocean cyclic temperature and a waning solar input will likely reduce the global temperature by about 0.4°C ±0.2°C, either back to the 1990s or to the 1960s. If the latter, there will have been no warming in the last 160 years.
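The arithmetic in the two paragraphs above, spelled out (the values are simply those quoted in the text):

```python
total_rise, err = 0.6, 0.3        # °C since 1866, with error band
cycle_amplitude = 0.3             # °C, the 1866-1940 cycle
cycle_peak = cycle_amplitude / 2  # present sits at a cycle peak

center = total_rise - cycle_peak  # cycle-adjusted central estimate
low, high = center - err, center + err
print(f"adjusted rise: {center:.2f} °C ({low:.2f} to {high:.2f})")
```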


Climate Change Dying As An Issue In German Media…Empty Seats Pack Hamburg “9th Extreme Weather Congress”!

This past week the 9th Extreme Weather Congress took place in Hamburg. Curiously, there was very little coverage by the German media this year. A Google search of the event turned up very few stories from the mainstream outlets.

9th ExtremWetterKongress 2014

Empty seats pack Day 3 of the Hamburg 9th Extreme Weather Conference, just minutes before starting. Source: here (11:12 mark).

The above photo is a snapshot from a Youtube video, just minutes before the start of Day 3 of the Congress.

Looks like the German media have grown fatigued by climate science in general and have sensed that something isn’t right with what the “experts” have been claiming. Record high sea ice, lack of hurricanes, low tornado activity, spectacularly failed climate models and bitter cold winters have a way of sobering them up.

Some German public television networks showed up on the first day (see here for example), but there too we see many empty seats – unusual given the opening-day hype.

Not a peep about Antarctic sea ice record

We begin to sense that the media are feeling increasingly embarrassed about the climate issue overall. Any reminder of how they’ve been duped gets avoided altogether. Little wonder that when Googling “rekord eis antarktis 2014“, we quickly find the German mainstream media have completely ignored this year’s record high south polar sea ice. Too embarrassing! Besides, German viewers are thoroughly bored by the climate issue.

Some small sites have reported the event, though. The online German weather site Wetteronline writes that Arctic sea ice is “considerably greater than the record low year of 2012” and that the German Polarstern research vessel of the Alfred Wegener Institute “did not succeed in crossing the Northwest Passage of the American continent in the second half of August“. The site also looked at the situation in the Antarctic, which this year smashed the all-time satellite-era sea ice record. It avoided the word “record”, however, and wrote:

…the sea ice around Antarctica reached 20 million square kilometers. Thus the 30-year maximum of last year was exceeded by about 0.4 million square kilometers.”

Of course this new satellite-era record high has baffled climate scientists, who are left stumped and able only to speculate about what is behind the unexpected trend. Wetteronline writes:

The reason for this, scientists suspect, among other factors, is a weakening sea current around Antarctica. Thus there is less mixing of the water masses which favours the growth of the sea ice.”

There’s no data to back this up, so it just means the scientists don’t have a clue and are shooting in the dark – and they should just say so.

Meanwhile, alarmist site Klimaretter presented its polar sea ice summary for 2014, but forgot to mention anything at all about the South Pole.


IPCC Models Fail Abominably In Projections of Northern And Southern Hemisphere Temperature

What follows is a modestly abbreviated version in English. The first part is the brief solar activity report, and the second part is about IPCC model failure.

The Sun in September 2014. Attention: X-Flares!

By Frank Bosse and Prof. Fritz Vahrenholt
(Translated/edited by P Gosselin)

The sun in September was considerably more active than in the previous months. The sunspot number was 87.6, which is 89% of what is typical for the 70th month of a cycle. The current solar cycle 24 (SC 24) began in December 2008. Figure 1 shows the current cycle compared to the mean of SC 1–23 and to solar cycle no. 1:

Fig. 1: The current SC 24 is shown in red, the mean of the previous 23 cycles is depicted by the blue curve, and the current cycle SC 24 strongly resembles SC 1, which is shown by the black curve.

The current cycle resembles SC 1, and should it continue to behave like SC 1, a trailing off of activity cannot be expected anytime soon. Indications, however, do point to a longer than normal cycle. Japanese researcher Hiroko Miyahara and colleagues examined this in 2013 (Influence of the Schwabe/Hale solar cycles on climate change during the Maunder Minimum). They were able to show that the length of the solar cycle correlates with solar activity: “The mean length of the Schwabe cycle during the Maunder Minimum was approx. 14 years, and during the Medieval Warm Period the average cycle length was only about 9 years.”

The sun today is relatively active, though slightly below normal. On September 10 there was an X1.6 flare, a high-category explosion on the sun. Flares are classified as follows: C for common, M for medium, and X for strong. See the following image, Figure 2:

Fig. 2: X-flare on 10 September 2014. Source:

With such powerful explosions, material gets ejected from the sun in what is known as a coronal mass ejection (CME). When such plasma strikes the earth’s atmosphere, it produces polar lights and other effects. An X1.6 flare, however, is too weak to have any massive impact on the earth’s atmosphere and magnetic field.

Models fail to project temperature

Now let’s take a detailed look at the warming scenarios of the earth’s surface temperature, this time taking local particularities into account:

Fig. 3: The mean surface temperature since 2000 compared to the period 1950-1980. Source: GISS

Most of the warming took place in the northern extra-tropics, at latitudes between 25°N and 90°N. This is consistent with what one would expect from the effects of greenhouse gases. However, let’s take a look at the temperature series of the northern-hemisphere extra-tropics. Here, too, we see a “pause” since about the year 2000.

Fig. 4: The temperature curve of the northern extra-tropics according to GISS.

Indeed the trend from 1983 to 2013 (0.33 +/- 0.06°C/decade) differs significantly from the 2000 to 2013 period (0.09 +/- 0.14°C/decade), which is no longer a significant warming. This hefty deceleration has occurred even though greenhouse gas emissions have continued to rise unabated. We have reported multiple times on what the reasons for this could be.
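The significance statement above rests on comparing a fitted trend with its confidence interval: a trend is "not significant" when the interval straddles zero. Here is a minimal sketch with invented data, using ordinary least squares and the usual normal-approximation standard error for the slope; the analyses cited in the text may use more careful error models.

```python
import math

def trend_with_ci(years, values, z=1.96):
    """OLS slope and half-width of its ~95% confidence interval."""
    n = len(years)
    my, mv = sum(years) / n, sum(values) / n
    sxx = sum((y - my) ** 2 for y in years)
    slope = sum((y - my) * (v - mv) for y, v in zip(years, values)) / sxx
    intercept = mv - slope * my
    resid_var = sum((v - (intercept + slope * y)) ** 2
                    for y, v in zip(years, values)) / (n - 2)
    return slope, z * math.sqrt(resid_var / sxx)

# Toy series with no underlying trend, just alternating noise.
years = list(range(2000, 2014))
values = [0.02 * (-1) ** i for i in range(len(years))]
slope, half_width = trend_with_ci(years, values)
significant = abs(slope) > half_width  # False: the CI straddles zero
```

This is why a short window like 2000–2013 can show a positive central estimate (0.09°C/decade) yet still count as "no significant warming": its ±0.14 interval includes zero.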

How do the models handle the asymmetry in warming that we observe between the two hemispheres? A 2013 paper by four authors from the universities of Berkeley and Washington, led by Andrew R. Friedman, examined the temperature difference between the northern hemisphere (NH) and the southern hemisphere (SH) – the interhemispheric temperature asymmetry (ITA) – and summarized what the newest models anticipate:

Fig. 5: The temperature difference between the NH and SH as determined with the CMIP5 models. Source: Figure 2 of the paper just mentioned.

The text of the paper states: with today’s emissions scenario (close to the IPCC scenario named RCP8.5) there is a highly linear rise of 0.17 K/decade (Point 3, “future projections”, of the aforementioned paper).

But let’s do a reality check and compare it to the actual surface observations since 1900:

Fig. 6: ITA according to GISS since 1900 (Data: GISS); model (thick green curve).

The trend over 1982-2013 is indeed 0.165+/-0.04°C/decade, but it has since become significantly smaller. The 1998-2013 period shows a trend of only 0.055+/-0.067°C/decade – not a significant rise – and barely 30% of what was registered since 1982. No one can claim here that the trend is “highly linear”.

The cause? One is offered up by the 2013 paper: the drop in the late 1960s was not replicated by the models and was likely caused by internal variability, very likely by the AMOC (see Fig. 8b of the paper), the authors maintain.

The steep rise after 1915 is likewise attributed to variability, as is at least part of the rise beginning in 1985 – as we have often maintained at this blog.

So it remains: models have so far been unable to adequately reproduce internal variability. The dependency of temperatures on greenhouse gas forcing is stronger in the models than it is in reality. The models overstate the anthropogenic impact and thus yield exaggerated prognoses for the future.


Rossi’s E-Cat Verified, But Mystifies Independent Reviewers…The Dawn Of An Energy Revolution?

By Ric Werme

Many have been watching the gradual development of Andrea Rossi’s “E-Cat,” a device Rossi claims produces heat by fusing nickel and hydrogen at ordinary temperatures, as opposed to those in the core of a star.

Photo: NASA

The next big event – the release of a paper reporting on a month-long test in March by a group independent of Rossi and his partner, Industrial Heat – happened today. The results are pretty much what I was expecting, and essentially completely positive.

In a nutshell, the device produced so much energy that only a nuclear reaction can explain it; reaction products were seen, but no nuclear radiation was detected.

The test ran with an E-Cat cell in three phases:

1) no fuel charge

This was to verify that the test setup’s measurement equipment could accurately measure both the electrical power into the cell and the heat released from the cell by convective heating and black-body radiation.

2) approximately 800 W input power for 10 days; this produced some 1600 W of excess power.

3) approximately 900 W for the rest of the test; this produced some 2300 W of excess power.

This confirms what supporters expected. While the COP (ratio of output power to input power) was lower than expected, the authors make clear that they deliberately ran the cell at low power to reduce the chance of thermal runaway. They point out that adding a little more than 100 W of input power increased output by about 700 W. That incremental amount is more in line with what was expected.
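The report's power figures, spelled out. COP here is taken as total output over input, per the definition in the text; the incremental figure is the extra excess power per extra watt of input.

```python
p_in_2, excess_2 = 800, 1600   # phase 2: watts in, excess watts out
p_in_3, excess_3 = 900, 2300   # phase 3

cop_2 = (p_in_2 + excess_2) / p_in_2   # 3.0
cop_3 = (p_in_3 + excess_3) / p_in_3   # about 3.6
incremental = (excess_3 - excess_2) / (p_in_3 - p_in_2)

print(cop_2, round(cop_3, 2), incremental)
```

The incremental ratio of 7 W of excess per extra watt of input is the "more in line with expectations" number mentioned above.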

That’s mostly all that’s important – put power in, get significantly more power out. From what I’ve read, Industrial Heat has not yet used E-Cats to make high-pressure steam and then electricity. That may merely mean they haven’t settled on the mechanical design of the reactor; there’s no point in making a boiler until then.

The most interesting part of the report is the isotopic analysis of the fuel before the test run and of the “ash” afterwards. The bottom line is that the reviewers have no idea what is happening during the test run. They are utterly mystified and reject most of their own speculation.

The fuel charge, only one gram, was assayed before the start of the test. The key components were determined to be nickel (Ni), lithium (Li), aluminum (Al), iron (Fe), and hydrogen (H).  (Two assay methods found carbon (C) and oxygen (O), but the paper seems to dismiss them citing the tiny granules of powder they used.)  The Ni and H were expected per Rossi’s descriptions in the past.

He also referred to a catalyst, saying it was inexpensive and not an impediment to wide-scale deployment. The assay suggests the catalyst is LiAlH4, which releases monoatomic hydrogen when heated, fitting the speculation about the catalyst’s role.

Each element was found to have the naturally occurring ratios of its isotopes.

There had been speculation that Rossi used nickel enriched with particular isotopes, but apparently not.

The ash after the test run was also assayed. The small samples involved seem to preclude measuring the actual weight of various isotopes, so the paper concentrates on the percentages.  It would have been nice to have accurate weights.

Natural nickel is primarily 58Ni and 60Ni. Those were nearly completely consumed, and the nickel in the ash was nearly all 62Ni. I had expected Ni + H leading to Cu, but several of the relevant Cu isotopes are radioactive, while 62Ni is stable.

Lithium may not be a catalyst at all – natural Li is nearly all 7Li, yet a surface assay of the ash showed the lithium was nearly all 6Li. I’m no nuclear physicist, so I’ll refrain from any speculation. The authors explore a couple of paths, but ultimately throw up their hands and simply say more study is needed. Hydrogen wasn’t assayed – did it even participate?

All in all, this is a great, maybe historic, result. There has been plenty of evidence that the E-Cat works, but Rossi has always been directly involved.

Now we have an independent team working in their own space and with tools from their universities. They see it work and present multiple lines of evidence confirming it is a nuclear process.

That there is no explanation for the process is annoying, but won’t block commercialization of the E-Cat. The shouting isn’t over, the science has barely begun, but we may be at the start of civilization’s next major energy source.

Interesting times.

The paper is at

The best starting point is report-released/


German Federal Analysis Sees “Massive Threats To Security And Reliability Of Electric Power Supply System”!

So much for Germany’s transformation to “green” energies.

Germany’s Bun­desnet­za­gen­tur (Federal Network Agency for Electricity, Gas, Telecommunications, Post and Railway) is the federal authority overseeing and regulating the German electrical power grid, among other networks.

At its site it has a link to an expert assessment report that analyzes the needs of and risks to the German power grid for the coming 2014/15 winter.

The name of the report: “Examinations for the winter of 2014/15 with respect to risks for system security and the necessity for reserve power plants”.

The 102-page highly technical assessment examines a variety of scenarios to see how well Germany’s electrical power grid will hold up this winter. Looking at the report’s conclusions, one can only infer that the power grid is more unstable and prone to collapse than at any time in Germany’s post-war period. It’s a debacle knocking at the door.

In the summary on page 97, for example, the report states (link added):

Scenarios were parameterized on the basis of historical data and realistically represent expected critical situations, but do not necessarily show the worst-case scenario.
The scenarios considered show massive threats to the security and reliability of the electric power supply system which are not manageable without substantial intervention by the ÜNB [transmission system operators] and the use of a secured redispatch potential.
There are no safety reserves for managing additional critical or unexpected situations.”

On page 98 the report re-emphasizes:

In critical situations a substantial threat to system security is to be feared.”

The report’s summary adds: “Secure management of the expected critical situations requires comprehensive measures.”

This all means that on a cold winter day, Germany’s power grid could very well collapse and citizens could be left in the cold and dark for hours or even days. Parts of the report have been blacked out, which is hardly reassuring to the reader.

So why has Germany’s power grid, once one of the world’s most stable, become so vulnerable? An editorial piece at the Financial Times sums it up nicely: “Merkel’s decision to phase out nuclear power has been a huge mistake.”

The FT piece notes that Germany has added a huge amount of intermittent wind and solar energy. Not only does this energy act to destabilize the power grid, it is also costing German citizens and the economy a bundle. What a bargain: poor quality at high cost! The FT writes that the Energiewende is “designed to make the economy predominantly dependent on renewable sources such as wind and solar power“, and adds that these are “burdens on households and businesses“, something that “Germany can ill afford”.

What’s worse for clean-energy-minded Germans is that the elimination of nuclear energy has led to an increase in coal burning. In the end, Germany’s power system is now dirtier and more unstable than ever, and costs consumers far more. Does that sound like a great deal? It sounds to me like monumental mismanagement.

Those of us living in Germany may want to consider installing a wood-burning stove in the weeks ahead as winter quickly approaches.

Hat-tip: 2 readers

Undeniable Mood Change With Regards To Quality Of Modelling Grips Climate Science…Trust Gone!

Mood change in climate modeling: Trust in the scientific community is disappearing
By Sebastian Lüning and Fritz Vahrenholt
(German text translated/edited by P Gosselin)

In the last few days we wrote two posts on the shocking deficits seen in the current climate models (see here and here). In this final part we look at how scientists assess the modeling situation and whether there are new ideas for solving the problems.

In August 2014 a lead author of the 5th IPCC climate report, Richard Betts, publicly commented in a surprising manner. Betts directs the climate impact department of the UK Met Office, and at his website he describes himself as a climate modeling expert. In a comment at Bishop Hill, Betts wrote:

Bish, as always I am slightly bemused over why you think GCMs are so central to climate policy. Everyone* agrees that the greenhouse effect is real, and that CO2 is a greenhouse gas. Everyone* agrees that CO2 rise is anthropogenic. Everyone** agrees that we can’t predict the long-term response of the climate to ongoing CO2 rise with great accuracy. It could be large, it could be small. We don’t know. The old-style energy balance models got us this far. We can’t be certain of large changes in future, but can’t rule them out either.”

In a footnote Betts added two comments:

*OK so not quite everyone, but everyone who has thought about it to any reasonable extent
**Apart from a few who think that observations of a decade or three of small forcing can be extrapolated to indicate the response to long-term larger forcing with confidence.”

Betts no longer gives climate models a central role in climate policy. There are still too many unknowns, he admits. Quite obviously even IPCC authors are now getting cold feet and can no longer rule out that CO2 may have only a minor impact on climate.

A month earlier, in July 2014 in the Wall Street Journal, modeler Robert Caprara conceded that models contain a variety of freely selectable parameters which allow the desired result to be “modeled in”. Caprara writes:

My first job was as a consultant to the Environmental Protection Agency. I was hired to build a model to assess the impact of its Construction Grants Program, a nationwide effort in the 1970s and 1980s to upgrade sewer-treatment plants. […] When I presented the results to the EPA official in charge, he said that I should go back and “sharpen my pencil.” I did. I reviewed assumptions, tweaked coefficients and recalibrated data. But when I reran everything the numbers didn’t change much. At our next meeting he told me to run the numbers again. After three iterations I finally blurted out, “What number are you looking for?” He didn’t miss a beat: He told me that he needed to show $2 billion of benefits to get the program renewed. I finally turned enough knobs to get the answer he wanted, and everyone was happy.”

In the climate debate Caprara recommends having an open discussion and listening to the arguments of the other side instead of cursing the other side in an attempt to disqualify them:

So here is my advice: Those who are convinced that humans are drastically changing the climate for the worse and those who aren’t should accept and welcome a vibrant, robust back-and-forth. Let each side make its best case and trust that the truth will emerge. Those who do believe that humans are driving climate change retort that the science is “settled” and those who don’t agree are “deniers” and “flat-earthers.” Even the president mocks anyone who disagrees. But I have been doing this for a long time, and the one thing I have learned is how hard it is to convince people with a computer model.”

Already in an October 2012 paper in Nature Climate Change, a team of scientists led by Clara Deser admitted that strong natural climate variability had been underestimated and is poorly accounted for by the climate models, so the models cannot fulfill the high expectations of political decision makers. The paper’s abstract states:

Communication of the role of natural variability in future North American climate
As climate models improve, decision-makers’ expectations for accurate climate predictions are growing. Natural climate variability, however, poses inherent limits to climate predictability and the related goal of adaptation guidance in many places, as illustrated here for North America. Other locations with low natural variability show a more predictable future in which anthropogenic forcing can be more readily identified, even on small scales. We call for a more focused dialogue between scientists, policymakers and the public to improve communication and avoid raising expectations for accurate regional predictions everywhere.”

Well-known climate scientist Judith Curry also has little trust in climate modeling. In October 2013 she complained on her blog about the lack of appreciation for climate-history studies relative to climate models. Huge sums have been invested in the models, without a correct result. The manufactured consensus of the IPCC has set the climate sciences back at least a decade, says Curry:

My point is that ambitious young climate scientists are inadvertently being steered in the direction of analyzing climate model simulations, and particularly projections of future climate change impacts — lots of funding in this area, in addition to high likelihood of publication in a high impact journal, and a guarantee of media attention. And the true meaning of this research in terms of our actual understanding of nature rests on the adequacy and fitness for purpose of these climate models. And why do these scientists think climate models are fit for these purposes? Why, the IPCC has told them so, with very high confidence. The manufactured consensus of the IPCC has arguably set our true understanding of the climate system back at least a decade, in my judgment. The real hard work of fundamental climate dynamics and development and improvement of paleo proxies is being relatively shunned by climate scientists since the rewards (and certainly the funding) are much lower. The amount of time and funding that has been wasted by using climate models for purposes for which that are unfit, may eventually be judged to be colossal.

A more precise knowledge of paleoclimatology is essential and should have absolute priority over free-style modeling, because historical data provide important calibration and validation data for climate models. If the formulae are not correct, even the largest supercomputers are unable to deliver anything useful.

Astrophysicist Richard Lindzen of the Massachusetts Institute of Technology (MIT) likewise has no trust in climate models, as he explained at an event at Sandia National Laboratories, a research and development facility of the US Department of Energy.

The IPCC should finally open itself up to alternative models. In our book “The Neglected Sun” we presented a semi-quantitative approach in which solar and ocean cycles play an important role. The awful accuracy rate of the IPCC models shows that it is time for a change. A serious examination of the ideas of IPCC critics has to be conducted. Here models by Nicola Scafetta and Frank Lemke, which reproduce the temperature curve better than the IPCC forecasts, must be given serious attention. When it comes to ocean cycles, scientists have already given in and have even started to insert them into the models, dramatically improving the reliability of climate prognoses. One such approach is DelSole et al. (2013), a paper in Geophysical Research Letters.


World’s Largest Re-Insurer “Munich Re” $ponsors 2014 “Extreme Weather Congress” In Hamburg!

Today parts of the German mainstream media have begun reporting on the 9th Extreme Weather Congress in Hamburg, which began today and is slated to end on Friday. The direct link to the program PDF is here. Of course the focus of the Congress will be on the claimed “increasing frequency of extreme weather events”.

In all, 3000 experts are attending, along with 2500 Hamburg school pupils.

What especially raises eyebrows is the fact that the event is sponsored by Munich Re, the biggest reinsurer in the world. The Munich-based reinsurer of course stands to profit handsomely from the spreading of extreme-weather fear; it makes it a lot easier to jack up premiums (see “Spiegel Online doubts the catastrophe scenarios of the Munich Re“).

Also very murky is the identity of the organizer of the Congress: the “Institute for Weather and Climate Communication” (IWK). At their Die kalte Sonne website here, German skeptics Sebastian Lüning and Fritz Vahrenholt looked into who is behind the mysterious IWK. They write:

The result is sobering: apparently the institute really does not have its own Internet platform. So we took a look at the legal page. There indeed the “Institute for Weather and Climate Communication GmbH” is given. Listed as the managing directors are Frank Böttcher and Alexander Hübener. Frank Böttcher? Indeed we’ve written about him as well: “Extreme weather ‘expert’ Frank Böttcher does not know the latest literature: latest research results on global cyclone activity are damaging his climate-alarmist business“. Without climate fear and extreme weather alarm, the number of visitors to the commercialized Extreme Weather Congress would certainly be limited. Thus it’s little wonder that Böttcher, for promotional reasons, is fervently preaching climatic doomsday to attract visitors into his auditorium.”

Lüning and Vahrenholt also look at a list of persons slated to appear: “Paul Becker of the German Weather Service, a climate alarmist to the bone. Peter Höppe of the Munich Re reinsurer will attend the introductory press conference, thus getting his sponsoring money’s worth.”

No skeptic was invited to present at the Extreme Weather Congress. For the second year in a row, Lüning and Vahrenholt were denied the opportunity to present there.

Vahrenholt and Lüning also ask whether Mojib Latif will be able to muster the courage to disclose some inconvenient things that up to now he has only quietly admitted in the scientific literature. See “Mojib Latif: the proof of an anthropogenic climate contribution is difficult because the natural ocean cycles dominated” and “Mojib Latif in a presentation in the USA: CO2 sensitivity was set too high by the IPCC“.

There is a bit of hope: “honest broker” and alarmism critic Hans von Storch has also been invited, so maybe he will infuse a little sobriety into the Hamburg climate panic-fest; see “Climate scientist Hans von Storch: Climate models possibly do not take solar activity sufficiently into account” and “Judith Curry prognosticates warming pause until the 2030s: Hans von Storch in such a case demands a vote of no-confidence against CO2“.

Overall, however, in view of the sponsorship by the world’s biggest player in the reinsurance industry, the murkiness surrounding the event’s organizers, and the exclusion of scientists with other views and data, the event has everything to do with serving corporate special interests rather than those of science.


Vienna Is Actually Now Cooling…And Not Warming As Media And Some Scientists Are Claiming

Vienna Climate Waltz More Data
By Ed Caryl

This is my comment on the Vienna Climate Waltz article Pierre just posted. I found the Vienna temperature data from GISS. It is listed under Wien/Hohe War, whatever that is.

My map application drops the pin nearly in the center of downtown from the GISS latitude and longitude numbers (which are notoriously inaccurate). The data file is complete from 1880 to the present, which is unusual. There are only two files in the GISS database, GHCN data before and after GISS homogenization. Here is a plot of those two files.


Figure 1 is a plot of Vienna annual temperature data before and after GISS homogenization, with the difference (the green trace). The difference scale is on the right, all scales in °C.

The homogenization has warmed the past and left the last 10 years unadjusted. This is unusual, as GISS usually cools the past and warms the present with their adjustments. The adjustment is probably for urban heat island effects, though they should be cooling the present and leaving the past alone.


Figure 2 is a plot of the last 16 years with trend lines.

There is a tenth of a degree adjustment for homogenization in the years 1998, 1999, and 2000. These adjustments change the cooling trend by more than 50%.

Curious George requested a 10-year average plot. Here is a centered 9-year average plot on the annual data. (It preserves the time accuracy better.)


Figure 3 is the annual homogenized temperatures with a 9-year centered average.
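The centered 9-year average used for Figure 3 can be sketched as follows. The temperature values are invented for illustration; each smoothed point is the mean of the year itself and the four years on either side, so four years are lost at each end but the smoothed curve stays aligned in time, which is the "time accuracy" point made above.

```python
def centered_average(values, window=9):
    """Centered moving average; window must be odd."""
    half = window // 2
    return [sum(values[i - half:i + half + 1]) / window
            for i in range(half, len(values) - half)]

# Toy annual temperatures (°C), not the Vienna record.
temps = [9.8, 10.1, 9.9, 10.3, 10.0, 10.2, 9.7, 10.4, 10.1, 10.0, 9.9]
smooth = centered_average(temps)
# smooth[0] is centered on temps[4]; len(smooth) == len(temps) - 8
```

A trailing (non-centered) average of the same width would shift features four years late, which is why the centered form preserves timing better.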


Figure 4 is a plot of seasonal temperatures (homogenized data).

While all seasons have gotten warmer in the last century, winter has the largest variation, but not much of a trend. I didn’t add the trends to Figure 4, but the numbers are: spring 1.29°C/century, summer 1.41°C/century, fall 0.73°C/century, and winter 1.14°C/century.


Figure 5 is a plot of the seasonal trends over the last 16 years.

The cooling in the last 16 years is all from the winters getting colder, at -0.382°C per decade.

Viennese Climate Waltz…Austrian Media/State Officials Still Using Faulty Models, Misleading The Public

UPDATE: Ed Caryl provided the following:


Based on GISS data


Even though IPCC climate models and expected climate trends have proven themselves to be completely false and useless, see here and here, parts of the Austrian media and state sector have no qualms using them, and in doing so they are misleading the public.

A recent example comes from the climate-alarmist, Vienna-based online daily Der Standard, in a piece titled: Climate in Vienna: More heat days, new plants.

The whole premise of the story rests on the climate models being right, when in fact we now know they have been universally wrong.

Palm trees in Vienna in a few decades!

In the article written by Christa Minkin and Julia Schilly, it is claimed that palm trees are to be expected in “a few decades in the Viennese forest – thanks to climate change,” citing ecologist Franz Essl of the Austrian Federal Ministry of Environment.

Minkin and Schilly also warn that Vienna is going to be hot in the future, all exacerbated by the urban heat island effect, citing “a new Austrian expert report on climate change“.

In 1910 there were only two heat days – i.e. temperatures over 30°C. In 2000 already 17 were measured.”

Der Standard also gazes very deeply into the climate crystal ball…all the way to the years 2070 to 2100. Minkin and Schilly write:

For the period of 2070 to 2100 researchers anticipate a rise to more than 35 heat days per year on average. At the same time nights in Vienna will cool down less.”

Moreover, foreign plants will begin their invasion and displace domestic ones, the experts warn.

Slight cooling over the last 16 years

So with all the warnings of more unbearable heat days in the future, one might assume that temperatures in Austria must currently be on the rise. I searched the Internet for the temperature data series for Vienna, but unfortunately was unsuccessful in finding it. So I contacted the European Institute for Climate and Energy (EIKE) to see if they might be able to help. They answered promptly by e-mail (slightly paraphrased):

Unfortunately we do not have the more recent data because the Austrian Weather Service does not make them public, only up to 2003. That’s why it’s not possible to show the last 15 years graphically, and so climatologists in Austria can claim whatever they want.”

Fortunately, EIKE was able to provide recent data for the Graz city center. Here we see that despite the urban location there has been a slight cooling.

Graz temperature trend

Mean annual temperature for Graz city center over the last 16 years.

The trend in Graz matches the overall trend of a slight cooling over central Europe over the last two decades.

So with the IPCC models having performed so horrendously, and in view of the fact there has been no warming trend in Austria for 16 years, it is truly a mystery how anyone could claim that summer heat days will just keep on rising linearly until the end of the century.

When the models are failures, then the future projections based on them are worthless.


Review: Yet More Expert Peer-Reviewed Papers Tell Us Why Climate Models Should Land In The Dustbin

If you weren’t convinced by Lüning’s and Vahrenholt’s essay on the failures of climate models posted here two days ago, then here’s more!

More fun with climate models: Nowhere do they fit reality

By Sebastian Lüning and Fritz Vahrenholt
(Translated/edited by P Gosselin)

In March 2013 journalist Joachim Müller-Jung put it aptly in an article in the FAZ:

Whoever simulates the world has not understood the truth
How much reality is in the models? Fact is: the complexities increase, but so do the uncertainties. How can science remain credible here?”


Today once again we wish to take a cruise through the world of modeling. The field currently finds itself deep in crisis. Earlier it energetically produced a slew of models, but the numerous misses are now taking a heavy toll. A first wave of self-criticism is sweeping across the field: by far not everything is as rosy as the state funders once claimed.

On February 21, 2013 the University of Gothenburg issued a press release with the title: “Climate models are not good enough“. Within the framework of a doctoral project, it was found that climate models have been unable to reproduce the observed changes in extreme rainfall in China over the last 50 years:

Climate models are not good enough
Only a few climate models were able to reproduce the observed changes in extreme precipitation in China over the last 50 years. This is the finding of a doctoral thesis from the University of Gothenburg, Sweden. Climate models are the only means to predict future changes in climate and weather. “It is therefore extremely important that we investigate global climate models’ own performances in simulating extremes with respect to observations, in order to improve our opportunities to predict future weather changes,” says Tinghai Ou from the University of Gothenburg’s Department of Earth Sciences. Tinghai has analysed the model simulated extreme precipitation in China over the last 50 years. “The results show that climate models give a poor reflection of the actual changes in extreme precipitation events that took place in China between 1961 and 2000,” he says. “Only half of the 21 analysed climate models were able to reproduce the changes in some regions of China. Few models can well reproduce the nationwide change.”
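The kind of check described in the press release, asking how many models reproduce the observed changes in a set of regions, can be illustrated with a toy skill score: the fraction of regions where a model gets even the sign of the observed trend right. The numbers below are invented purely for illustration:

```python
import numpy as np

# Hypothetical observed and model-simulated regional precipitation
# trends (mm/decade) for 8 regions -- illustrative numbers only.
obs   = np.array([ 1.2, -0.5,  0.8,  2.0, -1.1,  0.3, -0.7,  1.5])
model = np.array([ 0.9,  0.4, -0.2,  1.1, -0.8, -0.6, -0.9,  0.2])

# A crude skill score: the fraction of regions where the model
# reproduces the sign of the observed trend.
hits = np.sign(obs) == np.sign(model)
skill = hits.mean()
```

With these invented trends the model matches the observed sign in 5 of 8 regions, a skill of 0.625; real studies use more refined metrics, but the idea of counting regions where the model agrees with observations is the same.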

Problems with rainfall modeling are found everywhere. In the USA, too, models have failed to reproduce the historical precipitation development, as Mishra et al. (2012) and Knappenberger and Michaels (2013) were able to show. Stratton & Stirling (2012) and Ramirez-Villegas et al. (2013) found the same at the global level. Here’s an excerpt from the latter:

Climatological means of seasonal mean temperatures depict mean errors between 1 and 18 ° C (2–130% with respect to mean), whereas seasonal precipitation and wet-day frequency depict larger errors, often offsetting observed means and variability beyond 100%. Simulated interannual climate variability in GCMs warrants particular attention, given that no single GCM matches observations in more than 30% of the areas for monthly precipitation and wet-day frequency, 50% for diurnal range and 70% for mean temperatures. We report improvements in mean climate skill of 5–15% for climatological mean temperatures, 3–5% for diurnal range and 1–2% in precipitation. At these improvement rates, we estimate that at least 5–30 years of CMIP work is required to improve regional temperature simulations and at least 30–50 years for precipitation simulations, for these to be directly input into impact models. We conclude with some recommendations for the use of CMIP5 in agricultural impact studies.”

Soncini & Bocchiola (2011) examined snowfall in the Italian Alps. Here too they found the same thing: the real, measured development cannot be reproduced by the models. What’s even worse: the future projections of the various models deviate widely from one another. Here’s an excerpt from the noteworthy paper:

General Circulation Models GCMs are widely adopted tools to achieve future climate projections. However, one needs to assess their accuracy, which is only possible by comparison of GCMs’ control runs against past observed data. Here, we investigate the accuracy of two GCMs models delivering snowfall that are included within the IPCC panel’s inventory (HadCM3, CCSM3), by comparison against a comprehensive ground data base (ca. 400 daily snow gauging stations) located in the Italian Alps, during 1990–2009. The GCMs simulations are objectively compared to snowfall volume by regionally evaluated statistical indicators. The CCSM3 model provides slightly better results than the HadCM3, possibly in view of its finer computational grid, but yet the performance of both models is rather poor. We evaluate the bias between models and observations, and we use it as a bulk correction for the GCMs’ snowfall simulations for the purpose of future snowfall projection. We carry out stationarity analysis via linear regression and Mann Kendall tests upon the observed and simulated snowfall volumes for the control run period, providing contrasting results. We then use the bias adjusted GCMs output for future snowfall projections from the IPCC-A2 scenario. The two analyzed models provide contrasting results about projected snowfall during the 21st century (until 2099). Our approach provides a first order assessment of the expected accuracy of GCM models in depicting past and future snowfall upon the (Italian) Alps. Overall, given the poor depiction of snowfall by the GCMs here tested, we suggest that care should be taken when using their outputs for predictive purposes.”

Out of thin air?

In June 2013, Axel Lauer and Kevin Hamilton examined cloud models in the Journal of Climate. Here, too, nothing is different: every model does its own thing, and the real development simply refuses to play along. Here is the abstract of the paper:

Clouds are a key component of the climate system affecting radiative balances as well as the hydrological cycle. Previous studies from the Coupled Model Intercomparison Project Phase 3 (CMIP3) showed quite large biases in the simulated cloud climatology affecting all GCMs [global climate models] as well as a remarkable degree of variation among the models, which represented the state-of-the-art circa 2005. Here we measure the progress that has been made in recent years by comparing mean cloud properties, interannual variability, and the climatological seasonal cycle from the CMIP5 models with satellite observations and with results from comparable CMIP3 experiments. We focus on three climate-relevant cloud parameters: cloud amount, liquid water path, and cloud radiative forcing. We show that intermodel differences are still large in the CMIP5 simulations. We find some small improvements of particular cloud properties in some regions in the CMIP5 ensemble over CMIP3. In CMIP5 there is an improved agreement of the modeled interannual variability of liquid water path as well as of the modeled longwave cloud forcing over mid and high latitude oceans with observations. However, the differences in the simulated cloud climatology from CMIP3 and CMIP5 are generally small and there is very little to no improvement apparent in the tropical and subtropical regions in CMIP5. Comparisons of the results from the coupled CMIP5 models with their atmosphere-only versions run with observed SSTs show remarkably similar biases in the simulated cloud climatologies. This suggests the treatments of subgrid-scale cloud and boundary layer processes are directly implicated in the poor performance of current GCMs [global climate models or general circulation models] in simulating realistic cloud fields.”

No matter which model parameter one looks at, the result is off. Another example is soil moisture, which according to an analysis by Tim Ball is not correctly represented in the IPCC models. There are also problems with thunderstorms, as Anthony Watts showed at WUWT. Or take a look at atmospheric pressure: according to Przybylak et al. (2012), it has not changed much since the beginning of the 19th century. The models, on the other hand, foresaw a significant trend, which scientists sold as an “anthropogenic fingerprint”. That is now turning out to be completely wrong.

When looking further back into the geological past, climate models also fail to make a good impression. During the last interglacial, the warm period 120,000 years ago, it was warmer than today. Yet, for whatever reason, climate models are unable to reproduce this, as a paper appearing on 29 August 2014 in the journal Climate of the Past criticized:

We find that for annual temperatures, the overestimation is small, strongly model-dependent (global mean 0.4 ± 0.3 °C) and cannot explain the recently published 0.67 °C difference between simulated and reconstructed annual mean temperatures during the LIG thermal maximum. However, if one takes into consideration that temperature proxies are possibly biased towards summer, the overestimation of the LIG thermal maximum based on warmest month temperatures is non-negligible with a global mean of 1.1 ± 0.4 °C.”

Another paper, appearing just days earlier, by Dolan et al., looked at the Pliocene 3 million years ago. The task for the nine modeling groups was to calculate the ice cover over Greenland under the conditions of the Pliocene warm period. Back then it was considerably warmer than today and sea level was higher, i.e. roughly the conditions that today’s climate models predict for the end of the 21st century. The study ended with a big surprise: the models returned everything from “ice-free” to “ice cover much like today”. The reason for the divergence: every model simulated the local albedo properties of Greenland differently, yet these are decisive for the extent of the island’s ice cover. In summary, this kind of future modeling resembles playing the lottery.

Finally we step yet further back, to the mid-Miocene 14 million years ago. A paper by Goldner et al. from March 2014 found that the climate models were off by a full 4°C. Among other things, the authors suspect that certain climate factors are missing from the models. An interesting thought…

What follows is an excerpt from the abstract of that paper:

The mid-Miocene climatic optimum (MMCO) is an intriguing climatic period due to its above-modern temperatures in mid-to-high latitudes in the presence of close-to-modern CO2 concentrations. We use the recently released Community Earth System Model (CESM1.0) with a slab ocean to simulate this warm period, incorporating recent Miocene CO2 reconstructions of 400 ppm (parts per million). We simulate a global mean annual temperature (MAT) of 18 °C, ~4 °C above the preindustrial value, but 4 °C colder than the global Miocene MAT we calculate from climate proxies. […] Our results illustrate that MMCO warmth is not reproducible using the CESM1.0 forced with CO2 concentrations reconstructed for the Miocene or including various proposed Earth system feedbacks; the remaining discrepancy in the MAT is comparable to that introduced by a CO2 doubling. The model’s tendency to underestimate proxy derived global MAT and overestimate the Equator to pole temperature gradient suggests a major climate problem in the MMCO akin to those in the Eocene. Our results imply that this latest model, as with previous generations of climate models, is either not sensitive enough or additional forcings remain missing that explain half of the anomalous warmth and pronounced polar amplification of the MMCO.”

Extreme Stupidity: Bremen’s ‘Weser Kurier’ Daily Now Claiming Climate Change Is Damaging Church Organs

This story is a perfect illustration of how today’s journalists will print anything they are told by swindling climate scientists.

Bremen’s Weser Kurier daily is very sure: “Climate change is damaging organs”

By Sebastian Lüning and Fritz Vahrenholt
(Translated, edited by P Gosselin)

The newspaper of the German port city of Bremen, the Weser Kurier surprised its readers on September 8 with almost unbelievable news:

Climate change is damaging organs

It sounds like a paradox, but the consequences of climate change were plain to see on the St. Andreas Church organ a few weeks ago. ‘The instrument was beset by mold and mildew,’ says organ builder Martin Hillebrand from Isernhagen. For about six weeks he and his team have been working in the church, meticulously cleaning the organ and fine-tuning the pipes.”

We already suspected climate change of many things, but its role as a vandal of church organs is something new. So how exactly does it cause mold and mildew to infest an organ? The Weser Kurier tells us:

The mold infestation in the instrument is not a single occurrence – quite to the contrary. According to Hillebrand about 70 percent of the organs in so-called village churches are under attack. ‘This involves mainly churches where the people congregate for mass only every other week or less often. These churches are thus heated less and so in the wintertime they quickly become very prone to moisture,’ said the expert. Fungus attacks are also helped by the warm, moist climate in the summertime. ‘Research institutes have shown that this and the relative humidity will increase in the future,’ says Hillebrand.”

So it’s not really due to climate change, but rather that people are going to church less and less? Those who do not heat must expect mold – that has been a well-known rule for a very long time. Warm, humid summers in Germany are nothing new either. In summer it’s warm, and in winter it’s cold. What’s new? Only the relative humidity remains. Has it really risen over the last decades because of climate change? Here we take a look at the data for Braunschweig at the norddeutscher Klimamonitor website (Figure 1).

Oh dear: the relative humidity has actually trended downward over the last 50 years. Climate change is not guilty! The problem instead appears to be the lack of heating in churches, which promotes mold infestation. The climate-activist Weser Kurier has once again regrettably told its readers nonsense. The editors would surely welcome some letters from readers. Here is their contact page.

Figure 1: Trend of relative humidity in Braunschweig. Source: norddeutscher Klimamonitor.


2 German Scientists Calling For Climate Modelling Moratorium: So Far Only “Failures, Flops And Fumbles”!

Two German scientists describe what many western governments have been basing their energy and environmental policies on. It’s not pretty. What follows is an excellent review of climate modeling so far.

Fun with Climate Models: Flops, Failures and Fumbles
By Dr. Sebastian Lüning and Prof. Fritz Vahrenholt
(Translated, edited by P Gosselin)

What’s great about science is that one can think up really neat models and see creativity come alive. And because there are many scientists, and not only just one, there are lots of alternative models. And things only get bad when the day of reckoning arrives, i.e. when the work gets graded. This is when the prognoses are compared to the real, observed measurements. So who was on the right path, and who needs go back to the drawing board?

When models turn out to be completely off, they are said to have been falsified and are thus considered to have no value. The validation of models against observation is one of the fundamental principles of science, as Richard Feynman once explained in a legendary lecture.

Failed hypotheses are seen very often in science. A nice collection of the largest scientific flops is presented at WUWT. Unfortunately the climate sciences also belong in this category. Roy Spencer once compared an entire assortment of 73 climate models to the real observed temperature development, and every one of them ended up overshooting the target by far.

And yet another model failure has already appeared: in August 2009, Judith Lean and David Rind made a daring mid-term climate prognosis in Geophysical Research Letters, predicting a warming of 0.15°C for the five-year period 2009 to 2014. In reality it did not warm at all during that period. A bitter setback.

Over the last few years it has started to dawn on scientists that perhaps something is missing from their models. The false prognoses stick out like a sore thumb. Not a single one of the once highly praised models saw the current 16-year stop in warming as possible. In September 2011, in an article in the Journal of Geophysical Research, Crook & Forster admitted that the superficial reproduction of the real temperature development by a climate model hardly means the mechanisms are completely understood. The freely adjustable parameters are just too numerous, and as a rule they are selected in a way that fabricates agreement. And just because there is agreement, it does not follow that predictive power can be derived. What follows is an excerpt from the abstract of Crook & Forster (2011):

In this paper, we breakdown the temperature response of coupled ocean‐atmosphere climate models into components due to radiative forcing, climate feedback, and heat storage and transport to understand how well climate models reproduce the observed 20th century temperature record. Despite large differences between models’ feedback strength, they generally reproduce the temperature response well but for different reasons in each model.”

In a member journal of the American Geophysical Union (AGU), Eos, Colin Schultz took a look at the article and did not mince any words:

Climate model’s historical accuracy no guarantee of future success
To validate and rank the abilities of complex general circulation models (GCMs), emphasis has been placed on ensuring that they accurately reproduce the global climate of the past century. But because multiple paths can be taken to produce a given result, a model may get the right result but for the wrong reasons.”

Sobriety has in the meantime also spread to IPCC-friendly blogs. On April 15, 2013, in a guest post at Real Climate, Geert Jan van Oldenborgh, Francisco Doblas-Reyes, Sybren Drijfhout and Ed Hawkins made it clear that the models used in the 5th IPCC report are completely inadequate for regional climate prognoses:

To conclude, climate models can and have been verified against observations in a property that is most important for many users: the regional trends. This verification shows that many large-scale features of climate change are being simulated correctly, but smaller-scale observed trends are in the tails of the ensemble more often than predicted by chance fluctuations. The CMIP5 multi-model ensemble can therefore not be used as a probability forecast for future climate. We have to present the useful climate information in climate model ensembles in other ways until these problems have been resolved.”

Christensen and Boberg (2012), too, were critical of the AR5 models in a paper appearing in Geophysical Research Letters. The scientists presented their main results:

– GCMs suffer from temperature-dependent biases
– This leads to an overestimation of projections of regional temperatures
– We estimate that 10-20% of projected warming is due to model deficiencies”

In January 2013 in the Journal of Climate Matthew Newman reported in an article “An Empirical Benchmark for Decadal Forecasts of Global Surface Temperature Anomalies” on the notable limitations of the models:

These results suggest that current coupled model decadal forecasts may not yet have much skill beyond that captured by multivariate red noise.”

In the prognosis time-frame of multiple decades, they perform no better than noise. An embarrassment.
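What Newman means by a “red noise” benchmark can be sketched simply: fit a first-order autoregressive (AR(1)) process to the temperature series and use its one-step persistence forecast as the bar a decadal forecast system must beat. The parameters below are invented for illustration, not taken from any real dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) "red noise" anomaly series:
#   x[t] = phi * x[t-1] + eps[t]
phi_true = 0.7   # hypothetical persistence parameter
n = 500
eps = rng.normal(0.0, 0.1, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + eps[t]

# Estimate phi from the lag-1 autocorrelation of the series.
phi_hat = np.corrcoef(x[:-1], x[1:])[0, 1]

# Red-noise benchmark forecast: predict phi_hat * (current value).
pred = phi_hat * x[:-1]
mse_benchmark = np.mean((x[1:] - pred) ** 2)
```

A forecast model only demonstrates skill if its prediction error beats `mse_benchmark`; Newman's finding was that current decadal forecasts may not clear even this modest bar.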

Frankignoul et al. (2013), too, expressed serious concerns in the Journal of Climate over the unimpressive performance of the climate models. They graded the models plainly as “unrealistic” because they do not implement the role of ocean cycles correctly.

In July 2013, in a paper in Geophysical Research Letters, Ault et al. looked at the models for the tropical Pacific region. They made an awful discovery: not one of the current models is able to reproduce the climate history of the region over the past 850 years. Excerpts from the abstract:

[…] time series of the model and the reconstruction do not agree with each other. […] These findings imply that the response of the tropical Pacific to future forcings may be even more uncertain than portrayed by state-of-the-art models because there are potentially important sources of century-scale variability that these models do not simulate.”

Lienert et al. (2011) likewise found problems in the North Pacific. And in July 2014, in an article in Environmetrics, McKitrick & Vogelsang documented a significant overestimation of warming in the climate models for the tropical region over the past 60 years.

In March 2014 Steinhaeuser & Tsonis reported in Climate Dynamics on a comparison of 23 different climate models and the extent to which they were able to reproduce temperature, air pressure and precipitation over the 19th and 20th centuries. The surprise was great when the scientists found that the model results deviated widely from each other and were unable to give a correct account of reality. A more detailed discussion is available at The Hockey Schtick.

In a press release from September 17, 2012, scientists at the University of Arizona complained that as a rule climate models fail for periods of three decades and less. Attempts at prognoses at the regional level were also unsuccessful:

UA Climate Scientists put predictions to the test
A new study has found that climate-prediction models are good at predicting long-term climate patterns on a global scale but lose their edge when applied to time frames shorter than three decades and on sub-continental scales.”

In October 2012, Klaus-Eckart Puls at EIKE warned that so far the temperature prognoses of the climate models have been wrong for every atmospheric layer:

For some decades now climate models have been projecting trends (“scenarios”) for temperature for different layers of the atmosphere: near surface layer, troposphere, and stratosphere. From the near surface layer all the way to the upper troposphere it was supposed to get warmer according to the AGW hypothesis, and colder in the stratosphere. However meteorological measurements taken from all atmospheric layers show the exact opposite!”

So what is wrong with the models?

For one, they still have not found a way to implement the empirically confirmed systematic impact of the ocean cycles into the models. Another problem, of course, is that the sun is missing from the models, as its important impact on climate development continues to be denied. It is still going to take some time before the sun finally gets a role in the models. But there are growing calls for taking the sun into account, and growing recognition that something is awry. In August 2014, a paper by Timothy Cronin appeared in the Journal of the Atmospheric Sciences criticizing the treatment of solar irradiance in the models. See more on this at The Hockey Schtick.

The poor prognosis capability of climate models is giving more and more political leaders cause for concern. Maybe they should not have relied on the model results and developed far-reaching plans to change society; to some extent they have already begun to implement these plans. Suddenly the very credibility of the climate protection measures is at stake.

The best would be a moratorium on models. Something needs to be done; it is becoming increasingly clear that the present wild modeling simply cannot continue. It’s time to re-evaluate. The climate models so far are hardly distinguishable from computer games about climate change, where one sits comfortably on the couch, shoots as many CO2 molecules out of the atmosphere as one can, and then reaps the reward of a private jet flight with climate activist Leonardo DiCaprio.

Fritz Vahrenholt and Sebastian Lüning authored the climate science book The Neglected Sun, in which they examine the poor quality of climate models and why they keep failing.

Science With “97% Consensus” and “99% Certainty” Sees 0% Of Its Scientists Willing To Bet On It!

About 10 days ago a press release by the Alfred Wegener Institute gave readers the impression that the Arctic sea ice would keep on melting. In response I sent e-mails to the two scientists cited in it, Marcel Nicolaus and Lars Kaleschke, and asked if they would advise a bet on it. I even posted my bet here.

My proposed bet: mean September sea ice for the period of 2017-2022 will be higher than the September mean for the period 2007 – 2012.
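The bet’s settlement criterion is just a comparison of two six-year means. With purely hypothetical sea ice extents (invented for illustration, not NSIDC observations), it would be evaluated like this:

```python
# Hypothetical September Arctic sea ice extents in million km^2.
# These values are invented for illustration -- not real observations.
extent_2007_2012 = [4.3, 4.7, 5.4, 4.9, 4.6, 3.6]
extent_2017_2022 = [4.8, 4.7, 4.6, 3.9, 4.9, 4.9]

mean_early = sum(extent_2007_2012) / len(extent_2007_2012)
mean_late = sum(extent_2017_2022) / len(extent_2017_2022)

# The bet is won if the later six-year mean exceeds the earlier one.
bet_won = mean_late > mean_early
```

Averaging over six Septembers on each side is what makes the bet a test of the trend rather than of any single year’s weather.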

Not surprisingly I got no response.

I also sent versions of that e-mail to other institutes, some of which have a chronic habit of sounding the alarm when it comes to global sea ice. My e-mail:

Dear —–

I’ve read with great interest the latest AWI press release on global sea ice.

One easily gets the impression the experts believe the Arctic sea ice will continue its downward trend over the coming years. This surprises me. I myself think the Arctic will actually recover over the next decade or two. I’m convinced enough to bet $1000 on this. I believe that the average September sea ice extent for the years 2017 – 2022 will be greater than the mean September 2007 – 2012 sea ice extent.

Would you advise me against making such a bet? Would you bet? Surely the science can provide a probability here. Your short comment on this would be very much appreciated.
Kind regards

Pierre Gosselin

The following scientists/institutes were sent the e-mail:

1. Marcel Nicolaus, AWI
2. Lars Kaleschke, University of Hamburg
3. Stefan Rahmstorf, Potsdam Institute
4. Dirk Notz, Max Planck Institute
5. Leif Riemenschneider, Max Planck Institute
6. Rebecca Rolf, Max Planck Institute
7. Frank Sienz, Max Planck Institute
8. Peter Wadhams, University of Cambridge
9. Dr. Andrey Proshutinsky, Woods Hole Institute
10. Anders Levermann, Potsdam Institute
11. Mojib Latif

Those who replied are printed in bold, and I’d like to thank them for taking the time to respond, particularly Dr. Dirk Notz and Dr. Andrey Proshutinsky. They both took the time to provide a real reply, see here and here.

Three of the four who replied advised against betting that the Arctic would keep melting, warning there is too much natural variability involved. Dr. Andrey Proshutinsky even hinted that the Arctic may in fact do the very opposite.

Mojib Latif also sent a brief reply advising against a bet, citing “natural decadal variability”. He wrote:

I don’t bet. There is of course natural decadal variability which superimposes the long-term downward sea ice extent trend, but this decadal variability is hard to predict.”

Lisa at the NSIDC also sent a reply, providing two links: here and here. Scientists at the NSIDC also declined to bet.

The other eight scientists did not even reply. It seems some like shouting from the rooftops that the sky is falling, but suddenly get real quiet when asked to put money on it.

In summary, no one expressed any interest in accepting the above bet and not one even advised anyone to accept it. Result: From a science with a “97% consensus” and “99% certainty”, 0% of the scientists are ready to bet on it.

Three of the four scientists who did reply say it is not possible to predict the Arctic sea ice over the next 8 years (yet many scientists claim that predicting the Arctic 50, 100 or 200 years into the future is a 99% slam dunk?).

Of course we can understand scientists’ reluctance to bet on Arctic sea ice. But on the other hand why are so many of these scientists insisting that the rest of us bet our modern prosperity on their models being right (when obviously they themselves don’t even trust them 8 years out)?

It all smacks of a sham to me.


Climate Change Delusion Has Always Been Popular Throughout History…And A Money-Making, Power-Grabbing Scam

Climate Change As A Popular Delusion
By Ed Caryl

N’en déplaise à ces fous nommés sages de Grèce,
En ce monde il n’est point de parfaite sagesse;
Tous les hommes sont fous, et malgré tous leurs soins
Ne diffèrent entre eux que du plus ou du moins.”

“No offense to those madmen called the sages of Greece,
In this world there is no perfect wisdom;
All men are fools, and despite all their care
Differ among themselves only by degree.”

From: The title page, Charles MacKay (portrait above). Memoirs of Extraordinary Popular Delusions and the Madness of Crowds (Kindle Location 1046). Published in London, 1852. Available free in your chosen format here. Image from Wikipedia Commons, here.

My wife asked me the other day if there were other examples in history of the Climate Delusion presently infecting the world. I immediately thought of the above famous work from the Nineteenth Century, though I had not read it in its entirety until now. I downloaded the Kindle edition and will be quoting it extensively. MacKay devoted two volumes to the history of this phenomenon, though he said he could have devoted 50.

“The present may be considered more of a miscellany of delusions than a history— a chapter only in the great and awful book of human folly which yet remains to be written, and which Porson once jestingly said he would write in five hundred volumes!” (Kindle Locations 1152-1153)

“Every age has its peculiar folly; some scheme, project, or phantasy into which it plunges, spurred on either by the love of gain, the necessity of excitement, or the mere force of imitation. Failing in these, it has some madness, to which it is goaded by political or religious causes, or both combined.” (Kindle Locations 6961-6963).

Most of the delusions MacKay describes somehow involve money. Many describe financial “bubbles” like Tulipomania, the South Sea Bubble, and the Mississippi stock scheme. Mesmerism/Magnetisers and Alchemy made money for the direct practitioners. Pilgrimages to the Holy Land profited the Islamists for hundreds of years until the crusading knights attempted to retrieve the treasure. Whole forests of oaks were reduced to fragments of the True Cross and imported to Europe at huge expense.

In many of these delusions, the practitioners, as well as those practiced upon, were themselves deluded. This was especially true for those delusions associated with the practice of medicine such as the Magnetisers. These practices probably held back the advancement of medicine for many years. Alchemy both advanced and retarded the study of chemistry, leading practitioners into blind alleys and dead ends in search of the “Philosopher’s Stone,” but developing equipment and techniques that would be later used to advance the science.

Men go mad in herds

The parallels with Climate Science are clear: the delusions of the “97%” climate scientists lead to cherry-picking of data and confirmation bias that poison all the work. The bias is driven constantly by the knowledge that only “correct” answers will keep the paycheck coming. MacKay offers an old example, Kepler:

“In sending a copy of his Ephemerides to Professor Gerlach, he wrote, that they were nothing but worthless conjectures; but he was obliged to devote himself to them, or he would have starved.” (Kindle Locations 5632-5633).

Why don’t the climate scientists see the error of their ways? They sometimes do; usually after retirement, when the press of money no longer holds, or when they are in a position of safety.

“Men, it has been well said, think in herds; it will be seen that they go mad in herds, while they only recover their senses slowly, and one by one.” (Kindle Locations 1143-1144).

So it is not surprising that we see some older and wiser heads in Climate Science, one by one, leaving the clique. Why, then, the constant projections of calamity from Climate Science? Here too, MacKay has an answer:

“Omens. Among the other means of self-annoyance upon which men have stumbled, in their vain hope of discovering the future, signs and omens hold a conspicuous place. There is scarcely an occurrence in nature which, happening at a certain time, is not looked upon by some persons as a prognosticator either of good or evil. The latter are in the greatest number, so much more ingenious are we in tormenting ourselves than in discovering reasons for enjoyment in the things that surround us. We go out of our course to make ourselves uncomfortable; the cup of life is not bitter enough to our palate, and we distil [sic] superfluous poison to put into it, or conjure up hideous things to frighten ourselves at, which would never exist if we did not make them.” (Kindle Locations 5742-5746).

“The mountaineer makes the natural phenomena which he most frequently witnesses prognosticative of the future. The dweller in the plains, in a similar manner, seeks to know his fate among the signs of the things that surround him, and tints his superstition with the hues of his own clime. The same spirit animates them all— the same desire to know that which Infinite Mercy has concealed. There is but little probability that the curiosity of mankind in this respect will ever be wholly eradicated. Death and ill fortune are continual bugbears to the weak-minded, the irreligious, and the ignorant; and while such exist in the world, divines will preach upon its impiety and philosophers discourse upon its absurdity in vain.” (Kindle Locations 5849-5852).

Is there no hope? Are we doomed to live in fear of the future? Will the doomsayers win? The above quote continues:

“Still it is evident that these follies have greatly diminished. Soothsayers and prophets have lost the credit they formerly enjoyed, and skulk in secret now where they once shewed their faces in the blaze of day. So far there is manifest improvement.” (Kindle Locations 5852-5854).

As the predictions of the modern climate soothsayers fall into repeated failure, so will climate science slowly improve. One by one, the workers in this field will correct their errors. Some will fall from a great height and be destroyed. Some will suffer large and small epiphanies and ease into the light.

As Churchill said, “Now this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.”


4 Of The Arctic’s (80°N+) Coldest Summers Occurred In The Last 6 Years…Global Sea Ice Now Normal 2 Years!

The North Pole region north of 80°N, monitored by the Danish Meteorological Institute (DMI) since 1958, saw in 2014 yet another colder-than-normal summer, one of the Top 4 coldest on the DMI record. The chart below shows the four coldest summers: 2009, 2010, 2013 and 2014.


Temperature vs. calendar day for the 4 coldest summers above 80°N latitude. The blue horizontal line is 0°C. Four of the last six Arctic summers have been the coldest on the DMI record. Graphics snipped from the DMI website.

Arctic summers going back to 1958 can be examined using data from the Danish Meteorological Institute here. The 4 coldest Arctic summers have all occurred in the last 6 years; not even the summers of the 1960s were colder.
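Ranking summers this way amounts to averaging the daily mean temperatures over each summer and sorting the years by that average. A minimal sketch follows; the data structure and the numbers in it are invented for illustration (the actual DMI files have their own format and a loader would be needed):

```python
# Sketch: ranking Arctic (80°N+) summer mean temperatures from daily data.
# Assumes a simple {year: [daily mean temps in K over the summer window]}
# dictionary; the real DMI data format is different, so this is illustrative.

def summer_mean(daily_temps):
    """Mean of the daily mean temperatures over the summer window."""
    return sum(daily_temps) / len(daily_temps)

def coldest_summers(data, n=4):
    """Return the n years with the lowest summer mean, coldest first."""
    means = {year: summer_mean(temps) for year, temps in data.items()}
    return sorted(means, key=means.get)[:n]

# Toy illustration with made-up numbers (not real DMI values):
data = {
    1960: [273.5, 273.8, 274.0],
    2009: [272.9, 273.0, 273.1],
    2013: [272.7, 272.8, 273.0],
    2014: [272.8, 272.9, 273.0],
}
print(coldest_summers(data, n=3))  # coldest years first
```

With real daily series for 1958 to the present, the same two functions would reproduce the "top 4 coldest" ranking discussed above.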

Ocean currents impact the Arctic

It’s obvious that ice melt involves a lot more than just surface atmospheric temperature, and claims that it is due to greenhouse gases alone are misleading and naïve. Other major factors are ocean cycles, weather, and wind conditions.

From 1980 to 2000, both the AMO and the PDO were in warm phases, and so the Arctic ice melt of the late 2000s was likely a reaction to that warmth. The PDO has since flipped to its cool phase, and the AMO is beginning its entry into its cool phase. Many experts expect an Arctic summertime sea ice recovery over the next two decades.

Even Arctic sea ice experts are now backing off their projections of continued sea ice decline over the next 20 years, see here.

Global sea ice normal 2 years

Not only is the North Pole cooling; so is the South Pole, which has been smashing record highs for sea ice extent for a second year in a row, see here. Global sea ice has been slightly above average for almost two years now, flying in the face of alarmists who obstinately cling to the belief in a warming planet.

Hat-tip Nick Beal at Twitter.