I was notified by reader Ken of an interview conducted by the German daily Augsburger Zeitung (AZ), which appeared in the hard-copy edition. It is now posted here in German.
44-year veteran meteorologist Klaus Hager. Photo credit: University of Augsburg
Interviewed was meteorologist Klaus Hager. He was active in meteorology for 44 years and has now been a lecturer at the University of Augsburg for almost 10 years. He is considered an expert in weather instrumentation and measurement.
The Augsburger Zeitung writes that “hardly any of his colleagues are as familiar with the weather as the 73-year-old is”.
“Fluctuations dominate climate, not trends”
The Augsburger Zeitung wanted to know Hager’s views on climate change. Hager doesn’t pull any punches, claiming that “people are being deceived” on the subject and that man’s influence on the climate is very small.
On whether temperatures are warming in the Augsburg region, Hager says there is “no detectable trend showing this is so” and that it’s been cooling since 2005. When it comes to climate variability, he agrees with Professor Lauscher of the University of Vienna: “Fluctuations dominate climate, not trends”.
Warming an artifact of new instrumentation
One reason for the perceived warming, Hager says, is traced back to a change in measurement instrumentation. He says glass thermometers were replaced by much more sensitive electronic instruments in 1995. Hager tells the AZ (my emphasis):
For eight years I conducted parallel measurements at Lechfeld. The result was that compared to the glass thermometers, the electronic thermometers showed on average a temperature that was 0.9°C warmer. Thus we are comparing – even though we are measuring the temperature here – apples and oranges. No one is told that.”
Hager confirms to the AZ that the higher temperatures are indeed an artifact of the new instruments.
Hager also calls climate change and climate protection “ideologically charged topics“.
“People are being deceived especially when it comes to reducing CO2.” He tells the AZ that weather depends on dozens of single factors – all of various weighting.
The AZ, seemingly stunned by it all, asks Hager: “So you’re saying that the calls for climate protection connected to CO2 are not serious?” Hager confirms, answering:
The CO2 taxes that are being levied are actually a sin against national wealth. If you want to stop the alleged climate change, then you need to ask what it’s all about and who profits from it at the expense of the citizens.”
Hager then explains how CO2 is only a trace gas and that its role in climate is overhyped.
When asked about how his position is completely contradictory to that of the mainstream, Hager scoffs at the notion:
You know I check facts and I want others to think about it and not just swallow everything unfiltered – just because it’s the current zeitgeist. Manmade climate change will turn out to be a climate bubble. It’s going to pop like the forest die-off scare – and will do so because of nature – here I mean when solar activity falls again.”
I wonder how much longer he’ll be lecturing at the University of Augsburg. Expect the warming-jihadists to go after him. Kudos to the Augsburger Zeitung for printing the interview!
This appears to be the online version of the interview
Thanks Bernd. Link now posted.
I’ll add this man to my list of favourite people! Another honest man, speaking the truth.
I wouldn’t be surprised if he got a backlash and was forced to resign. “Ich bin Hager!”
Well, a thermometer gives its own temperature… one step beyond that requires assumptions.
Hager may have an interesting solution to Mann’s divergence problem.
I particularly like Hager’s point that nature will eventually burst this hot air bubble. Indeed. With no fewer than three solar cycles bottoming out in the next few years, it’s hard to imagine nature will do otherwise.
Not to mention the AMO going cold in the next few years
How widespread is the use of these “electronic” temperature instruments? Why weren’t these calibrated to provide readings consistent with the earlier glass thermometers? Clearly here in the US, most early temperature records were later revised DOWNWARD. That would further exaggerate the problem!
It seems Germany made the complete transition to electronic measurement in the 1980s/90s (I am told), which strangely coincides with the “sudden warming” recorded there in the same period. I’m quite sure we’ll be hearing more about this in the coming days. We note that Germany has not seen any warming since 2000. It would be quite stunning if that warming indeed turns out to be mostly due to instrument change! My next post will be exploring this.
The official name for this cooking of the books is ‘homogenisation’ of the temperature record, to improve it.
http://jennifermarohasy.com/temperatures/
The WMO was aware of the problem of electronic and glass thermometers measuring different things. The WMO failed to determine or define how the discontinuity in measurement records could be dealt with rigorously and scientifically.
The WMO left it to individual national meteorological organizations to establish “alignment” (PDF). It seems that none did.
The result is that we have no continuous surface temperature records. What we have is a sequence of disjoint data sets: a new one every time an instrument is replaced, a weather station is relocated, or the proximity of the station is altered substantially.
I blogged on this aspect of metrology 2 years ago. It’s good to note that some meteorologists with experience see the same problem and are speaking out.
Bernd, I am sure you would know that there should be no difference between a mercury-in-glass thermometer and a resistance thermometer if properly calibrated. Two calibration points are 0°C for the freezing point of water and 100°C for the boiling point of water, both at 1 atm pressure. I cannot remember all the other calibration points, but I do recall that the highest density of water occurs at 4°C. One can calibrate below zero with solutions containing salt. The boiling points of water at reduced pressure are also well known. The need for calibration of mercury-in-glass (or alcohol-in-glass) thermometers was due to variations in tube thickness. The need for calibration of resistance thermometers is due to variations in the thickness and composition of the make-up material; nothing is perfect. Thermocouples are somewhat different, as they measure the temperature difference from the tip to the point where the voltage is measured, i.e. the actual temperature needs to be corrected for the ambient or base temperature; there must also be a correction for the voltage drop in the leads. I calibrated a thermocouple for a pottery kiln (0 to 1200°C) by using a range of standard cones which melted at particular temperatures.
I did not have time to add that when I calibrated a special mercury-in-glass thermometer in a laboratory at university, I did it in intervals of 5°C between -10 and 100°C to an accuracy of 0.05°C, more accurate than those used in weather stations, but it could then be used to calibrate other thermometers. Mercury-in-glass thermometers have a significant response time to equilibrium of several minutes. Most industrial resistance thermometers have a longer response time than mercury-in-glass, while thin-wire thermocouples can be almost instantaneous. A difference comes in the measurement of max. & min. With a mercury thermometer the max & min are read at one time of the day. With electronic measurement and recording (particularly digital recording), the max & min can be tripped at any time of the day. The change in enclosure is another factor. The smaller enclosures holding modern temperature-measuring instruments are much more likely to be affected by short-term variations than the larger Stevenson screen. Maybe thermometer measurements should be integrated over ten minutes, or only daily averages used. UAH measurements are averaged.
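As a minimal illustration of the two-fixed-point idea mentioned above (ice point and boiling point), and not anything the commenter actually ran: record the raw sensor reading at both fixed points, then fit a straight line mapping raw readings to true temperatures. The raw values below are made up for the sketch.

```python
# Two-point (ice point / boiling point) linear calibration sketch.
# raw_at_0 and raw_at_100 would be the instrument's raw readings recorded in an
# ice bath and in boiling water at 1 atm; the values below are assumptions.
raw_at_0 = 0.4      # assumed raw reading at 0 C
raw_at_100 = 99.1   # assumed raw reading at 100 C

gain = 100.0 / (raw_at_100 - raw_at_0)
offset = -gain * raw_at_0

def calibrated(raw):
    """Map a raw reading to degrees C using the two-point fit."""
    return gain * raw + offset

print(calibrated(raw_at_0), calibrated(raw_at_100), calibrated(50.0))
# Note: a two-point fit only removes gain and offset error; nonlinearity
# (e.g. tube-bore variation in glass thermometers) needs more calibration points.
```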
@cementafriend:
They are probably using platinum resistance thermometers (thin platinum wire), which have a fairly fast response, certainly faster than a mercury max-min thermometer which may not record transient changes. These types work to about 250℃, so the change in resistance is ‘spread’ over a bigger scale with less accuracy per degree. I have known these types to be out by 1.6℃ near 25℃, and this was ignored (by a qualified chemist) because the readout was a digital figure which looks accurate. Was the error in the probe or the electronics? Don’t know, but I refused to use that meter.
The other problem is their insistence (when it suits them) on using an average of the maximum and minimum readings, because that is what was recorded in the past.
There should be no difference. In steady state.
Unfortunately, air is constantly moving. And the instrument’s response to the exchange of heat with the surrounding, moving air, is determined by the technology of the instrument. Especially the dynamic response which will pick up more and greater spikes in temperature as its bandwidth is increased (time constant decreases).
The first thing that should be ruled out when a different measurement is detected is that the method of measurement produced the differing result, i.e. make sure that the system being measured has really changed. That’s only practical if the instruments are run in parallel over the widest practical range of operating conditions.
Some meteorologists like Hager seemed to care enough about what they were doing to run that sort of experiment.
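To make the dynamic-response point concrete, here is a minimal sketch (not Hager’s actual experiment; the time constants and fluctuation parameters are assumptions): two first-order sensors are exposed to the same fluctuating “air temperature”, and the faster sensor follows short spikes that the slower one smooths out, so its recorded maximum comes out higher.

```python
# Minimal sketch (illustrative only): two first-order sensors with different
# time constants exposed to the same fluctuating "air temperature" signal.
# The numbers below (time constants, noise level) are assumptions, not data.
import math
import random

random.seed(0)

dt = 1.0                 # time step in seconds
tau_glass = 60.0         # slow sensor, e.g. liquid-in-glass (assumed)
tau_electronic = 5.0     # fast electronic sensor (assumed)

def simulate(tau, air):
    """First-order lag: dT/dt = (T_air - T_sensor) / tau."""
    T = air[0]
    readings = []
    for Ta in air:
        T += dt * (Ta - T) / tau
        readings.append(T)
    return readings

# Synthetic air temperature: a slow ramp plus short random gusts/spikes.
air = [20.0 + 5.0 * math.sin(2 * math.pi * t / 3600.0) + random.gauss(0, 0.8)
       for t in range(3600)]

slow = simulate(tau_glass, air)
fast = simulate(tau_electronic, air)

print("Max recorded by slow sensor: %.2f C" % max(slow))
print("Max recorded by fast sensor: %.2f C" % max(fast))
# The fast sensor typically records a higher maximum because it follows
# short-lived spikes that the slow sensor averages out.
```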
I would prefer the empirical approach of Hager himself. We could take an early version of the GHCN data base. If it were possible to obtain for each station the moment of thermometer change, we could compute, for both the min and max temperatures, the mean in the week before and after the change. If the moments are not random in time, we could use two weeks before and after. Computationally a simple job. I have on a USB stick the GHCN base downloaded on 15 October 2010. The base contains surface temperature measurements by 13486 stations all over the world, during 1701-2010. Besides this I have WMO information about the stations and programs to process the data. If the thermometer change data are available, we could have a short discussion here about the design before the analysis is done (assuming that it has not been done already).
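A sketch of the computation being proposed, under assumptions: the observations sit in a CSV with columns station_id, date, tmin, tmax, and the changeover dates in a second CSV with columns station_id, change_date. Those file and column names are hypothetical; the real GHCN files use their own formats and would need their own parsing.

```python
# Sketch of the proposed before/after check around each thermometer changeover.
# File names and column names are assumptions, not the real GHCN layout.
import pandas as pd

obs = pd.read_csv("ghcn_daily.csv", parse_dates=["date"])
changes = pd.read_csv("thermometer_changes.csv", parse_dates=["change_date"])

rows = []
for _, ch in changes.iterrows():
    st = obs[obs["station_id"] == ch["station_id"]]
    before = st[(st["date"] >= ch["change_date"] - pd.Timedelta(days=7)) &
                (st["date"] < ch["change_date"])]
    after = st[(st["date"] >= ch["change_date"]) &
               (st["date"] < ch["change_date"] + pd.Timedelta(days=7))]
    if len(before) and len(after):
        rows.append({
            "station_id": ch["station_id"],
            "dtmin": after["tmin"].mean() - before["tmin"].mean(),
            "dtmax": after["tmax"].mean() - before["tmax"].mean(),
        })

result = pd.DataFrame(rows)
# Mean step at changeover across stations, for min and max temperatures.
print(result[["dtmin", "dtmax"]].mean())
```

A one-week window also picks up ordinary weather changes, which is why the commenter suggests discussing the design (e.g. window length) before running the analysis.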
I have been noticing the spikes in electronically measured temperatures, where a temp can rise (and then fall) up to a degree, sometimes in a minute. When Sydney recorded its highest temp of 45.8C in Jan 2013, the temp rose from 44.9C to 45.8C in four minutes and then dropped to 44.8C in six minutes.
It is very hard to measure the varying temperature of moving air! In fact you must assume some kind of equilibrium to say “I am measuring the temperature of the air”. So… I don’t know; a measurement is made, but it is not really the temperature of the air, as the thermometer is not necessarily at the same temperature as the air around it.
It is a measurement mostly defined by process and device, not by physical meaning. Changing the device is risky; a calibration is not enough, because of inertia…
Such a mess…
Well, let’s just ignore the problems.
It’s possible to measure the temperature of air reliably and consistently by always using a constant volume of air, measured over a fixed period in an insulated “flask” (calorimeter-like).
One employs e.g. an insulated, double-ended cylinder with a small, insulated piston on an operating rod passing through both ends. The rod is operated slowly to draw in air at one end with the opening being sealed off as the piston reaches the end of stroke. The piston is held stationary for 1 minute; enough time for internal currents to “settle” and for the embedded temperature sensor to reach equilibrium with the air.
At the completion of that measurement, the piston is moved in the opposite direction expelling the measured sample and “inhaling” a fresh sample of the air from the other end of the cylinder.
(It may be easier to implement with a moving cylinder so that the temperature sensors can be fixed and their wiring through the rod static; for reliability.)
One question then: can you give me either theoretical explanations of why the measured temperature is the temperature of the gas… or a series of results with different flask sizes…
Because, let’s make assumptions from a purely theoretical point of view…
The only thing I know is that the temperature of the thermometer is steady when the exchanges of energy with the gas and flask, via thermal contact and radiation, are balanced…
So when you put gas in a flask it first exchanges heat with the thermometer device and the flask… (don’t put a hot thermometer in a small amount of cold gas to measure its temperature!)
And if, eventually, you need to assume that the gas and the flask are in thermal equilibrium…
why not measure the temperature of the flask?
Sorry to be so stupid… but the only way for me to be quite sure is to use an amount of gas large enough to be able to neglect the points I raised. I have to say I never did the calculations, so I have no idea at all…
I am ignorant, but a skeptical ignorant.
In theory, equilibrium will never be reached because the insulated flask is not a perfect insulator.
Instrumentation is usually characterised by a time-constant which gives the time that it takes for the instrument to reach a value close to that of the step change in whatever is being measured. It’s an indicator of the bandwidth of the device; the ability to follow changes at a given rate.
Note that the instrument doesn’t indicate the “true” value at the end of the period defined by the time-constant. For a first-order instrument it has covered about 63% of the step change (1 − 1/e). If you want to get closer, then another period equal to the time constant covers about 63% of the remaining gap, taking you to roughly 86%. Another time period: about 95%.
The question that one must answer before deciding how long to wait for the instrument to “settle”, is how accurate does one need to be as an absolute value and how large is the plausible step change between samples?
The lower limit of target accuracy is determined by the accuracy of the instrument. Even after linearization and calibration, better than ±0.1°C over the whole range of temperatures is very difficult to achieve and to maintain in practice. In terms of “step” changes in air temperature, they are unlikely to be larger than (say, arbitrarily) 5°C/minute.
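For reference, the settling fractions for an ideal first-order instrument follow from 1 − e^(−t/τ); this small sketch (not part of the comment) just prints them.

```python
# Settling fraction of a first-order instrument after n time constants:
# fraction = 1 - exp(-n). This gives the ~63% / ~86% / ~95% figures above.
import math

for n in (1, 2, 3, 4, 5):
    print("after %d tau: %.1f%% of the step" % (n, 100 * (1 - math.exp(-n))))
```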
The difference in temperature reflects (in part) the change in enthalpy of the air; the useful amount of heat that the air has available to “heat” a temperature sensor/thermometer. If the air is dry, with little water vapour content, then its temperature will change more from heating the sensor than if it were humid, as the respective specific heats are substantially different.
One obvious way to minimise the amount of heat transfer before “equilibrium” is (near enough) reached between sampled air and temperature sensor is to pre-condition the sensor by exposing it to the bulk, “agile” volume from which the air is to be sampled. The net amount of heat transfer during the measurement process is then substantially reduced, as is the sampling volume required within the cylinder.
This might help to explain how to perform stationary sampling.
Skepticalscience embarrasses itself with arithmetic. h/t WUWT
http://www.bishop-hill.net/blog/2015/1/12/sans-science-sans-maths-sans-everything.html
There have been books written about the various temperature measuring systems and enclosures. Here is one:
https://books.google.com/books?id=yPfWfTe01FwC&pg=PA122&lpg=PA122&dq=stevenson+screens+vs+MMTS&source=bl&ots=K0eIxrBK18&sig=Hjlgt-K-gKCll2GJZKhL3_McPQM&hl=en&sa=X&ei=gE20VJSwNYGBsQTju4K4Cw&ved=0CEQQ6AEwBw#v=onepage&q=stevenson%20screens%20vs%20MMTS&f=false
“The law that entropy always increases holds, I think, the supreme position among the laws of Nature. … if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.”
—Sir Arthur Stanley Eddington, The Nature of the Physical World (1927)
Until readers recognize that all forms of energy (including gravitational potential energy) play a role in entropy, and thus in determining the state of thermodynamic equilibrium (which the Second Law tells us will evolve), they are barking up the wrong tree with radiative heat transfer theory as their only concept in their beliefs about temperatures on all planets and satellite moons.
See http://whyitsnotco2.com
He says “… here I mean when solar activity falls again.”
Up to that point the issue was about measuring temperature. All of a sudden it appears to be about “global warming” that is related to the Sun. So at this point one has to ask: does he think Earth recently warmed, or has the temperature record been compromised? Now he seems to be expecting solar activity to fall and the Earth to cool. That is not a function of how temperature is measured.
Am I the only one confused by this?
Klaus Hager made a CLS – Career Limiting Statement – and will be able to enjoy his pension soon. What is the bet that he will be booted out of his current position before the end of this year? He is a brave man.
Oh no, not another simultaneous change of temperature measurement technology. Australia did it mostly in 1907/8 with Stevenson screens, then there were changes to sea-surface temperature measurement methods, then Automatic Weather Stations, then ARGO buoys. All of these changes give scope for false claims of “unprecedented” warming.
The lesson is clear: Do NOT change everything at the same time.
It doesn’t matter what measuring devices are used, Mikky. The ACORN data set and fill-in shading will correct all temps in line with AGW.
Pierre, this is Ben from Suspicious0bservers. In case you didn’t know, our 210,000 subscribers (and myself) are big fans of yours. I would like to chat sometime but it is difficult to find the best way to contact you. I would love an interview, informal chat, or your input on a climate-based literature review I am writing.
Please contact me.
Ben Davidson
An example of how sensitive the electronic thermometers are:
Casino NSW AUST 14 Jan 2015
2:30pm 31.5C
2:38pm 33.8C
3:00pm 31.5C
2.3C in 8 mins, then back down 2.3C in 22 mins.
Bet the old M in G therms couldn’t do that.
They will purge these Kulaks, of course.
Pierre,
I wonder if you have any comment on why there seems to be so much more coverage of non-consensus ideas in the German press, as compared to the UK. In the UK we have the occasional piece by David Rose in the Mail. And that’s it. (The issue of insane energy policy gets a bit of mileage in the Telegraph – but that’s a rather different matter.)
James Evans
You should see how sparse the reporting is in the USA
There’s been significant warming of Alberta winters in the last 50 years, in industrial areas at least. Man has a large influence on temperature in the winter here. The summer weather hasn’t changed much at all. I do not think it is because of CO2 though. Also, a strange phenomenon: winter is starting later in the year and ending later in the year. I can go snowboarding at the end of April and a couple of times in the beginning of May.
I’m gratified to see that many readers here have been among the 1,300 who visited my website last week. The word is getting around that it contains mind-blowing physics which explains all temperatures in tropospheres, surfaces, crusts, mantles and even cores of all planets and satellite moons. Find out what keeps the core of the Moon hotter than 1300C. Find out how the required energy gets to the surface of Venus and raises its temperature. Find out what’s really happening on Earth. See http://whyitsnotco2.com