# 2005 James Hansen: No Agreement On What Is “Surface Air Temperature”…Few Observed Data Filled In With “Guesses”

NASA has an interview with James Hansen (still) up at its site here.

Here we see that “surface air temperature” (0 to 50 feet) has not even been defined, let alone determined. This presents not only lots of uncertainty in its determination, but also plenty of opportunity for measurement and interpretation mischief. Hat-tip: Reader Dennis.

Here is the NASA interview (my emphases added):
==================================

## GISS Surface Temperature Analysis

### The Elusive Absolute Surface Air Temperature (SAT)

The GISTEMP analysis concerns only temperature anomalies, not absolute temperature. Temperature anomalies are computed relative to the base period 1951-1980. The reason to work with anomalies, rather than absolute temperature, is that absolute temperature varies markedly over short distances, while monthly or annual temperature anomalies are representative of a much larger region. Indeed, we have shown (Hansen and Lebedeff, 1987) that temperature anomalies are strongly correlated out to distances of the order of 1000 km.
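The anomaly calculation described above can be sketched in a few lines. This is an illustration with made-up station values, not actual GISS data or the GISTEMP code; the point is only the mechanics: subtract the station's 1951-1980 mean for the same calendar month from the observed monthly mean.

```python
# Illustrative anomaly calculation (hypothetical numbers, not GISS data):
# anomaly = observed monthly mean minus the 1951-1980 mean for that month.

base_period = range(1951, 1981)          # 1951-1980 inclusive

# Hypothetical July monthly means (deg C) for one station, keyed by year
july_means = {year: 20.0 + 0.01 * (year - 1951) for year in base_period}
july_means[2005] = 21.3                  # the month we want the anomaly for

baseline = sum(july_means[y] for y in base_period) / len(base_period)
anomaly = july_means[2005] - baseline
print(round(anomaly, 3))
```

The absolute readings of two nearby stations can differ by several degrees, but their anomalies computed this way tend to track each other closely, which is the correlation Hansen and Lebedeff exploited.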

Q. What exactly do we mean by SAT ?
A. I doubt that there is a general agreement how to answer this question. Even at the same location, the temperature near the ground may be very different from the temperature 5 ft above the ground and different again from 10 ft or 50 ft above the ground. Particularly in the presence of vegetation (say in a rain forest), the temperature above the vegetation may be very different from the temperature below the top of the vegetation. A reasonable suggestion might be to use the average temperature of the first 50 ft of air either above ground or above the top of the vegetation. To measure SAT we have to agree on what it is and, as far as I know, no such standard has been suggested or generally adopted. Even if the 50 ft standard were adopted, I cannot imagine that a weather station would build a 50 ft stack of thermometers to be able to find the true SAT at its location.

Q. What do we mean by daily mean SAT ?
A. Again, there is no universally accepted correct answer. Should we note the temperature every 6 hours and report the mean, should we do it every 2 hours, hourly, have a machine record it every second, or simply take the average of the highest and lowest temperature of the day ? On some days the various methods may lead to drastically different results.
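Hansen's point that the various conventions can disagree is easy to demonstrate. Below is a toy example with a hypothetical hourly temperature curve (a flat night and a sinusoidal daytime peak, my assumption, not real station data), comparing three of the conventions he lists.

```python
# Toy demonstration that different daily-mean conventions disagree.
import math

# Hypothetical hourly temperatures (deg C): 10 at night, peaking at 18 at noon
hourly = [10 + 8 * math.sin(math.pi * (h - 6) / 12) if 6 <= h <= 18 else 10
          for h in range(24)]

mean_hourly = sum(hourly) / 24                    # machine recording every hour
mean_6h = sum(hourly[::6]) / 4                    # readings at 0h, 6h, 12h, 18h
mean_minmax = (max(hourly) + min(hourly)) / 2     # classic (Tmax + Tmin) / 2

print(mean_hourly, mean_6h, mean_minmax)
```

Even for this smooth, well-behaved day the three conventions give three different "daily means"; with a sharp cold front or a brief afternoon spike the spread would be far larger.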

Q. What SAT do the local media report?
A. The media report the reading of 1 particular thermometer of a nearby weather station. This temperature may be very different from the true SAT even at that location and has certainly nothing to do with the true regional SAT. To measure the true regional SAT, we would have to use many 50 ft stacks of thermometers distributed evenly over the whole region, an obvious practical impossibility.

Q. If the reported SATs are not the true SATs, why are they still useful ?
A. The reported temperature is truly meaningful only to a person who happens to visit the weather station at the precise moment when the reported temperature is measured, in other words, to nobody. However, in addition to the SAT the reports usually also mention whether the current temperature is unusually high or unusually low, how much it differs from the normal temperature, and that information (the anomaly) is meaningful for the whole region. Also, if we hear a temperature (say 70°F), we instinctively translate it into hot or cold, but our translation key depends on the season and region, the same temperature may be ‘hot’ in winter and ‘cold’ in July, since by ‘hot’ we always mean ‘hotter than normal’, i.e. we all translate absolute temperatures automatically into anomalies whether we are aware of it or not.

Q. If SATs cannot be measured, how are SAT maps created?
A. This can only be done with the help of computer models, the same models that are used to create the daily weather forecasts. We may start out the model with the few observed data that are available and fill in the rest with guesses (also called extrapolations) and then let the model run long enough so that the initial guesses no longer matter, but not too long in order to avoid that the inaccuracies of the model become relevant. This may be done starting from conditions from many years, so that the average (called a ‘climatology’) hopefully represents a typical map for the particular month or day of the year.

Q. What do I do if I need absolute SATs, not anomalies ?
A. In 99.9% of the cases you’ll find that anomalies are exactly what you need, not absolute temperatures. In the remaining cases, you have to pick one of the available climatologies and add the anomalies (with respect to the proper base period) to it. For the global mean, the most trusted models produce a value of roughly 14°C, i.e. 57.2°F, but it may easily be anywhere between 56 and 58°F and regionally, let alone locally, the situation is even worse.
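The last step Hansen describes — adding an anomaly to a chosen climatology — is simple arithmetic. In this sketch the 14°C global-mean climatology (57.2°F) comes from the interview; the 0.6°C anomaly is a hypothetical value for illustration.

```python
# Recovering an absolute SAT from an anomaly plus a climatology.
# Climatology value (14 deg C) is from the interview; the anomaly is hypothetical.

def c_to_f(celsius):
    return celsius * 9 / 5 + 32

climatology_c = 14.0   # "most trusted models produce a value of roughly 14 C"
anomaly_c = 0.6        # hypothetical anomaly w.r.t. the same base period

absolute_c = climatology_c + anomaly_c
print(round(c_to_f(climatology_c), 1))   # the 57.2 F figure in the interview
print(round(c_to_f(absolute_c), 2))
```

Note that the ±1°F spread Hansen quotes for the climatology dwarfs typical annual anomalies, which is exactly why absolute values are avoided.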

### 12 responses to “2005 James Hansen: No Agreement On What Is “Surface Air Temperature”…Few Observed Data Filled In With “Guesses””

1. And yet NOAA reports the 2014 global surface temperature as “…easily breaking the previous records of 2005 and 2010 by 0.04°C (0.07°F).”

I guess that’s the magic of anomalies, models and statistics. What would James Hansen say about that?

2. When did they stop using the max and min temp average?

3. Veeery interestink……

For the Extended GHE to exist with 157.5 W/m^2 mean surface IR thermalisation in the local (~30 m) atmosphere**, its mean temperature would have to be ~0.3 deg C. You get this assuming 16 deg C mean surface temperature, the atmosphere emitting 238.5 W/m^2 at 0.75 Emissivity.

So, the Enhanced GHE cannot exist. It’s interesting that Dr John Houghton explained in Figure 2.5 of his 1977 ‘Physics of Atmospheres’ why there can be no net surface to local atmosphere temperature drop because of the convection needed to maintain lapse rate.

Looks to me that in 2005, the Hansen group had realised they could be scuppered at any time if they couldn’t prove this local air temperature, lower than at any time since the Ordovician Ice Age, 444 million years ago.

For comparison, the Last Glacial Maximum had a mean surface temperature of ~10 deg C, 6 K lower than now, not 15.47 K!

**’Clear Sky Atmospheric Greenhouse Factor’

4. Good that we have cleared that up after 40 years of caterwauling about CO2.

5. OK, it is not a temperature, but as the principle of climatology says, the more hypotheses, the more certainty, because errors average to zero and can be ignored… sooooooooo

we have the “right” temperature anomaly from a crazy, statistically biased object measured with the help of thermometers…

hey guys, if we don’t use the crap data we have, we cannot do anything, so don’t be a party killer, come play with us…

1. “because errors average to zero and can be ignored”

I get your sarcasm, but: averaging temperatures to make the errors cancel does not work here. It assumes the errors are independent, unbiased random noise, and systematic measurement errors are neither.

1. Yes, you’re right; as you noticed, it is sarcasm. I don’t mean actual errors, but the sum of all the assumptions and caveats that are ignored (the very examples are the previous posts of Mr Gosselin, or what Hansen says in the article)…
Because SAT is only one part of the climate system, SAT at best measures the temperature of the air close to the ground. But it is affected by environmental changes, not to mention changes in the measuring devices, in ways nobody can know. Even if this is not the conventional usage, those are the errors I wanted to address, and they are unknown.

Well, imagine you want to measure the heat content of the whole climate system… would you use thermometers close to the ground? Of course you would not!

First would be the oceans… then the atmosphere… and certainly not close to the ground!

I do agree it is all we have from a historical point of view, and it must be looked at… but… the point is, without another “measure” of the SAT to cross-check the result, I just don’t understand how people can trust it.

Since we have had balloons and satellites we can cross-check the result a bit, though they do not measure exactly the same thing! (Tmax + Tmin)/2 over 24 hours is different from the true mean T… therefore even if we see a difference, we can assume it comes from a change in the daily distribution of temperature or a change in atmospheric circulation…

Well, sorry, everybody knows that…

Let’s make it short…
Before the satellite era…
I have no way to redo the measurement…
I have no way to check the “device”.
Why should I trust the result?

Oh sorry, I am stupid: for magical reasons the anomaly is still “good”.

And that is only the first step… after that you have to assume that SAT is a relevant guide to the global heat content anomaly…

Scarce and dubious data science… and at the end, 95% certainty…

2. To add to this: by definition, error terms have zero mean and are uncorrelated with each other and with the variable of interest. Anything else is more properly called bias.
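The error-versus-bias distinction drawn above can be shown with a toy simulation (my own illustration, with invented numbers): averaging many readings shrinks zero-mean random noise, but a systematic bias survives the averaging untouched.

```python
# Toy simulation: averaging removes zero-mean noise but not a constant bias.
import random

random.seed(42)
true_temp = 15.0
bias = 0.5                    # e.g. a poorly sited thermometer reading high

noisy = [true_temp + random.gauss(0, 1.0) for _ in range(100_000)]
biased = [t + bias for t in noisy]

mean_noisy = sum(noisy) / len(noisy)      # close to the true 15.0
mean_biased = sum(biased) / len(biased)   # close to 15.5, not 15.0
print(round(mean_noisy, 2), round(mean_biased, 2))
```

No amount of extra readings from the same instrument fixes the second case; only an independent cross-check can reveal the offset.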

6. AlexM,

What exactly is an “Extended GHE” or an “Enhanced GHE”? … Just more fantasy BS that cannot exist, since a GHE doesn’t exist on its own.

Similarly, what the hell is a “Clear Sky Atmospheric Greenhouse Factor”? … Just more sophistry.

1. Sorry, I meant Enhanced GHE: the claim, originating from Arrhenius, that ‘black body’ surface IR heats the adjacent atmosphere (~30 m) and this then heats the surface, causing extra evaporation and hence the amplification of intrinsic CO2 warming. Let me explain further.

Like all professional scientists and engineers I did a sniff test and an energy balance when looking at the IPCC Climate Models.

Sniff test: if the Earth’s surface were to heat local air at the claimed mean 157.5 W/m^2, its temperature must be ~ 0 deg C – averaged OVER THE WHOLE PLANET; colder than at any time in the past 444 million years.

The local air temperature is really near the surface temperature, kept there by the convective processes that maintain the ‘lapse rate’. Houghton showed why in 1977. He then gave up science to co-found the IPCC. In 2005, Hansen bemoaned the fact that they had no measurements of local air temperature; he knew they were vulnerable to clear-thinking opposition.

Conclusion; Climate Alchemy Stinks.

Energy Balance: Hansen et al in 1981 faked an imaginary single -18 deg C IR emitter in the upper atmosphere. This was a ‘bait and switch’, exchanging real 238.5 W/m^2 with imaginary 333 W/m^2 ‘back radiation’, 40% rise in energy. They did another fraud in hindcasting to purport extra evaporation from oceans. When he presented his rabid claims to Congress in 1988, they were all based on modelling artefacts.

Conclusion: the modelling has been fraudulent for 34 years.


8. As a result, GISS’s director Gavin Schmidt has now admitted NASA thinks the likelihood that 2014 was the warmest year since 1880 is just 38 per cent. However, when asked by this newspaper whether he regretted that the news release did not mention this, he did not respond. Another analysis, from the Berkeley Earth Surface Temperature (BEST) project, drawn from ten times as many measuring stations as GISS, concluded that if 2014 was a record year, it was by an even tinier amount.