AGW Observer

Observations of anthropogenic global warming

Thomas Karl – a lecture on NOAA surface temperature analysis

Posted by Ari Jokimäki on June 10, 2010

This was originally written in Finnish for the Ilmastotieto blog. This is just a summary of Karl's lecture, so an English-speaking audience would be better off just watching the lecture (link is given below), but I'll put this summary out anyway to emphasize some of the things Karl said.

Recently, the AMS Policy Program lecturer was Thomas Karl, who talked about NOAA's surface temperature analysis. Specifically, the subject was the global and the U.S. surface temperature record, its analysis, and the problems encountered along with their solutions.


Global surface temperature anomaly in the NOAA analysis (monthly values in grey and the 1-year running mean in black).

NOAA is the National Oceanic and Atmospheric Administration. Thomas Karl is the director of NOAA's National Climatic Data Center (NCDC). NOAA's NCDC is one of the few providers of a global surface temperature analysis. David Easterling and Thomas Peterson have been working on NOAA's surface temperature analysis with Thomas Karl. NOAA's data and methods are available for anyone to see and evaluate.

Three global surface temperature analyses have been made (sometimes a fourth is also mentioned – a Russian analysis which covers only land-based records – and there's additionally a Japanese analysis, but Karl didn't mention that one). The differences between these three are greatest in the oldest data. The oldest data has the biggest uncertainties, so finding the greatest differences there is not a surprise. All three analyses use different adjustments and also differ in other aspects, but still all three produce quite similar end results.

When studying the global surface temperature analysis, and especially the land and ocean temperatures separately, it becomes evident that land temperatures show more short-term variation. The ocean temperature is increasing more steadily and lags the land temperature by a couple of tenths of a degree per century. This is most likely due to the larger heat capacity of the oceans.

The geographical distribution of the temperature anomalies reveals that in the north there are areas of strong warming, but there are also still some areas showing some cooling. An example of an area showing cooling is the southeast USA. The situation in the southeast USA has been suggested to be caused by the cooling effect of aerosols, especially sulfur-based aerosols. In any case, most of the Earth is warming.

Yet there have been some claims that warming has stopped. According to Karl that isn't true. A long-term warming trend is evident. The last two decades have been warmer than the previous decades, and the last decade has been warmer than the decade preceding it. All the annual means in the 2000s have been higher than the decadal mean of the 1990s, and all the annual means in the 1990s were likewise higher than the decadal mean of the 1980s (Karl doesn't mention it, but his graph shows that all annual means of the 1980s were also higher than the decadal mean of the 1970s).


Annual means (black dots) and decadal means (red lines) in the NOAA analysis. After 1979 all annual means are higher than the decadal mean of the preceding decade (red dashed lines). In the lower right corner are the numerical values of the decadal means (middle column) and their differences from the mean of the preceding decade (right column).
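To make the comparison concrete, here's a minimal sketch of the check described above: whether every annual mean in a decade exceeds the decadal mean of the preceding decade. The anomaly series below is made up for illustration; real input would be the NOAA annual values.

```python
# Sketch: check whether every annual mean in a decade exceeds the decadal
# mean of the preceding decade. The series here is hypothetical, not NOAA data.
annual = {year: 0.01 * (year - 1970) for year in range(1970, 2010)}

def decadal_mean(start):
    """Mean of the ten annual values for the decade starting at 'start'."""
    return sum(annual[y] for y in range(start, start + 10)) / 10.0

for start in (1980, 1990, 2000):
    prev = decadal_mean(start - 10)
    ok = all(annual[y] > prev for y in range(start, start + 10))
    print(f"{start}s: every annual mean above the {start - 10}s mean? {ok}")
```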

Temperature measurements in the USA have been assembled into the USHCN (United States Historical Climatology Network), where 1200 measurement stations have been selected based on the availability of long-term measurements. In the past the stations have experienced a lot of changes. Some stations have had their measuring equipment changed. Some stations have changed location. The urban heat island effect has affected some stations. Some stations have been maintained better than others, and the environment around some stations has changed. Because of these changes, not all the stations are still ideal for measuring climate change. Such stations have to be evaluated carefully, and some adjustments have to be made to their data where possible, so that the stations can be used in the analysis. Karl shows an example where a certain station's location has changed several times in the past. He shows the original measurements and how the data has changed after the adjustments.
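Karl doesn't spell out the adjustment method, but as a rough illustration of the idea, here's a toy sketch of adjusting for one documented station move by aligning the station against a neighbouring reference series. All data below are hypothetical, and NOAA's actual homogenization procedure is considerably more sophisticated than this.

```python
import numpy as np

# Toy version of a station-move adjustment: shift the pre-move segment so
# its offset relative to a neighbouring reference series matches the
# post-move offset. Real homogenization is far more elaborate.
def adjust_for_move(station, reference, move_index):
    """station, reference: 1-D arrays of monthly values; move_index: known move."""
    diff = station - reference                                  # station minus neighbour
    step = diff[move_index:].mean() - diff[:move_index].mean()  # jump at the move
    adjusted = station.copy()
    adjusted[:move_index] += step                               # align early segment
    return adjusted

# Hypothetical station that read 0.5 degrees too warm before a move at month 120.
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 0.2, 240)
station = reference + rng.normal(0.0, 0.1, 240)
station[:120] += 0.5                                            # artificial warm bias
print(adjust_for_move(station, reference, 120)[:120].mean())    # bias largely removed
```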

In the global analysis (GHCN, Global Historical Climatology Network) there are 7280 stations, of which 4400 (all stations with 25 years or more of data) are used in the actual analysis. There is a new GHCN version (version 3) coming out soon. The new version is for the first time based on exactly the same analysis globally as the analysis in the USHCN. According to Karl, the methods used in the new version have been independently tested. In the new version, the areas without measurement stations are filled by an algorithm that uses the known relationships of nearby stations during comparable temperature changes. The Earth warms a little bit more in the new version than in the current version.
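Karl doesn't give details of the algorithm, but as a simple stand-in for the idea of filling unmeasured areas from nearby stations, here's a sketch using plain inverse-distance weighting on hypothetical anomalies. The real method uses the known inter-station relationships, not mere distance weights.

```python
import math

# Sketch: estimate the anomaly at an unmeasured point from nearby station
# anomalies with inverse-distance weighting. A simple stand-in only.
def great_circle_km(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    return r * math.acos(min(1.0, math.sin(p1) * math.sin(p2)
                             + math.cos(p1) * math.cos(p2) * math.cos(dlon)))

def fill_point(lat, lon, stations):
    """stations: list of (lat, lon, anomaly); returns a weighted estimate."""
    weights = [(1.0 / max(great_circle_km(lat, lon, s[0], s[1]), 1.0), s[2])
               for s in stations]
    total = sum(w for w, _ in weights)
    return sum(w * a for w, a in weights) / total

# Hypothetical stations around an unmeasured point at (60N, 25E).
stations = [(61.0, 24.0, 0.8), (59.0, 27.0, 0.6), (63.0, 30.0, 1.1)]
print(round(fill_point(60.0, 25.0, stations), 2))
```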

Results of the temperature analysis are given as temperature anomalies (deviations from a reference value – usually the mean of a certain time period) instead of absolute temperature values. There are some problems with the usage of absolute temperature values. For example, the measurement stations are located at different elevations above sea level, and the elevation of a station considerably affects the measured temperature. When stations are removed from the network, the elevation distribution of the stations in the network changes, which may bias the end result of the analysis. But when anomalies are used, the removal of stations doesn't affect the end result as much.
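A toy illustration of this point (hypothetical stations, not real data): two stations share the same warming signal but sit at different elevations, so they have different absolute baselines. Dropping the colder one shifts the absolute mean by several degrees while leaving the anomaly mean intact.

```python
import numpy as np

# Why anomalies tolerate station dropout better than absolute temperatures:
# two hypothetical stations with the same warming signal but different
# absolute baselines (a lowland and a mountain station).
years = np.arange(1950, 2010)
signal = 0.01 * (years - 1950)          # common warming, degrees
lowland = 15.0 + signal                 # warm absolute baseline
mountain = 5.0 + signal                 # cold absolute baseline

def anomaly(series, base=(1961, 1990)):
    """Deviation from the station's own 1961-1990 mean."""
    mask = (years >= base[0]) & (years <= base[1])
    return series - series[mask].mean()

# Absolute mean jumps by 5 degrees when the mountain station is dropped...
print(np.mean([lowland, mountain], axis=0)[-1], lowland[-1])
# ...but the anomaly mean is unchanged, because each station is measured
# against its own baseline.
print(np.mean([anomaly(lowland), anomaly(mountain)], axis=0)[-1],
      anomaly(lowland)[-1])
```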

The location of stations in the north-south direction also causes an effect which shows up in the absolute temperature values but doesn't affect the anomalies that much. For example, if many stations located in cold northern areas are removed from the network, the average temperature rises because the number of "cold stations" decreases. This kind of thing doesn't affect the anomalies. Additionally, the temperature anomalies are area-averaged instead of calculating a simple mean of all stations. The removal of northern stations actually causes a cooling bias when anomalies are used, because most of the warming has occurred in the northern areas, so the removal of northern stations removes the stations that have warmed the most.
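As a small sketch of the area-averaging, here's the usual cosine-of-latitude weighting applied to hypothetical grid-cell anomalies; the lecture doesn't state whether NOAA uses exactly this weighting, but the principle is the same: cells shrink toward the poles, so polar cells must count for less.

```python
import numpy as np

# Sketch of area averaging: grid-cell anomalies weighted by cos(latitude).
# The anomaly values here are hypothetical.
lats = np.array([75.0, 45.0, 15.0, -15.0, -45.0, -75.0])  # cell-centre latitudes
anoms = np.array([1.2, 0.5, 0.3, 0.3, 0.4, 0.2])          # hypothetical anomalies

weights = np.cos(np.radians(lats))
area_weighted = np.sum(weights * anoms) / np.sum(weights)
print(f"simple mean {anoms.mean():.2f}, area-weighted {area_weighted:.2f}")
# The strongly warming polar cell counts for less once area is accounted for.
```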

The number of measurement stations has been decreasing strongly in recent decades. Some have claimed that this distorts the analysis. When the stations still measuring today (there are about 2300 of them) are compared to the whole network, the results are almost the same. This means that the removal of stations hasn't affected the end result of the analysis. This is due to the usage of anomalies and area-averaging. One doesn't actually need many stations for a global analysis. We want a lot of stations because we also want to know how things work at smaller scales (they come in handy, for example, when determining station adjustments).

The time of observation also causes a problem for the analysis. Early in the morning the temperature is usually lower than in the afternoon. If the observation time of a station changes, for example from morning to afternoon, it causes a warming bias in that station's data. This has created a false urban heat island signal: the urban stations have taken their measurements punctually, always at the same time, so there is practically no time of observation bias in them, while at the rural stations the times of observation have changed, usually from the afternoon to the morning. This causes a cooling bias in the data of the rural stations. Therefore one must correct for the time of observation bias before trying to determine the effect of the urban heat island. Karl shows a comparison between urban and rural stations after the time of observation bias has been corrected, and there's hardly any difference when the situation in the USA is considered. In the global analysis the rural stations even seem to show slightly more warming than the urban stations. Stations are classified as urban or rural with the assistance of satellite measurements of the amount of light pollution in different areas. Some other information is also used, such as maps, population statistics, etc.
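A toy demonstration of the effect (simulated hourly temperatures, not real data): with a max/min thermometer, each recorded "day" is the 24 hours following the reset, so resetting near the afternoon maximum lets one hot afternoon influence two consecutive days, and moving the reset to the morning produces an apparent cooling.

```python
import numpy as np

# Toy time-of-observation bias demo with a max/min thermometer.
rng = np.random.default_rng(1)
hours = np.arange(24 * 365)
# Diurnal cycle peaking mid-afternoon, plus day-to-day weather noise.
temps = 10.0 - 8.0 * np.cos(2 * np.pi * (hours % 24 - 2) / 24) \
        + np.repeat(rng.normal(0.0, 3.0, 365), 24)

def mean_of_maxmin(temps, reset_hour):
    """Average of daily (max+min)/2 over 24-hour windows starting at reset_hour."""
    daily = [(temps[s:s + 24].max() + temps[s:s + 24].min()) / 2.0
             for s in range(reset_hour, len(temps) - 24, 24)]
    return float(np.mean(daily))

print("afternoon reset:", round(mean_of_maxmin(temps, 17), 2))
print("morning reset:  ", round(mean_of_maxmin(temps, 7), 2))
# A station switching from afternoon to morning observations would show a
# spurious cooling of roughly the difference between these two numbers.
```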

Karl says that the situation with the handling of the measurement stations (their location and environment) – i.e. the "siting" – is not very good at the moment. NOAA is working on the problem. In NOAA's studies only 70 out of 1200 stations in the USA turned out to have good siting. Fortunately it seems that this doesn't affect the analysis much, because the result from the mentioned 70 stations is almost the same as the result from the rest of the stations. There's also the Climate Reference Network, which has 114 well-sited measurement stations that are monitored closely. The Climate Reference Network has been operational only for a short time, but during that time the USHCN and the Climate Reference Network have differed from each other only very little.

Above it was mentioned that there's a new GHCN version coming soon and that in that version the Earth warms a little bit more than in the current version. One reason for this is that the new version corrects the data for a new "anti-urban heat effect". For a long time there has been a migration going on among the measurement stations from cities to airports. Most of the airports are located outside the cities, so the stations are moving out of urban heat islands. Karl shows the difference between the airport measurement stations and the other stations: the stations located at airports show less warming than the other stations. This is due to many stations moving from cities to airports, and it has been corrected in the new GHCN version.


Some of the graphs Karl showed. Upper left is the change in the time of observation in the US measurement stations. Upper right is the comparison between urban and rural stations in the USA after the time of observation bias has been corrected. Lower left are the 1979-2008 trends from different analyses. Lower center is the effect of the airport correction. Lower right is the effect of station removal.

In the sea surface temperature measurements the number of samples taken has changed a lot over time. When mapping the old measurement sites, the common shipping routes show up clearly, and the ocean areas outside the routes have not been measured much. The ocean areas have been covered well starting from the middle of the 20th century. The problem of sampling density has been well studied, and its effect has been found to be much smaller than the global warming signal. In smaller regions the sampling is a bigger problem. Much of the modern measurements are done from measurement buoys, with which the oceans are covered rather well.

It has been observed that the buoy measurements are systematically cooler than the ship-based measurements. The buoys have been used only in recent times, and the earlier measurements were made from ships. This causes a cooling bias in the analysis. Originally the measurements were taken with buckets – first with wooden ones (a good thermal insulator), then with canvas buckets (a bad thermal insulator); modern buckets are insulated with rubber. Today the ship measurements are commonly done from the intake of the engine cooling water. The measurement is done in the engine room, so heat from the engines has already warmed the intake water a little before the measurement. According to the current best estimate, the too-warm measurements from the ships quite accurately cancel the too-cool measurements from the buoys. The issue is currently under investigation.
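As a rough sketch of how such a platform offset could be handled when combining the data (the offset and the data below are hypothetical, and this isn't necessarily how NOAA does it): estimate the mean ship-minus-buoy difference from co-located samples and shift one platform onto the other's scale before averaging.

```python
import numpy as np

# Sketch: remove a systematic ship-buoy offset before combining platforms.
# Since the analysis works with anomalies, which scale both platforms end up
# on doesn't matter; only the consistency does.
def merge_sst(ship, buoy):
    """ship, buoy: arrays of co-located SSTs; returns a bias-adjusted mean."""
    offset = np.mean(ship) - np.mean(buoy)   # e.g. ships read warmer
    adjusted_buoy = buoy + offset            # put both on the ship scale
    return np.mean(np.concatenate([ship, adjusted_buoy]))

# Hypothetical samples: warm-biased noisy ships, cooler and quieter buoys.
rng = np.random.default_rng(2)
true_sst = 15.0
ship = true_sst + 0.12 + rng.normal(0.0, 0.5, 200)
buoy = true_sst + rng.normal(0.0, 0.2, 500)
print(round(merge_sst(ship, buoy), 2))
```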

The uncertainty of the sea surface temperature measurements is largest in the oldest measurements and then decreases, but grows again in recent times due to the above-mentioned problems. The combined analysis of land and ocean measurements has a lot of uncertainty, which makes it difficult to say anything certain about individual years, but the overall trend is statistically significant.

Karl also briefly shows a few other indicators of global warming. Today lakes and rivers freeze later and the ice breaks up earlier. The volume of glaciers is decreasing all over the world. Arctic sea ice has been steadily decreasing over a long period. The heat content of the oceans has increased. Global sea level is rising. Plants start blooming 1-3 days earlier per decade. The ranges of many species are moving towards the poles.

In the end Karl returns to the surface temperature analysis and compares the different analyses from surface measurements, satellite measurements, and radiosonde measurements. The three surface temperature analyses (NOAA, GISS, and HadCRUT) give almost the same result for the temperature trend of 1979-2008. They use largely the same data, though their analyses differ from each other. There are also three satellite analyses (RSS, UAH, and NOAA's own "STAR"), but their 1979-2008 trends differ clearly from each other. The weather balloon radiosondes (4 analyses) also give quite a large spread of trends. So the satellite and radiosonde analyses have a large spread while all three surface temperature analyses give almost the same result. This suggests that the surface temperature analyses are currently more reliable.
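The trend figures being compared here are, in essence, linear least-squares fits to the anomaly series over 1979-2008. A minimal sketch with a hypothetical series:

```python
import numpy as np

# Sketch: a 1979-2008 trend as a linear least-squares fit.
# The anomaly series below is hypothetical, not from any of the analyses.
years = np.arange(1979, 2009)
rng = np.random.default_rng(3)
anoms = 0.016 * (years - 1979) + rng.normal(0.0, 0.1, years.size)

slope_per_year = np.polyfit(years, anoms, 1)[0]
print(f"trend: {slope_per_year * 10:.3f} degrees C per decade")
```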

There are large short-term differences, for example due to El Niños, which warm the troposphere more than the surface. However, there's no systematic difference between the satellite and surface temperature analyses. In July, NOAA is going to publish a new report, "State of the Climate Report 2009", which gives some fresh information on the state of our climate. The report has been peer-reviewed and has 275 authors from 45 different countries. Karl gives a brief review of the contents of the report by showing a few graphs from it.

Source: AMS Policy Program, AMS Climate Briefing Series, Thomas Karl: The Temperature Fingerprint of Climate Change (the page loads a 59-minute video). There's also a PDF of the presentation material (>11 MB file).

Additional information:
Smith, Thomas M., Richard W. Reynolds, Thomas C. Peterson, and Jay Lawrimore (2008). Improvements to NOAA's Historical Merged Land–Ocean Surface Temperature Analysis (1880–2006). Journal of Climate, 21, 2283–2296. [abstract, full article]

2 Responses to “Thomas Karl – a lecture on NOAA surface temperature analysis”

  1. Excellent post. Thank you.

  2. […] Unlike most countries, the United States does not have a standard observation time for most of its observing network. There has been a systematic tendency over time for American stations to shift from evening to morning observations, resulting in an artificial cooling of temperature data at the stations affected, as noted by Karl et al. 1986.  In a lecture, Karl noted: […]
