Tuesday, February 23, 2010

GISS UHI adjustments

Yet another fevered analysis of adjustments from Willis Eschenbach, this time on GIStemp in Alaska. As usual, anything he doesn't understand means someone is fudging ("Fudged Fevers").

Ironically, it was huge pressure from sceptics that led GISS to release their code and data completely, some years ago. GISS may have hoped that the fudge fans would at least look at what is in the code to find where the fudging is supposedly done. But no.

The adjustment in question is GISS's way of dealing with the Urban Heat Island effect. The sceptic chorus is that measured warmth is an artefact of measurements being taken near cities, at airports or whatever. There is indeed a UHI, and here's what GISS does about it.
GIStemp information

Update Apparently an issue now is whether GISS erred badly in classifying Matanuska AES as urban according to satellite-observed night brightness. I have added below a satellite map pic of Matanuska AES. It's not far from Wasilla. It is in fields, but about 1 km from a major freeway intersection, and about a mile from what looks like a car sales lot.

Tuesday, February 16, 2010

Irregular updates to GHCN

Here is the next stage in the study of GHCN updates. I took a v2.mean file that I had stored from 9 Dec 2009, and compared it with the current 15 Feb file on the NOAA site. For each station I calculated the number of months by which the station record had advanced between those two dates. Of course, the common case is a station that is regularly updated, where the earlier file has a November reading and the later a January reading, a difference of 2 months. But because both dates are early in the month, either reading could have slipped, so the difference could be 1 or 3.
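The month-advance calculation is easy to sketch. Here's a minimal Python version, to show the idea; it's not the script I used, and the layout details are my assumptions: each v2.mean line carries a 12-character station id (11-digit id plus a duplicate digit), a 4-digit year, then twelve 5-character monthly values, with -9999 marking missing data.

```python
# Sketch: compare two GHCN v2.mean snapshots and report, per station,
# how many months the latest valid reading advanced between them.
# Assumed layout (my assumption, check the GHCN readme): 12-char station
# id, 4-digit year, twelve 5-char monthly values, -9999 = missing.

def latest_month(path):
    """Map station id -> (year, month) of its last non-missing value."""
    latest = {}
    with open(path) as f:
        for line in f:
            stn, year = line[:12], int(line[12:16])
            for m in range(12):
                val = line[16 + 5 * m : 21 + 5 * m]
                if val.strip() and int(val) != -9999:
                    # tuples compare (year, month) in calendar order
                    latest[stn] = max(latest.get(stn, (0, 0)), (year, m + 1))
    return latest

def advance(old_path, new_path):
    """Months each station's record advanced between the two files."""
    old, new = latest_month(old_path), latest_month(new_path)
    return {s: (new[s][0] - old[s][0]) * 12 + (new[s][1] - old[s][1])
            for s in old if s in new}
```

A station last reporting Nov 2009 in the old file and Jan 2010 in the new one comes out as an advance of 2; a station missing from either file is simply skipped.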

Of course, even more common in the lower part of the list is no update at all. More interesting are the cases where there is a large update. These happen mostly in African countries, and the advance can be as much as 91 months. This analysis doesn't say whether that whole period was filled; just that new data came in after that time.

There were just three cases (in Ethiopia) of a negative update. Presumably that means that a month's readings were removed from the record.

The form of the table is similar to the last post. It starts with the most recently updated stations. In fact, the only change is the additional column of update intervals. It's below the jump.

Monday, February 15, 2010

Updating GHCN - Stations aren't "dying"

E.M.Smith had a post at WUWT dramatically entitled NOAA langoliers eat another 1/3 of stations from GHCN database. What happened was that he looked at the v2.mean file at NOAA dated 8 Feb, and found that a whole lot of stations had no report for Jan 2010. These were deemed to be eaten, even though a whole lot more reports turned up that day. The simple fact is that GHCN adds data as they become available. It's a simple storage database, and there's no reason to do otherwise.

Anyway, I wrote a script to download and list the stations in the current v2.mean at NOAA. It lists them in order of date of most recent report (newest first). The list may be helpful, because it is also sublisted by country. The code (a linux shell script which calls 2 R scripts) is here. The list is below the jump.
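The actual code is that shell script calling two R scripts; for illustration only, here is a Python sketch of the same listing idea, under an assumed v2.mean layout (12-char station id, 4-digit year, twelve 5-char monthly values, -9999 missing), with the 3-digit country code taken as the start of the station id.

```python
# Sketch (not the posted shell/R code): list stations by the date of
# their most recent report, newest first.
# Assumed v2.mean layout: 12-char id, 4-digit year, twelve 5-char
# monthly values, -9999 = missing.

def station_list(path):
    """Return [(station_id, (year, month)), ...], newest report first;
    ties broken by station id, whose first 3 digits are the country."""
    latest = {}
    with open(path) as f:
        for line in f:
            stn, year = line[:12], int(line[12:16])
            for m in range(12):
                val = line[16 + 5 * m : 21 + 5 * m]
                if val.strip() and int(val) != -9999:
                    latest[stn] = max(latest.get(stn, (0, 0)), (year, m + 1))
    return sorted(latest.items(),
                  key=lambda kv: (-kv[1][0], -kv[1][1], kv[0]))
```

Grouping the output by the leading country digits then gives the by-country sublists.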

Update 16/2
The date of the v2.mean.Z file I used was 15 Feb.

Here's a count of the numbers of stations terminating in various recent months. I found a v2.mean file I had for 23 Jan 2010, so I'm giving that count for comparison:
Year_Month   Count (15 Feb)   Count (23 Jan)

There's an interesting result buried in there. What looks initially like a cull in Sept 2009 now looks like a bunch of very slow reporters, running about 5 months late. I'll add below (when formatted) the list of Aug/Sep stations from Jan 23 for comparison.
Update This list now added at the bottom

Another update.
There's a batch of 39 stations from China which seem to have, at some time between 23 Jan and 15 Feb, been updated from Aug 2009 to Sept 2009. Also two from Afghanistan.

Monday, February 8, 2010

Testing the performance of GCM models

There has been some interest in matching the performance of GCMs against observations in recent decades. Some of it is stimulated by a very public dispute between Douglass et al 2007, on tropical tropospheric temperatures, and Santer et al 2008 (with a Who's Who list of authors) in response. Some have taken the test in the latter as an example to follow.

For blog discussion, see eg (just a sample) CA, Lucia, Chad, Tamino

In another thread of activity, Lucia has a running series of checks on GMST trends and the significance of their deviation from observations. There was a long series of "falsifications" of IPCC predictions listed here, and more recently a test over a longer period of GIStemp and HadCRUT.
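For readers who want to try this sort of check themselves, here is a generic sketch (not Lucia's or Santer's actual code) of the basic ingredient: an OLS trend for a monthly series, with the standard error inflated via a lag-1 autocorrelation adjustment to the effective sample size, of the kind used in Santer et al 2008, and a t-statistic for the deviation from a reference trend.

```python
# Sketch of a trend significance test: OLS slope of a monthly series,
# AR(1)-adjusted standard error (effective sample size
# n_eff = n * (1 - r1) / (1 + r1), as in Santer et al 2008),
# and a t-statistic against a reference trend in units per year.
import numpy as np

def trend_test(y, trend0_per_year):
    n = len(y)
    t = np.arange(n) / 12.0                      # time in years
    X = np.vstack([np.ones(n), t]).T             # intercept + slope design
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]  # lag-1 autocorrelation
    n_eff = n * (1 - r1) / (1 + r1)
    s2 = np.sum(resid ** 2) / (n_eff - 2)        # adjusted residual variance
    se = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))
    return beta[1], se, (beta[1] - trend0_per_year) / se
```

Feeding in 20 years of monthly GMST anomalies and, say, an IPCC-projected trend would give a t-statistic whose magnitude indicates whether the observed trend deviates significantly; the real disputes are largely about which adjustment for autocorrelation, and which period, to use.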